Search Results
Found 15 results
510(k) Data Aggregation
(27 days)
4970602
ISRAEL
Re: K250850
Trade/Device Name: Nanox.ARC X
Regulation Number: 21 CFR 892.1740
System, X-Ray, Tomographic
Classification Name: Tomographic x-ray system
Regulation Number: 892.1740
Nanox.ARC X is a stationary X-ray system intended to produce tomographic images for general use including human musculoskeletal system, pulmonary, intra-abdominal, and paranasal sinus indications, adjunctive to conventional radiography, on adult patients.
This device is intended to be used in professional healthcare facilities or radiological environments, such as hospitals, clinics, imaging centers, and other medical practices by trained radiographers, radiologists, and physicists.
Digital Tomosynthesis is used to synthesize tomographic slices from a single tomographic sweep. Applications can be performed with the patient in prone, supine, and lateral positions.
This device is not intended for mammographic, angiographic, cardiac, intra-cranial, interventional, or fluoroscopic applications. This device is not intended for imaging pediatric or neonatal patients.
Nanox.ARC X is a stationary, floor-mounted, stand-alone digital tomosynthesis system intended to produce tomographic images for general use including human musculoskeletal system, pulmonary, intra-abdominal, and paranasal sinus indications, from a single tomographic sweep. It serves as an adjunct to conventional radiography, for adult patients in recumbent positions. The system is intended for use in professional healthcare settings such as hospitals, clinics, and imaging centers by trained radiographers, radiologists, and physicists.
The Nanox.ARC X includes a secured, dedicated off-the-shelf handheld operator console, a multisource, tiltable arc gantry with five identical tubes, a motorized patient table, and a flat panel detector of a scintillator-photodetector type. The image reconstruction service and DICOMization services can be hosted either locally or as part of the secured Nanox.CLOUD, according to customer preference. Nanox.CLOUD also hosts a protocol database service package.
The Nanox.ARC X X-ray tubes are operated sequentially, one at a time, generating multiple low-dose images acquired from different angles, during a single sweep, dividing the overall power requirements among the tubes. The sweep is performed over a motorized patient table. Patients can be placed in prone, supine, and lateral positions.
The acquired projection imaging data is anonymized and automatically reconstructed to form tomographic slices of the imaged object, with each slice parallel to the table plane. The Tomosynthesis image result reduces the effect of overlying structures and provides depth information on structures of interest. The resultant images are re-identified and sent using the DICOM protocol.
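The two paragraphs above describe a distinctive acquisition and data-handling flow: five tubes fired sequentially (one at a time) to divide the power load across a single sweep, with projection data anonymized before reconstruction and re-identified before DICOM export. The Python sketch below illustrates that flow only in outline; the tube count is taken from the description, but the sweep angle, exposure count, firing order, and field names are illustrative assumptions, not values from the clearance letter.

```python
import uuid

# Illustrative sketch of a sequential multi-tube tomosynthesis sweep and the
# anonymize -> reconstruct -> re-identify flow described above.
# The sweep angle, exposure count, firing order, and field names are
# assumptions for demonstration; only the five-tube count comes from the text.
NUM_TUBES = 5
NUM_EXPOSURES = 25
SWEEP_DEGREES = 30.0

def schedule_exposures(num_tubes, num_exposures, sweep_degrees):
    """Fire one tube at a time across the sweep, cycling through the tubes so
    the overall power requirement is divided among them."""
    step = sweep_degrees / (num_exposures - 1)
    return [
        {
            "exposure_index": i,
            "tube_id": i % num_tubes,                      # one tube per exposure
            "gantry_angle_deg": -sweep_degrees / 2 + i * step,
        }
        for i in range(num_exposures)
    ]

def anonymize(study):
    """Strip identifiers before projections are sent for reconstruction,
    keeping a local key so the result can be re-identified afterwards."""
    token = str(uuid.uuid4())
    key = {token: {"patient_id": study["patient_id"], "name": study["name"]}}
    anon = {k: v for k, v in study.items() if k not in ("patient_id", "name")}
    anon["token"] = token
    return anon, key

def reidentify(result, key):
    """Re-attach patient identifiers to reconstructed slices before DICOM export."""
    return {**result, **key[result["token"]]}

if __name__ == "__main__":
    plan = schedule_exposures(NUM_TUBES, NUM_EXPOSURES, SWEEP_DEGREES)
    per_tube = {t: sum(1 for e in plan if e["tube_id"] == t) for t in range(NUM_TUBES)}
    print("exposures per tube:", per_tube)                 # load is split across tubes

    study = {"patient_id": "12345", "name": "DOE^JANE", "projections": plan}
    anon, key = anonymize(study)
    recon = {"token": anon["token"], "slices": "placeholder for reconstructed volume"}
    print(reidentify(recon, key)["patient_id"])            # identifiers restored
```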
Here's an analysis of the provided FDA 510(k) clearance letter for Nanox.ARC X, focusing on the acceptance criteria and the study that proves the device meets those criteria.
Key Observation: The provided document is a 510(k) Clearance Letter. These letters primarily address the "substantial equivalence" of a new device to a predicate device, rather than providing detailed clinical efficacy trial results as would be found in a Premarket Approval (PMA) application or a de novo classification request. This type of clearance often relies heavily on non-clinical bench testing and technological comparisons to demonstrate that the new device is as safe and effective as a legally marketed predicate.
Therefore, the information regarding in-depth clinical studies (like MRMC studies, specific ground truth methods, or detailed acceptance criteria for diagnostic accuracy) is limited or absent in this document because it's not typically required for a 510(k) clearance based on substantial equivalence to an existing device with similar technological characteristics. The focus is on demonstrating that the modifications to the predicate device (Nanox.ARC) do not negatively impact its safety or effectiveness.
Acceptance Criteria and Device Performance Assessment
Based on the provided document, the "acceptance criteria" are primarily framed around demonstrating that the modified device (Nanox.ARC X) is as safe and effective as its predicate (Nanox.ARC), despite minor technological changes. The proof relies heavily on non-clinical bench testing.
1. Table of Acceptance Criteria and Reported Device Performance
Given the nature of a 510(k) summary focused on substantial equivalence and technological comparison, the "acceptance criteria" are inferred from the types of non-clinical tests performed to ensure the new device functions as intended and is as safe and effective as the predicate. The "reported device performance" are the general conclusions drawn from these tests.
Acceptance Criterion (Inferred from testing performed) | Reported Device Performance |
---|---|
System Electrical Qualification | Functioned as intended. |
System Performance | Functioned as intended. |
Longevity and Consistency | Functioned as intended. |
Tube Longevity and Reliability | Functioned as intended. |
Functional Verification | Functioned as intended. |
Motion Control | Functioned as intended. |
Dimensional and Mechanical Properties | Functioned as intended. |
Image Quality | Functioned as intended. |
Tube Comparison CEI and Nanox Korea | Functioned as intended. |
Human Factors Summary | Functioned as intended. |
Phantom Validation | Functioned as intended. |
Weight Considerations | Functioned as intended. |
Transportation | Functioned as intended. |
Software Verification and Validation | Functioned as intended. |
Overall Safety and Effectiveness | Similar to predicate device. |
Note: The level of detail provided in a 510(k) letter doesn't include specific quantitative metrics for each test, only a general statement that the system "functioned as intended" and overall safety/effectiveness are similar to the predicate.
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: Not explicitly stated in terms of patient data. The testing described primarily involves bench testing, phantom studies, and system-level verification and validation. There is no indication of a clinical test set involving human patients as one might expect for a diagnostic accuracy study.
- Data Provenance: Not applicable in the context of clinical patient data for this 510(k) pathway, as no clinical tests were performed. The "data" comes from the results of the various non-clinical bench tests.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- Number of Experts: Not applicable. Since no clinical tests were performed on human patients and no diagnostic accuracy claims are being established through reader studies, there was no need for expert ground truth establishment for a clinical test set.
- Qualifications of Experts: N/A.
4. Adjudication Method for the Test Set
- Adjudication Method: Not applicable. No clinical test set requiring expert adjudication was conducted.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
- MRMC Study: No, a multi-reader multi-case (MRMC) comparative effectiveness study was explicitly NOT done. The document states: "No clinical tests were performed for the subject device." This type of study would be a clinical test.
- Effect Size of Human Readers Improvement: Not applicable, as no MRMC study was done.
6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done
- Standalone Performance: The document does not describe a standalone diagnostic accuracy study of an AI algorithm. The device is a tomographic X-ray system, not an AI diagnostic algorithm, although it does include "image reconstruction service" and "DICOMization services." These are intrinsic functionalities of the imaging system itself, not separate AI components whose standalone diagnostic performance would be evaluated. The "Software Verification and Validation" likely covers the functional correctness of these reconstruction algorithms.
7. The Type of Ground Truth Used for the Test Set
- Type of Ground Truth: Not applicable for a clinical test set. The "ground truth" for the non-clinical tests would be the established engineering specifications, phantom measurements, and functional requirements against which the device's performance was measured (e.g., a known phantom structure for image quality, or expected electrical parameters for qualification).
8. The Sample Size for the Training Set
- Training Set Sample Size: Not applicable. This 510(k) is for a hardware device (X-ray system) with associated software for image reconstruction. It is not an AI/ML algorithm that undergoes a distinct "training" phase on a specific dataset for diagnostic interpretation. The image reconstruction algorithms are typically deterministic or based on established physics and signal processing, not on deep learning models trained on large image datasets.
9. How the Ground Truth for the Training Set Was Established
- Ground Truth for Training Set: Not applicable, as there isn't a "training set" in the context of an AI/ML diagnostic algorithm for which ground truth would be established. The "ground truth" for the development of image reconstruction algorithms would be based on mathematical models, physical principles of X-ray interaction, and calibrated phantom data to optimize image quality and accuracy.
(113 days)
3486 Petah Tikva, 4970602 ISRAEL
Re: K242395
Trade/Device Name: Nanox.ARC Regulation Number: 21 CFR 892.1740
Nanox.ARC is a stationary X-ray system intended to produce tomographic images for general use including human musculoskeletal system, pulmonary, intra-abdominal, and paranasal sinus indications, adjunctive to conventional radiography, on adult patients.
This device is intended to be used in professional healthcare facilities or radiological environments, such as hospitals, clinics, imaging centers, and other medical practices by trained radiographers, radiologists, and physicists.
Digital Tomosynthesis is used to synthesize tomographic slices from a single tomographic sweep. Applications can be performed with the patient in prone, supine, and lateral positions.
This device is not intended for mammographic, cardiac, intra-cranial, interventional, or fluoroscopic applications. This device is not intended for imaging pediatric or neonatal patients.
Nanox.ARC is a stationary, floor-mounted, stand-alone digital tomosynthesis system intended to produce tomographic images for general use including human musculoskeletal system, pulmonary, intra-abdominal, and paranasal sinus indications, from a single tomographic sweep. It serves as an adjunct to conventional radiography, for adult patients in recumbent positions. The system is intended for use in professional healthcare settings such as hospitals, clinics, and imaging centers by trained radiographers, radiologists, and physicists.
The Nanox.ARC includes a secured, dedicated off-the-shelf handheld operator console, a multisource, tiltable arc gantry with five identical tubes, a motorized patient table, and a flat panel detector of a scintillator-photodetector type. The image reconstruction service and DICOMization services can be hosted either locally or as part of the secured Nanox.CLOUD, according to customer preference.
Nanox.CLOUD also hosts a protocol database service package.
The Nanox.ARC X-ray tubes are operated sequentially, one at a time, generating multiple low-dose images acquired from different angles, during a single sweep, dividing the overall power requirements among the tubes. The sweep is performed over a motorized patient table. Patients can be placed in prone, supine, and lateral positions.
The acquired projection imaging data is anonymized and automatically reconstructed to form tomographic slices of the imaged object, with each slice parallel to the table plane. The Tomosynthesis reduces the effect of overlying structures and provides depth information on structures of interest. The resultant images are re-identified and sent using the DICOM protocol.
The provided text is a 510(k) summary for the Nanox.ARC device. It mentions a "Clinical Sample Data evaluation" and confirms that the device can "generate diagnostic-quality images for the expanded Indications for Use," but it does not provide specific details on acceptance criteria or the study design and results as requested in the prompt.
Therefore, I cannot provide a table of acceptance criteria, reported performance, sample sizes (for test/training), ground truth details, expert qualifications, or adjudication methods directly from the provided text. The document states that "The non-clinical performance testing conducted on the predicate device submitted under K222934 remain applicable to the subject device," implying that some of the detailed testing justification might reside in the predicate device's 510(k) submission (K222934).
However, I can extract the information that is present:
1. A table of acceptance criteria and the reported device performance
- Acceptance Criteria: Not explicitly stated in terms of quantitative metrics (e.g., sensitivity, specificity, image quality scores).
- Reported Device Performance: "Nanox.ARC System functioned as intended" and "generate diagnostic-quality images for the expanded Indications for Use. This includes the evaluation of complex and abnormalities of various sizes and shapes."
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
- Sample Size (test set): Not specified. The document mentions "clinical sample data" but not the number of cases.
- Data Provenance: Not specified.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)
- Not specified.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
- Not specified.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
- An MRMC study is not mentioned. The device is described as an imaging system intended to produce tomographic images "adjunctive to conventional radiography." This phrasing suggests human interpretation of the images produced by the device, but not an AI-assisted interpretation workflow or a comparative effectiveness assessment of one.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done
- The document implies the device generates images for human interpretation ("trained radiographers, radiologists, and physicists"). A standalone algorithm performance (without human-in-the-loop) is not discussed.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Not specified. The document uses terms like "diagnostic-quality images" and "evaluation of complex and abnormalities," which generally implies comparison against established diagnostic standards, likely expert-interpreted images or clinical findings, but the specific type of ground truth (e.g., expert consensus, pathology, follow-up) is not detailed.
8. The sample size for the training set
- Not specified.
9. How the ground truth for the training set was established
- Not specified.
(214 days)
1 Neve Ilan. 9085000 ISRAEL
Re: K222934
Trade/Device Name: Nanox.ARC Regulation Number: 21 CFR 892.1740
Classification Name: Tomographic X-ray System
Regulation Number: 21 CFR §892.1740
The subject and predicate devices share the same principles of operation and are also identical in the following ways: regulation number 21 CFR §892.1740.
Nanox.ARC is a stationary X-ray system intended to produce tomographic images of the human musculoskeletal system adjunctive to conventional radiography, on adult patients. This device is intended to be used in professional healthcare facilities or radiological environments, such as hospitals, clinics, imaging centers, and other medical practices by trained radiographers, radiologists, and physicists. Digital Tomosynthesis is used to synthesize tomographic slices from a single tomographic sweep. Applications can be performed with the patient in prone, supine, and lateral positions. This device is not intended for mammographic, cardiac, pulmonary, intra-abdominal, intra-cranial, interventional, or fluoroscopic applications. This device is not intended for imaging pediatric or neonatal patients.
Nanox.ARC is a tomographic and solid-state X-ray system (product codes IZF and MQB) intended to produce tomographic images of the human musculoskeletal system from a single tomographic sweep, as an adjunct to conventional radiography, on adult patients.
Nanox.ARC is a floor-mounted tomographic system that consists of a user control console, a multisource, tiltable arc gantry with five alternately-switched tubes, a motorized patient table, a flat-panel detector of a scintillator-photodetector type, and Protocols database and Image processing software packages.
Nanox.ARC utilizes several small-sized X-ray tubes that are independently and electronically switched, thereby dividing the overall power requirements over multiple tubes. Nanox.ARC utilizes a tilting imaging ring with five X-ray tubes, operated sequentially, one at a time, used to generate multiple low-dose X-ray projection images acquired from different angles during a single spherical (non-linear) sweep. The sweep is performed over a motorized patient table. Patients can be placed in prone, supine, and lateral positions.
The acquired projection imaging data is automatically reconstructed to form tomographic slices of the imaged object, with each slice parallel to the table plane. The Tomosynthesis image result reduces the effect of overlying structures and provides depth information on structures of interest. The image reconstruction service, as well as the system's protocol database and DICOMization services, can be hosted either locally or as part of the Nanox.CLOUD, according to customer preference. The resultant images are sent using the DICOM protocol.
Here's an analysis of the acceptance criteria and the study that proves the device meets them, based on the provided text:
Acceptance Criteria and Device Performance
The document doesn't explicitly list specific quantitative acceptance criteria in a table format with separate reported device performance values for each criterion. Instead, it states that "Predefined acceptance criteria were met and demonstrated that the device is as safe, as effective, and performs as well as or better than the predicate device."
The "Table 2: Non-clinical Performance Data" lists various tests performed and reports a "PASS" for each, indicating that the device met the acceptance criteria for those specific tests.
Table of Acceptance Criteria (Implied) and Reported Device Performance:
Acceptance Criterion (Implied by Test Description) | Reported Device Performance |
---|---|
System Electrical Qualification | PASS |
System Performance (Motion resolution & accuracy) | PASS |
System Longevity & Consistency | PASS |
Tube Longevity and Reliability | PASS |
Functional Verification | PASS |
Motion Control stability | PASS |
Detector and image acquisition functionality | PASS |
Usability Summative (Safety, effectiveness, no failures) | PASS |
Transportation safety | PASS |
Dimensional and Mechanical Properties | PASS |
Image Quality | PASS |
Phantom Validation (Diagnostic quality vs. predicate) | PASS |
Software verification and validation | PASS |
Compliance to 21 CFR 1020.30 and 1020.31 | PASS |
Electrical Safety & EMC (IEC 60601-1, IEC 60601-1-2) | PASS |
Radiation Safety (IEC 60601-1-3, IEC 60601-2-28, IEC 60601-2-54) | PASS |
Biocompatibility (ISO 10993-1) | PASS |
Study Details:
- Sample size used for the test set and the data provenance:
- Clinical Sample Evaluation (for image quality): Nine (9) Digital Tomosynthesis image cases were acquired from healthy adult human subjects (patients).
- Phantom Performance Exams: Twelve (12) Digital Tomosynthesis phantom performance exams (total cases = 9 human + 12 phantom = 21 cases).
- Data Provenance: From a clinical study conducted at Shamir Medical Center in Israel. The study appears to be prospective as it states "image cases were acquired from healthy adult human subjects (patients) from a clinical study conducted at Shamir Medical Center in Israel."
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Number of Experts: One (1)
- Qualifications: An ABR-certified radiologist.
- Adjudication method for the test set:
- Adjudication Method: Not explicitly stated, but with only one radiologist reviewing, there was no multi-expert adjudication mentioned (e.g., 2+1, 3+1). If only one expert makes the determination, it's effectively "none" in terms of reaching a consensus among multiple experts.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance:
- MRMC Study: No, an MRMC comparative effectiveness study was not conducted. The clinical sample evaluation involved a single ABR-certified radiologist evaluating the diagnostic quality of the Nanox.ARC images themselves, "against a reference comparison which was the standard of care radiographies." This was a direct comparison of images, not a study on human reader performance with or without AI assistance.
- If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:
- Standalone Performance: Yes, the described "Bench Testing" and "Non-clinical Performance Data" table largely represent standalone algorithm and system performance without human intervention in the diagnostic interpretation loop. The "Image Quality" and "Phantom Validation" tests also assessed the device's output directly. The clinical sample evaluation by the radiologist was to evaluate the diagnostic quality of the images produced by the device, effectively assessing the device's standalone output for clinical utility.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Type of Ground Truth: For the clinical sample evaluation, the diagnostic quality of the Nanox.ARC images was evaluated by an ABR-certified radiologist "against a reference comparison which was the standard of care radiographies." This implies the "ground truth" was essentially the interpretive diagnostic quality determined by a single expert, compared to standard of care imaging. For the phantom studies, the ground truth would be based on the known physical properties and measurements within the phantoms.
- The sample size for the training set:
- Training Set Sample Size: The document does not provide any information regarding the sample size used for the training set of the Nanox.ARC system's image reconstruction or processing algorithms.
- How the ground truth for the training set was established:
- Training Set Ground Truth: The document does not provide any information on how ground truth was established for the training set.
(205 days)
Trade/Device Name: ADAPTIX 3D Orthopedic Imaging System ("Ortho Device") Regulation Number: 21 CFR 892.1740
Classification Name: Tomographic x-ray system
Regulation Number: §892.1740
The Ortho Device is intended to generate tomosynthesis images of human anatomy for diagnostic purposes of the hand, elbow and foot in patients of all ages.
The imaging will provide the physician visualized information about anatomical structures to facilitate assessment in orthopedic cases such as:
• Fractures of bones in finger, metacarpus or wrist
• Fractures of foot, ankle or elbow joint
• Arthritis
The Ortho Device is a 3D tomographic X-ray device intended to be used to produce radiological images of a specific cross-sectional plane of the body. The device is comprised of a Flat Panel X-ray source combined with a digital detector within a mounting frame, a control unit and a workstation. It is intended to offer 3D imaging of orthopedic structures by using a panel of X-ray sources that construct a 3D tomosynthesis image with the associated reconstruction software from individual images; it is also possible to create synthetic 2D images of the desired anatomy.
The Ortho Device is a portable system that can be mounted on a stand for tabletop applications or on a trolley cart for added mobility with motorized vertical positioning. The C-Arm and Control Unit components are both designed to be carriable by a single person. To allow for the ideal positioning of the anatomy (hand and weight-bearing foot images) in the beam path and to achieve the desired plane of view, the Ortho Device C-Arm can be manually rotated by up to 90°. The central beam is aligned perpendicularly to the image receptor.
The "Ortho Device" was created to fill a diagnostic niche in orthopedic medicine for cost effective and portable imaging for patients and is used, amongst other applications, for 3D-radiographic diagnostic imaging of hand, elbow and foot in orthopedic and radiological practices as well as in emergency departments of hospitals. The Ortho Device results are detailed multi-slice 3D images of patients that allow radiologist interpretation of clinical image data and by this support medical professionals decisionmaking on human anatomy.
The Ortho Device system is designed to meet the requirements in accordance with relevant sections of 21CFR 1020.30-1020.31.
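The device description above notes that, besides reconstructed tomosynthesis slices, the Ortho Device can create synthetic 2D images of the desired anatomy. A common, generic way to form such a synthetic view is to project intensities through the reconstructed slice stack; the NumPy sketch below shows that idea with random stand-in data and is not the vendor's reconstruction or synthesis algorithm.

```python
import numpy as np

# Illustrative only: form synthetic 2D views from a tomosynthesis slice stack.
# The "volume" is random data standing in for reconstructed slices
# (slices x rows x columns); the projection methods are generic, not ADAPTIX's.
rng = np.random.default_rng(0)
volume = rng.random((40, 256, 256))          # 40 reconstructed slices

synthetic_2d_mean = volume.mean(axis=0)      # average-intensity projection
synthetic_2d_max = volume.max(axis=0)        # maximum-intensity projection

print(synthetic_2d_mean.shape, synthetic_2d_max.shape)   # (256, 256) each
```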
The provided text is a K221949 510(k) summary for the ADAPTIX 3D Orthopedic Imaging System ("Ortho Device"). It does not contain information about acceptance criteria, detailed study designs, or reader study results with explicit performance metrics. The document primarily focuses on demonstrating substantial equivalence to predicate devices through technical comparisons and non-clinical testing.
Therefore, I cannot fulfill your request for:
- A table of acceptance criteria and reported device performance.
- Sample size used for the test set and data provenance.
- Number of experts and their qualifications for ground truth establishment.
- Adjudication method for the test set.
- MRMC comparative effectiveness study results or effect sizes.
- Standalone performance details.
- Type of ground truth used (expert consensus, pathology, outcomes data, etc.) for the test set.
- Sample size for the training set.
- How ground truth for the training set was established.
However, based on the section "9. Non-Clinical Performance Data," I can extract the following relevant information regarding performance evaluation, albeit without specific quantitative acceptance criteria or detailed study methodologies:
The study that "proves the device meets the acceptance criteria" in this context refers to a series of non-clinical tests summarized in Section 9. While "acceptance criteria" for specific performance metrics are not explicitly stated with quantitative thresholds, the document implies that these criteria were met by stating "Passed" for each test.
1. A table of acceptance criteria and the reported device performance:
Since explicit quantitative acceptance criteria are not provided, the table below lists the performance aspects tested and the reported outcome.
Performance Aspect Tested | Reported Device Performance/Outcome |
---|---|
In vitro Cytotoxicity (per ISO 10993-5) | Passed |
Irritation and skin Sensitization (per ISO 10993-10) | Passed |
Systemic toxicity (per ISO 10993-11) | Passed |
Electrical safety (per IEC 60601-1) | Passed |
Electromagnetic Disturbance (EMD) (per IEC 60601-1-2) | Passed |
Radiation protection (per IEC 60601-1-3) | Passed |
Medical Electrical Equipment Usability (per IEC 60601-1-6) | Passed |
Safety and essential performance of X-ray tube assemblies (per IEC 60601-2-28 and IEC 60601-2-54) | Passed |
Particular electrical testing performance req. for Radiation dose documentation (per IEC 61910-1) | Passed |
Digital Imaging and Communications in Medicine (DICOM) (per NEMA PS 3.1) | Passed |
Transportation Testing (per ASTM D4169) | Passed |
Image quality (spatial and contrast resolution, homogeneity, linearity) | Passed |
Ability of device to image all intended body parts (fingers, metacarpus/wrist, elbow, foot, ankle) | Evaluated and confirmed by radiologists |
Ability of device to provide imaging data for assessment of bone fracture and arthritis | Evaluated and confirmed by radiologists |
Software verification and validation (functional level, system compatibility, risk analysis per IEC 62304/FDA Guidance) | Completed for Moderate Level of Concern software |
Risk Management (per EN ISO 14971) | All requirements met, risks reduced |
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective):
The document mentions "sample clinical images" being evaluated by radiologists but does not specify the sample size, data provenance, or whether the study was retrospective or prospective for these clinical image evaluations. For other non-clinical tests (e.g., toxicity, electrical safety), the "sample size" would refer to the number of device units or components tested, which is not stated.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience):
The document states "Evaluation of sample clinical images by radiologists". It does not specify the number of radiologists, their qualifications, or their experience levels.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:
The document does not describe any adjudication method used for the evaluation of clinical images.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance:
The document does not mention an MRMC comparative effectiveness study or any evaluation of human reader performance with or without AI assistance. The device is an imaging system, not explicitly described as having AI for interpretation in this summary.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
The document describes the device as an "imaging system" that "results are detailed multi-slice 3D images of patients that allow radiologist interpretation of clinical image data and by this support medical professionals decision-making." This implies the device provides images for human interpretation, and there is no mention of an algorithm performing standalone diagnoses.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
For the evaluation of clinical images, "Evaluation of sample clinical images by radiologists to demonstrate that the device is able to image all intended body parts" and "to help clinician for the assessment of bone fracture and arthritis" implies that the ground truth for these evaluations was based on expert assessment/consensus (i.e., the radiologists' judgment). No pathology or outcomes data is mentioned as ground truth.
8. The sample size for the training set:
Not applicable/not provided. This document describes a medical imaging device, not a machine learning algorithm that requires a separate training set. The "software verification and validation testing" mentioned refers to the device's operational software, not an AI training process.
9. How the ground truth for the training set was established:
Not applicable/not provided, as there is no mention of a training set for an AI algorithm.
(33 days)
40026 ITALY
Re: K213081
Trade/Device Name: CLISIS SYSTEMS, Discovery RF180 Regulation Number: 21 CFR 892.1740
The CLISIS SYSTEMS, Discovery RF180 is indicated for performing general radiography, fluoroscopy and angiography procedures.
Applications and techniques:
- Gastroenterology
- Skeleton
- Thorax and lungs
- Paediatrics
- Urology and gynecology
- Emergency/traumatology
- Digital angiography
- Linear tomography
- Auto Image Paste (Stitching)
- Tomosynthesis
Not Found
The provided text is an FDA 510(k) clearance letter for the CLISIS SYSTEMS, Discovery RF180, a tomographic x-ray system. It states the device is substantially equivalent to legally marketed predicate devices and outlines the indications for use.
However, the provided text DOES NOT contain any information regarding acceptance criteria or the study that proves the device meets those criteria. It is a regulatory clearance document, not a detailed technical report or clinical study summary.
Therefore, I cannot fulfill your request for:
- A table of acceptance criteria and the reported device performance
- Sample size used for the test set and the data provenance
- Number of experts used to establish the ground truth
- Adjudication method
- MRMC comparative effectiveness study details
- Standalone performance
- Type of ground truth used
- Training set sample size
- How ground truth for the training set was established
The document focuses on the regulatory approval process and the intended uses of the device, not the technical details of its performance or the studies conducted to validate that performance. To obtain that information, one would typically need to refer to separate study reports, a detailed 510(k) summary (if available publicly with more technical details), or an Instructions for Use (IFU) document if it contains performance specifications.
(53 days)
Classification Name: Tomographic X-ray System; Regulatory Classification: Class II, 21 CFR 892.1740
The DR 800 with DSA system is indicated for performing dynamic imaging examinations (fluoroscopy and/or rapid sequence) of the following anatomies/procedures:
- · Positioning fluoroscopy procedures
- · Gastro-intestinal examinations
- · Urogenital tract examinations
- · Angiography
- · Digital Subtraction Angiography
It is intended to replace fluoroscopic images obtained through image intensifier technology. In addition, the system is intended for projection radiography of all body parts.
In addition, the system provides the Agfa Tomosynthesis option, which is intended to acquire tomographic slices of human anatomy and to be used with Agfa DR X-ray systems. Digital Tomosynthesis is used to synthesize tomographic slices from a single tomographic sweep.
Not intended for cardiovascular and cerebrovascular contrast studies. Not intended for mammography applications.
Agfa's DR 800 with DSA medical device is a fluoroscopic x-ray system that includes digital angiography (product code JAA) intended to capture tomographic, static and dynamic images of the human body. The DR 800 is a floor-mounted radiographic, fluoroscopic and tomographic system that consists of a tube and operator console with a motorized tilting patient table, FLFS overlay, and bucky, with optional wall stand and ceiling suspension. The new device uses Agfa's NX workstation with MUSICA image processing and flat-panel detectors for digital, wide-dynamic-range and angiographic image capture. It is capable of replacing other direct radiography, tomography, image intensifying tubes and TV cameras, including computed radiography systems with conventional or phosphorous film cassettes.
This submission is to add the newest version of the DR 800 with Digital Subtraction Angiography (DSA) to Agfa's radiography portfolio.
Here's an analysis of the acceptance criteria and study information for the Agfa DR 800 with DSA, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The provided document does not explicitly list quantitative acceptance criteria in a table format for performance metrics. Instead, it describes a more qualitative approach, focusing on equivalence to predicate devices and confirmation through expert evaluation.
Acceptance Criteria (Inferred from text) | Reported Device Performance |
---|---|
Bench Testing (General Performance) | "Technical and acceptance testing was completed on the DR 800 with DSA in order to confirm the medical device functions and performs as intended. All deviations or variances are documented in a defect database and addressed in the CRD documentation and verified. All mitigations have been tested and passed. All design input requirements have been tested and passed. All planned verification activities have been successfully completed." |
Functionality and Usability | "Performance functionality and usability evaluations were conducted with qualified experts. The results of these tests fell within the acceptance criteria for the DR 800 with DSA; therefore, the DR 800 supports GenRad, Full Leg/ Full Spine (FLFS), roadmapping and Digital Subtraction Angiography (DSA) workflow." |
Clinical Image Quality (DSA) | "Clinical image validation was conducted using anthropomorphic phantoms and evaluated by qualified experts. The radiographers evaluated the DSA image quality on the DR 800 by comparing overall image quality with the primary predicate A device (K190373). Diagnostic confidence for DSA image quality and roadmapping on the DR 800 was between good and excellent." The document also states, "Clinical image quality evaluation is not essential in establishing substantial equivalence for the DR 800 with DSA. Adequate Bench Testing results should be sufficient to determine device safety and effectiveness." This indicates that while performed, it wasn't a strict acceptance criterion in the same vein as quantitative safety/effectiveness thresholds. |
Software Verification & Validation (Safety/Risk) | "The complete device has been certified and validated. During the final risk analysis meeting, the risk management team concluded that the medical risk is no greater than with conventional x-ray film previously released to the field." "For the NX 23 (NX Orion) software there are a total of 535 risks in the broadly acceptable region and 37 risks in the ALARP region with only four of these risks identified. Zero risks were identified in the Not Acceptable Region. Therefore, the device is assumed to be safe, the benefits of the device are assumed to outweigh the residual risk." |
Electrical Safety and Electromagnetic Compatibility (EMC) Testing | The device is compliant with IEC 60601-1, IEC 60601-1-2, IEC 60601-1-3, and IEC 60601-2-54. The DR 800 is also compliant with FDA Subchapter J mandated performance standards 21 CFR 1020.30 - 1020.32. |
Quality Management, Risk Management, DICOM, Usability Engineering | The company's in-house procedures conform to ISO 13485, ISO 14971, ACR/NEMA PS3.1-3.20 (DICOM), and IEC 62366-1. (This implies compliance with these standards as part of overall acceptance). |
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: Not explicitly stated in terms of number of images or cases. The document mentions "anthropomorphic phantoms" for clinical image validation.
- Data Provenance: The study used "anthropomorphic phantoms," which are physical models designed to simulate human anatomy for imaging purposes. This indicates a laboratory/phantom study rather than real patient data. The country of origin for the phantom data is not specified, but the submission is from Agfa N.V. (Belgium). It is a prospective study in the sense that the new device was evaluated with these phantoms.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
- Number of Experts: Not explicitly stated. The document mentions "qualified experts" and "radiographers."
- Qualifications of Experts: Described as "qualified experts" and "radiographers." No specific experience levels (e.g., "10 years of experience") are provided.
4. Adjudication Method for the Test Set
Not specified. The document states "evaluated by qualified experts" and "radiographers evaluated...by comparing overall image quality with the primary predicate A device," implying a comparative evaluation rather than a strict adjudication process for ground truth establishment.
5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study Was Done, If so, What was the effect size of how much human readers improve with AI vs without AI assistance
No, a Multi Reader Multi Case (MRMC) comparative effectiveness study was not conducted. This is not an AI-assisted diagnostic device; it's a conventional X-ray system with digital image processing and DSA capabilities. The study compared the device's image quality to a predicate device, focusing on equivalence, not human reader improvement with AI.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
Yes, in essence, the "Bench Testing" and "Software Verification and Validation Testing" sections describe standalone performance evaluations of the device's functions and image processing algorithms. The "Clinical image validation" with phantoms also focuses on the device's output (image quality) rather than human interaction with the device in a diagnostic workflow where the human acts as the ultimate decision-maker for the study’s performance outcome.
7. The Type of Ground Truth Used
The "ground truth" for the image quality evaluation was based on expert comparison and qualitative assessment of images produced by the device, specifically assessing "diagnostic confidence for DSA image quality and roadmapping" as "between good and excellent" when compared to a predicate device. This is primarily an expert consensus on image quality rather than pathology, clinical outcomes, or a gold standard.
8. The Sample Size for the Training Set
Not applicable. This device is an X-ray imaging system, not a machine learning or AI algorithm that requires a training set of data. The image processing algorithms are described as being "similar to those previously cleared" or "similar to the primary predicate device."
9. How the Ground Truth for the Training Set Was Established
Not applicable, as this device does not utilize a machine learning model that would require a ground truth for a training set.
(104 days)
GREENVILLE SC 29601
Re: K193262
Trade/Device Name: DR 600 with Tomosynthesis Regulation Number: 21 CFR 892.1740
Classification Name: Tomographic X-ray System; Regulatory Classification: Class II, 21 CFR 892.1740
The DR 600 system is a General Radiography X-ray imaging system used in hospitals, clinics and medical practices by radiographers, radiologists and physicists to make, process and view static X-ray radiographic images of the skeleton (including skull, spinal column and extremities), chest, abdomen and other body parts on adult, pediatric or neonatal patients.
In addition, the system provides the Agfa tomosynthesis option, which is intended to acquire tomographic slices of human anatomy and to be used with Agfa DR X-ray systems. Digital tomosynthesis is used to synthesize tomographic slices from a single tomographic sweep.
Applications can be performed with the patient in the sitting, standing or lying position.
This system is not intended for mammography applications.
The DR 600 with Tomosynthesis is a tomographic and solid state x-ray system (product codes IZF and MQB) intended to capture tomographic slices and static images of the human body. The DR 600 with Tomosynthesis is a ceiling mounted tomographic and general radiographic system that consists of a tube and operator console with a motorized patient table and/or wall stand. The DR 600 with Tomosynthesis uses Agfa's NX workstation with MUSICA2 ™ image processing and flat-panel detectors of the scintillator-photodetector type (Cesium Iodide - CsI or Gadolinium Oxysulfide - GOS). It is capable of replacing other direct radiography, tomography, image intensifying tubes and TV cameras, including computed radiography systems with conventional or phosphorous film cassettes.
The provided text describes the Agfa DR 600 with Tomosynthesis device and its K193262 510(k) submission. However, it does not contain specific acceptance criteria or a detailed clinical study demonstrating the device's meeting of these criteria. The document focuses on showing substantial equivalence to predicate devices primarily through technological characteristics and bench testing, not through comparative clinical effectiveness studies with explicit acceptance criteria for diagnostic performance.
Therefore, many of the requested details about acceptance criteria, clinical study design, sample sizes, ground truth establishment, expert qualifications, and MRMC studies are not present in the provided text. The document explicitly states: "No clinical trials were performed in the device. No animal or clinical studies were performed in the development of the new device."
Based on the available information, here's what can be extracted and what is missing:
Acceptance Criteria and Device Performance (as inferred from the document's approach to substantial equivalence):
Since no specific acceptance criteria for diagnostic performance (e.g., sensitivity, specificity, AUC) are presented, the "acceptance criteria" for this 510(k) appear to be primarily focused on demonstrating substantial equivalence to predicate devices through technical specifications, image quality evaluations (bench testing), and compliance with various electrical safety, EMC, and software standards.
Acceptance Criteria (Inferred from Document) and Reported Device Performance:

1. Technological characteristics are identical/equivalent to predicates: Communications (DICOM); Flat Panel Detectors (type, material, sizes, pixel size, dynamic range); Operator Workstation (Agfa NX); Image Processing (MUSICA DTS, MUSICA2, MUSICA3/3+); Operating System (Windows 7, 8, 8.1, 10); Display System (separately cleared medical display); Power Supply; Generators.
Reported performance: Communications: same as both predicates (DICOM). Flat Panel Detectors: same as both predicates (GOS/CsI scintillator; sizes 17x17, 14x17, 10x10 in.; 148 µm pixel size in the primary predicate, 139 µm in the other; 16-bit dynamic range in the primary predicate, 14-bit in the other); the new device shares characteristics with both, indicating equivalence. Operator Workstation: same as both predicates (Agfa NX). Image Processing: MUSICA DTS, MUSICA2, MUSICA3/3+; the addition of tomographic image processing is identical to the DR 800 (K183275) primary predicate device. Operating System: same as predicate K183275 (Windows 7, 8, 8.1, 10). Display System: same as both predicates (separately cleared medical display, K051901). Power Supply: same as predicate K152639 (50-60 Hz, 380/400/415/440/480V + 10%). Generators: same as predicate K183275 (choice of three models: 50, 65, 80 kW). Overall: "Principles of operation and technological characteristics of the new and predicate devices are the same."

2. Indication for Use statement is consistent/identical to predicates.
Reported performance: "The DR 600 system is a General Radiography X-ray imaging system... In addition, the system provides the Agfa tomosynthesis option... Digital tomosynthesis is used to synthesize tomographic slices from a single tomographic sweep." This is stated to be "virtually identical" to K152639 with the tomosynthesis addition from K183275.

3. Performance/functionality as intended: confirmed functions and performs as intended; supports a tomographic workflow and Smart Dr visualization (including adult and pediatric patients).
Reported performance: "Technical and acceptance testing was completed on the DR 600 in order to confirm the medical device functions and performs as intended. All deviations or variances are documented... All design input requirements have been tested and passed." "Functionality evaluations were conducted with three qualified radiographers... The results of these tests fell within the acceptance criteria for the DR 600; therefore, the DR 600 supports a tomographic workflow and Smart Dr visualization including adult and pediatric patients."

4. Image quality equivalent to predicate, for both adult and pediatric patients.
Reported performance: "Image quality bench tests were conducted in support of this 510(k) submission in which anthropomorphic adult and pediatric images taken with the DR 600 and the primary predicate device, DR 800 (K183275) were compared to ensure substantial equivalency. The test results indicated the image processing of the DR 600 passed the acceptance criteria and was equal to the image processing for the primary predicate, DR 800 (K183275) device for both adult and pediatric patients."

5. Software validation: verification and validation plans confirmed; risk assessment shows no unacceptable risks.
Reported performance: "Verification and validation testing confirmed the device meets performance, safety, usability and security requirements... For the NX22 (NX Nomad) software there are a total of 342 risks in the broadly acceptable region and 27 risks in the ALARP region with only one of these risks identified. Zero risks were identified in the Not Acceptable Region."

6. Electrical safety and EMC compliance: adherence to specified IEC standards (60601-1, 60601-1-2, 60601-1-3, 60601-1-6, 60601-2-28, 60601-2-54); compliance with the FDA Subchapter J mandated performance standards 21 CFR 1020.30 and 1020.31.
Reported performance: The document states compliance with all listed IEC standards and FDA performance standards.

7. Quality management system compliance: adherence to ISO 13485:2015, ISO 14971:2012, and ACR/NEMA PS3.1-3.20 (DICOM).
Reported performance: The document states adherence to all listed ISO and other standards.
Study Details (Based on the provided text):
- Sample sizes used for the test set and the data provenance:
- Test Set: No specific numerical sample size is mentioned for image quality evaluations beyond "anthropomorphic adult and pediatric images." The document mentions "functionality evaluations were conducted with three qualified radiographers," but this refers to human user testing of workflow and usability, not diagnostic image performance.
- Data Provenance: Not explicitly stated, but likely retrospective as it refers to comparisons of images taken with the new device and a predicate device. The comparison of anthropomorphic phantom images suggests a controlled laboratory setting.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- The document states "Laboratory data and image quality evaluations conducted with internal and independent specialists confirm that performance is equivalent to the predicates." It also mentions "clinical image quality evaluations for adults and pediatric patients" and "functionality evaluations were conducted with three qualified radiographers."
- Number of Experts: At least "three qualified radiographers" for functionality, and "internal and independent specialists" for image quality, but exact numbers or specific qualifications (e.g., years of experience, board certification) are not detailed.
- Qualifications: "Qualified radiographers" and "internal and independent specialists."
- Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- No adjudication method is described for the image quality evaluations or other performance tests. The comparison to predicates implies direct visual or quantitative comparison by specialists.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance:
- No MRMC comparative effectiveness study was done. The document explicitly states: "No clinical trials were performed in the device. No animal or clinical studies were performed in the development of the new device."
- Therefore, no effect size of human readers improving with AI assistance is provided as this type of study was not conducted. The device in question is an imaging system, not an AI-based diagnostic assistance tool.
- If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:
- The "performance data including clinical image quality evaluations for adults and pediatric patients" involved the system's output. The "image processing of the DR 600 passed the acceptance criteria and was equal to the image processing for the primary predicate, DR 800 (K183275) device for both adult and pediatric patients." This implies an evaluation of the algorithm's output (image quality) without necessarily focusing on a human-in-the-loop diagnostic task. So, in essence, standalone image quality performance was evaluated against a predicate.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- The "ground truth" for image quality evaluation appears to be comparison to a cleared predicate device's image quality, as judged by "internal and independent specialists," using anthropomorphic phantoms. There is no mention of pathological confirmation or patient outcomes for establishing ground truth, as it was not a clinical trial.
- The sample size for the training set:
- This device is an X-ray system with image processing, not a machine learning/AI algorithm that requires a "training set" in the traditional sense of AI development. The software capabilities (MUSICA DTS, MUSICA2, MUSICA3/3+) are described as being identical to previously cleared versions in predicate devices. Therefore, a "training set" for a new AI model is not applicable here.
- How the ground truth for the training set was established:
- Not applicable, as no new AI model training set is described. The image processing algorithms are identical to those previously cleared.
(158 days)
Reference Regulation Number: 21 CFR 892.1740
The device is a permanently installed diagnostic x-ray system for general radiographic x-ray imaging including tomography. This device also supports digital tomosynthesis. The tomography and digital tomosynthesis features are not to be used for imaging pediatric patients.
Carestream Digital Tomosynthesis (DT) is a limited "sweep" imaging technique that generates multiple two-dimensional (2D) coronal slices (i.e., planes) from a series of low-dose x-ray images of the same anatomy taken at the same exposure but at different angles. During a tomosynthesis acquisition the detector remains stationary while the tube head travels (sweeps) in a straight path (i.e., the focal spot travel path). For each exposure, the tube is angled toward the center of the detector. Carestream Digital Tomosynthesis offers three options: a sweep angle option to obtain the desired slice thickness, the number of images per degree of sweep angle, and a projection image resolution option that allows selection of capture speed versus image resolution.
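The paragraph above describes the acquisition geometry: the detector stays stationary, the tube head sweeps along a straight path, and each exposure is angled toward the center of the detector. The sketch below works out that angulation for evenly spaced exposure positions; the source-to-image distance, sweep angle, and images-per-degree values are assumed for illustration and are not Carestream specifications.

```python
import math

# Illustrative geometry for a linear tomosynthesis sweep: stationary detector,
# tube head moving along a straight path, tube angled toward the detector
# center at each exposure. SID, sweep angle, and sampling density are assumed.
SID_MM = 1100.0            # source-to-image distance (assumed)
SWEEP_ANGLE_DEG = 30.0     # total sweep angle (assumed)
IMAGES_PER_DEGREE = 1.0    # acquisition density (assumed)

def exposure_positions(sid_mm, sweep_deg, images_per_degree):
    """Return (tube x position along the travel path, tube tilt toward the
    detector center) for each exposure in the sweep."""
    n = max(2, int(round(sweep_deg * images_per_degree)) + 1)
    half_span = sid_mm * math.tan(math.radians(sweep_deg / 2.0))
    positions = []
    for i in range(n):
        x = -half_span + i * (2.0 * half_span / (n - 1))
        tilt_deg = math.degrees(math.atan2(x, sid_mm))     # angle toward detector center
        positions.append((x, tilt_deg))
    return positions

for x_mm, tilt in exposure_positions(SID_MM, SWEEP_ANGLE_DEG, IMAGES_PER_DEGREE):
    print(f"tube at x = {x_mm:8.1f} mm -> tilt {tilt:6.2f} deg")
```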
The Carestream Digital Tomosynthesis (DT) system was evaluated through non-clinical (bench) testing and a clinical reader study to demonstrate its diagnostic image quality and equivalence to predicate devices.
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria Category | Specific Criteria | Reported Device Performance | Comments |
---|---|---|---|
Diagnostic Image Quality | Mean RadLex Rating (4-point scale) for scout and Digital Tomosynthesis exams. | 3.7156 | The RadLex scale ranged from 1 (non-diagnostic) to 4 (exemplary). All ratings were above non-diagnostic. |
Equivalence to Predicate | "Equivalent or better in diagnostic quality compared to images obtained using commercially available predicate and reference devices." | Achieved | Statistical test results demonstrated equivalence or superiority. |
Non-clinical Performance | Conformance to specifications, intended workflow, related performance, overall function, verification and validation of requirements for intended use, and reliability of system software. | Met | Predefined acceptance criteria were met, demonstrating the device is as safe, effective, and performs as well as or better than the predicate device. |
2. Sample Size and Data Provenance
- Test Set Sample Size:
- Clinical Images: 17 Digital Tomosynthesis image cases from adult human subjects (patients). Each case included a thoracic digital radiograph (PA and lateral chest exposure) and a DT exam (scout PA chest image and DT exposures).
- Phantom Images: 11 Digital Tomosynthesis phantom exams and corresponding Linear Tomography exams.
- Data Provenance: Clinical study conducted at Toronto General Hospital located in Toronto, Ontario, Canada (prospective). Phantom studies were also conducted.
3. Number of Experts and Qualifications for Ground Truth
- Number of Experts: Seven (7) board certified radiologists.
- Qualifications: "general varying reading experience." (No further specific details on years of experience were provided in the text).
4. Adjudication Method for the Test Set
The text indicates that seven radiologists performed an evaluation, but it does not specify an adjudication method (e.g., 2+1, 3+1 consensus). It only states they used a "graduated 4-point RadLex rating scale" and the mean rating was calculated from their assessments.
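Purely for illustration, a pooled mean rating like the 3.7156 reported above could be computed from a reader-by-case matrix of RadLex scores as sketched below. The ratings here are synthetic, and pooling across all readers and cases is an assumption, since the document does not describe how the mean was calculated.

```python
import numpy as np

# Synthetic ratings: rows = 7 readers, columns = 17 cases, values on the
# 4-point RadLex scale (1 = non-diagnostic ... 4 = exemplary).
rng = np.random.default_rng(0)
ratings = rng.integers(3, 5, size=(7, 17))   # illustrative values only

mean_rating = ratings.mean()                 # pooled mean over readers and cases
all_diagnostic = bool((ratings > 1).all())   # check: nothing rated non-diagnostic

print(f"mean RadLex rating: {mean_rating:.4f}, all above non-diagnostic: {all_diagnostic}")
```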
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
A clinical reader study was performed. The study involved seven radiologists evaluating images from the investigational device, a reference comparison (standard of care PA and lateral chest x-rays), and the predicate device (Linear Tomography phantom studies). The "statistical test results demonstrate the Carestream Digital Tomosynthesis delivers quality imaging performance that is equivalent or better in diagnostic quality compared to images obtained using commercially available predicate and reference devices."
- Effect Size: The document does not report a quantitative effect size for how much human readers improved with the Digital Tomosynthesis images versus without them; it states only that the DT system was found to be "equivalent or better" in diagnostic quality.
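The document also does not state which statistical test supported the "equivalent or better" conclusion. As a hedged illustration only, one common approach is a paired non-inferiority comparison of quality ratings against a pre-specified margin; the scores, the 0.5-point margin, and the choice of test below are all assumptions, not the study's actual analysis.

```python
import numpy as np
from scipy import stats

# Synthetic paired quality ratings for the same cases read with the
# investigational DT images and with the reference/predicate images.
dt_scores  = np.array([3.8, 3.6, 3.9, 3.7, 3.5, 3.8, 3.9, 3.6, 3.7, 3.8])
ref_scores = np.array([3.6, 3.5, 3.8, 3.6, 3.4, 3.7, 3.8, 3.5, 3.6, 3.7])
margin = 0.5  # assumed non-inferiority margin on the 4-point scale

# One-sided test of H0: mean(dt - ref) <= -margin  (DT worse by more than the margin)
diff = dt_scores - ref_scores
t_stat, p_value = stats.ttest_1samp(diff, popmean=-margin, alternative="greater")
print(f"t = {t_stat:.2f}, one-sided p = {p_value:.4g}")
```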
6. Standalone (Algorithm Only) Performance
The document describes the "Carestream Digital Tomosynthesis reconstruction software leverages algorithms that are the same in principle to those applied in computed tomography (CT), such as filtered back projection or iterative reconstruction etc." While it implies algorithm processing, the overall evaluation was of the imaging system producing the images for radiologist interpretation. The reader study assessed the diagnostic image quality facilitated by the DT feature. It is not explicitly stated whether a standalone algorithm-only performance assessment without human-in-the-loop was conducted. The focus was on the diagnostic utility of the images produced by the device.
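As a hedged illustration of the general algorithm family the document cites (not Carestream's actual reconstruction), the sketch below implements an unfiltered shift-and-add backprojection for a linear-sweep tomosynthesis geometry. The simplified 1-D detector model and all geometry parameters are assumptions.

```python
import numpy as np

def shift_and_add(projections, tube_positions_mm, slice_heights_mm,
                  source_to_detector_mm, pixel_pitch_mm):
    """Illustrative shift-and-add tomosynthesis reconstruction.

    projections: (n_views, n_pixels) array -- one 1-D detector row per view,
    acquired at different lateral tube positions during the sweep.
    """
    n_views, n_pixels = projections.shape
    slices = np.zeros((len(slice_heights_mm), n_pixels))
    for si, z in enumerate(slice_heights_mm):
        for v in range(n_views):
            # A structure at height z above the detector projects with a lateral
            # shift roughly proportional to the tube offset and to z / SID.
            shift_px = int(round(tube_positions_mm[v] * z
                                 / (source_to_detector_mm * pixel_pitch_mm)))
            slices[si] += np.roll(projections[v], shift_px)
    return slices / n_views  # structures in the chosen plane reinforce; others blur

# Tiny synthetic example: 5 views of a 64-pixel detector row.
proj = np.random.default_rng(1).random((5, 64))
out = shift_and_add(proj, tube_positions_mm=np.linspace(-100, 100, 5),
                    slice_heights_mm=[50.0, 100.0],
                    source_to_detector_mm=1000.0, pixel_pitch_mm=0.15)
print(out.shape)  # (2, 64): one reconstructed row per requested slice height
```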
7. Type of Ground Truth Used for the Test Set
The ground truth for the clinical cases was based on the diagnostic image quality ratings by board-certified radiologists using a RadLex scale. For the phantom studies, the ground truth would inherently be known from the phantom's construction and expected imaging characteristics, used for comparison with Linear Tomography.
8. Sample Size for the Training Set
The document does not provide information on the sample size used for the training set of any AI or reconstruction algorithms.
9. How Ground Truth for the Training Set Was Established
The document does not provide information on how ground truth was established for the training set. It focuses on the validation of the device performance.
(70 days)
Re: K183275
Trade/Device Name: DR 800 with Tomosynthesis
Regulation Number: 21 CFR 892.1740
Classification Name: Tomographic X-ray System
Regulatory Classification: Class II, 21 CFR 892.1740
The DR 800 system is indicated for performing dynamic imaging examinations (fluoroscopy and/or rapid sequence) of the following anatomies/procedures:
- Positioning fluoroscopy procedures
- Gastro-intestinal examinations
- Urogenital tract examinations
- Angiography
It is intended to replace fluoroscopic images obtained through image intensifier technology. In addition, the system is intended for projection radiography of all body parts.
In addition, the system provides the Agfa Tomosynthesis option, which is intended to acquire tomographic slices of human anatomy and to be used with Agfa DR X-Ray systems. Tomosynthesis is used to synthesize tomographic slices from a single tomographic sweep.
The DR 800 is not intended for mammography applications.
Agfa's DR 800 with Tomosynthesis is a tomographic and fluoroscopic x-ray system (product codes IZF and JAA) intended to capture tomographic slices of the human body. The DR 800 is a floor-mounted radiographic, fluoroscopic and tomographic system that consists of a tube and operator console with a motorized tilting patient table and bucky with optional wall stand, FLFS overlay and ceiling suspension. The new device uses Agfa's NX workstation with MUSICA image processing and flat-panel detectors for digital and wide dynamic range capture. It is capable of replacing other direct radiography, tomography, image intensifier tube and TV camera systems, including computed radiography systems with conventional or phosphor film cassettes.
The Agfa DR 800 with Tomosynthesis underwent bench testing and software verification and validation to demonstrate substantial equivalence to its predicate devices, GE Medical Systems' Discovery XR656 with VolumeRAD (K132261) and Agfa's previous version of the DR 800 with MUSICA Dynamic (K180589). The primary focus of the testing for this submission was on the new Digital TomoSynthesis (DTS) software and its performance in generating tomographic slices.
Here's a breakdown of the acceptance criteria and study details:
1. Table of Acceptance Criteria and the Reported Device Performance
Performance Metric | Acceptance Criteria | Reported Device Performance |
---|---|---|
Technical & Acceptance Testing | All deviations or variances are documented, addressed in CR&T (Corrective and Remedial Actions) documentation, and verified. All mitigations have been tested and passed. All design input requirements have been tested and passed. All planned verification activities have been successfully completed. | Verification and validation testing confirmed the device meets performance, safety, usability, and security requirements. Pediatric indications were also taken into account. Results were verified and validated. Technical and acceptance testing was completed on the DR 800 with Tomosynthesis to confirm the medical device functions and performs as intended. All deviations or variances are documented in a defect database and addressed in the CR&T documentation and verified. All mitigations have been tested and passed. All design input requirements have been tested and passed. All planned verification activities have been successfully completed. |
Usability & Functionality Evaluation | The results of these tests fell within the acceptance criteria for the DR 800 X-ray system. | Usability and functionality evaluations were conducted with qualified independent radiographers and internal experts. The results of these tests fell within the acceptance criteria for the DR 800 X-ray system; therefore, the DR 800 supports a radiographic, fluoroscopic, and tomosynthesis workflow including dynamic and static imaging, continuous and rapid sequence exams, tomographic slices calibration, and positioning. |
Image Quality Validation (Adults) | The reconstruction software of the image processing for Digital TomoSynthesis (DTS) of the DR 800 X-ray system passed the acceptance criteria. DTS images were suitable for diagnosis. | Image Quality Validation testing was conducted using anthropomorphic phantoms and evaluated by qualified independent radiographers and internal experts. The image quality validation included testing a full range of applications for the DR 800 X-ray system with Tomosynthesis compared to reference images from the primary predicate GE Discovery XR656 with VolumeRAD (K132261) using anonymized adult phantoms. The test results indicated that the reconstruction software of the image processing for Digital TomoSynthesis (DTS) of the DR 800 X-ray system passed the acceptance criteria and that the DR 800 with Tomosynthesis is capable of making DTS studies for adult patients. The test results showed MUSICA Digital TomoSynthesis (DTS) images were suitable for diagnosis for adult patients. |
Image Quality Validation (Pediatric) | The reconstruction software of the image processing for Digital TomoSynthesis (DTS) of the DR 800 X-ray system passed the acceptance criteria. Both 5x the dose and 10x the dose images were clinically sufficient and within the intended use, and DTS images were suitable for diagnosis for pediatric patients. | Image Quality Validation testing was conducted using anthropomorphic phantoms and evaluated by qualified independent radiographers and internal experts. The image quality validation included testing using anonymized pediatric phantoms. The pediatric phantom image quality validation testing analyzed five tomographic slices at 5x the dose and five tomographic slices at 10x the dose. Both the 5x the dose and 10x the dose images are clinically sufficient and within the intended use. The test results indicated that the reconstruction software of the image processing for Digital TomoSynthesis (DTS) of the DR 800 X-ray system passed the acceptance criteria and that the DR 800 with Tomosynthesis is capable of making DTS studies for pediatric patients. The test results showed MUSICA Digital TomoSynthesis (DTS) images were suitable for diagnosis for pediatric patients. |
Software Risk Assessment | No risks identified in the Not Acceptable Region. The device is assumed to be safe, and the benefits of the device outweigh the residual risk. | During the final risk analysis meeting, the risk management team concluded that the medical risk is no greater than with conventional x-ray film previously released to the field. For the NX4.x.21 (NX Mentor) there are a total of 322 risks in the broadly acceptable region and 27 risks in the ALARP (As Low As Reasonably Practicable) region with only eight of these risks identified. Zero risks were identified in the Not Acceptable Region. |
Electrical Safety & EMC Testing | Compliance with various IEC 60601 standards and FDA Subchapter J. | The DR 800 with Tomosynthesis is compliant to the FDA Subchapter J mandated performance standard 21 CFR 1020.30 - 1020.32. Compliance with IEC 60601-1, IEC 60601-1-2, IEC 60601-1-3, and IEC 60601-2-54 was confirmed. |
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: Not explicitly stated as a numerical count of cases/images. The testing involved "anthropomorphic phantoms" for image quality evaluation, including both adult and pediatric phantoms. The pediatric phantom testing analyzed "five tomographic slices at 5x the dose and five tomographic slices at 10x the dose."
- Data Provenance: The data provenance is from bench testing using anonymized anthropomorphic phantoms. This indicates that the data is prospective in the sense that the phantoms were specifically used for this testing, but it is not from human patients. The country of origin of the data is not specified, but the manufacturer is Agfa N.V. (Belgium).
3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts
- Number of Experts: "Qualified independent radiographers and internal experts" were used for usability, functionality, and image quality evaluations. The exact number of each group is not specified.
- Qualifications of Experts: They are described as "qualified independent radiographers and internal experts" and "qualified radiologists" (in the "Descriptive characteristics and performance data including image quality evaluations by qualified radiologists are adequate to ensure equivalence" section). Specific details like years of experience or subspecialty are not provided.
4. Adjudication Method for the Test Set
- The document implies that the "qualified independent radiographers and internal experts" evaluated the images and that the results "fell within the acceptance criteria" or "passed the acceptance criteria," suggesting a consensus or individual assessment against predefined criteria. However, a specific adjudication method (e.g., 2+1, 3+1) is not explicitly stated.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, If So, What Was the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance
- No MRMC comparative effectiveness study was done. The study design described is a bench test comparison of the device against a predicate device's reference images using phantoms, with evaluation by human experts, rather than an assessment of human reader performance with or without AI assistance. The device itself is an imaging system, not an AI-powered diagnostic tool for interpretation assistance in the sense of comparing human performance.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
- Yes, a standalone evaluation of the algorithm's output was done, as part of the image quality validation. The "reconstruction software of the image processing for Digital TomoSynthesis (DTS)" was evaluated to ensure the generated tomographic images were suitable for diagnosis. This is an assessment of the algorithm's output (the tomographic slices) without direct human intervention in the image generation process, beyond setting up the acquisition parameters.
7. The Type of Ground Truth Used
- The ground truth for the image quality evaluation was based on comparison to reference images from the primary predicate device (GE Discovery XR656 with VolumeRAD - K132261) using anthropomorphic phantoms, and expert assessment by "qualified independent radiographers and internal experts" confirming the images were "suitable for diagnosis" and "clinically sufficient." It is not pathology, or outcomes data.
8. The Sample Size for the Training Set
- The document does not explicitly state the sample size for the training set for the MUSICA DTS software. It mentions that "The image processing algorithms in the new device are similar to those previously cleared in the DR 800 with MUSICA Dynamic (K180589) and other devices in Agfa's radiography portfolio today... The addition of the tomographic image processing is similar to the predicate device (K132261)." This suggests leveraging existing, previously trained algorithms or development methodologies, rather than describing a specific new training dataset for this submission.
9. How the Ground Truth for the Training Set Was Established
- This information is not provided in the document. As noted above, the submission emphasizes similarity to existing, cleared technologies, rather than detailing the unique training of a novel algorithm from scratch.
(247 days)
K944967 (12/09/1994)
Regulation No. 21 CFR 892.1740
Product Code: IZF
The device is a permanently installed diagnostic x-ray system for general radiographic x-ray imaging including tomography. The tomography feature is not to be used for imaging pediatric patients.
The DRX-Evolution is a diagnostic x-ray system utilizing digital radiography (DR) technology. The DRX-Evolution is designed for horizontal and upright projection exams. The system consists of a high voltage x-ray generator, overhead tube crane with x-ray tube assembly, radiographic table with detector tray, Bucky image receptor on an upright wall stand, and x-ray controls containing a power distribution unit and operator PC (user interface).
This document describes the Carestream DRX-Evolution, a diagnostic x-ray system. The modifications to the device include firmware and mechanical changes to facilitate linear tomography exams, the addition of a new generator option, and software updates such as Bone Suppression and Pneumothorax Visualization.
Here's an analysis of the acceptance criteria and the studies that prove the device meets them, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
Feature/Functionality | Acceptance Criteria | Reported Device Performance |
---|---|---|
Overall Safety & Effectiveness | Device is as safe, as effective, and performs as well as or better than the predicate device. | "Predefined acceptance criteria were met and demonstrated that the device is as safe, as effective, and performs as well as or better than the predicate device." |
Workflow, Performance, Function, Verification/Validation | Intended workflow, related performance, overall function, verification and validation of requirements for intended use, shipping performance, and reliability of the DRX-Evolution system (both software and hardware) are demonstrated. | "These studies demonstrated the intended workflow, related performance, overall function, verification and validation of requirements for intended use, shipping performance, and reliability of the DRX-Evolution system including both software and hardware requirements. Nonclinical test results have demonstrated that the device conforms to its specifications." |
Linear Tomography Accuracy | Accurate movement of the x-ray tube head with respect to the image capture device. | "Test results using the tool phantom were as expected, demonstrating accuracy of the tube head movement with respect to the capture device." |
Linear Tomography Diagnostic Quality | Linear tomography images acquired are of acceptable diagnostic quality. | "linear tomography images were generated using four different anthropomorphic phantoms (chest, hand, knee and pelvis). The images were evaluated by a board-certified radiologist for diagnostic quality. Results of this evaluation demonstrated that the linear tomography images acquired using the DRX-Evolution system are of acceptable diagnostic quality." |
DRX 2530C Detector Image Quality | Equivalent or superior image quality to the DRX-1 Detector (predicate device). | "Results of these studies demonstrated equivalent or superior image quality to the DRX-1 Detector (predicate device)." |
DRX-1C Detector Image Quality | Equivalent or superior image quality to the DRX-1 Detector (predicate device). | "Results of these studies demonstrated equivalent or superior image quality to the DRX-1 Detector (predicate device)." |
DR Long Length Imaging Software Diagnostic Capability | Produced LLI images with statistically equivalent or better diagnostic capability to the predicate software. | "Results of the DR Long Length Imaging study demonstrated that the investigational software produced LLI images with statistically equivalent or better diagnostic capability to the predicate software." |
Bone Suppression Software Effectiveness | Generates a companion image that, when presented to the physician along with the standard-of-care image, is rated substantially equivalent or improved as compared to that of the predicate product (standard-of-care image without the bone-suppressed companion image). | "Results of the Bone Suppression clinical study demonstrated that the software generates a companion image that, when presented to the physician along with the standard-of-care image, is rated substantially equivalent or improved as compared to that of the predicate product (standardof-care image without the bone-suppressed companion image)." |
2. Sample Size Used for the Test Set and Data Provenance
- Linear Tomography Accuracy (Bench Testing):
- Sample Size: Not explicitly stated, but it involved "a tool phantom" and "four different anthropomorphic phantoms (chest, hand, knee and pelvis)."
- Data Provenance: Bench testing, likely conducted internally by Carestream Health, Inc. The country of origin is not specified, but the company is based in Rochester, New York, USA. Because the data come from phantom bench testing generated for the submission rather than patient imaging, prospective/retrospective designations do not strictly apply.
- Detector Image Quality (Clinical Studies - DRX 2530C and DRX-1C):
- Sample Size: Not explicitly stated in this document ("Results of these studies demonstrated..."). These studies refer to K130464 for DRX 2530C and K120062 for DRX-1C, where detailed sample sizes would be found.
- Data Provenance: Clinical studies, in accordance with FDA guidance. Prospective studies are typical for such evaluations. The country of origin is not specified.
- DR Long Length Imaging Software Diagnostic Capability (Clinical Study):
- Sample Size: Not explicitly stated in this document ("Results of the DR Long Length Imaging study demonstrated..."). It refers to K130567 for more details.
- Data Provenance: Clinical study, likely prospective. The country of origin is not specified.
- Bone Suppression Software Effectiveness (Clinical Study):
- Sample Size: Not explicitly stated in this document ("Results of the Bone Suppression clinical study demonstrated..."). It refers to K133442 for more details.
- Data Provenance: Clinical study, likely prospective. The country of origin is not specified.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
- Linear Tomography Diagnostic Quality (Bench Testing):
- Number of Experts: A single expert ("evaluated by a board-certified radiologist").
- Qualifications: "board-certified radiologist."
- For other clinical studies (Detectors, LLI, Bone Suppression): The number and qualifications of experts are not detailed in this summary; they would be in the referenced 510(k) documents (K130464, K120062, K130567, K133442).
4. Adjudication Method for the Test Set
- Linear Tomography Diagnostic Quality: No formal adjudication method is described beyond a single board-certified radiologist's evaluation.
- For other clinical studies (Detectors, LLI, Bone Suppression): Adjudication methods are not detailed in this summary; they would be in the referenced 510(k) documents.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs Without AI Assistance
- No explicit MRMC comparative effectiveness study involving AI assistance for human readers is described in this document. The clinical studies mentioned for Bone Suppression and DR Long Length Imaging software compare the investigational software/images to a predicate or standard-of-care, but they don't explicitly state an "AI vs without AI assistance" MRMC study for improved human reader performance with an effect size.
- The Bone Suppression study mentions a "companion image that, when presented to the physician along with the standard-of-care image, is rated substantially equivalent or improved." This implies a reader study where human performance (or perception of image utility) is assessed with and without the software-generated companion image, but it does not specify an MRMC design or quantify an "effect size" in terms of improved diagnostic accuracy for human readers.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
- Yes, standalone performance was evaluated for certain aspects:
- Linear Tomography Accuracy: The initial bench testing using a "tool phantom" evaluated the mechanical accuracy of the device's linear tomography function in a standalone manner (without human diagnostic interpretation), confirming "accuracy of the tube head movement."
- Image Quality Metrics: The statement that DRX-1C and DRX 2530C detectors "provide equal or superior image quality with respect to noise and spatial resolution at equivalent doses" suggests that standalone technical image quality metrics were assessed, and these would be algorithm-only or device-only measurements.
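The submission does not specify which bench metrics were used for the noise and spatial-resolution comparison. As a hedged sketch of the kind of standalone measurements typically involved, the example below estimates noise as the standard deviation in a uniform region and uses a crude 10%-90% edge-rise distance as a resolution proxy; the image data and ROI choices are synthetic.

```python
import numpy as np

def flat_field_noise(image, roi):
    """Noise estimate: standard deviation of pixel values inside a uniform ROI."""
    r0, r1, c0, c1 = roi
    return float(np.std(image[r0:r1, c0:c1]))

def edge_rise_distance_px(image, row):
    """Rough resolution proxy: 10%-90% rise distance (in pixels) of the
    edge-spread function taken along one row of an edge-phantom image."""
    esf = image[row].astype(float)
    esf = (esf - esf.min()) / (esf.max() - esf.min())
    i10 = int(np.argmax(esf >= 0.1))
    i90 = int(np.argmax(esf >= 0.9))
    return abs(i90 - i10)

# Synthetic edge-phantom image: low noise plus an edge blurred over ~8 pixels.
rng = np.random.default_rng(2)
img = rng.normal(0.0, 1.0, (128, 128))
img += np.clip((np.arange(128) - 60) / 8.0, 0.0, 1.0) * 100.0  # same ramp in every row

print(f"noise (std in flat ROI): {flat_field_noise(img, (10, 50, 10, 50)):.2f}")
print(f"10-90% edge rise: {edge_rise_distance_px(img, row=64)} px")
```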
7. The Type of Ground Truth Used
- Linear Tomography Accuracy: The ground truth for mechanical accuracy was based on the "expected" results from a "tool phantom," implying known physical measurements or standards (see the sketch after this list).
- Linear Tomography Diagnostic Quality: The ground truth was based on expert consensus/opinion by a "board-certified radiologist" regarding "acceptable diagnostic quality" of anthropomorphic phantom images.
- Detector Image Quality, DR Long Length Imaging Software, Bone Suppression Software: For these, the ground truth was generally based on comparison to a predicate device or standard-of-care, with evaluation typically performed by qualified experts (e.g., physicians for diagnostic capability). While not explicitly stated as "ground truth," the predicate/standard-of-care serves as the reference for equivalence or improvement claims.
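Referring back to the linear tomography accuracy item above, a tolerance check against known tube-head positions might look like the following sketch. The commanded positions, measured errors, and 0.5 mm tolerance are all hypothetical; the document states only that results "were as expected."

```python
import numpy as np

# Hypothetical commanded vs. measured tube-head positions (mm) along the
# linear tomography travel path, compared against an assumed tolerance.
commanded = np.linspace(-200.0, 200.0, 9)
measured = commanded + np.array([0.1, -0.2, 0.0, 0.3, -0.1, 0.2, 0.0, -0.3, 0.1])

tolerance_mm = 0.5  # assumed acceptance limit; not stated in the document
errors = np.abs(measured - commanded)

print(f"max position error: {errors.max():.2f} mm, "
      f"within tolerance: {bool((errors <= tolerance_mm).all())}")
```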
8. The Sample Size for the Training Set
- The document does not explicitly state the sample size for any training sets.
- The software features (Bone Suppression, Pneumothorax Visualization) likely involved machine learning or other trained models that would require training data; however, this information is not provided in the 510(k) summary. The referenced submission K133442 (Bone Suppression) may contain this information.
9. How the Ground Truth for the Training Set Was Established
- The document does not provide details on how ground truth for any training sets was established. Since training set details are absent, the method for establishing their ground truth is also not mentioned.