Search Results
Found 3 results
510(k) Data Aggregation
(214 days)
Nanox.ARC is a stationary X-ray system intended to produce tomographic images of the human musculoskeletal system, adjunctive to conventional radiography, on adult patients. This device is intended to be used in professional healthcare facilities or radiological environments, such as hospitals, clinics, imaging centers, and other medical practices, by trained radiographers, radiologists, and physicists. Digital tomosynthesis is used to synthesize tomographic slices from a single tomographic sweep. Applications can be performed with the patient in prone, supine, and lateral positions. This device is not intended for mammographic, cardiac, pulmonary, intra-abdominal, intra-cranial, interventional, or fluoroscopic applications. This device is not intended for imaging pediatric or neonatal patients.
Nanox.ARC is a tomographic and solid-state X-ray system (product codes IZF and MQB) intended to produce tomographic images of the human musculoskeletal system from a single tomographic sweep, as an adjunct to conventional radiography, on adult patients.
Nanox.ARC is a floor-mounted tomographic system that consists of a user control console; a multisource, tiltable arc gantry with five alternately switched tubes; a motorized patient table; a flat-panel detector of the scintillator-photodetector type; and a protocols database and image-processing software packages.
Nanox.ARC utilizes several small-sized X-ray tubes that are independently and electronically switched, thereby dividing the overall power requirements over multiple tubes. Nanox.ARC utilizes a tilting imaging ring with five X-ray tubes, operated sequentially, one at a time, used to generate multiple low-dose X-ray projection images acquired from different angles during a single spherical (non-linear) sweep. The sweep is performed over a motorized patient table. Patients can be placed in prone, supine, and lateral positions.
The acquired projection imaging data is automatically reconstructed to form tomographic slices of the imaged object, with each slice parallel to the table plane. The resulting tomosynthesis images reduce the effect of overlying structures and provide depth information on structures of interest. The image reconstruction service, as well as the system's protocol database and DICOMization services, can be hosted either locally or as part of the Nanox.CLOUD, according to customer preference. The resultant images are sent using the DICOM protocol.
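The general principle behind tomosynthesis reconstruction can be sketched with the classic shift-and-add method. This is an illustration of the concept only, not Nanox's proprietary reconstruction algorithm:

```python
# A minimal shift-and-add sketch of tomosynthesis reconstruction.
# Illustrative only; not the device's actual reconstruction service.

def shift_and_add(projections, shifts):
    """Reconstruct one tomographic plane from 1-D projections.

    Each projection is shifted by its plane-dependent offset and the
    results are averaged: structures lying in the chosen plane line
    up and reinforce, while out-of-plane structures are smeared out.
    """
    n = len(projections[0])
    recon = [0.0] * n
    for proj, shift in zip(projections, shifts):
        for i in range(n):
            j = i + shift
            if 0 <= j < n:
                recon[i] += proj[j]
    return [v / len(projections) for v in recon]
```

For a point object in the focal plane, the per-projection offsets cancel exactly and the point is restored at full intensity; a point in any other plane is spread across several pixels at reduced intensity, which is the "reduced effect of overlying structures" described above.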
Here's an analysis of the acceptance criteria and the study that proves the device meets them, based on the provided text:
Acceptance Criteria and Device Performance
The document doesn't explicitly list specific quantitative acceptance criteria in a table format with separate reported device performance values for each criterion. Instead, it states that "Predefined acceptance criteria were met and demonstrated that the device is as safe, as effective, and performs as well as or better than the predicate device."
The "Table 2: Non-clinical Performance Data" lists various tests performed and reports a "PASS" for each, indicating that the device met the acceptance criteria for those specific tests.
Table of Acceptance Criteria (Implied) and Reported Device Performance:
Acceptance Criterion (Implied by Test Description) | Reported Device Performance
---|---
System Electrical Qualification | PASS |
System Performance (Motion resolution & accuracy) | PASS |
System Longevity & Consistency | PASS |
Tube Longevity and Reliability | PASS |
Functional Verification | PASS |
Motion Control stability | PASS |
Detector and image acquisition functionality | PASS |
Usability Summative (Safety, effectiveness, no failures) | PASS |
Transportation safety | PASS |
Dimensional and Mechanical Properties | PASS |
Image Quality | PASS |
Phantom Validation (Diagnostic quality vs. predicate) | PASS |
Software verification and validation | PASS |
Compliance to 21 CFR 1020.30 and 1020.31 | PASS |
Electrical Safety & EMC (IEC 60601-1, IEC 60601-1-2) | PASS |
Radiation Safety (IEC 60601-1-3, IEC 60601-2-28, IEC 60601-2-54) | PASS |
Biocompatibility (ISO 10993-1) | PASS |
Study Details:
Sample size used for the test set and the data provenance:
- Clinical Sample Evaluation (for image quality): Nine (9) Digital Tomosynthesis image cases were acquired from healthy adult human subjects (patients).
- Phantom Performance Exams: Twelve (12) Digital Tomosynthesis phantom performance exams (total cases = 9 human + 12 phantom = 21 cases).
- Data Provenance: From a clinical study conducted at Shamir Medical Center in Israel. The study appears to be prospective as it states "image cases were acquired from healthy adult human subjects (patients) from a clinical study conducted at Shamir Medical Center in Israel."
Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Number of Experts: One (1)
- Qualifications: An ABR-certified radiologist.
Adjudication method for the test set:
- Adjudication Method: Not explicitly stated. With only one radiologist reviewing, no multi-expert adjudication (e.g., 2+1, 3+1) is described; since a single expert made the determination, the method is effectively "none."
If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- MRMC Study: No, an MRMC comparative effectiveness study was not conducted. The clinical sample evaluation involved a single ABR-certified radiologist evaluating the diagnostic quality of the Nanox.ARC images themselves, "against a reference comparison which was the standard of care radiographies." This was a direct comparison of images, not a study on human reader performance with or without AI assistance.
If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
- Standalone Performance: Yes, the described "Bench Testing" and "Non-clinical Performance Data" table largely represent standalone algorithm and system performance without human intervention in the diagnostic interpretation loop. The "Image Quality" and "Phantom Validation" tests also assessed the device's output directly. The clinical sample evaluation by the radiologist was to evaluate the diagnostic quality of the images produced by the device, effectively assessing the device's standalone output for clinical utility.
The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Type of Ground Truth: For the clinical sample evaluation, the diagnostic quality of the Nanox.ARC images was evaluated by an ABR-certified radiologist "against a reference comparison which was the standard of care radiographies." This implies the "ground truth" was essentially the interpretive diagnostic quality determined by a single expert, compared to standard of care imaging. For the phantom studies, the ground truth would be based on the known physical properties and measurements within the phantoms.
The sample size for the training set:
- Training Set Sample Size: The document does not provide any information regarding the sample size used for the training set of the Nanox.ARC system's image reconstruction or processing algorithms.
How the ground truth for the training set was established:
- Training Set Ground Truth: The document does not provide any information on how ground truth was established for the training set.
(21 days)
The VALORY system is a General Radiography X-ray imaging system used in hospitals, clinics and medical practices by radiographers, radiologists and physicists to make, process and view static X-ray radiographic images of the skeleton (including skull, spinal column and extremities), chest, abdomen and other body parts on adults and pediatric patients.
Applications can be performed with the patient in sitting, standing or lying position.
The system is not intended for use in mammography applications.
VALORY is a solid-state X-ray system, a direct radiography (DR) system (product code MQB), intended to capture general radiographic images of the human body. VALORY is a ceiling-mounted stationary X-ray system with digital image capture that consists of a tube and operator console with a patient table and/or wall stand. VALORY uses Agfa's NX workstation with MUSICA2™ image processing and flat-panel detectors of the scintillator-photodetector type (cesium iodide, CsI) to capture and process the digital image.
The provided text describes the VALORY system, a general radiography X-ray imaging system, and its substantial equivalence to a predicate device (DR 600, K152639). However, it does not detail specific acceptance criteria for performance metrics (e.g., sensitivity, specificity, accuracy) or a study proving the device meets these criteria in the way a clinical performance study would.
Instead, the submission focuses on demonstrating substantial equivalence through:
- Comparison of technological characteristics.
- Bench testing for image quality and usability.
- Software verification and validation.
- Compliance with electrical safety, EMC, and radiation protection standards.
- Adherence to quality management systems and guidance documents.
Here's an attempt to extract and organize the requested information, noting where specific details are not provided in the text:
1. Table of Acceptance Criteria and Reported Device Performance
The submission does not specify quantitative acceptance criteria for image quality or clinical performance metrics (like sensitivity, specificity, or accuracy) in a traditional sense. The performance is assessed relative to the predicate device.
Acceptance Criteria (Implicit) | Reported Device Performance
---|---
Image Quality: Equivalent or better than predicate device. | "Image quality validation testing was conducted using anthropomorphic adult and pediatric phantoms and evaluated by qualified internal experts and external radiographers. The radiographers evaluated the VALORY X-ray system with the DR 600 (predicate device, K152639) using XD 14 (K211790, pending 510(k) clearance) and DR 14s (K161368) flat-panel detectors comparing overall image quality. The test results indicated that the VALORY X-ray system has at least the same if not better image quality than the predicate device (DR 600, K152639) and other flat-panel detectors currently on the market." "Additional image quality validation testing for NX 23 was completed in scope of the DX-D Imaging Package with XD Detectors and included a full range of GenRad image processing applications compared to MUSICA 2 image processing using anonymized adult and pediatric phantoms and read by eight internal experts."
Usability: Meets safety and workflow requirements. | "Usability evaluations for VALORY were conducted with external radiographers. The usability studies evaluated overall product safety, including workflow functionality for adults and pediatric patients, system movements, information and support for components. The results of the usability tests, fulfillment of the validation acceptance criteria, and assessment of remaining defects support VALORY passing usability validation testing."
Software Safety: Acceptable risk profile. | "Software verification testing for NX 23 was completed... For the NX 23 (NX Orion) software there are a total of 535 risks in the broadly acceptable region and 37 risks in the ALARP region with only four of these risks identified. Zero risks were identified in the Not Acceptable Region. Therefore, the device is assumed to be safe, the benefits of the device are assumed to outweigh the residual risk."
Compliance: Meets relevant electrical, EMC, and radiation standards. | VALORY is compliant with FDA Subchapter J mandated performance standards 21 CFR 1020.30 and 1020.31, as well as the IEC 60601 series, ISO 13485, ISO 14971, DICOM, IEC 62366-1, and IEC 62304.
Functional Equivalence: Same intended use as predicate. | The VALORY system has indications for use that are consistent with and substantially equivalent to the legally marketed predicate device (K152639).
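The software-safety figures quoted above follow the ISO 14971 pattern of sorting risks into "broadly acceptable," "ALARP" (as low as reasonably practicable), and "not acceptable" regions. A minimal sketch of such a classification, with hypothetical severity/probability scales and thresholds (the actual Agfa risk matrix is not given in the document):

```python
from collections import Counter

# Hypothetical 5x5 risk matrix in the style of ISO 14971; the real
# scales and region boundaries used by Agfa are not in the document.
def risk_region(severity, probability):
    """Map severity (1-5) and probability (1-5) to a risk region."""
    score = severity * probability
    if score <= 6:
        return "broadly acceptable"
    if score <= 15:
        return "ALARP"
    return "not acceptable"

# Tallying (severity, probability) pairs mirrors the kind of
# "535 broadly acceptable / 37 ALARP / 0 not acceptable" accounting
# reported for the NX 23 software.
risks = [(1, 2), (2, 3), (3, 4), (2, 2)]
tally = Counter(risk_region(s, p) for s, p in risks)
```

A device is then argued safe when no risks land in the "not acceptable" region and the residual ALARP risks are outweighed by the device's benefits.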
2. Sample Size Used for the Test Set and Data Provenance
The "test set" consisted of:
- Anthropomorphic adult and pediatric phantoms.
- No information on the exact number of phantoms or images used in these tests.
- The data provenance is not specified as country of origin, but it is implied to be laboratory-generated phantom data as "No clinical trials were performed in the development of the device. No animal or clinical studies were performed in the development of the new device."
- The phantom studies were bench tests performed specifically for this submission rather than analyses of existing clinical patient data, so the retrospective/prospective distinction does not strictly apply.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
- Image Quality Evaluation: "qualified internal experts and external radiographers" evaluated images from phantoms compared to the predicate device. For NX 23 testing, "eight internal experts" read images. Specific qualifications (e.g., years of experience) for these experts are not provided.
- Usability Evaluation: "external radiographers." Specific numbers and qualifications are not provided.
4. Adjudication Method for the Test Set
The document does not explicitly describe an adjudication method for the expert evaluations, such as 2+1 or 3+1. It states experts "evaluated" and "read" the images, implying their findings were used as the basis for comparison.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance
- No MRMC comparative effectiveness study was done.
- The device is a general radiography X-ray imaging system, not explicitly an AI-assisted diagnostic device for specific disease detection. Its image processing (MUSICA/MUSICA2/MUSICA3) is an established technology from the predicate device and other Agfa systems. The study focuses on demonstrating image quality equivalence, not on reader performance improvement with AI assistance.
6. If a Standalone (i.e., Algorithm Only Without Human-in-the-Loop Performance) Was Done
- The VALORY system itself is an imaging system, not a standalone diagnostic algorithm.
- The software risk assessment for NX 23 (the workstation software) involved evaluating risks without explicit human interaction for every risk listed, focusing on software failure modes. However, this is not a diagnostic "algorithm only without human-in-the-loop performance" in the sense of an AI CAD system. The image processing algorithms within MUSICA are an integral part of the system's image generation, not a separate diagnostic interpretation.
7. The Type of Ground Truth Used
- Image Quality: The ground truth for image quality evaluation was based on the performance of the predicate device (DR 600) and other marketed flat-panel detectors, combined with expert evaluation of images from anthropomorphic phantoms. It's a comparative assessment rather than a definitive "ground truth" of a disease state.
- Usability: Ground truth for usability was established through the feedback and performance of external radiographers during usability studies, ensuring the system met workflow and safety requirements.
- Software/Safety: Ground truth for software safety and compliance was against established standards (IEC, ISO, FDA regulations) and internal risk analyses.
8. The Sample Size for the Training Set
The document makes no mention of a "training set" in the context of machine learning or AI models. The device is being cleared based on substantial equivalence to an existing X-ray system, not as a novel AI diagnostic algorithm that requires a distinct training dataset. The "image processing algorithms" mentioned are presented as existing, cleared technologies from previous Agfa devices.
9. How the Ground Truth for the Training Set Was Established
Since there is no mention of a distinct training set for an AI model, this information is not applicable and not provided in the document.
(25 days)
The DR 100s system is a mobile X-ray imaging system used in hospitals, clinics and medical practices by radiographers and radiologists to make, process and view static X-ray radiographic images of the skeleton (including skull, spinal column and extremities), chest, abdomen and other body parts on adult, pediatric or neonatal patients.
Applications can be performed with the patient in the sitting, standing or lying position.
This device is not intended for mammography applications.
Agfa's DR 100s is a mobile X-ray system, a direct radiography system (product code ILL), intended to capture images of the human body. The device is an integrated mobile digital radiography X-ray system. The complete DR 100s system consists of the mobile X-ray unit with integrated X-ray generator and NX software, plus one or more DR detectors. The new device uses Agfa's NX workstation with MUSICA image processing and flat-panel detectors for digital image capture. It is also compatible with Agfa's computed radiography systems.
This submission is to add another mobile unit to Agfa's direct radiography portfolio.
The optional image processing allows users to conveniently select image processing settings for different patient sizes and examinations. The image processing algorithms in the new device are identical to those previously cleared in the DX-D 100 (K103597) and other devices in Agfa's radiography portfolio today, which includes DR 600 (K152639), DR 400 (K141192) and DR 800 (K183275).
Significant dose reduction can be achieved using the DR 100s with Agfa's patented MUSICA image processing and CsI flat-panel detectors. Testing with board-certified radiologists determined that Cesium Bromide (CR) and Cesium Iodide (DR) detectors, when used with MUSICA image processing, can provide dose reductions between 50-60% for adult patients and up to 60% for pediatric and neonatal patients when compared to traditional Barium Fluoro Bromide CR systems (K141602).
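The quoted 50-60% figures are relative reductions in dose compared with the reference CR systems; the arithmetic can be made explicit with a small helper (the dose values below are hypothetical, chosen only to land inside the quoted band, since the actual doses from the K141602 testing are not given here):

```python
def dose_reduction_pct(reference_dose, new_dose):
    """Percent reduction of new_dose relative to reference_dose."""
    if reference_dose <= 0:
        raise ValueError("reference dose must be positive")
    return 100.0 * (reference_dose - new_dose) / reference_dose

# Hypothetical example: dropping from 1.0 mGy to 0.45 mGy entrance
# dose is a 55% reduction, inside the quoted 50-60% range.
reduction = dose_reduction_pct(1.0, 0.45)
```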
Principles of operation and technological characteristics of the new and predicate devices are the same. The new device is virtually identical to Agfa's DX-D 100 (K103597) except that it has a telescopic column and an ergonomic design. It uses the same flat-panel detectors to capture and digitize the image. The differences between the devices do not alter the intended diagnostic effect.
The provided text describes the DR 100s mobile X-ray system and its substantial equivalence to a predicate device, the DX-D 100. However, it does not contain specific acceptance criteria or a detailed clinical study proving the device meets these criteria in the manner requested.
The document states that the DR 100s uses the same image processing algorithms and flat-panel detectors as previously cleared devices, and that "Clinical image validation was conducted during testing in support for the 510(k) clearances for the flat-panel detectors (K161368 and K172784) and MUSICA software (K183275) in a previous submission." It also mentions that "Image quality bench tests were conducted in support of this 510(k) submission in which anthropomorphic adult and pediatric images taken with the DR 100s and the predicate device, DX-D 100 (K103597) were compared to ensure substantial equivalency. The test results indicated the image processing of the DR 100s passed the acceptance criteria."
This means the primary method for demonstrating equivalence and meeting acceptance criteria was through bench testing and referencing prior clearances for components. There's no detailed mention of a specific, standalone clinical study with human patients for the DR 100s itself, nor a multi-reader multi-case (MRMC) study.
Therefore, many of the requested details about a clinical study's methodology (sample size, data provenance, expert numbers, adjudication, MRMC results, ground truth types) cannot be extracted from this document directly for the DR 100s.
Here's an attempt to answer the questions based only on the provided text, acknowledging where information is missing:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly list quantitative acceptance criteria in a table format for image quality or specific diagnostic performance metrics (e.g., sensitivity, specificity). Instead, it states:
- "The test results indicated the image processing of the DR 100s passed the acceptance criteria." (for image quality bench tests)
- "All design input requirements have been tested and passed." (for technical and acceptance testing)
- "The results of these tests fell within the acceptance criteria for the DR 100s X-ray system therefore, the DR 100s supports a General radiographic workflow including adult and pediatric patients." (for usability and functionality)
Without specific numerical criteria, a performance table cannot be constructed. The main "performance" metrics provided are technical specifications of the flat-panel detectors (DQE, MTF, pixel size, etc.) which are compared to predicate devices but don't represent acceptance criteria for a clinical study proving diagnostic performance.
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
- Sample Size: Not specified for the image quality bench tests. The document only mentions "anthropomorphic adult and pediatric images."
- Data Provenance: Not specified (e.g., country of origin, retrospective/prospective). The studies are described as "bench tests" using anthropomorphic phantoms, not real patient data directly for the DR 100s itself. The "clinical image validation" mentioned refers to prior 510(k) clearances for components (detectors and software), not this specific device submission.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Number of Experts: Not specified.
- Qualifications of Experts: It vaguely mentions "internal experts" for usability and functionality evaluations. For the "clinical image validation" of previously cleared components, it references validation conducted with "board certified radiologists" for dose reduction testing (K141602), but this is for a different aspect (dose reduction with MUSICA and CsI detectors) and potentially not the core image quality comparison for equivalence of the total system.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Adjudication Method: Not specified.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
- MRMC Study: No MRMC study is explicitly mentioned for the DR 100s. The document states "No clinical trials were performed in the development of the device. No animal or clinical studies were performed in the development of the new device."
- Effect Size: Not applicable, as no MRMC study was performed.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- The device is a mobile X-ray system and image processor, not an AI algorithm for diagnosis. Its performance is inherent in the image acquisition and processing. The "image processing of the DR 100s passed the acceptance criteria" refers to the system's ability to produce images comparable to the predicate. Therefore, the "standalone" performance is the image quality produced directly by the system.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- For the image quality bench tests, the "ground truth" was likely established through objective phantom measurements and comparison to the predicate device, rather than clinical ground truth (expert consensus, pathology, or outcomes data). The document refers to "anthropomorphic adult and pediatric images" meaning images of phantoms designed to mimic human anatomy.
8. The sample size for the training set
- Training Set for DR 100s: The DR 100s system itself is a hardware device with integrated software for image processing (MUSICA). It's not an AI model that undergoes a "training" phase in the conventional sense (e.g., deep learning). The MUSICA image processing algorithms are stated to be "identical to those previously cleared" in other Agfa devices (K103597, K152639, K141192, K183275). Therefore, any "training" (algorithm development/tuning) would have occurred for these prior versions/devices, and no specific training set for the DR 100s is mentioned.
9. How the ground truth for the training set was established
- Not applicable/Not specified as the DR 100s itself does not undergo a "training" phase like a new AI algorithm. The MUSICA algorithms were previously developed and cleared.