Search Results
Found 5 results
510(k) Data Aggregation
(53 days)
The DR 800 with DSA system is indicated for performing dynamic imaging examinations (fluoroscopy and/or rapid sequence) of the following anatomies/procedures:
- Positioning fluoroscopy procedures
- Gastro-intestinal examinations
- Urogenital tract examinations
- Angiography
- Digital Subtraction Angiography
It is intended to replace fluoroscopic images obtained through image intensifier technology. In addition, the system is intended for projection radiography of all body parts.
In addition, the system provides the Agfa Tomosynthesis option, which is intended to acquire tomographic slices of human anatomy and to be used with Agfa DR X-ray systems. Digital Tomosynthesis is used to synthesize tomographic slices from a single tomographic sweep.
Not intended for cardiovascular and cerebrovascular contrast studies. Not intended for mammography applications.
Agfa's DR 800 with DSA is a fluoroscopic x-ray system that includes digital angiography (product code JAA), intended to capture tomographic, static and dynamic images of the human body. The DR 800 is a floor-mounted radiographic, fluoroscopic and tomographic system that consists of a tube and operator console with a motorized tilting patient table, FLFS overlay and bucky, with an optional wall stand and ceiling suspension. The new device uses Agfa's NX workstation with MUSICA image processing and flat-panel detectors for digital, wide dynamic range and angiographic image capture. It is capable of replacing other direct radiography, tomography, image intensifying tubes and TV cameras, including computed radiography systems with conventional or phosphor film cassettes.
This submission is to add the newest version of the DR 800 with Digital Subtraction Angiography (DSA) to Agfa's radiography portfolio.
Here's an analysis of the acceptance criteria and study information for the Agfa DR 800 with DSA, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The provided document does not explicitly list quantitative acceptance criteria in a table format for performance metrics. Instead, it describes a more qualitative approach, focusing on equivalence to predicate devices and confirmation through expert evaluation.
Acceptance Criteria (Inferred from text) | Reported Device Performance |
---|---|
Bench Testing (General Performance) | "Technical and acceptance testing was completed on the DR 800 with DSA in order to confirm the medical device functions and performs as intended. All deviations or variances are documented in a defect database and addressed in the CRD documentation and verified. All mitigations have been tested and passed. All design input requirements have been tested and passed. All planned verification activities have been successfully completed." |
Functionality and Usability | "Performance functionality and usability evaluations were conducted with qualified experts. The results of these tests fell within the acceptance criteria for the DR 800 with DSA; therefore, the DR 800 supports GenRad, Full Leg/ Full Spine (FLFS), roadmapping and Digital Subtraction Angiography (DSA) workflow." |
Clinical Image Quality (DSA) | "Clinical image validation was conducted using anthropomorphic phantoms and evaluated by qualified experts. The radiographers evaluated the DSA image quality on the DR 800 by comparing overall image quality with the primary predicate A device (K190373). Diagnostic confidence for DSA image quality and roadmapping on the DR 800 was between good and excellent." The document also states, "Clinical image quality evaluation is not essential in establishing substantial equivalence for the DR 800 with DSA. Adequate Bench Testing results should be sufficient to determine device safety and effectiveness." This indicates that while performed, it wasn't a strict acceptance criterion in the same vein as quantitative safety/effectiveness thresholds. |
Software Verification & Validation (Safety/Risk) | "The complete device has been certified and validated. During the final risk analysis meeting, the risk management team concluded that the medical risk is no greater than with conventional x-ray film previously released to the field." "For the NX 23 (NX Orion) software there are a total of 535 risks in the broadly acceptable region and 37 risks in the ALARP region with only four of these risks identified. Zero risks were identified in the Not Acceptable Region. Therefore, the device is assumed to be safe, the benefits of the device are assumed to outweigh the residual risk." |
Electrical Safety and Electromagnetic Compatibility (EMC) Testing | The device is compliant with IEC 60601-1, IEC 60601-1-2, IEC 60601-1-3, and IEC 60601-2-54. The DR 800 is also compliant with FDA Subchapter J mandated performance standards 21 CFR 1020.30 - 1020.32. |
Quality Management, Risk Management, DICOM, Usability Engineering | The company's in-house procedures conform to ISO 13485, ISO 14971, ACR/NEMA PS3.1-3.20 (DICOM), and IEC 62366-1. (This implies compliance with these standards as part of overall acceptance). |
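As background for the DSA and roadmapping workflow evaluated above: digital subtraction angiography forms each displayed frame by subtracting a pre-contrast mask image from the live contrast-filled frames, conventionally after a logarithmic transform so that static anatomy cancels regardless of local attenuation. The sketch below is a minimal illustration of that principle in Python/NumPy; the function, array names and parameters are hypothetical and do not represent Agfa's DSA implementation.

```python
import numpy as np

def dsa_subtract(mask: np.ndarray, contrast_frames: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Illustrative mask-mode DSA: log-subtract a pre-contrast mask from each live frame.

    mask            -- 2D array of detector counts acquired before contrast injection
    contrast_frames -- 3D array (frame, row, col) of detector counts during injection
    Returns per-frame difference images in which mainly the iodinated vessels remain.
    """
    # Work in log space so the subtraction cancels patient anatomy regardless of
    # local attenuation (Beer-Lambert: log I = log I0 - mu * t).
    log_mask = np.log(mask.astype(np.float64) + eps)
    log_live = np.log(contrast_frames.astype(np.float64) + eps)
    return log_mask - log_live  # positive where extra (iodine) attenuation is present


# Toy usage with synthetic data: a flat background plus a "vessel" appearing in the live frames.
rng = np.random.default_rng(0)
mask = 1000.0 + rng.poisson(30, size=(256, 256))
live = np.repeat(mask[None, ...], 5, axis=0).copy()
live[:, 100:110, :] *= 0.7                      # simulated iodine attenuation in a horizontal vessel
dsa = dsa_subtract(mask, live)
print(dsa.shape, dsa[:, 105, :].mean() > dsa[:, 50, :].mean())  # (5, 256, 256) True
```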
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: Not explicitly stated in terms of number of images or cases. The document mentions "anthropomorphic phantoms" for clinical image validation.
- Data Provenance: The study used "anthropomorphic phantoms," which are physical models designed to simulate human anatomy for imaging purposes. This indicates a laboratory/phantom study rather than real patient data. The country of origin for the phantom data is not specified, but the submission is from Agfa N.V. (Belgium). It is a prospective study in the sense that the new device was evaluated with these phantoms.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
- Number of Experts: Not explicitly stated. The document mentions "qualified experts" and "radiographers."
- Qualifications of Experts: Described as "qualified experts" and "radiographers." No specific experience levels (e.g., "10 years of experience") are provided.
4. Adjudication Method for the Test Set
Not specified. The document states "evaluated by qualified experts" and "radiographers evaluated...by comparing overall image quality with the primary predicate A device," implying a comparative evaluation rather than a strict adjudication process for ground truth establishment.
5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study Was Done, If so, What was the effect size of how much human readers improve with AI vs without AI assistance
No, a Multi Reader Multi Case (MRMC) comparative effectiveness study was not conducted. This is not an AI-assisted diagnostic device; it's a conventional X-ray system with digital image processing and DSA capabilities. The study compared the device's image quality to a predicate device, focusing on equivalence, not human reader improvement with AI.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
Yes, in essence, the "Bench Testing" and "Software Verification and Validation Testing" sections describe standalone performance evaluations of the device's functions and image processing algorithms. The "Clinical image validation" with phantoms also focuses on the device's output (image quality) rather than human interaction with the device in a diagnostic workflow where the human acts as the ultimate decision-maker for the study’s performance outcome.
7. The Type of Ground Truth Used
The "ground truth" for the image quality evaluation was based on expert comparison and qualitative assessment of images produced by the device, specifically assessing "diagnostic confidence for DSA image quality and roadmapping" as "between good and excellent" when compared to a predicate device. This is primarily an expert consensus on image quality rather than pathology, clinical outcomes, or a gold standard.
8. The Sample Size for the Training Set
Not applicable. This device is an X-ray imaging system, not a machine learning or AI algorithm that requires a training set of data. The image processing algorithms are described as being "similar to those previously cleared" or "similar to the primary predicate device."
9. How the Ground Truth for the Training Set Was Established
Not applicable, as this device does not utilize a machine learning model that would require a ground truth for a training set.
(25 days)
The DR 100s system is a mobile X-ray imaging system used in hospitals, clinics and medical practices by radiographers and radiologists to make, process and view static X-ray radiographic images of the skeleton (including skull, spinal column and extremities), chest, abdomen and other body parts on adult, pediatric or neonatal patients.
Applications can be performed with the patient in the sitting, standing or lying position.
This device is not intended for mammography applications.
Agfa's DR 100s is a mobile x-ray system, a direct radiography system (product code ILL) intended to capture images of the human body. The device is an integrated mobile digital radiography x-ray system. The complete DR 100s system consists of the mobile x-ray unit with integrated x-ray generator and NX software, and one or more DR detectors. The new device uses Agfa's NX workstation with MUSICA image processing and flat-panel detectors for digital image capture. It is compatible with Agfa's computed radiography systems as well.
This submission is to add another mobile unit to Agfa's direct radiography portfolio.
The optional image processing allows users to conveniently select image processing settings for different patient sizes and examinations. The image processing algorithms in the new device are identical to those previously cleared in the DX-D 100 (K103597) and other devices in Agfa's radiography portfolio today, which includes DR 600 (K152639), DR 400 (K141192) and DR 800 (K183275).
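MUSICA itself is proprietary and its internals are not described in this summary. Purely to illustrate the general class of multiscale contrast-enhancement processing that such software belongs to, here is a minimal, hypothetical sketch (NumPy/SciPy); it is not Agfa's algorithm, and the scales and gains are arbitrary.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_enhance(image: np.ndarray, sigmas=(1.0, 4.0, 16.0), gains=(1.5, 1.3, 1.1)) -> np.ndarray:
    """Generic multiscale contrast enhancement (illustrative only, not MUSICA).

    The image is split into band-pass detail layers plus a low-pass base; each
    detail layer is amplified by its own gain and the image is then reassembled.
    """
    base = image.astype(np.float64)
    details = []
    for sigma in sigmas:
        blurred = gaussian_filter(base, sigma)
        details.append(base - blurred)    # detail (band-pass) layer at this scale
        base = blurred                    # low-pass residual passed to the next level
    out = base
    for detail, gain in zip(details, gains):
        out += gain * detail              # reassemble with per-scale amplification
    return out


# Toy usage on a synthetic gradient containing a faint low-contrast disc.
y, x = np.mgrid[0:256, 0:256]
phantom = x / 4.0 + 20.0 * (((x - 128) ** 2 + (y - 128) ** 2) < 30 ** 2)
enhanced = multiscale_enhance(phantom)
print(phantom.std(), enhanced.std())  # the amplified detail layers raise the overall contrast
```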
Significant dose reduction can be achieved using the DR 100s with Agfa's patented MUSICA image processing and CsI flat-panel detectors. Testing with board-certified radiologists determined that Cesium Bromide (CR) and Cesium Iodide (DR) detectors, when used with MUSICA image processing, can provide a dose reduction of 50-60% for adult patients and up to 60% for pediatric and neonatal patients when compared to traditional Barium Fluorobromide CR systems (K141602).
Principles of operation and technological characteristics of the new and predicate device are the same. The new device is virtually identical to Agfa's DX-D 100 (K103597), with the exception that it has a telescopic column and an ergonomic design. It uses the same flat-panel detectors to capture and digitize the image. Differences between the devices do not alter the intended diagnostic effect.
The provided text describes the DR 100s mobile X-ray system and its substantial equivalence to a predicate device, the DX-D 100. However, it does not contain specific acceptance criteria or a detailed clinical study proving the device meets these criteria in the manner requested.
The document states that the DR 100s uses the same image processing algorithms and flat-panel detectors as previously cleared devices, and that "Clinical image validation was conducted during testing in support for the 510(k) clearances for the flat-panel detectors (K161368 and K172784) and MUSICA software (K183275) in a previous submission." It also mentions that "Image quality bench tests were conducted in support of this 510(k) submission in which anthropomorphic adult and pediatric images taken with the DR 100s and the predicate device, DX-D 100 (K103597) were compared to ensure substantial equivalency. The test results indicated the image processing of the DR 100s passed the acceptance criteria."
This means the primary method for demonstrating equivalence and meeting acceptance criteria was through bench testing and referencing prior clearances for components. There's no detailed mention of a specific, standalone clinical study with human patients for the DR 100s itself, nor a multi-reader multi-case (MRMC) study.
Therefore, many of the requested details about a clinical study's methodology (sample size, data provenance, expert numbers, adjudication, MRMC results, ground truth types) cannot be extracted from this document directly for the DR 100s.
Here's an attempt to answer the questions based only on the provided text, acknowledging where information is missing:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly list quantitative acceptance criteria in a table format for image quality or specific diagnostic performance metrics (e.g., sensitivity, specificity). Instead, it states:
- "The test results indicated the image processing of the DR 100s passed the acceptance criteria." (for image quality bench tests)
- "All design input requirements have been tested and passed." (for technical and acceptance testing)
- "The results of these tests fell within the acceptance criteria for the DR 100s X-ray system therefore, the DR 100s supports a General radiographic workflow including adult and pediatric patients." (for usability and functionality)
Without specific numerical criteria, a performance table cannot be constructed. The main "performance" metrics provided are technical specifications of the flat-panel detectors (DQE, MTF, pixel size, etc.) which are compared to predicate devices but don't represent acceptance criteria for a clinical study proving diagnostic performance.
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
- Sample Size: Not specified for the image quality bench tests. The document only mentions "anthropomorphic adult and pediatric images."
- Data Provenance: Not specified (e.g., country of origin, retrospective/prospective). The studies are described as "bench tests" using anthropomorphic phantoms, not real patient data directly for the DR 100s itself. The "clinical image validation" mentioned refers to prior 510(k) clearances for components (detectors and software), not this specific device submission.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Number of Experts: Not specified.
- Qualifications of Experts: It vaguely mentions "internal experts" for usability and functionality evaluations. For the "clinical image validation" of previously cleared components, it references validation conducted with "board certified radiologists" for dose reduction testing (K141602), but this is for a different aspect (dose reduction with MUSICA and CsI detectors) and potentially not the core image quality comparison for equivalence of the total system.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Adjudication Method: Not specified.
5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
- MRMC Study: No MRMC study is explicitly mentioned for the DR 100s. The document states "No clinical trials were performed in the development of the device. No animal or clinical studies were performed in the development of the new device."
- Effect Size: Not applicable, as no MRMC study was performed.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- The device is a mobile X-ray system and image processor, not an AI algorithm for diagnosis. Its performance is inherent in the image acquisition and processing. The "image processing of the DR 100s passed the acceptance criteria" refers to the system's ability to produce images comparable to the predicate. Therefore, the "standalone" performance is the image quality produced directly by the system.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- For the image quality bench tests, the "ground truth" was likely established through objective phantom measurements and comparison to the predicate device, rather than clinical ground truth (expert consensus, pathology, or outcomes data). The document refers to "anthropomorphic adult and pediatric images" meaning images of phantoms designed to mimic human anatomy.
8. The sample size for the training set
- Training Set for DR 100s: The DR 100s system itself is a hardware device with integrated software for image processing (MUSICA). It's not an AI model that undergoes a "training" phase in the conventional sense (e.g., deep learning). The MUSICA image processing algorithms are stated to be "identical to those previously cleared" in other Agfa devices (K103597, K152639, K141192, K183275). Therefore, any "training" (algorithm development/tuning) would have occurred for these prior versions/devices, and no specific training set for the DR 100s is mentioned.
9. How the ground truth for the training set was established
- Not applicable/Not specified as the DR 100s itself does not undergo a "training" phase like a new AI algorithm. The MUSICA algorithms were previously developed and cleared.
(70 days)
The DR 800 system is indicated for performing dynamic imaging examinations (fluoroscopy and/or rapid sequence) of the following anatomies/procedures:
- Positioning fluoroscopy procedures
- Gastro-intestinal examinations
- Urogenital tract examinations
- Angiography
It is intended to replace fluoroscopic images obtained through image intensifier technology. In addition, the system is intended for projection radiography of all body parts.
In addition, the system provides the Agfa Tomosynthesis option, which is intended to acquire tomographic slices of human anatomy and to be used with Agfa DR X-Ray systems. Tomosynthesis is used to synthesize tomographic slices from a single tomographic sweep.
The DR 800 is not intended for mammography applications.
Agfa's DR 800 with Tomosynthesis is a tomographic and fluoroscopic x-ray system (product codes IZF and JAA) intended to capture tomographic slices of the human body. The DR 800 is a floor-mounted radiographic, fluoroscopic and tomographic system that consists of a tube and operator console with a motorized tilting patient table and bucky, with optional wall stand, FLFS overlay and ceiling suspension. The new device uses Agfa's NX workstation with MUSICA image processing and flat-panel detectors for digital and wide dynamic range capture. It is capable of replacing other direct radiography, tomography, image intensifying tubes and TV cameras, including computed radiography systems with conventional or phosphor film cassettes.
The Agfa DR 800 with Tomosynthesis underwent bench testing and software verification and validation to demonstrate substantial equivalence to its predicate devices, the GE Medical System's Discover XR656 with VolumeRAD (K132261) and Agfa's previous version of the DR 800 with MUSICA Dynamic (K180589). The primary focus of the testing for this submission was on the new Digital TomoSynthesis (DTS) software and its performance in generating tomographic slices.
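As background on the DTS software under test: tomosynthesis reconstructs slices from a set of low-dose projections acquired over a limited sweep angle, and the simplest reconstruction method is shift-and-add, in which each projection is shifted in proportion to its tube angle and the height of the plane of interest before averaging, so that structures in that plane reinforce while structures at other heights blur out. The sketch below illustrates only this generic principle in Python/NumPy; the geometry, names and parameters are hypothetical and do not represent Agfa's MUSICA DTS reconstruction.

```python
import numpy as np

def shift_and_add(projections: np.ndarray, angles_deg: np.ndarray,
                  slice_height_mm: float, pixel_pitch_mm: float = 0.148) -> np.ndarray:
    """Illustrative shift-and-add reconstruction of a single tomosynthesis slice.

    projections     -- array (n_views, rows, cols) acquired during the tomographic sweep
    angles_deg      -- tube angle of each view relative to the central projection
    slice_height_mm -- height of the reconstruction plane above the detector
    """
    n_views = projections.shape[0]
    recon = np.zeros(projections.shape[1:], dtype=np.float64)
    for view, angle in zip(projections, angles_deg):
        # Parallel-beam approximation: a plane at height h appears shifted by h * tan(theta).
        shift_px = int(round(slice_height_mm * np.tan(np.radians(angle)) / pixel_pitch_mm))
        recon += np.roll(view, shift_px, axis=1)   # shift along the sweep direction
    return recon / n_views


# Toy usage: 9 views over +/-20 degrees of a small object lying 40 mm above the detector.
angles = np.linspace(-20.0, 20.0, 9)
obj_height = 40.0
views = np.zeros((9, 64, 256))
for i, a in enumerate(angles):
    col = 128 - int(round(obj_height * np.tan(np.radians(a)) / 0.148))
    views[i, :, col:col + 4] = 1.0                 # object projected at an angle-dependent column
slice_img = shift_and_add(views, angles, slice_height_mm=obj_height)
print(slice_img.max())  # ~1.0: the in-plane object adds up coherently across all views
```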
Here's a breakdown of the acceptance criteria and study details:
1. Table of Acceptance Criteria and the Reported Device Performance
Performance Metric | Acceptance Criteria | Reported Device Performance |
---|---|---|
Technical & Acceptance Testing | All deviations or variances are documented, addressed in CR&T (Corrective and Remedial Actions) documentation, and verified. All mitigations have been tested and passed. All design input requirements have been tested and passed. All planned verification activities have been successfully completed. | Verification and validation testing confirmed the device meets performance, safety, usability, and security requirements. Pediatric indications were also taken into account. Results were verified and validated. Technical and acceptance testing was completed on the DR 800 with Tomosynthesis to confirm the medical device functions and performs as intended. All deviations or variances are documented in a defect database and addressed in the CR&T documentation and verified. All mitigations have been tested and passed. All design input requirements have been tested and passed. All planned verification activities have been successfully completed. |
Usability & Functionality Evaluation | The results of these tests fell within the acceptance criteria for the DR 800 X-ray system. | Usability and functionality evaluations were conducted with qualified independent radiographers and internal experts. The results of these tests fell within the acceptance criteria for the DR 800 X-ray system; therefore, the DR 800 supports a radiographic, fluoroscopic, and tomosynthesis workflow including dynamic and static imaging, continuous and rapid sequence exams, tomographic slices calibration, and positioning. |
Image Quality Validation (Adults) | The reconstruction software of the image processing for Digital TomoSynthesis (DTS) of the DR 800 X-ray system passed the acceptance criteria. DTS images were suitable for diagnosis. | Image Quality Validation testing was conducted using anthropomorphic phantoms and evaluated by qualified independent radiographers and internal experts. The image quality validation included testing a full range of applications for the DR 800 X-ray system with Tomosynthesis compared to reference images from the primary predicate GE Discovery XR656 with VolumeRAD (K132261) using anonymized adult phantoms. The test results indicated that the reconstruction software of the image processing for Digital TomoSynthesis (DTS) of the DR 800 X-ray system passed the acceptance criteria and that the DR 800 with Tomosynthesis is capable of making DTS studies for adult patients. The test results showed MUSICA Digital TomoSynthesis (DTS) images were suitable for diagnosis for adult patients. |
Image Quality Validation (Pediatric) | The reconstruction software of the image processing for Digital TomoSynthesis (DTS) of the DR 800 X-ray system passed the acceptance criteria. Both 5x the dose and 10x the dose images were clinically sufficient and within the intended use, and DTS images were suitable for diagnosis for pediatric patients. | Image Quality Validation testing was conducted using anthropomorphic phantoms and evaluated by qualified independent radiographers and internal experts. The image quality validation included testing using anonymized pediatric phantoms. The pediatric phantom image quality validation testing analyzed five tomographic slices at 5x the dose and five tomographic slices at 10x the dose. Both the 5x the dose and 10x the dose images are clinically sufficient and within the intended use. The test results indicated that the reconstruction software of the image processing for Digital TomoSynthesis (DTS) of the DR 800 X-ray system passed the acceptance criteria and that the DR 800 with Tomosynthesis is capable of making DTS studies for pediatric patients. The test results showed MUSICA Digital TomoSynthesis (DTS) images were suitable for diagnosis for pediatric patients. |
Software Risk Assessment | No risks identified in the Not Acceptable Region. The device is assumed to be safe, and the benefits of the device outweigh the residual risk. | During the final risk analysis meeting, the risk management team concluded that the medical risk is no greater than with conventional x-ray film previously released to the field. For the NX4.x.21 (NX Mentor) there are a total of 322 risks in the broadly acceptable region and 27 risks in the ALARP (As Low As Reasonably Practicable) region with only eight of these risks identified. Zero risks were identified in the Not Acceptable Region. |
Electrical Safety & EMC Testing | Compliance with various IEC 60601 standards and FDA Subchapter J. | The DR 800 with Tomosynthesis is compliant to the FDA Subchapter J mandated performance standard 21 CFR 1020.30 - 1020.32. Compliance with IEC 60601-1, IEC 60601-1-2, IEC 60601-1-3, and IEC 60601-2-54 was confirmed. |
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: Not explicitly stated as a numerical count of cases/images. The testing involved "anthropomorphic phantoms" for image quality evaluation, including both adult and pediatric phantoms. The pediatric phantom testing analyzed "five tomographic slices at 5x the dose and five tomographic slices at 10x the dose."
- Data Provenance: The data provenance is from bench testing using anonymized anthropomorphic phantoms. This indicates that the data is prospective in the sense that the phantoms were specifically used for this testing, but it is not from human patients. The country of origin of the data is not specified, but the manufacturer is Agfa N.V. (Belgium).
3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts
- Number of Experts: "Qualified independent radiographers and internal experts" were used for usability, functionality, and image quality evaluations. The exact number of each group is not specified.
- Qualifications of Experts: They are described as "qualified independent radiographers and internal experts" and "qualified radiologists" (in the "Descriptive characteristics and performance data including image quality evaluations by qualified radiologists are adequate to ensure equivalence" section). Specific details like years of experience or subspecialty are not provided.
4. Adjudication Method for the Test Set
- The document implies that the "qualified independent radiographers and internal experts" evaluated the images and that the results "fell within the acceptance criteria" or "passed the acceptance criteria," suggesting a consensus or individual assessment against predefined criteria. However, a specific adjudication method (e.g., 2+1, 3+1) is not explicitly stated.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, If So, What Was the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance
- No MRMC comparative effectiveness study was done. The study design described is a bench test comparison of the device against a predicate device's reference images using phantoms, with evaluation by human experts, rather than an assessment of human reader performance with or without AI assistance. The device itself is an imaging system, not an AI-powered diagnostic tool for interpretation assistance in the sense of comparing human performance.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
- Yes, a standalone evaluation of the algorithm's output was done, as part of the image quality validation. The "reconstruction software of the image processing for Digital TomoSynthesis (DTS)" was evaluated to ensure the generated tomographic images were suitable for diagnosis. This is an assessment of the algorithm's output (the tomographic slices) without direct human intervention in the image generation process, beyond setting up the acquisition parameters.
7. The Type of Ground Truth Used
- The ground truth for the image quality evaluation was based on comparison to reference images from the primary predicate device (GE Discovery XR656 with VolumeRAD - K132261) using anthropomorphic phantoms, and expert assessment by "qualified independent radiographers and internal experts" confirming the images were "suitable for diagnosis" and "clinically sufficient." It is not pathology, or outcomes data.
8. The Sample Size for the Training Set
- The document does not explicitly state the sample size for the training set for the MUSICA DTS software. It mentions that "The image processing algorithms in the new device are similar to those previously cleared in the DR 800 with MUSICA Dynamic (K180589) and other devices in Agfa's radiography portfolio today... The addition of the tomographic image processing is similar to the predicate device (K132261)." This suggests leveraging existing, previously trained algorithms or development methodologies, rather than describing a specific new training dataset for this submission.
9. How the Ground Truth for the Training Set Was Established
- This information is not provided in the document. As noted above, the submission emphasizes similarity to existing, cleared technologies, rather than detailing the unique training of a novel algorithm from scratch.
(30 days)
The DR 800 system is indicated for performing dynamic imaging examinations (fluoroscopy and/or rapid sequence) of the following anatomies/procedures:
- Positioning fluoroscopy procedures
- Gastro-intestinal examinations
- Urogenital tract examinations
- Angiography
It is intended to replace fluoroscopic images obtained through image intensifier technology. In addition, the system is intended for projection radiography of all body parts.
The DR 800 is not intended for mammography applications.
Agfa HealthCare's DR 800 is an image-intensified fluoroscopic x-ray system (product code JAA) intended to capture images of the human body. The DR 800 is a floor-mounted R/F system that consists of a tube and operator console with a motorized tilting patient table and bucky, with optional wall stand, FLFS overlay and ceiling suspension. The new device uses Agfa's NX workstation with MUSICA Dynamic™ image processing and flat-panel detectors for digital and wide dynamic range image capture. It is capable of replacing other direct radiography, image intensifying tubes and TV cameras, including computed radiography systems with conventional or phosphor film cassettes.
Here's an analysis of the provided text to extract information about acceptance criteria and the supporting study, structured as requested:
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria Category | Specific Acceptance Criteria | Reported Device Performance |
---|---|---|
Usability and Functionality | Support a radiographic and fluoroscopic workflow, including dynamic and static imaging, continuous and rapid sequence exams, calibration, and positioning. | "The results of these tests fell within the acceptance criteria for the DR 800 R/F X-ray system and some improvements will be implemented based on these results; the DR 800 supports a radiographic and fluoroscopic workflow including dynamic and static imaging, continuous and rapid sequence exams, calibration, and positioning." |
Full Leg Full Spine (FLFS) | Mount stitch grid, imaging ranges of a certain tolerance, and transversal collimation, and medical ruler exposure. The FLFS software for NX Luna compared to current FLFS software on the market should be "equal to or better". FLFS landscape functional design meets user needs. | "The FLFS clinical validation for the mount stitch grid, imaging ranges of a certain tolerance and transversal collimation, and medical ruler exposure fulfilled the acceptance criteria and passed the assessment with minor fails that will be solved." "The results of the FLFS comparison test for NX Luna concluded that the FLFS software is equal to or better than the current FLFS software currently on the market. The results of the FLFS landscape validation for the NX Luna concluded that the FLFS landscape functional design meets to user needs." |
Dose Control | None of the detector doses would measure higher than the DIN-norm or exceed the dose limit curve for adult and pediatric phantoms with pulsed and continuous fluoroscopy exams. | "The results fulfilled the acceptance criteria that none of the detector doses would measure higher than the DIN-norm or exceed the dose limit curve." |
Image Quality (Dynamic) | Pulsed and continuous fluoroscopy imaging with MUSICA Dynamic should be between "good and excellent" and pass acceptance criteria. | "The test results indicated that the pulsed and continuous fluoroscopy imaging of the DR 800 R/F X-ray system with MUSICA Dynamic was between good and excellent and passed the acceptance criteria." |
Image Quality (Static) | MUSICA3 Abdomen+ images should be suitable for diagnosis with overall higher image quality. Static images made with the R/F flat-panel detector (FL4343) should demonstrate clinical acceptability. | "The test results showed MUSICA3 Abdomen+ images were suitable for diagnosis with an overall higher image quality. The test results proved clinical acceptability for static images made with the R/F flat-panel detector (FL4343)." |
Software Risk (NX4.0) | No risks identified in the "Not Acceptable Region"; medical risk no greater than conventional x-ray film. | "For the NX4.0 (NX Luna) there are a total of 274 risks in the broadly acceptable region and 26 risks in the ALARP region with only three of these risks identified. Zero risks were identified in the Not Acceptable Region. Therefore, the device is assumed to be safe, the benefits of the device are assumed to outweigh the residual risk. The medical risk is no greater than with conventional x-ray film previously released to the field." (Note: This is a statement of compliance with risk assessment findings rather than a performance metric.) |
Electrical Safety & EMC | Compliance with IEC 60601-1, IEC 60601-1-2, IEC 60601-1-3, IEC 60601-2-54, and FDA Subchapter J (21 CFR 1020.30 – 1020.32). | "The DR 800 with MUSICA Dynamic is compliant to the FDA Subchapter J mandated performance standard 21 CFR 1020.30 – 1020.32." (Implied compliance with IEC standards through testing.) |
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: The document does not specify exact numerical sample sizes for the test sets used in the various evaluations (usability, FLFS, dose control, image quality). It mentions the use of "anthropomorphic phantoms" for image quality testing.
- Data Provenance: The data is described as "Laboratory data" and "image quality evaluations conducted with independent radiologists."
- Country of Origin: Not explicitly stated, but the company is Agfa HealthCare N.V. (Belgium), and the submission is to the FDA (USA), implying development might be international but for the US market. The use of "DIN-norm" suggests European origin or influence for some standards.
- Retrospective or Prospective: Not explicitly stated. The nature of the "bench testing" and "clinical image quality evaluations" using phantoms suggests a controlled, prospective testing environment rather than retrospective analysis of patient data. The statement "No clinical trials were performed in the development of the device. No animal or clinical studies were performed in the development of the new device. No patient treatment was provided or withheld." confirms a prospective, non-clinical study design.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Number of Experts:
- Usability and Functionality: "qualified independent radiographers and internal experts." (Number not specified)
- FLFS Clinical Validation: "qualified internal radiographer." (One identified)
- FLFS Comparison Test: "several qualified internal experts." (Number not specified, but more than one)
- Dose Control Validation: "qualified internal expert." (One identified)
- Image Quality Validation: "qualified independent radiographers and internal experts." (Number not specified)
- Qualifications of Experts: The experts are consistently referred to as "qualified independent radiographers" or "qualified internal experts" (or radiographer). Specific years of experience are not provided.
4. Adjudication Method for the Test Set
The document does not describe a formal adjudication method (e.g., 2+1, 3+1) for resolving discrepancies among expert opinions. Evaluations often involved "qualified independent radiographers and internal experts," implying consensus or individual assessment, but a specific arbitration process is not detailed.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- MRMC Study: No, a multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly stated as being performed to compare human readers with and without AI assistance.
- The study focused on showing equivalence or improvement of the new device (DR 800 with MUSICA Dynamic) compared to reference images and predicate devices/software, and its ability to meet acceptance criteria for performance, not on AI-assisted human reading performance. The "MUSICA Dynamic" is described as software for image processing, not necessarily an AI for diagnostic assistance to human readers.
6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study
Yes, the studies described are primarily standalone evaluations of the device's technical performance and image quality. These evaluations assessed the output of the DR 800 system with MUSICA Dynamic processing (which is an algorithm) directly, using phantoms and comparing its output to reference images and predicate devices. There is no indication of a human-in-the-loop performance study where the algorithm's output is then interpreted by a human and compared to human interpretation without the algorithm.
7. Type of Ground Truth Used
The ground truth used appears to be:
- Expert Consensus/Opinion: For image quality, usability, and FLFS validation, the assessments by "qualified independent radiographers and internal experts" served as the ground truth for determining acceptability.
- Technical Standards/Benchmarks: For dose control, "DIN-norm" and "dose limit curve" served as the objective ground truth.
- Reference Images/Predicate Device Performance: For image quality validation, comparisons were made to "reference images using anonymized phantoms" and against predicate device performance, implying these served as a comparative ground truth.
- Pre-defined Requirements: For usability and functionality, the 'acceptance criteria' themselves served as the ground truth of what the device needed to achieve.
There is no mention of pathology or long-term outcomes data as ground truth.
8. Sample Size for the Training Set
- The document does not provide information on the sample size for a training set. This is likely because the device is an X-ray imaging system with image processing software (MUSICA Dynamic), and the focus of the 510(k) submission is on demonstrating substantial equivalence and validation of its performance, not on a machine learning model that would require a distinct training set. The "MUSICA Dynamic" algorithms are described as "similar to those previously cleared" or "identical to the predicate device" in terms of dynamic image processing.
9. How the Ground Truth for the Training Set Was Established
- As no training set is explicitly mentioned or detailed for a machine learning model, there is no information on how its ground truth would have been established. The image processing algorithms are likely based on established signal processing techniques, rather than learned directly from a labeled dataset in the way a modern AI model might be.
(128 days)
Agfa's DX-D Imaging Package is indicated for use in general projection radiographic applications to capture for display diagnostic quality radiographic images of human anatomy. The DX-D Imaging Package may be used wherever conventional screen-film systems may be used.
Agfa's DX-D Imaging Package is not indicated for use in mammography.
Agfa's DX-D Imaging Package is a solid state flat panel x-ray system, a direct radiography (DR) system (product code MQB) intended to capture images of the human body. It is a combination of Agfa's NX workstation and one or more flat-panel detectors.
This submission is to add the DR10s and DR14s Flat Panel Detectors to Agfa's DX-D Imaging Package portfolio. The DX-D Imaging Package with the DR 10s and DR 14s wireless panels will be labeled as the Pixium 2430EZ and Pixium 3543EZ. DR 10s and DR 14s are commercial trade names used by Agfa HealthCare for marketing purposes only.
Principles of operation and technological characteristics of the new and predicate device are the same. There are no changes to the intended use/indications for use. The new device is physically and electronically identical to the predicate, K142184. It uses the same workstation and similar scintillator/photodetector flat-panel detectors to capture and digitize the images as the predicate, K142184.
This document describes the 510(k) summary for Agfa's DX-D Imaging Package, focusing on the newly added DR10s and DR14s Flat Panel Detectors. The submission aims to demonstrate substantial equivalence to a predicate device (K142184).
Here's an analysis of the acceptance criteria and the study that proves the device meets them, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly present a dedicated "acceptance criteria" table with specific quantitative thresholds. Instead, the acceptance criteria are implied to be equivalence to the predicate device (K142184) and performance falling within expected parameters for radiographic systems. The reported device performance is presented through comparison with other Agfa flat-panel detectors on the market, including the predicate.
Below is a table summarizing the performance characteristics of the new detectors (DR 10s, DR 14s) and the predicate (represented by DX-D 10, DX-D 20, DX-D 40 from the comparison table, as the predicate K142184's individual detector specs aren't explicitly broken out separately):
Characteristic | DX-D 10 Flat-Panel Detector (Predicate Example) | DR 10s Wireless Detector (New Device) | DR 14s Wireless Detector (New Device) | Acceptance Criteria (Implied) | Reported Device Performance (Summary) |
---|---|---|---|---|---|
Scintillator | CsI, GOS | CsI | CsI, GOS | Equivalent to predicate (CsI, GOS) | DR 10s uses CsI, DR 14s uses CsI, GOS. Deemed equivalent. |
Cassette size | 35x43cm/14x17in | 24x30cm | 35x43cm/14x17in | Appropriate for general radiography. | Different sizes, but appropriate for general radiography. |
Pixel Size | 139 µm | 148 µm | 148 µm | Comparable to predicate (139-140 µm). | Slightly larger pixel size but deemed equivalent. |
A/D Conversion | 14 bits | 16 bits | 16 bits | Comparable to predicate (14 bits). | Higher (16 bits) – considered an improvement. |
Interface | Ethernet | AED & Synchronized | AED & Synchronized | Reliable interface. | AED & Synchronized. |
Communication | Tethered | Wireless | Wireless | Reliable communication. | Wireless (new feature). |
Power | I/O Interface Box: 100-240 VAC, 47-63 Hz | Battery: replaceable & rechargeable | Battery: replaceable & rechargeable | Reliable power. | Battery-powered for wireless operation. |
Weight | 3.9 kg (8.6 lbs) | 1.6 kg (3.53 lbs) | 2.8 kg (6.17 lbs) | Ergonomically acceptable. | Lighter than predicate examples (due to wireless nature). |
DQE @ 1lp/mm | 0.530/0.608 | 0.523 | 0.521/0.292 | Equivalent to predicate. | Comparable values, "equivalent to other flat-panel detectors." |
DQE @ 2lp/mm | 0.219/0.298 | 0.476 | 0.449/0.189 | Equivalent to predicate. | Comparable values, "equivalent to other flat-panel detectors." |
DQE @ 3lp/mm | 0.092/0.147 | 0.295 | 0.296/0.071 | Equivalent to predicate. | Comparable values, "equivalent to other flat-panel detectors." |
MTF @ 1lp/mm | 0.205/0.456 | 0.637 | 0.638/0.526 | Equivalent to predicate. | Comparable values, "equivalent to other flat-panel detectors." |
MTF @ 2lp/mm | 0.106/0.304 | 0.360 | 0.363/0.208 | Equivalent to predicate. | Comparable values, "equivalent to other flat-panel detectors." |
MTF @ 3lp/mm | 0.092/0.147 | 0.199 | 0.198/0.081 | Equivalent to predicate. | Comparable values, "equivalent to other flat-panel detectors." |
Image Acquisition/hr. | 150 | 240 | 240 | At least equivalent to predicate (150). | Higher (240) – considered an improvement. |
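For context on the DQE and MTF rows above: both are standard detector metrics (typically measured with IEC 62220-1-style protocols), and the 1-3 lp/mm frequencies quoted sit below the Nyquist limit set by the pixel pitch. A common form of the definitions, given here as general background rather than anything stated in the submission, is:

```latex
% MTF(f): modulation transfer function; NNPS(f): noise power spectrum normalised by the
% squared large-area signal; \bar{q}: incident photon fluence (photons per mm^2).
\[
  \mathrm{DQE}(f) \;=\; \frac{\mathrm{SNR}^2_{\text{out}}(f)}{\mathrm{SNR}^2_{\text{in}}(f)}
  \;=\; \frac{\mathrm{MTF}^2(f)}{\bar{q}\cdot \mathrm{NNPS}(f)},
  \qquad
  f_{\text{Nyquist}} \;=\; \frac{1}{2\Delta}
  \;\approx\; \frac{1}{2 \times 0.148\,\mathrm{mm}} \;\approx\; 3.4\ \mathrm{lp/mm}
  \quad (\Delta = 148\ \mu\mathrm{m}\ \text{pixel pitch}).
\]
```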
The overall acceptance criteria for the study is "Substantial Equivalence" to the predicate device (K142184), demonstrated through:
- Identical Indications for Use.
- Same principles of operation and technological characteristics (despite some hardware differences).
- Performance data (laboratory and clinical evaluations) ensuring equivalence.
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size:
- For laboratory image quality (DQE, MTF) comparisons and grid evaluation: The document does not specify a numerical sample size in terms of images or measurements. It states "equivalent test protocols as used for the cleared detectors" were used and the results "confirmed that the DX-D Imaging Package with DR 10sC, DR14sC, and DR14sG flat-panel detectors was equivalent to other flat-panel detectors Agfa currently markets including the predicate (K142184)."
- For usability and functionality evaluations: Not specified.
- For Image Quality Validation testing (using anthropomorphic phantoms): Not specified.
- For in-hospital image quality comparisons ("clinical evaluations"): "anonymized" patient images were utilized, but the number of images or cases is not specified.
- Data Provenance: The data appears to be retrospective (for human image data, implied from "anonymized to remove all identifying patient information" and "No animal or clinical studies were performed in the development of the new device. No patient treatment was provided or withheld.") and laboratory-generated (for DQE, MTF, grid, usability, and phantom studies). The country of origin is not explicitly stated.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
- For laboratory image quality, grid evaluation, usability/functionality:
- "qualified individuals employed by the sponsor" conducted these evaluations.
- "qualified independent radiographers" conducted usability and functionality evaluations.
- "a qualified internal radiographer" conducted the grid evaluation.
- "qualified independent radiographers" evaluated anthropomorphic phantoms.
- For in-hospital image quality comparisons:
- "qualified independent radiologists" conducted these comparisons.
- Qualifications: "Qualified independent radiographers" and "qualified independent radiologists" are mentioned. Specific experience levels (e.g., "10 years of experience") are not provided. The term "qualified" implies they possess the necessary expertise for the task.
4. Adjudication Method for the Test Set
The document does not explicitly state an adjudication method (e.g., 2+1, 3+1). The "clinical evaluations" and "in-hospital image quality comparisons" mention "qualified independent radiologists" in plural, suggesting a consensus or comparison approach among them, but details are not provided.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- No MRMC comparative effectiveness study was done in the sense of measuring human reader improvement with vs. without AI assistance.
- The studies conducted were focused on demonstrating that the new device's image quality and performance were equivalent to the predicate device and other established systems, meaning human readers would perform similarly with the new device as with the predicate.
6. Standalone (Algorithm Only) Performance Study
- No standalone algorithm performance (AI-only) study was done for diagnostic interpretation. The device is an imaging package (hardware detectors and workstation) for capturing and displaying images, not an AI diagnostic algorithm.
7. Type of Ground Truth Used
- For laboratory image quality (DQE, MTF, grid): The "ground truth" is based on physical measurements and standardized test protocols.
- For usability and functionality: The "ground truth" is based on expert assessment by radiographers against pre-defined workflow and compatibility requirements.
- For image quality validation (phantoms): The "ground truth" is based on expert assessment by radiographers of the generated images, likely comparing features to expected phantom characteristics and established image quality standards.
- For in-hospital image quality comparisons: The "ground truth" is implicitly based on radiological expert consensus (potentially with existing patient reports as a reference, though this is not specified), primarily for qualitative comparison against images produced by predicate devices.
8. Sample Size for the Training Set
The document does not mention a training set, as this device (an X-ray imaging package) is not an AI diagnostic device that requires a training set in the typical machine learning sense. The "software validation testing" refers to verification and validation of the software components against predefined requirements, not training a machine learning model.
9. How the Ground Truth for the Training Set Was Established
Not applicable, as no training set for an AI algorithm is mentioned or implied.