510(k) Data Aggregation (265 days)

2430MCA with Xmaru W
The 2430MCA Digital Flat Panel X-Ray Detector is indicated for use as a digital imaging solution for a mammographic system. It is intended to replace film- or screen-based mammographic systems in screening mammography. Xmaru W is an integrated software solution indicated for use with the 2430MCA detector.
The 2430MCA is a digital mammography X-ray detector based on flat-panel technology. The image detector and processing unit consists of a CsI scintillator coupled to a CMOS sensor. The device must be integrated with a mammographic imaging system, where it captures and digitizes X-ray images for mammographic screening. The RAW files can be further processed into DICOM-compatible image files by the separate console software, Xmaru W, for mammographic screening. The 2430MCA detector is connected by wire to a viewing station via an Ethernet connection.
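The submission does not describe what processing Xmaru W applies to the RAW frames. A common first step for flat-panel detector output is offset/gain (flat-field) correction; the sketch below is purely illustrative (the function name, normalization choice, and dead-pixel handling are our assumptions, not taken from the submission):

```python
def flat_field_correct(raw, dark, flat):
    """Hypothetical offset/gain correction for one detector frame.

    raw, dark, flat are equal-length lists of pixel values:
    raw  = the exposed frame, dark = an unexposed (offset) frame,
    flat = a uniformly exposed (gain) frame.
    """
    # Per-pixel gain = offset-corrected flat-field response
    gain = [f - d for f, d in zip(flat, dark)]
    mean_gain = sum(gain) / len(gain)
    corrected = []
    for r, d, g in zip(raw, dark, gain):
        if g > 0:
            # Normalize so a uniform exposure maps to a uniform output
            corrected.append((r - d) * mean_gain / g)
        else:
            # Dead/zero-gain pixel: pass the offset-corrected value through
            corrected.append(r - d)
    return corrected
```

With equal per-pixel gains, the correction reduces to simple offset subtraction, which makes the behavior easy to sanity-check.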
Here's a breakdown of the acceptance criteria and study information based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance:
The document doesn't explicitly state quantitative acceptance criteria in a pass/fail format for clinical performance. Instead, it relies on a qualitative assessment: "provide images of equivalent or superior diagnostic capability to the predicate device."
| Criterion (Qualitative) | Reported Device Performance (2430MCA with Xmaru W) |
| --- | --- |
| Diagnostic Capability | Equivalent or superior to the predicate device (RSM 1824C with Rconsole1) |
| MTF Performance | Superior to the predicate device |
| DQE Performance | Superior to the predicate device |
| Overall Clinical Image Quality | Acceptable for screening mammography |
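MTF and DQE are standard detector-physics metrics: MTF measures spatial-frequency response, and DQE relates it to noise (per IEC 62220-1, DQE(f) is proportional to MTF(f)² divided by the normalized noise power spectrum and the incident photon fluence). The submission does not include measurement code; as an illustration only, MTF can be estimated as the normalized magnitude of the Fourier transform of the line spread function (LSF). A minimal stdlib sketch (the function name and the discrete DFT formulation are ours, not from the submission):

```python
import cmath

def mtf_from_lsf(lsf):
    """Estimate MTF as the DFT magnitude of the line spread function,
    normalized to 1.0 at zero frequency."""
    n = len(lsf)
    spectrum = []
    for k in range(n // 2 + 1):  # non-negative frequencies only
        s = sum(x * cmath.exp(-2j * cmath.pi * k * m / n)
                for m, x in enumerate(lsf))
        spectrum.append(abs(s))
    dc = spectrum[0]  # zero-frequency (DC) component
    return [v / dc for v in spectrum]
```

A perfectly sharp (delta-function) LSF yields MTF = 1.0 at all frequencies, while a broad LSF rolls off at high frequencies; this is the sense in which the subject device's MTF is reported as "superior."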
2. Sample Size Used for the Test Set and Data Provenance:
The text states: "...clinical images obtained from the subject device and the predicate device are reviewed by three MQSA qualified US radiologists..." and "...taking sample radiographs of similar age groups and anatomical structures..."
- Sample Size: Not explicitly stated as a number. The term "sample radiographs" is used, implying a subset of images rather than a comprehensive, statistically powered study.
- Data Provenance: Clinical images were obtained from the subject and predicate devices. No specific country of origin is mentioned beyond the radiologists being "US radiologists." The study appears to be retrospective in the sense that existing images from both devices were reviewed, rather than a prospective study designed specifically for this comparison.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications:
- Number of Experts: Three (3)
- Qualifications of Experts: MQSA qualified US radiologists. (MQSA stands for Mammography Quality Standards Act, indicating they are qualified to interpret mammograms in the US).
4. Adjudication Method for the Test Set:
- Adjudication Method: "concurrent review... by three MQSA qualified US radiologists to render an expert opinion." This implies a consensus or majority opinion approach rather than a specific 2+1 or 3+1 rule. The outcome was a collective opinion that the images were of "acceptable overall clinical image quality" and "equivalent or superior diagnostic capability."
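The submission does not spell out a tie-breaking rule. If the three readers' per-image ratings were tallied by simple majority, the logic might look like this hypothetical sketch (the function and rating labels are illustrative, not from the document):

```python
from collections import Counter

def majority_opinion(ratings):
    """Return the rating shared by at least 2 of 3 readers, or None if all differ."""
    value, count = Counter(ratings).most_common(1)[0]
    return value if count >= 2 else None
```

Under a consensus process like the one described, a `None` result would typically trigger discussion rather than a formal 2+1 or 3+1 adjudication.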
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, what was the effect size of how much human readers improve with AI vs. without AI assistance?
- MRMC Study: No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not conducted. This study focused on comparing the image quality of the proposed device against a predicate device, as interpreted by human readers, not on how an AI system improves human reader performance. There is no mention of AI assistance in the context of human reader improvement.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:
- Standalone Performance: Not applicable. The device is a digital X-ray detector and integrated software (Xmaru W) for processing, viewing, searching, storing, annotating, measuring, and stitching images. It is not an AI algorithm performing diagnosis independently. The comparison here is between the image output of two hardware/software systems, assessed by human experts.
7. The type of ground truth used:
- Ground Truth Type: Expert consensus. The "ground truth" for the comparison was the "expert opinion" of three MQSA qualified US radiologists regarding the "diagnostic capability" and "overall clinical image quality" of the images produced by the subject and predicate devices. There is no mention of pathology or outcomes data being used as ground truth for this comparison.
8. The sample size for the training set:
- Training Set Sample Size: Not applicable. The document describes a comparison study of a new medical device (digital X-ray detector with software) against a predicate device. It does not describe the development or training of an AI algorithm, so there is no "training set" in this context.
9. How the ground truth for the training set was established:
- Training Set Ground Truth Establishment: Not applicable, as there is no training set for an AI algorithm described in the document.