510(k) Data Aggregation (229 days)
EFAI Intelligent Cardiothoracic Ratio Assessment System (iCTR) is software for use by hospitals and clinics to automatically assess the cardiothoracic ratio (CTR) of a chest X-ray image. The iCTR is designed to measure the maximal transverse diameter of the heart and the maximal inner transverse diameter of the thoracic cavity, and to calculate the CTR of a chest X-ray image in the posterior-anterior (PA) view using an artificial intelligence algorithm.
The software is intended for physicians and other licensed practitioners in healthcare institutions such as clinics, hospitals, healthcare facilities, residential care facilities, and long-term care services. The system is suitable for adults 20 to 80 years of age.
Its results are not intended to be used on a stand-alone basis for clinical-decision making or otherwise preclude clinical assessment of cardiothoracic ratio (CTR) cases.
The iCTR is a non-invasive software medical device designed to be installed on a computer that meets specific system requirements. It is a radiological computer-assisted software system that automatically analyzes DICOM chest X-ray images in the PA view and outputs the CTR using an artificial intelligence algorithm. The structured report includes a preview of the compressed chest X-ray image with the automatically derived CTR result and annotation lines indicating the maximal transverse diameter of the heart and the maximal inner transverse diameter of the thoracic cavity, as well as a trajectory of CTR records. The CTR trajectory does not implement a predictive or prognostic feature.
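The CTR itself is a simple ratio of the two measured diameters. A minimal sketch of the calculation (the function name and the conventional 0.5 cardiomegaly cutoff mentioned in the comment are illustrative, not taken from the submission):

```python
def cardiothoracic_ratio(heart_diameter_mm: float, thoracic_diameter_mm: float) -> float:
    """Compute the cardiothoracic ratio (CTR).

    CTR = maximal transverse diameter of the heart
          / maximal inner transverse diameter of the thoracic cavity.
    On a PA film, a CTR above roughly 0.5 is conventionally read as
    suggestive of cardiomegaly (a general radiology convention, not a
    claim of this device).
    """
    if thoracic_diameter_mm <= 0:
        raise ValueError("thoracic diameter must be positive")
    return heart_diameter_mm / thoracic_diameter_mm


print(cardiothoracic_ratio(140.0, 280.0))  # 0.5
```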
Here's an analysis of the acceptance criteria and study proving the device meets them, based on the provided text:
Device Name: EFAI Intelligent Cardiothoracic Ratio (iCTR) Assessment System
1. Table of Acceptance Criteria and Reported Device Performance
Note: The document does not explicitly present a formal acceptance-criteria table with thresholds. Instead, it describes performance metrics that were achieved and deemed "met" or "greater than" a certain level. For clarity, the acceptance criteria below are inferred from these reported performance numbers.
| Feature / Metric | Acceptance Criteria (Inferred) | Reported Device Performance |
|---|---|---|
| Accuracy - Identification of Imaging Mode | > 95% | 0.99 (99%) |
| Accuracy - Identification of View | > 95% | 0.99 (99%) |
| Quality Control Model: Filter non-CXR images | Not explicitly stated (deemed sufficient by FDA) | Sensitivity: 0.99, Accuracy: 0.99 |
| Quality Control Model: Filter non-PA view CXR | Not explicitly stated (deemed sufficient by FDA) | Sensitivity: 0.99, Accuracy: 0.97 |
| Boundary Clarity Threshold Message | Present message for unclear images whose threshold < 0.5 | Presents message for images with unclear boundary whose threshold < 0.5 |
| Annotation Model: Max Transverse Diameter of Heart (RMSE) | Lower RMSE deemed better (e.g., < 10mm) | 8.81mm |
| Annotation Model: Max Inner Diameter of Thoracic Cavity (RMSE) | Lower RMSE deemed better (e.g., < 15mm) | 14.3mm |
| CTR Accuracy (System vs. Physician-derived) | Defined as 0.95 or higher | 0.95 |
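The annotation-model rows report root-mean-square error (RMSE) between the model-derived and reader-derived diameter measurements. As a reminder of what those millimetre figures mean, a minimal sketch of the metric (the function name and sample values are illustrative):

```python
import math

def rmse(predicted_mm: list[float], reference_mm: list[float]) -> float:
    """Root-mean-square error between model-derived and reference
    diameter measurements, in millimetres."""
    if len(predicted_mm) != len(reference_mm) or not predicted_mm:
        raise ValueError("inputs must be non-empty and the same length")
    squared_errors = [(p - r) ** 2 for p, r in zip(predicted_mm, reference_mm)]
    return math.sqrt(sum(squared_errors) / len(squared_errors))


# Two hypothetical heart-diameter measurements, model vs. reader:
print(rmse([130.0, 145.0], [128.0, 150.0]))
```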
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: At least 840 eligible PA CXR images in total.
- Data Provenance: The images were drawn from three participating sites, with 280 PA CXR images from each site. The document does not specify the country of origin, but given that the submitter is based in Taiwan, it is plausible the data originated from Taiwan or other undisclosed locations. The study refers to "patients," implying real-world data, but does not explicitly state whether the collection was retrospective or prospective. 510(k) submissions typically rely on retrospective data for this type of clinical performance study.
3. Number of Experts Used to Establish Ground Truth and Qualifications
- Number of Experts: The ground truth for the test set was established by a "panel of expert readers." The exact number of experts is not specified.
- Qualifications of Experts: The document explicitly states "expert readers" but does not provide their specific qualifications (e.g., board certification, years of experience, specialty).
4. Adjudication Method for the Test Set
- The document states that results of the EFAI iCTR were "compared to evaluation by a panel of expert readers." It does not specify the adjudication method used by the expert panel to establish the ground truth (e.g., 2+1, 3+1 consensus, individual reads averaged/aggregated, etc.).
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- No. An MRMC comparative effectiveness study was not performed to evaluate how much human readers improve with AI assistance versus without it. The study described focuses on the standalone performance of the AI model and its agreement with physician-derived CTR measurements. The Indications for Use explicitly state: "Its results are not intended to be used on a stand-alone basis for clinical-decision making or otherwise preclude clinical assessment of cardiothoracic ratio (CTR) cases," which positions the device as an assistive tool, but the clinical study did not assess its assistive benefit.
6. Standalone (Algorithm Only) Performance
- Yes, a standalone performance study was done. The described clinical performance testing directly evaluated the EFAI iCTR system's ability to assess CTR independently and compared its outputs (iCTR-derived CTR) to physician-derived CTR, yielding an accuracy of 0.95. The various quality control models and annotation model RMSE values also represent standalone performance metrics.
7. Type of Ground Truth Used
- The ground truth for the clinical performance testing (CTR accuracy) was expert consensus / physician-derived CTR. The document states, "We used accuracy to evaluate the agreement between EFAI iCTR-derived and physician derived CTR." For other quality control aspects (e.g., filtering non-CXR images), the ground truth would likely be established by a human review or a pre-defined dataset with known labels.
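The document does not state how "accuracy" between the device-derived and physician-derived CTR was operationalised. One plausible reading is per-case agreement within a tolerance; the sketch below is an assumption for illustration only (the function name, the 0.05 tolerance, and the sample values are all hypothetical, not from the submission):

```python
def agreement_accuracy(device_ctr: list[float],
                       physician_ctr: list[float],
                       tol: float = 0.05) -> float:
    """Fraction of cases in which the device-derived CTR falls within
    `tol` of the physician-derived CTR.

    NOTE: this exact definition and the tolerance are assumptions; the
    510(k) summary does not specify how agreement was computed.
    """
    if len(device_ctr) != len(physician_ctr) or not device_ctr:
        raise ValueError("inputs must be non-empty and the same length")
    hits = sum(abs(d - p) <= tol for d, p in zip(device_ctr, physician_ctr))
    return hits / len(device_ctr)


device = [0.48, 0.55, 0.61, 0.43]   # hypothetical iCTR outputs
reader = [0.50, 0.53, 0.70, 0.44]   # hypothetical physician-derived CTRs
print(agreement_accuracy(device, reader))  # 0.75
```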
8. Sample Size for the Training Set
- The document states: "Images and cases used for verification testing were carefully separated from training algorithms." However, the sample size for the training set is not provided in the supplied text.
9. How the Ground Truth for the Training Set Was Established
- The document states that "Extensive algorithm development and software verification testing assessed the performance characteristics of the algorithm," and that "Images and cases used for verification testing were carefully separated from training algorithms." However, the method for establishing ground truth for the training set is not described in the provided text. For AI models of this kind, training data is typically annotated by human experts or other trusted sources, but this document does not detail that process.