AI-Rad Companion (Pulmonary) is image processing software that provides quantitative and qualitative analysis from previously acquired Computed Tomography DICOM images to support radiologists and physicians from emergency medicine, specialty care, urgent care, and general practice in the evaluation and assessment of disease of the lungs. It provides the following functionality:
- Segmentation and measurement of the complete lung and individual lung lobes
- Identification of areas with Hounsfield values below a predefined threshold, for the complete lung and for individual lung lobes
- An interface to the external medical device syngo.CT Lung CAD
- Segmentation and measurement of detected lung lesions and assignment of each lesion to its corresponding lung lobe
The software has been validated for data from Siemens (filtered backprojection and iterative reconstruction), GE Healthcare (filtered backprojection reconstruction), and Philips (filtered backprojection reconstruction).
Only DICOM images of adult patients are considered to be valid input.
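The low-attenuation analysis described above amounts to thresholding voxels in Hounsfield units within a segmented region. A minimal sketch in Python, assuming NumPy arrays for the CT volume and lobe mask; the threshold value, function name, and toy data are illustrative, not taken from the device's specification (−950 HU is a commonly used emphysema threshold):

```python
import numpy as np

# Illustrative threshold; the software's actual predefined value is not stated here.
LOW_ATTENUATION_HU = -950

def low_attenuation_fraction(ct_hu: np.ndarray, lobe_mask: np.ndarray) -> float:
    """Fraction of voxels inside the lobe mask that fall below the HU threshold."""
    lobe = ct_hu[lobe_mask.astype(bool)]
    return float((lobe < LOW_ATTENUATION_HU).mean()) if lobe.size else 0.0

# Toy 5-voxel "lobe": two voxels fall below -950 HU.
hu   = np.array([-980, -960, -900, -850, -700])
mask = np.ones(5)
frac = low_attenuation_fraction(hu, mask)  # 2 of 5 voxels -> 0.4
```

In practice the same call would be made once per lobe mask to report low-attenuation areas per lobe as well as for the complete lung.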
AI-Rad Companion (Pulmonary) is a software-only image processing application that supports quantitative and qualitative analysis of previously acquired CT DICOM images to support radiologists and physicians from emergency medicine, specialty care, and general practice in the evaluation and assessment of disease of the thorax.
Here is a summary of the acceptance criteria and the study that proves the device meets them, based on the provided FDA document for Siemens AI-Rad Companion (Pulmonary):
1. Table of Acceptance Criteria & Reported Device Performance
The document doesn't explicitly state "acceptance criteria" as clear pass/fail thresholds for each metric. Instead, it describes validated performance results and claims they are "superior" to the predicate device, thereby supporting substantial equivalence. The key performance metrics are for lung lobe segmentation.
| Feature/Metric | Acceptance Criteria (Implied/Compared) | Reported Device Performance |
|---|---|---|
| Lung Lobe Segmentation | Performance must be "superior" to the predicate device (syngo.CT Pulmo 3D). The specific quantitative thresholds for "superior" are not explicitly defined as acceptance criteria but are demonstrated by the comparative results. | DICE coefficients for the individual lung lobes ranged between 0.95 and 0.98 (standard deviations were also reported). |

2. Sample Size and Data Provenance for the Test Set
- Sample Size: "4,500 CT data sets"
- Data Provenance: Retrospective performance study from "multiple clinical sites from within and outside United States."
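For reference, the DICE coefficient reported above is the standard overlap measure between two binary segmentation masks. A minimal sketch in Python, assuming NumPy arrays; the function name and toy 1-D "masks" are illustrative stand-ins for 3-D lobe segmentations:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """DICE overlap between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * intersection / total if total else 1.0

# Toy example: algorithm mask overlaps ground truth in 2 voxels.
algo  = np.array([0, 1, 1, 1, 0, 0])
truth = np.array([0, 1, 1, 0, 0, 0])
score = dice_coefficient(algo, truth)  # 2*2 / (3+2) = 0.8
```

A score of 1.0 means perfect agreement with the manually established ground truth, so the reported 0.95–0.98 range indicates very close agreement per lobe.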
3. Number of Experts and Qualifications for Ground Truth
- The document does not explicitly state the "number of experts" or their specific "qualifications" beyond mentioning "manually established ground truth." It implies that the ground truth was established by qualified professionals, likely radiologists or trained medical personnel, given the nature of the task (segmentation).
4. Adjudication Method for the Test Set
- The document does not specify an adjudication method (e.g., 2+1, 3+1). It states "manually established ground truth," which typically implies consensus among multiple readers or a single highly experienced reader whose work is considered the gold standard within the study's context.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- No evidence of an MRMC study. The document describes a standalone (algorithm-only) performance study directly comparing the AI algorithm's output to "manually established ground truth" and claiming superiority over a predicate device's algorithm, not an AI-assisted human reader study. The purpose of this submission is to demonstrate substantial equivalence of the new AI-Rad Companion to existing predicate devices, not improvement in human reader performance.
6. Standalone (Algorithm Only) Performance Study
- Yes, a standalone study was done. The performance metrics (DICE coefficients, surface distance, Hausdorff distance, volume error) were computed by comparing the output of the algorithm to the manually established ground truth. This confirms it was an algorithm-only performance evaluation without human-in-the-loop.
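The other metrics named here can be sketched in the same spirit as the DICE coefficient. Below, an illustrative symmetric Hausdorff distance between two toy point sets standing in for segmentation surfaces, and a relative volume error from voxel counts; all names and values are made up for illustration and are not from the submission:

```python
import numpy as np

def hausdorff_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Hausdorff distance: worst-case disagreement between point sets."""
    # Pairwise distances between every point in a and every point in b.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Two toy 2-D "surfaces" differing by at most 0.1 units.
surface_a = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
surface_b = np.array([[0.0, 0.0], [1.1, 0.0], [0.0, 0.9]])
hd = hausdorff_distance(surface_a, surface_b)  # ~0.1

# Relative volume error from voxel counts (the voxel volume cancels out).
vox_algo, vox_truth = 980, 1000
volume_error = abs(vox_algo - vox_truth) / vox_truth  # 0.02, i.e. 2%
```

Average surface distance would replace the max with a mean over the per-point minimum distances; all of these compare the algorithm's output directly against the ground truth, with no human in the loop.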
7. Type of Ground Truth Used
- Expert Consensus/Manual Establishment: The ground truth for the test set was "manually established ground truth." This typically refers to annotations or segmentations performed by human experts (e.g., radiologists) and potentially reviewed for consensus.
8. Sample Size for the Training Set
- The document mentions a "Training cohort: size and properties of data used for training" as a structural element of their analysis but does not provide the specific sample size for the training set.
9. How the Ground Truth for the Training Set was Established
- The document mentions "Description of ground truth / annotations generation" as a structural element for their algorithm analysis but does not detail how the ground truth for the training set was established. It can be inferred that it was likely generated through expert annotations, potentially similar to the test set, but specific methods (e.g., single expert, multiple experts, consensus, specific tools) are not described.
§ 892.1750 Computed tomography x-ray system.
(a) Identification. A computed tomography x-ray system is a diagnostic x-ray system intended to produce cross-sectional images of the body by computer reconstruction of x-ray transmission data from the same axial plane taken at different angles. This generic type of device may include signal analysis and display equipment, patient and equipment supports, component parts, and accessories.
(b) Classification. Class II.