The Rapid RV/LV software device is designed to measure the maximal diameters of the right and left ventricles of the heart from a volumetric CTPA acquisition and to report the ratio of those measurements for adults. Rapid RV/LV analyzes cases using machine learning algorithms to locate the ventricles and compute their measurements, and it provides the user with annotated images showing those ventricular measurements. Its results are not intended to be used on a stand-alone basis for clinical decision-making or to otherwise preclude clinical assessment of CTPA cases.
The Rapid RV/LV software device is a radiological computer-assisted image processing device. It is a CTPA processing module that operates within the integrated Rapid Platform to locate and measure the right and left ventricle diameters of the human heart and, ultimately, to report the ratio of the right ventricle diameter to the left ventricle diameter. The software analyzes input CTPA images provided in DICOM format and produces two outputs: a visual output containing a color overlay image showing where the ventricle diameter measurements were made, together with the quantitative results, and a text file output (JSON format) containing the quantitative measurement results (the individual right and left ventricle diameters and their corresponding ratio).
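For concreteness, the sketch below shows what such a JSON results file might look like. It is a minimal illustration only: the field names and values are assumptions, since the document does not publish the device's actual output schema.

```python
import json

# Hypothetical structure of the Rapid RV/LV JSON results file.
# Field names and values are illustrative assumptions; the actual
# schema is not published in the summary.
rv_mm = 45.2  # maximal right ventricle diameter (made-up value)
lv_mm = 41.8  # maximal left ventricle diameter (made-up value)

result = {
    "rv_diameter_mm": rv_mm,
    "lv_diameter_mm": lv_mm,
    "rv_lv_ratio": round(rv_mm / lv_mm, 2),  # the reported ratio
}

with open("rv_lv_result.json", "w") as f:
    json.dump(result, f, indent=2)
```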
Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The document primarily focuses on specific performance metrics rather than explicitly listing "acceptance criteria" in a separate table. However, the performance data section implies the following are the primary endpoints for proving the device's accuracy in measuring RV/LV ratios.
| Acceptance Criteria (Implicit) | Reported Device Performance |
|---|---|
| Average slope of RV/LV ratio measurements between device and experts | 1.1 (95% CI: 1.0, 1.2) |
| Average intercept of RV/LV ratio measurements between device and experts | -0.2 (95% CI: -0.1, -0.3) |
| Lower confidence level of the 95% CI of the slope | 1.0 |
| Lower confidence level of the 95% CI of the intercept | -0.1 |
| Mean Bland-Altman bias (RV/LV ratio) | 0.023 (95% CI: -0.04, 0.08) |
| Mean Absolute Error (MAE) between Rapid RV/LV and experts | 3.8 mm |
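The agreement metrics in the table can all be computed from paired device/expert measurements with standard formulas. The sketch below uses NumPy and made-up values to show how; note that in the summary the slope, intercept, and Bland-Altman bias are reported for RV/LV ratios, while the 3.8 mm MAE is reported for the underlying diameter measurements.

```python
import numpy as np

# Made-up paired RV/LV ratio measurements standing in for the
# 124-case test set (device output vs. expert ground truth).
device = np.array([1.10, 0.95, 1.32, 0.88, 1.05])
expert = np.array([1.05, 0.98, 1.25, 0.90, 1.00])

# Linear agreement: fit device = slope * expert + intercept.
slope, intercept = np.polyfit(expert, device, deg=1)

# Bland-Altman bias: mean of the paired differences.
bias = np.mean(device - expert)

# Mean absolute error between paired measurements (the summary's
# 3.8 mm MAE is this quantity computed on the diameter values).
mae = np.mean(np.abs(device - expert))

print(f"slope={slope:.2f}, intercept={intercept:.2f}, "
      f"bias={bias:.3f}, MAE={mae:.3f}")
```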
2. Sample Size for the Test Set and Data Provenance
- Sample Size: 124 CTPA cases.
- Data Provenance: The cases were acquired on scanners from several manufacturers (GE, Philips, Toshiba, and Siemens), suggesting data from multiple sources, likely clinical institutions. The document does not explicitly state the country of origin or whether the study was retrospective or prospective, though the mention of "cases with ground truth established" usually implies a retrospective design in which existing data is annotated.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- Number of Experts: 3 experts.
- Qualifications of Experts: The document does not explicitly state the qualifications of the experts (e.g., "radiologist with 10 years of experience"). It only identifies them as "experts."
4. Adjudication Method for the Test Set
- The document states "ground truth established by 3 experts." It does not specify the adjudication method used (e.g., 2+1, 3+1, or simple consensus); one possible consensus rule is sketched below.
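Absent that detail, one common way to combine three independent reads into a single ground truth is the per-case median, shown here purely as an assumption; the document does not confirm this was the method used.

```python
import numpy as np

# Made-up per-case RV diameter reads (mm) from 3 experts. Taking
# the per-case median is one common consensus rule; the actual
# adjudication method is not stated in the document.
expert_reads = np.array([
    [44.0, 45.5, 44.8],  # case 1
    [39.2, 38.7, 40.1],  # case 2
    [51.0, 49.6, 50.2],  # case 3
])
ground_truth = np.median(expert_reads, axis=1)
print(ground_truth)  # [44.8 39.2 50.2]
```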
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- The document does not indicate that an MRMC comparative effectiveness study was done to evaluate how much human readers improve with AI vs. without AI assistance. The study focuses on the standalone performance of the device against expert ground truth.
6. Standalone Performance Study
- Yes, a standalone performance study was done. The document states: "Final device validation included standalone performance validation." This indicates the algorithm's performance was evaluated by itself, without human-in-the-loop assistance.
7. Type of Ground Truth Used
- Expert Consensus: The ground truth was established by "3 experts."
8. Sample Size for the Training Set
- The algorithm was developed using 516 CTPA cases. The text states that "training included 80% of cases for validation and 20% for training." This phrasing is ambiguous, since during development the larger portion typically goes to training and the smaller to validation. Read literally, the training set would be approximately 103 cases (20% of 516) and the development-phase validation set approximately 413 cases (80% of 516); the arithmetic for both readings is sketched below. In either reading, the total development pool (training plus internal validation) was 516 cases.
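The arithmetic behind the two readings of the split, using only the document's own numbers:

```python
# Arithmetic for the two readings of the 80/20 split over the
# 516 development cases (the document's phrasing is ambiguous).
total = 516

# Literal reading: 20% training, 80% development-phase validation.
train = round(total * 0.20)          # 103 cases
dev_validation = total - train       # 413 cases

# Conventional reading: 80% training, 20% validation.
train_alt = round(total * 0.80)      # 413 cases
val_alt = total - train_alt          # 103 cases

print(train, dev_validation, train_alt, val_alt)  # 103 413 413 103
```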
9. How the Ground Truth for the Training Set Was Established
- The document states that the 516 CTPA cases used for algorithm development covered a "wide range of LV diameters." It does not explicitly detail how ground truth was established for the training set, but it can be inferred that it was established by experts, similar to the test set, given that the final validation relied on expert ground truth.
§ 892.2050 Medical image management and processing system.
(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).