Search Results
Found 2 results
510(k) Data Aggregation
(155 days)
InVision Medical Technology Corporation
InVision Precision Cardiac Amyloid is an automated machine learning-based decision support system, indicated as a screening tool for adult patients aged 65 years and over undergoing cardiovascular assessment using echocardiography.
When utilized by an interpreting physician, this device provides information alerting the physician for referral to confirmatory investigations.
InVision Precision Cardiac Amyloid is indicated in adult populations over 65 years of age. Patient management decisions should not be made solely on the results of the InVision Precision Cardiac Amyloid.
The InVision Precision Cardiac Amyloid (InVision PCA) is a Software as a Medical Device (SaMD) machine-learning screening algorithm to identify high suspicion of cardiac amyloidosis from routinely obtained echocardiogram videos. The device assists clinicians in the diagnosis of cardiac amyloidosis.
The InVision PCA algorithm uses a machine learning process to identify the presence of cardiac amyloidosis. The device inputs images and videos from echocardiogram studies, and it outputs a report suggestive or not suggestive of cardiac amyloidosis.
The device has no physical form and is installed as a third-party application to an institution's PACS system.
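The summary does not disclose the model architecture or its operating point. As a minimal sketch only, the Python snippet below assumes a hypothetical per-video score and decision threshold (`screen_study` and `SUSPICION_THRESHOLD` are invented for illustration) to show how scores for the videos in one study might be aggregated into the binary "suggestive / not suggestive" report described above.

```python
import numpy as np

# Hypothetical decision threshold; the actual operating point of InVision PCA
# is not disclosed in the 510(k) summary.
SUSPICION_THRESHOLD = 0.5

def screen_study(video_scores: list[float]) -> str:
    """Aggregate per-video model scores into a study-level screening result."""
    study_score = float(np.mean(video_scores))
    return ("Suggestive of cardiac amyloidosis"
            if study_score >= SUSPICION_THRESHOLD
            else "Not suggestive of cardiac amyloidosis")

# Example: three videos from one echocardiogram study, with illustrative scores.
print(screen_study([0.72, 0.65, 0.58]))
```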
Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided FDA 510(k) clearance letter:
InVision Precision Cardiac Amyloid: Acceptance Criteria and Performance Study
InVision Precision Cardiac Amyloid (InVision PCA) is a Software as a Medical Device (SaMD) machine-learning screening algorithm developed to identify a high suspicion of cardiac amyloidosis from routinely obtained echocardiogram videos. It acts as a decision support system, alerting interpreting physicians for referral to confirmatory investigations for adult patients aged 65 years and over undergoing cardiovascular assessment using echocardiography.
The device's performance was validated through a comprehensive study, demonstrating its substantial equivalence to the predicate device.
1. Acceptance Criteria and Reported Device Performance
The primary acceptance criteria for the InVision PCA device were established based on its ability to reliably screen for cardiac amyloidosis. The reported performance metrics from the validation study are as follows:
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Sensitivity | 0.607 (60.7%) |
| Specificity | 0.990 (99.0%) |
Note: Specific numerical acceptance thresholds (e.g., "must achieve >X% sensitivity") are not explicitly stated. The reported values are presented as having met the predefined endpoints of the validation study, implying they satisfied the acceptance criteria required for clearance.
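For reference, sensitivity and specificity follow directly from a confusion matrix. The Python sketch below uses hypothetical counts, chosen only so that they sum to the 1221-study test cohort and reproduce the reported ratios; the actual case split is not given in the summary.

```python
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    """Sensitivity = TP / (TP + FN); Specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts consistent with a 1221-study cohort and the reported
# metrics; the true confusion matrix is not disclosed in the 510(k) summary.
sens, spec = sensitivity_specificity(tp=34, fn=22, tn=1153, fp=12)
print(f"Sensitivity: {sens:.3f}, Specificity: {spec:.3f}")  # 0.607, 0.990
```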
2. Sample Size and Data Provenance for Test Set
- Sample Size: 1221 unique echocardiogram studies.
- Data Provenance: The data were selected from three geographically different U.S. sites. The study was conducted on "previously acquired" images, indicating it was a retrospective study.
3. Number of Experts and Qualifications for Ground Truth
The provided document does not explicitly state the number of experts used to establish the ground truth or their specific qualifications. It mentions "confirmatory reference data," which could imply a consensus of expert opinion but does not detail the process.
4. Adjudication Method for Test Set
The document does not explicitly state the adjudication method used (e.g., 2+1, 3+1). It refers to the ground truth being established by "confirmatory reference data, such as diagnostic imaging or pathology," suggesting a definitive diagnostic pathway rather than a multi-reader visual interpretation adjudication.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No Multi-Reader Multi-Case (MRMC) comparative effectiveness study is described in the provided text, so there is no information on how much human readers improve with AI assistance compared to without it. The study focused on the standalone performance of the AI model.
6. Standalone (Algorithm Only) Performance
Yes, a standalone (algorithm only) performance study was conducted. The reported sensitivity of 0.607 and specificity of 0.990 are directly attributable to the InVision PCA algorithm's performance in analyzing echocardiogram studies against confirmed ground truth.
7. Type of Ground Truth Used
The ground truth for the test set was established using confirmatory reference data, such as diagnostic imaging or pathology. This indicates a high-fidelity ground truth derived from definitive diagnostic procedures rather than solely expert consensus on images.
8. Sample Size for Training Set
The document does not specify the sample size used for the training set. It only details the sample size for the validation/test set.
9. How Ground Truth for Training Set Was Established
The document does not explicitly state how the ground truth for the training set was established. It is reasonable to assume that similarly rigorous methods involving confirmatory diagnostic imaging or pathology were used for the training data, consistent with the approach for the test set, but this is not directly mentioned.
(265 days)
InVision Medical Technology Corporation
InVision Precision LVEF is used to process previously acquired transthoracic cardiac ultrasound images, and to manipulate and make measurements on images using an ultrasound device, personal computer, or a compatible DICOM-compliant PACS system to provide an automated estimation of LVEF. This measurement can be used to assist the clinician in a cardiac evaluation. InVision Precision LVEF is indicated for use in patients 22 years and older by sonographers and physicians evaluating cardiac ultrasound.
InVision Precision LVEF is a software as a medical device (SaMD), manufactured by InVision Medical Technology Corporation, intended as an aid in diagnostic review and analysis of echocardiographic data, including the evaluation of left ventricular ejection fraction (LVEF) in cardiovascular ultrasound images in DICOM format. The software interfaces with data files uploaded to a PACS by any ultrasound or data collection equipment. It selects a set of echocardiogram videos of the correct view and generates semi-automatic segmentations of the left ventricle using a machine learning algorithm to form the basis for the calculation of the LVEF output. The analysis results are visualized in the clinician's integrated image view application as adjustable annotations. The user has the option to modify the semi-automatic segmentations suggested by the software. The EF calculation is updated in real time as the user modifies the segmentation. A cardiologist can adjust the annotations and the downstream measurement of LVEF prior to finalization.
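The summary does not state which volume formula underlies the LVEF output. The biplane method of disks (modified Simpson's rule) is the conventional way to derive LVEF from A4C and A2C segmentations, so it is assumed in the Python sketch below; the `biplane_volume` and `lvef` helpers and all numbers are illustrative, not the device's actual implementation.

```python
import numpy as np

def biplane_volume(diam_a4c: np.ndarray, diam_a2c: np.ndarray, long_axis_cm: float) -> float:
    """Biplane method-of-disks LV volume (mL) from matched disk diameters (cm)
    measured on the apical 4-chamber and 2-chamber segmentations."""
    n = len(diam_a4c)
    return float(np.pi / 4.0 * (long_axis_cm / n) * np.sum(diam_a4c * diam_a2c))

def lvef(edv_ml: float, esv_ml: float) -> float:
    """Ejection fraction (%) from end-diastolic and end-systolic volumes."""
    return 100.0 * (edv_ml - esv_ml) / edv_ml

# Illustrative diameters only (20 disks per view is the usual convention).
a4c_ed = np.linspace(4.8, 0.5, 20)
a2c_ed = np.linspace(4.6, 0.5, 20)
edv = biplane_volume(a4c_ed, a2c_ed, long_axis_cm=8.5)
esv = biplane_volume(0.7 * a4c_ed, 0.7 * a2c_ed, long_axis_cm=7.8)
print(f"EDV {edv:.0f} mL, ESV {esv:.0f} mL, LVEF {lvef(edv, esv):.0f}%")
```

Because the EF is a simple function of the two volumes, an edit to either segmentation can be propagated to an updated EF immediately, which is consistent with the real-time update behavior described above.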
Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criterion | Reported Device Performance |
|---|---|
| Root Mean Square Deviation (RMSD) of LVEF vs. reference ground truth EF | Biplane view: ~6.06; A4C view: 6.17; A2C view: 7.12 |
| Dice score for A4C segmentation | 0.89 |
| Dice score for A2C segmentation | 0.90 |
| Overall functional performance | Met all endpoints |
| Accuracy of algorithm | Met all endpoints |
| Image video clip selection function performance | Met all endpoints |
Note: The document states "Root Mean Square Deviation below a set threshold" and "Dice score above a set threshold," but the specific thresholds are not explicitly given. The reported performance values are provided instead.
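The RMSD and Dice metrics in the table are standard definitions; the Python sketch below shows how they are typically computed. The arrays are illustrative only, since the study's per-case LVEF values and segmentation masks are not public.

```python
import numpy as np

def rmsd(predicted_ef: np.ndarray, reference_ef: np.ndarray) -> float:
    """Root mean square deviation between algorithm and ground-truth LVEF."""
    return float(np.sqrt(np.mean((predicted_ef - reference_ef) ** 2)))

def dice(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Dice score between predicted and reference binary segmentations."""
    intersection = np.logical_and(pred_mask, gt_mask).sum()
    return 2.0 * intersection / (pred_mask.sum() + gt_mask.sum())

# Illustrative values only.
print(rmsd(np.array([55.0, 62.0, 48.0]), np.array([60.0, 58.0, 45.0])))
pred = np.zeros((64, 64), dtype=bool); pred[10:50, 12:48] = True
gt = np.zeros((64, 64), dtype=bool); gt[12:52, 10:46] = True
print(dice(pred, gt))
```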
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: Not explicitly stated. The document mentions "A retrospective, multicenter study" and "Images and cases used for verification and validation testing were separate and carefully segregated from training datasets," but does not give a specific number for the test set.
- Data Provenance: Retrospective, multicenter study. It included a variety of imaging equipment manufacturers (Philips, GE, Siemens), implying data from different sites. The country of origin is not specified.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications
- Number of Experts: Three cardiologists.
- Qualifications of Experts: Not explicitly stated, but they are identified as "cardiologists," implying medical doctors specializing in cardiology. No experience level (e.g., 10 years) is provided.
4. Adjudication Method for the Test Set
- Adjudication Method: Consensus annotation of three cardiologists; agreement among the three experts formed the ground truth. The document does not describe a 2+1 or 3+1 scheme, but rather a collective agreement by all three.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
- The document does not mention a Multi-Reader Multi-Case (MRMC) comparative effectiveness study to evaluate human readers' improvement with AI vs. without AI assistance. The study focuses on the device's performance against a ground truth established by experts.
6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done
- Yes, a standalone performance evaluation was done. The study "evaluated the capability of the Precision machine learning model in calculating LVEF against ground truth." The reported RMSD and Dice scores are measures of the algorithm's performance.
- Note that the device description states the user has the option to modify the semi-automatic segmentations, implying a human-in-the-loop aspect in clinical use, but the reported performance data appear to be for the standalone algorithm's initial output before human modification.
7. The Type of Ground Truth Used
- Type of Ground Truth: Expert consensus. Specifically, "the consensus annotation of three cardiologists."
8. The Sample Size for the Training Set
- Training Set Sample Size: Not explicitly stated. The document only mentions that "Images and cases used for verification and validation testing were separate and carefully segregated from training datasets."
9. How the Ground Truth for the Training Set Was Established
- Ground Truth Establishment for Training Set: Not explicitly stated. The document only mentions "training datasets" but does not describe the method used to establish their ground truth. This is a common gap in publicly available summary documents.