Search Results
Found 2 results
510(k) Data Aggregation
(265 days)
InVision Precision LVEF is used to process previously acquired transthoracic cardiac ultrasound images, and to manipulate and make measurements on images using an ultrasound device, personal computer, or a compatible DICOM-compliant PACS system to provide an automated estimation of LVEF. This measurement can be used to assist the clinician in a cardiac evaluation. InVision Precision is indicated for use in patients 22 years and older by sonographers and physicians evaluating cardiac ultrasound.
InVision Precision LVEF is software as a medical device (SaMD), manufactured by InVision Medical Technology Corporation, intended as an aid in diagnostic review and analysis of echocardiographic data, including the evaluation of left ventricular ejection fraction (LVEF) in cardiovascular ultrasound images in DICOM format. The software interfaces with data files uploaded to a PACS by any ultrasound or data-collection equipment. It selects a set of echocardiogram videos of the correct view and generates semi-automatic segmentations of the left ventricle using a machine learning algorithm, which form the basis for the calculation of the LVEF output. The analysis results are visualized in the clinician's integrated image-viewing application as adjustable annotations. The user has the option to modify the semi-automatic segmentations suggested by the software, and the EF calculation is updated in real time as the segmentation is modified. A cardiologist can adjust the annotations and the downstream LVEF measurement prior to finalization.
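The summary does not disclose the exact volume formula behind the LVEF output, but segmentation-based EF tools conventionally use the biplane method of disks (Modified Simpson's rule) over A4C/A2C contours. A minimal sketch of that standard calculation, with hypothetical helper names (the device's internal method may differ):

```python
import math

def simpson_biplane_volume(d_a4c, d_a2c, length_cm):
    """Biplane method-of-disks LV volume (mL).

    d_a4c, d_a2c: per-disk LV diameters (cm) measured perpendicular to the
    long axis in the apical four- and two-chamber views; length_cm: LV
    long-axis length (cm). Illustrative only -- not the device's code.
    """
    n = len(d_a4c)
    assert len(d_a2c) == n, "both views must be sliced into the same disks"
    disk_h = length_cm / n
    # Model each disk as an elliptical cylinder whose two axes are the
    # diameters seen in the two orthogonal apical views.
    return sum(math.pi * (a / 2.0) * (b / 2.0) * disk_h
               for a, b in zip(d_a4c, d_a2c))

def ejection_fraction(edv_ml, esv_ml):
    """LVEF (%) from end-diastolic and end-systolic volumes."""
    return 100.0 * (edv_ml - esv_ml) / edv_ml
```

With segmentations at end-diastole and end-systole, the two volumes feed `ejection_fraction`; editing a contour changes the disk diameters, which is why the EF can update in real time as the user adjusts the annotation.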
Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criterion | Reported Device Performance |
|---|---|
| Root Mean Square Deviation (RMSD) of LVEF vs. reference ground-truth EF | Biplane view: ~6.06; A4C view: 6.17; A2C view: 7.12 |
| Dice score for A4C segmentation | 0.89 |
| Dice score for A2C segmentation | 0.90 |
| Overall functional performance | Met all endpoints |
| Accuracy of algorithm | Met all endpoints |
| Image video clip selection function performance | Met all endpoints |
Note: The document states "Root Mean Square Deviation below a set threshold" and "Dice score above a set threshold," but the specific thresholds are not explicitly given. The reported performance values are provided instead.
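The two metrics in the table above are standard and easy to state precisely: RMSD measures the typical EF error against the reference read, and the Dice score measures overlap between the algorithm's LV segmentation and the expert contour. A minimal sketch of both (illustrative definitions, not the study's analysis code):

```python
import numpy as np

def rmsd(pred_ef, ref_ef):
    """Root mean square deviation between predicted and reference EF values."""
    pred = np.asarray(pred_ef, dtype=float)
    ref = np.asarray(ref_ef, dtype=float)
    return float(np.sqrt(np.mean((pred - ref) ** 2)))

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())
```

A Dice score of 0.89 to 0.90, as reported, means the automated and expert contours share roughly 90% of their combined area, while an RMSD near 6 to 7 EF points bounds the typical disagreement with the consensus read.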
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: Not explicitly stated. The document mentions "A retrospective, multicenter study" and "Images and cases used for verification and validation testing were separate and carefully segregated from training datasets," but does not give a specific number for the test set.
- Data Provenance: Retrospective, multicenter study. It included a variety of imaging equipment manufacturers (Philips, GE, Siemens), implying data from different sites. The country of origin is not specified.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications
- Number of Experts: Three cardiologists.
- Qualifications of Experts: Not explicitly stated, but they are identified as "cardiologists," implying medical doctors specializing in cardiology. No experience level (e.g., 10 years) is provided.
4. Adjudication Method for the Test Set
- Adjudication Method: Consensus annotation of three cardiologists. This suggests a consensus-based method, where agreement among the three experts formed the ground truth. It is not explicitly stated as 2+1 or 3+1, but rather a collective agreement by all three.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
- The document does not mention a Multi-Reader Multi-Case (MRMC) comparative effectiveness study to evaluate human readers' improvement with AI vs. without AI assistance. The study focuses on the device's performance against a ground truth established by experts.
6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done
- Yes, a standalone performance evaluation was done. The study "evaluated the capability of the Precision machine learning model in calculating LVEF against ground truth." The reported RMSD and Dice scores are measures of the algorithm's performance.
- It's important to note that the device description states the user has the option to modify the semi-automatic segmentations, implying a human-in-the-loop aspect in the clinical use, but the reported performance data appears to be for the standalone algorithm's initial output before human modification.
7. The Type of Ground Truth Used
- Type of Ground Truth: Expert consensus. Specifically, "the consensus annotation of three cardiologists."
8. The Sample Size for the Training Set
- Training Set Sample Size: Not explicitly stated. The document only mentions that "Images and cases used for verification and validation testing were separate and carefully segregated from training datasets."
9. How the Ground Truth for the Training Set Was Established
- Ground Truth Establishment for Training Set: Not explicitly stated. The document only mentions "training datasets" but does not describe the method used to establish their ground truth. This is a common gap in publicly available summary documents.
(149 days)
The Caption Interpretation Automated Ejection Fraction software is used to process previously acquired transthoracic cardiac ultrasound images, to store images, and to manipulate and make measurements on images using an ultrasound device, personal computer, or a compatible DICOM-compliant PACS system in order to provide automated estimation of left ventricular ejection fraction. This measurement can be used to assist the clinician in a cardiac evaluation.
The Caption Interpretation Automated Ejection Fraction Software is indicated for use in adult patients.
The Caption Interpretation Automated Ejection Fraction Software ("AutoEF") applies machine learning algorithms to process two-dimensional transthoracic echocardiography images for calculating left ventricular ejection fraction.
The current submission adds a predetermined change control plan (PCCP), which allows future modifications to the device, to the device cleared under K210747.
The version of Caption Interpretation AutoEF cleared under K210747 performs left ventricular ejection fraction estimation using apical four chamber (A4C), apical two chamber (A2C) and the parasternal long-axis (PLAX) cardiac ultrasound images.
The software uses an algorithm that was derived through use of deep learning and locked prior to validation. The product operates as an add-in to a DICOM PACS system, ultrasound device, or personal computer. Caption Interpretation receives imaging data either directly from an ultrasound system or from a module in a PACS system.
The device includes the following main components:
- Clip Annotation and Selection: The AutoEF software includes a function that processes video clips in a study to automatically classify clips as PLAX, AP4, and AP2 views. This view selection is based on a convolutional network. It also includes a function, Image Quality Score (IQS), that selects the best available PLAX, AP4, and AP2 clips within the study, or indicates to the user that no clips have sufficient quality to estimate ejection fraction, based on prespecified IQS thresholds.
- Ejection Fraction Estimator and Confidence Metric: The automated ejection fraction estimation is performed using a machine learning model trained on apical and parasternal long-axis views. The model is trained with a dataset from a large number of patients, representative of the intended patient population and a variety of contemporary cardiac ultrasound scanners. The EF calculation can be performed on a 3-view combination (PLAX, AP4, and AP2), 2-view combinations (AP4 and AP2, AP2 and PLAX, AP4 and PLAX), or single views (AP4, AP2). The confidence metric provides the expected error in left-ventricular ejection fraction estimation and is based on the IQS.
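The actual IQS thresholds and the model's view-combination logic are proprietary and not described in the summary, but the workflow the two bullets describe can be sketched: gate clips by IQS, keep the best clip per view, and combine whatever views survive. The threshold value, the weighting scheme, and the field names below are all assumptions for illustration:

```python
def select_best_clips(clips, iqs_threshold=0.5):
    """Keep the highest-IQS clip per view; views with no clip above the
    threshold are dropped, mirroring the "insufficient quality" indication.

    clips: list of dicts like {"view": "AP4", "iqs": 0.8, "ef": 57.0}.
    The real prespecified IQS thresholds are not public; 0.5 is arbitrary.
    """
    best = {}
    for clip in clips:
        if clip["iqs"] >= iqs_threshold:
            current = best.get(clip["view"])
            if current is None or clip["iqs"] > current["iqs"]:
                best[clip["view"]] = clip
    return best

def combined_ef(best):
    """Naive IQS-weighted average of per-view EF estimates (illustrative
    only; the device's actual multi-view fusion is not disclosed)."""
    if not best:
        return None  # no view cleared the quality gate
    total_iqs = sum(c["iqs"] for c in best.values())
    return sum(c["ef"] * c["iqs"] for c in best.values()) / total_iqs
```

This also makes the confidence-metric idea concrete: since the IQS gates which views contribute, an IQS-derived confidence value can bound the expected EF error for the combination actually used.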
Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided text:
Acceptance Criteria and Reported Device Performance
| Acceptance Criteria | Reported Device Performance |
|---|---|
| 80% positive predictive value (PPV) and 80% sensitivity for correct mode, view, and minimum number of frames (Clip Annotator). | Observed PPV point estimates for the Clip Annotator were greater than 97% for identification of the imaging mode and the view. Observed sensitivity point estimates were greater than 96% across views and imaging mode. (Meets criteria) |
| 80% of clips meet expert criteria for suitability for EF estimation (Clip Annotator). | Not explicitly reported as a percentage in the provided text. The Clip Annotator's performance for mode, view, and frames implies suitability, and the Clip Annotator study met its pre-defined acceptance criteria, suggesting this was addressed indirectly. |
| Overall (all views) and all combined views: AutoEF is within 9.2% RMSD of expert EF. | Overall (best available view) RMSD EF% [95% CI]: 7.21 [6.62, 7.74]. Combined views: AP4 and AP2: 7.27 [6.55, 7.92]; AP4 and PLAX: 7.50 [6.85, 8.09]; AP4, AP2, and PLAX: 7.24 [6.64, 7.80]; AP2 and PLAX: 8.04 [7.32, 8.70]. (All meet criteria) |
| New views superior to 11.024% RMSD individually. | Individual views: AP4 only: 7.76 [7.01, 8.45]; AP2 only: 8.27 [7.44, 9.03]. (Meets criteria; the PLAX individual view is not explicitly reported against this criterion.) |
| For each standard view, the Confidence Metric must meet the equivalence-to-expert-EF criteria as defined in the PCCP. | Testing of the confidence metric functionality verified successful performance of the Confidence Metric in estimating the error range of the EF estimates around the reference EF using equivalence criteria, with evidence that the difference between the estimated EF and the reference EF is normally distributed. (Meets criteria) |
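The table reports each RMSD point estimate with a 95% confidence interval, but the summary does not say how the intervals were computed. A percentile bootstrap over the per-patient EF errors is one common way to produce such intervals; the sketch below assumes that method purely for illustration:

```python
import numpy as np

def rmsd_with_ci(pred, ref, n_boot=2000, seed=0):
    """RMSD of the EF error with a percentile-bootstrap 95% CI.

    The cleared device's summary gives RMSD [95% CI] values without naming
    the CI method; this resampling approach is an assumption, not the
    study's actual analysis.
    """
    rng = np.random.default_rng(seed)
    err = np.asarray(pred, dtype=float) - np.asarray(ref, dtype=float)
    point = float(np.sqrt(np.mean(err ** 2)))
    # Resample patients (errors) with replacement and recompute the RMSD.
    boots = [np.sqrt(np.mean(rng.choice(err, size=err.size) ** 2))
             for _ in range(n_boot)]
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return point, float(lo), float(hi)
```

Against the 9.2% acceptance threshold, a criterion like this passes when the point estimate (or, more stringently, the upper CI bound) falls below the threshold; the reported 7.21 [6.62, 7.74] satisfies either reading.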
Study Details
2. Sample size used for the test set and the data provenance
- Sample Size for Test Set: 186 patient studies.
- Data Provenance: Retrospective, multicenter study. The studies were acquired from three sites across the US: Minneapolis Heart Institute, Duke University, and Northwestern University.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Number of Experts: Not explicitly stated as a specific number of individual experts. The text refers to "a panel of expert readers" for the Clip Annotator study and "expert cardiologists" for establishing the EF reference standard.
- Qualifications of Experts: "Expert cardiologists." No further specific details like years of experience are provided, but the title implies appropriate medical qualifications for interpreting echocardiograms.
4. Adjudication method for the test set
- Adjudication Method (Clip Annotator Study): "Results of the Clip Annotator were compared to evaluation by a panel of expert readers." This implies a consensus-based or direct comparison method, but the specific adjudication (e.g., 2+1, 3+1) is not detailed.
- Adjudication Method (EF Calculation): The reference standard for ejection fraction was established by "expert cardiologists." This suggests expert consensus or established expert interpretation, but a formal adjudication process (e.g., how disagreements between experts were resolved if multiple experts reviewed the same case) is not explicitly described.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done; If So, the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance
- MRMC Comparative Effectiveness Study: No, a multi-reader multi-case (MRMC) comparative effectiveness study comparing human readers with AI assistance vs. without AI assistance was not reported in this document. The clinical validation focused on comparing the AutoEF device's performance directly against an expert-established reference standard, not on the improvement of human readers using the device.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done
- Standalone Performance: Yes, the clinical validation study assessed the standalone performance of the Caption Health AutoEF. The "test compared the Caption Health AutoEF to the expert produced and reported biplane Modified Simpson's ejection fraction." This means the algorithm's output was directly compared to the ground truth without human intervention in the device's estimation process. The clinician's ability to edit the estimation is mentioned as a feature, but the presented performance results are for the automated estimation.
7. The type of ground truth used
- Ground Truth Type (Clip Annotator): Evaluation by a "panel of expert readers."
- Ground Truth Type (Ejection Fraction): Reference standard for ejection fraction was established by 2D echo using the biplane Modified Simpson's method of disks, performed by "expert cardiologists."
8. The sample size for the training set
- Training Set Sample Size: The text states, "The model is trained with a dataset from a large number of patients." However, a specific numerical sample size for the training set is not provided. It also mentions training occurred on "a dataset from a large number of patients, representative of the intended patient population and variety of contemporary cardiac ultrasound scanners."
9. How the ground truth for the training set was established
- Training Set Ground Truth Establishment: The document does not explicitly detail how the ground truth for the training set was established. It only mentions that the machine learning model was "trained on apical and parasternal long-axis views" and derived through "deep learning," and "locked prior to validation." Given the nature of EF calculation, it is highly probable that expert cardiologists also established the ground truth for the training data, likely using methods similar to the test set (e.g., biplane Modified Simpson's method).