510(k) Data Aggregation
(110 days)
Libby™ Echo:Prio is software that is used to process previously acquired DICOM-compliant cardiac ultrasound images, and to make measurements on these images in order to provide automated estimation of several cardiac measurements. The data produced by this software is intended to be used to support qualified cardiologists, sonographers, or other licensed professional healthcare practitioners for clinical decision-making. Libby™ Echo:Prio is indicated for use in adult patients.
Echo:Prio is an image post-processing analysis software device used for viewing and quantifying cardiovascular ultrasound images. The device is intended to aid diagnostic review and analysis of echocardiographic data, patient record management and reporting. The software provides an interface for a skilled sonographer to perform the necessary markup on the echocardiographic image prior to review by the prescribing physician. The markup includes: the cardiac segments captured, measurements of distance, time, area, quantitative analysis of cardiac function, and a summary report. The software allows the sonographer to enter their markup manually and/or manually correct automatically generated results. It also provides automated markup and analysis, which the sonographer may choose to accept outright, to accept partially and modify, or to reject and ignore. Machine learning based view classification and border segmentation form the basis for this automated analysis. Additionally, the software has features for organizing, displaying, and comparing to reference guidelines the quantitative data from cardiovascular images acquired from ultrasound scanners.
The provided text describes the Libby™ Echo:Prio software, its intended use, and performance data from its premarket notification. Here's a breakdown of the acceptance criteria and the study proving the device meets them:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly state "acceptance criteria" in a tabular format with defined thresholds. However, it presents performance metrics from the validation study which serve as the evidence of the device's capability. We can infer the implicit "acceptance criteria" from these reported performance metrics, which are presented as achieved targets.
Metric (Implied Acceptance Criteria) | Reported Device Performance |
---|---|
View Classification Accuracy | 97% |
View Classification F1 Score | > 96.6% (average) |
View Classification Sensitivity (Sn) | 96.8% (average) |
View Classification Specificity (Sp) | 98.5% (average) |
Heart Rate (HR) Estimation Bias (Regression Slope) | 0.98 (95% CI) compared to 12-lead ECG ground truth |
Ejection Fraction (EF) Prediction (Bivariate Linear Regression Slope) | 0.79 (95% CI: 0.52, 0.98) compared to human expert annotations |
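The classification metrics in the table (accuracy, sensitivity, specificity, F1) can all be derived from a multi-class confusion matrix. A minimal sketch in Python, using a hypothetical three-view matrix since the submission's per-view counts are not given:

```python
# Illustrative computation of view-classification metrics from a multi-class
# confusion matrix. The matrix values below are hypothetical, not from the
# 510(k) submission.

def per_class_metrics(cm, cls):
    """Sensitivity, specificity, and F1 for one class of confusion
    matrix cm, where cm[i][j] = count of true class i predicted as j."""
    n = len(cm)
    tp = cm[cls][cls]
    fn = sum(cm[cls][j] for j in range(n) if j != cls)
    fp = sum(cm[i][cls] for i in range(n) if i != cls)
    tn = sum(cm[i][j] for i in range(n) for j in range(n)
             if i != cls and j != cls)
    sn = tp / (tp + fn)                      # sensitivity (recall)
    sp = tn / (tn + fp)                      # specificity
    precision = tp / (tp + fp)
    f1 = 2 * precision * sn / (precision + sn)
    return sn, sp, f1

def accuracy(cm):
    correct = sum(cm[i][i] for i in range(len(cm)))
    total = sum(sum(row) for row in cm)
    return correct / total

# Hypothetical 3-view matrix (e.g. apical 4-chamber, apical 2-chamber, PLAX).
cm = [[97, 2, 1],
      [3, 95, 2],
      [1, 1, 98]]
overall_accuracy = accuracy(cm)
view_metrics = [per_class_metrics(cm, c) for c in range(3)]
```

The averaged sensitivity and specificity reported for Echo:Prio would be the per-view values averaged across all supported views.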
2. Sample size used for the test set and the data provenance
- Test Set Sample Size: The document states that performance testing was done "retrospectively on a diverse clinical dataset." However, the exact sample size for this test set is not specified in the provided text.
- Data Provenance:
- Country of Origin: Not specified.
- Retrospective or Prospective: Retrospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Number of Experts: For the Ejection Fraction (EF) prediction, ground truth was established by "four human experts."
- Qualifications of Experts: The specific qualifications (e.g., years of experience, subspecialty) of these four human experts are not explicitly stated. They are referred to as "human experts" or "clinicians."
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
The adjudication method for constructing the ground truth from the four human experts for EF prediction is not explicitly stated. It is only mentioned that EF prediction was compared with "annotations by four human experts."
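Since the adjudication scheme is not stated, one common (here purely hypothetical) construction is to average the four experts' EF annotations per case and report inter-expert spread as a measure of disagreement:

```python
# Hypothetical consensus construction -- the submission does not say how the
# four expert EF annotations were combined. Averaging per case is one common
# approach; the standard deviation summarizes inter-expert disagreement.
from statistics import mean, stdev

def consensus_ef(annotations):
    """annotations: one case's expert EF readings (%).
    Returns (consensus mean, inter-expert standard deviation)."""
    return mean(annotations), stdev(annotations)

# Hypothetical case: four experts annotate EF for the same echo study.
readings = [55.0, 58.0, 53.0, 56.0]
ef, spread = consensus_ef(readings)
```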
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI versus without AI assistance
A Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not explicitly described in the provided text. The performance data focuses on the software's standalone accuracy in comparison to a ground truth, rather than measuring the improvement of human readers assisted by the AI.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
Yes, a standalone performance evaluation of the algorithm's predictions (view classification, HR, EF) against established ground truth was performed. The data presented ("view classification accuracy of 97%", "HR output estimate is with minimal bias", "prediction of the EF output... had a slope of 0.79") reflects the algorithm's direct performance.
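The slope-based agreement analysis described above can be sketched as an ordinary least-squares fit of the algorithm's EF output against the expert reference; a slope near 1.0 indicates minimal proportional bias. The data values below are hypothetical, not from the submission:

```python
# Minimal sketch of a standalone agreement analysis: OLS slope of
# algorithm-predicted EF on the expert reference EF. Hypothetical data.

def ols_slope(x, y):
    """Least-squares slope of y on x (slope near 1.0 => minimal bias)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

expert_ef    = [35.0, 45.0, 55.0, 60.0, 65.0]   # hypothetical reference EF (%)
algorithm_ef = [34.0, 44.5, 54.0, 58.0, 63.0]   # hypothetical device output
slope = ols_slope(expert_ef, algorithm_ef)
```

In the actual study, a confidence interval on the slope (reported as 0.52 to 0.98 for EF) would typically be obtained from the regression standard error or by bootstrapping.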
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The types of ground truth used are:
- 12-lead ECG: For Heart Rate (HR) estimation.
- Human Expert Annotations/Consensus (implied): For Left Ventricular Ejection Fraction (EF) prediction, derived from the "annotations by four human experts." For view classification, the "accuracy" and "F1 value" imply a comparison to a set ground truth, likely also established by experts.
8. The sample size for the training set
The sample size for the training set used to develop the machine learning model is not specified in the provided text.
9. How the ground truth for the training set was established
The method for establishing ground truth for the training set is not specified in the provided text. It only vaguely mentions "Machine learning based view classification and border segmentation form the basis for this automated analysis."
(263 days)
Libby IAAA is intended to review and analyze Intravascular optical coherence tomography (OCT) images in raw OCT file format. IAAA enables quantification of artery and/or stent dimensions. The software is intended to be used by or under supervision of a Cardiologist.
The Libby IAAA v1.0 platform is a web-accessible post-processing analysis device used for viewing and quantifying intravascular OCT images. The device is intended to visualize and quantify OCT pullback data in raw OCT file format. The device enables lumen, stent, and stent strut detection and has features for loading, saving, and report generation of aggregated quantitative data. The device allows for analysis of raw Intravascular optical coherence tomography (OCT) files obtained from the Abbott Laboratories C7-XR system and compatible imaging catheters.
The web-based platform can be used in common desktop web browsers. A user opens an intravascular image pullback file using the platform and has the ability to use various modules to perform image analysis on areas of interest. The platform includes the following module panels for visualization, quantification, and report generation:
Visualization:
- 2D cross-sectional view
- 2D longitudinal view
- Image navigation tools
- Measurement and annotation tools
- Bookmark areas of interest
Quantification:
- Distance and area measurements
- Guidewire detection
- Lumen and Stent area quantification
- Stent and strut detection (pullback level and frame level)
- Strut classification (covered versus uncovered, apposed, and malapposed)
Data Reporting:
Study information, lumen areas, stent areas, reference areas, and percent stenosis, along with user-created annotations, are displayed to the user. The software automatically saves all data, and the user has the option to generate a report in .xlsx format.
The product is intended to be used by or under supervision of a board-certified Cardiologist.
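The "percent stenosis" figure the software reports is conventionally computed from lumen areas as area stenosis; whether IAAA uses exactly this reference-segment definition is an assumption, and the numbers below are hypothetical:

```python
# Sketch of the conventional percent-area-stenosis calculation from OCT lumen
# areas. Assumes the standard definition: (1 - MLA / reference area) * 100.

def percent_area_stenosis(minimal_lumen_area, reference_area):
    """Percent area stenosis from lumen areas in mm^2."""
    return (1.0 - minimal_lumen_area / reference_area) * 100.0

# Hypothetical pullback: MLA of 2.4 mm^2 against an 8.0 mm^2 reference.
ps = percent_area_stenosis(2.4, 8.0)   # 70.0 % area stenosis
```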
Here's an analysis of the acceptance criteria and study as described in the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state formal acceptance criteria with numerical targets. Instead, it describes a standalone performance test where the device's output was compared against analyses performed by expert cardiologists. The conclusion states that the device "raises no different questions of safety and effectiveness and is substantially equivalent to the predicate device in terms of safety, effectiveness, and performance," implying that the device's performance was deemed acceptable relative to expert cardiologists.
Based on the text, the acceptance criteria implicitly involve the device's automated quantification of artery and/or stent dimensions aligning sufficiently with expert manual analysis.
Acceptance Criteria (Implicit) | Reported Device Performance |
---|---|
Device's automated quantification of artery and/or stent dimensions should be comparable to expert manual analysis. | Standalone performance testing consisted of a "head to head analysis of a generalized dataset manually analyzed by expert cardiologists and compared to the performance of the Libby IAAA algorithm." The conclusion states the device "raises no different questions of safety and effectiveness." |
Software specifications, applicable performance standards, and regulatory guidance documents (e.g., ANSI/AAMI/IEC 62304, FDA Guidance). | "evaluated and verified in accordance with software specifications, applicable performance standards through software verification and validation testing, in addition to the FDA Guidance documents..." |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: The document mentions "a generalized dataset" but does not provide a specific number for the sample size used in the standalone performance test.
- Data Provenance: Not explicitly stated. The term "generalized dataset" suggests it might be diverse, but there's no mention of country of origin or whether it was retrospective or prospective.
3. Number of Experts and Qualifications
- Number of Experts: The document refers to "expert cardiologists" (plural), but does not specify the exact number of experts used to establish the ground truth for the test set.
- Qualifications of Experts: The document states "expert cardiologists." It doesn't provide specific details like years of experience, subspecialty certifications, or affiliations.
4. Adjudication Method for the Test Set
The document describes the test as a "head to head analysis of a generalized dataset manually analyzed by expert cardiologists and compared to the performance of the Libby IAAA algorithm." This implies that the experts' manual analyses served as the reference against which the algorithm was compared. However, it does not explicitly mention an adjudication method (e.g., 2+1, 3+1 consensus) among the experts themselves if multiple experts were involved in the manual analysis. It's possible each expert's analysis was independently compared, or a consensus was reached, but this detail is missing.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Was an MRMC study done? No. The document describes a "standalone performance test" comparing the algorithm against expert manual analysis. It does not mention a comparative effectiveness study where human readers' performance with and without AI assistance was evaluated.
- Effect size of human reader improvement: Since an MRMC study was not described, there is no information on the effect size of human readers improving with AI assistance.
6. Standalone Performance
- Was standalone (algorithm-only) performance done? Yes. The document explicitly states: "Standalone performance testing consisted of a head to head analysis of a generalized dataset manually analyzed by expert cardiologists and compared to the performance of the Libby IAAA algorithm." This confirms an algorithm-only evaluation.
7. Type of Ground Truth Used
- Type of Ground Truth: The ground truth for the standalone performance test was established through expert consensus/manual analysis ("manually analyzed by expert cardiologists").
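A head-to-head comparison of this kind is typically summarized per frame as bias (systematic over- or under-measurement) and mean absolute difference between the algorithm and the expert tracing. A minimal sketch with hypothetical lumen-area data:

```python
# Sketch of a head-to-head agreement summary: algorithm vs. expert per-frame
# lumen areas, reduced to mean bias and mean absolute difference.
# All numbers are hypothetical, not from the submission.

def agreement(expert, algo):
    diffs = [a - e for e, a in zip(expert, algo)]
    bias = sum(diffs) / len(diffs)                 # systematic over/under-call
    mad = sum(abs(d) for d in diffs) / len(diffs)  # typical error magnitude
    return bias, mad

expert_mm2 = [5.1, 6.3, 4.8, 7.2, 5.9]   # hypothetical expert lumen areas
algo_mm2   = [5.0, 6.5, 4.7, 7.1, 6.0]   # hypothetical algorithm output
bias, mad = agreement(expert_mm2, algo_mm2)
```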
8. Sample Size for the Training Set
The document does not provide any information about the sample size for the training set. It focuses solely on the performance testing.
9. How Ground Truth for the Training Set Was Established
The document does not provide any information about how the ground truth for the training set was established, as it does not discuss the training process or dataset.