510(k) Data Aggregation (103 days)
Strain AI is intended for noninvasive processing of cardiac ultrasound images to provide measurements of global longitudinal strain in adult patients with suspected disease.
Exo's Strain AI is a software as a medical device (SaMD), intended as an aid in the diagnostic analysis of echocardiography data. It specifically measures global longitudinal strain (GLS) from apical 4-chamber (A4C) cardiac ultrasound images.
This software is developed as a module to be integrated by a third-party developer into their legally marketed ultrasound imaging device.
The software does not have a built-in viewer; instead, it integrates into a third-party ultrasound imaging device. The software functions as a post-processing tool, analyzing images after they are acquired. End-users have the option to accept or reject the provided measurements.
Strain AI takes image data as input and outputs a quantitative measurement of global longitudinal strain (GLS) from apical 4-chamber (A4C) cardiac ultrasound images. It is important to note that patient management decisions should not be made solely on the results of the Strain AI analysis.
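To make the integration pattern above concrete, here is a minimal sketch of what such a post-processing module's interface could look like from the host application's side. The class and method names (StrainModule, GLSResult, measure_gls) and the placeholder strain value are illustrative assumptions, not Exo's actual API or algorithm.

```python
from dataclasses import dataclass
from typing import Optional

import numpy as np


@dataclass
class GLSResult:
    """Container for a single GLS measurement (hypothetical)."""
    gls_percent: float               # global longitudinal strain, in percent
    accepted: Optional[bool] = None  # set later by the end-user (accept/reject)


class StrainModule:
    """Hypothetical post-processing module illustrating the integration pattern:
    the host ultrasound application acquires the A4C cine loop and hands the
    pixel data to the module; the module returns a GLS value and provides no
    viewer of its own."""

    def measure_gls(self, a4c_frames: np.ndarray) -> GLSResult:
        """a4c_frames: (num_frames, height, width) grayscale apical 4-chamber loop."""
        gls = self._estimate_gls(a4c_frames)
        return GLSResult(gls_percent=gls)

    def _estimate_gls(self, frames: np.ndarray) -> float:
        # Stand-in for the proprietary strain-estimation algorithm.
        return -18.0


# Host-application flow: acquire the loop, run the module, then let the
# clinician accept or reject the number before it enters the report.
loop = np.zeros((60, 480, 640), dtype=np.uint8)  # placeholder cine loop
result = StrainModule().measure_gls(loop)
result.accepted = True                           # clinician's decision
print(f"GLS = {result.gls_percent:.1f}% (accepted={result.accepted})")
```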
Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria Measurement | Accepted Range / Threshold | Reported Device Performance (GLS) |
|---|---|---|
| Intraclass Correlation Coefficient (ICC) | Not explicitly stated as a numerical threshold, but implies high correlation with the reference | 0.95 (0.91 - 0.97) |
| Root Mean Square Difference (RMSD) | Not explicitly stated as a numerical threshold, but implies low difference from the reference | 2.76 (2.44 - 3.17) |
Note: The document states that the performance was "successfully evaluated" and "consistent among clinically meaningful subgroups," and the reported ICC and RMSD values contribute to this conclusion, suggesting they met internal acceptance criteria. Formal numerical thresholds for acceptance are not explicitly listed in this summary.
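For context on the two metrics in the table, here is a hedged sketch of how ICC and RMSD between device and reference GLS values are commonly computed. The summary does not state which ICC variant was used; ICC(2,1) is shown only as an illustration, and the numbers generated below are synthetic, not study data.

```python
import numpy as np


def rmsd(device: np.ndarray, reference: np.ndarray) -> float:
    """Root mean square difference between paired GLS measurements."""
    return float(np.sqrt(np.mean((device - reference) ** 2)))


def icc_2_1(device: np.ndarray, reference: np.ndarray) -> float:
    """Two-way random-effects, absolute-agreement, single-measure ICC(2,1).
    Shown only as an illustration; the summary does not specify the variant."""
    ratings = np.column_stack([device, reference])  # n subjects x k raters
    n, k = ratings.shape
    grand_mean = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)
    # Mean squares from a two-way ANOVA without replication
    ms_r = k * np.sum((row_means - grand_mean) ** 2) / (n - 1)  # between subjects
    ms_c = n * np.sum((col_means - grand_mean) ** 2) / (k - 1)  # between raters
    ss_e = np.sum((ratings - row_means[:, None] - col_means[None, :] + grand_mean) ** 2)
    ms_e = ss_e / ((n - 1) * (k - 1))                            # residual
    return float((ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n))


# Illustrative use with made-up numbers, not study data:
rng = np.random.default_rng(0)
reference = rng.normal(-18, 3, size=100)           # reference-device GLS values (%)
device = reference + rng.normal(0, 1.5, size=100)  # hypothetical AI outputs
print(f"ICC(2,1) = {icc_2_1(device, reference):.2f}, RMSD = {rmsd(device, reference):.2f}")
```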
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: Not explicitly stated as a single number. The document mentions "test data encompassing diverse demographic variables, including gender, age (ranging from 21 to 96), and ethnicity."
- Data Provenance:
- Country of Origin: Not specified.
- Retrospective or Prospective: Not explicitly stated. The phrase "images acquired during a routine clinical practice" could suggest either retrospective use of existing clinical data or prospective collection within a routine clinical setting; the text is not definitive on this point.
- Specifics: Data was collected from "multiple clinical sites in metropolitan cities with diverse racial patient populations."
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- Number of Experts: Not explicitly stated.
- Qualifications of Experts: Not explicitly stated.
The document indicates that "The ground truth (reference data) was obtained using the reference device." This implies that the ground truth was established by the output of the reference device (Us2.v2) rather than direct expert interpretation of the images.
4. Adjudication Method for the Test Set
- Adjudication Method: Not applicable, as the ground truth was derived from the output of a reference device (Us2.v2), not from multiple expert adjudications.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Was an MRMC study done? No. The study focused on the standalone performance of the Strain AI device against a reference device, not on how human readers' performance might improve with AI assistance.
6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study
- Was a standalone study done? Yes. The study evaluated Strain AI's performance by comparing its output (GLS measurements) directly to the ground truth established by a reference device, without human interaction or modification of the AI's results during the performance assessment. The device is described as "a software as a medical device (SaMD)" that "functions as a post-processing tool, analyzing images after they are acquired." While "End-users have the option to accept or reject the provided measurements," the performance evaluation itself appears to be a direct comparison of the AI's output against the reference.
7. Type of Ground Truth Used
- Type of Ground Truth: The ground truth (reference data) was established using the reference device, Us2.v2 (K233676). This indicates a "device-based" or "software-based" ground truth, where the output of another legally marketed and classified device serves as the standard for comparison.
8. Sample Size for the Training Set
- Sample Size for Training Set: Not explicitly stated. The document mentions "The test data was entirely separated from the training/validation datasets acquired from independent clinical sites."
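One common way to enforce the kind of separation described above is to split at the clinical-site level, so that no site contributes to both the training/validation and test sets. The sketch below uses scikit-learn's GroupShuffleSplit with made-up site labels; it illustrates the general technique only and is not a detail from the submission.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# Illustrative exam list: one row per echo exam, tagged with its clinical site.
# Values are made up; the submission does not disclose site identifiers or counts.
exam_ids = np.arange(1000)
sites = np.random.default_rng(1).choice(["site_A", "site_B", "site_C", "site_D"], size=1000)

# Hold out whole sites so the test set shares no clinical site with training/validation.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_idx, test_idx = next(splitter.split(exam_ids, groups=sites))

assert set(sites[train_idx]).isdisjoint(set(sites[test_idx]))
print("train sites:", sorted(set(sites[train_idx])), "| test sites:", sorted(set(sites[test_idx])))
```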
9. How the Ground Truth for the Training Set Was Established
- How Ground Truth for Training Set Was Established: Not explicitly stated. The document only mentions that the AI algorithms are "trained with clinical data." It does not detail the specific method used to establish the ground truth for this training data.