Search Results
Found 2 results
510(k) Data Aggregation
(265 days)
Sonix Health
Sonix Health is intended for quantifying and reporting echocardiography for use by or on the order of a licensed physician. Sonix Health accepts DICOM-compliant medical images acquired from ultrasound imaging devices. Sonix Health is indicated for use in adult populations.
Sonix Health comes with the following functions:
- Checking ultrasound multiframe DICOM
- Echocardiography multiframe DICOM classification and automatic measurement.
- Verification of the results and making adjustments manually.
- Providing the report for analysis
Sonix Health will be offered as software only (SW only), to be installed directly on customer PC hardware. Sonix Health is DICOM compliant and is used within a local network.
Sonix Health utilizes a two-step algorithm. In the first step, a single identification model identifies the view. In the second step, a deep learning algorithm specific to the identified view is applied; these second-step algorithms are categorized as B-mode and Doppler algorithms. The main task of the Sonix Health algorithm is to identify the view and segment the anatomy in the image.
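As a rough illustration of this two-step flow, the sketch below classifies a clip into a view and then dispatches it to a view-specific measurement routine. The function names, view labels (e.g. "A4C", "MV_CW"), and model placeholders are hypothetical; the submission describes only the two-step structure, not an implementation.

```python
from typing import Callable, Dict
import numpy as np

def identify_view(frames: np.ndarray) -> str:
    """Step 1: a single identification model assigns a view label (placeholder)."""
    return "A4C"  # a real model would run inference on the frames here

# Step 2: view-specific deep learning algorithms, categorized as B-mode or Doppler.
MEASUREMENT_MODELS: Dict[str, Callable[[np.ndarray], dict]] = {
    "A4C": lambda frames: {"segmentation": None, "measurements": {}},  # B-mode branch
    "MV_CW": lambda frames: {"trace": None, "measurements": {}},       # Doppler branch
}

def analyze_clip(frames: np.ndarray) -> dict:
    """Dispatch a clip to the algorithm matching the view identified in step 1."""
    view = identify_view(frames)
    model = MEASUREMENT_MODELS.get(view)
    if model is None:
        return {"view": view, "measurements": None}  # view not supported
    return {"view": view, **model(frames)}
```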
The provided text describes the performance evaluation of a medical device named "Sonix Health" for quantifying and reporting echocardiography. Here's a breakdown of the requested information:
Device: Sonix Health (K240645)
Software Functions:
- Checking ultrasound multiframe DICOM
- Echocardiography multiframe DICOM classification and automatic measurement.
- Verification of the results and making adjustments manually.
- Providing the report for analysis.
- Utilizes a two-step algorithm: single identification model for view recognition, followed by deep learning for B-mode and Doppler algorithms. Main algorithm identifies view and segments anatomy.
1. Table of Acceptance Criteria and Reported Device Performance
| Feature | Acceptance Criteria | Reported Device Performance |
|---|---|---|
| View Recognition | Average accuracy ≥ 84% | 96.25% average accuracy for additional views |
| Auto Measure | Average correlation coefficient ≥ 0.80 (compared to manual measurements) | 0.918 average correlation coefficient (compared to manual measurements) |
| Auto Strain: LVGLS, LARS, LACts | Average correlation coefficient ≥ 0.80 (compared to manual measurements) | 0.88 average correlation coefficient |
| Auto Strain: RV Free Wall Strain | Average correlation coefficient ≥ 0.60 (compared to manual measurements) | 0.69 correlation coefficient |
| Auto Strain: Average GLS | RMSE ≤ 3.00% (compared to manual measurements) | 2.16% RMSE |
| Auto Strain: Segmental Longitudinal Strain | RMSE ≤ 7.50% (compared to manual measurements) | 6.32% RMSE |
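For context on the metrics in the table, the sketch below shows one conventional way to compute a correlation coefficient and an RMSE between automated and manual measurements using NumPy. The example values are hypothetical, and the submission does not specify the exact computation used.

```python
import numpy as np

def pearson_r(auto: np.ndarray, manual: np.ndarray) -> float:
    """Correlation coefficient between automated and manual measurements."""
    return float(np.corrcoef(auto, manual)[0, 1])

def rmse(auto: np.ndarray, manual: np.ndarray) -> float:
    """Root-mean-square error, e.g. for strain values reported in percent."""
    return float(np.sqrt(np.mean((auto - manual) ** 2)))

# Hypothetical example values, not data from the submission:
auto_gls = np.array([-18.2, -15.4, -20.1, -12.9])
manual_gls = np.array([-17.5, -16.0, -19.4, -13.8])
print(pearson_r(auto_gls, manual_gls), rmse(auto_gls, manual_gls))
```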
2. Sample Size and Data Provenance
- Total Patients: 335
- Data Provenance:
- 303 patients (90%) originated from the U.S. (Mayo Clinic in Arizona) and South Korea (Severance Hospital, Seoul).
- Specifically, 30% (93 patients) of these 303 were from U.S. hospitals.
- 70% (200 patients) of these 303 were from Korean hospitals.
- An additional 32 patients (10%) were obtained from South Korea (Severance Hospital, Seoul).
- Recruitment Type: Images were "taken for diagnostic purposes in actual clinical settings" and "acquired following the IRB procedures," suggesting a retrospective collection of existing patient data.
3. Number and Qualifications of Experts for Ground Truth
- Experts for Annotation: Two experienced sonographers with Registered Diagnostic Cardiac Sonographer (RDCS) certification.
- Supervising Experts: Two experienced cardiologists.
4. Adjudication Method for the Test Set
- The text states, "The annotation was supervised by two experienced cardiologists and the consensus annotation was used as the final ground truth." This implies a form of consensus-based adjudication, but the exact process (e.g., if initial annotations were independent, how disagreements were resolved, etc.) is not detailed beyond "consensus annotation."
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- The document does not describe a multi-reader multi-case (MRMC) comparative effectiveness study to evaluate "human readers improve with AI vs without AI assistance." The study focuses on evaluating the standalone performance of the AI model against expert manual measurements, and the device is intended for human-in-the-loop use where users review and modify results.
6. Standalone (Algorithm Only) Performance
- Yes, a standalone performance evaluation was primarily done. The metrics presented (accuracy, correlation coefficients, RMSE) directly assess the algorithm's output compared to ground truth, which was established by experts' manual measurements or reference devices. Although the device is designed for human review, the reported performance metrics quantify the automated capabilities of the software.
7. Type of Ground Truth Used
- The ground truth for the test set was established through expert consensus annotation.
- For strain measurements, the ground truth was "established by the experts with the help of the reference devices (EchoPAC for global longitudinal, segmental and RV free wall strain and TOMTEC Arena for LA reservoir and contraction strain)." This means the ground truth combines expert interpretation with measurements derived from established medical software.
8. Sample Size for the Training Set
- The document states, "The training data and validation data are distinct and independent." However, the sample size for the training set is not provided in the given text.
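As a minimal sketch of what "distinct and independent" could mean in practice, the snippet below checks that no patient appears in both the training and validation sets. The patient IDs are hypothetical, and the submission does not describe how independence was verified.

```python
# Hypothetical patient ID sets; the submission does not describe this check.
training_patient_ids = {"P001", "P002", "P003"}
validation_patient_ids = {"P101", "P102", "P103"}

overlap = training_patient_ids & validation_patient_ids
assert not overlap, f"Patients appear in both sets: {overlap}"
print("No patient-level overlap between training and validation data.")
```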
9. How Ground Truth for the Training Set Was Established
- The document explicitly states how the ground truth for the test set was established (expert consensus, aided by reference devices).
- However, the text does not describe how the ground truth for the training set was established.
(268 days)
Sonix Health
Sonix Health is intended for quantifying and reporting echocardiography for use by or on the order of a licensed physician. Sonix Health accepts DICOM-compliant medical images acquired from ultrasound imaging devices. Sonix Health is indicated for use in adult populations.
Sonix Health comes with the following functions:
- Checking ultrasound multiframe DICOM
- Echocardiography multiframe DICOM classification and automatic measurement.
- Verification of the results and making adjustments manually.
- Providing the report for analysis
Sonix Health will be offered as software only (SW only), to be installed directly on customer PC hardware. Sonix Health is DICOM compliant and is used within a local network.
Sonix Health utilizes a two-step algorithm. In the first step, a single identification model identifies the view. In the second step, a deep learning algorithm specific to the identified view is applied; these second-step algorithms are categorized as B-mode and Doppler algorithms. The main task of the Sonix Health algorithm is to identify the view and segment the anatomy in the image.
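This device description is identical to the first entry above. As a complement to the pipeline sketch shown there, the snippet below illustrates the "checking ultrasound multiframe DICOM" step using the pydicom package; the file name is hypothetical, and this is not the vendor's implementation.

```python
import pydicom

def is_ultrasound_multiframe(path: str) -> bool:
    """Return True if the file is an ultrasound DICOM object with more than one frame."""
    ds = pydicom.dcmread(path, stop_before_pixels=True)  # read the header only
    is_ultrasound = getattr(ds, "Modality", None) == "US"
    n_frames = int(getattr(ds, "NumberOfFrames", 1))
    return is_ultrasound and n_frames > 1

print(is_ultrasound_multiframe("example_echo.dcm"))  # hypothetical file name
```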
Based on the provided text, here's a detailed description of the acceptance criteria and the study that proves the device meets them:
Device Name: Sonix Health
Intended Use: Quantifying and reporting echocardiography for use by or on the order of a licensed physician. Accepts DICOM-compliant medical images acquired from ultrasound imaging devices. Indicated for use in adult populations. Ultrasound images are acquired via B (2D), M, Pulsed-wave Doppler, and Continuous-wave Doppler modes.
Acceptance Criteria and Reported Device Performance
| Metric | Acceptance Criteria (Implicit from "passed the test") | Reported Device Performance |
|---|---|---|
| View Recognition Accuracy | High accuracy (implicitly needed to "pass the test") | Average accuracy of 98.22% |
| Correlation Coefficient (Manual vs. AI Measurements) | High correlation (implicitly needed to "pass the test") | 93.98% average correlation coefficient when compared to manual measurements by participating experts |
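The submission reports only the 98.22% summary figure; the sketch below shows one plausible way an average view-recognition accuracy could be computed (per-view accuracy, then an unweighted mean across views). The view labels are hypothetical, and whether the reported figure is per-view-averaged or overall is not specified.

```python
from collections import defaultdict

def average_view_accuracy(true_views, predicted_views):
    """Per-view accuracy, then the unweighted mean across views."""
    correct, total = defaultdict(int), defaultdict(int)
    for truth, pred in zip(true_views, predicted_views):
        total[truth] += 1
        correct[truth] += int(truth == pred)
    per_view = {view: correct[view] / total[view] for view in total}
    return sum(per_view.values()) / len(per_view), per_view

# Hypothetical labels, not data from the submission:
avg, per_view = average_view_accuracy(
    ["A4C", "A4C", "PLAX", "PLAX"],
    ["A4C", "A2C", "PLAX", "PLAX"],
)
print(avg, per_view)
```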
Study Details
1. A table of acceptance criteria and the reported device performance: (See table above)
2. Sample size used for the test set and the data provenance:
- Total Test Images: 2,744 images
- B-mode: 476 images
- M-mode: 243 images
- Doppler: 2,025 images
- Data Provenance:
- Country of Origin: 2,648 images (96.5%) were from American participants; 1,264 (47.7%) came from a U.S. hospital and 1,384 (52.3%) from a South Korean hospital. (Note: these percentages appear to describe the 2,648-image "American participants" subset rather than the overall 2,744 images; there is some ambiguity, but the majority of the data is clearly U.S.-based.)
- Retrospective/Prospective: Not explicitly stated, but "collected from six centers" and "data was collected" implies retrospective collection of existing data for the test set.
- Demographics: Among the American participants, 65% were male and 35% female. Data from the representative institution showed 50.2% male, an average BMI of 22.2, and an average LVEF of 67.0%. The LVEF range for the validation datasets was 14% to 76%, with a mean of 58% and a standard deviation of 11%.
- Equipment: Data acquired using equipment from four manufacturers.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Ground Truth Annotators: Two experienced sonographers with a Registered Diagnostic Cardiac Sonographer (RDCS) certification.
- Supervising Experts: Two experienced cardiologists. The qualifications (e.g., years of experience) for these cardiologists are not specified beyond "experienced."
4. Adjudication method for the test set:
- The "consensus annotation" of the two experienced sonographers (supervised by two cardiologists) was used as the final ground truth. This implies a consensus-based adjudication, but the specific process (e.g., if initial disagreements were resolved through discussion or a third expert) is not detailed. It's essentially a (2+0) or (2+supervision) model.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and the effect size of how much human readers improve with AI vs. without AI assistance:
- No MRMC comparative effectiveness study was done to assess human reader improvement with AI assistance. The performance testing focused on the standalone algorithm's accuracy and correlation with manual measurements (ground truth), not human-AI collaboration. The document explicitly states: "No clinical testing conducted in support of substantial equivalence..."
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:
- Yes, performance metrics related to "view recognition accuracy" and "correlation coefficient when compared to manual measurements by participating experts" demonstrate a standalone evaluation of the algorithm's output against established ground truth. The device "utilizes artificial intelligence to automate previous manual quantification tasks" and then "users can review and modify the results if necessary," but the performance metrics reported are for the automated results before clinician modification.
7. The type of ground truth used:
- The ground truth for the test set was established through expert consensus (two experienced sonographers with RDCS certification, supervised by two experienced cardiologists). The "consensus annotation" was used as the final ground truth. This is a form of expert consensus.
8. The sample size for the training set:
- The exact sample size (number of images or studies) for the training set is not explicitly stated. The document mentions that "The training data was collected from six centers" and provides demographic information for "the representative institution."
9. How the ground truth for the training set was established:
- The document implies that the "training data" and "validation data" (test set) are distinct and independent. While it details the ground truth establishment for the test set (expert sonographer consensus supervised by cardiologists), it does not explicitly describe how the ground truth for the training set was established. It's reasonable to infer a similar process of expert annotation, but it's not confirmed in the provided text.