
510(k) Data Aggregation

    K Number
    K251985
    Device Name
    LOGIQ E10
    Date Cleared
    2025-10-29

    (124 days)

    Product Code
    Regulation Number
    892.1550
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    LOGIQ E10 is intended for use by a qualified physician for ultrasound evaluation of Fetal/Obstetrics; Abdominal (including Renal, Gynecology/Pelvic); Pediatric; Small Organ (Breast, Testes, Thyroid); Neonatal Cephalic; Adult Cephalic; Cardiac (Adult and Pediatric); Peripheral Vascular; Musculo-skeletal Conventional and Superficial; Urology (including Prostate); Transrectal; Transvaginal; Transesophageal and Intraoperative (Abdominal and Vascular).

    Modes of operation include: B, M, PW Doppler, CW Doppler, Color Doppler, Color M Doppler, Power Doppler, Harmonic Imaging, Coded Pulse, 3D/4D Imaging mode, Elastography, Shear Wave Elastography, Attenuation Imaging and combined modes: B/M, B/Color, B/PWD, B/Color/PWD, B/Power/PWD.

    The LOGIQ E10 is intended to be used in a hospital or medical clinic.

    Device Description

    The LOGIQ E10 is a full featured, Track 3, general purpose diagnostic ultrasound system which consists of a mobile console approximately 585 mm wide (keyboard), 991 mm deep and 1300 mm high that provides digital acquisition, processing and display capability. The user interface includes a computer keyboard, specialized controls, 12-inch high-resolution color touch screen and 23.8-inch High Contrast LED LCD monitor.

    AI/ML Overview

    Here's an analysis of the acceptance criteria and supporting studies for the LOGIQ E10 ultrasound system, derived from the provided FDA 510(k) Clearance Letter:


    1. Table of Acceptance Criteria and Reported Device Performance

    Feature/Metric | Acceptance Criteria | Reported Device Performance
    --- | --- | ---
    Auto Abdominal Color Assistant 2.0 | |
    Overall Model Detection Accuracy | $\ge 80\%$ | $94.8\%$
    Sensitivity (True Positive Rate) | $\ge 80\%$ | $0.91$
    Specificity (True Negative Rate) | $\ge 80\%$ | $0.98$
    DICE Similarity Coefficient (Segmentation Accuracy) | $\ge 0.80$ | $0.82$
    Auto Aorta Measure Assistant (Long View AP Measurement) | |
    Average Accuracy | Not explicitly stated as a target; implied by strong performance metrics | $87.2\%$ (95% CI of $\pm 1.98\%$)
    Average Absolute Error | Not explicitly stated as a target | $0.253$ cm (95% CI of $0.049$ cm)
    Limits of Agreement | Not explicitly stated as a target range | $(-0.15, 0.60)$ cm (95% CI of $(-0.26, 0.71)$)
    Auto Aorta Measure Assistant (Short View AP Measurement) | |
    Average Accuracy | Not explicitly stated as a target; implied by strong performance metrics | $92.9\%$ (95% CI of $\pm 2.02\%$)
    Average Absolute Error | Not explicitly stated as a target | $0.128$ cm (95% CI of $0.037$ cm)
    Limits of Agreement | Not explicitly stated as a target range | $(-0.21, 0.36)$ cm (95% CI of $(-0.29, 0.45)$)
    Auto Aorta Measure Assistant (Short View Trans Measurement) | |
    Average Accuracy | Not explicitly stated as a target; implied by strong performance metrics | $86.9\%$ (95% CI of $\pm 6.25\%$)
    Average Absolute Error | Not explicitly stated as a target | $0.235$ cm (95% CI of $0.110$ cm)
    Limits of Agreement | Not explicitly stated as a target range | $(-0.86, 0.69)$ cm (95% CI of $(-1.06, 0.92)$)
    Auto Common Bile Duct (CBD) Measure Assistant (Porta Hepatis measurement, without segmentation scroll edit) | |
    Average Accuracy | Not explicitly stated as a target; implied by strong performance metrics | $59.85\%$ (95% CI of $\pm 17.86\%$)
    Average Absolute Error | Not explicitly stated as a target | $1.66$ mm (95% CI of $1.02$ mm)
    Limits of Agreement | Not explicitly stated as a target range | $(-4.75, 4.37)$ mm (95% CI of $(-6.17, 5.79)$)
    Auto Common Bile Duct (CBD) Measure Assistant (Porta Hepatis measurement, with segmentation scroll edit) | |
    Average Accuracy | Not explicitly stated as a target; implied by strong performance metrics | $80.56\%$ (95% CI of $\pm 8.83\%$)
    Average Absolute Error | Not explicitly stated as a target | $0.91$ mm (95% CI of $0.45$ mm)
    Limits of Agreement | Not explicitly stated as a target range | $(-1.96, 3.25)$ mm (95% CI of $(-2.85, 4.14)$)
    Ultrasound Guided Fat Fraction (UGFF) | |
    Correlation Coefficient with MRI-PDFF (Japan Cohort) | Strong correlation confirmed | $0.87$
    Offset (UGFF vs MRI-PDFF, Japan Cohort) | Not explicitly stated as a target | $-0.32\%$
    Limits of Agreement (UGFF vs MRI-PDFF, Japan Cohort) | Not explicitly stated as a target range | $-6.0\%$ to $5.4\%$
    % Patients within $\pm 8.4\%$ difference (Japan Cohort) | Not explicitly stated as a target | $91.6\%$
    Correlation Coefficient with MRI-PDFF (US/EU Cohort) | Strong correlation confirmed | $0.90$
    Offset (UGFF vs MRI-PDFF, US/EU Cohort) | Not explicitly stated as a target | $-0.1\%$
    Limits of Agreement (UGFF vs MRI-PDFF, US/EU Cohort) | Not explicitly stated as a target range | $-3.6\%$ to $3.4\%$
    % Patients within $\pm 4.6\%$ difference (US/EU Cohort) | Not explicitly stated as a target | $95.0\%$
    Correlation Coefficient with UDFF (EU Cohort) | Strong correlation confirmed | $0.88$
    Offset (UGFF vs UDFF, EU Cohort) | Not explicitly stated as a target | $-1.2\%$
    Limits of Agreement (UGFF vs UDFF, EU Cohort) | Not explicitly stated as a target range | $-5.0\%$ to $2.6\%$
    % Patients within $\pm 4.7\%$ difference (EU Cohort) | Not explicitly stated as a target | All patients
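
    The detection and segmentation metrics above are standard quantities, although the letter does not describe how they were computed. A minimal sketch of the usual definitions is shown below; the function and variable names are illustrative and are not taken from the submission.

```python
import numpy as np

def detection_metrics(y_true, y_pred):
    """Accuracy, sensitivity (TPR) and specificity (TNR) from binary labels."""
    y_true, y_pred = np.asarray(y_true, bool), np.asarray(y_pred, bool)
    tp = np.sum(y_true & y_pred)
    tn = np.sum(~y_true & ~y_pred)
    fp = np.sum(~y_true & y_pred)
    fn = np.sum(y_true & ~y_pred)
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

def dice_coefficient(mask_pred, mask_true):
    """DICE similarity coefficient between two binary segmentation masks."""
    mask_pred = np.asarray(mask_pred, bool)
    mask_true = np.asarray(mask_true, bool)
    intersection = np.sum(mask_pred & mask_true)
    return 2.0 * intersection / (mask_pred.sum() + mask_true.sum())
```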

    2. Sample Size for Test Set and Data Provenance

    • Auto Abdominal Color Assistant 2.0:
      • Test Set Sample Size: 49 individual subjects, 1186 annotation images.
      • Data Provenance: Retrospective, all data from the USA.
    • Auto Aorta Measure Assistant:
      • Test Set Sample Size:
        • Long View Aorta: 36 subjects (11 Male, 25 Female).
        • Short View Aorta: 35 subjects (11 Male, 24 Female).
      • Data Provenance: Retrospective, from Japan (15-16 subjects) and USA (20 subjects).
    • Auto Common Bile Duct (CBD) Measure Assistant:
      • Test Set Sample Size: 25 subjects (11 Male, 14 Female).
      • Data Provenance: Retrospective, from USA (40%) and Japan (60%).
    • Ultrasound Guided Fat Fraction (UGFF):
      • Test Set Sample Size (Primary Study): 582 participants.
      • Data Provenance (Primary Study): Retrospective, Japan.
      • Test Set Sample Size (Confirmatory Study 1): 15 US patients + 5 EU patients (total 20).
      • Data Provenance (Confirmatory Study 1): Retrospective, USA and EU.
      • Test Set Sample Size (Confirmatory Study 2): 24 EU patients.
      • Data Provenance (Confirmatory Study 2): Retrospective, EU.

    3. Number of Experts and Qualifications for Ground Truth

    • Auto Abdominal Color Assistant 2.0: Not explicitly stated, but the text implies multiple "readers" to ground truth anatomical visibility. No specific qualifications are mentioned beyond "readers."
    • Auto Aorta Measure Assistant: Not explicitly stated, but the text implies multiple "readers" for measurements and an "arbitrator" to select the most accurate measurement. No specific qualifications are mentioned beyond "readers" and "arbitrator."
    • Auto Common Bile Duct (CBD) Measure Assistant: Not explicitly stated, but the text implies multiple "readers" for measurements and an "arbitrator" to select the most accurate measurement. No specific qualifications are mentioned beyond "readers" and "arbitrator."
    • Ultrasound Guided Fat Fraction (UGFF): Ground truth for the primary study was MRI Proton Density Fat Fraction (MRI-PDFF %). No human experts were involved in establishing the ground truth for UGFF, as it relies on MRI-PDFF as the reference. The correlation between UGFF and UDFF also used UDFF as a reference, not human experts.

    4. Adjudication Method for the Test Set

    • Auto Abdominal Color Assistant 2.0: Not explicitly mentioned. The process is described as "Readers to ground truth the 'anatomy' visible in static B-Mode image" (before running AI), followed by comparison to AI predictions; no adjudication beyond the initial reader input is described for ground truth generation itself. Confusion matrices were generated afterward.
    • Auto Aorta Measure Assistant: An "Arbitrator" was used to "select most accurate measurement among all readers" for the initial ground truth, which was then compared to AI baseline. This implies a 1 (arbitrator) + N (readers) adjudication method for measurement accuracy. For keystroke comparison, readers measured with and without AI.
    • Auto Common Bile Duct (CBD) Measure Assistant: An "Arbitrator" was used to "select most accurate measurement among all readers" for the initial ground truth, which was then compared to AI baseline. This implies a 1 (arbitrator) + N (readers) adjudication method for measurement accuracy. For keystroke comparison, readers measured with and without AI.
    • Ultrasound Guided Fat Fraction (UGFF): Ground truth was established by MRI-PDFF or comparison to UDFF. No human adjudication method was described for these.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Auto Aorta Measure Assistant: Yes, a comparative study was performed by comparing keystroke counts with and without AI assistance for human readers.
      • Effect Size:
        • Long View Aorta AP Measurement: Average reduction from $4.132 \pm 0.291$ keystrokes (without AI) to $1.236 \pm 0.340$ keystrokes (with AI).
        • Short View Aorta AP and Trans Measurement: Average reduction from $7.05 \pm 0.158$ keystrokes (without AI) to $2.307 \pm 1.0678$ keystrokes (with AI).
    • Auto Common Bile Duct (CBD) Measure Assistant: Yes, a comparative study was performed by comparing keystroke counts with and without AI assistance for human readers.
      • Effect Size: Average reduction of $1.62 \pm 0.375$ keystrokes (mean and standard deviation) from manual to AI-assisted measurements.
    • Other features (Auto Abdominal Color Assistant 2.0, UGFF): The documentation does not describe a MRMC study for improved human reader performance with AI assistance for these features.
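
    The keystroke comparisons above are paired before/after summaries per case; the sketch below shows one way such a reduction could be summarized. The data and function names are hypothetical and are not taken from the submission.

```python
import numpy as np

def keystroke_reduction(keystrokes_manual, keystrokes_ai):
    """Paired comparison: mean and SD of the per-case keystroke reduction."""
    manual = np.asarray(keystrokes_manual, float)
    assisted = np.asarray(keystrokes_ai, float)
    diff = manual - assisted  # positive = fewer keystrokes with AI assistance
    return diff.mean(), diff.std(ddof=1)

# Hypothetical example: five cases measured with and without AI assistance
mean_red, sd_red = keystroke_reduction([4, 5, 4, 4, 5], [1, 2, 1, 1, 2])
print(f"Average reduction: {mean_red:.2f} +/- {sd_red:.2f} keystrokes")
```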

    6. Standalone (Algorithm Only) Performance Study

    • Auto Abdominal Color Assistant 2.0: Yes, the model's accuracy (detection accuracy, sensitivity, specificity, DICE score) was evaluated in a standalone manner against the human-annotated ground truth.
    • Ultrasound Guided Fat Fraction (UGFF): Yes, the correlation and agreement of the UGFF algorithm's values were tested directly against an established reference standard (MRI-PDFF) and another device's derived fat fraction (UDFF).

    7. Type of Ground Truth Used

    • Auto Abdominal Color Assistant 2.0: Expert consensus/annotations on B-Mode images, followed by comparison to AI predictions.
    • Auto Aorta Measure Assistant: Expert consensus on measurements (human readers with arbitrator selection) and keystroke counts from these manual measurements and AI-assisted measurements.
    • Auto Common Bile Duct (CBD) Measure Assistant: Expert consensus on measurements (human readers with arbitrator selection) and keystroke counts from these manual measurements and AI-assisted measurements.
    • Ultrasound Guided Fat Fraction (UGFF): Established clinical reference standard: MRI Proton Density Fat Fraction (MRI-PDFF %). For one confirmatory study, another cleared device's derived fat fraction (UDFF) was used as a comparative reference.

    8. Sample Size for the Training Set

    • The document states that "The exams used for test/training validation purpose are separated from the ones used during training process" but does not provide the sample size for the training set itself for any of the AI features.

    9. How the Ground Truth for the Training Set was Established

    • The document implies that the ground truth for the training data was established similarly to that for the test data (e.g., expert annotation for Auto Abdominal Color Assistant, expert measurements for the Auto Aorta/CBD Measure Assistants). However, the specific methodology for establishing the training set's ground truth (e.g., number of experts, adjudication, qualifications) is not detailed in the provided text. It states only that "Before the process of data annotation, all information displayed on the device is removed and performed on information extracted purely from Ultrasound B-mode images." Separation of test and training data (by exam or site of origin) is mentioned, but not the process for creating the training set ground truth.

    K Number
    K251963
    Device Name
    LOGIQ E10s
    Date Cleared
    2025-10-29

    (125 days)

    Product Code
    Regulation Number
    892.1550
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    The LOGIQ E10s is intended for use by a qualified physician for ultrasound evaluation.

    Specific clinical applications and exam types include: Fetal / Obstetrics; Abdominal (including Renal, Gynecology / Pelvic); Pediatric; Small Organ (Breast, Testes, Thyroid); Neonatal Cephalic; Adult Cephalic; Cardiac (Adult and Pediatric); Peripheral Vascular; Musculo-skeletal Conventional and Superficial; Urology (including Prostate); Transrectal; Transvaginal; Transesophageal and Intraoperative (Abdominal and Vascular).

    Modes of operation include: B, M, PW Doppler, CW Doppler, Color Doppler, Color M Doppler, Power Doppler, Harmonic Imaging, Coded Pulse, 3D/4D Imaging mode, Elastography, Shear Wave Elastography, Attenuation Imaging and Combined modes: B/M, B/Color, B/PWD, B/Color/PWD, B/Power/PWD.

    The LOGIQ E10s is intended to be used in a hospital or medical clinic.

    Device Description

    The LOGIQ E10s is a full featured, Track 3, general purpose diagnostic ultrasound system which consists of a mobile console approximately 585 mm wide (keyboard), 991 mm deep and 1300 mm high that provides digital acquisition, processing and display capability. The user interface includes a computer keyboard, specialized controls, 12-inch high-resolution color touch screen and 23.8-inch High Contrast LED LCD monitor.

    AI/ML Overview

    The provided text describes three AI features: Auto Abdominal Color Assistant 2.0, Auto Aorta Measure Assistant, and Auto Common Bile Duct (CBD) Measure Assistant, along with a UGFF Clinical Study.

    Here's an analysis of the acceptance criteria and study details for each, where available:

    1. Table of Acceptance Criteria and Reported Device Performance

    For Auto Abdominal Color Assistant 2.0

    Acceptance Criteria | Reported Device Performance | Meets Criteria?
    --- | --- | ---
    Overall model detection accuracy (sensitivity and specificity): $\ge 80\%$ (0.80) | Accuracy: 94.8% | Yes
    Sensitivity (True Positive Rate): $\ge 80\%$ (0.80) | Sensitivity: 0.91 | Yes
    Specificity (True Negative Rate): $\ge 80\%$ (0.80) | Specificity: 0.98 | Yes
    DICE Similarity Coefficient (Segmentation Accuracy): $\ge 0.80$ | DICE score: 0.82 | Yes

    For Auto Aorta Measure Assistant

    Acceptance Criteria: No explicit numerical acceptance criteria were provided for keystrokes or measurement accuracy; the study aims to demonstrate an improvement in keystrokes and acceptable accuracy, so the results below are reported without specific acceptance targets (Meets Criteria: N/A).

    Reported Device Performance:

    • Long View Aorta: average keystrokes 4.132 (without AI) vs. 1.236 (with AI); average accuracy 87.2% (95% CI of ±1.98%); average absolute error 0.253 cm (95% CI of 0.049 cm); Limits of Agreement (-0.15, 0.60) (95% CI of (-0.26, 0.71)).
    • Short View AP Measurement: average accuracy 92.9% (95% CI of ±2.02%); average absolute error 0.128 cm (95% CI of 0.037 cm); Limits of Agreement (-0.21, 0.36) (95% CI of (-0.29, 0.45)).
    • Short View Trans Measurement: average accuracy 86.9% (95% CI of ±6.25%); average absolute error 0.235 cm (95% CI of 0.110 cm); Limits of Agreement (-0.86, 0.69) (95% CI of (-1.06, 0.92)).
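
    The letter reports average accuracy, average absolute error, and 95% confidence intervals for the AI baseline measurements but does not state how they were computed. A plausible reading, sketched below with illustrative names, is a per-case comparison of the AI measurement against the arbitrated reference, summarized with normal-approximation intervals; this is an assumption, not the submission's documented method.

```python
import numpy as np

def measurement_accuracy(ai_vals, ref_vals):
    """Average percent accuracy and average absolute error of AI baseline
    measurements against arbitrated reference values, with normal-approximation
    95% confidence intervals (illustrative sketch only)."""
    ai = np.asarray(ai_vals, float)
    ref = np.asarray(ref_vals, float)
    abs_err = np.abs(ai - ref)
    acc_pct = 100.0 * (1.0 - abs_err / ref)  # per-case relative accuracy in percent
    n = len(ref)

    def ci_half_width(x):
        return 1.96 * x.std(ddof=1) / np.sqrt(n)

    return {
        "avg_accuracy_pct": (acc_pct.mean(), ci_half_width(acc_pct)),
        "avg_abs_error": (abs_err.mean(), ci_half_width(abs_err)),
    }
```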

    For Auto Common Bile Duct (CBD) Measure Assistant

    Acceptance Criteria: No explicit numerical acceptance criteria were provided for keystrokes or measurement accuracy; the study aims to demonstrate a reduction in keystrokes and acceptable accuracy, so the results below are reported without specific acceptance targets (Meets Criteria: N/A).

    Reported Device Performance:

    • Average reduction in keystrokes (manual vs. AI): 1.62 ± 0.375.
    • Porta Hepatis measurement accuracy with segmentation scroll edit: average accuracy 80.56% (95% CI of ±8.83%); average absolute error 0.91 mm (95% CI of 0.45 mm); Limits of Agreement (-1.96, 3.25) (95% CI of (-2.85, 4.14)).
    • Porta Hepatis measurement accuracy without segmentation scroll edit: average accuracy 59.85% (95% CI of ±17.86%); average absolute error 1.66 mm (95% CI of 1.02 mm); Limits of Agreement (-4.75, 4.37) (95% CI of (-6.17, 5.79)).

    For UGFF Clinical Study

    Acceptance Criteria (implied by intent to demonstrate strong correlation) | Reported Device Performance | Meets Criteria?
    --- | --- | ---
    Strong correlation between UGFF values and MRI-PDFF (e.g., correlation coefficient $\ge 0.8$) | Original study: correlation coefficient = 0.87. Confirmatory study (US/EU): correlation coefficient = 0.90. Confirmatory study (UGFF vs UDFF): correlation coefficient = 0.88. | Yes
    Acceptable limits of agreement with MRI-PDFF (e.g., small offset and LOA, with a high percentage of patients within the LOA) | Original study: offset = -0.32%, LOA = -6.0% to 5.4%, 91.6% of patients within LOA. Confirmatory study (US/EU): offset = -0.1%, LOA = -3.6% to 3.4%, 95.0% of patients within LOA. | Yes
    No statistically significant effect of BMI, SCD, and other demographic confounders on AC, BSC, and SNR measurements (implied) | The results of the clinical study indicate that BMI, SCD, and other demographic confounders do not have a statistically significant effect on measurements of the AC, BSC, and SNR. | Yes
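
    The UGFF results are expressed as a correlation coefficient, an offset, limits of agreement, and the percentage of patients within a fixed difference of the reference. A minimal Bland-Altman-style sketch is shown below; the function name and the band argument are illustrative, not the submission's analysis code.

```python
import numpy as np

def ugff_agreement(ugff_pct, ref_pct, band):
    """Pearson correlation, mean offset, 95% limits of agreement, and the
    percentage of patients whose UGFF-reference difference lies within +/- band."""
    ugff = np.asarray(ugff_pct, float)
    ref = np.asarray(ref_pct, float)
    diff = ugff - ref
    r = np.corrcoef(ugff, ref)[0, 1]          # correlation coefficient
    offset = diff.mean()                      # mean bias of UGFF vs reference
    spread = 1.96 * diff.std(ddof=1)          # half-width of limits of agreement
    within = 100.0 * np.mean(np.abs(diff) <= band)
    return {"r": r, "offset": offset,
            "loa": (offset - spread, offset + spread),
            "pct_within_band": within}
```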

    2. Sample size used for the test set and the data provenance

    Auto Abdominal Color Assistant 2.0:

    • Sample Size: 49 individual subjects (1186 annotation images)
    • Data Provenance: Retrospective, from the USA (100%).

    Auto Aorta Measure Assistant:

    • Sample Size:
      • Long View Aorta: 36 subjects
      • Short View Aorta: 35 subjects
    • Data Provenance: Retrospective, from Japan and USA.

    Auto Common Bile Duct (CBD) Measure Assistant:

    • Sample Size: 25 subjects
    • Data Provenance: Retrospective, from USA (40%) and Japan (60%).

    UGFF Clinical Study:

    • Sample Size:
      • Original study: 582 participants
      • Confirmatory study (US/EU): 15 US patients and 5 EU patients (total 20)
      • Confirmatory study (UGFF vs UDFF): 24 EU patients
    • Data Provenance: Not explicitly stated as retrospective or prospective (the text describes a clinical study but does not characterize the data collection).
      • Original Study: Japan (Asian population)
      • Confirmatory Study (US/EU): US and EU (demographic info unavailable for EU patients, US patients: BMI 21.0-37.5, SCD 13.9-26.9)
      • Confirmatory Study (UGFF vs UDFF): EU

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    Auto Abdominal Color Assistant 2.0:

    • Number of Experts: Not specified. The text mentions "Readers to ground truth the 'anatomy'".
    • Qualifications of Experts: Not specified.

    Auto Aorta Measure Assistant:

    • Number of Experts: Not specified. The text mentions "Readers to ground truth the AP measurement..." and an "Arbitrator to select most accurate measurement among all readers." This implies multiple readers and a single arbitrator.
    • Qualifications of Experts: Not specified.

    Auto Common Bile Duct (CBD) Measure Assistant:

    • Number of Experts: Not specified. The text mentions "Readers to ground truth the diameter..." and an "Arbitrator to select most accurate measurement among all readers." This implies multiple readers and a single arbitrator.
    • Qualifications of Experts: Not specified.

    UGFF Clinical Study:

    • Number of Experts: Not applicable, as ground truth was established by MRI-PDFF measurements, not expert consensus on images.

    4. Adjudication method for the test set

    Auto Abdominal Color Assistant 2.0:

    • Adjudication Method: Not explicitly described as a specific method (e.g., 2+1). The process mentions "Readers to ground truth" and then comparison to AI predictions, but no specific adjudication among multiple readers' initial ground truths.

    Auto Aorta Measure Assistant:

    • Adjudication Method: Implies an arbitrator-based method. "Arbitrator to select most accurate measurement among all readers." This suggests multiple readers provide measurements, and a single arbitrator makes the final ground truth selection.

    Auto Common Bile Duct (CBD) Measure Assistant:

    • Adjudication Method: Implies an arbitrator-based method. "Arbitrator to select most accurate measurement among all readers." Similar to the Aorta assistant.

    UGFF Clinical Study:

    • Adjudication Method: Not applicable. Ground truth was established by MRI-PDFF measurements.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

    Auto Abdominal Color Assistant 2.0:

    • MRMC Study: Not explicitly stated as a comparative effectiveness study showing human improvement. The study focuses on the algorithm's performance against ground truth.
    • Effect Size (Human Improvement with AI): Not reported.

    Auto Aorta Measure Assistant:

    • MRMC Study: Yes, an implicit MRMC study comparing human performance with and without AI. Readers performed measurements with and without AI assistance.
    • Effect Size (Human Improvement with AI):
      • Long View Aorta (Keystrokes): Average keystrokes reduced from 4.132 (without AI) to 1.236 (with AI).
      • Short View Aorta (Keystrokes): Average keystrokes reduced from 7.05 (without AI) to 2.307 (with AI).
      • (No specific improvement in diagnostic accuracy for human readers with AI is stated; the study primarily addresses efficiency via keystroke counts.)

    Auto Common Bile Duct (CBD) Measure Assistant:

    • MRMC Study: Yes, an implicit MRMC study comparing human performance with and without AI. Readers performed measurements with and without AI assistance.
    • Effect Size (Human Improvement with AI):
      • Porta Hepatis CBD (Keystrokes): Average reduction in keystrokes for measurements with AI vs. manually is 1.62 +/- 0.375.
      • (No specific improvement in diagnostic accuracy for human readers with AI is stated; the study primarily addresses efficiency via keystroke counts.)

    UGFF Clinical Study:

    • MRMC Study: No, this was a standalone algorithm performance study compared to a reference standard (MRI-PDFF) and a predicate device (UDFF). It did not involve human readers using the AI tool.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    Auto Abdominal Color Assistant 2.0:

    • Standalone Performance: Yes. The reported accuracy, sensitivity, specificity, and DICE score are for the algorithm's performance.

    Auto Aorta Measure Assistant:

    • Standalone Performance: Yes, implicitly. The "AI baseline measurement" was compared for accuracy against the arbitrator-selected ground truth. While keystrokes involved human interaction to use the AI, the measurement accuracy is an algorithm output.

    Auto Common Bile Duct (CBD) Measure Assistant:

    • Standalone Performance: Yes, implicitly. The "AI baseline measurement" was compared for accuracy against the arbitrator-selected ground truth.

    UGFF Clinical Study:

    • Standalone Performance: Yes. The study directly assesses the correlation and agreement of the UGFF algorithm's output with MRI-PDFF and another ultrasound-derived fat fraction algorithm.

    7. The type of ground truth used

    Auto Abdominal Color Assistant 2.0:

    • Ground Truth Type: Expert consensus for anatomical visibility ("Readers to ground truth the 'anatomy' visible in static B-Mode image.")

    Auto Aorta Measure Assistant:

    • Ground Truth Type: Expert consensus from multiple readers, adjudicated by an arbitrator, for specific measurements ("Arbitrator to select most accurate measurement among all readers.")

    Auto Common Bile Duct (CBD) Measure Assistant:

    • Ground Truth Type: Expert consensus from multiple readers, adjudicated by an arbitrator, for specific measurements ("Arbitrator to select most accurate measurement among all readers.")

    UGFF Clinical Study:

    • Ground Truth Type: Outcomes data / Quantitative Reference Standard: MRI Proton Density Fat Fraction (MRI-PDFF %).

    8. The sample size for the training set

    Auto Abdominal Color Assistant 2.0:

    • Training Set Sample Size: Not specified beyond "The exams used for test/training validation purpose are separated from the ones used during training process".

    Auto Aorta Measure Assistant:

    • Training Set Sample Size: Not specified beyond "The exams used for regulatory validation purpose are separated from the ones used during model development process".

    Auto Common Bile Duct (CBD) Measure Assistant:

    • Training Set Sample Size: Not specified beyond "The exams used for regulatory validation purpose are separated from the ones used during model development process".

    UGFF Clinical Study:

    • Training Set Sample Size: Not specified. The study describes validation but not the training phase.

    9. How the ground truth for the training set was established

    Auto Abdominal Color Assistant 2.0:

    • Training Set Ground Truth: Not explicitly detailed, but implied to be similar to the test set ground truthing process: "Information extracted purely from Ultrasound B-mode images" and "Readers to ground truth the 'anatomy'".

    Auto Aorta Measure Assistant:

    • Training Set Ground Truth: Not explicitly detailed, but implied to be similar to the test set ground truthing process: "Information extracted purely from Ultrasound B-mode images" and "Readers to ground truth the AP measurement...".

    Auto Common Bile Duct (CBD) Measure Assistant:

    • Training Set Ground Truth: Not explicitly detailed, but implied to be similar to the test set ground truthing process: "Information extracted purely from Ultrasound B-mode images" and "Readers to ground truth the diameter...".

    UGFF Clinical Study:

    • Training Set Ground Truth: Not specified for the training set, but for the validation set, the ground truth was MRI-PDFF measurements.

    K Number
    K251322
    Date Cleared
    2025-07-25

    (87 days)

    Product Code
    Regulation Number
    892.1550
    Reference & Predicate Devices
    N/A
    Predicate For
    N/A
    Intended Use

    The Venue, Venue Go, Venue Fit and Venue Sprint are general purpose diagnostic ultrasound systems for use by qualified and trained healthcare professionals or practitioners that are legally authorized or licensed by law in the country, state or other local municipality in which he or she practices, for ultrasound imaging, measurement, display and analysis of the human body and fluid. The users may or may not be working under supervision or authority of a physician. Users may also include medical students working under the supervision or authority of a physician during their education / training.

    Venue, Venue Go and Venue Fit are intended to be used in a hospital or medical clinic. Venue, Venue Go and Venue Fit clinical applications include: abdominal (GYN and Urology), thoracic/pleural, ophthalmic, Fetal/OB, Small Organ (including breast, testes, thyroid), Vascular/Peripheral vascular, neonatal and adult cephalic, pediatric, musculoskeletal (conventional and superficial), cardiac (adults and pediatric), Transrectal, Transvaginal, Transesophageal, Intraoperative (vascular) and interventional guidance (includes tissue biopsy, fluid drainage, vascular and non-vascular access). Modes of operation include: B, M, PW Doppler, CW Doppler, Color Doppler, Color M Doppler, Power Doppler, Harmonic Imaging, Coded Pulse and Combined modes: B/M, B/Color M, B/PWD, B/Color/PWD, B/Power/PWD, B/CWD, B/Color/CWD.

    The Venue Sprint is intended to be used in a hospital, medical clinic, home environment and road/air ambulance. Venue Sprint clinical applications include: abdominal (GYN and Urology), thoracic/pleural, ophthalmic, Fetal/OB, Small Organ (including breast, testes, thyroid), Vascular/Peripheral vascular, neonatal and adult cephalic, pediatric, musculoskeletal (conventional and superficial), cardiac (adults and pediatric, 40 kg and above) and interventional guidance (includes free hand tissue biopsy, fluid drainage, vascular and non-vascular access). Modes of operation include: B, M, PW Doppler, Color Doppler and Harmonic Imaging.

    Device Description

    Venue, Venue Go, Venue Fit and Venue Sprint are general-purpose diagnostic ultrasound systems intended for use by qualified and trained healthcare professionals to evaluate the body by ultrasound imaging and fluid flow analysis.

    The systems utilize a variety of linear, convex, and phased array transducers which provide high imaging capability, supporting all standard acquisition modes.

    The systems have a small footprint that easily fits into tight spaces and can be positioned to accommodate the sometimes-awkward work settings of the point-of-care user.

    The Venue is a mobile system, the Venue Go and Venue Fit are compact, portable systems that can be hand carried using an integrated handle, placed on a horizontal surface, attached to a mobile cart or mounted on the wall. Venue, Venue Go and Venue Fit have a high-resolution color LCD monitor, with a simple, multi-touch user interface that makes the systems intuitive.

    The Venue Sprint is used together with the Vscan Air probes and provides the user interface for control of the probes and the needed software functionality for analysis of the ultrasound images and saving/storage of the related images and videos.

    The Venue, Venue Go, Venue Fit and Venue Sprint systems can be powered through an electrical wall outlet for long-term use or from an internal battery for a short time with full functionality and scanning. A barcode reader and RFID scanner are available as additional input devices. The systems meet DICOM requirements to support users' image storage and archiving needs and allow for output to printing devices.

    The Venue, Venue Go and Venue Fit systems are capable of displaying the patient's ECG trace synchronized to the scanned image. This allows the user to view an image from a specific time of the ECG signal which is used as an input for gating during scanning. The ECG signal can be input directly from the patient or as an output from an ECG monitoring device. ECG information is not intended for monitoring or diagnosis. Compatible biopsy kits can be used for needle-guidance procedures.

    AI/ML Overview

    The provided document, a 510(k) Clearance Letter and Submission Summary, primarily focuses on the substantial equivalence of the GE Healthcare Venue series of diagnostic ultrasound systems to previously cleared predicate devices. It specifically details the "Auto Bladder Volume (ABV)" feature as an AI-powered component and provides a summary of its testing.

    Here's a breakdown of the acceptance criteria and the study proving the device meets them, based only on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance (for Auto Bladder Volume - ABV)

    Acceptance Criteria | Reported Device Performance
    --- | ---
    At least 90% success rate in automatic caliper placement for bladder volume measurements when the bladder wall is entirely visualized. | Automatic caliper placement success rate: 95.09% (with a 95% confidence level)
    Performance demonstrated consistent across key subgroups, including subjects with known BMI (healthy weight, overweight, obese). | Healthy weight (BMI 18.5-24.9): 95.64%. Overweight (BMI 25-29.9): 95.59%. Obese (BMI over 30): 92.6%.
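
    The letter reports the caliper placement success rate with a 95% confidence level but does not name the interval method. One common choice for a binomial success rate is the Wilson score interval, sketched below; the counts in the usage lines are hypothetical, since only the rates are reported.

```python
import math

def success_rate_ci(successes, total, z=1.96):
    """Wilson score 95% confidence interval for a binomial success rate,
    e.g. automatic caliper placement success."""
    p = successes / total
    denom = 1 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    half = z * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2)) / denom
    return p, (centre - half, centre + half)

# Hypothetical counts; the letter reports only the 95.09% overall rate
print(success_rate_ci(1782, 1874))   # overall placement success
print(success_rate_ci(430, 464))     # a hypothetical BMI subgroup
```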

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set (Verification Dataset) Sample Size: 1874 images from 101 individuals.
    • Data Provenance:
      • Country of Origin: USA and Israel.
      • Retrospective or Prospective: Not explicitly stated as either retrospective or prospective. However, the description of "data collected from several different Console variants" for training and verification suggests pre-existing data, which often leans towards a retrospective collection.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    • Number of Experts: Not explicitly stated. The document refers to "annotators" who performed manual annotation.
    • Qualifications of Experts: Not explicitly stated. The annotators are described as performing "manual annotation," implying they are skilled in this task, but specific qualifications (e.g., radiologists, sonographers, years of experience) are not provided.

    4. Adjudication Method for the Test Set

    • Adjudication Method: Not explicitly stated. The document mentions "annotators performed manual annotation," but does not detail if multiple annotators were used for each case or any specific adjudication process (e.g., 2+1, 3+1 consensus).

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was an MRMC study done? No. The document states: "The subjects of this premarket submission, Venue, Venue Go, Venue Fit and Venue Sprint, did not require clinical studies to support substantial equivalence." The testing described for ABV is a standalone algorithm performance validation against established ground truth, not a comparative human-AI study.
    • Effect Size of Human Readers Improvement: Not applicable, as no MRMC study was performed.

    6. Standalone (Algorithm Only) Performance Study

    • Was a standalone study done? Yes. The "AI Summary of Testing" section describes a study for the Auto Bladder Volume (ABV) feature, which assesses the algorithm's "automatic caliper placement success rate" against manually established ground truth. This is a standalone performance evaluation of the algorithm.

    7. Type of Ground Truth Used (for ABV Test Set)

    • Ground Truth Type: Expert consensus/manual annotation. The document states: "Ground truth annotations of the verification dataset were obtained as follows: In all Training/Validation and Verification datasets, annotators performed manual annotation on images converted from DICOM files." They identified "landmarks, which represent the bladder edges," corresponding to standard measurement locations.

    8. Sample Size for the Training Set (for ABV)

    • Training Set Sample Size: Total dataset included 8,392 images from 496 individuals. Of these, 1,874 were used for the verification dataset, and "the rest" were used for training/validation. This implies the training/validation set would be 8392 - 1874 = 6518 images from the remaining individuals not included in the verification set.

    9. How the Ground Truth for the Training Set Was Established (for ABV)

    • Ground Truth Establishment: Similar to the verification dataset, "annotators performed manual annotation on images converted from DICOM files" for both Training/Validation and Verification datasets. They chose "4-6 images that represent different bladder volume status" for each individual and annotated "4 different landmarks" per view (transverse and longitudinal) representing bladder edges.

    K Number
    K251342
    Date Cleared
    2025-07-16

    (77 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    EchoPAC Software Only / EchoPAC Plug-in is intended for diagnostic review and analysis of ultrasound images, patient record management and reporting, for use by, or on the order of a licensed physician. EchoPAC Software Only / EchoPAC Plug-in allows post-processing of raw data images from GE ultrasound scanners and DICOM ultrasound images.

    Ultrasound images are acquired via B (2D), M, Color M modes, Color, Power, Pulsed and CW Doppler modes, Coded Pulse, Harmonic, 3D, and Real time (RT) 3D Mode (4D).

    Clinical applications include: Fetal/Obstetrics; Abdominal (including renal and GYN); Urology (including prostate); Pediatric; Small organs (breast, testes, thyroid); Neonatal and Adult Cephalic; Cardiac (adult and pediatric); Peripheral Vascular; Transesophageal (TEE); Musculo-skeletal Conventional; Musculo-skeletal Superficial; Transrectal (TR); Transvaginal (TV); Intraoperative (vascular); Intra-Cardiac; Thoracic/Pleural and Intra-Luminal.

    Device Description

    EchoPAC Software Only / EchoPAC Plug-in provides image processing, annotation, analysis, measurement, report generation, communication, storage and retrieval functionality to ultrasound images that are acquired via the GE Healthcare Vivid family of ultrasound systems, as well as DICOM images from other ultrasound systems. EchoPAC Software Only will be offered as SW only to be installed directly on customer PC hardware and EchoPAC Plug-in is intended to be hosted by a generalized PACS host workstation. EchoPAC Software Only / EchoPAC Plug-in is DICOM compliant, transferring images and data via LAN between systems, hard copy devices, file servers and other workstations.

    AI/ML Overview

    The provided 510(k) clearance letter and summary discuss the EchoPAC Software Only / EchoPAC Plug-in, including a new "AI Cardiac Auto Doppler" feature. The acceptance criteria and the study proving the device meets these criteria are primarily detailed for this AI-driven feature.

    Here's an organized breakdown of the information:


    1. Acceptance Criteria and Reported Device Performance (AI Cardiac Auto Doppler)

    Acceptance Criteria | Reported Device Performance
    --- | ---
    Feasibility score of more than 95% | The verification requirement included a step to check for a feasibility score of more than 95% (implying this was met for the AI Cardiac Auto Doppler).
    Expected accuracy threshold, calculated as the mean absolute difference in percentage for each measured parameter | The verification requirement included a step to check mean percent absolute error across all cardiac cycles against a threshold. All clinical parameters, as performed by AI Cardiac Auto Doppler without user edits, passed this check, indicating that the observed accuracy of each individual clinical parameter met the acceptance criteria.
    Tissue Doppler performance metric (threshold not explicitly stated; comparative values for BMI groups are provided) | BMI < 25: mean performance metric = -0.002 (SD = 0.077). BMI $\ge$ 25: mean performance metric = -0.006 (SD = 0.081).
    Flow Doppler performance metric (threshold not explicitly stated; comparative values for BMI groups are provided) | BMI < 25: mean performance metric = 0.021 (SD = 0.073). BMI $\ge$ 25: mean performance metric = 0.003 (SD = 0.057).
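
    The feasibility and accuracy checks described above amount to a feasibility rate and a mean percent absolute error compared against thresholds. The sketch below is illustrative only; the letter does not give the threshold values or the exact computation, so they are passed in as parameters.

```python
import numpy as np

def auto_doppler_checks(ai_vals, manual_vals, feasible_flags,
                        mape_threshold, feasibility_threshold=0.95):
    """Feasibility rate and mean percent absolute error across cardiac cycles,
    each compared against its acceptance threshold (illustrative names;
    thresholds supplied by the caller, not taken from the letter)."""
    ai = np.asarray(ai_vals, float)
    manual = np.asarray(manual_vals, float)
    mape = 100.0 * np.mean(np.abs(ai - manual) / np.abs(manual))
    feasibility = np.mean(np.asarray(feasible_flags, bool))
    return {
        "feasibility": feasibility,
        "feasibility_ok": feasibility > feasibility_threshold,
        "mape_pct": mape,
        "accuracy_ok": mape <= mape_threshold,
    }
```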

    2. Sample Size and Data Provenance for the Test Set

    • Sample Size:

      • Tissue Doppler: 4106 recordings from 805 individuals.
      • Doppler Trace: 3390 recordings from 1369 individuals.
      • BMI Sub-analysis: 41 patients, 433 Doppler measurements (subset of Vivid Pioneer dataset).
    • Data Provenance: Retrospective, collected from standard clinical practices.

      • Countries of Origin: USA (several locations), Australia, France, Spain, Norway, Italy, Germany, Thailand, Philippines.

    3. Number of Experts and Qualifications for Ground Truth

    • Number of Experts:

      • Annotators: Two cardiologists.
      • Review Panel: Five clinical experts.
    • Qualifications of Experts:

      • Annotators: Cardiologists, implying medical expertise in cardiac imaging and diagnosis. They followed US ASE (American Society of Echocardiography) based annotation guidelines.
      • Review Panel: Clinical experts, implying medical professionals with experience in the relevant clinical domain.

    4. Adjudication Method for the Test Set

    The ground truth establishment process involved:

    • Two cardiologists performed initial annotations.
    • A review panel of five clinical experts provided feedback on these annotations.
    • Annotations were corrected (as needed) until a consensus agreement was achieved between the annotators and reviewers. This suggests an iterative consensus-based adjudication method.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • No MRMC comparative effectiveness study was explicitly mentioned. The provided document focuses on the standalone performance of the AI algorithm against expert-derived ground truth, not human-in-the-loop performance.
    • Therefore, an effect size of how much human readers improve with AI vs. without AI assistance is not provided.

    6. Standalone (Algorithm Only) Performance

    • Yes, a standalone performance evaluation was done. The "AI Auto Doppler Summary of Testing" section describes the performance of the AI Cardiac Auto Doppler algorithm itself, without human intervention for the critical performance metrics (e.g., "All clinical parameters, as performed by AI Cardiac Auto Doppler without user edits passed this check").

    7. Type of Ground Truth Used

    • The ground truth was established by expert consensus (two cardiologists performing annotations, reviewed and corrected by a panel of five clinical experts until consensus).
    • It was based on manual measurements and assessments of Doppler signal quality and ECG signal quality on curated images, following US ASE based annotation guidelines.

    8. Sample Size for the Training Set

    • Tissue Doppler: 1482 recordings from 4 unique clinical sites.
    • Doppler Trace: 2070 recordings from 4 unique clinical sites.

    9. How the Ground Truth for the Training Set Was Established

    • The ground truth for both development (training) and verification (testing) datasets was established using the same "truthing" process:
      • Annotators (two cardiologists) performed manual measurements after assessing Doppler signal quality and ECG signal quality of curated images.
      • These annotations followed US ASE based annotation guidelines.
      • A review panel of five clinical experts provided feedback, and corrections were made until a consensus agreement was achieved between the annotators and reviewers.
    • It is explicitly stated that the development dataset was selected from clinical sites not used for the testing dataset, ensuring independence between training and test data.
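
    The site-level independence between development and testing data noted above is typically enforced with a group-aware split. A minimal sketch follows, assuming each recording carries a clinical-site identifier; this is an assumption for illustration, not the submission's tooling.

```python
def split_by_site(recordings, test_sites):
    """Split recordings into development and verification sets by clinical site,
    so that no site contributes data to both sets."""
    dev, test = [], []
    for rec in recordings:  # each rec is assumed to be a dict with a "site" key
        (test if rec["site"] in test_sites else dev).append(rec)
    return dev, test

# Hypothetical usage
recs = [{"site": "A", "id": 1}, {"site": "B", "id": 2}, {"site": "C", "id": 3}]
dev_set, test_set = split_by_site(recs, test_sites={"C"})
```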

    K Number
    K251169
    Device Name
    Vivid Pioneer
    Date Cleared
    2025-07-10

    (86 days)

    Product Code
    Regulation Number
    892.1550
    Reference & Predicate Devices
    N/A
    Predicate For
    N/A
    Intended Use

    Vivid Pioneer is a general-purpose ultrasound system, specialized for use in cardiac imaging. It is intended for use by, or under the direction of a qualified and trained physician or sonographer for ultrasound imaging, measurement, display and analysis of the human body and fluid.

    Vivid Pioneer is intended for use in a hospital environment including echo lab, other hospital settings, operating room, Cath lab and EP lab or in private medical offices. The systems support the following clinical applications:

    Fetal/Obstetrics, Abdominal (including renal, GYN), Pediatric, Small Organ (breast, testes, thyroid), Neonatal Cephalic, Adult Cephalic, Cardiac (adult and pediatric), Peripheral Vascular, Musculo-skeletal Conventional, Musculo-skeletal Superficial, Urology (including prostate), Transesophageal, Transvaginal, Transrectal, Intra-cardiac, Intra-luminal and Interventional Guidance (including Biopsy, Vascular Access), Thoracic/Pleural and Intraoperative (vascular).

    Modes of operation include: 3D, Real time (RT) 3D Mode (4D), B, M, PW Doppler, CW Doppler, Color Doppler, Color M Doppler, Power Doppler, Harmonic Imaging, Coded Pulse and Combined modes: B/M, B/Color M, B/PWD or CWD, B/Color/PWD or CWD, B/Power/PWD.

    Device Description

    The proposed Vivid Pioneer is a general purpose, Track 3, diagnostic ultrasound system, which is primarily intended for cardiac imaging and analysis but also includes vascular and general radiology applications. It provides digital acquisition, processing, display and analysis capabilities. It consists of a mobile console with a height-adjustable control panel, color LCD touch panel, and a display monitor.

    Vivid Pioneer includes a variety of electronic array transducers operating in linear, curved, sector/phased array, matrix array or dual array format, including dedicated CW transducers and real time 3D transducer. The proposed Vivid Pioneer can be used with the stated compatible OEM ICE transducers. The system includes capability to output data to other devices like printing devices.

    The user-interface includes an operator control panel, a 23.8" High-Definition Ultrasound LCD type of display monitor (mounted on an arm for rotation and / or adjustment of height), a layout of pre-defined user controls (hard-keys) and a 15.6-inch multi-touch LCD panel with mode-and operation dependent soft-keys.

    The operator panel also includes two loudspeakers for audio, shelves for convenient placement of papers or accessories, and 6 holders with cable management for the connected transducers.

    The lower console is mounted on 4 rotational wheels with brakes, for ergonomic transport and safe parking. The lower console also includes all electronics for transmit and receive of ultrasound data, ultrasound signal processing, software computing, hardware for image storage, hard copy printing, and network access to the facility through both LAN and wireless (supported by use of a wireless LAN USB-adapter) connection.

    AI/ML Overview

    This document describes the acceptance criteria and study proving the device meets the criteria for two AI features of the Vivid Pioneer Ultrasound System: AI Cardiac Auto Doppler and AI FlexiViews LAA.


    1. Table of Acceptance Criteria and Reported Device Performance

    AI Cardiac Auto Doppler

    Acceptance Criteria | Reported Device Performance
    --- | ---
    Feasibility score of > 95% | All clinical parameters, as performed by AI Cardiac Auto Doppler without user edits, passed the check of mean percent absolute error across all cardiac cycles against a threshold, which indirectly indicates that the required feasibility was achieved.
    Expected accuracy threshold, calculated as the mean absolute difference in percentage for each measured parameter | All clinical parameters, as performed by AI Cardiac Auto Doppler without user edits, passed this check.
    Mean percent absolute error across all cardiac cycles against a threshold | All clinical parameters, as performed by AI Cardiac Auto Doppler without user edits, passed this check.
    Consistent model performance across BMI groups (<25 and $\ge$ 25), using a predefined metric quantifying agreement between manual and AI-derived peak velocities | Tissue Doppler: mean performance metric = -0.002 (SD = 0.077) for BMI < 25; -0.006 (SD = 0.081) for BMI $\ge$ 25. Flow Doppler: mean performance metric = 0.021 (SD = 0.073) for BMI < 25; 0.003 (SD = 0.057) for BMI $\ge$ 25.
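
    The "predefined metric quantifying agreement between manual and AI-derived peak velocities" is not defined in the text. One plausible form, sketched below with illustrative names, is a normalized difference between AI and manual peak velocities summarized within each BMI stratum; this is an assumption, not the submission's metric.

```python
import numpy as np

def agreement_by_bmi(ai_peak, manual_peak, bmi, cutoff=25.0):
    """Mean and SD of a normalized AI-vs-manual peak-velocity difference,
    reported separately for BMI < cutoff and BMI >= cutoff."""
    ai = np.asarray(ai_peak, float)
    manual = np.asarray(manual_peak, float)
    bmi = np.asarray(bmi, float)
    metric = (ai - manual) / manual  # one plausible agreement metric
    out = {}
    for name, mask in (("BMI < 25", bmi < cutoff), ("BMI >= 25", bmi >= cutoff)):
        out[name] = (metric[mask].mean(), metric[mask].std(ddof=1))
    return out
```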

    AI FlexiViews LAA

    Acceptance Criteria | Reported Device Performance
    --- | ---
    Greater than 80% success rate of LAA region localization and landmark extraction | The model achieved a verification success rate of 85%, with a sensitivity of 84.91% and a specificity of 91.82%. Consistent model performance was observed across TEE angles (0 to 100 degrees), with a success rate of 80% or higher. Strong model performance was observed for individuals with a BMI above 25 (over 85% accuracy).

    2. Sample Size Used for the Test Set and Data Provenance

    AI Cardiac Auto Doppler:

    • Tissue Doppler test set: 4106 recordings from 805 individuals.
    • Doppler Trace test set: 3390 recordings from 1369 individuals.
    • Data Provenance: Retrospective, collected from USA (several locations), Australia, France, Spain, Norway, Italy, Germany, Thailand, Philippines.

    AI FlexiViews LAA:

    • Test set: 342 recordings from 84 individuals.
    • Data Provenance: Retrospective, collected from USA, Norway, Italy, France, Philippines.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    AI Cardiac Auto Doppler:

    • Experts for annotations: Two cardiologists.
    • Review panel for consensus: Five clinical experts.
    • Qualifications: The document specifies "cardiologists" and "clinical experts" but does not explicitly state years of experience or board certification details.

    AI FlexiViews LAA:

    • Experts for annotations: Two cardiologists.
    • Supervision for annotations: Two US certified clinicians.
    • Review panel for consensus: Three clinical experts.
    • Qualifications: The document specifies "cardiologists" and "US certified clinicians" and "clinical experts" but does not explicitly state years of experience or board certification details.

    4. Adjudication Method for the Test Set

    AI Cardiac Auto Doppler:

    • Annotations were performed by two cardiologists.
    • A review panel of five clinical experts provided feedback.
    • Annotations were corrected (as needed) until a consensus agreement was achieved between the annotators and reviewers. This suggests an adjudication method aimed at reaching a single agreed-upon ground truth.

    AI FlexiViews LAA:

    • Annotations were performed by two cardiologists, supervised by two US certified clinicians.
    • A review panel of three clinical experts provided feedback.
    • Annotations were corrected (as needed) until a consensus agreement was achieved between the annotators and reviewers. Similar to Auto Doppler, this indicates a consensus-based adjudication.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    The provided text does not mention a multi-reader multi-case (MRMC) comparative effectiveness study to assess how much human readers improve with AI vs. without AI assistance for either AI Cardiac Auto Doppler or AI FlexiViews LAA. The evaluation focused on the standalone performance of the AI algorithms against expert-derived ground truth.


    6. Standalone Performance (Algorithm Only)

    Yes, standalone (algorithm only without human-in-the-loop performance) studies were done for both AI features.

    • AI Cardiac Auto Doppler: Performance was evaluated based on the AI algorithm's measurements directly compared to expert-derived ground truth. The verification explicitly states "AI Cardiac Auto Doppler without user edits passed this check."
    • AI FlexiViews LAA: The "model achieved a verification success rate of 85%" based on its localization and landmark extraction, directly reflecting standalone performance.

    7. Type of Ground Truth Used

    Expert Consensus.

    For both AI Cardiac Auto Doppler and AI FlexiViews LAA, the ground truth was established through:

    • Manual measurements/annotations performed by cardiologists.
    • Assessment of Doppler/ECG signal quality.
    • Supervision by US certified clinicians (for LAA).
    • Review and consensus agreement among a panel of clinical experts.

    8. Sample Size for the Training Set

    AI Cardiac Auto Doppler:

    • Tissue Doppler development dataset: 1482 recordings from 4 unique clinical sites.
    • Doppler Trace development dataset: 2070 recordings from 4 unique clinical sites.

    AI FlexiViews LAA:

    • Total development dataset: 612 recordings from 5 unique clinical sites.

    9. How the Ground Truth for the Training Set Was Established

    The ground truth for the development (training/validation) datasets was established in the same manner as the ground truth for the test sets:

    • For both AI Cardiac Auto Doppler and AI FlexiViews LAA:
      • Annotators (cardiologists, supervised by US certified clinicians for LAA) performed manual measurements/annotations after assessing image quality (Doppler signal quality and ECG signal quality for Auto Doppler, LAA contour and specific points for FlexiViews LAA).
      • Annotations followed US ASE (American Society of Echocardiography) based annotation guidelines.
      • A review panel of clinical experts (five for Auto Doppler, three for FlexiViews LAA) provided feedback.
      • Annotations were corrected (as needed) until a consensus agreement was achieved between the annotators and reviewers.

    K Number
    K250543
    Date Cleared
    2025-05-29

    (94 days)

    Product Code
    Regulation Number
    892.1550
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    Voluson™ Performance 16 / Voluson™ Performance 18 are a general-purpose diagnostic ultrasound system intended for use by a qualified and trained healthcare professional that are legally authorized or licensed by law in the country, state or other local municipality in which he or she practices for ultrasound imaging, measurement, display and analysis of the human body and fluid. The users may or may not be working under supervision or authority of a physician. Voluson™ Performance 16 / Voluson™ Performance 18 clinical applications include: Fetal/Obstetrics; Abdominal (including Renal and Gynecology/ Pelvic); Pediatric; Small Organ (Breast, Testes, Thyroid, etc.); Neonatal Cephalic; Adult Cephalic; Cardiac (Adult and Pediatric); Peripheral Vascular (PV); Musculo-skeletal Conventional and Superficial; Transrectal (including Urology/Prostate) (TR); Transvaginal (TV).

    Modes of operation include: B, M, AMM, PW Doppler, CW Doppler, Color Doppler, Color M Doppler, Power Doppler, HD-Flow, Harmonic Imaging, Coded Pulse, 3D/4D Imaging mode, Elastography, Contrast and Combined modes: B/M, B/Color, B/PWD, B/Power/PWD. The Voluson™ Performance 16 / Voluson™ Performance 18 systems are intended to be used in a hospital or medical clinic.

    Device Description

    The systems are full-featured Track 3 ultrasound systems, primarily for general radiology use and specialized for OB/GYN with particular features for real-time 3D/4D acquisition. They consist of a mobile console with keyboard control panel; color LCD/TFT touch panel, color video display and optional image storage and printing devices. They provide high performance ultrasound imaging and analysis and have comprehensive networking and DICOM capability. They utilize a variety of linear, curved linear, matrix phased array transducers including mechanical and electronic scanning transducers, which provide accurate real-time three-dimensional imaging supporting all standard acquisition modes.

    AI/ML Overview

    Based on the provided FDA 510(k) clearance letter, the device in question, Voluson™ Performance 16/18, is a general-purpose diagnostic ultrasound system. The document explicitly states that "The subject of this premarket submission, Voluson™ Performance 16/18 did not require clinical studies to support substantial equivalence."

    This means that no clinical study was conducted to prove the device meets specific acceptance criteria based on its performance in a clinical setting against a defined ground truth. Instead, the substantial equivalence determination relies on comparisons to predicate devices, non-clinical tests (acoustic output, biocompatibility, electrical/mechanical safety, etc.), and the migration of existing, already-cleared AI features.

    Therefore, many of the requested details about acceptance criteria, study methodologies, and performance metrics (clinical study details, sample sizes, expert qualifications, ground truth, MRMC studies, standalone performance) are not available in this document because a clinical performance study was not deemed necessary for this 510(k) clearance.

    Here's a breakdown of what can be extracted from the document:


    1. A table of acceptance criteria and the reported device performance:

    Since no clinical performance study was conducted to establish new acceptance criteria for direct device performance in terms of diagnostic accuracy or reader improvement, a table of this nature cannot be provided from this document. The "acceptance criteria" here are related to non-clinical safety and performance standards for an ultrasound system, and the "reported device performance" is a statement of compliance with these standards and equivalence to predicates.

    | Acceptance Criteria Category | Specific Criteria (as implied by document) | Reported Device Performance |
    |---|---|---|
    | Non-Clinical Safety | Acoustic output below FDA limits | Complies with applicable FDA limits |
    | Non-Clinical Safety | Biocompatibility of materials (patient contact) | Materials evaluated and found safe; biocompatible |
    | Non-Clinical Safety | Cleaning and disinfection effectiveness | Evaluated (details not given beyond "evaluated") |
    | Non-Clinical Safety | Thermal, electrical, electromagnetic, mechanical safety compliant | Conforms to applicable medical device safety standards |
    | Standards Compliance | Adherence to specific IEC, ISO, AAMI, NEMA standards | Complies with listed voluntary standards (e.g., AAMI/ANSI ES60601-1, IEC 60601-1-2, ISO 14971, NEMA PS 3.1-3.20) |
    | Software Quality | Risk Analysis, Requirements Reviews, Design Reviews, Testing (unit, integration, performance, safety) | Quality assurance measures applied to development (listed) |
    | Functional Equivalence | Same clinical intended use as predicates | Proposed device has same clinical intended use as predicates |
    | Functional Equivalence | Similar imaging modes to predicates | Similar imaging modes; does not include B-Flow mode (minor difference) |
    | Functional Equivalence | Similar measurement, imaging, review, reporting capabilities | Similar capability to predicates |
    | Functional Equivalence | Probes supported are identical to predicates | Probes supported are identical |
    | AI Feature Migration | No changes to algorithmic flow or AI components post-migration; works on subject device | Confirmed no changes to algorithms; regression tests confirmed functionality |

    Regarding the Study That Proves the Device Meets Acceptance Criteria:

    As noted, no clinical study was conducted for this specific 510(k) clearance. The basis for clearance is substantial equivalence to legally marketed predicate devices, supported by non-clinical testing and the migration of already-cleared AI features.

    Therefore, for the remaining points (2-9), the answer is largely that this information is not applicable or not provided in this 510(k) summary because a de novo clinical performance study was not performed.

    2. Sample size used for the test set and the data provenance: Not applicable, as no clinical test set was used for a de novo performance study. The AI features were migrated from already-cleared devices (Voluson Expert 22/20/18, K242168), implying their original validation would have occurred with those previous clearances. Details of those previous validations are not in this document.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not applicable.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set: Not applicable.

    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance: Not applicable. No MRMC study was performed for this clearance. The AI features are already cleared on previous devices, and their performance improvement with AI assistance would have been part of those prior clearances, not described here.

    6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done: Not explicitly stated for this clearance. Given it's an ultrasound system, the AI features (SonoPelvicFloor 3.0, SonoAVCfollicle 2.0, Fibroid Mapping, SonoLyst Live) are typically integrated tools that assist the sonographer or physician, rather than standalone diagnostic algorithms. Their standalone performance would have been assessed during their original clearance (K242168).

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.): Not applicable for this specific clearance. For the migrated AI features, their original ground truth establishment would have been part of the K242168 submission.

    8. The sample size for the training set: Not applicable, as no new training was described for this submission. The AI features are migrated and not undergoing new development or training for this device.

    9. How the ground truth for the training set was established: Not applicable for this submission. This would pertain to the original development and clearance of the migrated AI features, information not provided in this document.


    K Number
    K250087
    Device Name
    Vscan Air
    Date Cleared
    2025-05-01

    (107 days)

    Product Code
    Regulation Number
    892.1550
    Reference & Predicate Devices
    N/A
    Predicate For
    N/A
    Intended Use

    Vscan Air is a battery-operated, software-based general-purpose ultrasound imaging system for use by qualified and trained healthcare professionals or practitioners who are legally authorized or licensed by law in the country, state or other local municipality in which they practice. The users may or may not be working under the supervision or authority of a physician. Users may also include medical students working under the supervision or authority of a physician during their education/training. The device enables visualization and measurement of anatomical structures and fluid, including blood flow.

    Vscan Air's pocket-sized portability and simplified user interface enables integration into training sessions and examinations in professional healthcare facilities (ex. Hospital, clinic, medical office), home environment, road/air ambulance and other environments as described in the user manual. The information can be used for basic/focused assessments and adjunctively with other medical data for clinical diagnosis purposes during routine, periodic follow-up, and triage.

    Vscan Air supports Black/ white (B-mode), Color flow (Color doppler), Pulsed wave Doppler mode, M-mode, combined (B + Color Doppler) and Harmonic Imaging modes with curved, linear and sector array transducers.

    With the curved array transducer of the dual headed probe solution, the specific clinical applications and exam types include: abdominal, fetal/obstetrics, gynecological, urology, thoracic/lung, cardiac (adult and pediatric, 40 kg and above), vascular/peripheral vascular, musculoskeletal (conventional), pediatrics, interventional guidance (includes free hand needle/catheter placement, fluid drainage, nerve block and biopsy).

    With the linear array transducer of the dual headed probe solution, the specific clinical applications and exam types include: vascular/peripheral vascular, musculoskeletal (conventional and superficial), small organs, thoracic/lung, ophthalmic, pediatrics, neonatal cephalic, interventional guidance (includes free hand needle/catheter placement, fluid drainage, nerve block, vascular access and biopsy).

    With the sector array transducer of the dual headed probe solution, the specific clinical applications and exam types include: cardiac (adult and pediatric, 40 kg and above), abdominal, fetal/obstetrics, gynecological, urology, thoracic/ lung, pediatrics, adult cephalic, interventional guidance (includes free hand needle/catheter placement, fluid drainage, nerve block and biopsy).

    Device Description

    Vscan Air™ is a battery-operated general-purpose diagnostic ultrasound imaging system for use by qualified and trained healthcare professionals. It enables ultrasound imaging guidance, visualization and measurement of anatomical structures and fluid.

    Vscan Air consists of an app which can be installed on Android™ or iOS devices, and 2 probes which use wireless technology for communication.

    Its pocket-sized portability and simplified user interface enable integration into training sessions and examinations in professional healthcare facilities (ex. Hospital, clinic, medical office), home environment, road/air ambulance and in other environments. The information can be used for basic/focused assessments and adjunctively with other medical data for clinical diagnosis purposes during routine, periodic follow-up, and triage assessments for adult, pediatric and neonatal patients. Vscan Air can also be useful for interventional guidance.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study details for the Vscan Air's Auto Bladder Volume feature, extracted from the provided FDA 510(k) clearance letter:

    Acceptance Criteria and Device Performance for Auto Bladder Volume

    | Acceptance Criteria | Reported Device Performance |
    |---|---|
    | At least 90% success rate in automatic caliper placement for bladder volume measurements when bladder wall is entirely visualized. | Automatic caliper placement success rate: 92.24% with a 90% confidence level. Consistent performance across key subgroups, e.g., BMI Overweight (>25): 92%, BMI Normal (<25): 95%. |
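
    The acceptance check above amounts to a one-sided test of a binomial success rate against the 90% threshold. The clearance letter does not describe the statistical method actually used, so the following is only a minimal sketch of how such a criterion could be verified, here with a Wilson score lower bound; the counts are reconstructed from the reported 92.24% rate and the 1,817-image test set and are illustrative only.

```python
from math import sqrt

def wilson_lower_bound(successes: int, n: int, z: float = 1.2816) -> float:
    """One-sided Wilson score lower bound for a binomial proportion.

    z = 1.2816 is the normal quantile for a one-sided 90% confidence level.
    """
    if n == 0:
        return 0.0
    p_hat = successes / n
    denom = 1 + z**2 / n
    centre = p_hat + z**2 / (2 * n)
    margin = z * sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return (centre - margin) / denom

# Illustrative reconstruction: ~92.24% of 1,817 images with successful caliper placement.
n_images = 1817
n_success = round(0.9224 * n_images)
lower = wilson_lower_bound(n_success, n_images)
print(f"observed rate = {n_success / n_images:.4f}, 90% lower bound = {lower:.4f}")
print("meets the >= 90% acceptance criterion:", lower >= 0.90)
```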

    Study Details for Auto Bladder Volume (Test Set)

    1. Sample size used for the test set and data provenance:

      • Sample Size: 1,817 images from 142 individuals. Each individual was scanned in two views (Transverse and Longitudinal).
      • Data Provenance: The dataset included images from individuals of various ethnicities/countries, including USA, Germany, UK, Japan, and India. The study used retrospective data.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • The document does not specify the exact number of experts, but states that the verification dataset was assessed by "experts for accuracy." It does not provide their specific qualifications (e.g., years of experience as radiologists).
    3. Adjudication method for the test set:

      • The document describes "annotators" performing manual annotation on images to establish ground truth. It does not explicitly mention a formal adjudication method like "2+1" or "3+1" for resolving disagreements among multiple annotators.
    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done:

      • No, an MRMC comparative effectiveness study comparing human readers with and without AI assistance was not done. This study solely focused on the standalone performance of the Auto Bladder Volume feature.
    5. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:

      • Yes, a standalone performance study was done for the Auto Bladder Volume feature. The performance reported is that of the algorithm's automatic caliper placement.
    6. The type of ground truth used:

      • Expert Consensus/Manual Annotation. Ground truth annotations were obtained through manual placement of landmarks (bladder edges) by annotators on images, representing where measurement calipers would be placed (a generic volume-from-calipers sketch follows this list).
    7. The sample size for the training set:

      • The total dataset included 4,014 images from 301 individuals. Since 1,817 images from 142 individuals were used for verification, the remaining 2,197 images from 159 individuals were used for training/validation (tuning).
    8. How the ground truth for the training set was established:

      • Similar to the verification dataset, "annotators performed manual annotation on images converted from vscanet files." These annotations involved identifying 4-6 images representing different bladder volume statuses, and on each view (transverse and longitudinal), marking 4 distinct landmarks representing the bladder edges.
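
    The study details above describe caliper-style landmarks on two orthogonal views (transverse and longitudinal), but the letter does not state which formula Vscan Air applies to those calipers. For orientation only, a common clinical approximation for bladder volume is the prolate-ellipsoid formula V ≈ 0.52 × height × depth × width; the sketch below assumes that convention, with hypothetical caliper coordinates, and is not presented as the device's actual method.

```python
import math

def caliper_distance(p1: tuple[float, float], p2: tuple[float, float]) -> float:
    """Euclidean distance between two caliper/landmark points (coordinates in cm)."""
    return math.dist(p1, p2)

def bladder_volume_ellipsoid(height_cm: float, depth_cm: float, width_cm: float) -> float:
    """Prolate-ellipsoid approximation: V ~= 0.52 * H * D * W (mL for cm inputs)."""
    return 0.52 * height_cm * depth_cm * width_cm

# Hypothetical caliper coordinates (cm) standing in for the landmarks described above.
height = caliper_distance((1.0, 2.0), (1.2, 9.1))   # longitudinal view, cephalocaudal extent
depth = caliper_distance((0.5, 5.0), (5.3, 5.2))    # longitudinal view, anteroposterior extent
width = caliper_distance((0.8, 4.0), (7.4, 4.1))    # transverse view, lateral extent

volume_ml = bladder_volume_ellipsoid(height, depth, width)
print(f"estimated bladder volume: {volume_ml:.1f} mL")
```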

    K Number
    K243620
    Device Name
    Vivid iq
    Date Cleared
    2025-02-11

    (81 days)

    Product Code
    Regulation Number
    892.1550
    Reference & Predicate Devices
    N/A
    Predicate For
    N/A
    Intended Use

    The Vivid iq is a high-performance, compact diagnostic ultrasound system designed for Cardiovascular and Shared Services. It is intended for use by qualified and trained healthcare professionals for ultrasound imaging, measurement, display and analysis of the human body and fluid.

    Vivid iq clinical applications include: Fetal/Obstetrics, Abdominal (includes GYN), Pediatric, Small Organ (includes breast, testes, thyroid), Neonatal Cephalic, Cardiac (includes Adult and Pediatric), Peripheral Vascular, Musculoskeletal Conventional, Musculoskeletal Superficial, Urology (Including prostate), Transcranial, Transvaginal, Transesophageal, Interventional Guidance (including Biopsy, Vascular access), Thoracic/Pleural, Intraoperative (Vascular), Intracardiac and Intraluminal.

    Modes of operation include: B, M, PW Doppler, CW Doppler, Color Doppler, Color M, Power Doppler, Harmonic Imaging, Real-Time (RT) 3D Mode (4D), Coded Pulse and Combined modes: B/M, B/Color M, B/PWD, B/Color/PWD, B/Color/CWD, B/Power/PWD.

    The device is intended for use in an indoor hospital environment including echo lab, other hospital settings, operating room, Cath lab and EP lab, in private medical offices, and in limited settings outside of professional Healthcare facilities.

    Device Description

    The proposed Vivid iq system is a general-purpose, Track 3 diagnostic ultrasound device, primarily intended for cardiovascular diagnostic use and shared-service imaging. It is an ultrasound imaging and analysis system consisting of a compact console with a control panel that includes a track pad and a color LCD touch panel with an on-screen alphanumeric keyboard. The system also has a standard, height-adjustable ergonomic mobile cart for comfortable standing and sitting positions. The Charge Box in the mobile cart provides the Vivid iq with up to 4 hours of scanning time without an external power supply.

    There are options for image storage, USB wireless connectivity, cardiac signal input for cardiac gating, and output capabilities to printing devices. Vivid iq utilizes a variety of electronic array transducers operating in linear, curved, sector/phased array, matrix array, or dual array format, including dedicated CW transducers and a real-time 3D transducer. The system can also be used with compatible ICE transducers.

    The system includes electronics for transmit and receive of ultrasound data, ultrasound signal processing, software computing, hardware for image storage, hard copy printing, and network access to the facility through both LAN and wireless (supported by use of a wireless LAN USB-adapter) connection.

    AI/ML Overview

    The provided text is a 510(k) Summary for the GE Medical Systems Ultrasound and Primary Care Diagnostics, LLC Vivid iq. It details the device's characteristics and its comparison to predicate devices, but it explicitly states that no clinical studies were required to support substantial equivalence for this particular submission. Therefore, it is not possible to provide acceptance criteria or a study that proves the device meets those criteria, as such studies were not conducted or submitted for this 510(k).

    The document is primarily focused on demonstrating substantial equivalence to a predicate device (Vivid iq K221148) through design similarities, conformance to recognized performance standards, and non-clinical performance testing.

    Here's what can be extracted based on the provided text, while acknowledging the absence of clinical study data for this submission:

    | Information Category | Description |
    |---|---|
    | 1. Acceptance Criteria and Reported Device Performance | Not applicable. The submission states, "The subject of this premarket submission, Vivid iq, did not require clinical studies to support substantial equivalence." Therefore, no specific clinical acceptance criteria or reported device performance from such a study are provided in this document. Device performance is implicitly accepted through compliance with non-clinical standards and substantial equivalence to the predicate. |
    | 2. Sample Size and Data Provenance (Test Set) | Not applicable. No clinical test set was used or described for this 510(k) submission. |
    | 3. Number and Qualifications of Experts (Test Set) | Not applicable. No clinical test set was used or described for this 510(k) submission. |
    | 4. Adjudication Method (Test Set) | Not applicable. No clinical test set was used or described for this 510(k) submission. |
    | 5. MRMC Comparative Effectiveness Study | No. The document explicitly states that no clinical studies were required. Therefore, no MRMC study was conducted or reported for this submission. |
    | 6. Standalone Performance Study | No. The document explicitly states that no clinical studies were required. Therefore, no standalone algorithm-only performance study was conducted or reported for this submission. |
    | 7. Type of Ground Truth Used | Not applicable. No clinical studies requiring ground truth were conducted or reported for this submission. |
    | 8. Sample Size for Training Set | Not applicable. The submission does not describe a machine learning algorithm that would require a training set. The device is a diagnostic ultrasound system, and its performance is evaluated through engineering and safety standards, as well as comparison to a predicate device. |
    | 9. How Ground Truth for Training Set Was Established | Not applicable. Please see the response for point 8. |

    Summary of Non-Clinical Tests (from the document):

    The document does list the non-clinical tests conducted and the standards to which the device conforms:

    • Acoustic output
    • Biocompatibility
    • Cleaning and disinfection effectiveness
    • Thermal, electrical, electromagnetic and mechanical safety

    Voluntary Standards Complied With:

    • AAMI/ANSI ES60601-1, Medical Electrical Equipment Part 1: General Requirements for Safety, 2005/A2:2021
    • AAMI TIR69:2017/(R2020) Technical Information Report Risk management of radio-frequency wireless coexistence for medical devices and systems
    • IEC 60601-1-2, Medical Electrical Equipment Part 1-2: General Requirements for Basic Safety and Essential Performance - Collateral Standard: Electromagnetic Disturbance - Requirements and Tests, Edition 4.1, 2020
    • IEC 60601-2-37, Medical Electrical Equipment Part 2-37: Particular Requirements for the Safety of Ultrasonic Medical Diagnostic and Monitoring Equipment, Edition 2.1, 2015
    • ISO 10993-1, Biological Evaluation of Medical Devices-Part 1: Evaluation and Testing Within Risk Management Process, Fifth edition, 2018
    • ISO 14971, Application of risk management to medical devices. 2019
    • NEMA PS 3.1 - 3.20, Digital Imaging and Communications in Medicine (DICOM) Set (Radiology), 2022d
    • IEC 62359, Ultrasonics - Field characterization - Test methods for the determination of thermal and mechanical indices related to medical diagnostic ultrasonic fields, Edition 2.1, 2017

    Quality Assurance Measures:

    • Risk Analysis
    • Requirements Reviews
    • Design Reviews
    • Testing on unit level (Module verification)
    • Integration testing (System verification)
    • Performance testing (Verification & Validation)
    • Safety testing (Verification)

    In conclusion, for this specific 510(k) submission (K243620), "Vivid iq," the device met acceptance criteria by demonstrating substantial equivalence to a predicate device through non-clinical testing and adherence to recognized standards, rather than through clinical studies with specific performance metrics.


    K Number
    K243628
    Date Cleared
    2025-02-11

    (78 days)

    Product Code
    Regulation Number
    892.1550
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    The Vivid T9/Vivid T8 is a general-purpose ultrasound system, specialized for use in cardiac imaging. It is intended for use by qualified and trained Healthcare professionals for ultrasound imaging, measurement, display and analysis of the human body and fluid.

    The device is intended for use in a hospital environment including echo lab, other hospital settings, operating room, Cath lab, and in private medical offices.

    The systems support the following clinical applications: Fetal/Obstetrics, Abdominal (includes GYN), Pediatric, Small Organ (includes breast, testes, thyroid), Neonatal Cephalic, Adult Cephalic, Cardiac (includes Adult and Pediatric), Peripheral Vascular, Musculoskeletal Conventional, Musculoskeletal Superficial, Urology (Including prostate), Transcranial, Transesophageal, Transrectal, Transvaginal, Interventional guidance (including Biopsy, Fluid Drainage, Vascular Access), Thoracic/Pleural, Intraoperative (Vascular). Modes of operation include: B, M, PW Doppler, CW Doppler, Color Doppler, Color M, Power Doppler, Harmonic Imaging, Coded Pulse and Combined modes: B/M, B/PWD, B/Color/PWD, B/Power/PWD.

    Device Description

    Vivid T9/Vivid T8 is a Track 3, diagnostic ultrasound system, which is primarily intended for cardiac imaging and analysis but also includes vascular and general radiology applications. It is a full featured diagnostic ultrasound system that provides digital acquisition, processing, analysis and display capability.

    The Vivid T9/Vivid T8 consists of a mobile console with control panel color LCD touch panel, LCD display monitor and optional image storage and printing devices. It includes a variety of electronic array transducers operating in linear, curved, sector/phased array, matrix array, and dual array including dedicated CW transducers.

    The user-interface includes an operator control panel, a 21.5-inch-wide screen LCD monitor (mounted on an arm for rotation and/or adjustment of height), a 10.1-inch touch panel with multi-touch capabilities and alphanumeric keyboard.

    The smart standby battery is an option that allows the system to stay powered when moving from room to room rather than being shut down; imaging is not allowed while the system is unplugged.

    The system includes electronics for transmit and receive of ultrasound data, ultrasound signal processing, software computing, hardware for image storage, hard copy printing, and network access to the facility through both LAN and wireless (supported by use of a wireless LAN USB-adapter) connection.

    Vivid T8 and Vivid T9 are based on the same SW platform and a similar HW design. Each system may be offered in different configurations, which may differ by the SW options and transducers provided commercially. Vivid T9 has a height-adjustable control panel, while the Vivid T8 control panel is not adjustable. Vivid T9 has a flexible monitor arm, while Vivid T8 has a fixed monitor arm, with a flexible monitor arm available as an option.

    The product named Vivid T9 represents the system that has the full functionality and is offered with full support for transducers.

    AI/ML Overview

    The provided text is a 510(k) Premarket Notification from the FDA, which focuses on demonstrating substantial equivalence to a predicate device rather than presenting a detailed study proving performance against specific acceptance criteria for a new, innovative device.

    Therefore, much of the requested information regarding detailed acceptance criteria, specific study design, sample sizes, expert qualifications, and ground truth establishment for performance claims cannot be found in this document.

    The document primarily states that the device is "substantially equivalent" to predicate devices, and relies on non-clinical tests (safety, electrical, etc.) and design similarities to justify this claim. It explicitly states: "The subject of this premarket submission, Vivid T9/Vivid T8, did not require clinical studies to support substantial equivalence."

    Below is a table summarizing the information that could be extracted from the provided text, and noted where information is explicitly not available or not applicable based on the content.

    Acceptance Criteria and Device Performance

    | Acceptance Criteria Category | Reported Device Performance |
    |---|---|
    | Clinical Performance | Not applicable per document; no clinical studies were required to support substantial equivalence. The device is considered substantially equivalent to its predicate. |
    | Acoustic Output Conformity | Device has been evaluated for acoustic output and found to conform with applicable medical device safety standards. |
    | Biocompatibility | Device has been evaluated for biocompatibility and found to conform with applicable medical device safety standards. Transducer materials and other patient contact materials are biocompatible. |
    | Cleaning and Disinfection Effectiveness | Device has been evaluated for cleaning and disinfection effectiveness and found to conform with applicable medical device safety standards. |
    | Thermal Safety | Device has been evaluated for thermal safety and found to conform with applicable medical device safety standards. |
    | Electrical Safety | Device has been evaluated for electrical safety and found to conform with applicable medical device safety standards (e.g., AAMI/ANSI ES60601-1, IEC 60601-2-37). |
    | Electromagnetic Safety | Device has been evaluated for electromagnetic safety and found to conform with applicable medical device safety standards (e.g., IEC 60601-1-2). |
    | Mechanical Safety | Device has been evaluated for mechanical safety and found to conform with applicable medical device safety standards. |
    | Risk Management | Application of risk management to medical devices (ISO 14971) is applied. |
    | Quality Assurance | Risk Analysis, Requirements Reviews, Design Reviews, Testing on unit level (Module verification), Integration testing (System verification), Performance testing (Verification & Validation), Safety testing (Verification) are applied to development. |
    | DICOM Conformity | Conforms to NEMA PS 3.1 - 3.20, Digital Imaging and Communications in Medicine (DICOM) Set (Radiology), 2022d. The DICOM Encapsulated PDF reports feature allows transfer through DICOM data flows. |
    | Ultrasonics Field Characterization | Conforms to IEC 62359, Ultrasonics - Field characterization - Test methods for the determination of thermal and mechanical indices related to medical diagnostic ultrasonic fields, Edition 2.1, 2017. |

    Study Details

    1. Sample size used for the test set and the data provenance: Not applicable. The document explicitly states: "The subject of this premarket submission, Vivid T9/Vivid T8, did not require clinical studies to support substantial equivalence." Therefore, no "test set" in the context of clinical performance evaluation is described. The non-clinical tests (acoustic, electrical, thermal, etc.) inherently involve testing of the device itself rather than patient data.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not applicable, as no clinical test set requiring expert ground truth was performed for substantial equivalence.
    3. Adjudication method (e.g., 2+1, 3+1, none) for the test set: Not applicable, as no clinical test set requiring adjudication was performed.
    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance: Not applicable. This document pertains to a general-purpose ultrasound system without specific mention of AI features that would necessitate an MRMC study for improved human reader performance. The "Clarity +" feature is described as "real-time image processing/filtering technique," not an AI-driven diagnostic aid that would directly impact human reader performance in a quantifiable way for this type of submission.
    5. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done: Not applicable. The device is an ultrasound system, not a standalone algorithm. Its "Clarity +" feature is an image processing technique integrated into the system, not a separate diagnostic algorithm.
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.): Not applicable for a clinical performance evaluation, as no clinical studies were deemed necessary. For the non-clinical tests (e.g., safety, electrical, acoustic), the "ground truth" is adherence to recognized performance standards and internal quality assurance measures.
    7. The sample size for the training set: Not applicable. As this device did not require clinical studies, there is no mention of a "training set" for algorithm development related to diagnostic performance.
    8. How the ground truth for the training set was established: Not applicable, as no training set for diagnostic algorithm development is mentioned or required in this submission.

    K Number
    K242168
    Date Cleared
    2024-12-20

    (149 days)

    Product Code
    Regulation Number
    892.1550
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    The device is a general purpose ultrasound system intended for use by qualified and trained healthcare professionals. Specific clinical applications remain the same as previously cleared: Fetal/OB; Abdominal (including GYN, pelvic and infertility monitoring/follicle development); Pediatric; Small Organ (breast, testes, thyroid, etc.); Neonatal and Adult Cephalic; Cardiac (adult and pediatric); Musculo-skeletal Conventional and Superficial; Vascular; Transvaginal (including GYN); Transrectal.

    Modes of operation include: B, M, PW Doppler, CW Doppler, Color Doppler, Color M Doppler, Power Doppler, Harmonic Imaging, Coded Pulse, 3D/4D Imaging mode, Elastography, Shear Wave Elastography and Combined modes: B/M, B/Color, B/PWD, B/Color/PWD, B/Power/ PWD, B/ Elastography. The Voluson™ Expert 18, Voluson™ Expert 20, Voluson™ Expert 22 is intended to be used in a hospital or medical clinic.

    Device Description

    The systems are full-featured Track 3 ultrasound systems, primarily for general radiology use and specialized for OB/GYN with particular features for real-time 3D/4D acquisition. They consist of a mobile console with keyboard control panel; color LCD/TFT touch panel, color video display and optional image storage and printing devices. They provide high performance ultrasound imaging and analysis and have comprehensive networking and DICOM capability. They utilize a variety of linear, curved linear, matrix phased array transducers including mechanical and electronic scanning transducers, which provide highly accurate real-time three-dimensional imaging supporting all standard acquisition modes.

    AI/ML Overview

    The provided document describes the predicate devices as the Voluson Expert 18, Voluson Expert 20, Voluson Expert 22. The K-number for the primary predicate device is K231965. The document does NOT describe the acceptance criteria or study that proves the device meets the acceptance criteria for those predicate devices. Instead, it details the testing and acceptance criteria for new or updated AI software features introduced with the new Voluson Expert Series devices (K242168).

    Here's a breakdown of the requested information based on the AI testing summaries provided for the new/updated features: Sono Pelvic Floor 3.0 (MHD and Anal Sphincter), SonoAVC Follicle 2.0, and 1st/2nd Trimester SonoLyst/SonoLystlive.


    Acceptance Criteria and Device Performance for New/Updated AI Features

    1. Table of Acceptance Criteria and Reported Device Performance

    Sono Pelvic Floor 3.0 (MHD)
      • Acceptance criteria (MHD Tracking, Minimum MHD Frame Detection, Maximum MHD Frame Detection, and Overall MHD): success rate of 70% or higher on datasets marked as "Good Image Quality"; 60% or higher on datasets marked as "Challenging Image Quality".
      • Reported device performance: MHD Tracking: 89.3% (Good Image Quality), 77.7% (Challenging Image Quality); Minimum MHD Frame Detection: 89.3% (Good Image Quality), 83.3% (Challenging Image Quality); Maximum MHD Frame Detection: 90.66% (Good Image Quality), 77.7% (Challenging Image Quality); Overall MHD: 81.9% (Good IQ datasets), 60.9% (Challenging quality datasets).

    Sono Pelvic Floor 3.0 (Anal Sphincter)
      • Acceptance criteria: success rate of 70% or higher on datasets marked as "Good Image Quality"; 60% or higher on datasets marked as "Challenging Image Quality".
      • Reported device performance: the document states "Verification results on actual verification data is as follows" but does not present the Anal Sphincter metrics themselves. The only figures given (81.9% on Good IQ datasets, 60.9% on Challenging quality datasets) appear under the MHD section and may represent overall success rates for the entire Sono Pelvic Floor 3.0 feature; the specific reported performance for the Anal Sphincter component is therefore not clearly presented in the provided text.

    SonoAVC Follicle 2.0
      • Acceptance criteria: the success rate for the AI feature should be 70% or higher (this appears to be an overall accuracy criterion).
      • Reported device performance: accuracy of 94.73% on test data acquired together with the training cohort; 92.8% on test data acquired consecutively post model development; overall accuracy 93.6%. Dice coefficient by follicle size range: 3-5 mm: 0.937619; 5-10 mm: 0.946289; 10-15 mm: 0.962315; >15 mm: 0.93206. (A generic Dice computation sketch follows these entries.)

    1st Trimester SonoLyst/SonoLystLive
      • Acceptance criteria: the average success rate of SonoLyst 1st Trimester IR, X and SonoBiometry CRL and overall traffic light accuracy is 80% or higher.
      • Reported device performance: the document restates the 80%-or-higher criterion and mentions that "Data used for both training and validation has been collected across multiple geographical sites...", but it does not explicitly provide the numerical performance value that met or exceeded the 80% criterion.

    2nd Trimester SonoLyst/SonoLystLive
      • Acceptance criteria: the acceptance criteria must be met for both subgroups (variety of ultrasound systems/data formats vs. target platform); the specific numerical criteria are not explicitly stated, only that performance met them to demonstrate generalization.
      • Reported device performance: the document states "For both subgroups the acceptance criteria are met." but does not explicitly provide the numerical performance values.
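
    The SonoAVC Follicle 2.0 entry above reports Dice similarity coefficients per follicle-size range. The letter does not describe how the coefficient was computed, but the standard definition over binary segmentation masks is 2|A∩B| / (|A| + |B|); the NumPy sketch below applies that definition to a toy example, with array names chosen purely for illustration.

```python
import numpy as np

def dice_coefficient(pred_mask: np.ndarray, truth_mask: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred_mask.astype(bool)
    truth = truth_mask.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 1.0 if total == 0 else 2.0 * intersection / total

# Toy 2-D example standing in for one follicle segmentation slice.
pred = np.zeros((8, 8), dtype=bool)
truth = np.zeros((8, 8), dtype=bool)
pred[2:6, 2:6] = True     # 16 pixels predicted
truth[3:7, 2:6] = True    # 16 pixels in the reference annotation, shifted by one row
print(f"Dice = {dice_coefficient(pred, truth):.3f}")  # 2*12 / (16 + 16) = 0.75
```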

    2. Sample Sizes Used for the Test Set and Data Provenance

    • Sono Pelvic Floor 3.0 (MHD & Anal Sphincter):

      • Test Set Sample Size: 93 volumes for MHD, 106 volumes for Anal Sphincter.
      • Data Provenance: Data is provided by external clinical partners who de-identified the data. Original data were collected as 4D volume cines or 4D/3D volume acquisitions (*.vol or *.4dv files).
      • Countries: A diverse range of countries contributed to the test data including Italy, U.S.A, Australia, Germany, Czech Republic, France, India (for MHD); and Italy, U.S.A, France, Germany, India (for Anal Sphincter).
      • Retrospective/Prospective: The data collection method ("re-process data to our needs retrospectively during scan conversion") suggests a retrospective approach to assembling the dataset, although a "standardized data collection protocol was followed for all acquisitions." New data was also acquired post-model development from previously unseen sites to test robustness.
    • SonoAVC Follicle 2.0:

      • Test Set Sample Size: 138 datasets, with a total follicle count of 2708 across all volumes.
      • Data Provenance: External clinical partners provided de-identified data in 3D volumes (*.vol or *.4dv).
      • Countries: Germany, India, Spain, United Kingdom, USA.
      • Retrospective/Prospective: The data was split into train/validation/test at the start of model development (suggesting retrospective). Additionally, consecutive data was acquired post-model development from previously unseen systems and probes to test robustness (suggesting some prospective element for this later test set).
    • 2nd Trimester SonoLyst/SonoLystLive:

      • Test Set Sample Size: "Total number of images: 2.2M", "Total number of cine loops: 3595". It's not explicitly stated how much of this was test data vs. training data, but it implies a large dataset for evaluation.
      • Data Provenance: Systems used for data collection included GEHC Voluson V730, E6, E8, E10, Siemens S2000, and Hitachi Aloka. Formats included DICOM & JPEG for still images and RAW data for cine loops.
      • Countries: UK, Austria, India, and USA.
      • Retrospective/Prospective: Not explicitly stated, but "All training data is independent from the test data at a patient level" implies a pre-existing dataset split rather than newly acquired prospective data solely for testing.
    • 1st Trimester SonoLyst/SonoLystLive:

      • Test Set Sample Size: SonoLyst 1st Trim IR: 5271 images, SonoLyst 1st Trim X: 2400 images, SonoLyst 1st Trim Live: 6000 images, SonoBiometry CRL: 110 images.
      • Data Provenance: Systems included GE Voluson V730, P8, S6/S8, E6, E8, E10, Expert 22, Philips Epiq 7G. Formats included DICOM & JPEG for still images and RAW data for cine loops.
      • Countries: UK, Austria, India, and USA.
      • Retrospective/Prospective: "All training data is independent from the test data at a patient level." "A statistically significant subset of the test data is independent from the training data at a site level, with no test data collected at the site being used in training." This indicates a retrospective collection with careful splitting, and some test data from unseen sites. (A minimal sketch of patient-level grouped splitting follows this list.)
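
    Several of the features above state that training and test data are independent at the patient level, and partly at the site level. The submission does not say what tooling enforced this, so the following is only a minimal sketch of one common way to produce such a split, using scikit-learn's GroupShuffleSplit with hypothetical image/patient records.

```python
from sklearn.model_selection import GroupShuffleSplit

# Hypothetical records: each image carries the patient it came from.
images = [f"img_{i:03d}" for i in range(12)]
patients = ["pt_A", "pt_A", "pt_B", "pt_B", "pt_B", "pt_C",
            "pt_C", "pt_D", "pt_D", "pt_E", "pt_E", "pt_F"]

# Grouping by patient guarantees no patient contributes images to both splits.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_idx, test_idx = next(splitter.split(images, groups=patients))

train_patients = {patients[i] for i in train_idx}
test_patients = {patients[i] for i in test_idx}
assert train_patients.isdisjoint(test_patients)  # patient-level independence holds
print("train patients:", sorted(train_patients))
print("test patients:", sorted(test_patients))
```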

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • Sono Pelvic Floor 3.0 (MHD & Anal Sphincter), SonoAVC Follicle 2.0, 2nd Trimester SonoLyst/SonoLystLive, 1st Trimester SonoLyst/SonoLystLive:

      • Number of Experts: Three independent reviewers.
      • Qualifications: "at least two being US Certified sonographers, with extensive clinical experience."
    • Additional for 2nd Trimester SonoLyst/SonoLystLive & 1st Trimester SonoLyst/SonoLystLive:

      • For sorting/grading accuracy review, a "5-sonographer review panel" was used. Qualifications are not specified beyond being sonographers.

    4. Adjudication Method for the Test Set

    • Sono Pelvic Floor 3.0 (MHD & Anal Sphincter), SonoAVC Follicle 2.0, 2nd Trimester SonoLyst/SonoLystLive, 1st Trimester SonoLyst/SonoLystLive:
      • The evaluation was "based on interpretation of the AI output by reviewing clinicians." The evaluation was "conducted by three independent reviewers."
      • For 2nd and 1st Trimester SonoLyst/SonoLystLive, where sorting/grading accuracy was determined, if the initial sorting/grading differed from the ground truth (established by a single sonographer and then refined), a 5-sonographer review panel was used, and reclassification was based upon the "majority view of the panel." This implies a form of majority-vote adjudication for these specific sub-tasks (a schematic majority-vote sketch follows this list).
      • The general approach for the three reviewers, especially when evaluating AI output, implies an independent review, and while not explicitly stated, differences would likely lead to discussion or a form of consensus/adjudication. However, a strict 'X+Y' model (like 2+1 or 3+1) is not explicitly detailed.
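
    The panel-based reclassification described above is, in effect, a simple majority vote over five readers. The document gives no further procedural detail, so the snippet below is only a schematic of that idea; the label strings and the tie-breaking behaviour are invented for illustration.

```python
from collections import Counter

def majority_label(panel_labels: list[str]) -> str:
    """Return the label chosen by the most panel members (ties broken by first seen)."""
    return Counter(panel_labels).most_common(1)[0][0]

# Hypothetical 5-sonographer panel review of one image's plane classification.
initial_label = "suboptimal abdominal circumference plane"
panel = ["standard abdominal circumference plane",
         "standard abdominal circumference plane",
         "suboptimal abdominal circumference plane",
         "standard abdominal circumference plane",
         "standard abdominal circumference plane"]

final_label = majority_label(panel)
if final_label != initial_label:
    print(f"reclassified: {initial_label!r} -> {final_label!r}")
```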

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size

    • The document does not describe a multi-reader multi-case (MRMC) comparative effectiveness study designed to measure how human readers improve with AI vs. without AI assistance. The studies described are primarily aimed at assessing the standalone performance or workflow utility of the AI features.

    6. If a Standalone (i.e., Algorithm Only Without Human-in-the-Loop Performance) Was Done

    • Yes, standalone performance was assessed for all described AI features. The "Summary test Statistics" and "Verification Results" sections for each feature (Sono Pelvic Floor 3.0, SonoAVC Follicle 2.0, 1st/2nd Trimester SonoLyst/SonoLystLive) report the algorithm's direct performance (e.g., success rates, accuracy, Dice coefficient) against the established ground truth, indicating standalone evaluation. The "interpretation of the AI output by reviewing clinicians" method primarily focuses on validating the AI's direct result rather than a comparative human performance study.

    7. The Type of Ground Truth Used

    • Expert Consensus/Annotation:
      • Sono Pelvic Floor 3.0 (MHD): Ground truth was established through a "two-stage curation process." Curators identified the MHD plane and marked anatomical structures. These curated datasets were then "reviewed by expert arbitrators."
      • Sono Pelvic Floor 3.0 (Anal Sphincter): Ground truth involved "3D segmentation of the Anal Canal using VOCAL tool in the 4D View5 Software." Each volume was "reviewed by a skilled arbitrator for correctness."
      • SonoAVC Follicle 2.0: The "Truthing process for training dataset" indicates a "detailed curation protocol (developed by clinical experts)" and a "two-step approach" with an arbitrator reviewing all datasets for clinical accuracy.
      • 2nd Trimester SonoLyst/SonoLystLive & 1st Trimester SonoLyst/SonoLystLive: Ground truth for sorting/grading was initially done by a single sonographer, then reviewed by a "5-sonographer review panel" for accuracy, with reclassification based on majority view if needed.

    8. The Sample Size for the Training Set

    • Sono Pelvic Floor 3.0 (MHD): Total Volumes: 983
    • Sono Pelvic Floor 3.0 (Anal Sphincter): Total Volumes: 828
    • SonoAVC Follicle 2.0: Total Volumes: 249
    • 2nd Trimester SonoLyst/SonoLystLive: "Total number of images: 2.2M", "Total number of cine loops: 3595". (The precise breakdown of training vs. test from this total isn't given for this feature, but it's a large overall dataset).
    • 1st Trimester SonoLyst/SonoLystLive: 122,711 labelled source images from 35,861 patients.

    9. How the Ground Truth for the Training Set Was Established

    • Sono Pelvic Floor 3.0 (MHD): A two-stage curation process. First, curators identify the MHD plane and then mark anatomical structures. These curated datasets are then reviewed by expert arbitrators and "changes/edits made if necessary to maintain correctness and consistency in curations."
    • Sono Pelvic Floor 3.0 (Anal Sphincter): "3D segmentation of the Anal Canal using VOCAL tool in the 4D View5 Software." Curation protocol involved aligning the volume and segmenting the Anal Canal. Each volume was "reviewed by a skilled arbitrator for correctness."
    • SonoAVC Follicle 2.0: A "two-step approach" was followed. First, curators were trained on a "detailed curation protocol (developed by clinical experts)." Second, an automated quality control step confirmed mask/marking availability, and an arbitrator reviewed all datasets from each curator's completed data pool for clinical accuracy, with inconsistencies discussed by the curation team.
    • 2nd Trimester SonoLyst/SonoLystLive & 1st Trimester SonoLyst/SonoLystLive: The images were initially "curated (sorted and graded) by a single sonographer." If these differed from the ground truth (which implies a higher standard or previous ground truth for comparison), a "5-sonographer review panel" reviewed them and reclassified based on majority view to achieve the final ground truth.
