510(k) Data Aggregation

    K Number: K242168
    Date Cleared: 2024-12-20 (149 days)
    Regulation Number: 892.1550
    Reference Devices: K231301 Vscan Air, K211488 LOGIQ E10, K240111 Venue

    Intended Use

    The device is a general purpose ultrasound system intended for use by qualified and trained healthcare professionals. Specific clinical applications remain the same as previously cleared: Fetal/OB; Abdominal (including GYN, pelvic and infertility monitoring/follicle development); Pediatric; Small Organ (breast, testes, thyroid etc.); Neonatal and Adult Cephalic; Cardiac (adult and pediatric); Musculo-skeletal Conventional and Superficial; Vascular; Transvaginal (including GYN); Transrectal

    Modes of operation include: B, M, PW Doppler, CW Doppler, Color Doppler, Color M Doppler, Power Doppler, Harmonic Imaging, Coded Pulse, 3D/4D Imaging mode, Elastography, Shear Wave Elastography and Combined modes: B/M, B/Color, B/PWD, B/Color/PWD, B/Power/PWD, B/Elastography. The Voluson™ Expert 18, Voluson™ Expert 20, and Voluson™ Expert 22 are intended to be used in a hospital or medical clinic.

    Device Description

    The systems are full-featured Track 3 ultrasound systems, primarily for general radiology use and specialized for OB/GYN with particular features for real-time 3D/4D acquisition. They consist of a mobile console with keyboard control panel; color LCD/TFT touch panel, color video display and optional image storage and printing devices. They provide high performance ultrasound imaging and analysis and have comprehensive networking and DICOM capability. They utilize a variety of linear, curved linear, matrix phased array transducers including mechanical and electronic scanning transducers, which provide highly accurate real-time three-dimensional imaging supporting all standard acquisition modes.

    AI/ML Overview

    The provided document identifies the predicate devices as the Voluson Expert 18, Voluson Expert 20, and Voluson Expert 22; the K-number of the primary predicate device is K231965. The document does NOT describe acceptance criteria or a study demonstrating that those predicate devices meet acceptance criteria. Instead, it details the testing and acceptance criteria for the new or updated AI software features introduced with the new Voluson Expert Series devices (K242168).

    Here's a breakdown of the requested information based on the AI testing summaries provided for the new/updated features: Sono Pelvic Floor 3.0 (MHD and Anal Sphincter), SonoAVC Follicle 2.0, and 1st/2nd Trimester SonoLyst/SonoLystLive.


    Acceptance Criteria and Device Performance for New/Updated AI Features

    1. Table of Acceptance Criteria and Reported Device Performance

    Sono Pelvic Floor 3.0 (MHD)
    • Acceptance Criteria:
      • MHD Tracking, Minimum MHD Frame Detection, Maximum MHD Frame Detection: success rate of 70% or higher on datasets marked as "Good Image Quality"; 60% or higher on datasets marked as "Challenging Image Quality".
      • Overall MHD: 70% or higher on "Good IQ" datasets; 60% or higher on "Challenging Quality" datasets.
    • Reported Device Performance:
      • MHD Tracking: 89.3% (Good Image Quality), 77.7% (Challenging Image Quality)
      • Minimum MHD Frame Detection: 89.3% (Good Image Quality), 83.3% (Challenging Image Quality)
      • Maximum MHD Frame Detection: 90.66% (Good Image Quality), 77.7% (Challenging Image Quality)
      • Overall MHD: 81.9% (Good IQ datasets), 60.9% (Challenging quality datasets)

    Sono Pelvic Floor 3.0 (Anal Sphincter)
    • Acceptance Criteria:
      • Success rate of 70% or higher on datasets marked as "Good Image Quality"; 60% or higher on datasets marked as "Challenging Image Quality".
    • Reported Device Performance:
      • The document states "Verification results on actual verification data is as follows" but the table is missing the actual performance metrics for Anal Sphincter. It lists only "On Good IQ datasets: 81.9%" and "On Challenging quality datasets: 60.9%" under the MHD section, which may be overall success rates for the entire Sono Pelvic Floor 3.0 feature across both components, but this is not explicitly clear. The specific reported performance for Anal Sphincter is therefore not clearly presented in the provided text.

    SonoAVC Follicle 2.0
    • Acceptance Criteria:
      • The success rate for the AI feature should be 70% or higher (this appears to be an overall accuracy criterion).
    • Reported Device Performance:
      • Accuracy on test data acquired together with the training cohort: 94.73%
      • Accuracy on test data acquired consecutively post model development: 92.8%
      • Overall accuracy: 93.6%
      • Dice coefficient by follicle size range: 3-5 mm: 0.937619; 5-10 mm: 0.946289; 10-15 mm: 0.962315; >15 mm: 0.93206

    1st Trimester SonoLyst/SonoLystLive
    • Acceptance Criteria:
      • The average success rate of SonoLyst 1st Trimester IR, X and SonoBiometry CRL and the overall traffic light accuracy is 80% or higher.
    • Reported Device Performance:
      • The document restates the 80% criterion as the acceptance criterion and mentions that "Data used for both training and validation has been collected across multiple geographical sites...", but it does not explicitly provide the numerical performance value that met or exceeded the 80% criterion.

    2nd Trimester SonoLyst/SonoLystLive
    • Acceptance Criteria:
      • Acceptance criteria are met for both subgroups (variety of ultrasound systems/data formats vs. target platform). The specific numerical criteria are not explicitly stated, only that performance met them for demonstration of generalization.
    • Reported Device Performance:
      • The document states "For both subgroups the acceptance criteria are met." but does not explicitly provide the numerical performance values.
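
The acceptance criteria above are threshold checks on subgroup success rates (70% on "Good Image Quality" datasets, 60% on "Challenging Image Quality" datasets). The submission does not show how these were tallied; as a minimal illustrative sketch (field names and data are assumptions, not the manufacturer's verification code), a per-subgroup success-rate check could look like this:

```python
# Hypothetical sketch: per-subgroup success-rate check against the
# 70% (Good IQ) / 60% (Challenging IQ) acceptance thresholds above.
from collections import defaultdict

THRESHOLDS = {"good": 0.70, "challenging": 0.60}  # assumed subgroup labels

def subgroup_success_rates(results):
    """results: iterable of dicts like {"image_quality": "good", "success": True}."""
    counts = defaultdict(lambda: [0, 0])  # subgroup -> [successes, total]
    for r in results:
        counts[r["image_quality"]][0] += int(r["success"])
        counts[r["image_quality"]][1] += 1
    return {group: s / n for group, (s, n) in counts.items()}

def meets_acceptance(results):
    rates = subgroup_success_rates(results)
    passed = all(rates.get(group, 0.0) >= thr for group, thr in THRESHOLDS.items())
    return passed, rates

# Made-up example data: 90/100 successes on "good", 7/10 on "challenging".
demo = ([{"image_quality": "good", "success": True}] * 90
        + [{"image_quality": "good", "success": False}] * 10
        + [{"image_quality": "challenging", "success": True}] * 7
        + [{"image_quality": "challenging", "success": False}] * 3)
print(meets_acceptance(demo))  # (True, {'good': 0.9, 'challenging': 0.7})
```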

    2. Sample Sizes Used for the Test Set and Data Provenance

    • Sono Pelvic Floor 3.0 (MHD & Anal Sphincter):

      • Test Set Sample Size: 93 volumes for MHD, 106 volumes for Anal Sphincter.
      • Data Provenance: Data is provided by external clinical partners who de-identified the data. Original data collected in 4D volume Cines (*.vol5 or *.4dv6) or 4D/3D volume acquisitions (*.vol2 or *.4dv3).
      • Countries: A diverse range of countries contributed to the test data including Italy, U.S.A, Australia, Germany, Czech Republic, France, India (for MHD); and Italy, U.S.A, France, Germany, India (for Anal Sphincter).
      • Retrospective/Prospective: The data collection method ("re-process data to our needs retrospectively during scan conversion") suggests a retrospective approach to assembling the dataset, although a "standardized data collection protocol was followed for all acquisitions." New data was also acquired post-model development from previously unseen sites to test robustness.
    • SonoAVC Follicle 2.0:

      • Test Set Sample Size: 138 datasets, with a total follicle count of 2708 across all volumes.
      • Data Provenance: External clinical partners provided de-identified data in 3D volumes (*.vol or *.4dv).
      • Countries: Germany, India, Spain, United Kingdom, USA.
      • Retrospective/Prospective: The data was split into train/validation/test at the start of model development (suggesting retrospective). Additionally, consecutive data was acquired post-model development from previously unseen systems and probes to test robustness (suggesting some prospective element for this later test set).
    • 2nd Trimester SonoLyst/SonoLystLive:

      • Test Set Sample Size: "Total number of images: 2.2M", "Total number of cine loops: 3595". It's not explicitly stated how much of this was test data vs. training data, but it implies a large dataset for evaluation.
      • Data Provenance: Systems used for data collection included GEHC Voluson V730, E6, E8, E10, Siemens S2000, and Hitachi Aloka. Formats included DICOM & JPEG for still images and RAW data for cine loops.
      • Countries: UK, Austria, India, and USA.
      • Retrospective/Prospective: Not explicitly stated, but "All training data is independent from the test data at a patient level" implies a pre-existing dataset split rather than newly acquired prospective data solely for testing.
    • 1st Trimester SonoLyst/SonoLystLive:

      • Test Set Sample Size: SonoLyst 1st Trim IR: 5271 images, SonoLyst 1st Trim X: 2400 images, SonoLyst 1st Trim Live: 6000 images, SonoBiometry CRL: 110 images.
      • Data Provenance: Systems included GE Voluson V730, P8, S6/S8, E6, E8, E10, Expert 22, Philips Epiq 7G. Formats included DICOM & JPEG for still images and RAW data for cine loops.
      • Countries: UK, Austria, India, and USA.
      • Retrospective/Prospective: "All training data is independent from the test data at a patient level." "A statistically significant subset of the test data is independent from the training data at a site level, with no test data collected at the site being used in training." This indicates a retrospective collection with careful splitting, and some test data from unseen sites.
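
Several of the provenance notes above state that test data is independent from training data at the patient level, and partly at the site level. The document does not describe how this split was implemented; purely as an illustrative sketch (assuming a per-sample patient identifier is available), a grouped split that keeps all of a patient's samples on one side of the split can be done with scikit-learn:

```python
# Illustrative only: the submission does not describe its tooling.
# Splitting on an assumed patient_id group guarantees that no patient
# contributes samples to both the training and the test set.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 16))                 # placeholder features
y = rng.integers(0, 2, size=n)               # placeholder labels
patient_ids = rng.integers(0, 50, size=n)    # assumed patient identifier

splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
train_idx, test_idx = next(splitter.split(X, y, groups=patient_ids))

# No patient ID may appear in both index sets.
assert set(patient_ids[train_idx]).isdisjoint(set(patient_ids[test_idx]))
```

The same idea applies at the site level by grouping on a site identifier instead of a patient identifier.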

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • Sono Pelvic Floor 3.0 (MHD & Anal Sphincter), SonoAVC Follicle 2.0, 2nd Trimester SonoLyst/SonoLystLive, 1st Trimester SonoLyst/SonoLystLive:

      • Number of Experts: Three independent reviewers.
      • Qualifications: "at least two being US Certified sonographers, with extensive clinical experience."
    • Additional for 2nd Trimester SonoLyst/SonoLystLive & 1st Trimester SonoLyst/SonoLystLive:

      • For sorting/grading accuracy review, a "5-sonographer review panel" was used. Qualifications are not specified beyond being sonographers.

    4. Adjudication Method for the Test Set

    • Sono Pelvic Floor 3.0 (MHD & Anal Sphincter), SonoAVC Follicle 2.0, 2nd Trimester SonoLyst/SonoLystLive, 1st Trimester SonoLyst/SonoLystLive:
      • The evaluation was "based on interpretation of the AI output by reviewing clinicians." The evaluation was "conducted by three independent reviewers."
      • For 2nd and 1st Trimester SonoLyst/SonoLystLive, where sorting/grading accuracy was determined, if initial sorting/grading differed from the ground truth (established by a single sonographer then refined), a 5-sonographer review panel was used, and reclassification was based upon the "majority view of the panel." This implies a form of majority vote adjudication for these specific sub-tasks.
      • The general approach for the three reviewers, especially when evaluating AI output, implies an independent review, and while not explicitly stated, differences would likely lead to discussion or a form of consensus/adjudication. However, a strict 'X+Y' model (like 2+1 or 3+1) is not explicitly detailed.
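
Where the document describes reclassification "based upon the majority view of the panel," the adjudication amounts to a majority vote over the panel's labels. The exact procedure is not given; a minimal sketch under that assumption (label names and panel size are illustrative) is:

```python
# Hypothetical sketch of majority-vote adjudication by a review panel.
from collections import Counter

def adjudicate(initial_label, panel_labels):
    """Keep the initial label unless a strict panel majority disagrees,
    in which case reclassify to the panel's majority view."""
    majority_label, votes = Counter(panel_labels).most_common(1)[0]
    if majority_label != initial_label and votes > len(panel_labels) / 2:
        return majority_label
    return initial_label

# Example: the initial sonographer graded a plane as "suboptimal";
# 3 of 5 panel members say "acceptable", so it is reclassified.
print(adjudicate("suboptimal", ["acceptable"] * 3 + ["suboptimal"] * 2))
# -> acceptable
```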

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size

    • The document does not describe a multi-reader multi-case (MRMC) comparative effectiveness study designed to measure how human readers improve with AI vs. without AI assistance. The studies described are primarily aimed at assessing the standalone performance or workflow utility of the AI features.

    6. If a Standalone (i.e., Algorithm Only Without Human-in-the-Loop Performance) Was Done

    • Yes, standalone performance was assessed for all described AI features. The "Summary test Statistics" and "Verification Results" sections for each feature (Sono Pelvic Floor 3.0, SonoAVC Follicle 2.0, 1st/2nd Trimester SonoLyst/SonoLystLive) report the algorithm's direct performance (e.g., success rates, accuracy, Dice coefficient) against the established ground truth, indicating standalone evaluation. The "interpretation of the AI output by reviewing clinicians" method primarily focuses on validating the AI's direct result rather than a comparative human performance study.

    7. The Type of Ground Truth Used

    • Expert Consensus/Annotation:
      • Sono Pelvic Floor 3.0 (MHD): Ground truth was established through a "two-stage curation process." Curators identified the MHD plane and marked anatomical structures. These curated datasets were then "reviewed by expert arbitrators."
      • Sono Pelvic Floor 3.0 (Anal Sphincter): Ground truth involved "3D segmentation of the Anal Canal using VOCAL tool in the 4D View5 Software." Each volume was "reviewed by a skilled arbitrator for correctness."
      • SonoAVC Follicle 2.0: The "Truthing process for training dataset" indicates a "detailed curation protocol (developed by clinical experts)" and a "two-step approach" with an arbitrator reviewing all datasets for clinical accuracy.
      • 2nd Trimester SonoLyst/SonoLystLive & 1st Trimester SonoLyst/SonoLystLive: Ground truth for sorting/grading was initially done by a single sonographer, then reviewed by a "5-sonographer review panel" for accuracy, with reclassification based on majority view if needed.

    8. The Sample Size for the Training Set

    • Sono Pelvic Floor 3.0 (MHD): Total Volumes: 983
    • Sono Pelvic Floor 3.0 (Anal Sphincter): Total Volumes: 828
    • SonoAVC Follicle 2.0: Total Volumes: 249
    • 2nd Trimester SonoLyst/SonoLystLive: "Total number of images: 2.2M", "Total number of cine loops: 3595". (The precise breakdown of training vs. test from this total isn't given for this feature, but it's a large overall dataset).
    • 1st Trimester SonoLyst/SonoLystLive: 122,711 labelled source images from 35,861 patients.

    9. How the Ground Truth for the Training Set Was Established

    • Sono Pelvic Floor 3.0 (MHD): A two-stage curation process. First, curators identify the MHD plane and then mark anatomical structures. These curated datasets are then reviewed by expert arbitrators and "changes/edits made if necessary to maintain correctness and consistency in curations."
    • Sono Pelvic Floor 3.0 (Anal Sphincter): "3D segmentation of the Anal Canal using VOCAL tool in the 4D View5 Software." Curation protocol involved aligning the volume and segmenting the Anal Canal. Each volume was "reviewed by a skilled arbitrator for correctness."
    • SonoAVC Follicle 2.0: A "two-step approach" was followed. First, curators were trained on a "detailed curation protocol (developed by clinical experts)." Second, an automated quality control step confirmed mask/marking availability, and an arbitrator reviewed all datasets from each curator's completed data pool for clinical accuracy, with inconsistencies discussed by the curation team.
    • 2nd Trimester SonoLyst/SonoLystLive & 1st Trimester SonoLyst/SonoLystLive: The images were initially "curated (sorted and graded) by a single sonographer." If these differed from the ground truth (which implies a higher standard or previous ground truth for comparison), a "5-sonographer review panel" reviewed them and reclassified based on majority view to achieve the final ground truth.

    K Number: K231989
    Date Cleared: 2023-11-07 (125 days)
    Regulation Number: 892.1550
    Reference Devices: K211488 LOGIQ E10 Diagnostic Ultrasound System, K202035 Vscan Air, K181685 Vivid E80/Vivid E90/Vivid

    Intended Use

    The LOGIQ E10s and LOGIQ Fortis are intended for use by a qualified physician for ultrasound evaluation.
    Specific clinical applications and exam types include: Fetal / Obstetrics; Abdominal (including Renal, Gynecology/Pelvic); Pediatric; Small Organ (Breast, Testes, Thyroid); Neonatal Cephalic; Adult Cephalic; Cardiac (Adult and Pediatric); Peripheral Vascular; Musculo-skeletal Conventional and Superficial; Urology (including Prostate); Transrectal; Transvaginal; Transesophageal and Intraoperative (Abdominal, Vascular).
    Modes of operation include: B, M, PW Doppler, CW Doppler, Color Doppler, Color M Doppler, Power Doppler, Harmonic Imaging, Coded Pulse, 3D/4D Imaging mode, Elastography, Shear Wave Elastography, Attenuation Imaging and Combined modes: B/M, B/Color, B/PWD, B/Color/PWD, B/Power/PWD.
    The LOGIQ E10s and LOGIQ Fortis are intended to be used in a hospital or medical clinic.

    Device Description

    The LOGIQ E10s is a full featured, Track 3, general purpose diagnostic ultrasound system which consists of a mobile console approximately 585 mm wide (keyboard), 991 mm deep and 1300 mm high that provides digital acquisition, processing and display capability. The user interface includes a computer keyboard, specialized controls, 12-inch high-resolution color touch screen and 23.8-inch High Contrast LED LCD monitor.
    The LOGIQ Fortis is a full featured, Track 3, general purpose diagnostic ultrasound system which consists of a mobile console approximately 575 mm wide (keyboard), 925 mm deep and 1300 mm high that provides digital acquisition, processing and display capability. The user interface includes a digital keyboard (physical keyboard as an option), specialized controls, 12-inch high-resolution color touch screen and 23.8-inch High Contrast LED LCD monitor (or 23.8-inch High Resolution LED LCD monitor as an option).

    AI/ML Overview

    The provided text describes three AI features of the LOGIQ E10s and LOGIQ Fortis systems: Auto Renal Measure Assistant, Auto Abdominal Color Assistant, and Auto Preset Assistant. The information provided for each feature allows for a detailed breakdown of their acceptance criteria and the studies conducted to prove they meet these criteria.

    Here's the requested information structured for clarity:


    1. Table of Acceptance Criteria and Reported Device Performance

    Auto Renal Measure Assistant
    • Acceptance Criteria: Longitudinal model accuracy for length measurements expected to be > 80%. Transverse model accuracy for width measurements expected to be > 70%.
    • Reported Device Performance:
      • Longitudinal model for length measurements: average accuracy of 96.45% (95% CI: ±1.26%), average absolute error of 0.35 cm (95% CI: ±0.12 cm).
      • Transverse model for width measurements (first mention): average accuracy of 92.94% (95% CI: ±3.02%), average absolute error of 0.38 cm (95% CI: ±0.14 cm).
      • Transverse model for width measurements (second mention, likely a typo/repetition): average accuracy of 93.13% (95% CI: ±3.63%), average absolute error of 0.37 cm (95% CI: ±0.14 cm).

    Auto Abdominal Color Assistant
    • Acceptance Criteria: Overall model success rate for Aorta, Kidney, Liver, GB, and Pancreas view suggestion expected to be 80% or higher.
    • Reported Device Performance: Specific accuracy percentages for each view are not individually reported in the summary, but the success rate is implied to have met or exceeded the 80% threshold, as the device is deemed substantially equivalent. The summary states "Calculated the accuracies of the algorithm against each class," which suggests these were evaluated.

    Auto Preset Assistant
    • Acceptance Criteria: Overall model success rate for Abdomen, Air, Breast, Carotid, Leg, MSK, Scrotal, Thyroid, and Carotid/Thyroid (Mixed) view suggestion expected to be 80% or higher.
    • Reported Device Performance: Specific accuracy percentages for each view are not individually reported in the summary, but the success rate is implied to have met or exceeded the 80% threshold, as the device is deemed substantially equivalent. The summary states "Calculated the accuracies of the algorithm against each class," which suggests these were evaluated.
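
The Auto Renal Measure Assistant figures above are an average accuracy and an average absolute error, each with a 95% confidence interval. The summary does not show how they were derived; one common way to produce such figures (a sketch with made-up measurements and a normal-approximation CI, not the sponsor's actual analysis) is:

```python
# Illustrative sketch: average measurement accuracy and absolute error
# with normal-approximation 95% confidence intervals.
import numpy as np

# Assumed arrays of automated vs. reference kidney length measurements (cm).
auto_cm = np.array([10.1, 9.6, 11.2, 10.8, 9.9])
ref_cm  = np.array([10.0, 10.0, 11.0, 11.0, 10.0])

abs_err = np.abs(auto_cm - ref_cm)       # absolute error in cm
accuracy = 1.0 - abs_err / ref_cm        # fraction of the reference value

def mean_ci95(x):
    """Mean and half-width of a normal-approximation 95% CI."""
    half_width = 1.96 * x.std(ddof=1) / np.sqrt(len(x))
    return x.mean(), half_width

acc_mean, acc_ci = mean_ci95(accuracy)
err_mean, err_ci = mean_ci95(abs_err)
print(f"accuracy {acc_mean:.2%} (95% CI: +/-{acc_ci:.2%})")
print(f"absolute error {err_mean:.2f} cm (95% CI: +/-{err_ci:.2f} cm)")
```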

    2. Sample Sizes and Data Provenance for Test Sets

    • Auto Renal Measure Assistant:
      • Test Set Sample Size: 30 patients, resulting in 60 samples (30 longitudinal views, 30 transverse views).
      • Data Provenance: Prospective collection. Data from USA (58%) and Japan (42%).
    • Auto Abdominal Color Assistant:
      • Test Set Sample Size: 50+ patients, resulting in 1100+ images.
      • Data Provenance: Not explicitly stated as retrospective or prospective, but collected from USA (77%) and Australia (23%).
    • Auto Preset Assistant:
      • Test Set Sample Size: 110+ patients, resulting in 2600+ images.
      • Data Provenance: Not explicitly stated as retrospective or prospective, but collected from USA (41.2%), Austria (3.8%), Australia (1.1%), Japan (41.3%), Italy (0.7%), and Greece (12%).

    3. Number of Experts and Qualifications for Ground Truth

    • Auto Renal Measure Assistant:
      • Number of Experts: 2 "Readers" and 1 "Board Certified Nephrologist" for arbitration.
      • Qualifications: "certified sonographer/Clinician" for the two readers. "Board Certified Nephrologist" for the arbitrator.
    • Auto Abdominal Color Assistant:
      • Number of Experts: Unspecified number of "Readers".
      • Qualifications: "certified sonographer/Clinician" for the readers.
    • Auto Preset Assistant:
      • Number of Experts: Unspecified number of "Readers".
      • Qualifications: "certified sonographer/Clinician" for the readers.

    4. Adjudication Method for Test Sets

    • Auto Renal Measure Assistant:
      • Method: A "Board Certified Nephrologist arbitrated the ground truth between the above two readers to establish the reference standard". This implies a 2+1 (two readers, one arbitrator) method.
    • Auto Abdominal Color Assistant & Auto Preset Assistant:
      • Method: The text states, "Readers (certified sonographer/Clinician) to ground truth the 'anatomy' visible in static B-Mode image." There is no mention of multiple readers or an arbitration process, implying no explicit inter-reader adjudication method was described beyond individual expert annotation.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • No MRMC comparative effectiveness study was explicitly described in the provided text. The studies focus on the standalone performance of the AI algorithms against a derived ground truth, rather than comparing human reader performance with and without AI assistance. Therefore, no effect size for human readers' improvement with AI assistance is reported.

    6. Standalone (Algorithm Only) Performance Study

    • Yes, standalone (algorithm only) performance studies were done for all three AI features listed. The studies evaluate the accuracy or success rate of the AI algorithms in performing their intended functions (measurement, view suggestion) against an established ground truth, without a human-in-the-loop component being explicitly tested or reported.

    7. Type of Ground Truth Used

    • Auto Renal Measure Assistant: Expert Consensus, as it involved two readers and an arbitrator to establish the reference standard for measurements.
    • Auto Abdominal Color Assistant & Auto Preset Assistant: Expert Annotation/Consensus, established by "Readers (certified sonographer/Clinician) to ground truth the 'anatomy' visible in static B-Mode image." While not explicitly stated as consensus among multiple readers, it is established by qualified experts.

    8. Sample Size for Training Sets

    • The training set sample sizes are not explicitly provided in the summaries for any of the AI features. The document only mentions that the "verification data was acquired independently during validation process after the development of the model," and "The exams used for test/training validation purpose are separated from the ones used during training process." This implies training data existed but its size is not detailed.

    9. How Ground Truth for Training Sets Was Established

    • The document does not explicitly describe how the ground truth for the training sets was established. It focuses primarily on the process for the test/validation sets. However, it can be inferred that a similar process involving expert clinicians/sonographers would have been used to establish ground truth for training data, as is common practice in medical imaging AI development.

    Reference Devices: K210699, K192159, K211488, K200643, K100931, K212704

    Intended Use

    Resona R9/Resona R9 Exp/Resona R9S/Nuewa R9/Nuewa R9 Pro/Nuewa R9S/Resona 7/Resona 7CV/Resona 7EXP/Resona 7OB/Resona Y/Resona R9W/Resona R7W/Nuewa R9W/Nuewa R7W Diagnostic Ultrasound System is applicable for adults, pregnant women, pediatric patients and neonates. It is intended for use in fetal, abdominal, intra-operative, small organ (breast, thyroid, testes), neonatal and adult cephalic, trans-rectal, trans-vaginal, musculo-skeletal (conventional), adult and pediatric cardiac, trans-esophageal (cardiac), peripheral vessel and urology exams.

    This device is a general purpose diagnostic ultrasound system intended for use by qualified and trained healthcare professionals for ultrasound imaging, measurement, display and analysis of the human body and fluid, which is intended to be used in a hospital or medical clinic.

    Modes of operation include: B, M, PWD (Pulse Wave Doppler), CWD (Continuous Wave Doppler), Color Doppler, Amplitude Doppler, Combined mode (B+M, PW+B, Color+B, PW+Color+B, Power+PW+B), Tissue Harmonic Imaging, Smart 3D, 4D (Real-time 3D), iScape View, TDI (Tissue Doppler Imaging), Color M, Strain Elastography, Contrast imaging (contrast agent for LVO, Left Ventricular Opacification), V Flow (Vector Flow), STE (Sound Touch Elastography), STQ (Sound Touch Quantification), Contrast imaging (contrast agent for Liver).

    Device Description

    The Resona R9, Resona R9 Exp, Resona R9 Pro, Resona R9S, Nuewa R9, Nuewa R9 Exp, Nuewa R9 Pro, Nuewa R9S, Resona 7, Resona 7CV, Resona 7EXP, Resona 7S, Resona 7OB, Resona 7PRO, Imagyn 7, Resona Y, Resona R9W, Resona R7W, Nuewa R9W, Nuewa R7W Diagnostic Ultrasound System is a general-purpose, mobile, software-controlled ultrasonic diagnostic system.

    This system is a Track 3 device that employs an array of probes that include linear array, Phased array, pencil phased and convex array.

    AI/ML Overview

    The provided text describes a 510(k) premarket notification for the Resona R9 series Diagnostic Ultrasound System, which introduces modifications and new features to an already cleared predicate device (Resona R9, K202785). The submission focuses on demonstrating substantial equivalence to the predicate devices, rather than comprehensive clinical studies on the device's diagnostic performance for specific conditions.

    The study presented here is a non-clinical validation of new features against predefined engineering performance criteria, primarily using phantom studies.

    Here's the breakdown of the information requested, based on the provided text:


    Acceptance Criteria and Reported Device Performance

    The acceptance criteria and reported device performance are specified for three new features: FH Tissue Tracking QA, UltraSound ATtenuation analysis, and HepatoRenal Index Plus. These are performance metrics related to the accuracy of quantitative measurements.

    FH Tissue Tracking QA
    • Acceptance Criteria: Bias within ±20%
    • Evaluation Method / Reported Performance: Obtained 10 fetal heart B-mode image samples, compared manually obtained values with FH TTQA-obtained values, and calculated the deviation. (Implicitly, the results met the ±20% bias criterion, as the device was cleared for market.)

    UltraSound ATtenuation analysis
    • Acceptance Criteria: Bias within ±5%
    • Evaluation Method / Reported Performance: Selected four groups of phantoms with different acoustic attenuation values, measured the acoustic attenuation values, and calculated the deviation between the measured and calibrated phantom values. (Implicitly, the results met the ±5% bias criterion, as the device was cleared for market.)

    HepatoRenal Index Plus
    • Acceptance Criteria: Bias within ±10%
    • Evaluation Method / Reported Performance: Selected four groups of H/R-ROIs with different gray-scales in a phantom and calculated the deviation between measured values and target values of the phantom. (Implicitly, the results met the ±10% bias criterion, as the device was cleared for market.)
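
Each of the criteria above bounds the measurement bias relative to a known reference (a calibrated phantom value, or a manually obtained value for FH TTQA). The submission does not show the arithmetic; as an illustrative sketch only (variable names and numbers are assumptions), percent bias against a phantom's calibrated value could be checked like this:

```python
# Illustrative sketch: percent bias of measured values against a
# calibrated phantom value, checked against a bound such as +/-5%.
import numpy as np

def percent_bias(measured, calibrated):
    """Mean signed deviation from the calibrated value, as a percentage."""
    measured = np.asarray(measured, dtype=float)
    return 100.0 * np.mean((measured - calibrated) / calibrated)

# Made-up example: calibrated attenuation of 0.70 dB/cm/MHz and three
# repeated measurements on the same phantom group.
bias = percent_bias([0.69, 0.72, 0.71], calibrated=0.70)
print(f"bias = {bias:+.1f}%  -> within +/-5%: {abs(bias) <= 5.0}")
```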

    Additional Information on the Study:

    1. Sample size used for the test set and the data provenance:

      • FH Tissue Tracking QA: 10 fetal heart B-mode image samples. Data provenance is not explicitly stated (e.g., country of origin, retrospective/prospective), but the context of non-clinical testing with "image samples" suggests these were likely existing or specifically generated images, not new prospective patient data for this submission.
      • UltraSound ATtenuation analysis: Four groups of phantoms.
      • HepatoRenal Index Plus: Four groups of H/R-ROIs in a phantom.
      • Data Provenance: For the quantitative features, the testing primarily involved phantoms or existing image samples rather than new prospective patient data. The document does not specify country of origin for any human data or the retrospective/prospective nature of image samples.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • FH Tissue Tracking QA: Ground truth was established by "manual-obtained values." The number of experts and their qualifications (e.g., "radiologist with 10 years of experience") are not specified.
      • UltraSound ATtenuation analysis & HepatoRenal Index Plus: Ground truth was established by the "calibrated value of the phantom" or "target value of the phantom." This implies a reference standard from the phantom's known properties, not human experts.
    3. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

      • For the quantitative measurements using phantoms, adjudication is generally not applicable as the phantom itself provides the ground truth.
      • For "FH Tissue Tracking QA" where "manual-obtained values" are compared, the adjudication method is not specified.
    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, if so, what was the effect size of how much human readers improve with AI vs without AI assistance:

      • No MRMC comparative effectiveness study was done. The document explicitly states: "Clinical Studies: Not applicable. The subject of this submission... does not require clinical studies to support substantial equivalence." The studies described are non-clinical engineering performance assessments of new features.
    5. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:

      • The tests for "FH Tissue Tracking QA", "UltraSound ATtenuation analysis", and "HepatoRenal Index Plus" assessed the performance of the algorithm/system in extracting quantitative measurements, comparing them to ground truth (manual measurement or phantom values). This is essentially a standalone (algorithm only) performance evaluation for these specific features.
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

      • FH Tissue Tracking QA: Manual-obtained values (presumably from an expert, though not detailed).
      • UltraSound ATtenuation analysis & HepatoRenal Index Plus: Calibrated/target values of phantoms (physical reference standards).
    7. The sample size for the training set:

      • The document does not provide information regarding the sample size for any training set. This submission is for a modification/upgrade to an existing device, and the focus is on the performance of added features rather than the development of the core algorithm from fresh training data.
    8. How the ground truth for the training set was established:

      • Since information on a training set is not provided, how its ground truth was established is also not specified.


    K Number: K222441
    Date Cleared: 2022-12-07 (117 days)
    Regulation Number: 892.1550
    Reference Devices: K211488

    Intended Use

    Intended Use:
    The system is a diagnostic ultrasound imaging system used by qualified and trained healthcare professionals for ultrasound imaging, human body fluid flow analysis and puncture and biopsy guidance.

    Indications to Use:
    The clinical applications and exam types include:
    Fetal (including Obstetrics), Abdominal, Pediatric, Intra-operative Neuro (also known as Neurosurgery), Laparoscopic, Small Organ (also known as Small Parts), Adult Cephalic (also known as Adult Trans-cranial), Neonatal Cephalic, Trans-rectal, Trans-vaginal, Musculo-skeletal (Conventional and Superficial), Cardiac Adult, Transesophageal (Cardiac) and Peripheral Vessel (also known as Peripheral Vascular).

    Modes of Operation:

    • 2D (B-Mode) including Tissue Harmonic Imaging
    • M-Mode
    • PWD Mode
    • CFM Mode (C, VFI)
    • Power Doppler
    • Contrast Imaging
    • CW Doppler
    • Strain Elastography

    Environment:
    The Ultrasound System 2300 is intended for use in the professional healthcare environment (e.g. hospitals, physician offices)

    Contraindications:
    The Ultrasound System 2300 is not intended for ophthalmic use or any use causing the acoustic beam to pass through the eye.
    The Cardiac Adult application is not intended for direct use on the heart.

    Device Description

    The Ultrasound System 2300 is a multi-purpose mobile, software-controlled diagnostic ultrasound system with an on-screen display for thermal and mechanical indices related to potential bio-effect mechanisms which are offered in different configurations/ models intended for urology, general imaging, surgical and anesthesiology applications.

    The system consists of a mobile console (engine) that provides digital acquisition, processing and display capabilities. The user interface includes a conventional keyboard or a glass touchpad, a 19" Clinical Display Monitor (CDM). In addition, a variety of system accessories are available such as baskets, foot switch, printer start-up kit, remote control, and extra holders.

    The Ultrasound System 2300 is available in the following marketing configurations:

      1. bk3000 available with a conventional keyboard configuration. The bk3000 is primarily intended for applications such as urology and general imaging.
      2. bk5000 available with a conventional keyboard configuration. The bk5000 is primarily intended for surgery applications.
      3. bkActiv is a configuration available with a glass user interface (UI). bkActiv is primarily intended for surgical and anesthesiology applications.

    All configurations run on the previously cleared SW platform and HW platform (engine) (K180737). The various configurations of the Ultrasound System 2300 are intended to be used for different applications as described above with various transducers and options.

    AI/ML Overview

    The provided text does not contain detailed information about acceptance criteria or a study that proves the device meets those criteria. The document is an FDA 510(k) summary for an ultrasound system, focusing on its substantial equivalence to a predicate device.

    Specifically, the "Performance Data" section (page 19-20) only mentions non-clinical performance (bench testing) related to safety and compliance with voluntary standards (e.g., acoustic output, biocompatibility, cleaning and disinfection, thermal, electrical, electromagnetic, and mechanical safety). It explicitly states:

    • "Animal Testing: Not applicable - animal testing was not required to support substantial equivalence to the predicate device."
    • "Clinical Studies: Not applicable – clinical studies were not required to support substantial equivalence to the predicate device."

    Therefore, I cannot extract the information required to answer your request regarding acceptance criteria and a study proving the device meets those criteria, as no such study is described in this document.


    K Number: K220882
    Date Cleared: 2022-07-22 (119 days)
    Regulation Number: 892.1550
    Reference & Predicate Devices: N/A
    Reference Devices: LOGIQ E10 (K211488), Venue (K202132), Vivid E95 (K181685), Collaboration Live (K200179), Customer Remote

    Intended Use

    Vivid E80/Vivid E90/Vivid E95 is a general-purpose ultrasound system, specialized for use in cardiac imaging. It is intended for use by, or under the direction of, a qualified and trained physician for ultrasound imaging, measurement, display and analysis of the human body and fluid. The device is intended for use in a hospital environment including echo lab, other hospital settings, operating room, Cath lab and EP lab, or in private medical offices. The systems support the following clinical applications: Fetal/Obstetrics, Abdominal (including renal, GYN), Pediatric, Small Organ (breast, testes, thyroid), Neonatal Cephalic, Cardiac (adult and pediatric), Peripheral Vascular, Musculo-skeletal Conventional, Musculo-skeletal Superficial, Urology (including prostate), Transvaginal, Transrectal, Intra-cardiac and Intra-luminal Guidance (including Biopsy, Vascular Access), Thoracic/Pleural and Intraoperative (vascular). Modes of operation include: 3D, Real time (RT) 3D (4D), B, M, PW Doppler, CW Doppler, Color Doppler, Color M Doppler, Power Doppler, Harmonic Imaging, Coded Pulse and Combined modes: B/M, B/Color M, B/PWD or CWD, B/Color/PWD or CWD, B/Power/PWD.

    Device Description

    Vivid™ E80 / Vivid E90 / Vivid E95 is a Track 3, diagnostic ultrasound system for use by qualified and trained healthcare professionals, which is primarily intended for cardiac imaging and analysis but also includes vascular and general radiology applications. It is a full featured diagnostic ultrasound system that provides digital acquisition, processing, analysis and display capabilities.

    The Vivid E80 / Vivid E90 / Vivid E95 consists of a mobile console with a height-adjustable control panel, color LCD touch panel, and display monitor. It includes a variety of electronic array transducers operating in linear, curved, sector/phased array, matrix array or dual array format, including dedicated CW transducers and real time 3D transducers. System can also be used with compatible ICE transducer.

    The system includes electronics for transmit and receive of ultrasound data, ultrasound signal processing, software computing, hardware for image storage, and network access to the facility through both LAN and wireless (supported by use of a wireless LAN USB-adapter) connection. The system includes capability to output data to other devices like printing devices.

    AI/ML Overview

    The device in question is the Vivid E80/Vivid E90/Vivid E95 ultrasound system, which includes Artificial Intelligence (AI) features named Easy Auto EF and Easy AFI LV.

    Here's a breakdown of the acceptance criteria and the study that proves the device meets them:

    1. Table of Acceptance Criteria and Reported Device Performance:

    | Acceptance Criteria (for AI algorithm accuracy) | Reported Device Performance (Average Dice Score) |
    | --- | --- |
    | 92% or higher for datasets from different countries | 92% or higher |
    | 91% or higher for datasets from different scanning views | 91% or higher |
    | 92% or higher for datasets from different left ventricle volumes | 92% or higher |
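
The acceptance metric in the table above is an average Dice score comparing the algorithm's left-ventricle segmentation with the expert ground-truth delineation. The document does not include the computation itself; a minimal sketch of the Dice overlap between two binary masks (mask contents here are made up) is:

```python
# Illustrative sketch: Dice similarity coefficient between a predicted
# and a reference (ground-truth) binary segmentation mask.
import numpy as np

def dice_score(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice = 2*|A & B| / (|A| + |B|) for binary masks A and B."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom

# Made-up 2D example; in practice the masks would be LV delineations.
pred = np.zeros((8, 8), dtype=bool); pred[2:6, 2:6] = True
ref  = np.zeros((8, 8), dtype=bool); ref[3:7, 2:6]  = True
print(f"Dice = {dice_score(pred, ref):.2f}")  # 0.75
```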

    2. Sample Size Used for the Test Set and Data Provenance:

    • Number of exams: 45, assumed to be from 45 distinct patients (the exact number of patients is unknown due to anonymization).
    • Number of samples (images): 135 images extracted from the 45 exams.
    • Data Provenance: Retrospective, collected from different countries across Europe, Asia, and the US. The dataset included adult patients; specific age and gender were unknown due to anonymization.
    • Clinical Subgroups and Confounders: The test dataset included images from different countries, different scanning views, and a range of different Left Ventricle (LV) volumes.
    • Equipment and Protocols: Mixed data from 5 different probes and 4 different Console variants. Data collection protocol was standardized across all sites.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications:

    • Initial Ground Truthing: Two certified cardiologists.
    • Adjudication/Consensus: A panel of experienced experts further reviewed annotations that the two cardiologists could not agree on.
    • Qualifications: "Certified cardiologists" for initial delineation and "experienced experts" for the panel. Specific experience levels (e.g., years of experience) are not provided.

    4. Adjudication Method for the Test Set:

    • Method: A 2+1 (or 2+panel) adjudication method was used.
      • First, two certified cardiologists performed manual delineation and reviewed each other's annotations.
      • A consensus reading was performed where the two cardiologists discussed disagreements.
      • If they could not agree, a panel of experienced experts reviewed the annotations to reach a final consensus.
    • Ground Truth Definition: The ground truth used was the annotations that the initial two cardiologists agreed upon, and the consensus annotations achieved by the expert panel for disagreed cases.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done:

    • No MRMC comparative effectiveness study involving human readers with and without AI assistance was mentioned in the provided text. The evaluation focuses on the standalone performance of the AI algorithm.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done:

    • Yes, a standalone performance evaluation of the AI algorithm (Easy Auto EF and Easy AFI LV) was conducted. The accuracy was measured using the average Dice score based on the ground truth established by expert consensus.

    7. The Type of Ground Truth Used:

    • Expert Consensus: The ground truth for the test set was established through a multi-stage process involving manual delineation by two certified cardiologists, their peer review, and a final consensus by a panel of experienced experts.

    8. The Sample Size for the Training Set:

    • The document states that to ensure independence, "we used datasets from different clinical sites for testing as compared to the clinical sites for training." However, the specific sample size of the training set is not provided in the given text.

    9. How the Ground Truth for the Training Set Was Established:

    • The document implies that training data existed ("datasets from different clinical sites for training"), but it does not explicitly describe how the ground truth for the training set was established.

    K Number: K220940
    Date Cleared: 2022-07-22 (113 days)
    Regulation Number: 892.2050
    Reference Devices: K211488, K202658, K202132

    Intended Use

    EchoPAC Software Only / EchoPAC Plug-in is intended for diagnostic review and analysis of ultrasound images, patient record management and reporting, for use by, or on the order of a licensed physician. EchoPAC Software Only / EchoPAC Plug-in allows post-processing of raw data images from GE ultrasound scanners and DICOM ultrasound images.

    Ultrasound images are acquired via B (2D), M, Color M modes, Color, Power, Pulsed and CW Doppler modes, Coded Pulse, Harmonic,3D, and Real time (RT) 3D Mode (4D).

    Clinical applications include: Fetal/Obstetrics; Abdominal (including renal and GYN); Urology (including prostate); Pediatric; Small organs (breast, testes, thyroid); Neonatal and Adult Cephalic; Cardiac (adult and pediatric); Peripheral Vascular; Transesophageal (TEE); Musculo-skeletal Conventional; Musculo-skeletal Superficial; Transrectal (TR); Transvaginal (TV); Intraoperative (vascular); Intra-Cardiac; Thoracic/Pleural and Intra-Luminal.

    Device Description

    EchoPAC Software Only / EchoPAC Plug-in provides image processing, annotation, analysis, measurement, report generation, communication, storage and retrieval functionality to ultrasound images that are acquired via the GE Healthcare Vivid family of ultrasound systems, as well as DICOM images from other ultrasound systems. EchoPAC Software Only will be offered as SW only to be installed directly on customer PC hardware and EchoPAC Plug-in is intended to be hosted by a generalized PACS host workstation. EchoPAC Software Only / EchoPAC Plug-in is DICOM compliant, transferring images and data via LAN between systems, hard copy devices, file servers and other workstations.

    AI/ML Overview

    The provided FDA 510(k) summary for GE Medical Systems Ultrasound and Primary Care Diagnostics, LLC's EchoPAC Software Only/EchoPAC Plug-in includes an "AI Summary of Testing" section for the Easy Auto EF and Easy AFI LV algorithms. This section provides information relevant to acceptance criteria and study details.

    Here's a breakdown of the requested information based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria are implied by the reported performance figures, as they state the accuracy achieved.

    | Acceptance Criteria (Implied) | Reported Device Performance (Accuracy) |
    | --- | --- |
    | ≥ 92% average Dice score (general) | 92% or higher |
    | ≥ 91% average Dice score (different scanning views) | 91% or higher |
    | ≥ 92% average Dice score (different left ventricle volumes) | 92% or higher |

    Note: The document only provides Dice score for "accuracy." Other common performance metrics like sensitivity, specificity, or F1-score are not explicitly stated.

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set:
      • Number of exams: 45, assumed to be from 45 distinct patients (the exact number of patients is unknown due to anonymization).
      • Number of samples (images): 135 images extracted from the 45 exams.
    • Data Provenance: Europe, Asia, US (retrospective, as indicated by anonymization and collection for testing).

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    • Number of Experts: Two certified cardiologists initially, with a panel of experienced experts for adjudication.
    • Qualifications of Experts:
      • Two certified cardiologists (for initial manual delineation and review).
      • A panel of experienced experts (for reviewing annotations that the two cardiologists could not agree on). Specific years of experience are not mentioned beyond "experienced."

    4. Adjudication Method

    The adjudication method used was a 2+1 process (consensus followed by expert panel review):

    1. Consensus Reading: Two certified cardiologists performed manual delineation and then reviewed each other's annotations. They discussed disagreements to reach a consensus.
    2. Expert Panel Review: If the two cardiologists could not agree on an annotation, a panel of experienced experts further reviewed those annotations to establish the final ground truth.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    The document does not mention a Multi-Reader Multi-Case (MRMC) comparative effectiveness study. It focuses on the standalone performance of the AI algorithm. Therefore, no effect size of human readers improving with AI vs. without AI assistance is provided.

    6. Standalone Performance Study

    Yes, a standalone (i.e., algorithm-only without human-in-the-loop performance) study was done. The reported Dice scores directly evaluate the algorithm's accuracy in segmenting regions of interest, independent of human interaction during the measurement process.

    7. Type of Ground Truth Used

    The type of ground truth used was expert consensus. It was derived from manual delineations by certified cardiologists, with a further review and consensus by an expert panel for disagreements.

    8. Sample Size for the Training Set

    The document does not explicitly state the sample size for the training set. It only mentions that "datasets from different clinical sites for testing as compared to the clinical sites for training" were used.

    9. How the Ground Truth for the Training Set Was Established

    The document does not explicitly state how the ground truth for the training set was established. It only describes the process for the test set's ground truth. However, it is generally assumed that similar expert-driven annotation processes would have been used for training data.


    K Number: K220619
    Date Cleared: 2022-07-15 (134 days)
    Regulation Number: 892.1550
    Reference Devices: LOGIQ E10 (K211488), Vivid E95 (K202658), Venue (K202132), Collaboration Live (K200179), Customer Remote

    Intended Use

    Vivid S60N/Vivid S70N is a general-purpose ultrasound system, specialized for use in cardiac imaging. It is intended for use by, or under the direction of, a qualified and trained physician for ultrasound imaging, measurement, display and analysis of the human body and fluid. The device is intended for use in a hospital environment including echo lab, other hospital settings, operating room, Cath lab and EP lab, or in private medical offices. The systems support the following clinical applications: Fetal/Obstetrics, Abdominal (including renal, GYN), Pediatric, Small Organ (breast, thyroid), Neonatal Cephalic, Adult Cephalic, Cardiac (adult and pediatric), Peripheral Vascular, Musculo-skeletal Conventional, Musculo-skeletal Superficial, Urology (including prostate), Transvaginal, Transrectal, Intra-cardiac and Intra-luminal, Interventional Guidance (including Biopsy, Vascular Access), Thoracic/Pleural, and Intraoperative (vascular). Modes of operation include: 3D/4D Imaging mode, B, M, PW Doppler, Color Doppler, Color M Doppler, Power Doppler, Harmonic Imaging, Coded Pulse and Combined modes: B/M, B/PWD or CWD, B/Color/PWD or CWD, B/Power/PWD.

    Device Description

    Vivid S60N / Vivid S70N is a Track 3, diagnostic ultrasound system for use by qualified and trained healthcare professionals, which is primarily intended for cardiac imaging and analysis but also includes vascular and general radiology applications. It is a full featured diagnostic ultrasound system that provides digital acquisition, processing, analysis and display capability.

    The Vivid S60N / Vivid S70N consists of a mobile console with a height-adjustable control panel, color LCD touch panel, LCD display monitor and optional image storage and printing devices. It includes a variety of electronic array transducers operating in linear, curved, sector/phased array, matrix array or dual array format, including dedicated CW transducers and real time 3D transducer. System can also be used with compatible ICE transducers.

    The system includes electronics for transmit and receive of ultrasound data, ultrasound signal processing, software computing, hardware for image storage, hard copy printing, and network access to the facility through both LAN and wireless (supported by use of a wireless LAN USB adapter) connection.

    AI/ML Overview

    The provided text focuses on the 510(k) premarket notification for the GE Vivid S60N/S70N ultrasound system. It details device descriptions, intended use, technological characteristics, and non-clinical tests. Crucially, it includes information on the "AI Summary of Testing: Easy Auto EF and Easy AFI LV," which addresses the performance of the AI algorithms incorporated into the device.

    Here's a breakdown of the requested information based on the provided text:

    Acceptance Criteria and Study that Proves Device Meets Acceptance Criteria

    The document states that the acceptance criterion for the AI algorithms (Easy Auto EF and Easy AFI LV) is an average dice score of 91% or higher across various testing conditions.

    Study Proving Device Meets Acceptance Criteria:

    The study involved testing the AI algorithms on datasets from different countries, scanning views, and left ventricle volumes.

    1. Table of Acceptance Criteria and Reported Device Performance:

    | Feature/Metric | Acceptance Criteria | Reported Device Performance |
    | --- | --- | --- |
    | AI Algorithm Accuracy (Average Dice Score), datasets from different countries | ≥ 91%* | 92% or higher |
    | AI Algorithm Accuracy (Average Dice Score), datasets from different scanning views | ≥ 91%* | 91% or higher |
    | AI Algorithm Accuracy (Average Dice Score), datasets from different left ventricle volumes | ≥ 91%* | 92% or higher |

    *Note: The text states "92% or higher" and "91% or higher" for the reported performance, implying the acceptance criterion was at least 91%.

    2. Sample Size Used for the Test Set and Data Provenance:

    • Sample Size: 45 exams, assumed to be from 45 distinct patients (the exact number of patients is unknown due to anonymization); 135 images extracted from the 45 exams.
    • Data Provenance:
      • Country of Origin: Europe, Asia, US (mixed data from different countries).
      • Retrospective/Prospective: Not explicitly stated, but the description of "data collection protocol was standardized across all data collection sites" and "During testing of the AI algorithm, we have included images from different countries..." suggests a pre-existing collected dataset, making it likely retrospective.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts:

    • Number of Experts:
      • Initial Delineation and Review: 2 certified cardiologists.
      • Consensus Review: A panel of experienced experts.
    • Qualifications of Experts:
      • "Certified cardiologists" (for initial delineation and review).
      • "Experienced experts" (for the consensus review panel). Specific number of years of experience is not provided, but "certified" and "experienced" imply relevant qualifications.

    4. Adjudication Method for the Test Set:

    • Method: A multi-stage adjudication process was used:
      1. Two certified cardiologists performed manual delineation.
      2. They then reviewed each other's annotations.
      3. A "consensus reading" was performed where the two cardiologists discussed agreement/disagreement.
      4. A panel of experienced experts further reviewed annotations that the two cardiologists could not agree on.
    • The final ground truth relied on annotations that the two cardiologists agreed upon, and the consensus annotations achieved by the expert panel.

    5. If a Multi Reader Multi Case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance:

    • No MRMC comparative effectiveness study was done. The information provided focuses on the standalone performance of the AI algorithm (Easy Auto EF and Easy AFI LV) in terms of Dice score accuracy for image segmentation, not on reader performance with or without AI assistance.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:

    • Yes, a standalone performance evaluation of the AI algorithm was done. The reported "average dice score" is a metric for the algorithm's performance in automatically segmenting cardiac structures (Left Ventricle volume). The study describes the AI's accuracy in delineating these structures.

    7. The Type of Ground Truth Used:

    • Expert Consensus. The ground truth was established through manual delineation by certified cardiologists, followed by their mutual review, and a final consensus adjudicated by a panel of experienced experts.

    8. The Sample Size for the Training Set:

    • Not explicitly stated in the provided text. The document only mentions that "To ensure that the testing dataset is not mixed with the training data, we used datasets from different clinical sites for testing as compared to the clinical sites for training." This implies a training set existed and was distinct, but its size is not given.

    9. How the Ground Truth for the Training Set Was Established:

    • Not explicitly stated in the provided text. While the method for establishing ground truth for the test set is detailed, the process for the training set is not described. It is implied that ground truth was established, as AI models require labeled data for training, but the specific methodology is omitted.