Search Results

Found 3 results

510(k) Data Aggregation

    K Number
    K232346
    Manufacturer
    Date Cleared
    2023-10-27

    (84 days)

    Product Code
    Regulation Number
    892.1000
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices:

    K213668, K150193, K052423

    Intended Use

    Digital Expert Access with Remote Scanning is intended as a remote collaboration tool to view and review MR images, to remotely control MR Imaging Devices and to initiate MRI scans remotely.

    Digital Expert Access with Remote Scanning is a remote scan assistance solution which allows remote control of an MR Imaging Device including the ability to initiate a scan remotely. This access provides real time communication mechanisms between the remote and onsite users to facilitate the acquisition occurring on the device. Access must be granted by the onsite user operating the system. Images reviewed remotely are not for diagnostic use.

    Device Description

    Digital Expert Access is a variation of the Customer Remote Console cleared under K150193. It is a remote scan assistance solution designed to address the skill variability in technologists and their need for on-demand support by allowing them to interact directly with a remote expert connected to the hospital network. By enabling the collaboration between an Onsite Technologist and Remote Expert, Digital Expert Access helps the onsite technologist to seek guidance and real-time support on scanning-related queries including but not limited to training, procedure assessment, and scanning parameter management.

    Digital Expert Access with Remote Scanning introduces a feature that enables the Remote Expert to initiate a scan and make changes in real time during the scanning session. This remote scan feature is only available when Digital Expert Access is connected to a compatible GE HealthCare MRI system.

    Digital Expert Access with Remote Scanning enables the following capabilities for the Onsite Technologist and the Remote Expert:

      1. Collaborative session between an Onsite Technologist and Remote Expert
      2. Real-time scanner screen share and live annotation
      3. Remote console access and control
      4. Remote scan initiation

    Digital Expert Access with Remote Scanning is not intended for diagnostic use or patient safety-related management. This solution is not intended to be used by individuals who are not properly trained in the operation of GE HealthCare Medical Imaging systems. Digital Expert Access with Remote Scanning does not directly interface with any patients and requires the Onsite Technologist to be continuously present throughout the scanning procedure. Digital Expert Access with Remote Scanning does not acquire any MRI images, nor does it do any post image processing. All image acquisition and image processing is conducted by the GE HealthCare MRI system.

    AI/ML Overview

    The provided text describes a 510(k) premarket notification for a device called "Digital Expert Access with Remote Scanning." This document outlines the device's purpose, intended use, and substantial equivalence to predicate devices, but it does not contain information about specific acceptance criteria or an analytical study proving the device meets those criteria in terms of performance metrics like accuracy, sensitivity, or specificity.

    The document emphasizes that the device did not require clinical studies to support substantial equivalence. Instead, the focus was on non-clinical tests, primarily design verification and validation testing, to ensure proper implementation of design requirements and that no new safety or effectiveness issues were introduced.

    Therefore, many of the requested details about acceptance criteria for performance, sample sizes, ground truth establishment, expert involvement, and MRMC studies are not present in this document because the submission strategy relied on substantial equivalence based on technological similarity and non-clinical testing rather than performance studies against defined quantitative acceptance criteria.

    However, based on the information provided, here's what can be extracted and inferred regarding the "acceptance criteria" and "study":

    Acceptance Criteria (Inferred from Non-Clinical Testing and Device Functionality):

    Since no performance metrics are given, the acceptance criteria are implicitly related to the successful functionality and safety of the remote scanning feature. These would likely be qualitative or functional:

    Acceptance Criterion (Inferred) | Reported Device Performance (Summary)
    1. Successful Remote Control of MR Imaging Device | Demonstrated through non-clinical verification and validation testing on a subset of GE HealthCare MRI systems.
    2. Successful Remote Scan Initiation | Demonstrated through non-clinical verification and validation testing on a subset of GE HealthCare MRI systems.
    3. Real-time Communication Mechanisms (Onsite & Remote Users) | Device described as providing real-time communication mechanisms; functionality tested during design verification.
    4. Access Granting Mechanism by Onsite User | Functionality described and tested: "Access must be granted by the onsite user operating the system."
    5. No New Potential Safety Risks Introduced | Concluded based on design verification and validation testing; no unexpected test results or safety issues observed.
    6. Proper Implementation of Design Requirements | Verified through design control testing per GE HealthCare's quality system (e.g., Requirement Definition, Risk Analysis, Technical Design Reviews).
    7. Maintenance of Safety & Effectiveness | Confirmed by design verification and validation testing, concluding it has not been affected.

    Study Information (Based on Non-Clinical Testing):

    1. Sample Size Used for the Test Set and Data Provenance:

      • Sample Size: Not explicitly stated as a number of "cases" or "patients" in a clinical study. The testing was conducted on a "subset of GE HealthCare MRI systems" on the bench. This implies a sample of hardware configurations rather than patient data.
      • Data Provenance: Not applicable as it was non-clinical "bench" testing, not involving patient data or a specific country of origin for such data. It was performed internally by GE HealthCare.
    2. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts:

      • Not applicable. As this was non-clinical, functional testing, there was no "ground truth" in the clinical diagnostic sense established by experts. Testing would have involved engineers and quality assurance personnel verifying the system's intended functionality.
    3. Adjudication Method for the Test Set:

      • Not applicable for functional verification testing. Adjudication methods are typically used when comparing human interpretations or algorithmic outputs against a gold standard in diagnostic or clinical performance studies.
    4. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, If So, What Was the Effect Size of How Much Human Readers Improve with AI vs Without AI Assistance:

      • No, an MRMC comparative effectiveness study was not done. The document explicitly states: "The subject of this premarket submission, Digital Expert Access with Remote Scanning did not require clinical studies to support substantial equivalence..." The device is a remote control and collaboration tool, not an AI-assisted diagnostic device meant to improve human reader performance.
    5. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done:

      • Not applicable in the context of performance. The device is inherently a "human-in-the-loop" collaboration tool. While its technical components were likely tested in isolation (standalone components), the overall device's "performance" is tied to its functional remote control and communication capabilities, not an algorithmic diagnostic output.
    6. The Type of Ground Truth Used:

      • For the non-clinical testing, the "ground truth" was the pre-defined design requirements and expected functional behavior of the device. This is typical for engineering verification and validation testing, where the "truth" is whether the system performs as designed and specified (e.g., does the remote expert successfully initiate a scan?).
    7. The Sample Size for the Training Set:

      • Not applicable. The document does not describe the device as employing machine learning or AI that would require a "training set" of data in the sense of a deep learning model. It is a control and communication software system.
    8. How the Ground Truth for the Training Set Was Established:

      • Not applicable. As no training set for a machine learning model is mentioned, ground truth establishment for such a set is irrelevant to this submission.

    In summary, the FDA 510(k) clearance for this device was based on a demonstration of "substantial equivalence" to existing predicate devices through non-clinical design verification and validation testing, focusing on ensuring the new remote scanning feature did not introduce new safety or effectiveness issues, rather than requiring quantitative performance studies with clinical acceptance criteria.
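
    Item 6 above frames the "ground truth" as the design requirement itself: a verification test passes when the system behaves as specified. A minimal, entirely hypothetical sketch of such a functional test is below; the class and method names are ours for illustration and are not part of GE HealthCare's submission.

```python
# Hypothetical design-verification test in the style described above:
# the requirement ("remote scan initiation must be gated on an onsite
# access grant") is the ground truth, and the test checks conformance.

class FakeMriConsole:
    """Stand-in for a scanner console tracking session and scan state."""
    def __init__(self):
        self.access_granted = False
        self.scan_started = False

    def grant_access(self):
        # Action taken by the onsite technologist.
        self.access_granted = True

    def remote_initiate_scan(self):
        # Action taken by the remote expert; must be gated on access.
        if not self.access_granted:
            raise PermissionError("onsite user has not granted access")
        self.scan_started = True

def test_remote_scan_requires_onsite_grant():
    console = FakeMriConsole()
    try:
        console.remote_initiate_scan()
        raise AssertionError("scan must not start without an access grant")
    except PermissionError:
        pass  # expected: access not yet granted
    console.grant_access()
    console.remote_initiate_scan()
    assert console.scan_started
```

    A suite of such pass/fail checks, one per design requirement, is the non-clinical analogue of a performance study here: the "sample" is a set of hardware configurations, not patients.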


    K Number
    K220882
    Date Cleared
    2022-07-22

    (119 days)

    Product Code
    Regulation Number
    892.1550
    Reference & Predicate Devices
    N/A
    Why did this record match?
    Reference Devices:

    Venue (K202132), Vivid E95 (K181685), Collaboration Live (K200179), Customer Remote Console (CRC) (K150193)

    Intended Use

    Vivid E80/Vivid E90/Vivid E95 is a general-purpose ultrasound system, specialized for use in cardiac imaging. It is intended for use by, or under the direction of a qualified and trained physician for ultrasound imaging, measurement, display and analysis of the human body and fluid. The device is intended for use in a hospital environment including echo lab, other hospital settings, operating room, Cath lab and EP lab or in private medical offices. The systems support the following clinical applications: Fetal/Obstetrics, Abdominal (including renal, GYN), Pediatric, Small Organ (breast, testes, thyroid), Neonatal Cephalic, Cardiac (adult and pediatric), Peripheral Vascular, Musculo-skeletal Conventional, Musculo-skeletal Superficial, Urology (including prostate), Transvaginal, Transrectal, Intra-cardiac and Intra-luminal, Interventional Guidance (including Biopsy, Vascular Access), Thoracic/Pleural and Intraoperative (vascular). Modes of operation include: 3D, Real time (RT) 3D (4D), B, M, PW Doppler, CW Doppler, Color Doppler, Color M Doppler, Power Doppler, Harmonic Imaging, Coded Pulse and Combined modes: B/M, B/Color M, B/PWD or CWD, B/Color/PWD or CWD, B/Power/PWD.

    Device Description

    Vivid™ E80 / Vivid E90 / Vivid E95 is a Track 3, diagnostic ultrasound system for use by qualified and trained healthcare professionals, which is primarily intended for cardiac imaging and analysis but also includes vascular and general radiology applications. It is a full featured diagnostic ultrasound system that provides digital acquisition, processing, analysis and display capabilities.

    The Vivid E80 / Vivid E90 / Vivid E95 consists of a mobile console with a height-adjustable control panel, color LCD touch panel, and display monitor. It includes a variety of electronic array transducers operating in linear, curved, sector/phased array, matrix array or dual array format, including dedicated CW transducers and real time 3D transducers. System can also be used with compatible ICE transducer.

    The system includes electronics for transmit and receive of ultrasound data, ultrasound signal processing, software computing, hardware for image storage, and network access to the facility through both LAN and wireless (supported by use of a wireless LAN USB-adapter) connection. The system includes capability to output data to other devices like printing devices.

    AI/ML Overview

    The device in question is the Vivid E80/Vivid E90/Vivid E95 ultrasound system, which includes Artificial Intelligence (AI) features named Easy Auto EF and Easy AFI LV.

    Here's a breakdown of the acceptance criteria and the study that proves the device meets them:

    1. Table of Acceptance Criteria and Reported Device Performance:

    Acceptance Criteria (for AI algorithm accuracy) | Reported Device Performance (Average Dice Score)
    92% or higher for datasets from different countries | 92% or higher
    91% or higher for datasets from different scanning views | 91% or higher
    92% or higher for datasets from different left ventricle volumes | 92% or higher
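
    The Dice score reported above is a standard overlap metric between a predicted segmentation mask and the ground-truth mask. As a point of reference, it can be computed as below; this is a generic sketch with our own function names, not code from the submission.

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks:
    2 * |pred AND truth| / (|pred| + |truth|), ranging from 0 to 1."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

def average_dice(pairs) -> float:
    """Mean Dice over (prediction, ground-truth) pairs, e.g. one per test image."""
    return float(np.mean([dice_score(p, t) for p, t in pairs]))
```

    An average Dice of 0.92 over the 135 test images therefore means the algorithm's left-ventricle contours overlapped the expert-consensus contours almost completely on average.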

    2. Sample Size Used for the Test Set and Data Provenance:

    • Number of individual patients' images: 45 exams, assumed to represent 45 distinct patients (the exact number of patients is unknown due to anonymization).
    • Number of samples (images): 135 images extracted from the 45 exams.
    • Data Provenance: Retrospective, collected from different countries across Europe, Asia, and the US. The dataset included adult patients; specific age and gender were unknown due to anonymization.
    • Clinical Subgroups and Confounders: The test dataset included images from different countries, different scanning views, and a range of different Left Ventricle (LV) volumes.
    • Equipment and Protocols: Mixed data from 5 different probes and 4 different Console variants. Data collection protocol was standardized across all sites.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications:

    • Initial Ground Truthing: Two certified cardiologists.
    • Adjudication/Consensus: A panel of experienced experts further reviewed annotations that the two cardiologists could not agree on.
    • Qualifications: "Certified cardiologists" for initial delineation and "experienced experts" for the panel. Specific experience levels (e.g., years of experience) are not provided.

    4. Adjudication Method for the Test Set:

    • Method: A 2+1 (or 2+panel) adjudication method was used.
      • First, two certified cardiologists performed manual delineation and reviewed each other's annotations.
      • A consensus reading was performed where the two cardiologists discussed disagreements.
      • If they could not agree, a panel of experienced experts reviewed the annotations to reach a final consensus.
    • Ground Truth Definition: The ground truth used was the annotations that the initial two cardiologists agreed upon, and the consensus annotations achieved by the expert panel for disagreed cases.
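
    The 2+1 workflow described above reduces to a simple rule: accept the primary readers' annotation when they agree, otherwise defer to the panel. A hypothetical sketch follows; the agreement predicate and panel function are placeholders supplied by the caller, not anything specified in the document.

```python
def adjudicate(reader_a, reader_b, agree, panel_consensus):
    """2+1 rule: the two readers' annotation stands if they agree;
    otherwise the expert panel's consensus becomes the ground truth."""
    if agree(reader_a, reader_b):
        return reader_a  # consensus of the two cardiologists
    return panel_consensus(reader_a, reader_b)

def build_ground_truth(cases, agree, panel_consensus):
    """Apply the 2+1 rule to each case's pair of annotations."""
    return [adjudicate(a, b, agree, panel_consensus) for a, b in cases]
```

    In a segmentation setting, `agree` might compare the two contours' mutual Dice score against a threshold, but the document does not state what criterion was actually used.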

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done:

    • No MRMC comparative effectiveness study involving human readers with and without AI assistance was mentioned in the provided text. The evaluation focuses on the standalone performance of the AI algorithm.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done:

    • Yes, a standalone performance evaluation of the AI algorithm (Easy Auto EF and Easy AFI LV) was conducted. The accuracy was measured using the average Dice score based on the ground truth established by expert consensus.

    7. The Type of Ground Truth Used:

    • Expert Consensus: The ground truth for the test set was established through a multi-stage process involving manual delineation by two certified cardiologists, their peer review, and a final consensus by a panel of experienced experts.

    8. The Sample Size for the Training Set:

    • The document states that to ensure independence, "we used datasets from different clinical sites for testing as compared to the clinical sites for training." However, the specific sample size of the training set is not provided in the given text.
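
    Keeping training and test sites fully disjoint, as described above, is a standard guard against data leakage: no site's acquisition quirks appear in both sets. A minimal sketch of such a site-level split (names and data structure are illustrative, not from the submission):

```python
def split_by_site(exams, test_sites):
    """Site-level split: every exam from a given clinical site goes
    entirely to either training or testing, so no site contributes
    to both sets. `exams` is an iterable of (site_id, exam) pairs."""
    train, test = [], []
    for site, exam in exams:
        (test if site in test_sites else train).append(exam)
    return train, test
```

    Contrast this with a per-exam random split, which could place two exams from the same scanner and site on opposite sides of the boundary and inflate apparent performance.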

    9. How the Ground Truth for the Training Set Was Established:

    • The document implies that training data existed ("datasets from different clinical sites for training"), but it does not explicitly describe how the ground truth for the training set was established.

    K Number
    K220619
    Date Cleared
    2022-07-15

    (134 days)

    Product Code
    Regulation Number
    892.1550
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices:

    Vivid E95 (K202658), Venue (K202132), Collaboration Live (K200179), Customer Remote Console (CRC) (K150193)

    Intended Use

    Vivid S60N/Vivid S70N is a general-purpose ultrasound system, specialized for use in cardiac imaging. It is intended for use by, or under the direction of a qualified and trained physician for ultrasound imaging, measurement, display and analysis of the human body and fluid. The device is intended for use in a hospital environment including echo lab, other hospital settings, operating room, Cath lab and EP lab or in private medical offices. The systems support the following clinical applications: Fetal/Obstetrics, Abdominal (including renal, GYN), Pediatric, Small Organ (breast, thyroid), Neonatal Cephalic, Adult Cephalic, Cardiac (adult and pediatric), Peripheral Vascular, Musculo-skeletal Conventional, Musculo-skeletal Superficial, Urology (including prostate), Transvaginal, Transrectal, Intra-cardiac and Intra-luminal, Interventional Guidance (including Biopsy, Vascular Access), Thoracic/Pleural, and Intraoperative (vascular). Modes of operation include: 3D/4D Imaging mode, B, M, PW Doppler, Color Doppler, Color M Doppler, Power Doppler, Harmonic Imaging, Coded Pulse and Combined modes: B/M, B/PWD or CWD, B/Color/PWD or CWD, B/Power/PWD.

    Device Description

    Vivid S60N / Vivid S70N is a Track 3, diagnostic ultrasound system for use by qualified and trained healthcare professionals, which is primarily intended for cardiac imaging and analysis but also includes vascular and general radiology applications. It is a full featured diagnostic ultrasound system that provides digital acquisition, processing, analysis and display capability.

    The Vivid S60N / Vivid S70N consists of a mobile console with a height-adjustable control panel, color LCD touch panel, LCD display monitor and optional image storage and printing devices. It includes a variety of electronic array transducers operating in linear, curved, sector/phased array, matrix array or dual array format, including dedicated CW transducers and real time 3D transducer. System can also be used with compatible ICE transducers.

    The system includes electronics for transmit and receive of ultrasound data, ultrasound signal processing, software computing, hardware for image storage, hard copy printing, and network access to the facility through both LAN and wireless (supported by use of a wireless LAN USB-adapter) connection.

    AI/ML Overview

    The provided text focuses on the 510(k) premarket notification for the GE Vivid S60N/S70N ultrasound system. It details device descriptions, intended use, technological characteristics, and non-clinical tests. Crucially, it includes information on the "AI Summary of Testing: Easy Auto EF and Easy AFI LV," which addresses the performance of the AI algorithms incorporated into the device.

    Here's a breakdown of the requested information based on the provided text:

    Acceptance Criteria and Study that Proves Device Meets Acceptance Criteria

    The document states that the acceptance criterion for the AI algorithms (Easy Auto EF and Easy AFI LV) is an average dice score of 91% or higher across various testing conditions.

    Study Proving Device Meets Acceptance Criteria:

    The study involved testing the AI algorithms on datasets from different countries, scanning views, and left ventricle volumes.

    1. Table of Acceptance Criteria and Reported Device Performance:

    Feature/Metric | Acceptance Criteria | Reported Device Performance
    AI Algorithm Accuracy (Average Dice Score), datasets from different countries | ≥ 91%* | 92% or higher
    AI Algorithm Accuracy (Average Dice Score), datasets from different scanning views | ≥ 91%* | 91% or higher
    AI Algorithm Accuracy (Average Dice Score), datasets from different left ventricle volumes | ≥ 91%* | 92% or higher

    *Note: The text states "92% or higher" and "91% or higher" for the reported performance, implying the acceptance criterion was at least 91%.

    2. Sample Size Used for the Test Set and Data Provenance:

    • Sample Size: 45 exams, assumed to represent 45 distinct patients (the exact number of patients is unknown due to anonymization); 135 images extracted from the 45 exams.
    • Data Provenance:
      • Country of Origin: Europe, Asia, US (mixed data from different countries).
      • Retrospective/Prospective: Not explicitly stated, but the description of "data collection protocol was standardized across all data collection sites" and "During testing of the AI algorithm, we have included images from different countries..." suggests a pre-existing collected dataset, making it likely retrospective.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts:

    • Number of Experts:
      • Initial Delineation and Review: 2 certified cardiologists.
      • Consensus Review: A panel of experienced experts.
    • Qualifications of Experts:
      • "Certified cardiologists" (for initial delineation and review).
      • "Experienced experts" (for the consensus review panel). Specific number of years of experience is not provided, but "certified" and "experienced" imply relevant qualifications.

    4. Adjudication Method for the Test Set:

    • Method: A multi-stage adjudication process was used:
      1. Two certified cardiologists performed manual delineation.
      2. They then reviewed each other's annotations.
      3. A "consensus reading" was performed where the two cardiologists discussed agreement/disagreement.
      4. A panel of experienced experts further reviewed annotations that the two cardiologists could not agree on.
    • The final ground truth relied on annotations that the two cardiologists agreed upon, and the consensus annotations achieved by the expert panel.

    5. If a Multi Reader Multi Case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance:

    • No MRMC comparative effectiveness study was done. The information provided focuses on the standalone performance of the AI algorithm (Easy Auto EF and Easy AFI LV) in terms of Dice score accuracy for image segmentation, not on reader performance with or without AI assistance.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:

    • Yes, a standalone performance evaluation of the AI algorithm was done. The reported "average dice score" is a metric for the algorithm's performance in automatically segmenting cardiac structures (Left Ventricle volume). The study describes the AI's accuracy in delineating these structures.

    7. The Type of Ground Truth Used:

    • Expert Consensus. The ground truth was established through manual delineation by certified cardiologists, followed by their mutual review, and a final consensus adjudicated by a panel of experienced experts.

    8. The Sample Size for the Training Set:

    • Not explicitly stated in the provided text. The document only mentions that "To ensure that the testing dataset is not mixed with the training data, we used datasets from different clinical sites for testing as compared to the clinical sites for training." This implies a training set existed and was distinct, but its size is not given.

    9. How the Ground Truth for the Training Set Was Established:

    • Not explicitly stated in the provided text. While the method for establishing ground truth for the test set is detailed, the process for the training set is not described. It is implied that ground truth was established, as AI models require labeled data for training, but the specific methodology is omitted.
