Search Results

Found 2 results

510(k) Data Aggregation

    K Number
    K191493
    Date Cleared
    2019-10-16

    (133 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    Intended Use

    The VMS+ 3.0 is an adjunct to existing ultrasound imaging systems and is intended to record, analyze, store and retrieve digital ultrasound images for computerized 3-dimensional image processing.

    The VMS+ 3.0 is indicated for use where Left Ventricle (LV), Right Ventricle (RV), Left Atrium (LA), and Right Atrium (RA) volumes and ejection fractions are warranted or desired.

    Device Description

    The VMS+ was cleared under 510(k) K173810 for use in evaluation where Right Ventricle (RV), Left Ventricle (LV), Right Atrium (RA), and Left Atrium (LA) volumes and ejection fractions are warranted or desired. The modified VMS+ (VMS+ 3.0) has the same operating principle and employs the same fundamental scientific technology as the cleared device.

    AI/ML Overview

    The Ventripoint Medical System Plus (VMS+) 3.0 is a modified version of a previously cleared device (VMS+). The FDA letter and 510(k) summary do not describe a study involving AI assistance, a multi-reader multi-case (MRMC) comparative effectiveness study, a quantitative acceptance-criteria table, or expert-based ground truth establishment of the kind one would expect for an AI/ML device.

    Instead, the documentation focuses on demonstrating substantial equivalence to the predicate device (GE EchoPAC) and the reference device (VMS+) by showing that the modifications do not introduce new questions of safety or effectiveness and that the device performs as intended and as well as the predicate device(s).

    Here's a breakdown of the information that is available, with notes on what is not provided in the given text:

    Acceptance Criteria & Device Performance:

    The document broadly states that "Predefined acceptance criteria were applied during testing and were met" for specific types of nonclinical performance bench studies. However, the specific quantitative acceptance criteria for performance metrics (e.g., accuracy, precision) of volume measurements are not explicitly provided in a table within this document. It states that the device "delivers volume measurements that are equivalent in accuracy when compared with volumes obtained using the legally marketed VMS+."

    Acceptance Criteria (Generic Statement) vs. Reported Device Performance (General Statement):

      • Acceptance criterion: Predefined acceptance criteria for nonclinical performance bench testing were applied and met.
        Reported performance: VMS+ 3.0 delivers volume measurements that are equivalent in accuracy when compared with volumes obtained using the legally marketed VMS+. Test results demonstrate the device is as safe, as effective, performs as intended and as well as the predicate device (VMS+).
      • Acceptance criterion: Software verification and validation test reports were successful according to acceptance criteria.
        Reported performance: The verification and validation of existing and new features demonstrate that VMS+ 3.0 performs as intended, specifications conform to user needs and intended uses, and that requirements implemented can be consistently fulfilled.
      • Acceptance criterion: Compliance with ISO 10993-1 for biocompatibility.
        Reported performance: Patient contacting components comply with ISO 10993-1.
      • Acceptance criterion: Compliance with IEC 60601-1 for electrical safety and essential performance.
        Reported performance: Complies with IEC 60601-1.
      • Acceptance criterion: Compliance with IEC 60601-1-2 for electromagnetic compatibility (EMC).
        Reported performance: Complies with IEC 60601-1-2.
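
    Neither the quantitative criteria nor the underlying measurements are given in the document. For illustration only, an equivalence claim of this kind is often checked by comparing paired volume measurements from the subject and reference devices against a predefined tolerance; the sketch below assumes a hypothetical tolerance and hypothetical paired data, neither of which comes from the submission.

```python
# Illustrative only: how a paired equivalence check on chamber volumes might be
# structured. The tolerance and data are hypothetical, not from the 510(k).
import numpy as np

def equivalence_check(subject_ml, reference_ml, tolerance_ml=5.0):
    """Compare paired chamber volumes (mL) from the subject and reference device.

    Returns the mean bias, Bland-Altman-style 95% limits of agreement, and
    whether every paired difference falls inside the assumed tolerance.
    """
    subject = np.asarray(subject_ml, dtype=float)
    reference = np.asarray(reference_ml, dtype=float)
    diff = subject - reference
    bias = diff.mean()
    spread = 1.96 * diff.std(ddof=1)        # spread of the paired differences
    return {"bias_ml": bias,
            "limits_of_agreement_ml": (bias - spread, bias + spread),
            "meets_tolerance": bool(np.all(np.abs(diff) <= tolerance_ml))}

# Hypothetical paired measurements of the same studies on both systems.
print(equivalence_check([118.2, 95.4, 143.1], [117.5, 96.0, 141.9]))
```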

    Study Details (Based on the provided text):

    1. Sample Size used for the test set and data provenance:

      • The document mentions "nonclinical performance bench study" and "software verification and validation testing" but does not specify the sample size (e.g., number of cases or patients) used for these tests.
      • The document does not specify the country of origin of the data or whether the data was retrospective or prospective. Given it's a bench study and software V&V, it likely refers to engineered test data or data from phantoms/previous device performance.
    2. Number of experts used to establish the ground truth for the test set and qualifications of those experts:

      • The document does not specify a number of experts, their qualifications, or their role in establishing ground truth for the test set as one might expect for a clinical performance study. The ground truth appears to be based on comparison to a previously cleared device's performance benchmarks.
    3. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

      • The document does not mention any adjudication method for a test set based on expert review.
    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, if so, what was the effect size of how much human readers improve with AI vs without AI assistance:

      • No, an MRMC comparative effectiveness study was not explicitly mentioned or detailed. The device appears to be an image analysis system, but the submission focuses on its equivalence to a previous version and predicate, not on human-AI interaction or improvement. The document explicitly states "No clinical tests were conducted to support substantial equivalence for the subject device."
    5. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:

      • The "nonclinical performance bench study" and "software verification and validation" would represent standalone performance assessments of the algorithm and system. The text states these tests demonstrate the device "performs as intended" and "delivers volume measurements that are equivalent in accuracy when compared with volumes obtained using the legally marketed VMS+." This implies an algorithm-only evaluation against established benchmarks from the reference device.
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

      • The "ground truth" for the nonclinical performance bench study was the "volumes obtained using the legally marketed VMS+" (the reference device). This implies a comparison to the established performance of a prior cleared device, rather than an independent expert consensus, pathology, or outcomes data.
    7. The sample size for the training set:

      • The document describes the VMS+ 3.0 as an updated version of a previous device utilizing a "Knowledge-Based Reconstruction (KBR) algorithm." It does not specify a separate "training set" in the context of machine learning model development. For "knowledge-based" systems, the "training" often refers to the creation and refinement of the underlying rules and models based on anatomical principles and potentially a dataset of examples. The document does not provide a sample size for a training set.
    8. How the ground truth for the training set was established:

      • As this is described as a "Knowledge-Based Reconstruction (KBR) algorithm" and an update to an existing system, the concept of "ground truth for a training set" as typically applied to large-scale deep learning models is not explicitly detailed. The "ground truth" for developing such a knowledge-based system would involve meticulously defined anatomical landmarks and their relationships, likely established through anatomical studies or prior medical imaging analysis principles. The document does not provide details on how the original knowledge base was implicitly "trained" or how its "ground truth" was established.
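
    The KBR algorithm's internals are not described in the document. As a rough, generic illustration of what knowledge-based reconstruction usually means (fitting a catalog shape model to a sparse set of operator-placed landmarks and reading a chamber volume off the fitted model), the sketch below uses a simple similarity-transform fit; the catalog format, the landmark correspondence, and the cubic volume scaling are assumptions, not Ventripoint's documented method.

```python
# Rough sketch of the general knowledge-based reconstruction idea: align a
# catalog heart model to sparse landmarks, then derive a volume from the fit.
# The catalog structure and the volume scaling rule are hypothetical.
import numpy as np

def fit_catalog_model(landmarks, model_points, model_volume_ml):
    """Similarity-transform fit (scale, rotation, translation) of a catalog
    model's corresponding points to operator-placed landmarks.

    landmarks, model_points: (N, 3) arrays of corresponding 3-D points.
    model_volume_ml: volume of the catalog model at unit scale.
    """
    X = np.asarray(model_points, dtype=float)
    Y = np.asarray(landmarks, dtype=float)
    Xc, Yc = X - X.mean(axis=0), Y - Y.mean(axis=0)

    # Optimal rotation via SVD of the cross-covariance (orthogonal Procrustes).
    U, S, Vt = np.linalg.svd(Xc.T @ Yc)
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:               # avoid a reflection
        Vt[-1, :] *= -1
        R = (U @ Vt).T

    scale = S.sum() / (Xc ** 2).sum()       # least-squares isotropic scale
    return {"scale": scale,
            "rotation": R,
            "estimated_volume_ml": model_volume_ml * scale ** 3}

# Hypothetical usage: four corresponding landmarks on a unit-scale template.
template = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]
observed = [[10, 10, 10], [12, 10, 10], [10, 12, 10], [10, 10, 12]]
print(fit_catalog_model(observed, template, model_volume_ml=60.0))
```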

    K Number
    K173810
    Date Cleared
    2018-05-14

    (150 days)

    Product Code
    Regulation Number
    892.1550
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    The VMS+ is an adjunct to existing ultrasound imaging systems and is intended to record, analyze, store and retrieve digital ultrasound images for computerized 3-dimensional image processing.

    The VMS+ is indicated for use where Left Ventricle (LV), Right Ventricle (RV), Left Atrium (LA), and Right Atrium (RA) volumes and ejection fractions are warranted or desired.

    Device Description

    The VentriPoint Medical System was cleared under 510(k) K150628 for use in right ventricle evaluation where RV volumes and ejection fractions are warranted or desired. This current submission is intended to expand system use to Left Ventricle (LV), Right Atrium (RA), and Left Atrium (LA) volumes and ejection fractions. LV, RA, and LA evaluation is accomplished by the addition of KBR heart catalogs containing a variety of heart models for each chamber. VMS+ employs the same fundamental scientific technology as the cleared device.
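
    How the per-chamber catalogs are organized is not described in the summary. Purely as an illustrative data model for the expansion described above (adding LV, RA, and LA catalogs alongside the existing RV capability), one might picture a chamber-keyed registry of template heart models; the structure and field names below are hypothetical.

```python
# Hypothetical data model for per-chamber KBR catalogs; the actual catalog
# contents and organization are not described in the 510(k) summary.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class HeartModel:
    name: str                          # e.g. a hypothetical "pediatric RV" template
    landmarks: List[Tuple[float, float, float]]  # template landmark coordinates
    reference_volume_ml: float

@dataclass
class ChamberCatalog:
    chamber: str                                   # "LV", "RV", "LA", or "RA"
    models: List[HeartModel] = field(default_factory=list)

# The submission adds LV, RA, and LA catalogs alongside the original RV capability.
catalogs = {c: ChamberCatalog(chamber=c) for c in ("RV", "LV", "RA", "LA")}
```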

    AI/ML Overview

    The provided text describes the Ventripoint Medical System Plus (VMS+) and its substantial equivalence to a predicate device, the Ventripoint Medical System (K150628). The submission aims to expand the system's use from only the right ventricle (RV) to include the left ventricle (LV), right atrium (RA), and left atrium (LA) volumes and ejection fractions.

    However, the document explicitly states that "The VMS+, subject of this submission, did not require clinical studies to support the determination of substantial equivalence." This means that a clinical study proving the device meets specific acceptance criteria for clinical performance as an AI-powered diagnostic tool was not conducted or submitted for this particular expansion of indications for use. The focus of the provided text is on demonstrating substantial equivalence based on technological characteristics and the performance of its "catalogs" for LV, LA, and RA evaluation through bench testing.

    Therefore, many of the requested elements for describing clinical acceptance criteria and a study proving their fulfillment cannot be directly extracted from the provided text for the expanded indications.

    Here's a breakdown of what can and cannot be answered based on the provided text:


    1. A table of acceptance criteria and the reported device performance

    Based on the provided text, specific clinical acceptance criteria (e.g., sensitivity, specificity, accuracy targets) for LV, LA, and RA volume/ejection fraction measurement are not explicitly stated or reported.

    The document mentions "Performance bench testing of the LV, LA, and RA catalogues was completed to verify suitability for left ventricle, left atrium, and right atrium evaluation. Testing of the LV, LA, and RA catalogs consisted of a robust series of automated and manual testing to verify reconstruction accuracy."
    However, the results of this testing, in terms of quantitative performance against specific criteria, are not detailed in this submission summary.
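
    Although the quantitative results are not reported, automated reconstruction-accuracy testing of this kind is commonly written as a tolerance check of reconstructed volumes against known reference values (for example, phantom volumes). The sketch below is generic; the tolerance and case data are assumptions, not values from the submission.

```python
# Generic illustration of an automated reconstruction-accuracy check against
# known reference volumes; the tolerance and case data are hypothetical.
def check_reconstruction_accuracy(cases, max_error_pct=10.0):
    """cases: iterable of (case_id, reconstructed_ml, reference_ml) tuples."""
    failures = []
    for case_id, reconstructed, reference in cases:
        error_pct = abs(reconstructed - reference) / reference * 100.0
        if error_pct > max_error_pct:
            failures.append((case_id, round(error_pct, 1)))
    return failures  # an empty list means every case met the assumed tolerance

# Hypothetical LV catalog cases: (id, reconstructed volume, reference volume).
print(check_reconstruction_accuracy([("LV-01", 122.0, 118.5), ("LV-02", 88.0, 95.0)]))
```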

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    The text mentions "Performance bench testing" but does not specify a "test set" in the context of human subject data, nor does it provide details on sample size, or data provenance. This is consistent with the statement that clinical studies were not required.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    Not applicable, as a clinical test set with expert-established ground truth is not described.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    Not applicable.

    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs without AI assistance

    No such study is mentioned or described in the provided text. The device is described as an "adjunct to existing ultrasound imaging systems" and a "computerized 3-dimensional image processing" system, implying it is a tool for measurement rather than an AI for interpretation that would typically require MRMC studies.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    The "Performance bench testing of the LV, LA, and RA catalogues... to verify reconstruction accuracy" could be considered a form of standalone performance evaluation for the reconstruction accuracy component. However, the quantitative results and the specific methodology are not detailed.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    For the "reconstruction accuracy" bench testing, the ground truth would likely be a known, precisely measured physical model or a highly accurate reference standard within the testing environment. The text does not specify the exact nature of this ground truth.

    8. The sample size for the training set

    The device uses "KBR heart catalogs containing a variety of heart models for each chamber." This refers to a "Knowledge Based Reconstruction database." The sample size or specific details of this "training set" (in the machine learning sense) are not provided. The term "catalogs" suggests a collection of models used by the system for its reconstruction process, rather than a dynamically trained AI model in the contemporary sense.

    9. How the ground truth for the training set was established

    The text states the device uses a "Knowledge Based Reconstruction database" for its 3-D reconstruction. For such a system, the "ground truth" for building these "heart catalogs" would typically involve precise anatomical measurements from a diverse set of real hearts (cadaveric, surgical, or high-fidelity imaging such as MRI/CT) to create a statistical model or library of normal and abnormal heart shapes and sizes. However, the specific methodology for establishing this ground truth for the KBR catalogs is not detailed in the provided document.
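
    As a generic illustration of how a shape library of the kind hypothesized above can be condensed into a statistical model, the sketch below builds a simple point-distribution model (a mean shape plus principal modes of variation) from already-aligned example landmark sets; it is not a description of how the KBR catalogs were actually constructed.

```python
# Generic point-distribution-model sketch: mean shape plus principal modes of
# variation from aligned example landmark sets. Not VentriPoint's actual method.
import numpy as np

def build_shape_catalog(aligned_shapes, n_modes=3):
    """aligned_shapes: (num_examples, num_landmarks, 3) array of pre-aligned landmarks."""
    shapes = np.asarray(aligned_shapes, dtype=float)
    flat = shapes.reshape(shapes.shape[0], -1)      # one row per example heart
    mean_shape = flat.mean(axis=0)
    centered = flat - mean_shape
    # Principal modes of variation via SVD of the centered data matrix.
    _, singular_values, modes = np.linalg.svd(centered, full_matrices=False)
    explained = (singular_values[:n_modes] ** 2) / (singular_values ** 2).sum()
    return {"mean_shape": mean_shape.reshape(-1, 3),
            "modes": modes[:n_modes],               # each row is one variation mode
            "variance_explained": explained}
```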


    Summary regarding acceptance criteria and study details based solely on the provided text:

    The submission for the VMS+ expanding its indications to LV, LA, and RA did not include clinical studies demonstrating performance against specific clinical acceptance criteria. The basis for substantial equivalence for these new indications rested on the device employing the "same fundamental scientific technology" as the cleared predicate and undergoing "Performance bench testing... to verify reconstruction accuracy" for the new LV, LA, and RA catalogs. No specific quantitative targets or results from this bench testing are provided in this summary, nor are details on the test set, ground truth derivation, or expert involvement for clinical validation.

