510(k) Data Aggregation

    K Number: K223322
    Date Cleared: 2023-07-24 (266 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices:
    Predicate For: N/A
    Intended Use

    The indications for use of the Argus Cognitive ReVISION Software are the quantification and reporting of right ventricle (RV) structures and function in patients (adults only) with suspected disease, to support physicians' diagnosis.

    Device Description

    The Argus Cognitive ReVISION Software ("ReVISION Software") assesses global and segmental function of the right ventricle (RV) of the heart by decomposing the longitudinal, radial, and anteroposterior motions of the RV walls and quantifying their relative contributions to global RV ejection fraction, along with longitudinal, circumferential, and area strains, using 3D datasets obtained by echocardiography. Segmentation is performed on the end-diastolic 3D RV model using a rule-based, non-AI/ML algorithm that produces 15 RV segments; this segmentation is then projected to all models in the corresponding cardiac cycle.
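
    The quantities named above reduce to a few simple formulas. The following is a minimal sketch, not the vendor's implementation: the function names are illustrative, strain is written as a standard Lagrangian length change, and the "relative contribution" of a motion direction is approximated by comparing a single-direction ejection fraction with the global value (an assumption about how the decomposition is reported).

```python
# Minimal sketch of the quantities described above (assumptions, not the
# vendor's algorithm): EF from end-diastolic/end-systolic volumes, a
# Lagrangian strain, and a "relative contribution" ratio for one motion
# direction (longitudinal, radial, or anteroposterior).

def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    """Global RV ejection fraction from end-diastolic and end-systolic volumes."""
    return (edv_ml - esv_ml) / edv_ml


def lagrangian_strain(reference_length: float, current_length: float) -> float:
    """Length-based strain relative to the end-diastolic reference; negative
    values indicate shortening. With areas substituted for lengths, the same
    expression gives an area strain."""
    return (current_length - reference_length) / reference_length


def relative_contribution(decomposed_ef: float, global_ef: float) -> float:
    """Share of the global EF attributable to one decomposed motion direction."""
    return decomposed_ef / global_ef


# Example with made-up numbers: EDV = 150 mL, ESV = 75 mL gives EF = 0.50,
# and a longitudinal-only EF of 0.25 would correspond to a 50% contribution.
ef = ejection_fraction(150.0, 75.0)
print(ef, relative_contribution(0.25, ef))
```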

    AI/ML Overview

    Here's a summary of the acceptance criteria and the study proving the device meets them, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    Test (Category) | Acceptance Criteria | Reported Performance

    Unit/Integration Testing
    Mesh Processing | Mesh orientation parameters for apex, inflow, and outflow centers are within specifications. | Pass
    Volume Calculation | Volume calculations are consistent with the predicate device and are reproducible. | Pass
    Motion Decomposition | Interobserver and intraobserver variability of decomposed end-systolic volumes are within specifications. Pre- and postoperative values for ejection fraction are reproducible. | Pass
    Volumetric Segmentation | The relative volumes of each segment to the total volume fall within specifications. The sum of segmental volumes equals the global volume. Interobserver and intraobserver variability of segmental end-diastolic and end-systolic volumes are within specifications. | Pass
    3D Strain Calculation | Septal and free wall longitudinal strain and global circumferential strain values are consistent with the predicate device. Interobserver and intraobserver reliability of global longitudinal strain are within specifications. | Pass
    Verification Tests | Global and segmental metrics are calculated in more than 99% of cases. When inadequate input is presented, the data are not analyzed and an error message is displayed. Movement decomposition and volume calculations of known geometric data are accurate. | Pass
    Mathematical Unit Tests | The area of a two-dimensional polygon is calculated accurately. The relations between a two-dimensional line or a three-dimensional plane and a point are calculated accurately. The closest point on a line, segment, ray, or plane to a given point is calculated accurately. The distance between two points, a segment and a point, a line and a point, a ray and a point, and a plane and a point is calculated accurately. The collinearity of three points or of two lines is calculated accurately. The relation of a point to another point, line, segment, ray, plane, or polygon is calculated accurately. The intersection point of two lines, a line and a triangle, a line and a plane, a ray and a triangle, and a ray and a plane is calculated accurately. The intersection of two segments, a segment and a line, a segment and a plane, a segment and a ray, and a segment and a triangle is calculated accurately. The points of a polygon are not duplicated and are labeled accurately. | Pass
    Evaluation Time | Average evaluation time must be under 15 minutes. | Pass

    System Level Testing
    Response Time | Response time must be under 10 seconds. | Pass
    Response to Stress Conditions | No delay in ReVISION Software functionality. | Pass

    Validation Testing
    Accuracy Validation | For segmental metrics: <30% relative difference between ReVISION Software and manual measurements by cardiologists. For global longitudinal and circumferential strains: <10% relative difference between ReVISION Software and manual measurements by cardiologists. For decomposed ejection fractions: the 3D model shortens only in the corresponding direction in 100% of cases (visual verification by experts); see the sketch after this table for how such numeric limits can be checked. | Pass
    Database Validation | (1) ReVISION-derived EDV has a clinically negligible bias compared with TomTec-derived EDV. (2) ReVISION-derived ESV has a clinically negligible bias compared with TomTec-derived ESV. (3) ReVISION-derived SV has a clinically negligible bias compared with TomTec-derived SV. (4) ReVISION-derived EF has a clinically negligible bias compared with TomTec-derived EF. (5) ReVISION-derived global longitudinal strain correlates with TomTec-derived EF. (6) ReVISION-derived global circumferential strain correlates with TomTec-derived EF. (7) ReVISION-derived global area strain correlates with TomTec-derived EF. These evaluations were performed across various populations (entire database, clinical subpopulations, age-category subpopulations, RV dysfunction present/absent). | Pass
    cMRI Validation | For EDV: ±45 mL absolute difference. For ESV: ±28 mL absolute difference. For EF: ±10% absolute difference. | Pass

    Usability Testing
    Usability Evaluation | A usability test in which 15 participants completed the main elements of the ReVISION Software workflow, with attention to use errors, identified no further risks, hazards, or areas requiring immediate modification. | Pass
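
    Several of the acceptance criteria above are simple numeric checks. The sketch below shows how the relative-difference limits (Accuracy Validation), the absolute-difference limits (cMRI Validation), the segmental-volume-sum check (Volumetric Segmentation), and a Bland-Altman-style bias for the predicate comparisons (Database Validation) might be evaluated. The threshold values come from the table; the function names, the bias helper, and the tolerance used for the sum check are assumptions made for illustration.

```python
# Sketch of the numeric acceptance checks listed in the table above. The
# thresholds come from the table; the helper names, the Bland-Altman-style
# bias helper, and the tolerance for the segmental-volume-sum check are
# assumptions made for illustration.

def relative_difference(device_value: float, reference_value: float) -> float:
    """Relative difference between a device measurement and a reference
    (e.g., a cardiologist's manual measurement), as a fraction."""
    return abs(device_value - reference_value) / abs(reference_value)


def passes_accuracy_validation(device: float, manual: float, *, is_global_strain: bool) -> bool:
    """Accuracy Validation row: <30% relative difference for segmental
    metrics, <10% for global longitudinal/circumferential strain."""
    limit = 0.10 if is_global_strain else 0.30
    return relative_difference(device, manual) < limit


def passes_cmri_validation(edv_diff_ml: float, esv_diff_ml: float, ef_diff_pct: float) -> bool:
    """cMRI Validation row: |EDV difference| <= 45 mL, |ESV difference| <= 28 mL,
    |EF difference| <= 10 percentage points."""
    return abs(edv_diff_ml) <= 45 and abs(esv_diff_ml) <= 28 and abs(ef_diff_pct) <= 10


def segmental_volumes_sum_to_global(segment_volumes_ml, global_volume_ml, tolerance_ml=0.1):
    """Volumetric Segmentation row: the 15 segmental volumes should sum to
    the global volume (the numerical tolerance is an assumption)."""
    return abs(sum(segment_volumes_ml) - global_volume_ml) <= tolerance_ml


def mean_bias(device_values, reference_values):
    """Database Validation rows: mean difference (Bland-Altman-style bias)
    between ReVISION-derived and predicate-derived values."""
    diffs = [d - r for d, r in zip(device_values, reference_values)]
    return sum(diffs) / len(diffs)


# Example: a 9.2 mL segmental volume vs. a 10.0 mL manual measurement is an
# 8% relative difference, which passes the 30% segmental limit.
print(passes_accuracy_validation(9.2, 10.0, is_global_strain=False))  # True
```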

    2. Sample Sizes Used for the Test Set and Data Provenance

    • Accuracy Validation: 30 subjects (10 healthy adults, 10 adult subjects with established cardiac diseases but maintained right ventricular ejection fraction, and 10 adult subjects with established cardiac diseases and reduced right ventricular ejection fraction).

      • Data Provenance: Retrospective, selected from a comprehensive clinical database of 811 subjects with available echocardiographic data. (No specific country of origin is mentioned, but "clinical database" suggests real-world patient data).
    • Database Validation: 811 subjects.

      • Data Provenance: Retrospective 3D echocardiographic clinical database. (No specific country of origin is mentioned).
    • cMRI Validation: 3 subjects.

      • Data Provenance: Retrospective Cardiac Magnetic Resonance Imaging (cMRI) data. (No specific country of origin is mentioned).
    • Usability Evaluation: 15 participants.

      • Data Provenance: Not specified, but generally, usability testing involves representative users in a simulated environment rather than patient data.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Accuracy Validation: 3 expert cardiologists.

      • Qualifications: "with expertise in 3D echocardiography."
    • Database Validation: The comparison was against the predicate device (TomTec-derived values), implying that the predicate acted as a form of ground truth or reference standard for comparison. The document does not describe expert adjudication for these 811 cases beyond the predicate's output.

    • cMRI Validation: The comparison was against cMRI, which is often considered a gold standard for cardiac imaging. No human expert count or specific qualifications for establishing ground truth from cMRI are mentioned in the context of this validation, as cMRI itself serves as the reference.

    • Usability Evaluation: Not applicable, as this was a usability test, not an accuracy test requiring medical ground truth.

    4. Adjudication Method for the Test Set

    • Accuracy Validation: The text states, "The cardiologists manually performed segmentation and contouring on the same 3D models that were analyzed by ReVISION Software." This implies an independent assessment by each expert, with the acceptance criteria defining the allowable difference between the device and these manual measurements. No explicit adjudication process (e.g., 2+1 or 3+1 consensus) is described for cases of disagreement; each expert's manual measurement appears to have served as the reference against which the relevant acceptance criteria were evaluated. For the visual verification of decomposed ejection fractions, the text states that "the expert cardiologists' task was to verify visually," again implying individual expert assessment.

    • Database Validation: No adjudication method mentioned; comparison was directly between device output and predicate device output.

    • cMRI Validation: No adjudication method mentioned; comparison was directly between device output and cMRI measurements.

    5. Whether a Multi-Reader, Multi-Case (MRMC) Comparative Effectiveness Study Was Done and, If So, the Effect Size of Human Reader Improvement with vs. without AI Assistance

    No such MRMC comparative effectiveness study involving human readers with and without AI assistance was mentioned or described in the provided text. The studies focused on the standalone performance and accuracy of the device against manual measurements, predicate device measurements, or cMRI.

    6. Whether Standalone (i.e., Algorithm-Only, without Human-in-the-Loop) Performance Testing Was Done

    Yes, extensive standalone performance testing was done.

    • Unit/Integration Testing, System Level Testing, Accuracy Validation, Database Validation, and cMRI Validation all describe studies in which the Argus Cognitive ReVISION Software's output was compared directly against reference standards (manual expert measurements, predicate device output, or cMRI), with no human-in-the-loop step described as modifying or interpreting the software's output for the purpose of the performance metrics. The device's accuracy was assessed independently.

    7. The type of ground truth used

    The ground truth used varied depending on the validation test:

    • Expert Consensus/Manual Measurements: For "Accuracy Validation," the ground truth for segmental volumes, EFs, and strains was established by 3 expert cardiologists performing manual segmentation and contouring. For decomposed EFs, it was visual verification by expert cardiologists.
    • Predicate Device Data: For "Database Validation," the predicate device's calculated parameters (EDV, ESV, SV, EF) served as the reference standard for comparison.
    • Outcomes Data: Not explicitly mentioned as a ground truth. The focus is on measurement accuracy against established methods.
    • Pathology: Not mentioned.
    • cMRI: For "cMRI Validation," Cardiac Magnetic Resonance Imaging (cMRI) measurements were used as a reference for RV volumes and ejection fraction.

    8. The Sample Size for the Training Set

    The document does not explicitly state the sample size for the training set used to develop the Argus Cognitive ReVISION Software. The description of the device mentions a "rule-based, non-AI/ML algorithm" for initial segmentation, and while the name "Argus Cognitive" implies AI/ML, the performance testing focuses on the resulting calculations rather than the development or training specifics.

    9. How the Ground Truth for the Training Set was Established

    Since the document predominantly describes the device's segmentation as "rule-based, non-AI/ML algorithm," the concept of a "training set" and "ground truth for the training set" as typically understood in machine learning might not directly apply in the same way for these core components. For rule-based systems, the "ground truth" during development involves expert-defined rules and specifications, which are then verified through unit and integration testing against known geometric data and clinical cases, as described in the "Mathematical Unit Tests" and "Verification Tests" sections. If there are AI/ML components (which the company name suggests), the process of establishing ground truth for those training sets is not detailed in this submission.
