
510(k) Data Aggregation

    K Number
    K222361
    Date Cleared
    2022-10-20

    (77 days)

    Product Code
    Regulation Number
    892.1750
    Reference & Predicate Devices
    Device Name

    AI-Rad Companion (Musculoskeletal)

    Intended Use

    AI-Rad Companion (Musculoskeletal) is an image processing software that provides quantitative analysis of previously acquired Computed Tomography DICOM images to support radiologists and physicians from emergency medicine, specialty care, urgent care, and general practice in the evaluation and assessment of musculoskeletal disease.

    It provides the following functionality:

    • Segmentation of vertebrae
    • Labelling of vertebrae
    • Measurement of heights in each vertebra and an indication if they are critically different
    • Measurement of the mean Hounsfield value in a volume of interest within each vertebra

    Only DICOM images of adult patients are considered to be valid input.
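A minimal sketch of the mean-Hounsfield-value measurement (the last bullet above), assuming the CT volume has already been decoded from DICOM into an HU array and a vertebra mask is available; the function name and toy data are illustrative, not the vendor's implementation:

```python
import numpy as np

def mean_hu_in_voi(ct_volume: np.ndarray, voi_mask: np.ndarray) -> float:
    """Mean Hounsfield value inside a volume of interest (VOI).

    ct_volume : 3-D array of HU values (e.g. decoded from DICOM slices).
    voi_mask  : boolean array of the same shape marking the VOI
                (e.g. a region within one segmented vertebra).
    """
    if ct_volume.shape != voi_mask.shape:
        raise ValueError("volume and mask shapes must match")
    if not voi_mask.any():
        raise ValueError("empty volume of interest")
    return float(ct_volume[voi_mask].mean())

# Toy example: a 4x4x4 volume with a small cubic VOI.
vol = np.full((4, 4, 4), -1000.0)   # air-valued background
vol[1:3, 1:3, 1:3] = 150.0          # bone-like HU values in the VOI
mask = np.zeros_like(vol, dtype=bool)
mask[1:3, 1:3, 1:3] = True
print(mean_hu_in_voi(vol, mask))    # 150.0
```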

    Device Description

    AI-Rad Companion (Musculoskeletal) SW version VA20 is an enhancement to the previously cleared device AI-Rad Companion (Musculoskeletal) (K193267) that utilizes deep learning algorithms to provide quantitative and qualitative analysis of computed tomography DICOM images to support qualified clinicians in the evaluation and assessment of the spine.

    As an update to the previously cleared device, the following modifications have been made:

    • Enhanced AI Algorithm: The vertebra segmentation accuracy has been improved by retraining the algorithm.
    • DICOM Reports: The reports generated by the system have been enhanced to support both human- and machine-readable formats. Additionally, an updated version of the system changed the DICOM structured report format to TID 1500 for applicable content.

    • Architecture Enhancement for On-Premises "Edge" Deployment: The system supports the existing cloud deployment as well as an on-premises "edge" deployment. The system remains hosted in the teamplay digital health platform and remains driven by the AI-Rad Companion Engine. With the edge deployment, the processing of clinical data and the generation of results can be performed on-premises within the customer network. The edge system remains fully connected to the cloud for remote monitoring and maintenance.
    AI/ML Overview

    Here's a summary of the acceptance criteria and the study proving the device meets those criteria, based on the provided document:

    Acceptance Criteria and Device Performance Study

    1. Table of Acceptance Criteria and Reported Device Performance

| Validation Type | Acceptance Criteria | Reported Device Performance |
| --- | --- | --- |
| Mislabeling of vertebrae or absence of height measurement | Ratio of cases that are mislabeled or missing measurements shall be … | … |
| Accuracy of height measurement (slice thickness > 1.0 mm) | For cases with slice thickness > 1.0 mm, the difference should be within the LoA for ≥ 85% of cases | … |
| Consistency of height and density measurement across critical subgroups | For each sub-group, the ratio of measurements within the corresponding LoA should not drop by more than 5% compared to the ratio for all data sets | Overall failure rate of the subject device was consistent with the predicate, and results of all sub-group analyses were rated equal or superior to the predicate regarding the ratio of measurements within the corresponding LoA. |
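The acceptance logic above follows the Bland-Altman limits-of-agreement (LoA) pattern. A hedged sketch, assuming the LoA are computed as mean difference ± 1.96 SD and that the subgroup rule compares ratios in absolute percentage points (both assumptions, since the document does not define the LoA formula):

```python
import numpy as np

def limits_of_agreement(ref: np.ndarray, test: np.ndarray) -> tuple[float, float]:
    """Bland-Altman 95% limits of agreement: mean difference +/- 1.96 SD."""
    diff = test - ref
    mean, sd = diff.mean(), diff.std(ddof=1)
    return mean - 1.96 * sd, mean + 1.96 * sd

def ratio_within_loa(ref: np.ndarray, test: np.ndarray, loa: tuple[float, float]) -> float:
    """Fraction of cases whose (test - ref) difference falls inside the LoA."""
    lo, hi = loa
    diff = np.asarray(test) - np.asarray(ref)
    return float(np.mean((diff >= lo) & (diff <= hi)))

# Simulated data standing in for ground-truth vs. device height measurements (mm).
rng = np.random.default_rng(0)
ref = rng.normal(20.0, 2.0, 200)
dev = ref + rng.normal(0.0, 0.4, 200)

loa = limits_of_agreement(ref, dev)
overall = ratio_within_loa(ref, dev, loa)          # criterion: >= 0.85
subgroup = ratio_within_loa(ref[:50], dev[:50], loa)
subgroup_ok = subgroup >= overall - 0.05           # "drop by no more than 5%"
```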

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: 140 Chest CT scans, comprising 1,553 thoracic vertebrae.
    • Data Provenance: The document lists two sources for the data:
      • KUM (N=80): Primary indications and various medical conditions are detailed (e.g., Lung/airways, infect focus, malignancy, cardiovascular, trauma).
      • NLST (N=60): Comorbidities are detailed (e.g., diabetes, heart disease, hypertension, cancer, smoking history).
      • The patient demographics (sex, age, manufacturer of CT scanner, slice thickness, dose, reconstruction method, kernel, contrast enhancement) are provided.
      • The document implies the data are retrospective, collected from existing patient studies, since it presents the "testing data information" with pathologies and patient information already recorded.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    • Number of Experts: Four board-certified radiologists.
    • Qualifications of Experts: Board-certified radiologists. (No specific years of experience are mentioned).

    4. Adjudication Method for the Test Set

    • Adjudication Method: Each case was read independently by two radiologists in randomized order.
      • For outliers (cases where the initial two radiologists' annotations potentially differed significantly or inconsistently), a third annotation was blindly provided by a radiologist who had not previously annotated that specific case.
      • The ground truth was then generated by the average of the two most concordant measurements.
      • For all other cases (non-outliers), the two initial annotations were used as ground truth. This suggests a form of 2+1 adjudication for outliers and 2-reader consensus for non-outliers.
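The adjudication rule described above can be sketched as follows; the function name is illustrative, and the two-reader branch assumes the concordant pair is simply averaged (a hedged reading of the document, not a stated formula):

```python
from itertools import combinations

def adjudicated_ground_truth(annotations: list[float]) -> float:
    """Ground truth as the average of the two most concordant measurements.

    Two annotations (non-outlier case): average them directly.
    Three annotations (outlier case with a blinded third read): average
    the pair with the smallest absolute difference.
    """
    if len(annotations) == 2:
        return sum(annotations) / 2
    best_pair = min(combinations(annotations, 2), key=lambda p: abs(p[0] - p[1]))
    return sum(best_pair) / 2

print(adjudicated_ground_truth([21.0, 23.0]))        # 22.0
print(adjudicated_ground_truth([21.0, 27.0, 21.4]))  # 21.2
```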

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

    • The document describes a validation study comparing the device's performance against ground truth established by human readers. However, it does not describe a multi-reader multi-case (MRMC) comparative effectiveness study designed to measure the effect size of how much human readers improve with AI vs. without AI assistance. The study focuses on the standalone performance of the AI algorithm.

    6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done

    • Yes, a standalone study was performed. The "Summary Performance Data" directly reports the "Failure Rate" and "Inter-reader variability" (difference between AIRC and ground truth) of the AI-Rad Companion (Musculoskeletal) itself. The study's design of comparing device measurements to expert-established ground truth evaluates the algorithm's standalone accuracy.

    7. The Type of Ground Truth Used

    • Expert Consensus. The ground truth for the test set was established by the manual measurements and annotations of four board-certified radiologists, utilizing an adjudication process to determine the most concordant measurements for vertebra heights and average density (HU) values.

    8. The Sample Size for the Training Set

    • The document does not specify the exact sample size for the training set. It only states that the "training data used for the training of the post-processing algorithm is independent of the data used to test the algorithm."

    9. How the Ground Truth for the Training Set Was Established

    • The document does not explicitly describe how the ground truth for the training set was established. It only mentions that the "vertebrae segmentation accuracy has been improved through retraining the algorithm," implying that training data with associated ground truth was used for this process, but the method of establishing that ground truth is not detailed in this submission.