
510(k) Data Aggregation

    K Number: K241925
    Device Name: VitruvianScan (v1.0)
    Date Cleared: 2024-10-02 (93 days)
    Regulation Number: 892.1000
    Intended Use

    VitruvianScan is indicated for use as a magnetic resonance diagnostic device software application for non-invasive fat and muscle evaluation that enables the generation, display and review of magnetic resonance medical image data.

    VitruvianScan produces quantified metrics and composite images from magnetic resonance medical image data which when interpreted by a trained healthcare professional, yield information that may assist in clinical decisions.

    Device Description

    VitruvianScan is a standalone, post-processing software medical device. VitruvianScan enables the generation, display and review of magnetic resonance (MR) medical image data from a single timepoint (one patient visit).

    When a referring healthcare professional requests quantitative analysis using VitruvianScan, relevant images are acquired from patients at MRI scanning clinics and are transferred to the Perspectum portal through established secure gateways. Perspectum trained analysts use the VitruvianScan software medical device to process the MRI images and produce the quantitative metrics and composite images. The device output information is then sent to the healthcare professionals for their clinical use.

    The metrics produced by VitruvianScan are intended to provide insight into the composition of muscle and fat of a patient. The device is intended to be used as part of an overall assessment of a patient's health and wellness and should be interpreted whilst considering the device's limitations when reviewing or interpreting images.

    AI/ML Overview

    Here's an analysis of the acceptance criteria and the study provided in the document for the VitruvianScan (v1.0) device:

    1. Table of Acceptance Criteria and Reported Device Performance

    Unfortunately, the provided document does not explicitly state the specific quantitative acceptance criteria for each performance aspect (e.g., a specific threshold for repeatability or a particular correlation coefficient). Instead, it states that "All aspects of the performance tests met the defined acceptance criteria" and that the device "successfully passed the acceptance criteria with no residual anomalies."

    However, based on the described performance tests, we can infer the types of acceptance criteria that would have been defined. The document also lacks specific numerical results for the device performance that directly map to these criteria.

    | Performance Aspect | Inferred Acceptance Criteria (Example) | Reported Device Performance |
    |---|---|---|
    | Repeatability of metrics | Coefficient of Variation (CV) or Intraclass Correlation Coefficient (ICC) for each metric (Visceral Fat, Subcutaneous Fat, Muscle Area) within a pre-defined threshold for the same subject, scanner, field strength, and day. | "met the defined acceptance criteria" |
    | Reproducibility of metrics | CV or ICC for each metric within a pre-defined threshold for the same subject and day, on the same scanner at a different field strength. | "met the defined acceptance criteria" |
    | Inter-operator variability | Low variability (e.g., high ICC or low CV) in metric measurements between different trained operators using VitruvianScan. | "Characterization of inter-operator variability" met acceptance criteria |
    | Intra-operator variability | Low variability (e.g., high ICC or low CV) in metric measurements by the same trained operator over repeated measurements. | "Characterization of intra-operator variability" met acceptance criteria |
    | Benchmarking against reference device | Established equivalence or non-inferiority in metric measurements when compared to a validated reference regulated device (OSIRIX MD). | Results "compared with the results from testing using reference regulated device 'OSIRIX MD' for benchmarking performance" met acceptance criteria |
    | Comparison to gold standard (human experts) | High agreement (e.g., high ICC, low mean absolute error) between device output and the gold standard (mean of 3 radiologists' results). | "Comparative testing between the operators' results and the gold standard (mean of 3 radiologists results)" met acceptance criteria |
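To make the inferred repeatability and variability criteria concrete, the sketch below computes the two statistics most commonly used for this purpose: coefficient of variation (CV) and a single-rater, absolute-agreement ICC(2,1). All input numbers and any thresholds are illustrative assumptions; the submission does not disclose its actual statistics or cut-offs.

```python
import numpy as np

def coefficient_of_variation(measurements):
    """CV (%) across repeated measurements of the same subject and metric."""
    m = np.asarray(measurements, dtype=float)
    return 100.0 * m.std(ddof=1) / m.mean()

def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `scores` has shape (n_subjects, k_raters_or_repeats).
    """
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    ss_total = ((scores - grand) ** 2).sum()
    ss_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum()  # subjects
    ss_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum()  # raters/repeats
    ms_r = ss_rows / (n - 1)                                  # between-subjects
    ms_c = ss_cols / (k - 1)                                  # between-raters
    ms_e = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))  # residual
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Hypothetical repeat scans of one subject's visceral-fat metric:
cv = coefficient_of_variation([10.0, 10.2, 9.8])      # -> 2.0 (%)

# Hypothetical inter-operator data: 4 subjects x 2 operators.
icc = icc_2_1([[9, 10], [5, 6], [7, 8], [3, 4]])      # high agreement
```

An acceptance criterion of the inferred kind would then be a simple comparison, e.g. `cv <= cv_threshold` or `icc >= icc_threshold`, with thresholds fixed in the test protocol before the study.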

    2. Sample Size Used for the Test Set and Data Provenance

    The document does not specify the sample size used for the test set.

    The data provenance (country of origin, retrospective/prospective) is not explicitly stated. However, the context of an FDA submission for a device used in clinical settings suggests the data would likely be from a clinical or research environment, potentially multi-center, but this is an inference.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Number of Experts: 3 radiologists
    • Qualifications of Experts: The document states "3 radiologists results" but does not provide specific qualifications (e.g., years of experience, subspecialty).

    4. Adjudication Method for the Test Set

    The adjudication method used for the test set is implicitly a "mean (average) of 3 radiologists results" for establishing the gold standard. This is a form of consensus in which the average of the readers' measurements serves as the reference. It is not a typical "2+1" or "3+1" adjudication scheme (where an additional reader resolves disagreements), but rather a central tendency measure of their assessments, which is a natural choice for continuous quantitative metrics.
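A mean-of-readers reference of this kind can be computed directly. The sketch below, using invented numbers purely for illustration, forms the gold standard as the per-case mean of three readers and scores a hypothetical device output against it by mean absolute error (one plausible agreement metric; the submission does not state which was actually used).

```python
import numpy as np

# Hypothetical measurements (e.g. muscle area, cm^2) from 3 radiologists
# on 4 cases; rows = cases, columns = readers.
reader_scores = np.array([
    [112.0, 115.0, 113.0],
    [ 98.0, 101.0,  99.0],
    [140.0, 138.0, 142.0],
    [ 75.0,  77.0,  76.0],
])

# Ground truth as central tendency of the readers: no adjudication of
# disagreements, just the per-case mean.
gold_standard = reader_scores.mean(axis=1)

# Hypothetical device output for the same 4 cases.
device_output = np.array([113.0, 100.0, 141.0, 75.5])

# One plausible agreement metric: mean absolute error against the consensus.
mae = np.abs(device_output - gold_standard).mean()   # -> 0.625
```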

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    An MRMC comparative effectiveness study was NOT explicitly done in the sense of comparing human readers' performance with AI assistance versus without it.

    The study did involve "Comparative testing between the operators' results and the gold standard (mean of 3 radiologists results)," which evaluates the device's output and how operators use it against a human expert consensus. However, it doesn't describe a scenario where human readers improve with AI assistance in their own diagnostic performance compared to their performance without the AI. The stated use case is that "Perspectum trained analysts use the VitruvianScan software medical device to process the MRI images and produce the quantitative metrics and composite images," and then these are sent to "trained Healthcare Professionals who then utilize these to make clinical decisions." This suggests the device provides quantitative data to healthcare professionals, rather than directly assisting their image interpretation to improve their diagnostic accuracy from images alone.

    6. Standalone (Algorithm Only) Performance Study

    Yes, a standalone performance assessment was conducted, in the sense that the device's quantitative output was evaluated directly against references rather than through its effect on clinicians' reading performance. Both the "Comparative testing between the operators' results and the gold standard (mean of 3 radiologists results)" and the "benchmarking performance" against OSIRIX MD assess the device's quantitative capabilities (as operated by trained analysts, per the intended workflow) against a reference.

    7. Type of Ground Truth Used

    The primary type of ground truth used for the comparative testing was expert consensus (mean of 3 radiologists results).

    8. Sample Size for the Training Set

    The document does not provide any information regarding the sample size used for the training set for the VitruvianScan algorithm. This information is typically crucial for understanding the generalizability and robustness of an AI/ML device.

    9. How the Ground Truth for the Training Set Was Established

    The document does not provide any information on how the ground truth for the training set was established.
