
510(k) Data Aggregation

    K Number
    K061418
    Device Name
    QUANTIVA
    Date Cleared
    2006-07-21 (60 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    The Quantiva™ is a software application intended to co-register and display 2D & 3D multimodality (CT & PET) medical image data. The medical practitioner can visualize, process, render, view, store, print and distribute DICOM 3.0 compliant medical image data within the system and/or across computer networks at distributed locations utilizing standard P.C. hardware.

    The volume and linear measurement functions are intended for the evaluation and quantification of tumor measurements, location/displacement studies, and the analysis and evaluation of both hard and soft tissue. The software also supports interactive segmentation of the region of interest, automated contouring of multi-slice ROIs, and labeling of avoidance structures during evaluation.

    Typical users of Quantiva™ are trained professionals (including but not limited to radiologists, clinicians and technicians). When interpreted by a trained physician, reviewed images may be used as an element for diagnosis.

    The Quantiva™ is indicated for use when it is necessary to acquire, record, review and distribute these images. The Quantiva™ is a prescription device. The labeling, instructions and user operations are designed for trained, licensed medical professionals.
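    The submission does not explain how the volume and linear measurement functions mentioned above are computed. Purely as an illustration of the underlying arithmetic (voxel counting scaled by the DICOM voxel spacing), and not the Quantiva™ implementation, a minimal sketch with hypothetical values:

```python
import numpy as np

def roi_volume_ml(roi_mask, spacing_mm):
    """Approximate ROI volume in millilitres from a binary mask.

    roi_mask   -- boolean array (slices, rows, cols), True inside the ROI
    spacing_mm -- (slice spacing, row spacing, column spacing) in mm, as read
                  from the DICOM SliceThickness / PixelSpacing attributes
    """
    voxel_volume_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    return roi_mask.sum() * voxel_volume_mm3 / 1000.0  # 1 mL = 1000 mm^3

def linear_extent_mm(roi_mask, spacing_mm):
    """Largest axis-aligned extent of the ROI (a crude linear measurement)."""
    coords = np.argwhere(roi_mask) * np.asarray(spacing_mm, dtype=float)
    return float((coords.max(axis=0) - coords.min(axis=0)).max())

# Hypothetical lesion: 10 x 20 x 20 voxels at 2.5 x 1.0 x 1.0 mm spacing.
mask = np.zeros((40, 128, 128), dtype=bool)
mask[10:20, 50:70, 50:70] = True
print(roi_volume_ml(mask, (2.5, 1.0, 1.0)))     # 10.0 mL
print(linear_extent_mm(mask, (2.5, 1.0, 1.0)))  # 22.5 mm (between voxel centres)
```

    The point of the sketch is only that such measurements are geometric computations over segmented image data; the summary gives no detail on how the device itself performs them.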

    Device Description

    The Quantiva™ is a Class II software application intended to co-register and display fused PET plus CT images, enabling a qualified radiological technologist to visualize 2D & 3D multimodal (CT and PET) medical image data. The qualified user may process, render, view, store, and print DICOM 3.0 compliant medical image data within the system and/or across computer networks utilizing standard P.C. hardware and software.
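    The summary does not describe how the co-registration itself is performed. As a non-authoritative sketch of what rigid multimodality (PET-to-CT) registration generally involves, and not the Quantiva™ implementation, the following uses the open-source SimpleITK library with a mutual-information metric; the file names and parameter values are placeholders:

```python
import SimpleITK as sitk

# Placeholder paths -- not from the submission.
ct = sitk.ReadImage("ct_series.nii.gz", sitk.sitkFloat32)    # fixed image
pet = sitk.ReadImage("pet_series.nii.gz", sitk.sitkFloat32)  # moving image

# Initialise a rigid (6 degrees of freedom) transform at the geometric centres.
initial = sitk.CenteredTransformInitializer(
    ct, pet, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)  # multimodal metric
reg.SetMetricSamplingStrategy(reg.RANDOM)
reg.SetMetricSamplingPercentage(0.01)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetOptimizerAsRegularStepGradientDescent(
    learningRate=1.0, minStep=1e-4, numberOfIterations=200)
reg.SetOptimizerScalesFromPhysicalShift()
reg.SetInitialTransform(initial, inPlace=False)

transform = reg.Execute(ct, pet)

# Resample PET onto the CT grid so the two series can be displayed fused.
pet_on_ct = sitk.Resample(pet, ct, transform, sitk.sitkLinear, 0.0, pet.GetPixelID())
sitk.WriteImage(pet_on_ct, "pet_registered_to_ct.nii.gz")
```

    Mutual information is the usual choice for CT/PET fusion because the two modalities have no direct intensity correspondence; whether Quantiva™ uses this or another approach is not stated in the document.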

    AI/ML Overview

    It is important to note that the provided 510(k) summary for the Quantiva™ software (K061418) does not contain specific details about formal performance testing with acceptance criteria and reported device performance in the way typically expected for a detailed AI/software as a medical device (SaMD) study. The document primarily focuses on establishing substantial equivalence to a predicate device based on intended use and technological characteristics.

    The summary states: "Information submitted in this premarket notification for the Quantiva™ software includes results of performance testing." However, it does not elaborate on what that performance testing entailed, what acceptance criteria were set, or the quantitative results achieved.

    Therefore, much of the requested information cannot be extracted directly from this 510(k) summary. I will answer based on the information available and explicitly state when information is not provided.

    Here's a breakdown of the requested information:


    1. A table of acceptance criteria and the reported device performance

    Acceptance Criteria: Not provided. The 510(k) summary does not specify any quantitative acceptance criteria for device performance.
    Reported Device Performance: Not provided. The summary states that "Information submitted in this premarket notification for the Quantiva™ software includes results of performance testing," but it does not disclose the specific results or how they demonstrate meeting any criteria.

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    • Sample size for the test set: Not provided.
    • Data provenance: Not provided. The summary mentions co-registering and displaying PET and CT images, but does not specify the origin or type of data used for any performance testing.
    • Retrospective or prospective: Not provided.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    • Number of experts: Not provided.
    • Qualifications of experts: Not provided.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    • Adjudication method: Not provided.

    5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance

    • MRMC study: Not provided. There is no mention of a comparative effectiveness study involving human readers with and without AI assistance (as the device itself is an image co-registration and visualization tool, not an AI for diagnosis in the modern sense).
    • Effect size: Not applicable, as no such study is reported.

    6. If a standalone study (i.e., algorithm-only performance without a human in the loop) was done

    • The document implies that performance testing was done on the software's ability to "co-register and display fused PET plus CT images." However, the details of such a standalone evaluation (e.g., accuracy of co-registration, image quality metrics) are not provided; for an illustration of the kind of metric such an evaluation might report, see the sketch after this item. The device is described as a "software application intended to co-register and display" images for a "qualified radiological technologist" to visualize and for "the diagnosing radiologist," indicating it is a tool for human use rather than a fully autonomous diagnostic algorithm.
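    The summary reports no quantitative co-registration accuracy. Purely as an illustration of the kind of standalone metric such testing could have used (nothing of the sort is documented in K061418), registration accuracy is often summarised as a target registration error over expert-placed landmark pairs; the coordinates below are hypothetical:

```python
import numpy as np

def target_registration_error(ct_landmarks_mm, registered_pet_landmarks_mm):
    """Mean Euclidean distance (mm) between corresponding landmarks after
    registration -- one common standalone accuracy metric for image fusion."""
    a = np.asarray(ct_landmarks_mm, dtype=float)
    b = np.asarray(registered_pet_landmarks_mm, dtype=float)
    return float(np.linalg.norm(a - b, axis=1).mean())

# Hypothetical landmark coordinates in patient space (mm), not from the submission.
ct_pts  = [[10.0, 22.0, 31.0], [45.5, 18.0, 60.0], [70.0, 40.0, 12.5]]
pet_pts = [[10.8, 21.4, 31.5], [44.9, 18.6, 59.2], [70.6, 40.5, 13.1]]
print(f"TRE: {target_registration_error(ct_pts, pet_pts):.2f} mm")  # ~1.1 mm here
```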

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    • Type of ground truth: Not provided. Given the device's function (image co-registration and display), the "ground truth" for its performance testing might have related to the accuracy of image alignment or visual quality metrics, but this is not specified.

    8. The sample size for the training set

    • Sample size for the training set: Not applicable/Not provided. The 510(k) summary does not indicate that this device utilizes a machine learning model that requires a distinct "training set" in the contemporary sense. The device is described as a "software application" for image processing and display, suggesting traditional software engineering and validation rather than AI model training.

    9. How the ground truth for the training set was established

    • How ground truth was established: Not applicable/Not provided, as the concept of a "training set" and associated ground truth for a machine learning model is not suggested by the device description in this 2006 510(k) summary.

    Summary of available information regarding performance testing from the document:

    The 510(k) summary for Quantiva™ (K061418) acknowledges that "Information submitted in this premarket notification for the Quantiva™ software includes results of performance testing." However, it does not disclose any specifics about these tests, such as:

    • The methodologies used.
    • The metrics evaluated.
    • The acceptance criteria defined.
    • The actual performance results achieved.
    • Any details about the datasets (sample size, provenance).
    • Involvement of experts or how ground truth was established.

    This type of high-level statement regarding performance testing, without detailed results in the public summary, was common for 510(k) submissions in 2006, particularly for software that primarily serves as a visualization and processing tool rather than a diagnostic AI algorithm. The primary focus of this submission was likely to demonstrate substantial equivalence to the predicate device (Fusion7D, K033955) based on similar intended use and technological characteristics, rather than extensive quantitative clinical performance studies that are now more prevalent for AI/ML-based SaMDs.
