
510(k) Data Aggregation

    K Number: K223660
    Device Name: SaintView
    Manufacturer:
    Date Cleared: 2023-08-14 (251 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices:
    Predicate For: N/A
    Intended Use

    SaintView is a software device that receives medical images and data from various sources. Images and data can be stored, communicated, processed, and displayed within the system or across computer networks at distributed locations. Only preprocessed DICOM "for presentation" images can be interpreted for primary image diagnosis in mammography. Lossy compressed mammographic images and digitized film-screen images must not be reviewed for primary image interpretation. Mammographic images may only be interpreted using a monitor that meets technical specifications identified by FDA. Typical users of this system are trained professionals, e.g., physicians, radiologists, nurses, and medical technicians.
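The Intended Use imposes two gating rules on primary mammography interpretation: only "for presentation" DICOM images qualify, and lossy-compressed images are excluded. A minimal sketch of those rules, using plain dicts in place of real DICOM headers (the function name and dict format are assumptions for illustration; the key names mirror the standard DICOM attributes `PresentationIntentType` and `LossyImageCompression`):

```python
def eligible_for_primary_mammo_read(header: dict) -> bool:
    """Apply the constraints stated in the Intended Use: only preprocessed
    'for presentation' DICOM images, and never lossy-compressed images,
    may be used for primary mammography interpretation."""
    if header.get("PresentationIntentType") != "FOR PRESENTATION":
        return False  # 'FOR PROCESSING' (raw) images are excluded
    if header.get("LossyImageCompression") == "01":
        return False  # lossy-compressed mammograms must not be read
    return True

print(eligible_for_primary_mammo_read(
    {"PresentationIntentType": "FOR PRESENTATION",
     "LossyImageCompression": "00"}))  # True
```

In a real deployment these values would come from the DICOM header of each received image; the monitor-specification requirement is a separate hardware check not modeled here.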

    Device Description

    SaintView is a product for checking, analyzing, recording, and storing medical images in hospitals. It provides functions for storing, transmitting, viewing, zooming in and out, moving, and rotating medical images, as well as measuring length to aid in reading, and it displays various tools and reference positions. The product includes a worklist for ease of reading, support for high-resolution medical monitors, and a screen-division function for large monitors. It supports the DICOM standard, so DICOM images can be checked in the viewer, and DICOM is also supported for image reception, transmission, CD creation, and printing.
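Two of the viewer operations listed above, rotation and zoom, can be sketched on a bare 2D pixel array. This is illustrative only: a real viewer operates on DICOM pixel data, and the helper names below are not from the product.

```python
def rotate90(img):
    """Rotate a 2D pixel array 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def zoom_crop(img, top, left, h, w):
    """'Zoom' by extracting a sub-region (display scaling would follow)."""
    return [row[left:left + w] for row in img[top:top + h]]

img = [[1, 2],
       [3, 4]]
print(rotate90(img))               # [[3, 1], [4, 2]]
print(zoom_crop(img, 0, 0, 1, 2))  # [[1, 2]]
```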

    AI/ML Overview

    The provided text describes Inviz Corporation's SaintView device, a medical image management and processing system, and its performance tests.

    Here's an analysis of the acceptance criteria and study that proves the device meets them:

    1. A table of acceptance criteria and the reported device performance

    Category          | Acceptance Criteria                                    | Reported Device Performance
    ------------------|--------------------------------------------------------|----------------------------------------------------
    Length / Tapeline | Acceptable error tolerance within ± 0.03 mm            | Met the evaluation criteria (within ± 0.03 mm)
    Angle             | Quantitative measurements aligned with actual values   | Met the evaluation criteria (aligned with actual values)
    ROI               | Acceptable error tolerance within ± 0.01 square inches | Met the evaluation criteria (within ± 0.01 square inches)
    Area              | Acceptable error tolerance within ± 0.01 square cm     | Met the evaluation criteria (within ± 0.01 square cm)
    Volume            | Acceptable error tolerance within ± 0.15 mm³           | Met the evaluation criteria (within ± 0.15 mm³)
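The acceptance rule in each row reduces to the same check: the absolute difference between a measured value and the known phantom value must stay inside the stated tolerance. A minimal sketch (the example measurement values are hypothetical; only the tolerances come from the table):

```python
def within_tolerance(measured: float, truth: float, tol: float) -> bool:
    """Acceptance rule from the table: |measured - truth| <= tolerance."""
    return abs(measured - truth) <= tol

# Tolerances from the acceptance-criteria table (angle has no numeric bound).
CRITERIA = {
    "length_mm": 0.03,
    "roi_sq_in": 0.01,
    "area_sq_cm": 0.01,
    "volume_mm3": 0.15,
}

# Hypothetical reading: 50.02 mm measured against a known 50.00 mm feature.
print(within_tolerance(50.02, 50.00, CRITERIA["length_mm"]))  # True
```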

    2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

    The document mentions using a "standardized AAPM CT phantom" for performance measurements. It does not specify a sample size in terms of number of cases or images, nor does it provide information about data provenance (e.g., country of origin, retrospective/prospective). The use of a phantom suggests controlled, synthetic data rather than real patient data.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    This information is not provided in the document. The performance tests rely on measurements against a standardized phantom, which inherently has known ground truth values for length, angle, ROI, area, and volume. Therefore, human expert review for establishing ground truth on a test set (in the typical sense for diagnostic AI) would not be directly applicable for these specific measurement accuracy tests.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    Adjudication methods are not applicable for the reported performance tests. The ground truth for the measurement accuracy tests is derived from the known dimensions of the standardized AAPM CT phantom, not from expert consensus.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI versus without AI assistance

    There is no mention of a multi-reader multi-case (MRMC) comparative effectiveness study. The tests described focus solely on the device's measurement accuracy on a phantom, not on its impact on human reader performance or diagnostic effectiveness with human-in-the-loop.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    The presented performance tests on the AAPM CT phantom are effectively standalone performance tests for the device's measurement capabilities, as they assess the algorithm's ability to accurately measure predefined features without human intervention in the measurement process itself.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    The ground truth for the "Measurement accuracy test for angles and distances" was established by the known, actual values of a standardized AAPM CT phantom. This represents a form of objective, physical ground truth.

    8. The sample size for the training set

    The document does not provide any information about a training set or its sample size. This type of device is an image processing and management system, not a diagnostic AI algorithm that is typically trained on large datasets. The verification appears to focus on the accuracy of its implemented measurement tools.

    9. How the ground truth for the training set was established

    Since no training set information is provided, this is not applicable/available from the document.
