
510(k) Data Aggregation

    K Number: K973653
    Date Cleared: 1997-12-12 (78 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices: MedVision™ Imaging Software (K924178)
    Predicate For: N/A
    Intended Use

    The SeeMor™ software program should be used for the display and image manipulation of multimodality diagnostic medical images.

    Device Description

    This medical device (SeeMor™) is a display program for viewing diagnostic medical images. The program provides capabilities for manipulating the displayed images through the following command options: clipping, window/level adjustment, magnification, pan, relate, add, delete, next, cine, lock, select, view, flip vertical/horizontal, set color table, orthogonal view reconstruction, cascade, tile, and reset.
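
    For illustration only, here is a minimal Python/NumPy sketch of two of these commands, window/level adjustment and the vertical/horizontal flips. The mapping, function names, and parameters are assumptions for this sketch, not taken from the submission:

```python
import numpy as np

def window_level(image: np.ndarray, window: float, level: float) -> np.ndarray:
    """Map raw pixel intensities to an 8-bit display range (assumed mapping).

    Pixels at or below (level - window/2) clamp to 0, pixels at or above
    (level + window/2) clamp to 255, and values in between scale linearly.
    """
    lo = level - window / 2.0
    hi = level + window / 2.0
    scaled = (image.astype(np.float64) - lo) / (hi - lo)
    return (np.clip(scaled, 0.0, 1.0) * 255).astype(np.uint8)

def flip_vertical(image: np.ndarray) -> np.ndarray:
    # Flip top-to-bottom: reverse the row axis of the pixel array.
    return image[::-1, :]

def flip_horizontal(image: np.ndarray) -> np.ndarray:
    # Flip left-to-right: reverse the column axis.
    return image[:, ::-1]

# Example: a synthetic 16-bit CT-like image displayed with a
# soft-tissue window (width 400, center 40).
ct = np.random.randint(-1000, 3000, size=(512, 512)).astype(np.int16)
display = window_level(ct, window=400.0, level=40.0)
```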

    AI/ML Overview

    Here's an analysis of the provided text regarding the acceptance criteria and study for the SeeMor™ device:

    1. A table of acceptance criteria and the reported device performance

    | Acceptance Criteria | Reported Device Performance |
    |---------------------|-----------------------------|
    | Safety | Determined through stages of software development: initial design, coding, debugging, testing, and in-house validation. |
    | Effectiveness | Established in an in-house trial validation. |
    | Intended Use | Display and image manipulation of multi-modality diagnostic medical images. |
    | Equivalence | Substantially equivalent to MedVision™ Imaging Software (K924178). |

    2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

    • Sample Size: 25 patients
    • Data Provenance: The document states "an in-house trial validation," which suggests the data was collected internally by Areeda Associates Ltd. No country of origin is mentioned, and it is not stated whether the study was retrospective or prospective. Given the small sample size and the in-house setting, it was likely a short-term study, either retrospective or a small prospective series.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)

    The document does not specify the number of experts used or their qualifications for establishing ground truth for the 25-patient test set. It only mentions that the program "serves merely as a display program to aid in the diagnostic interpretation of a patient's study" and that "the final responsibility for interpretation of the study lies with the physician." This implies that physicians would use the display program for their interpretation, but it does not detail how a "ground truth" was formally established to validate the software's effectiveness, especially since the software itself provides no diagnostic output.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set

    No adjudication method is described for the test set.

    5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done, what was the effect size of how much human readers improved with AI vs. without AI assistance

    No MRMC comparative effectiveness study was done. The device is a display program and does not incorporate AI or provide diagnostic interpretive output; therefore, it would not be applicable for measuring human reader improvement with AI assistance.

    6. If a standalone performance study (i.e., algorithm only, without human-in-the-loop performance) was done

    A standalone performance study, as typically understood for an AI algorithm (measuring its diagnostic accuracy independently), was not done. The device is a display program, not a diagnostic algorithm. Its "effectiveness" as stated is in its ability to display and manipulate images reliably for a physician.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    The document does not explicitly state the type of ground truth used. Given that the device is a display program and does not generate diagnostic output, the "effectiveness" validation likely focused on the software's functionality (e.g., correct display of images, proper application of manipulation commands like window/level, zoom, etc.) as perceived by users or through technical checks, rather than establishing a true medical diagnosis ground truth for each case. The responsibility for interpretation rests with the physician using the display.
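
    To make that concrete, here is a hypothetical functional check of the kind such a validation might include, reusing the assumed window/level mapping from the earlier sketch; neither the test nor the mapping comes from the 510(k) summary:

```python
import numpy as np

def window_level(image, window, level):
    # Same assumed mapping as the earlier sketch.
    lo, hi = level - window / 2.0, level + window / 2.0
    scaled = (image.astype(np.float64) - lo) / (hi - lo)
    return (np.clip(scaled, 0.0, 1.0) * 255).astype(np.uint8)

def test_window_level_bounds():
    # Pixels at the window floor map to 0, at the ceiling to 255,
    # and the level (window center) lands near mid-gray.
    img = np.array([[-160, 40, 240]], dtype=np.int16)  # floor, center, ceiling for W=400, L=40
    out = window_level(img, window=400, level=40)
    assert out[0, 0] == 0 and out[0, 2] == 255
    assert abs(int(out[0, 1]) - 128) <= 1

test_window_level_bounds()
```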

    8. The sample size for the training set

    The document does not mention a training set. This is consistent with the device being a display and manipulation program, which typically does not involve machine learning or training on medical data in the way an AI diagnostic algorithm would. Its development is based on software engineering principles rather than data-driven model training.

    9. How the ground truth for the training set was established

    As there is no mention of a training set, the establishment of ground truth for a training set is not applicable.
