
510(k) Data Aggregation

    K Number
    K041782
    Date Cleared
    2004-08-16

    (46 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    The SeeMor™ 5.0 software program should be used for the transfer, display, and image manipulation of multimodality diagnostic medical images and for the backprojection reconstruction of SPECT & PET gated and ungated myocardial perfusion data.

    Device Description

    The SeeMor™ 5.0 medical viewing application is used for the transfer and viewing of diagnostic medical images. The program provides the capability of manipulating the displayed images with command options including: clipping, window/level adjustment, magnification, pan, relate, add, delete, next, cine, lock, select, view, flip vertical/horizontal, set color table, orthogonal view reconstruction, cascade, tile, and reset. The ReconTool™ processing application within SeeMor™ 5.0 can be used to reorient and apply tomographic reconstruction to SPECT & PET gated and ungated myocardial perfusion image data sets.
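The 510(k) summary does not describe ReconTool™'s reconstruction algorithm, but the tomographic reconstruction it refers to can be sketched in general terms as an unfiltered backprojection over the acquired projection angles. The sketch below is a hypothetical illustration using NumPy/SciPy, not the vendor's actual implementation:

```python
import numpy as np
from scipy.ndimage import rotate

def backproject(sinogram, angles_deg):
    """Unfiltered backprojection: smear each 1-D projection across the
    image plane at its acquisition angle and sum the results."""
    n_angles, n_det = sinogram.shape
    recon = np.zeros((n_det, n_det))
    for proj, angle in zip(sinogram, angles_deg):
        # Replicate the 1-D projection along one axis...
        smear = np.tile(proj, (n_det, 1))
        # ...then rotate the smear into the projection's orientation
        # (rotation is about the image center, bilinear interpolation).
        recon += rotate(smear, angle, reshape=False, order=1)
    return recon / n_angles
```

A point source centered in the field of view produces a spike at the detector center for every angle, so backprojecting such a sinogram yields an image whose maximum sits at the image center. Clinical reconstruction would typically apply a ramp filter to each projection first (filtered backprojection); the summary does not say which variant ReconTool™ uses.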

    AI/ML Overview

    Acceptance Criteria and Study for SeeMor™ 5.0 (K041782)

    Based on the provided text, the SeeMor™ 5.0 device is a medical imaging display and processing program. The study described focuses on proving its "safety and effectiveness" for its intended use, which primarily involves the display, manipulation, reorientation, and tomographic reconstruction of SPECT & PET gated and ungated myocardial perfusion image data sets. It's explicitly stated that the device "does not provide any interpretive output other than the display of the images."

    Due to the nature of the device (a display and reconstruction tool rather than an AI diagnostic aid), the typical metrics for AI performance (e.g., sensitivity, specificity, AUC) and an MRMC comparative effectiveness study are not explicitly mentioned in the provided text. The study primarily validates the functionality and accuracy of the image processing aspect.

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state quantitative acceptance criteria or precise performance metrics in the format of "X% accuracy" or "Y level of error." Instead, it relies on the successful completion of "initial design, coding, debugging, testing, and in-house validation" to demonstrate safety and effectiveness. The effectiveness is judged by the ability to correctly reorient and reconstruct images, as observed in the trial.

    | Acceptance Criteria (Implied) | Reported Device Performance |
    | --- | --- |
    | Safety: No direct adverse effects on health. | "This program serves merely as a display program... It poses no direct adverse effect on health since it is only providing a means of displaying the medical images for the physician." |
    | Effectiveness: Program functions as intended (transfer, display, manipulation, reorientation, and reconstruction of SPECT & PET myocardial perfusion images). | "The effectiveness of the program has been established in an in-house trial validation which included an evaluation of 20 patients." "We contend that the method employed for the development and the final in-house trial validation results of the SeeMor™ medical software program (with ReconTool™) have proven its safety and effectiveness." |
    | Equivalence to Predicate: Substantial equivalence to AutoSPECT Plus. | Stated as substantially equivalent to AutoSPECT Plus (K992317). |

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size: 20 patients.
    • Data Provenance: Not explicitly stated, but implied to be retrospective as it refers to an "evaluation of 20 patients" in an "in-house trial validation." The country of origin is not mentioned.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    • Number of Experts: Not specified.
    • Qualifications of Experts: Not specified. The document states that "The final responsibility for interpretation of the study lies with the physician," implying that physician judgment would be the ultimate arbiter, but doesn't detail their specific role in establishing ground truth for the technical effectiveness of the reconstruction.

    4. Adjudication Method for the Test Set

    • Adjudication Method: Not specified. Given the nature of the device (a display and reconstruction tool), the "ground truth" for its effectiveness would likely be the visual correctness of the reconstructed images compared to expected outputs, rather than a diagnostic consensus among clinicians for a specific diagnosis.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • MRMC Study Done? No. The document does not describe an MRMC study comparing human readers with and without AI assistance. This is consistent with the device's function as a display and processing tool, not a diagnostic AI.

    6. Standalone (Algorithm Only) Performance

    • Standalone Study Done? Yes, in a sense. The "in-house trial validation" assessed the functionality and output of the SeeMor™ 5.0 program (including ReconTool™) on its own, without human intervention in the image processing itself, although human review was necessary to verify its effectiveness. The device's "effectiveness" was based on its ability to produce accurate reconstructed images.

    7. Type of Ground Truth Used

    • Type of Ground Truth: Implied to be technical correctness/visual fidelity of the reconstructed images. The effectiveness was assessed on whether the program successfully performed the requested reorientation and tomographic reconstruction of SPECT & PET data, likely validated by visual inspection and comparison to expected outputs or existing proven methods. It is not pathology, expert consensus on diagnosis, or outcomes data, as the device does not provide diagnostic interpretations.

    8. Sample Size for the Training Set

    • Sample Size: Not applicable/not specified. The device is a software application for image display and reconstruction, not an AI model that undergoes a "training set" in the machine learning sense. Its development involved "initial design, coding, debugging, testing, and in-house validation," which are typical software development stages, not AI model training.

    9. How Ground Truth for the Training Set Was Established

    • How Ground Truth Was Established: Not applicable. As discussed above, there is no "training set" in the AI sense for this device. Its functionalities are based on established image processing algorithms.

    K Number
    K973653
    Date Cleared
    1997-12-12

    (78 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    The SeeMor™ software program should be used for the display and image manipulation of multimodality diagnostic medical images.

    Device Description

    This medical device (SeeMor™) is a display program for viewing diagnostic medical images. The program provides the capabilities of manipulating the images being displayed with the following command options: clipping, window/level adjustment, magnification, pan, relate, add, delete, next, cine, lock, select, view, flip vertical/horizontal, set color table, orthogonal view reconstruction, cascade, tile, and reset.
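Window/level adjustment, one of the listed commands, linearly remaps a chosen intensity range (the window, centered on the level) onto the display range, clipping everything outside it. A minimal sketch of the general technique, assuming an 8-bit display target (hypothetical, not the vendor's code):

```python
import numpy as np

def window_level(image, window, level):
    """Map pixel values in [level - window/2, level + window/2]
    linearly onto [0, 255]; values outside the window are clipped."""
    lo = level - window / 2.0
    hi = level + window / 2.0
    out = (np.clip(image, lo, hi) - lo) / (hi - lo) * 255.0
    return out.astype(np.uint8)
```

For example, with `window=100` and `level=100`, pixel values below 50 map to black, values above 150 map to white, and the 50-150 range spreads across the full grayscale.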

    AI/ML Overview

    Here's an analysis of the provided text regarding the acceptance criteria and study for the SeeMor™ device:

    1. A table of acceptance criteria and the reported device performance

    | Acceptance Criteria | Reported Device Performance |
    | --- | --- |
    | Safety | Determined through stages of software development: initial design, coding, debugging, testing, and in-house validation. |
    | Effectiveness | Established in an in-house trial validation. |
    | Intended Use | Display and image manipulation of multi-modality diagnostic medical images. |
    | Equivalence | Substantially equivalent to MedVision™ Imaging Software (K924178). |

    2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

    • Sample Size: 25 patients
    • Data Provenance: The document states "an in-house trial validation," which suggests the data was collected internally by Areeda Associates Ltd. No specific country of origin is mentioned, nor is it explicitly stated whether the study was retrospective or prospective. Given the small sample size and "in-house trial validation," it's likely a relatively short-term, possibly retrospective or small prospective study.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)

    The document does not specify the number of experts used or their qualifications for establishing ground truth for the 25-patient test set. It only mentions that the program "serves merely as a display program to aid in the diagnostic interpretation of a patient's study" and that "The final responsibility for interpretation of the study lies with the physician." This implies that physicians would use the display program for their interpretation, but it does not detail how a "ground truth" was formally established for validating the software's effectiveness, especially since the software itself provides no diagnostic output.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set

    No adjudication method is described or mentioned for the test set.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of human reader improvement with AI vs. without AI assistance?

    No MRMC comparative effectiveness study was done. The device is a display program and does not incorporate AI or provide diagnostic interpretive output; therefore, it would not be applicable for measuring human reader improvement with AI assistance.

    6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done

    A standalone performance study, as typically understood for an AI algorithm (measuring its diagnostic accuracy independently), was not done. The device is a display program, not a diagnostic algorithm. Its "effectiveness" as stated is in its ability to display and manipulate images reliably for a physician.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    The document does not explicitly state the type of ground truth used. Given that the device is a display program and does not generate diagnostic output, the "effectiveness" validation likely focused on the software's functionality (e.g., correct display of images, proper application of manipulation commands like window/level, zoom, etc.) as perceived by users or through technical checks, rather than establishing a true medical diagnosis ground truth for each case. The responsibility for interpretation rests with the physician using the display.

    8. The sample size for the training set

    The document does not mention a training set. This is consistent with the device being a display and manipulation program, which typically does not involve machine learning or training on medical data in the way an AI diagnostic algorithm would. Its development is based on software engineering principles rather than data-driven model training.

    9. How the ground truth for the training set was established

    As there is no mention of a training set, the establishment of ground truth for a training set is not applicable.

