510(k) Data Aggregation

    K Number: K970970
    Device Name: CARDIOMATCH
    Manufacturer:
    Date Cleared: 1997-06-13 (88 days)
    Product Code:
    Regulation Number: 892.1200
    Reference & Predicate Devices:
    Predicate For: N/A
    Intended Use

    Cardiac Stress and Rest SPECT Imaging Processing Software is a diagnostic software program that quantitatively analyzes myocardial perfusion in patients injected with Cardiolite® (Tc Sestamibi) or Thallium following a rest/stress Single Photon Emission Computerized Tomography (SPECT) acquisition protocol.

    Device Description

    The program automatically determines the alignment parameters for the stress and rest reconstructed MPS SPECT images and, following operator verification of these parameters, performs a size and shape normalization to a template using a published and independently validated 3D image registration method. The normalized images are then compared to a normal distribution, and the results of this comparison are used to generate a visual and quantitative representation of the extent and location of perfusion defects. Results are presented as a 3D representation of the normalized images, including a visualization of the abnormalities, together with a table indicating the number, extent, and location of abnormalities.
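
    Taken together, the description outlines a registration-and-comparison pipeline: register the patient volumes to a common template, then flag regions that fall outside normal limits. The sketch below illustrates only the comparison step and is purely hypothetical; the function name, z-score formulation, and threshold are assumptions, not the vendor's implementation.

        import numpy as np

        def quantify_perfusion_defects(normalized_img, normal_mean, normal_std,
                                       z_threshold=2.5):
            # Compare a spatially normalized perfusion volume (already registered
            # to the template) against per-voxel normal limits. Voxels more than
            # z_threshold standard deviations BELOW the normal mean are flagged as
            # hypoperfused; the threshold is illustrative only.
            z = (normalized_img - normal_mean) / np.maximum(normal_std, 1e-6)
            defect_mask = z < -z_threshold
            extent_pct = 100.0 * defect_mask.mean()   # % of template voxels flagged
            severity = float(-z[defect_mask].sum()) if defect_mask.any() else 0.0
            return defect_mask, extent_pct, severity

    The actual program also reports the location of abnormalities, which would additionally require a label map (e.g., vascular territories) defined on the same template.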

    AI/ML Overview

    Here's an analysis of the provided text regarding the CARDIOMATCH® device, focusing on acceptance criteria and the supporting study:

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided document does not explicitly state specific, quantifiable acceptance criteria (e.g., a sensitivity or specificity threshold). Instead, it makes the general claim that the "accuracy results obtained with this program are similar or higher to those obtained with previous quantitative analysis programs."

    Therefore, the table below reflects this general statement and the lack of specific, numerical criteria in the document.

    Metric / Criterion | Acceptance Criteria (as stated or inferred)                                                                     | Reported Device Performance
    Overall Accuracy   | Similar to or higher than previous quantitative analysis programs (e.g., Cedars-Sinai "Bullseye" and C-Equal™)  | "similar or higher"

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: Over 50 patients
    • Data Provenance: The study was performed at Stanford University. The document does not specify the country of origin of the patients, but given Stanford University's location in the United States, the data most likely come from the US.
    • Retrospective or Prospective: Not explicitly stated. The phrase "independent validation study...performed at Stanford University" is compatible with either design; without further details, this cannot be confirmed.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    The document does not specify the number of experts used to establish ground truth or their qualifications.

    4. Adjudication Method for the Test Set

    The document does not specify any adjudication method (e.g., 2+1, 3+1, none) for the test set.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    An MRMC comparative effectiveness study was not performed or reported in the provided text. The device is referred to as an "adjunctive diagnostic tool" for the physician, and there is no information on how much human readers improve with AI assistance compared to reading without it.

    6. Standalone (Algorithm Only) Performance Study

    Yes, a standalone performance study was done. The "independent validation study on over 50 patients performed at Stanford University" assessed the "accuracy results obtained with this program" in a standalone capacity: the device itself is "diagnostic software" that "quantitatively analyzes" the images and quantifies myocardial perfusion defects. Its output is described as "adjunctive," implying the software's analysis is generated independently and then integrated by the physician.
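
    The summary gives no protocol details for that study, so the following is only a hypothetical sketch of what an algorithm-only evaluation computes: per-patient software output compared against a reference standard with no reader in the loop. The function name, labels, and metrics chosen here are assumptions for illustration, not the Stanford study design.

        def standalone_metrics(pred, reference):
            # Algorithm-only performance: per-patient binary calls
            # (1 = perfusion defect present, 0 = absent) compared against a
            # reference standard. Purely illustrative; the actual study design
            # and endpoints are not described in the 510(k) summary.
            tp = sum(p == 1 and r == 1 for p, r in zip(pred, reference))
            tn = sum(p == 0 and r == 0 for p, r in zip(pred, reference))
            fp = sum(p == 1 and r == 0 for p, r in zip(pred, reference))
            fn = sum(p == 0 and r == 1 for p, r in zip(pred, reference))
            n = tp + tn + fp + fn
            return {
                "accuracy": (tp + tn) / n if n else float("nan"),
                "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
                "specificity": tn / (tn + fp) if (tn + fp) else float("nan"),
            }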

    7. Type of Ground Truth Used

    The document does not explicitly state the type of ground truth used (e.g., expert consensus, pathology, outcomes data). Given the nature of cardiac SPECT imaging for perfusion defects, it is likely that the ground truth would have involved a combination of:

    • Clinical Diagnosis: Based on patient history, stress/rest EKG, and visual interpretation of images by experts.
    • Follow-up Outcomes: Potentially, though less common for initial validation.
    • Expert Consensus: A panel of cardiologists or nuclear medicine physicians reviewing all available clinical data and images.

    However, this is inference, as the document is silent on this specific detail.

    8. Sample Size for the Training Set

    The document does not specify the sample size for the training set. It mentions that the software uses "a published and independently validated 3D image registration method" and that normalized images are "compared to a normal distribution," which implies pre-existing data or models were used, but the size of any specific training set for CARDIOMATCH® is not provided.

    9. How Ground Truth for the Training Set Was Established

    The document does not specify how ground truth for the training set (if CARDIOMATCH® had a distinct training phase beyond using existing validated methods) was established. It refers to comparing normalized images to a "normal distribution," which suggests using a database of images from healthy individuals, but the method for establishing "normalcy" within that distribution is not detailed.
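
    That reading is consistent with the standard normal-database approach in quantitative perfusion SPECT, in which per-voxel normal limits are estimated from registered scans of subjects with a low likelihood of disease. The sketch below illustrates that general approach under stated assumptions; it is not a description of how CARDIOMATCH®'s normal limits were actually built.

        import numpy as np

        def build_normal_limits(normal_scans):
            # normal_scans: array of shape (n_subjects, x, y, z), each scan already
            # spatially normalized to the common template. Returns the per-voxel
            # mean and sample standard deviation used as "normal limits."
            # Illustrative only; the source of the CARDIOMATCH normal data is not
            # given in the 510(k) summary.
            mean = normal_scans.mean(axis=0)
            std = normal_scans.std(axis=0, ddof=1)
            return mean, std

    These limits are exactly what the voxel-wise comparison sketched under the Device Description would consume.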
