
Found 2 results

510(k) Data Aggregation

    K Number: K231731
    Date Cleared: 2023-08-21 (69 days)
    Product Code:
    Regulation Number: 892.1200
    Reference & Predicate Devices
    Why did this record match? Reference Devices: K202679, K123646

    Intended Use

    HeartSee version 4 software for cardiac positron emission tomography (PET) is indicated for determining regional and global absolute rest and stress myocardial perfusion in ml/min/g, Coronary Flow Reserve and their combination into the Coronary Flow Capacity (CFC) Map in patients with suspected or known coronary artery disease (CAD) in order to assist clinical interpretation of PET perfusion images and quantification of their severity.

    HeartSee version 4 is intended for use by trained professionals, such as nuclear medicine or nuclear cardiology physicians, or cardiologists with appropriate training and certification. The clinician remains ultimately responsible for the final assessment and diagnosis based on standard practices, clinical judgment and interpretation of PET images or quantitative data.

    Device Description

    HeartSee version 4 is a software tool for cardiac positron emission tomography (PET) for determining regional and global absolute rest and stress myocardial perfusion in ml/min/g, Coronary Flow Reserve and their combination into the Coronary Flow Capacity (CFC) Map for facilitating the interpretation of PET perfusion images in patients with suspected or known coronary artery disease. HeartSee version 4 is intended for use by trained professionals, such as nuclear technicians, nuclear medicine or nuclear cardiology physicians, or cardiologists with appropriate training and certification.

    HeartSee version 4 contains two fundamental components. First, the software imports cardiac PET images in DICOM format from PET scanners with DICOM output. These images are reoriented to cardiac axes to produce standard tomographic and topographic displays of relative uptake. Second, the HeartSee version 4 software quantifies regional absolute rest and stress myocardial perfusion per unit tissue (ml/min/g), Coronary Flow Reserve (CFR) as the stress/rest perfusion ratio, and the Coronary Flow Capacity combining CFR and stress perfusion, all on a pixel basis for regional and global values. HeartSee version 3 (K202679) and version 4 also display stress subendocardial to subepicardial ratio, subendocardial stress to rest ratio on relative activity tomograms and stress relative topogram maps expressed as a fraction of maximum ml/min/g, called relative stress flow (RSF). Archiving output data is supported for clinical diagnostics, quality control and research.
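The quantitative chain described above (absolute rest/stress perfusion, CFR as the stress/rest ratio, and relative stress flow as a fraction of maximum stress flow) can be sketched per pixel. A minimal numpy illustration; the function name, toy values, and zero-guarding choice are assumptions for illustration, not details from the submission:

```python
import numpy as np

def flow_maps(rest, stress):
    """Per-pixel flow metrics from rest/stress perfusion maps (ml/min/g).

    rest, stress: arrays of absolute perfusion values.
    Returns CFR (stress/rest ratio) and RSF (stress flow as a
    fraction of its maximum), per the definitions quoted above.
    """
    rest = np.asarray(rest, dtype=float)
    stress = np.asarray(stress, dtype=float)
    # Guard against division by zero outside the myocardium
    cfr = np.divide(stress, rest, out=np.zeros_like(stress),
                    where=rest > 0)
    rsf = stress / stress.max()  # relative stress flow
    return cfr, rsf

# Toy 2x2 maps, not real patient data
rest = np.array([[0.5, 1.0], [0.8, 1.0]])
stress = np.array([[1.0, 3.0], [1.6, 2.0]])
cfr, rsf = flow_maps(rest, stress)
```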

    In addition to these established measurements of perfusion in ml/min/g, CFR, and CFC approved by FDA for K202679, HeartSee version 4 adds software for determining left ventricular ejection fraction (EF) by PET-CT using Rb-82, compared with EF by the previously FDA-approved Emory Cardiac Toolbox version V4.1.7786.35544.

    AI/ML Overview

    The provided document is a 510(k) premarket notification decision letter from the FDA regarding the "Optional Screen Displays for HeartSee Cardiac P.E.T. Processing Software - HeartSee version 4". While it describes the device, its indications for use, and a summary of performance data comparing it to predicate devices, it does not contain a detailed study report with specific acceptance criteria and a structured study design to prove those criteria are met.

    The document states that "Data also shows that HeartSee measurement of LVEF by PET-CT using Rb-82 provides an accurate, robust measure of LVEF that is comparable to Emory Toolbox for PET-CT EF approved by FDA," and mentions analysis methods like "Cox multivariate analysis and Kaplan-Meier plots" and "ROC analysis and paired t-tests." However, it does not provide the specific numerical acceptance criteria, the detailed methodology of the study (e.g., sample size, ground truth establishment, expert qualifications, adjudication), or the exact results against the criteria.

    Therefore, I cannot populate all the requested information from the provided text. I will extract what is available and indicate where information is missing.


    Acceptance Criteria and Device Performance (Inferred/Missing)

      • LVEF measurement comparability (vs. Emory Toolbox)
        Acceptance criteria: not stated quantitatively (e.g., no correlation coefficient, mean absolute difference, agreement limits, or non-inferiority margin is given).
        Reported performance: "HeartSee measurement of LVEF by PET-CT using Rb-82 provides an accurate, robust measure of LVEF that is comparable to Emory Toolbox for PET-CT EF approved by FDA." No specific quantitative metrics are provided.
      • Association with mortality/MACE (for CFR and stress perfusion)
        Acceptance criteria: not stated quantitatively (e.g., no hazard ratio, p-value, or ROC AUC threshold is given).
        Reported performance: CFR and, separately, stress perfusion "associate significantly with mortality, major adverse coronary events (MACE) and their significant reduction after revascularization." No specific quantitative metrics are provided.
      • Association with angina/ST depression (for subendo/subepicardial ratio, stress/rest ratio, RSF)
        Acceptance criteria: not stated quantitatively (e.g., no sensitivity, specificity, or AUC threshold is given).
        Reported performance: "the stress subendo/subepicardial ratio, the subendocardial stress/rest ratio and the relative stress flow (RSF) associate with angina or ST depression ≥1mm during stress PET in patients with only mildly reduced CFC and no severely reduced CFC." No specific quantitative metrics are provided.
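For orientation only: the kinds of quantitative agreement metrics the document leaves unstated for the LVEF comparison (bias, limits of agreement, correlation) are typically computed along these lines. All numbers below are invented for illustration and are not data from the 510(k):

```python
import numpy as np

# Hypothetical paired LVEF readings (%) from two software packages;
# NOT data from the submission.
ef_heartsee = np.array([55.0, 62.0, 40.0, 70.0, 48.0])
ef_emory    = np.array([53.0, 64.0, 42.0, 69.0, 50.0])

diff = ef_heartsee - ef_emory
bias = diff.mean()                    # mean paired difference
loa = 1.96 * diff.std(ddof=1)        # Bland-Altman limits of agreement
r = np.corrcoef(ef_heartsee, ef_emory)[0, 1]  # Pearson correlation
```

A submission-grade comparison would report these alongside a prespecified acceptance threshold (e.g., a non-inferiority margin), which is exactly what this summary does not contain.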

    Study Details:

    1. Sample size used for the test set and the data provenance:

      • Sample Size: Not specified in the provided text.
      • Data Provenance: Not specified (e.g., country of origin, retrospective/prospective). The document implies the data was used to demonstrate equivalence and performance but doesn't detail the study cohort.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • Not specified. The document mentions "trained professionals, such as nuclear medicine or nuclear cardiology physicians, or cardiologists with appropriate training and certification" for the intended users, but doesn't detail their role in ground truth establishment for this specific study.
    3. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

      • Not specified.
    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of human reader improvement with vs. without AI assistance:

      • No MRMC study evaluating human reader improvement with AI assistance is explicitly described. The comparison is primarily between the HeartSee software (including its LVEF calculation) and predicate devices/established clinical findings. The software is described as assisting clinical interpretation and quantification, but direct human reader performance improvement is not detailed.
    5. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:

      • The document implies standalone performance was assessed for LVEF comparability ("HeartSee measurement of LVEF... provides an accurate, robust measure... comparable to Emory Toolbox"). The assessments of CFR, stress perfusion, and various ratios ("by Cox multivariate analysis and Kaplan-Meier plots," "By ROC analysis and paired t-tests") also suggest a focus on the algorithm's direct quantitative output rather than human interpretation.
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

      • For LVEF: Comparator with "Emory Toolbox for PET-CT EF approved by FDA" served as a reference. This acts as a form of established clinical measurement/reference standard.
      • For CFR and stress perfusion: Clinical outcomes data (mortality, major adverse coronary events (MACE), revascularization) served as the "ground truth" or clinical endpoint for association analysis.
      • For ratios (subendo/subepicardial, stress/rest, RSF): Clinical findings of "angina or ST depression ≥1mm during stress PET" served as the "ground truth" or clinical endpoint for association.
    7. The sample size for the training set:

      • Not specified. The document refers to "Data also shows..." and "By Cox multivariate analysis..." but does not delineate training vs. test sets or their sizes.
    8. How the ground truth for the training set was established:

      • Not specified. Assuming a similar approach to the test set, it would likely involve established clinical measurements, outcomes data, or clinical findings.

    K Number: K192630
    Device Name: uWS-MI
    Date Cleared: 2020-06-11 (262 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices
    Why did this record match? Reference Devices: K183170, K173897, K180077, K123646

    Intended Use

    uWS-MI is a software solution intended to be used for viewing, manipulation, communication, and storage of medical images. It supports interpretation and evaluation of examinations within healthcare institutions. It has the following additional indications:

    The Oncology application is intended to provide tools to display and analyze the follow-up PET/CT data, with which users can do image registration, lesion segmentation, and statistical analysis.

    The Dynamic Analysis application is intended to display PET data and anatomical data such as CT or MR, and supports lesion segmentation and output of the associated time-activity curve.

    The Brain Analysis (NeuroQ™) application is intended to analyze brain PET scans, give quantitative results of the relative activity of 240 different brain regions, and compare activity against normal brain regions in the AC database or between two studies from the same patient, as well as provide analysis of amyloid uptake levels in the brain.

    The Cardiac Analysis (ECTb™) application is intended to provide cardiac short-axis reconstruction and browsing functions, and performs perfusion analysis, activity analysis, and cardiac function analysis of the cardiac short axis.
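The Dynamic Analysis output described above (a time-activity curve for a segmented lesion) amounts to averaging activity inside the lesion mask at each dynamic frame. A minimal numpy sketch under that assumption; the function name and toy data are illustrative, not from uWS-MI:

```python
import numpy as np

def time_activity_curve(frames, mask):
    """Mean activity inside a lesion mask for each dynamic frame.

    frames: array of shape (n_frames, ...) with one PET volume per frame.
    mask:   boolean array matching a single frame's shape.
    """
    frames = np.asarray(frames, dtype=float)
    return np.array([f[mask].mean() for f in frames])

# Toy 3-frame, 2x2 dynamic series with a 2-voxel "lesion"
frames = np.array([[[1.0, 0.0], [2.0, 0.0]],
                   [[3.0, 0.0], [5.0, 0.0]],
                   [[2.0, 0.0], [4.0, 0.0]]])
mask = np.array([[True, False], [True, False]])
tac = time_activity_curve(frames, mask)
```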

    Device Description

    uWS-MI is a comprehensive software solution designed to process, review and analyze PET, CT or MR images. It can transfer images in DICOM 3.0 format over a medical imaging network or import images from external storage devices such as CD/DVDs or flash drives. These images can be functional or anatomical datasets, such as CT, and can span one or more time-points or include one or more time-frames. Multiple display formats, including MIP and volume rendering, and multiple statistical analyses, including mean, maximum, and minimum over a user-defined region, are supported. A trained, licensed physician can interpret these displayed images as well as the statistics as per standard practice.
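The region statistics mentioned (mean, maximum, minimum over a user-defined region) reduce to masked reductions over the image array. A minimal sketch with a hypothetical helper name and toy values:

```python
import numpy as np

def roi_stats(image, roi_mask):
    """Mean / max / min over a user-defined region.

    image:    2-D or 3-D array of voxel values.
    roi_mask: boolean array of the same shape marking the ROI.
    """
    vals = np.asarray(image, dtype=float)[roi_mask]
    return {"mean": vals.mean(), "max": vals.max(), "min": vals.min()}

image = np.array([[3.0, 9.0], [6.0, 1.0]])
roi = np.array([[True, True], [True, False]])
stats = roi_stats(image, roi)
```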

    This Traditional 510(k) requests modification of the cleared Picture Archiving and Communications System (uWS-MI), which was cleared by FDA via K172998 on April 5, 2018.

    The modifications performed on the uWS-MI (K172998) in this submission are due to changes in the basic application (Image Fusion) and the advanced applications (Oncology and Dynamic Analysis).

    The modification of the Brain Analysis application (NeuroQ™, cleared by FDA via K180077) is that it can compare two studies from the same patient, as well as provide analysis of amyloid uptake levels in the brain.

    AI/ML Overview

    The provided text describes a 510(k) submission for the uWS-MI software solution, which is intended for viewing, manipulating, and storing medical images, with specialized applications for Oncology, Dynamic Analysis, Brain Analysis (NeuroQ™), and Cardiac Analysis (ECTb™). The submission focuses on demonstrating substantial equivalence to a predicate device (uWS-MI, K172998) and several reference devices.

    Acceptance Criteria and Device Performance:

    The document primarily focuses on demonstrating substantial equivalence rather than explicit acceptance criteria with numerical performance targets. However, the performance verification reports mentioned indicate that the device's algorithms were evaluated. The "Remark" column in the comparison tables serves as a qualitative acceptance criterion, stating whether a function is "Same," "New Function which will not impact safety and effectiveness," or "Modified function which will not impact safety and effectiveness."

    Since no specific quantitative acceptance criteria (e.g., minimum sensitivity, specificity, or image quality scores) are listed, the summary below pairs each stated or implied criterion with the qualitative assessment provided for the modified applications, which are the focus of this 510(k) submission.

      • Criterion: New functions will not impact safety and effectiveness.
        Performance: Dynamic Analysis - Percentage threshold segmentation: new function which will not impact safety and effectiveness.
      • Criterion: New functions will not impact safety and effectiveness.
        Performance: Oncology - Percentage threshold lesion segmentation: new function which will not impact safety and effectiveness.
      • Criterion: Modified functions will not impact safety and effectiveness.
        Performance: Oncology - Auto registration: modified function which will not impact safety and effectiveness.
      • Criterion: All other listed functions are "Same" as the predicate/reference devices, implying they meet the same safety and effectiveness standards.
        Performance: All other detailed functions across Dynamic Analysis, Oncology, Brain Analysis, and Cardiac Analysis are labeled "Same," indicating performance equivalent to the predicate/reference devices.
      • Criterion: Core functionalities (image communication, hardware/OS, patient administration, 2D/3D review, filming, image fusion, Inner View, visibility, ROI/VOI, MIP display, compare, report) are "Same" as the predicate.
        Performance: All core functionalities are "Same" as the predicate device.
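As background on the "percentage threshold" segmentation named above: such methods typically keep voxels whose uptake exceeds a fixed fraction of the lesion's peak value. A generic sketch; the 40% default is a common PET convention, not a value stated in this summary:

```python
import numpy as np

def percentage_threshold_segment(volume, fraction=0.40):
    """Segment voxels whose uptake exceeds `fraction` of the peak value.

    Generic percentage-threshold lesion segmentation; the 40% default
    is a common PET convention, not taken from the uWS-MI submission.
    """
    volume = np.asarray(volume, dtype=float)
    return volume >= fraction * volume.max()

vol = np.array([[10.0, 3.0], [5.0, 1.0]])
seg = percentage_threshold_segment(vol)  # threshold = 0.40 * 10.0 = 4.0
```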

    Study Details:

    The document states that no clinical studies were required. The performance evaluation was based on "Performance Verification" reports for specific algorithms.

    1. Sample Size used for the test set and the data provenance:

      • The document does not specify the sample sizes (number of images or cases) used for the test sets in the performance verification reports (e.g., for Lung Nodule, Lymph Nodule, Non-rigid Registration, Percentage Threshold Segmentation, PET-MR Auto Registration).
      • The data provenance (e.g., country of origin, retrospective or prospective nature) is not mentioned.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • This information is not provided in the document.
    3. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

      • This information is not provided in the document, as no clinical studies are mentioned.
    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of human reader improvement with vs. without AI assistance:

      • No MRMC study was done; the document explicitly states that no clinical study was required. The device is primarily a post-processing software tool.
    5. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:

      • The document implies that standalone performance verification for specific algorithms (Lung Nodule and Lymph Nodule Segmentation, Non-rigid Registration, Percentage Threshold Segmentation, PET-MR Auto Registration) was conducted. However, detailed results (metrics, effect sizes, etc.) are not provided in this summary. The document lists a "Performance Evaluation Report" for each of these algorithms, suggesting they were evaluated on their own.
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

      • The document does not specify the type of ground truth used for the performance verification of the algorithms.
    7. The sample size for the training set:

      • This information is not provided in the document.
    8. How the ground truth for the training set was established:

      • This information is not provided in the document.
