510(k) Data Aggregation

Search Results: 3 results found

    K Number: K233176
    Device Name: uOmnispace.MI
    Date Cleared: 2023-12-20 (83 days)
    Product Code: (not listed)
    Regulation Number: 892.2050
    Reference Devices: K173897, K183170

    Intended Use

    uOmnispace.MI is a software solution intended to be used for viewing, processing, evaluating, and analyzing PET, CT, MR, and SPECT images. It supports interpretation and evaluation of examinations within healthcare institutions. It has the following additional indications:

    • The uOmnispace.MI MM Fusion application is intended to provide tools for viewing, analyzing, and reporting PET, CT, MR, and SPECT data, with flexible workflows and layout protocols optimized for dedicated reporting in oncology, neurology, and cardiology.

    • The uOmnispace.MI MM Oncology application is intended to provide tools to display and analyze follow-up PET, CT, and MR data, with which users can perform image registration, lesion segmentation, and statistical analysis.

    • The uOmnispace.MI Dynamic Analysis application is intended to display PET data and anatomical data such as CT or MR, and supports lesion segmentation and output of the associated time-activity curve.

    • The uOmnispace.MI NeuroQ application is intended to analyze brain PET scans, give quantitative results for the relative activity of different brain regions, compare activity against normal brain regions in the AC database or between two studies from the same patient, and provide analysis of amyloid uptake levels in the brain.

    • The uOmnispace.MI Emory Cardiac Toolbox application is intended to provide cardiac short-axis reconstruction and browsing functions, and also performs perfusion analysis, activity analysis, and cardiac function analysis of the cardiac short axis.

    Device Description

    The uOmnispace.MI is post-processing software based on the uOmnispace platform (cleared in K230039) for viewing, manipulating, evaluating, and analyzing PET, CT, MR, and SPECT images. It can run alone or together with other advanced, commercially cleared applications.

    This proposed device contains the following applications:

    • uOmnispace.MI MM Fusion
    • uOmnispace.MI MM Oncology
    • uOmnispace.MI Dynamic Analysis

    Additionally, uOmnispace.MI offers users the option to run the following third-party applications:

    • uOmnispace.MI NeuroQ
    • uOmnispace.MI Emory Cardiac Toolbox
    AI/ML Overview

    The acceptance criteria and the supporting study described in the document are summarized below.

    Acceptance Criteria and Study Details for uOmnispace.MI

    1. Table of Acceptance Criteria and Reported Device Performance

    For Spine Labeling Algorithm:

    Acceptance Criteria: Average score higher than 4 points
    Reported Device Performance (Average Score): 4.951 points
    Meets Criteria? Yes

    For Rib Labeling Algorithm:

    Acceptance Criteria: Average score higher than 4 points
    Reported Device Performance (Average Score): 5 points
    Meets Criteria? Yes

    Note: The document also states that an average score higher than 4 points is equivalent to a mean identification rate for spine labeling greater than 92% (correctly labeled vertebrae ≥ 23 of 25; 23/25 = 92%, above the cited literature rate of 83.3%) and a mean identification rate for rib labeling of about 91.7% (correctly labeled ribs ≥ 22 of 24; 22/24 ≈ 91.7%, above the cited literature rate of 91.5%). This ties the acceptance criteria to identification rates established in the literature, ensuring clinical relevance.

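    To make the score-to-rate arithmetic concrete, here is a minimal sketch using the counts quoted above (the helper name is illustrative, not from the document):

    ```python
    # Convert correctly-labeled counts into identification rates and check
    # them against the literature rates quoted in the note above.
    def identification_rate(correct: int, total: int) -> float:
        return correct / total

    spine_rate = identification_rate(23, 25)  # 0.92   -> 92%
    rib_rate = identification_rate(22, 24)    # 0.9167 -> ~91.7%
    print(f"spine: {spine_rate:.1%}, rib: {rib_rate:.1%}")

    assert spine_rate > 0.833  # exceeds the cited spine literature rate
    assert rib_rate > 0.915    # exceeds the cited rib literature rate
    ```
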
    2. Sample Size Used for the Test Set and Data Provenance

    For Spine Labeling Algorithm:

    • Sample Size: 286 CT scans, corresponding to 267 unique patients.
    • Data Provenance:
      • Countries of Origin: Chinese data (106 samples), European data (160 samples), and United States data (20 samples).
      • Retrospective/Prospective: Not explicitly stated, but large datasets collected for algorithm validation are typically retrospective.

    For Rib Labeling Algorithm:

    • Sample Size: 160 CT scans, corresponding to 156 unique patients.
    • Data Provenance:
      • Countries of Origin: Chinese data (80 samples) and United States data (80 samples).
      • Retrospective/Prospective: Not explicitly stated, but likely retrospective.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Number of Experts: At least one "senior clinical specialist" is explicitly mentioned for final review and modification. "Well-trained annotators" performed the initial annotations. The exact number of annotators is not specified.
    • Qualifications of Experts:
      • Annotators: Described as "well-trained annotators." Specific professional qualifications (e.g., radiologist, technician) or years of experience are not provided.
      • Reviewer: "A senior clinical specialist." Specific professional qualifications or years of experience are not provided.

    4. Adjudication Method for the Test Set

    The adjudication method involved a multi-step process:

    1. Initial annotations were done by "well-trained annotators" using an interactive tool.
    2. For rib labeling, annotators "check each other's annotation."
    3. A "senior clinical specialist" performed a final check and modification to ensure correctness.

    This indicates a multi-annotator review with a senior specialist as the final adjudicator. It is not explicitly a 2+1 or 3+1 method as such, but rather a hierarchical review process.

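    As an illustration only (none of this tooling is described in the submission), the hierarchical review process above could be modeled roughly as follows; all type and function names are hypothetical:

    ```python
    # Illustrative model of the hierarchical review described above:
    # annotators cross-check each other, then a senior specialist
    # adjudicates disagreements. All names here are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Annotation:
        case_id: str
        labels: dict  # structure name -> assigned label index

    def disagreements(a: Annotation, b: Annotation) -> list:
        """Structures where the two annotators assigned different labels."""
        return [k for k in a.labels if a.labels[k] != b.labels.get(k)]

    def adjudicate(a: Annotation, b: Annotation, specialist: dict) -> Annotation:
        """The senior specialist's decision resolves each disagreement."""
        final = dict(a.labels)
        for k in disagreements(a, b):
            final[k] = specialist[k]
        return Annotation(a.case_id, final)

    # Toy example: two annotators disagree on one rib label.
    r1 = Annotation("case-1", {"rib_1": 1, "rib_2": 2})
    r2 = Annotation("case-1", {"rib_1": 1, "rib_2": 3})
    print(adjudicate(r1, r2, {"rib_2": 2}).labels)  # {'rib_1': 1, 'rib_2': 2}
    ```
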
    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

    No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not explicitly described in the provided text. The performance verification focused on the standalone algorithm's accuracy against a ground truth, rather than comparing human reader performance with and without AI assistance.

    6. If a Standalone (i.e., Algorithm Only Without Human-in-the-Loop Performance) Was Done

    Yes, a standalone (algorithm only) performance study was done. The entire performance verification section describes how the deep learning-based algorithms for spine and rib labeling were tested against ground truth annotations to assess their accuracy in an automated fashion. The reported scores explicitly reflect the algorithm's performance.

    7. The Type of Ground Truth Used

    The ground truth for both spine and rib labeling was established through expert consensus based on manual annotations, followed by review and modification by a senior clinical specialist. It is not based on pathology or outcomes data.

    8. The Sample Size for the Training Set

    The document explicitly states: "The training data used for the training of the spine labeling algorithm is independent of the data used to test the algorithm." and "The training data used for the training of the rib labeling algorithm is independent of the data used to test the algorithm."

    However, the actual sample size for the training set is not provided in the given text.

    9. How the Ground Truth for the Training Set Was Established

    The document states that the training data and test data were independent. While it describes how the ground truth for the test set was established (well-trained annotators + senior clinical specialist review), it does not explicitly describe the methodology for establishing the ground truth for the training set. It can be inferred that a similar expert annotation process would have been used, but details are not provided.

    K Number: K230039
    Device Name: uOmnispace
    Date Cleared: 2023-07-20 (195 days)
    Product Code: (not listed)
    Regulation Number: 892.2050
    Reference Devices: K183170

    Intended Use

    uOmnispace is a software solution intended to be used for viewing, manipulation, communication and storage of medical images. It allows processing and filming of multimodality DICOM images.

    It can be used as a stand-alone device or together with a variety of cleared and unmodified software options, and it also supports plugging in multi-vendor applications that meet its interface requirements.

    uOmnispace is intended to be used by trained professionals, including but not limited to physicians and medical technicians.

    The system is not intended for displaying digital mammography images for diagnosis in the U.S.

    Device Description

    uOmnispace is a software-only medical device; the hardware itself is not considered part of the medical device and is therefore not within the scope of this product.

    uOmnispace provides 2D and 3D viewing, annotation and measurement tools, manual and automatic segmentation tools (the rib extraction algorithm is based on machine learning), and film and report features to cover the radiological tasks of reading images and reporting. uOmnispace supports DICOM-formatted images and objects; CT, MRI, PET, and DR multimodality data are supported.

    uOmnispace is a software medical device that allows multiple users to remotely access clinical applications from compatible computers on a network. The system allows processing and filming of multimodality DICOM images. This software is for use with off-the-shelf PC computer technology that meets defined minimum specifications.

    uOmnispace communicates with imaging systems of different modalities and with the hospital's medical information systems using the DICOM 3.0 standard.

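    For context, DICOM exchange of this kind is commonly handled in Python with pydicom; a minimal sketch follows (the library choice and file name are assumptions, not from the submission):

    ```python
    # Minimal sketch: read a DICOM file and inspect routing-relevant tags.
    # Assumes pydicom is installed; "ct_slice.dcm" is a hypothetical file.
    import pydicom

    ds = pydicom.dcmread("ct_slice.dcm")
    print(ds.Modality)      # e.g. "CT"
    print(ds.SOPClassUID)   # identifies the DICOM object type
    print(ds.PatientID)     # used for matching studies across systems
    ```
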
    The system is not intended for displaying digital mammography images for diagnosis in the U.S.

    AI/ML Overview

    The acceptance criteria and the study demonstrating that the uOmnispace medical image post-processing software meets them are described below.

    1. Table of Acceptance Criteria and Reported Device Performance:

    Validation Type: Average Dice
    Acceptance Criteria: The average Dice score on the testing data is higher than 0.8
    Reported Device Performance: The average Dice score on the testing data set is 0.855

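    For reference, the Dice coefficient compares a predicted segmentation mask with ground truth as 2|A∩B| / (|A| + |B|); here is a minimal NumPy sketch (illustrative only, not the vendor's implementation):

    ```python
    # Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|).
    import numpy as np

    def dice(pred: np.ndarray, truth: np.ndarray) -> float:
        pred = pred.astype(bool)
        truth = truth.astype(bool)
        denom = pred.sum() + truth.sum()
        if denom == 0:
            return 1.0  # both masks empty: treat as perfect agreement
        return 2.0 * np.logical_and(pred, truth).sum() / denom

    # Toy example: two partially overlapping 2x2 masks.
    a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True
    b = np.zeros((4, 4), dtype=bool); b[1:3, 2:4] = True
    print(dice(a, b))  # 0.5
    ```
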
    2. Sample size used for the test set and data provenance:

    • Sample Size: 60 chest CTs.
    • Data Provenance: The data was collected during product development. The document specifies neither the country of origin nor whether collection was retrospective or prospective, though the "product development" context often implies retrospective analysis of previously collected data.

    3. Number of experts used to establish the ground truth for the test set and their qualifications:

    • Number of Experts: Not explicitly stated as a specific number, but it involved multiple annotators and a "senior clinical specialist".
    • Qualifications: "Annotators" and a "senior clinical specialist". Specific details like years of experience or medical certifications (e.g., radiologist) are not provided for the individual annotators or the senior specialist.

    4. Adjudication method for the test set:

    • Adjudication Method: A multi-step process:
      1. An initial rib mask was generated using a threshold-based interactive tool (see the sketch after this list).
      2. Annotators refined the first-round annotation.
      3. Annotators checked each other's annotations.
      4. A senior clinical specialist checked and modified annotations to ensure ground truth correctness.

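    A threshold-based initial bone mask of the kind described in step 1 might look like the following; the actual tool and its threshold are not described in the document, so the ~200 HU value is an assumption:

    ```python
    # Generate a rough initial rib/bone mask from a CT volume by thresholding
    # Hounsfield units; ~200 HU is a common starting point for bone, but the
    # actual tool's threshold is not stated in the submission.
    import numpy as np

    def initial_bone_mask(ct_hu: np.ndarray, threshold_hu: float = 200.0) -> np.ndarray:
        """Return a boolean mask of voxels above the bone threshold."""
        return ct_hu > threshold_hu

    # Toy volume: background at -1000 HU (air) with one "bony" voxel.
    ct = np.full((3, 3, 3), -1000.0)
    ct[1, 1, 1] = 400.0
    print(initial_bone_mask(ct).sum())  # 1
    ```
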
    5. If a multi reader multi case (MRMC) comparative effectiveness study was done:

    • No, a multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly done to measure human reader improvement with AI assistance. The study focused on validating the standalone performance of the ML-based rib segmentation algorithm against ground truth.

    6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:

    • Yes, a standalone performance study of the ML-based rib segmentation algorithm was done. The performance was evaluated by comparing its output to established ground truth.

    7. The type of ground truth used:

    • Type of Ground Truth: Expert consensus, established through a multi-step annotation and refinement process involving annotators and a senior clinical specialist.

    8. The sample size for the training set:

    • The document states that the training data was independent of the testing data but does not specify the sample size for the training set.

    9. How the ground truth for the training set was established:

    • The document does not explicitly describe how the ground truth for the training set was established, only that the training data and testing data were independent.

    K Number: K192630
    Device Name: uWS-MI
    Date Cleared: 2020-06-11 (262 days)
    Product Code: (not listed)
    Regulation Number: 892.2050
    Reference Devices: K183170, K173897, K180077, K123646

    Intended Use

    uWS-MI is a software solution intended to be used for viewing, manipulation, communication, and storage of medical images. It supports interpretation and evaluation of examinations within healthcare institutions. It has the following additional indications:

    The Oncology application is intended to provide tools to display and analyze follow-up PET/CT data, with which users can perform image registration, lesion segmentation, and statistical analysis.

    The Dynamic Analysis application is intended to display PET data and anatomical data such as CT or MR, and supports lesion segmentation and output of the associated time-activity curve.

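    As background, a time-activity curve is simply the mean tracer activity inside a lesion ROI at each dynamic frame; here is a minimal NumPy sketch with made-up array shapes (not the vendor's code):

    ```python
    # Compute a time-activity curve (TAC): mean tracer activity inside an
    # ROI mask for each time frame of a dynamic PET series.
    import numpy as np

    def time_activity_curve(frames: np.ndarray, roi: np.ndarray) -> np.ndarray:
        """frames: (T, Z, Y, X) dynamic PET; roi: (Z, Y, X) boolean mask."""
        return np.array([frame[roi].mean() for frame in frames])

    # Toy example: 5 frames of a 4x4x4 volume with a 2-voxel ROI.
    rng = np.random.default_rng(0)
    frames = rng.random((5, 4, 4, 4))
    roi = np.zeros((4, 4, 4), dtype=bool)
    roi[2, 2, 2] = roi[2, 2, 3] = True
    print(time_activity_curve(frames, roi))  # one mean value per frame
    ```
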
    The Brain Analysis (NeuroQ™) application is intended to analyze brain PET scans, give quantitative results for the relative activity of 240 different brain regions, compare activity against normal brain regions in the AC database or between two studies from the same patient, and provide analysis of amyloid uptake levels in the brain.

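    Normal-database comparison of this kind is typically done by normalizing regional uptake to a reference region and computing per-region z-scores; the following sketch works under that assumption (NeuroQ's actual method is not detailed here):

    ```python
    # Compare a patient's regional PET activity against a normal database:
    # normalize to a reference region, then z-score per region. This mirrors
    # the general normal-database approach; NeuroQ's exact method is not
    # described in the document.
    import numpy as np

    def regional_zscores(patient_mean: np.ndarray,
                         normal_mean: np.ndarray,
                         normal_std: np.ndarray,
                         ref_region: int) -> np.ndarray:
        """All arrays are length-N (one entry per brain region)."""
        rel = patient_mean / patient_mean[ref_region]  # relative activity
        return (rel - normal_mean) / normal_std

    # Toy example with 4 regions, region 0 as reference.
    patient = np.array([1.0, 0.9, 0.8, 1.1])
    norm_mu = np.array([1.0, 0.95, 0.85, 1.05])
    norm_sd = np.array([0.05, 0.05, 0.05, 0.05])
    print(regional_zscores(patient, norm_mu, norm_sd, ref_region=0))
    # -> [ 0. -1. -1.  1.]
    ```
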
    The Cardiac Analysis (ECTb™) application is intended to provide cardiac short-axis reconstruction and browsing functions, and also performs perfusion analysis, activity analysis, and cardiac function analysis of the cardiac short axis.

    Device Description

    uWS-MI is a comprehensive software solution designed to process, review, and analyze PET, CT, or MR images. It can transfer images in DICOM 3.0 format over a medical imaging network or import images from external storage devices such as CD/DVDs or flash drives. These images can be functional or anatomical datasets, such as CT, acquired at one or more time points and containing one or more time frames. Multiple display formats, including MIP and volume rendering, and multiple statistical analyses, including mean, maximum, and minimum over a user-defined region, are supported. A trained, licensed physician can interpret these displayed images as well as the statistics as per standard practice.

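    The MIP display and the per-ROI statistics mentioned above reduce to simple array operations; here is a NumPy sketch (illustrative only):

    ```python
    # MIP (maximum intensity projection) and ROI statistics over a volume.
    import numpy as np

    rng = np.random.default_rng(1)
    volume = rng.random((8, 64, 64))          # (Z, Y, X) image volume
    roi = np.zeros_like(volume, dtype=bool)   # user-defined region
    roi[3:5, 20:30, 20:30] = True

    mip = volume.max(axis=0)                  # project along Z for display
    stats = {
        "mean": volume[roi].mean(),
        "max": volume[roi].max(),
        "min": volume[roi].min(),
    }
    print(mip.shape, stats)                   # (64, 64) and the ROI statistics
    ```
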
    This Traditional 510(k) requests modifications to the previously cleared picture archiving and communications system (uWS-MI), which was cleared by FDA via K172998 on April 5, 2018.

    The modifications performed on uWS-MI (K172998) in this submission are due to changes in the basic application (Image Fusion) and the advanced applications (Oncology and Dynamic Analysis).

    The modification of the Brain Analysis application (NeuroQ™, cleared by FDA via K180077) is that it can now make comparisons between two studies from the same patient, as well as provide analysis of amyloid uptake levels in the brain.

    AI/ML Overview

    The provided text describes a 510(k) submission for the uWS-MI software solution, which is intended for viewing, manipulating, and storing medical images, with specialized applications for Oncology, Dynamic Analysis, Brain Analysis (NeuroQ™), and Cardiac Analysis (ECTb™). The submission focuses on demonstrating substantial equivalence to a predicate device (uWS-MI, K172998) and several reference devices.

    Acceptance Criteria and Device Performance:

    The document primarily focuses on demonstrating substantial equivalence rather than explicit acceptance criteria with numerical performance targets. However, the performance verification reports mentioned indicate that the device's algorithms were evaluated. The "Remark" column in the comparison tables serves as a qualitative acceptance criterion, stating whether a function is "Same," "New Function which will not impact safety and effectiveness," or "Modified function which will not impact safety and effectiveness."

    Since no specific quantitative acceptance criteria (e.g., minimum sensitivity, specificity, or image quality scores) are listed, the summary below pairs each stated or implied criterion with the qualitative assessment provided for the modified applications, which are the focus of this 510(k) submission.

    Acceptance Criteria (Stated or Implied) and Reported Device Performance (Qualitative):

    • Criterion: New functions will not impact safety and effectiveness.
      Performance: Dynamic Analysis, Percentage Threshold Segmentation: new function which will not impact safety and effectiveness.
    • Criterion: New functions will not impact safety and effectiveness.
      Performance: Oncology, Percentage Threshold Lesion Segmentation: new function which will not impact safety and effectiveness.
    • Criterion: Modified functions will not impact safety and effectiveness.
      Performance: Oncology, Auto Registration: modified function which will not impact safety and effectiveness.
    • Criterion: All other listed functions are "Same" as predicate/reference devices, implying they meet the same safety and effectiveness standards.
      Performance: All other detailed functions across Dynamic Analysis, Oncology, Brain Analysis, and Cardiac Analysis are labeled "Same," indicating performance equivalent to the predicate/reference devices.
    • Criterion: Core functionalities (image communication, hardware/OS, patient administration, 2D/3D review, filming, image fusion, inner view, visibility, ROI/VOI, MIP display, compare, report) are "Same" as the predicate.
      Performance: All core functionalities are "Same" as the predicate device.

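    Percentage-threshold segmentation, as named above, conventionally keeps voxels above a fixed fraction of the maximum uptake within a user ROI; here is a minimal sketch under that common convention (the device's exact definition and default fraction are not given):

    ```python
    # Percentage-threshold lesion segmentation: keep voxels whose uptake is
    # at least a given fraction of the maximum within a user ROI. A 40%
    # threshold is a common PET convention; the device's default is not stated.
    import numpy as np

    def percentage_threshold_mask(uptake: np.ndarray, roi: np.ndarray,
                                  fraction: float = 0.40) -> np.ndarray:
        """uptake: image volume; roi: boolean mask; returns the lesion mask."""
        level = fraction * uptake[roi].max()
        return roi & (uptake >= level)

    # Toy example.
    img = np.array([[0.1, 0.5], [1.0, 0.3]])
    roi = np.ones_like(img, dtype=bool)
    print(percentage_threshold_mask(img, roi))  # True where uptake >= 0.4
    ```
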
    Study Details:

    The document states that no clinical studies were required. The performance evaluation was based on "Performance Verification" reports for specific algorithms.

    1. Sample Size used for the test set and the data provenance:

      • The document does not specify the sample sizes (number of images or cases) used for the test sets in the performance verification reports (e.g., for Lung Nodule, Lymph Nodule, Non-rigid Registration, Percentage Threshold Segmentation, PET-MR Auto Registration).
      • The data provenance (e.g., country of origin, retrospective or prospective nature) is not mentioned.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • This information is not provided in the document.
    3. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

      • This information is not provided in the document, as no clinical studies are mentioned.
    4. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance:

      • No MRMC study was done, as explicitly stated that "No clinical study was required." The device is primarily a post-processing software tool.
    5. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:

      • The document implies that standalone performance verification for specific algorithms (Lung Nodule and Lymph Nodule Segmentation, Non-rigid Registration, Percentage Threshold Segmentation, PET-MR Auto Registration) was conducted; a generic registration sketch follows this list. However, detailed results (metrics, effect sizes, etc.) are not provided in this summary. The titles "Performance Evaluation Report for ..." for these algorithms suggest they were evaluated on their own.
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

      • The document does not specify the type of ground truth used for the performance verification of the algorithms.
    7. The sample size for the training set:

      • This information is not provided in the document.
    8. How the ground truth for the training set was established:

      • This information is not provided in the document.
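
    Although the submission gives no implementation detail for these reports, multimodal auto registration such as PET-MR alignment is conventionally driven by a mutual-information metric. Here is a minimal SimpleITK sketch under that assumption; the library choice, file names, and parameters are illustrative, not from the document:

    ```python
    # Rigid PET-MR registration sketch using Mattes mutual information.
    # "mr.nii" and "pet.nii" are hypothetical inputs; parameters are generic.
    import SimpleITK as sitk

    fixed = sitk.ReadImage("mr.nii", sitk.sitkFloat32)
    moving = sitk.ReadImage("pet.nii", sitk.sitkFloat32)

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200)
    reg.SetInitialTransform(sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY))
    reg.SetInterpolator(sitk.sitkLinear)

    transform = reg.Execute(fixed, moving)  # optimize the rigid transform
    aligned = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
    ```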