
510(k) Data Aggregation

    K Number
    K093703
    Device Name
    3DI
    Manufacturer
    Date Cleared
    2010-01-19 (49 days)
    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    Intended Use

    3Di is a PACS workstation software package for handling multimodality (CT, XA, MR, PET, SPECT & Ultrasound) images that use the DICOM protocol. It includes volume rendering, multi-planar reconstruction (MPR), and viewing of the inner and outer surfaces of organs as well as within their walls.

    3Di is intended for use as an interactive tool that assists radiologists, cardiologists, and other specialists in reaching their own diagnoses by providing tools for communication, clinic networking, web serving, image viewing, image manipulation, 2D/3D image visualization, image processing, reporting, and archiving. This product is not intended for use with, or for diagnostic interpretation of, mammography images.

    The 3Di indications for use are the processing of cardiac CT angiography studies (including coronary analysis and cardiac functional assessment) and CT colonoscopy.
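
    To make the MPR capability mentioned above concrete, here is a minimal sketch of extracting orthogonal reformatted planes from a stacked DICOM series. It assumes the pydicom and numpy packages and a hypothetical directory of single-frame CT slices; it illustrates the general technique only, not 3Di's implementation.

```python
# Sketch only: load a CT series and pull axial/coronal/sagittal MPR planes.
# The directory name and slice indices are hypothetical; spacing correction
# and error handling are omitted for brevity.
import glob
import numpy as np
import pydicom

def load_volume(series_dir):
    """Read every DICOM slice in a directory and stack them by z-position."""
    slices = [pydicom.dcmread(path) for path in glob.glob(f"{series_dir}/*.dcm")]
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    return np.stack([s.pixel_array for s in slices])  # shape: (z, y, x)

def mpr_planes(volume, z, y, x):
    """Return the axial, coronal, and sagittal planes through voxel (z, y, x)."""
    return volume[z, :, :], volume[:, y, :], volume[:, :, x]

vol = load_volume("ct_series")  # hypothetical folder of .dcm files
axial, coronal, sagittal = mpr_planes(vol, *[d // 2 for d in vol.shape])
```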

    Device Description

    3Di is a PACS device that enables users to access medical images over a network and to review them with 3Di's image visualization tools. It provides the following functions: web server, patient browser, PACS capabilities, multi-modality viewing, and CT cardiac and colonoscopy clinical applications.
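
    For readers unfamiliar with how a PACS client retrieves images over a network, the sketch below shows a generic DICOM study query (C-FIND) using the pynetdicom library. The host, port, AE titles, and patient ID are placeholders; none of these details come from the 3Di submission.

```python
# Sketch of a generic DICOM study query (C-FIND) against a PACS node.
# All connection details and identifiers below are hypothetical.
from pydicom.dataset import Dataset
from pynetdicom import AE
from pynetdicom.sop_class import PatientRootQueryRetrieveInformationModelFind

ae = AE(ae_title="REVIEW_WS")
ae.add_requested_context(PatientRootQueryRetrieveInformationModelFind)

query = Dataset()
query.QueryRetrieveLevel = "STUDY"
query.PatientID = "12345"      # placeholder patient ID
query.StudyInstanceUID = ""    # empty value = return this attribute

assoc = ae.associate("pacs.example.org", 104, ae_title="PACS")
if assoc.is_established:
    for status, identifier in assoc.send_c_find(
        query, PatientRootQueryRetrieveInformationModelFind
    ):
        # 0xFF00 / 0xFF01 are the DICOM "pending" status codes
        if status and status.Status in (0xFF00, 0xFF01) and identifier:
            print(identifier.StudyInstanceUID)
    assoc.release()
```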

    AI/ML Overview

    Here's an analysis of the provided 510(k) summary for the 3Di device, addressing the questions below.

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided 510(k) summary does not explicitly state acceptance criteria in a quantitative or pass/fail threshold manner. Instead, it describes a comparative study against a predicate device.

    Acceptance Criteria (Implicit) | Reported Device Performance
    General functionality of image reformatting (various modalities) | "results of the two devices are very similar"
    Reliability of orientation annotations displayed | "results of the two devices are very similar"
    Correctness of measurements | "results of the two devices are very similar"
    Image quality | "results of the two devices are very similar"
    Cardiac analysis graphs and results | "results of the two devices are very similar"
    Colon analysis results | "results of the two devices are very similar"
    Overall safety and effectiveness (compared to predicate) | "substantial equivalent in terms of safety and effectiveness to the predicate devices."

    2. Sample Size Used for the Test Set and Data Provenance

    The 510(k) summary does not specify the sample size used for the comparative performance study. It also does not mention the data provenance (e.g., country of origin, retrospective or prospective nature of the data).

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications

    The 510(k) summary does not provide information on the number of experts or their qualifications. The study described is a comparison of the 3Di device against a predicate device (Philips Brilliance) rather than establishing ground truth against expert consensus.

    4. Adjudication Method for the Test Set

    The 510(k) summary does not describe an adjudication method. The study appears to be a direct comparison of the 3Di's output with the predicate device's output across various functions.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    The provided text does not indicate that a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was done. The comparison is between two devices, not human readers with and without AI assistance. Therefore, there is no effect size reported for human readers improving with AI.

    6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study

    Yes, a standalone performance assessment was effectively done. The summary states: "Its performance has been validated by comparison to the performance of the Philips Brilliance predicate device." This implies an evaluation of the algorithm's output directly against the predicate device's output, without human intervention in the loop during this specific performance validation.

    7. Type of Ground Truth Used

    The "ground truth" for this study was the performance and output of the legally marketed predicate device (Philips Brilliance). The comparison aimed to demonstrate "substantial equivalence" to this established device, rather than to a clinical ground truth like pathology or patient outcomes.

    8. Sample Size for the Training Set

    The 510(k) summary does not give a training-set sample size. Given the 2010 submission date and the description of the device as a PACS workstation with visualization tools, "training sets" in the modern machine-learning sense may not have been documented or emphasized the way they would be for today's deep learning-based AI devices. The device centers on visualization and manipulation tools, which more likely rely on established graphics algorithms than on data-driven machine-learning models.

    9. How the Ground Truth for the Training Set Was Established

    The 510(k) summary does not provide information on how ground truth was established for any training set. If internal validation or verification was performed during development, this information is not detailed in the provided text.
