
510(k) Data Aggregation

    K Number
    K223556
    Device Name
    DeepCatch
    Manufacturer
    Date Cleared
    2023-06-16 (200 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    DeepCatch analyzes CT images and automatically segments anatomical structures (skin, bone, muscle, visceral fat, subcutaneous fat, internal organs, and central nervous system). The volume and proportion of each structure are then calculated and provided along with the corresponding 3D model.

    DeepCatch makes it possible to obtain accurate volume and proportion values for each anatomical structure through secondary use of CT images acquired for various purposes in the medical field. The input data type is whole-body CT. This device is intended to be used in conjunction with professional clinical judgement; the physician is responsible for inspecting and confirming all results.

    Device Description

    DeepCatch is medical image processing software that provides 3D reconstruction and visualization of regions of interest (ROIs), advanced image quality improvement, automatic segmentation of specific targets, texture analysis, and more. Its 3D analysis of the skeletal muscle and adipose tissue distributed in the body can serve as base data in a variety of fields.
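    To make the description concrete, here is a minimal, hypothetical Python sketch of the post-segmentation step described above: deriving per-structure volume and proportion from a labeled CT volume. The label values, voxel spacing, and synthetic label map are illustrative assumptions, not the manufacturer's actual implementation.

```python
import numpy as np

# Assumed label scheme for the structures named in the intended use; the
# actual label values used by DeepCatch are not given in the summary.
LABELS = {1: "skin", 2: "bone", 3: "muscle", 4: "visceral fat",
          5: "subcutaneous fat", 6: "internal organs", 7: "central nervous system"}

rng = np.random.default_rng(0)
seg = rng.integers(0, 8, size=(128, 128, 64))   # stand-in multi-label segmentation
spacing_mm = (0.8, 0.8, 1.0)                    # assumed voxel spacing (x, y, z) in mm
voxel_ml = np.prod(spacing_mm) / 1000.0         # one voxel's volume, mm^3 -> mL

body_voxels = (seg > 0).sum()                   # all segmented (non-background) voxels
for label, name in LABELS.items():
    n = (seg == label).sum()
    volume_ml = n * voxel_ml                    # structure volume in mL
    proportion = n / body_voxels                # structure's share of segmented volume
    print(f"{name:>22}: {volume_ml:9.1f} mL ({proportion:.1%})")
```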

    AI/ML Overview

    Here is an analysis of the acceptance criteria and the study demonstrating that the device meets them, based on the provided text.

    DeepCatch Device Performance Study Analysis

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria are implicitly defined by the null and alternative hypotheses of the performance tests; the reported device performance is the outcome of those tests. A sketch of how the underlying metrics are computed follows the table.

    Test Type & Metric | Acceptance Criteria (Alternative Hypothesis) | Reported Device Performance

    Internal Datasets (n=100)
    DSC (GT vs. DeepCatch segmentation results) | Group DSC mean ≥ 0.900 | DSC means ≥ 90% (met)

    External Datasets (n=580)
    DSC (GT vs. DeepCatch segmentation results) | Group DSC mean ≥ 0.900 | DSC mean > 90% in all areas (met)
    Volume (difference between GT and DeepCatch measurements) | Mean within-group difference < ±10% (0.10) | Mean volume difference < 10% (met)
    Area (difference between GT and DeepCatch measurements) | Mean within-group difference < ±10% (0.10) | Mean area difference < 10% (met)
    Ratio (difference between GT and DeepCatch measurements) | Mean within-group difference < ±1% (0.01) | Mean ratio difference < 1% (met)
    Body Circumference (difference between GT and DeepCatch measurements) | Mean within-group difference < ±5% (0.05) | Mean body-circumference difference < 5% (met)

    US-based Datasets (n=167)
    DSC (all anatomical structures) | Group DSC mean ≥ 0.900 | DSC mean > 90% (met)
    Volume | Mean within-group difference < ±10% | Volume difference < 10% (met)
    Area | Mean within-group difference < ±10% | Area difference < 1% (met; the reported figure would satisfy the tighter ratio criterion as well)
    Abdominal Circumference (error measurement) | Mean within-group difference < ±5% | GT-to-abdominal-circumference error < 5% (met)

    Comparative Performance Tests
    DSC (DeepCatch vs. MEDIP PRO) | DeepCatch DSC not inferior to MEDIP PRO | Not inferior, with better performance for muscle segmentation (met)
    Volume, Ratio, Area, Body Circumference (DeepCatch vs. Synapse 3D) | Performance not inferior to Synapse 3D on these metrics | No difference from Synapse 3D, with better performance in AVF Area (AW) and SF Area (AW) (met)
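    As referenced above, here is a minimal, hypothetical sketch of how the two metric families in the table, DSC and within-group relative difference, can be computed from binary masks and checked against the stated thresholds. The synthetic masks, 2% perturbation rate, and single-case "group" are illustrative assumptions; this is not the actual test harness.

```python
import numpy as np

def dice_coefficient(gt, pred):
    """Dice Similarity Coefficient between two binary masks."""
    gt, pred = gt.astype(bool), pred.astype(bool)
    denom = gt.sum() + pred.sum()
    return 2.0 * np.logical_and(gt, pred).sum() / denom if denom else 1.0

def relative_difference(gt_value, pred_value):
    """Signed relative difference vs. ground truth, e.g. 0.05 == 5%."""
    return (pred_value - gt_value) / gt_value

# Synthetic example: a 3D mask standing in for GT, perturbed to imitate model error.
rng = np.random.default_rng(0)
gt = rng.random((64, 64, 64)) > 0.5
pred = gt.copy()
flip = rng.random(gt.shape) < 0.02       # flip ~2% of voxels
pred[flip] = ~pred[flip]

dsc = dice_coefficient(gt, pred)
vol_diff = relative_difference(gt.sum(), pred.sum())
print(f"DSC = {dsc:.4f}, volume diff = {vol_diff:+.2%}")

# Group-level acceptance, mirroring the criteria in the table; the real tests
# pool n=100/580/167 cases rather than the single case shown here.
dsc_scores, vol_diffs = [dsc], [vol_diff]
assert np.mean(dsc_scores) >= 0.900, "mean DSC criterion not met"
assert abs(np.mean(vol_diffs)) < 0.10, "mean volume-difference criterion not met"
```

    Note that the criteria apply to the group mean, so a single passing case says little by itself.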

    2. Sample Sizes and Data Provenance

    • Internal Datasets: n=100. Provenance is not explicitly stated; context implies these were used internally for development and initial testing.
    • External Datasets: n=580.
      • Country of Origin: Korea (562 scans), France (18 scans).
      • Retrospective/Prospective: Not explicitly stated, but typically external datasets for validation are retrospective.
    • US-based Datasets: n=167.
      • Country of Origin: US-based locations (East River Medical Imaging).
      • Retrospective/Prospective: Not explicitly stated, but likely retrospective.
    • Comparative Performance Test (MEDIP PRO): n=100.
      • Country of Origin: US (scanners from Siemens Healthineers).
      • Retrospective/Prospective: Not explicitly stated, but likely retrospective.
    • Comparative Performance Test (Synapse 3D): n=100.
      • Country of Origin: US (scanners from Siemens Healthineers).
      • Retrospective/Prospective: Not explicitly stated, but likely retrospective.

    3. Number of Experts and Qualifications for Ground Truth

    • The text states: "Ground truthing for each image was created by a licensed physician" for the comparative performance tests (MEDIP PRO and Synapse 3D comparison sets).
    • For the internal, external, and US-based datasets, the text refers to "GT" (Ground Truth) but does not explicitly state how many experts established it or what their qualifications were; it only states that DeepCatch's results were compared against this GT. Expert-created GT is strongly implied, as is standard practice for medical image segmentation and measurement.

    4. Adjudication Method for the Test Set

    • The document does not explicitly describe an adjudication method (e.g., 2+1, 3+1 consensus) for establishing the ground truth on any of the test sets. It only mentions that "Ground truthing for each image was created by a licensed physician." This might imply a single expert per case, or a process not detailed in the summary.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • No, a "Multi-Reader Multi-Case (MRMC) comparative effectiveness study" involving human readers improving with AI vs. without AI assistance was not conducted or described.
    • The comparative performance tests focused on comparing the algorithm's performance directly against predicate devices (other algorithms), not on human reader performance with and without AI assistance.

    6. Standalone (Algorithm Only) Performance Study

    • Yes, standalone performance studies were done.
      • The "Performance test using data set from Korea (562) and France (18)" and the "Performance test using data set from US-based locations with DeepCatch" are examples of standalone algorithm performance evaluations, where the DeepCatch algorithm's segmentation and measurement results were compared directly against the established Ground Truth (GT).
      • The "Comparative Performance Test" sections also evaluate the algorithm's standalone performance against other algorithms (predicate devices), rather than human-in-the-loop performance.

    7. Type of Ground Truth Used

    • The ground truth used was expert consensus / expert-labeled data.
      • The text explicitly states for the comparative performance tests: "Ground truthing for each image was created by a licensed physician."
      • For the other performance tests, "GT" is referenced, implying a similar expert-derived ground truth. There's no mention of pathology or outcomes data being used as ground truth for segmentation or volume measurements.

    8. Sample Size for the Training Set

    • The document states: "All data used images independent of the images used to learn the algorithm."
    • However, the specific sample size for the training set is not provided in this summary.

    9. How the Ground Truth for the Training Set Was Established

    • The document implies that the training data had its own ground truth ("images used to learn the algorithm"), but it does not describe how the ground truth for the training set was established. This information is typically found in a more detailed technical report.