Search Results

Found 2 results

510(k) Data Aggregation

    K Number
    K221868
    Device Name
    QOCA® image Smart CXR Image Processing System
    Date Cleared
    2023-01-27 (214 days)
    Product Code
    Regulation Number
    892.2080
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices: K190362
    AI/ML · SaMD · IVD (In Vitro Diagnostic) · Therapeutic · Diagnostic · Is PCCP Authorized · Third Party · Expedited Review
    Intended Use

    QOCA® image Smart CXR Image Processing System is software as a medical device (SaMD) that uses artificial intelligence/deep learning technology to analyze chest X-ray images of adult patients and identify cases with suspected pneumothorax. This product shall be used in conjunction with the Picture Archiving and Communication System (PACS) at the hospital. It automatically analyzes DICOM files pushed from PACS and makes a notation next to cases with suspected pneumothorax. This product is only used to remind radiologists to prioritize reviewing cases with suspected pneumothorax. Its results cannot be used as a substitute for a diagnosis by a radiologist, nor can they be used on a stand-alone basis for clinical decision-making.

    Device Description

    This product, QOCA® image Smart CXR Image Processing System, is a web-based medical device using a locked artificial intelligence algorithm. It provides features such as case sorting and image viewing, and supports multiple simultaneous users.

    After connecting to the Picture Archiving and Communication System (PACS) at the hospital, this product automatically analyzes posteroanterior (PA) or anteroposterior (AP) erect-view chest X-ray images pushed from PACS. Once a case with suspected pneumothorax is identified, a notation is made next to the case in question, so the radiologist can prioritize reviewing such cases in the Viewer Page. However, this product does not indicate the specific regions or anomalies on the image.
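
    To make this PACS-push workflow concrete, here is a minimal sketch of the intake step, assuming pydicom for DICOM parsing; the model call, threshold, and function names are hypothetical placeholders rather than QOCA's actual implementation.

    ```python
    # Minimal sketch of a PACS-push intake step (hypothetical, not QOCA's code).
    import pydicom

    def score_pneumothorax(pixels) -> float:
        """Placeholder for the locked AI model; returns a suspicion score in [0, 1]."""
        raise NotImplementedError("stand-in for the proprietary model")

    def triage_study(dicom_path: str) -> bool:
        """Return True if the study should receive a suspected-pneumothorax notation."""
        ds = pydicom.dcmread(dicom_path)
        # Per the device description, only PA or AP erect chest views are analyzed.
        view = getattr(ds, "ViewPosition", "")  # DICOM attribute (0018,5101)
        if view not in ("PA", "AP"):
            return False
        prob = score_pneumothorax(ds.pixel_array)  # hypothetical inference call
        return prob >= 0.5                         # hypothetical operating point
    ```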

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study details for the QOCA® image Smart CXR Image Processing System:


    1. Acceptance Criteria and Reported Device Performance

    | Metric | Acceptance Criteria (Predicate Device K190362 Performance) | Reported Device Performance (QOCA® image Smart CXR) | Overall Performance |
    |---|---|---|---|
    | AUC | 98.3% (95% CI: [97.40%, 99.02%]) | 97.8% (95% CI: [97.0%, 98.5%]) | Met |
    | Sensitivity | 93.15% (95% CI: [87.76%, 96.67%]) | 92.5% (95% CI: [90.5%, 94.2%]) | Met |
    | Specificity | 92.99% (95% CI: [90.19%, 95.19%]) | 94.0% (95% CI: [93.9%, 94.6%]) | Met |
    | Average Performance Time | 22.1 seconds | 4.94 seconds | Met |

    Note: The reported device performance is an overall performance across both the MIMIC and Taiwanese datasets. Individual performance for each dataset is also provided in the document.
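
    As background on how such point estimates and 95% confidence intervals are commonly produced, the sketch below computes sensitivity and specificity with Wilson score intervals from raw confusion-matrix counts. The counts are invented for illustration and are not from the submission.

    ```python
    # Sensitivity/specificity with Wilson 95% confidence intervals.
    # The counts below are illustrative only, not from the 510(k) summary.
    from math import sqrt

    Z = 1.96  # two-sided 95% normal quantile

    def wilson_ci(successes: int, n: int) -> tuple[float, float]:
        """Wilson score interval for a binomial proportion."""
        p = successes / n
        denom = 1 + Z**2 / n
        center = (p + Z**2 / (2 * n)) / denom
        half = Z * sqrt(p * (1 - p) / n + Z**2 / (4 * n**2)) / denom
        return center - half, center + half

    tp, fn, tn, fp = 310, 26, 2600, 169  # hypothetical counts vs. ground truth
    print(f"sensitivity = {tp / (tp + fn):.4f}, 95% CI = {wilson_ci(tp, tp + fn)}")
    print(f"specificity = {tn / (tn + fp):.4f}, 95% CI = {wilson_ci(tn, tn + fp)}")
    ```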


    2. Sample Size Used for the Test Set and Data Provenance

    The device's performance was assessed using two separate pivotal studies/datasets:

    • MIMIC Dataset:

      • Sample Size: 3,105 radiographs (336 positive pneumothorax cases, 2,769 negative pneumothorax cases).
      • Data Provenance: US patient population (MIMIC dataset), from a medical institution independent of those that supplied the training data.
    • Taiwanese Dataset:

      • Sample Size: 2,947 radiographs (472 positive pneumothorax cases, 2,475 negative pneumothorax cases).
      • Data Provenance: a Taiwanese hospital, independent of the institutions that supplied the training data.

    Overall Test Set: 6,052 radiographs (3,105 from MIMIC + 2,947 from Taiwan). Both datasets were retrospective.
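
    Pooling the two test sets is straightforward arithmetic; a quick sanity check using the counts reported above:

    ```python
    # Pooled test-set composition, from the counts reported above.
    mimic = {"positive": 336, "negative": 2769}   # 3,105 radiographs
    taiwan = {"positive": 472, "negative": 2475}  # 2,947 radiographs

    overall = {k: mimic[k] + taiwan[k] for k in mimic}
    total = sum(overall.values())
    assert total == 6052
    print(f"pooled pneumothorax prevalence: {overall['positive'] / total:.1%}")  # 13.4%
    ```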


    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    For both the MIMIC dataset and the Taiwanese dataset:

    • Number of Experts: Three radiologists.
    • Qualifications: The document states "truthed by three radiologists" without specifying their years of experience or sub-specialty.

    4. Adjudication Method for the Test Set

    The document does not explicitly state the adjudication method (e.g., 2+1, 3+1). It only mentions that the datasets were "truthed by three radiologists," implying a consensus-based approach, but the specific process for resolving disagreements is not detailed.


    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    There is no mention of a Multi-Reader Multi-Case (MRMC) comparative effectiveness study assessing how much human readers improve with versus without AI assistance. The study focused on the standalone performance of the AI algorithm.


    6. Standalone Performance Study

    Yes, a standalone performance study was done. The document explicitly states: "Bases [sic] on the results of the standalone performance assessment, this product achieves, identification accuracy of AUC > 95% with Sensitivity > 91% and Specificity > 92%." The performance metrics provided in section 1 (AUC, sensitivity, specificity) reflect the algorithm's performance without a human in the loop.


    7. Type of Ground Truth Used

    The ground truth for the test sets (MIMIC and Taiwanese) was established by "three radiologists," which indicates expert consensus diagnosis.


    8. Sample Size for the Training Set

    The document states: "The training dataset is used to train the model, and divided into three sets: the training set, the validation set, and the test set." However, the specific sample size for the entire training dataset (including training, validation, and its own internal test set used during development) is not provided in the summary. It only indicates that it was "collected from two hospitals, and additional data from the US National Institutes of Health (NIH) was added to the test set to improve its US patient population representativeness during training."


    9. How the Ground Truth for the Training Set Was Established

    The document states that the "model training dataset was collected from two hospitals, and additional data from the US National Institutes of Health (NIH) was added to the test set." While it implies the data was labeled for training, it does not explicitly describe how the ground truth for the training set was established (e.g., whether it was also by expert radiologists, pathology, etc.).


    K Number
    K191556
    Device Name
    Red Dot
    Date Cleared
    2020-02-28 (261 days)
    Product Code
    Regulation Number
    892.2080
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices: K190362
    AI/ML · SaMD · IVD (In Vitro Diagnostic) · Therapeutic · Diagnostic · Is PCCP Authorized · Third Party · Expedited Review
    Intended Use

    The red dot™ software platform is a software workflow tool designed to aid the clinical assessment of adult Chest X-Ray cases with features suggestive of Pneumothorax in the medical care environment. red dot™ analyzes cases using an artificial intelligence algorithm to identify suspected findings. It makes case-level output available to a PACS/workstation for worklist prioritization or triage. red dot™ is not intended to direct attention to specific portions of an image or to anomalies other than Pneumothorax. Its results are not intended to be used on a stand-alone basis for clinical decision-making nor is it intended to rule out Pneumothorax or otherwise preclude clinical assessment of X-Ray cases.

    Device Description

    Behold.ai red dot™ is a radiological computer-assisted triage and notification software system. The software automatically analyzes PA/AP chest x-rays and alerts the PACS/RIS workstation once images with features suggestive of pneumothorax are identified.

    Through the use of the red dot™ device, a radiologist is able to review studies with features suggestive of pneumothorax earlier than in standard of care workflow.

    In summary, the red dot™ device provides a passive notification through the PACS/workstation to the radiologists, indicating the existence of a case that may potentially benefit from prioritization. It does not output an image, and therefore does not mark, highlight, or direct users' attention to a specific location on the original chest X-ray.

    The device's aim is solely to aid in the prioritization and triage of radiological medical images.

    The main components of the red dot™ device are described below.

    1. Image input, validation and anonymization
      After a chest x-ray has been performed, a copy of the study is received and processed by the red dot™ device. Following receipt of a study, the validation feature ensures the image is valid (i.e. has readable pixels) and the anonymization feature removes or anonymizes Personally Identifiable Information (PII) such as Patient Name, Patient Birthdate, and Patient Address (see the sketch after this list).

    2. red dot™ Image Analysis Algorithm
      This component of the device is primarily comprised of the visual recognition algorithm that is responsible for detecting images with potential abnormalities. Once a study has been validated, the algorithm analyzes the frontal chest x-ray for detection of suspected findings suggestive of pneumothorax.

    3. PACS Integration Feature
      The results of a successful study analysis are provided to an integration engine in a standard JSON message containing sufficient information to allow the integration engine to notify the PACS/workstation for prioritization through the worklist interface (a sketch of this message, together with the anonymization step, follows this list).
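
    A minimal sketch of steps 1 and 3, assuming pydicom and the three PII attributes named in step 1; the JSON field names are invented for illustration, since the document only says the message carries sufficient information for worklist prioritization.

    ```python
    # Sketch of steps 1 and 3. pydicom is assumed; the JSON field names are
    # hypothetical -- the 510(k) summary does not publish the message schema.
    import json
    import pydicom

    PII_ATTRIBUTES = ("PatientName", "PatientBirthDate", "PatientAddress")  # step 1

    def anonymize(ds: pydicom.Dataset) -> pydicom.Dataset:
        """Blank the PII attributes named in the device description."""
        for keyword in PII_ATTRIBUTES:
            if keyword in ds:
                setattr(ds, keyword, "")
        return ds

    def notification_payload(ds: pydicom.Dataset, suspected: bool) -> str:
        """Case-level JSON message for the integration engine (structure assumed)."""
        return json.dumps({
            "study_uid": str(ds.StudyInstanceUID),
            "finding": "pneumothorax",
            "suspected": suspected,  # drives worklist prioritization/triage
        })
    ```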

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the red dot™ device meets them, based on the provided document:

    Acceptance Criteria and Device Performance

    1. Table of Acceptance Criteria and Reported Device Performance

    | Metric | Acceptance Criteria (Implied) | Reported Device Performance |
    |---|---|---|
    | AUROC | > 0.95 (stated as a "prespecified performance goal") | 0.975 (95% CI: [0.966, 0.984]) |
    | Sensitivity | Lower bound of 95% CI > 80% | 94.65% (95% CI: [91.46%, 96.91%]) |
    | Specificity | Lower bound of 95% CI > 80% | 87.95% (95% CI: [85.04%, 90.46%]) |
    | Accuracy | Not an explicit acceptance bound; reported for completeness | 90.20% (95% CI: [88.06%, 92.08%]) |
    | Processing Time (red dot™) | Substantially equivalent to predicate (Zebra HealthPNX: 22.1 seconds) | 13.8 seconds (mean, 95% CI: [13.0, 14.5]) |
    | Notification Transit Time | Part of the overall timing comparison with the predicate | 15.5 seconds (average across 3 live customer sites) |
    | Total red dot™ Performance Time | Substantially equivalent to predicate (22.1 seconds) | 29.3 seconds |

    Note on Acceptance Criteria: The document explicitly states that the AUROC was above 0.95 and "all lower bounds of the 95% confidence intervals exceeded 80% and achieved the prespecified performance goals in the study" for the classification metrics (AUROC, Sensitivity, Specificity). For the timing, the acceptance criterion is defined as being "substantially equivalent" to the predicate.
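
    The stated acceptance logic reduces to checking that each 95% CI lower bound clears its prespecified goal. A trivial sketch with values transcribed from the table above (the AUROC goal is applied to the CI lower bound here, a conservative reading of "above 0.95"):

    ```python
    # Acceptance check: 95% CI lower bound must exceed the prespecified goal.
    results = {
        "AUROC":       (0.966, 0.95),  # (CI lower bound, goal)
        "Sensitivity": (0.9146, 0.80),
        "Specificity": (0.8504, 0.80),
    }
    for metric, (ci_lower, goal) in results.items():
        verdict = "met" if ci_lower > goal else "NOT met"
        print(f"{metric}: lower 95% CI bound {ci_lower} > goal {goal}? {verdict}")
    ```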

    Study Details Proving Device Meets Acceptance Criteria

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size: 888 CXR images.
    • Data Provenance: Retrospective, anonymized study.
      • Country of Origin: United States (n=738 cases from 2 clinical sites) and United Kingdom (n=150 cases from 2 clinical sites).

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    • Number of Experts: At least two ABR-certified radiologists reviewed each CXR image. A third reader was involved in the event of disagreement/discrepancy.
    • Qualifications of Experts: All readers were "ABR certified radiologists" and "received training related to imaging findings defining each condition per protocol prior to the review."

    4. Adjudication Method for the Test Set

    • Adjudication Method: "The ground truth was determined by two readers with a third reader in the event of disagreement/discrepancy." Ground truth for a condition was defined as agreement between two readers. This is a 2+1 adjudication method.
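
    A minimal sketch of this 2+1 rule as described (the function and variable names are mine):

    ```python
    # 2+1 adjudication: two concordant primary reads define ground truth;
    # a third reader resolves any discrepancy.
    from typing import Optional

    def adjudicate(read_a: bool, read_b: bool, read_c: Optional[bool] = None) -> bool:
        if read_a == read_b:
            return read_a  # primary readers agree: their call is the ground truth
        if read_c is None:
            raise ValueError("discrepant primary reads require a third reader")
        return read_c      # third reader breaks the tie

    assert adjudicate(True, True) is True
    assert adjudicate(True, False, read_c=False) is False
    ```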

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

    • MRMC Study: No, a multi-reader multi-case (MRMC) comparative effectiveness study was not reported as having been done to directly compare human readers with and without AI assistance. The study described is a standalone performance validation of the AI algorithm against a consensus ground truth.

    6. If a Standalone Performance Study Was Done

    • Standalone Study: Yes, a standalone study (algorithm only, without a human in the loop) was explicitly done. The reported metrics (AUROC, Accuracy, Sensitivity, Specificity) describe the red dot™ algorithm's performance in detecting pneumothorax.

    7. The Type of Ground Truth Used

    • Type of Ground Truth: Expert consensus. Specifically, "agreement between two readers" from ABR-certified radiologists, with a third radiologist for discrepancy resolution.

    8. The Sample Size for the Training Set

    • Training Set Sample Size: The document does not specify the sample size for the training set. It only describes the test set.

    9. How the Ground Truth for the Training Set Was Established

    • Training Set Ground Truth Establishment: The document does not provide details on how the ground truth for the training set was established. It only focuses on the data used for the performance evaluation (test set).