Search Results

Found 3 results

510(k) Data Aggregation

    K Number
    K223491
    Date Cleared
    2023-05-25

    (185 days)

    Product Code
    Regulation Number
    892.2090
    Reference & Predicate Devices
    Predicate For
    N/A
    Why did this record match?
    Reference Devices :

    K183182

    Intended Use

    Critical Care Suite with Pneumothorax Detection AI Algorithm is a computer-aided triage, notification, and diagnostic device that analyzes frontal chest X-ray images for the presence of a pneumothorax. Critical Care Suite identifies and highlights images with a pneumothorax to enable case prioritization or triage and to assist as a concurrent reading aid during interpretation of radiographs.

    Intended users include qualified independently licensed healthcare professionals (HCPs) trained to independently assess the presence of pneumothoraxes in radiographic images and radiologists.

    Critical Care Suite should not be used in lieu of full patient evaluation or solely relied upon to make or confirm a diagnosis. It is not intended to replace the review of the X-ray image by a qualified physician. Critical Care Suite is indicated for adults and transitional adolescents (18 to < 22 years old but treated like adults).

    Device Description

    Critical Care Suite is a suite of AI algorithms for the automated image analysis of frontal chest X-rays acquired on a digital x-ray system for the presence of critical findings. Critical Care Suite with Pneumothorax Detection AI Algorithm is indicated for adults and transitional adolescents (18 to <22 years old but treated like adults) and is intended to be used by licensed qualified healthcare professionals (HCPs) trained to independently assess the presence of pneumothoraxes in radiographic images and radiologists. Critical Care Suite is a software module that can be deployed on several computing platforms such as PACS, On Premise, On Cloud, or X-ray Imaging Systems.

    In today's clinical workflow, hospitals are overburdened by large volumes of orders and long turnaround times for radiologist reports. Critical Care Suite with the Pneumothorax Detection AI Algorithm enables effective prioritization and assists in the detection/diagnosis of pneumothoraxes for radiologists and HCPs who have been trained to independently assess the presence of pneumothoraxes in radiographic images. It performs this task by flagging images with a suspicious finding and providing a localization overlay of the suspected pneumothorax, as well as a graphical representation of the algorithm's confidence in the resultant finding. These outputs can be displayed wherever the reviewing physician normally conducts their reads per their standard of care, including PACS, On Premise, On Cloud, and Digital Projection Radiographic Systems.

    AI/ML Overview

    Here's a summary of the acceptance criteria and study details for the GE Medical Systems, LLC Critical Care Suite with Pneumothorax Detection AI Algorithm, based on the provided document:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document primarily focuses on reporting the device's performance against its own established criteria rather than explicitly listing pre-defined "acceptance criteria" tables. However, we can infer the acceptance criteria from the reported performance goals.

    | Metric | Acceptance Criteria (Implied from Performance) | Reported Device Performance (Standalone) | Reported Device Performance (MRMC with AI Assistance vs. Non-Aided) |
    |---|---|---|---|
    | Pneumothorax Detection (Standalone Algorithm) | Detect pneumothorax in frontal chest X-ray images with high diagnostic accuracy. | AUC of 96.1% (94.9%, 97.2%) | Not Applicable |
    | Sensitivity (Overall) | High sensitivity for overall pneumothorax detection. | 84.3% (80.6%, 88.0%) | Not Applicable |
    | Specificity (Overall) | High specificity for overall pneumothorax detection. | 93.2% (90.8%, 95.6%) | Not Applicable |
    | Sensitivity (Large Pneumothorax) | High sensitivity for large pneumothoraxes. | 96.3% (93.1%, 99.2%) | Not Applicable |
    | Sensitivity (Small Pneumothorax) | High sensitivity for small pneumothoraxes. | 75.0% (69.2%, 80.8%) | Not Applicable |
    | Pneumothorax Localization (Standalone Algorithm) | Localize suspected pneumothoraxes effectively. | Partially localized 98.1% (96.6%, 99.6%) of actual pneumothoraxes within an image (apical, lateral, inferior regions). | Not Applicable |
    | Localization: full agreement between regions | | 67.8% (62.7%, 73.0%) | Not Applicable |
    | Localization: overlap with true pneumothorax area | | DICE Similarity Coefficient of 0.705 (0.683, 0.724) | Not Applicable |
    | Reader Performance Improvement (MRMC Study) | Improve reader performance for pneumothorax detection. | Mean AUC improved by 14.5% (7.0%, 22.0%; p=.002), from 76.8% (non-aided) to 91.3% (aided). | 14.5% improvement in mean AUC |
    | Reader Sensitivity Improvement | Increase reader sensitivity. | Reader sensitivity increased by 16.3% (13.1%, 19.5%; p<.001), from 67.4% (non-aided) to 83.7% (aided). | 16.3% improvement in sensitivity |
    | Reader Specificity Improvement | Increase reader specificity. | Reader specificity increased by 12.4% (9.6%, 15.1%; p<.001), from 76.6% (non-aided) to 89.0% (aided). | 12.4% improvement in specificity |
    | Reader Performance Improvement (Large Pneumothorax) | Improve reader performance for large pneumothoraxes. | Mean AUC improved by 10.5% (3.2%, 17.8%; p=0.009); sensitivity improved by 13.4% (10.0%, 16.9%; p<.001). | 10.5% improvement in mean AUC and 13.4% in sensitivity (large) |
    | Reader Performance Improvement (Small Pneumothorax) | Improve reader performance for small pneumothoraxes. | Mean AUC improved by 17.6% (9.3%, 25.9%; p<0.001); sensitivity improved by 18.7% (13.8%, 23.6%; p<.001). | 17.6% improvement in mean AUC and 18.7% in sensitivity (small) |
    | Improvement Across User Groups | Demonstrate improvement across different clinical user types. | All physicians (Rad, IM, ER) improved 10.4% (2.8%, 17.9%; p=0.015); nurse practitioners improved 24.1% (1.2%, 47.0%; p=0.045); non-radiologists (ER, IM, NP) improved 17.5% (9.6%, 25.4%; p<0.001). | Varied improvements across user groups as noted. |
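The sensitivity and specificity figures above are proportions with 95% confidence intervals. A minimal sketch of how such estimates are derived from a 2x2 confusion matrix, using the normal (Wald) approximation; the counts below are purely illustrative, since the summary does not report the study's underlying 2x2 table:

```python
import math

def sens_spec_ci(tp, fn, tn, fp, z=1.96):
    """Point estimates and normal-approximation 95% CIs for
    sensitivity and specificity from a 2x2 confusion matrix."""
    def prop_ci(successes, total):
        p = successes / total
        half = z * math.sqrt(p * (1 - p) / total)
        return p, max(0.0, p - half), min(1.0, p + half)
    sens = prop_ci(tp, tp + fn)  # true-positive rate among diseased cases
    spec = prop_ci(tn, tn + fp)  # true-negative rate among normal cases
    return {"sensitivity": sens, "specificity": spec}

# Illustrative counts only -- chosen to land near the reported 84.3% / 93.2%.
result = sens_spec_ci(tp=270, fn=50, tn=451, fp=33)
print(result["sensitivity"])  # point estimate with (lower, upper) bounds
```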

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: 804 images
    • Data Provenance: The test set included images from two North American sites.
    • Retrospective/Prospective: The document does not explicitly state if the test set was retrospective or prospective. However, given it's a "final validation ground truth dataset" that was not used in training, it's highly likely to be retrospective.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    • Number of Experts: Three blinded radiologists.
    • Qualifications of Experts: Radiologists (no specific experience level mentioned, but "blinded radiologists" implies qualified professionals).

    4. Adjudication Method for the Test Set

    • Adjudication Method: The ground truth was established by "three blinded radiologists." This implies a consensus method, likely majority rule or a process where discrepancies were resolved to arrive at a single ground truth label. The specific phrase "consensus" or "adjudication" is not used, but the description points to this approach.
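If the three blinded reads were combined by majority rule, as inferred above, the labeling step could be sketched as follows; this is one plausible reading of the filing, which does not spell out the actual rule:

```python
from collections import Counter

def majority_label(reader_labels):
    """Combine independent blinded reads into a single ground-truth
    label by simple majority vote; raise when no majority exists so
    the case can be sent to adjudication."""
    label, n = Counter(reader_labels).most_common(1)[0]
    if n <= len(reader_labels) // 2:
        raise ValueError("no majority -- case needs adjudication")
    return label

print(majority_label(["pneumothorax", "pneumothorax", "normal"]))  # -> pneumothorax
```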

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs Without AI Assistance

    • MRMC Study Done: Yes, a multi-reader multi-case study was conducted.
    • Effect Size of Human Reader Improvement with AI vs. Without AI Assistance:
      • Mean AUC: Improved by 14.5% (from 76.8% non-aided to 91.3% aided; p=0.002).
      • Sensitivity: Increased by 16.3% (from 67.4% non-aided to 83.7% aided; p<0.001).
      • Specificity: Increased by 12.4% (from 76.6% non-aided to 89.0% aided; p<0.001).
      • Large Pneumothorax (Mean AUC): Improved by 10.5% (p=0.009).
      • Large Pneumothorax (Sensitivity): Improved by 13.4% (p<0.001).
      • Small Pneumothorax (Mean AUC): Improved by 17.6% (p<0.001).
      • Small Pneumothorax (Sensitivity): Improved by 18.7% (p<0.001).
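The MRMC endpoints above are differences in empirical ROC AUC. The per-reader AUC underlying such comparisons can be computed as a Mann-Whitney rank statistic; the reader scores below are toy values, not study data:

```python
def empirical_auc(scores_pos, scores_neg):
    """Empirical ROC AUC as the Mann-Whitney statistic: the probability
    that a positive case is scored above a negative one (ties count half)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Toy per-reader confidence scores, non-aided vs. AI-aided:
unaided = empirical_auc([0.6, 0.7, 0.4], [0.5, 0.3, 0.2])
aided = empirical_auc([0.8, 0.9, 0.7], [0.5, 0.3, 0.2])
print(aided - unaided)  # per-reader effect size, averaged across readers in MRMC
```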

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    • Standalone Study Done: Yes, the "standalone performance of the Pneumothorax Detection AI Algorithm was tested against this dataset."

    7. The Type of Ground Truth Used

    • Type of Ground Truth: Expert consensus by three blinded radiologists.

    8. The Sample Size for the Training Set

    • Sample Size for Training Set: The algorithm was developed using "over 12,000 images." This number includes images used for training, verification, and validation, but the specific breakdown for the training set alone is not provided. It's implied that the majority would be for training.

    9. How the Ground Truth for the Training Set Was Established

    • Ground Truth for Training Set: The document states that the "Pneumothorax Detection AI Algorithm was developed using over 12,000 images from six sources, including the National Institute of Health and sites within the United States, Canada, and India." It then clarifies this data was "segregated into training, verification, and validation datasets." While it doesn't explicitly detail the methodology for establishing ground truth for the training set, it's standard practice that such large datasets for deep learning and medical imaging are meticulously annotated by medical experts (e.g., radiologists) or derived from existing clinical reports and pathology, which would then be reviewed or confirmed by experts. Given the rigor for the validation set, it's reasonable to infer a similar expert-driven process for the training data, although the specifics are not provided in this excerpt.

    K Number
    K211733
    Manufacturer
    Date Cleared
    2021-11-10

    (159 days)

    Product Code
    Regulation Number
    892.2080
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices :

    K192320, K183182

    Intended Use

    Lunit INSIGHT CXR Triage is a radiological computer-assisted triage and notification software that analyzes adult chest X-ray images for the presence of pre-specified suspected critical findings (pleural effusion and/or pneumothorax). Lunit INSIGHT CXR Triage uses an artificial intelligence algorithm to analyze images for features suggestive of critical findings and provides case-level output available in the PACS/workstation for worklist prioritization or triage.

    As a passive notification for prioritization-only software tool within standard of care workflow, Lunit INSIGHT CXR Triage does not send a proactive alert directly to the appropriately trained medical specialists. Lunit INSIGHT CXR Triage is not intended to direct attention to specific portions of an image. Its results are not intended to be used on a standalone basis for clinical decision-making.

    Device Description

    Lunit INSIGHT CXR Triage is a radiological computer-assisted prioritization software that utilizes AI-based image analysis algorithms to identify pre-specified critical findings (pleural effusion and/or pneumothorax) on frontal chest X-ray images and flag the images in the PACS/workstation to enable prioritized review by the appropriately trained medical specialists who are qualified to interpret chest radiographs. The software does not alter the order or remove cases from the reading queue.

    Chest radiographs are automatically received from the user's image system (e.g. Picture Archiving and Communication System (PACS)) or other radiological imaging equipment (e.g. X-ray systems) and processed by Lunit INSIGHT CXR Triage for analysis. Following receipt of chest radiographs, the software device de-identifies a copy of each chest radiograph in DICOM format (.dcm) and automatically analyzes each image to identify features suggestive of pleural effusion and/or pneumothorax. Based on the analysis result, the software notifies the PACS/workstation of the presence of the critical findings, indicating either "flag" or "(blank)". This allows the appropriately trained medical specialists to group suspicious exams together that may potentially benefit from prioritization. Chest radiographs without an identified anomaly are placed in the worklist for routine review, which is the current standard of care. Lunit INSIGHT CXR Triage can flag more than one critical finding per radiograph, and the user may select the option to turn notification of critical findings (pleural effusion and pneumothorax) on and off.
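The case-level "flag"/"(blank)" output described above is essentially a threshold on per-finding scores. A minimal sketch of that notification pattern; the threshold, score semantics, and field names are hypothetical, since the device's internals are not public:

```python
def triage_flags(case_scores, threshold=0.5):
    """Case-level triage output sketch: a case is marked 'flag' when any
    enabled finding's score crosses the threshold, else left blank.
    Illustrative only -- the real device's thresholds are not disclosed."""
    flags = {}
    for case_id, scores in case_scores.items():
        # scores maps finding name -> model score, e.g. {"pneumothorax": 0.12}
        flags[case_id] = "flag" if any(s >= threshold for s in scores.values()) else ""
    return flags

print(triage_flags({
    "case1": {"pleural_effusion": 0.91, "pneumothorax": 0.12},
    "case2": {"pleural_effusion": 0.05, "pneumothorax": 0.07},
}))  # -> {'case1': 'flag', 'case2': ''}
```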

    When deployed on other radiological imaging equipment, Lunit INSIGHT CXR Triage automatically runs after image acquisition. It prioritizes and displays the analysis result through the worklist interface of PACS/workstation. Moreover, the analysis result can also be provided in the form of DICOM files containing information on the presence of suspicious radiologic findings. In parallel, the algorithms produce an on-device notification indicating which cases were prioritized by Lunit INSIGHT CXR Triage in PACS. The on-device notification does not provide any diagnostic information and it is not intended to inform any clinical decision, prioritization, or action to the technologist.

    Lunit INSIGHT CXR Triage works in parallel to and in conjunction with the standard of care workflow; therefore, the user is able to review studies containing critical findings earlier than others. As a passive notification, prioritization-only software tool within the standard of care workflow, the software does not send a proactive alert directly to the appropriately trained medical specialists who are qualified to interpret chest radiographs. Lunit INSIGHT CXR Triage is not intended to direct attention to specific portions or anomalies of an image, and it should not be used on a stand-alone basis for clinical decision-making.

    In parallel, an on-device technologist notification is generated 15 minutes after interpretation by the user, indicating which cases were prioritized by Lunit INSIGHT CXR Triage in PACS. The technologist notification is contextual and does not provide any diagnostic information. The on-device technologist notification is not intended to inform any clinical decision, prioritization, or action.

    AI/ML Overview

    Here's a summary of the acceptance criteria and study details for the Lunit INSIGHT CXR Triage device:

    1. Table of Acceptance Criteria and Reported Device Performance

    | Feature/Metric | Acceptance Criteria | Reported Device Performance (Clinical Study) |
    |---|---|---|
    | Pleural Effusion Detection | | |
    | ROC AUC | > 0.95 | 0.9686 (95% CI: 0.9547 - 0.9824) |
    | Sensitivity (lower bound) | > 0.85 | 89.86% (95% CI: 86.72 - 93.00) |
    | Specificity (lower bound) | > 0.85 | 93.48% (95% CI: 91.06 - 95.91) |
    | Pneumothorax Detection | | |
    | ROC AUC | > 0.95 | 0.9630 (95% CI: 0.9521 - 0.9739) |
    | Sensitivity (lower bound) | > 0.85 | 88.92% (95% CI: 85.60 - 92.24) |
    | Specificity (lower bound) | > 0.85 | 90.51% (95% CI: 88.18 - 92.83) |
    | Device performance time (average) | Comparable to cleared commercial products (HealthCXR (Zebra, K192320) and red dot™ (Behold.AI, K161556)); the primary predicate (HealthCXR) had a ~22-second delay for image transfer, computation, and results transfer, while Critical Care Suite averaged 42 seconds for acquisition, annotation, processing, and transfer, so comparably shorter times would be favorable. | Pleural effusion: 20.76 seconds (95% CI: 20.23 - 21.28); Pneumothorax: 20.45 seconds (95% CI: 19.99 - 20.92) |
    | Nonclinical testing (standalone performance) | ROC AUC > 0.95, Sensitivity > 0.80, Specificity > 0.80 | Pleural effusion: ROC AUC 0.9864 (95% CI: 0.9815 - 0.9913), Sensitivity 94.29%, Specificity 95.72%; Pneumothorax: ROC AUC 0.9973 (95% CI: 0.9955 - 0.9992), Sensitivity 96.08%, Specificity 99.14% |
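The table above states explicit numeric thresholds, so the pass/fail check is mechanical. A small sketch of that check, reading the AUC criterion against the point estimate and the sensitivity/specificity criteria against the lower CI bound (one plausible interpretation of "Lower Bound"; the filing's exact statistical procedure is not detailed here):

```python
def meets_criteria(reported, thresholds):
    """Compare reported metrics, each given as (point, ci_low, ci_high),
    against stated acceptance thresholds. AUC is judged on the point
    estimate; sensitivity/specificity on the 95% CI lower bound."""
    return {
        "auc": reported["auc"][0] > thresholds["auc"],
        "sensitivity": reported["sensitivity"][1] > thresholds["sensitivity"],
        "specificity": reported["specificity"][1] > thresholds["specificity"],
    }

# Reported pleural-effusion results from the table above (as fractions):
pleural = {
    "auc": (0.9686, 0.9547, 0.9824),
    "sensitivity": (0.8986, 0.8672, 0.9300),
    "specificity": (0.9348, 0.9106, 0.9591),
}
print(meets_criteria(pleural, {"auc": 0.95, "sensitivity": 0.85, "specificity": 0.85}))
# -> {'auc': True, 'sensitivity': True, 'specificity': True}
```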

    2. Sample Size Used for the Test Set and Data Provenance

    • Clinical Studies (Pivotal Studies):

      • Total Sample Size: 1,708 anonymized chest radiographs.
        • 754 cases for pleural effusion.
        • 954 cases for pneumothorax.
      • Data Provenance:
        • NIH chest X-ray dataset (represents the US population).
        • India dataset collected from multiple institutions in India (6 sites for pleural effusion and 3 sites for pneumothorax).
      • Type: Retrospective (implied by "anonymized chest radiographs collected from datasets").
    • Nonclinical Internal Validation Test (for standalone performance validation):

      • Total Sample Size: 1,385 images.
      • Data Provenance: Not explicitly stated, but performed "internally" by the company. It's likely a mix of internal data or data acquired for internal development.
      • Type: Retrospective (implied by "images were collected").

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Clinical Studies (Pivotal Studies): Not explicitly stated how many experts established the ground truth for the clinical pivotal studies.
    • Nonclinical Internal Validation Test: "highly experienced board-certified radiologists" were used to classify images into positive and negative groups. The specific number of radiologists is not mentioned.

    4. Adjudication Method for the Test Set

    • The document does not explicitly describe an adjudication method (e.g., 2+1, 3+1). For the Nonclinical Internal Validation Test, it states "classified into positive and negative groups by highly experienced board-certified radiologists," which might imply a consensus or single-reader approach, but an explicit adjudication method is not detailed.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

    • No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done or reported in this document. The studies primarily focused on the standalone performance of the Lunit INSIGHT CXR Triage device and comparison to predicate device performance metrics rather than human reader improvement with AI assistance.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    • Yes, standalone performance was evaluated:
      • An internal nonclinical validation test was conducted specifically "to assess the standalone performance of Lunit INSIGHT CXR Triage."
      • The clinical pivotal studies also reported standalone performance metrics (ROC AUC, sensitivity, specificity) for the device.

    7. The Type of Ground Truth Used

    • For both the nonclinical internal validation test and the clinical pivotal studies, the ground truth was established by expert consensus/interpretation, specifically by "highly experienced board-certified radiologists" for the internal validation. For the clinical studies, "positive" cases contained at least one target finding, and "negative" cases did not.

    8. The Sample Size for the Training Set

    • The document does not provide the sample size for the training set. It only discusses the validation/test sets used for performance evaluation.

    9. How the Ground Truth for the Training Set Was Established

    • The document does not specify how the ground truth for the training set was established, as it does not detail the training process or dataset.

    K Number
    K211161
    Date Cleared
    2021-10-29

    (193 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    N/A
    Why did this record match?
    Reference Devices :

    K183182

    Intended Use

    Critical Care Suite is a suite of AI algorithms for the automated image analysis of frontal chest X-rays acquired on a digital x-ray system.

    Critical Care Suite with the Endotracheal Tube Position produces an on-screen image overlay that detects and localizes an endotracheal tube, locates the endotracheal tube tip, locates the carina, and automatically calculates the vertical distance between the endotracheal tube tip and carina. This information is also transmitted to the radiologist for review.

    Intended users include licensed qualified healthcare professionals (HCPs) trained to independently place and/or assess endotracheal tube placement and radiologists.

    Critical Care Suite with the Endotracheal Tube Positioning AI Algorithm should not be used in lieu of full patient evaluation or solely relied upon to make or confirm a diagnosis. It is not intended to replace the review of the X-ray image by a qualified healthcare professional. Critical Care Suite with the Endotracheal Tube Positioning AI Algorithm is indicated for adult-sized patients.

    Device Description

    Critical Care Suite with Endotracheal Tube Positioning AI Algorithm is an additional AI algorithm incorporated into the Critical Care Suite software previously cleared under K183182. It introduces the Endotracheal Tube Positioning AI Algorithm, a quantification tool that analyzes frontal chest x-ray images and, based on the data in the image, determines the location of the tip of an intubated patient's endotracheal tube, determines the location of the carina, and then calculates and displays the vertical distance between them. The distance provided is within the x-ray detector imaging plane and does not take into account the geometric magnification resulting from the geometry of the x-ray acquisition based on source to image distance (SID), patient size, or any impacts due to patient rotation or tube rotation. This information can aid clinical care teams and radiologists in determining the proper placement of the endotracheal tube in an intubated patient. All algorithms previously cleared under K183182 are still available with Critical Care Suite, including the Pneumothorax Detection Algorithm for triage and notification. The benefit of the proposed modification is not specific to the platform on which it is deployed. This benefit applies to all previously cleared computational platforms for Critical Care Suite, including PACS, On Premise, On Cloud, and Digital Projection Radiographic Systems. The Optima XR240amx was chosen as the initial platform for deployment because endotracheal tube placement images are almost exclusively acquired on mobile X-ray systems due to the immobilization of patients intubated with an endotracheal tube.
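The in-plane vertical distance described above reduces to a row offset in pixels scaled by the detector's row spacing. A minimal sketch under that reading; the coordinates, spacing value, and function name are hypothetical, and, matching the caveat in the text, no correction is made for geometric magnification, SID, patient size, or rotation:

```python
def tip_to_carina_mm(tip_rc, carina_rc, row_spacing_mm):
    """Vertical ETT tip-to-carina distance in the detector imaging plane:
    row offset in pixels times the row spacing (mm per pixel), as might be
    read from a DICOM header. Positive when the carina lies below the tip.
    Coordinates are (row, col) pixel indices."""
    d_rows = carina_rc[0] - tip_rc[0]
    return d_rows * row_spacing_mm

# Hypothetical example: tip at row 1040, carina at row 1290,
# 0.2 mm detector pixels -> 50 mm vertical separation.
print(tip_to_carina_mm((1040, 812), (1290, 820), 0.2))  # -> 50.0
```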

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study details for the Critical Care Suite with Endotracheal Tube Positioning AI Algorithm, based on the provided document:

    1. Table of Acceptance Criteria and Reported Device Performance

    | Metric | Acceptance Criteria (Implicit) | Reported Device Performance (95% CI) |
    |---|---|---|
    | ETT Detection | High accuracy for detection of endotracheal tubes. | AUC: 0.9999 (0.9998, 1.0000) |
    | ETT Detection | High sensitivity for detection of endotracheal tubes. | Sensitivity: 0.9941 (0.9859, 1.0000) |
    | ETT Detection | High specificity for detection of endotracheal tubes. | Specificity: 1.0000 (1.0000, 1.0000) |
    | ETT Tip to Carina Distance Measurement | High success rate for accurate distance measurement. | Success Rate: 0.9851 (0.9722, 0.9981) |
    | Carina Localization | High success rate for accurate carina localization. | Success Rate: 0.9851 (0.9722, 0.9981) |
    | ETT Tip Localization | High success rate for accurate ETT tip localization. | Success Rate: 0.9524 (0.9296, 0.9752) |
    | ETT Localization (DICE Score) | High accuracy for overall ETT localization (segmentation fidelity). | DICE: 0.9881 (0.9765, 0.9997) |

    Note: The document states that "the results met the defined passing criteria." While specific numerical acceptance thresholds are not explicitly listed in the text, the reported high performance metrics imply that these values exceeded the internal acceptance criteria set by the manufacturer.
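The DICE score reported for ETT localization (and for pneumothorax overlap in the first record) measures overlap between a predicted segmentation and the ground-truth mask. A minimal sketch on flat binary masks; the toy masks are illustrative only:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks given as
    flat 0/1 sequences: 2*|A intersect B| / (|A| + |B|).
    Two empty masks are treated as perfect agreement (1.0)."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0

pred = [0, 1, 1, 1, 0, 0]   # predicted ETT pixels (toy example)
truth = [0, 0, 1, 1, 1, 0]  # ground-truth ETT pixels
print(dice(pred, truth))  # 2*2 / (3+3) = 0.666...
```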

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size: The document states that the ground truth dataset contained a "sufficient number of images to adequately analyze all the primary and secondary endpoints." However, the exact sample size for the test set is not explicitly provided in the given text.
    • Data Provenance: The document does not explicitly state the country of origin of the data or whether it was retrospective or prospective. It only mentions the use of a "ground truth dataset."

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    The document does not explicitly state the number of experts used to establish the ground truth for the test set, nor does it provide their specific qualifications (e.g., radiologist with X years of experience).

    4. Adjudication Method for the Test Set

    The document does not explicitly state the adjudication method (e.g., 2+1, 3+1, none) used for establishing the ground truth of the test set.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

    A multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly conducted or described in the provided document. The clinical tests focused on the standalone performance of the AI algorithm against a ground truth dataset, not on comparing human reader performance with and without AI assistance.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    Yes, a standalone study was done. The "Summary of Clinical Tests" section explicitly describes the performance of the Endotracheal Tube Positioning AI Algorithm tested against a ground truth dataset, reporting metrics like AUC, sensitivity, specificity, and success rates for localization and measurement. This indicates a standalone evaluation of the algorithm's performance without direct human-in-the-loop comparison for these specific metrics.

    7. The Type of Ground Truth Used

    The type of ground truth used is expert consensus. The document refers to the algorithm's performance being "tested against a ground truth dataset" without specifying the exact method of ground truth establishment (e.g., pathology, outcomes data). However, for image analysis tasks like ETT positioning and carina localization, ground truth is typically established by multiple experts (e.g., radiologists) providing annotations or measurements, often followed by an adjudication process to reach a consensus.

    8. The Sample Size for the Training Set

    The document does not explicitly provide the sample size for the training set. It mentions the algorithms being "trained with clinical and/or artificial data" but no specific numbers.

    9. How the Ground Truth for the Training Set Was Established

    The document states that the algorithms are "trained with clinical and/or artificial data." It does not explicitly detail how the ground truth for the training set was established. It refers to "nonadaptive machine learning algorithms trained with clinical and/or artificial data," but the process of creating the ground truth annotations for this training data is not described.

