
510(k) Data Aggregation

    K Number: K240554
    Date Cleared: 2025-05-16 (443 days)
    Regulation Number: 892.2050
    Device Name: InferRead Lung CT.AI

    Intended Use

    InferRead Lung CT.AI is comprised of computer assisted reading tools designed to aid the radiologist in the detection of pulmonary nodules ≥ 4mm during the review of CT examinations of the chest on an asymptomatic population ≥ 55 years old. InferRead Lung CT.AI requires that both lungs be in the field of view. InferRead Lung CT.AI provides adjunctive information and is not intended to be used without the original CT series.

    Device Description

    InferRead Lung CT.AI uses deep learning (DL) technology to perform nodule detection. It is a dedicated post-processing application that generates CADe marks as an overlay on the original CT scans. The software can be installed in a healthcare facility or on a cloud-based platform and is comprised of computer-assisted reading tools designed to aid radiologists in detecting, segmenting, measuring, and localizing actionable pulmonary nodules that are 4 mm or larger during the review of chest CT examinations of asymptomatic populations, with enhanced capabilities for pulmonary nodule follow-up comparison and lung analysis. InferRead Lung CT.AI provides auxiliary information and is not intended to be used if the original CT series is not available.
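
    The description above sets a 4 mm actionability threshold but does not say how nodule size is measured. As a point of reference only, the following minimal Python sketch computes a volume-equivalent diameter from a segmentation mask and the CT voxel spacing, one common sizing convention; the function name, toy data, and thresholding step are illustrative assumptions, not the device's documented method.

```python
import numpy as np

def equivalent_diameter_mm(nodule_mask, spacing_mm):
    """Diameter (mm) of a sphere with the same volume as the segmented nodule.

    nodule_mask: boolean 3D array, True inside the segmented nodule.
    spacing_mm:  (z, y, x) voxel spacing in millimetres from the CT header.
    """
    voxel_volume = float(np.prod(spacing_mm))               # mm^3 per voxel
    nodule_volume = int(nodule_mask.sum()) * voxel_volume   # segmented volume in mm^3
    return (6.0 * nodule_volume / np.pi) ** (1.0 / 3.0)     # V = (pi/6) * d^3  =>  d

# Hypothetical usage: keep only detections at or above the 4 mm threshold.
mask = np.zeros((64, 64, 64), dtype=bool)
mask[30:34, 30:34, 30:34] = True                            # a toy 4x4x4-voxel "nodule"
spacing = (1.0, 0.7, 0.7)                                   # mm (slice thickness, row, column)
d = equivalent_diameter_mm(mask, spacing)
print(f"{d:.1f} mm, actionable (>= 4 mm): {d >= 4.0}")
```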

    AI/ML Overview

    The provided 510(k) clearance letter and summary discuss the InferRead Lung CT.AI device, its indications for use, and a comparison to predicate devices. The summary also details the standalone performance tests conducted to assess the newly introduced features of the device.

    Here's an analysis of the acceptance criteria and study information based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document details performance for newly added features rather than explicitly defined "acceptance criteria" for the overall device's primary function of nodule detection. However, it states that "predetermined testing criteria" were passed and that "validation tests indicated that as required by the risk analysis, designated individuals performed all verification and validation activities and that the results demonstrated that the predetermined acceptance criteria were met."

    For the newly introduced functions, specific performance metrics are reported:

    | Feature Tested | Acceptance Criteria (Implied/Expected) | Reported Device Performance |
    |---|---|---|
    | Nodule Registration | High accuracy in matching nodule pairs between current and prior scans. | Overall Nodule Match Rate: 0.970 (95% CI: 0.947-0.994). By scan interval: 0-6 months: 0.976 (95% CI: 0.911-1.0); 6-12 months: 1.000 (95% CI: N/A); 12-24 months: 0.938 (95% CI: 0.880-0.997). |
    | Nodule Lobe Localization | High accuracy in identifying the correct lung lobe for detected nodules. | Overall Lobe Localization Accuracy Rate: 0.957 (95% CI: 0.929-0.986) |
    | Lung Lobe Segmentation | High geometric similarity between automated segmentation and ground truth. | Average Dice Coefficient: 0.966 (95% CI: 0.962-0.969) |
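
    For orientation on the metrics in the table, the sketch below shows how a per-nodule match rate with a normal-approximation (Wald) 95% CI and a Dice coefficient are conventionally computed. It is a generic illustration, not the submitter's analysis code; the submission does not state which CI method was used, and the 200-of-206 example count is hypothetical.

```python
import numpy as np

def rate_with_ci(successes, total, z=1.96):
    """Proportion with a normal-approximation (Wald) 95% confidence interval."""
    p = successes / total
    half_width = z * np.sqrt(p * (1 - p) / total)
    return p, (p - half_width, p + half_width)

def dice_coefficient(pred_mask, truth_mask):
    """Dice overlap between a predicted and a ground-truth binary mask."""
    pred = np.asarray(pred_mask, dtype=bool)
    truth = np.asarray(truth_mask, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

# Hypothetical example: 200 of 206 nodule pairs correctly matched.
match_rate, ci = rate_with_ci(200, 206)
print(f"match rate {match_rate:.3f}, 95% CI ({ci[0]:.3f}, {ci[1]:.3f})")
```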

    2. Sample Sizes Used for the Test Set and Data Provenance

    • Nodule Registration Standalone Test: 98 lung cancer screening cases with 206 nodule pairs.
    • Nodule Lobe Localization Standalone Test: 94 lung cancer screening scans with 188 nodules.
    • Lung Lobe Segmentation Standalone Test: 22 lung cancer screening cases with 110 lung lobes.

    Data Provenance: The document does not explicitly state the country of origin for the data used in these tests, nor does it specify if the data was retrospective or prospective. It refers to "lung cancer screening cases/scans," suggesting these are clinical datasets.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    The document does not explicitly state the number of experts or their qualifications who established the ground truth for the standalone performance tests.

    4. Adjudication Method for the Test Set

    The document does not specify the adjudication method used for establishing the ground truth for the test sets in these standalone performance evaluations.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size

    The document does not indicate that a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was done to assess the improvement of human readers with AI assistance versus without AI assistance. The performance tests described are standalone evaluations of specific AI functions.

    6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done

    Yes, the document explicitly describes standalone performance testing for the newly added functions: "For the newly added functions, including nodule registration, nodule localization and lung lobe segmentation, we conducted standalone performance testing." The reported results (Nodule Match Rate, Lobe Localization Accuracy Rate, Dice Coefficient) are all metrics of the algorithm's performance without human interaction.

    The document also states: "Regarding the performance of the AI outputs, the nodule detection and segmentation functions were consistent with the predicate product (K192880), as verified through consistency testing." This implies that the primary nodule detection and segmentation capabilities were also assessed in a standalone manner, likely by comparing the AI's output to a ground truth.

    7. The Type of Ground Truth Used

    The document does not explicitly state the type of ground truth used for the standalone tests (e.g., expert consensus, pathology, outcomes data). However, for metrics like "Nodule Match Rate," "Lobe Localization Accuracy Rate," and "Dice Coefficient," the ground truth would typically be established by expert radiologists or reference standards. For Dice Coefficient in segmentation, it would likely be expert-drawn segmentations.

    8. The Sample Size for the Training Set

    The document does not provide any information regarding the sample size of the training set used to develop the InferRead Lung CT.AI device.

    9. How the Ground Truth for the Training Set Was Established

    The document does not provide information on how the ground truth for the training set was established.


    K Number: K192880
    Date Cleared: 2020-07-02 (267 days)
    Regulation Number: 892.2050
    Device Name: InferRead Lung CT.AI

    Intended Use

    InferRead Lung CT.AI is comprised of computer assisted reading tools designed to aid the radiologist in the detection of pulmonary nodules during the review of CT examinations of the chest on an asymptomatic population. InferRead Lung CT.AI requires that both lungs be in the field of view. InferRead Lung CT.AI provides adjunctive information and is not intended to be used without the original CT series.

    Device Description

    InferRead Lung CT.AI uses a Browser/Server architecture and is provided as Software as a Service (SaaS) via a URL. The system integrates the algorithm logic and database on the same server for simplicity of the system and convenience of system maintenance. The server is able to accept chest CT images from a PACS system, an RIS (Radiological Information System), or directly from a CT scanner, analyze the images, and provide output annotations regarding lung nodules. Users view the annotations in an existing PACS system. Dedicated servers can be located at hospitals and are directly connected to the hospital networks. The software consists of four modules: Image reception (Docking Toolbox), Image predictive processing (DLServer), Image storage (RePACS), and Image display (NeoViewer).
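
    The image reception module described above (Docking Toolbox) corresponds to a standard DICOM storage service. The sketch below is a minimal, generic DICOM C-STORE receiver built with the pynetdicom library, shown only to illustrate how a server can accept CT images from a PACS, RIS gateway, or scanner; the AE title, port, and storage behavior are assumptions, not the vendor's implementation.

```python
from pynetdicom import AE, evt, AllStoragePresentationContexts

def handle_store(event):
    """Persist each received DICOM object; a real system would queue it for analysis."""
    ds = event.dataset
    ds.file_meta = event.file_meta
    ds.save_as(f"{ds.SOPInstanceUID}.dcm")
    return 0x0000  # DICOM success status

ae = AE(ae_title="INFER_SKETCH")            # hypothetical application entity title
ae.supported_contexts = AllStoragePresentationContexts

# Listen for C-STORE requests from a PACS, RIS gateway, or CT scanner.
ae.start_server(("0.0.0.0", 11112), block=True,
                evt_handlers=[(evt.EVT_C_STORE, handle_store)])
```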

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for InferRead Lung CT.AI, based on the provided document:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document primarily focuses on a comparative effectiveness study and discusses the device's performance in comparison to unaided human reading. It doesn't explicitly list "acceptance criteria" with numerical targets in the same way a standalone performance study might. However, the objective of the clinical study serves as the de facto acceptance criteria.

    | Acceptance Criteria (Inferred from Study Objective) | Reported Device Performance |
    |---|---|
    | Significantly improve radiologists' nodule detection performance (AUC) | Increase in AUC (Aided - Unaided): 0.073 (95% CI: 0.020, 0.125). The document states this increase was "significant"; the lower bound of the confidence interval (0.020) is above zero, satisfying the improvement criterion. |
    | Without significantly increasing reading time | Decrease in reading time (Aided - Unaided): -23 seconds (95% CI: -42, -3). The document states this decrease was "significant"; the upper bound of the confidence interval (-3) is below zero, so reading time was not increased and was in fact reduced. |
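
    For context on how a confidence interval supports these conclusions, the sketch below estimates an aided-minus-unaided AUC difference for a single reader with a percentile bootstrap CI and reads significance off whether the interval excludes zero. The actual study would have used a formal MRMC analysis across the 10 readers (for example an Obuchowski-Rockette style model); the reader scores and helper function here are hypothetical.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def bootstrap_auc_difference(y_true, scores_aided, scores_unaided, n_boot=2000):
    """Percentile bootstrap CI for AUC(aided) - AUC(unaided) on paired cases."""
    y_true = np.asarray(y_true)
    aided = np.asarray(scores_aided)
    unaided = np.asarray(scores_unaided)
    diffs = []
    n = len(y_true)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)              # resample cases with replacement
        if len(np.unique(y_true[idx])) < 2:      # an AUC needs both classes present
            continue
        diffs.append(roc_auc_score(y_true[idx], aided[idx])
                     - roc_auc_score(y_true[idx], unaided[idx]))
    lo, hi = np.percentile(diffs, [2.5, 97.5])
    return float(np.mean(diffs)), (float(lo), float(hi))

# Hypothetical paired reader scores on 249 cases, with and without AI assistance.
y = rng.integers(0, 2, 249)
unaided = y * 0.6 + rng.normal(0, 0.5, 249)
aided = y * 0.9 + rng.normal(0, 0.5, 249)
diff, ci = bootstrap_auc_difference(y, aided, unaided)
print(f"AUC difference {diff:.3f}, 95% CI ({ci[0]:.3f}, {ci[1]:.3f}); significant if the CI excludes 0")
```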

    2. Sample Size and Data Provenance for Test Set

    • Sample Size (Test Set): 249 CT scans.
    • Data Provenance: The document does not explicitly state the country of origin. It specifies that the data included "chest CT scans from patients who underwent lung cancer screening," implying it's clinical data, and the study was "retrospective." This suggests the data was collected from existing patient records.

    3. Number of Experts and Qualifications for Ground Truth

    • The document mentions that a "pivotal reader study" was conducted, involving "10 board-certified radiologists." These radiologists were part of the MRMC study, where their consensus or interpretations would contribute to the ground truth.
    • However, it does not explicitly state how many of these, or other, experts were specifically used to establish the definitive ground truth for the test set independent of the reader study itself. The ground truth for the reader study is the consensus of the readers, or a reference standard against which their performance is measured (see point 7).

    4. Adjudication Method for the Test Set

    The document describes a "fully crossed, multi-reader multi-case (MRMC) study." In such studies, all readers review all cases. While it doesn't explicitly state an adjudication method like "2+1" for establishing a separate ground truth, the MRMC setup inherently uses the collective performance of the expert readers (in both aided and unaided modes) to evaluate the device's impact. The ground truth for nodule presence/absence in the cases would have been established prior to the reader study.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    Yes, a MRMC comparative effectiveness study was done.

    • Effect Size of Human Readers Improvement with AI vs. without AI:
      • Nodule Detection Performance (AUC): The AUC increased by 0.073 (Aided - Unaided), with a 95% confidence interval of (0.020, 0.125). This indicates a statistically significant improvement in detection performance when radiologists used the InferRead Lung CT.AI device.
      • Reading Time: Reading times decreased by 23 seconds (Aided - Unaided), with a 95% confidence interval of (-42, -3). This indicates a statistically significant reduction in reading time.

    6. Standalone Performance Study (Algorithm Only)

    Yes, a standalone performance study was done.

    • The document states: "Standalone performance testing which included chest CT scans from patients who underwent lung cancer screening was performed to validate detection accuracy of InferRead Lung CT.AI. Results showed that InferRead Lung CT.AI had similar nodule detection sensitivity and FP/scan compared to those of the predicate device."
    • This suggests comparison against the predicate device's standalone performance, which also implies a form of quantitative performance metric (sensitivity, false positives per scan) for the algorithm in isolation.
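
    For context on the standalone metrics named in this section, the sketch below computes per-nodule sensitivity and false positives per scan from detections matched to ground-truth nodule locations. The greedy centroid matching rule, the 5 mm match radius, and the toy data are assumptions; the document does not describe the actual scoring protocol.

```python
import numpy as np

def score_scan(detected_mm, truth_mm, match_radius_mm=5.0):
    """Greedy centroid matching: returns (true positives, false positives, nodule count)."""
    remaining = [np.asarray(t, dtype=float) for t in truth_mm]
    tp, fp = 0, 0
    for det in detected_mm:
        dists = [np.linalg.norm(np.asarray(det, dtype=float) - t) for t in remaining]
        if dists and min(dists) <= match_radius_mm:
            remaining.pop(int(np.argmin(dists)))   # each ground-truth nodule matched once
            tp += 1
        else:
            fp += 1
    return tp, fp, len(truth_mm)

# Hypothetical results over a small set of scans: (detected centroids, ground-truth centroids) in mm.
scans = [
    ([(10, 12, 30), (40, 41, 52)], [(10, 13, 31)]),
    ([(22, 20, 18)],               [(22, 21, 18), (60, 55, 40)]),
]
totals = [score_scan(d, t) for d, t in scans]
sensitivity = sum(tp for tp, _, _ in totals) / sum(n for _, _, n in totals)
fp_per_scan = sum(fp for _, fp, _ in totals) / len(scans)
print(f"sensitivity {sensitivity:.2f}, FP/scan {fp_per_scan:.2f}")
```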

    7. Type of Ground Truth Used

    • For the standalone performance study, the document mentions "detection accuracy" based on scans from lung cancer screening, but doesn't explicitly state whether the ground truth was expert consensus, pathology, or outcomes data. However, for nodule detection, expert consensus on the presence and location of nodules from expert radiologists is a common ground truth, often verified or refined.
    • For the MRMC study, the ground truth for evaluating individual reader performance (from which the AUC is derived) would typically be an established reference standard (often expert consensus, sometimes supplemented by follow-up or pathology if available for some cases) created prior to the readers' evaluations.

    8. Sample Size for the Training Set

    The document does not provide the sample size of the training set used for developing the InferRead Lung CT.AI algorithm.

    9. How Ground Truth for Training Set Was Established

    The document does not provide information on how the ground truth for the training set was established.

