Search Results

Found 4 results

510(k) Data Aggregation

    K Number
    K242652
    Manufacturer
    Lunit Inc.
    Date Cleared
    2024-10-04 (30 days)

    Product Code
    Regulation Number
    892.2090
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    Lunit INSIGHT DBT is a computer-assisted detection and diagnosis (CADe/x) software intended to be used concurrently by interpreting physicians to aid in the detection and characterization of suspected lesions for breast cancer in digital breast tomosynthesis (DBT) exams from compatible DBT systems. Through the analysis, the regions of soft tissue lesions and calcifications are marked with an abnormality score indicating the likelihood of the presence of malignancy for each lesion. Lunit INSIGHT DBT uses screening mammograms of the female population.

    Lunit INSIGHT DBT is not intended as a replacement for a complete interpreting physician's review or their clinical judgment that takes into account other relevant information from the image or patient history.

    Device Description

    Lunit INSIGHT DBT is a computer-assisted detection/diagnosis (CADe/x) software as a medical device that provides information about the presence, location and characteristics of lesions suspicious for breast cancer to assist interpreting physicians in making diagnostic decisions when reading digital breast tomosynthesis (DBT) images. The software automatically analyzes digital breast tomosynthesis slices via artificial intelligence technology that has been trained via deep learning.

    For each DBT case, Lunit INSIGHT DBT generates artificial intelligence analysis results that include the lesion type, location, lesion-level/case-level score, and outline of the regions suspected of breast cancer. This peripheral information is intended to augment the physician's workflow and better aid in the detection and diagnosis of breast cancer.
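    To make that output structure concrete, here is a hypothetical Python sketch of the kind of per-case result the description lists (lesion type, location, lesion-level/case-level scores, outline). The field names and types are illustrative assumptions, not Lunit's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class LesionFinding:
    lesion_type: str                # e.g. "soft_tissue" or "calcification"
    slice_index: int                # DBT slice where the lesion is most conspicuous
    outline: list[tuple[int, int]]  # polygon of (x, y) pixel coordinates
    abnormality_score: float        # lesion-level likelihood of malignancy

@dataclass
class CaseResult:
    case_id: str
    case_score: float               # case-level abnormality score
    findings: list[LesionFinding] = field(default_factory=list)
```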

    AI/ML Overview

    The provided text describes the 510(k) submission for Lunit INSIGHT DBT v1.1, a computer-assisted detection and diagnosis (CADe/x) software for breast cancer in digital breast tomosynthesis (DBT) exams. The document primarily focuses on demonstrating substantial equivalence to its predicate device, Lunit INSIGHT DBT v1.0.

    Here's an analysis of the acceptance criteria and the study that proves the device meets them, based on the provided text:

    Acceptance Criteria and Reported Device Performance

    The core acceptance criterion explicitly mentioned for the standalone performance testing is an AUROC (Area Under the Receiver Operating Characteristic curve) greater than 0.903. This is directly compared to the predicate device's performance.

    | Acceptance Criterion (Primary Endpoint) | Reported Device Performance (Lunit INSIGHT DBT v1.1) |
    |---|---|
    | AUROC in standalone performance > 0.903 | AUROC = 0.931 (95% CI: 0.920 - 0.941) |
    | Statistical Significance | p < 0.0001 |
    | Exceeds Acceptance Criteria | Yes |
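    For context on how such figures are produced, a standalone AUROC with a bootstrap 95% CI can be computed along the following lines. This is a minimal sketch assuming arrays of per-case malignancy scores and binary ground-truth labels; the submission does not state which CI method was actually used.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auroc_with_ci(labels, scores, n_boot=2000, seed=0):
    """Point-estimate AUROC plus a percentile-bootstrap 95% CI."""
    rng = np.random.default_rng(seed)
    point = roc_auc_score(labels, scores)
    boots = []
    n = len(labels)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)    # resample cases with replacement
        if len(set(labels[idx])) < 2:  # AUROC needs both classes present
            continue
        boots.append(roc_auc_score(labels[idx], scores[idx]))
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return point, (lo, hi)

# Hypothetical usage: labels are 1 for cancer cases, 0 for negative/benign cases.
labels = np.array([0, 1, 0, 1, 1, 0, 0, 1])
scores = np.array([0.1, 0.8, 0.3, 0.9, 0.6, 0.2, 0.4, 0.7])
auc, (lo, hi) = auroc_with_ci(labels, scores)
print(f"AUROC = {auc:.3f} (95% CI: {lo:.3f} - {hi:.3f})")  # compare against > 0.903
```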

    Details of the Study

    The provided text only discusses a "Standalone Performance Testing" for Lunit INSIGHT DBT v1.1. It states that the protocol for this evaluation was the same as that used for the predicate device (K231470).

    1. Sample Size Used for the Test Set and Data Provenance:

      • Sample Size: The document does not explicitly state the sample size (number of cases or images) used for the test set in the standalone performance study.
      • Data Provenance: The document does not explicitly state the country of origin of the data or whether it was retrospective or prospective. It only mentions that the software uses "screening mammograms of the female population."
    2. Number of Experts Used to Establish Ground Truth and Qualifications:

      • The document does not specify the number of experts used or their qualifications for establishing ground truth in the standalone performance study.
    3. Adjudication Method for the Test Set:

      • The document does not mention any adjudication method (e.g., 2+1, 3+1, none) used for the test set.
    4. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

      • The document does not indicate that a multi-reader multi-case (MRMC) comparative effectiveness study was done to show how much human readers improve with AI vs. without AI assistance. The study described focuses on the standalone performance of the AI algorithm.
    5. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study:

      • Yes, a standalone performance study was done. The document explicitly states: "A standalone performance study of the Lunit INSIGHT DBT v1.1 assessed the detection performance of the artificial intelligence algorithm for breast cancer within DBT exams."
    6. Type of Ground Truth Used:

      • The document does not explicitly state the specific type of ground truth used (e.g., expert consensus, pathology, outcomes data, etc.) for the standalone performance study. It refers to the detection of "breast cancer," implying a definitive diagnosis, but doesn't detail how this diagnosis was established as ground truth for the test cases.
    7. Sample Size for the Training Set:

      • The document does not specify the exact sample size for the training set. It only mentions that the updated AI engine has "expanded training data."
    8. How the Ground Truth for the Training Set Was Established:

      • The document does not explicitly detail how the ground truth for the training set was established. It states that the AI technology "has been trained via deep learning," which implies the use of labeled data, but does not describe the process of labeling or establishing that ground truth.

    In summary:

    The provided information focuses on demonstrating that Lunit INSIGHT DBT v1.1 meets the standalone performance AUROC criterion (0.931 > 0.903), the same criterion used for its predicate device. However, it lacks detailed information about the specifics of the data used (sample sizes, provenance) and the ground truth establishment process (experts, adjudication). The absence of an MRMC study is notable for a CADe/x device, though not explicitly required for this specific 510(k) submission, which claims substantial equivalence to a predicate based on standalone performance.

    K Number
    K231470
    Manufacturer
    Lunit Inc.
    Date Cleared
    2023-11-06 (168 days)

    Product Code
    Regulation Number
    892.2090
    Reference & Predicate Devices
    Predicate For
    Intended Use

    Lunit INSIGHT DBT is a computer-assisted detection and diagnosis (CADe/x) software intended to be used concurrently by interpreting physicians to aid in the detection and characterization of suspected lesions for breast cancer in digital breast tomosynthesis (DBT) exams from compatible DBT systems. Through the analysis, the regions of soft tissue lesions and calcifications are marked with an abnormality score indicating the likelihood of the presence of malignancy for each lesion. Lunit INSIGHT DBT uses screening mammograms of the female population.

    Lunit INSIGHT DBT is not intended as a replacement for a complete interpreting physician's review or their clinical judgment that takes into account other relevant information from the image or patient history.

    Device Description

    Lunit INSIGHT DBT is a computer-assisted detection/diagnosis (CADe/x) software as a medical device that provides information about the presence, location and characteristics of lesions suspicious for breast cancer to assist interpreting physicians in making diagnostic decisions when reading digital breast tomosynthesis (DBT) images. The software automatically analyzes digital breast tomosynthesis slices via artificial intelligence technology that has been trained via deep learning.

    For each DBT case, Lunit INSIGHT DBT generates artificial intelligence analysis results that include the lesion type, location, lesion-level/case-level score, and outline of the regions suspected of breast cancer. This peripheral information is intended to augment the physician's workflow and better aid in the detection and diagnosis of breast cancer.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device, Lunit INSIGHT DBT, meets them, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    | Performance Metric | Acceptance Criteria | Reported Device Performance | Statistical Significance / Comment |
    |---|---|---|---|
    | Standalone Performance | | | |
    | AUROC | > 0.903 (mean AUROC of predicate device K211678) | 0.928 (95% CI: 0.917 - 0.939) | p < 0.0001 (exceeded criteria) |
    | Clinical Assessment (MRMC Study with CAD Assistance) | | | |
    | Patient-level LOS AUROC | CAD-assisted performance superior to CAD-unassisted performance with statistical significance | CAD-unassisted AUROC: 0.897 (95% CI: 0.858 - 0.936); CAD-assisted AUROC: 0.915 (95% CI: 0.874 - 0.955) | Inter-test difference: 0.017 (95% CI: 0.000 - 0.034, P = 0.0498); met criteria |

    2. Sample Size Used for the Test Set and Data Provenance

    • Standalone Performance Test Set: 2,202 DBT exams (1,100 negative/benign, 1,102 cancer cases).
      • Data Provenance: Collected at multiple imaging facilities in the US. The data was collected consecutively.
    • Clinical Assessment (MRMC) Test Set: 258 DBT exams (65 cancer cases, 193 non-cancer cases, comprising 128 normal and 65 benign cases).
      • Data Provenance: Acquired from US clinical centers.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications

    • Standalone Performance Test Set: The text states ground truth localization was "derived based on the radiologic review and annotation by multiple MQSA qualified ground truthers." The exact number of experts is not specified.
      • Qualifications: "MQSA qualified ground truthers."
    • Clinical Assessment (MRMC) Test Set: The ground truth for the cases used in the MRMC study is implicitly established by the case classification (cancer vs. non-cancer). It's not explicitly stated how many experts established the underlying ground truth for these 258 cases, beyond the radiologists participating in the MRMC study itself. The readers for the MRMC study were "a total of 15 MQSA qualified and US board-certified radiologists."

    4. Adjudication Method for the Test Set

    • Standalone Performance Test Set: Ground truth was established through "binary classification of each case based on clinical supporting data, particularly pathology reports for cancer and biopsy-proven benign cases, followed by localization which was derived based on the radiologic review and annotation by multiple MQSA qualified ground truthers." This suggests an expert consensus/review process for localization, likely involving reconciliation or multiple reads. The specific type of adjudication (e.g., 2+1, 3+1) is not explicitly detailed.
    • Clinical Assessment (MRMC) Test Set: The ground truth for the MRMC study seems to rely on the pre-established classification of cases (cancer/non-cancer) based on clinical and pathological data. The radiologists in the MRMC study were evaluating cases against this existing ground truth, not establishing it in an adjudicated reading session.

    5. Was a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Done? If so, what was the effect size of how much human readers improve with AI vs. without AI assistance?

    • Yes, an MRMC comparative effectiveness study was done.
    • Effect Size of Improvement:
      • CAD-unassisted AUROC: 0.897
      • CAD-assisted AUROC: 0.915
      • Inter-test difference (effect size): 0.017 (absolute difference in AUROC). This indicates an improvement in AUROC of 0.017 when radiologists were assisted by Lunit INSIGHT DBT. The p-value of 0.0498 indicates this improvement was statistically significant. (A sketch of this kind of paired AUROC-difference estimate appears after this list.)

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done

    • Yes, a standalone performance study was done.
      • Performance: AUROC of 0.928 (95% CI: 0.917 - 0.939).

    7. The Type of Ground Truth Used

    • Standalone Performance and Clinical Assessment Test Sets: The ground truth was established primarily through clinical supporting data, specifically pathology reports for cancer and biopsy-proven benign cases, further supplemented by radiologic review and annotation by MQSA qualified ground truthers for localization. This can be categorized as a combination of pathology and expert consensus/review.

    8. The Sample Size for the Training Set

    • The document states that the "dataset used in the standalone performance test was independent from the dataset used for development of the artificial intelligence algorithm." However, it does not specify the sample size of the training set used for the development of Lunit INSIGHT DBT.

    9. How the Ground Truth for the Training Set Was Established

    • The document mentions that the training dataset was separate from the test dataset. While it details how the ground truth was established for the test set (pathology, biopsy, radiologic review/annotation), it does not explicitly describe how the ground truth was established for the training set. Because this is a deep learning model, it is highly probable that the training data also relied on verified diagnoses, likely similarly derived from pathology/biopsy given the nature of CADe/x devices for breast cancer.
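    To make the item-5 effect size concrete, an inter-test AUROC difference of this kind can be estimated with a case-level paired bootstrap along the following lines. This is a minimal sketch assuming per-case unaided and aided reader scores pooled into two arrays; the study's actual MRMC analysis is not described at this level of detail and typically uses dedicated methods (e.g., Dorfman-Berbaum-Metz) that also model reader variability.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def paired_auc_difference(labels, unaided, aided, n_boot=5000, seed=0):
    """Bootstrap the AUROC difference (aided - unaided) over the same cases."""
    rng = np.random.default_rng(seed)
    diff = roc_auc_score(labels, aided) - roc_auc_score(labels, unaided)
    boots = []
    n = len(labels)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)    # resample cases, keeping the two reads paired
        if len(set(labels[idx])) < 2:  # AUROC needs both classes present
            continue
        boots.append(roc_auc_score(labels[idx], aided[idx]) -
                     roc_auc_score(labels[idx], unaided[idx]))
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return diff, (lo, hi)  # acceptance here: CI lower bound above 0
```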

    K Number
    K211678
    Manufacturer
    Lunit Inc.
    Date Cleared
    2021-11-17 (169 days)

    Product Code
    Regulation Number
    892.2090
    Reference & Predicate Devices
    Predicate For
    Intended Use

    Lunit INSIGHT MMG is a radiological Computer-Assisted Detection and Diagnosis (CADe/x) software device based on an artificial intelligence algorithm intended to aid in the detection and characterization of suspicious areas for breast cancer on mammograms from compatible FFDM systems. As an adjunctive tool, the device is intended to be viewed by interpreting physicians after completing their initial read.

    It is not intended as a replacement for a complete physician's review or their clinical judgement that takes into account other relevant information from the image or patient history. Lunit INSIGHT MMG uses screening mammograms of the female population.

    Device Description

    Lunit INSIGHT MMG is a radiological Computer-Assisted Detection and Diagnosis (CADe/x) software for aiding interpreting physicians with the detection and characterization of suspicious areas for breast cancer on mammograms from compatible FFDM (full-field digital mammography) systems. The software applies an artificial intelligence algorithm for recognition of suspicious lesions, which is trained with large databases of biopsy-proven examples of breast cancer, benign lesions and normal tissues.

    Lunit INSIGHT MMG automatically analyzes the mammograms received from the client's image storage system (e.g., Picture Archiving and Communication System (PACS)) or other radiological imaging equipment. Following receipt of mammograms, the software device de-identifies copies of the images in DICOM format (.dcm), then automatically analyzes each image and identifies and characterizes suspicious areas for breast cancer. The analysis result is converted into a DICOM file and saved within the designated storage location (e.g., PACS, x-ray system, etc.).
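    The receive, de-identify, analyze, write-back flow described above might look like the sketch below, using pydicom. The tag list, file paths, and overall structure are illustrative assumptions, not Lunit's implementation.

```python
import pydicom

# Illustrative subset of identifying attributes to blank on the working copy.
PHI_TAGS = ["PatientName", "PatientID", "PatientBirthDate", "ReferringPhysicianName"]

def deidentify_copy(path_in: str, path_out: str) -> pydicom.Dataset:
    """Load a mammogram, blank identifying attributes, and save the copy."""
    ds = pydicom.dcmread(path_in)
    for tag in PHI_TAGS:
        if tag in ds:
            setattr(ds, tag, "")
    ds.remove_private_tags()  # drop vendor-specific private elements
    ds.save_as(path_out)
    return ds

# Hypothetical usage: de-identify before analysis, then write results back as DICOM.
ds = deidentify_copy("incoming/mammogram.dcm", "work/mammogram_anon.dcm")
```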

    Lunit INSIGHT MMG processes mammograms, and the output of the device can be viewed by interpreting physicians after completing their initial read. As an analysis result, the software device allows visualization and quantitative estimation of the presence of a malignant lesion. The suspicious lesions are marked by a visualized map, and an abnormality score, which reflects the general likelihood of the presence of malignancy, is presented for each lesion as well as for each breast.
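    The document does not specify how the per-breast score relates to the per-lesion scores; one plausible aggregation, shown purely as an assumption, is a simple maximum over the lesions found in that breast:

```python
def breast_score(lesion_scores: list[float]) -> float:
    """Hypothetical breast-level abnormality score: max over lesion scores, 0.0 if none."""
    return max(lesion_scores, default=0.0)

print(breast_score([12.5, 87.3, 40.1]))  # -> 87.3
print(breast_score([]))                  # -> 0.0
```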

    AI/ML Overview
    Here's a breakdown of the acceptance criteria and study details for Lunit INSIGHT MMG, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    | Criterion | Acceptance Criteria | Reported Device Performance |
    |---|---|---|
    | Standalone Performance: ROC AUC | Statistical significance (p < 0.0001) and improvement compared to the interpretation performance of radiologists when reading mammograms unaided | 0.903 (95% CI: 0.889 - 0.917), p < 0.0001 (demonstrates improvement compared to unaided radiologists) |
    | Clinical Performance (MRMC Reader Study): Primary Endpoint - average inter-test difference in ROC AUC (Test 2 vs. Test 1) | p-value of ROC AUC difference test < 5% and lower bound of two-sided 95% CI of AUC difference (Test 2 - Test 1) > 0 | 0.051 (95% CI: 0.027 - 0.075), P = 0.0001 |

    (A sketch of this primary-endpoint check appears after this list.)

    2. Sample Size Used for the Test Set and Data Provenance

    • Standalone Performance Study: 2,412 mammograms, collected using Hologic, GE Healthcare, and Siemens mammography equipment. The data is independent from the dataset used for algorithm development and from the US pivotal reader study. Country of origin is not explicitly stated, but a global or general collection from compatible equipment is implied.
    • Clinical Testing (Reader Study): 240 mammograms, collected using Hologic and GE Healthcare mammography equipment in the US. Retrospective study.

    3. Number of Experts Used to Establish Ground Truth and Qualifications

    • Not explicitly stated for establishing ground truth, but for the clinical reader study, 12 MQSA qualified reading panelists performed interpretations. Specific years of experience are not mentioned.

    4. Adjudication Method for the Test Set

    • Standalone Performance Study: not explicitly described, but comparison against "reference standards" is implied.
    • Clinical Testing (Reader Study): not explicitly described beyond the use of 12 MQSA qualified reading panelists. Readers re-interpreted cases with AI assistance after an initial unaided read, with randomized reading order to minimize bias.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Yes, an MRMC study was conducted. Effect sizes:
      • ROC AUC: the average inter-test difference between Test 2 (with CAD) and Test 1 (without CAD) was 0.051 (95% CI: 0.027 - 0.075), with statistical significance (P = 0.0001), indicating improved physician interpretation ability.
      • LCM LROC improvement: 0.094 (95% CI: 0.056 - 0.132).
      • LCM ROC AUC improvement: 0.052 (95% CI: 0.026 - 0.079).
      • Recall rate in the cancer group (sensitivity) improvement: 5.97 (95% CI: 2.48 - 9.46).
      • Recall rate in the non-cancer group (1 - specificity) change: -1.46 (95% CI: -3.41 - 0.05).

    6. Standalone Performance Study

    • Yes, two standalone performance analyses were done:
      1. A dedicated standalone performance study with 2,412 mammograms, showing a ROC AUC of 0.903.
      2. An additional standalone algorithm performance assessment using the 240 cases from the reader study, without reader intervention, yielding a ROC AUC of 0.863. This exceeded the AUC of every human reading panelist in the unaided (Test 1) scenario (unaided AUC = 0.754).

    7. Type of Ground Truth Used

    • Mainly implied as breast cancer detection based on "reference standards" in the standalone study and on expert interpretation/consensus in the reader study. The device is trained with "biopsy proven examples of breast cancer, benign lesions and normal tissues", suggesting pathology or clinical outcomes data as the ultimate ground truth source for training.

    8. Sample Size for the Training Set

    • Not explicitly stated as a specific number; described as "large databases of biopsy proven examples of breast cancer, benign lesions and normal tissues".
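    The primary endpoint rule above is a conjunction of two conditions: a p-value below 5% and a 95% CI lower bound for the AUC difference above zero. A trivial sketch of the check, with the reported numbers plugged in:

```python
def primary_endpoint_met(auc_diff_ci_lower: float, p_value: float) -> bool:
    """Both conditions must hold: significance and a CI excluding zero."""
    return p_value < 0.05 and auc_diff_ci_lower > 0.0

# Reported reader-study values: AUC difference 0.051 (95% CI: 0.027 - 0.075), P = 0.0001.
print(primary_endpoint_met(auc_diff_ci_lower=0.027, p_value=0.0001))  # -> True
```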
    

    K Number
    K211733
    Manufacturer
    Lunit Inc.
    Date Cleared
    2021-11-10 (159 days)

    Product Code
    Regulation Number
    892.2080
    Reference & Predicate Devices
    Intended Use

    Lunit INSIGHT CXR Triage is a radiological computer-assisted triage and notification software that analyzes adult chest X-ray images for the presence of pre-specified suspected critical findings (pleural effusion and/or pneumothorax). Lunit INSIGHT CXR Triage uses an artificial intelligence algorithm to analyze images for features suggestive of critical findings and provides case-level output available in the PACS/workstation for worklist prioritization or triage.

    As a passive notification for prioritization-only software tool within standard of care workflow, Lunit INSIGHT CXR Triage does not send a proactive alert directly to the appropriately trained medical specialists. Lunit INSIGHT CXR Triage is not intended to direct attention to specific portions of an image. Its results are not intended to be used on a standalone basis for clinical decision-making.

    Device Description

    Lunit INSIGHT CXR Triage is a radiological computer-assisted prioritization software that utilizes AI-based image analysis algorithms to identify pre-specified critical findings (pleural effusion and/or pneumothorax) on frontal chest X-ray images and flag the images in the PACS/workstation to enable prioritized review by the appropriately trained medical specialists who are qualified to interpret chest radiographs. The software does not alter the order or remove cases from the reading queue.

    Chest radiographs are automatically received from the user's image system (e.g., Picture Archiving and Communication System (PACS)) or other radiological imaging equipment (e.g., X-ray systems) and processed by Lunit INSIGHT CXR Triage for analysis. Following receipt of chest radiographs, the software device de-identifies a copy of each chest radiograph in DICOM format (.dcm) and automatically analyzes each image to identify features suggestive of pleural effusion and/or pneumothorax. Based on the analysis result, the software notifies the PACS/workstation of the presence of the critical findings, indicating either "flag" or "(blank)". This allows the appropriately trained medical specialists to group suspicious exams together that may potentially benefit from prioritization. Chest radiographs without an identified anomaly are placed in the worklist for routine review, which is the current standard of care. Lunit INSIGHT CXR Triage can flag more than one critical finding per radiograph, and the user may select the option to turn notification of critical findings (pleural effusion and pneumothorax) on and off.
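    The case-level "flag"/"(blank)" behavior described above amounts to thresholding per-finding scores, with per-finding notification toggles. A minimal sketch, with threshold values and names as hypothetical placeholders:

```python
# Hypothetical operating thresholds; real values are fixed during validation.
THRESHOLDS = {"pleural_effusion": 0.5, "pneumothorax": 0.5}

def triage_label(scores: dict[str, float], enabled: dict[str, bool]) -> str:
    """Return 'flag' if any enabled critical finding meets its threshold, else ''."""
    for finding, score in scores.items():
        if enabled.get(finding, False) and score >= THRESHOLDS[finding]:
            return "flag"
    return ""  # "(blank)": the case stays in the routine-review worklist

# Example: pleural effusion notification on, pneumothorax notification off.
print(triage_label({"pleural_effusion": 0.72, "pneumothorax": 0.10},
                   {"pleural_effusion": True, "pneumothorax": False}))  # -> flag
```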

    When deployed on other radiological imaging equipment, Lunit INSIGHT CXR Triage automatically runs after image acquisition. It prioritizes and displays the analysis result through the worklist interface of PACS/workstation. Moreover, the analysis result can also be provided in the form of DICOM files containing information on the presence of suspicious radiologic findings. In parallel, the algorithms produce an on-device notification indicating which cases were prioritized by Lunit INSIGHT CXR Triage in PACS. The on-device notification does not provide any diagnostic information and it is not intended to inform any clinical decision, prioritization, or action to the technologist.

    Lunit INSIGHT CXR Triage works in parallel to and in conjunction with the standard of care workflow, enabling the user to review studies containing critical findings earlier than others. As a passive notification for prioritization-only software tool within the standard of care workflow, the software does not send a proactive alert directly to the appropriately trained medical specialists who are qualified to interpret chest radiographs. Lunit INSIGHT CXR Triage is not intended to direct attention to specific portions or anomalies of an image, and it should not be used on a stand-alone basis for clinical decision-making.

    In parallel, an on-device technologist notification is generated 15 minutes after interpretation by the user, indicating which cases were prioritized by Lunit INSIGHT CXR Triage in PACS. The technologist notification is contextual and does not provide any diagnostic information. The on-device technologist notification is not intended to inform any clinical decision, prioritization, or action.

    AI/ML Overview

    Here's a summary of the acceptance criteria and study details for the Lunit INSIGHT CXR Triage device:

    1. Table of Acceptance Criteria and Reported Device Performance

    | Feature/Metric | Acceptance Criteria | Reported Device Performance (Clinical Study) |
    |---|---|---|
    | Pleural Effusion Detection | | |
    | ROC AUC | > 0.95 | 0.9686 (95% CI: 0.9547 - 0.9824) |
    | Sensitivity (Lower Bound) | > 0.85 | 89.86% (95% CI: 86.72 - 93.00) |
    | Specificity (Lower Bound) | > 0.85 | 93.48% (95% CI: 91.06 - 95.91) |
    | Pneumothorax Detection | | |
    | ROC AUC | > 0.95 | 0.9630 (95% CI: 0.9521 - 0.9739) |
    | Sensitivity (Lower Bound) | > 0.85 | 88.92% (95% CI: 85.60 - 92.24) |
    | Specificity (Lower Bound) | > 0.85 | 90.51% (95% CI: 88.18 - 92.83) |
    | Device Performance Time (average) | Comparable to cleared commercial products (HealthCXR (Zebra, K192320) and red dot™ (Behold.AI, K161556)). The primary predicate (HealthCXR) had a delay of ~22 seconds for image transfer, computation, and results transfer, and Critical Care Suite averaged 42 seconds for acquisition, annotation, processing, and transfer, so comparably shorter times would be favorable. | Pleural Effusion: 20.76 seconds (95% CI: 20.23 - 21.28); Pneumothorax: 20.45 seconds (95% CI: 19.99 - 20.92) |
    | Nonclinical Testing (Standalone Performance) | ROC AUC > 0.95, Sensitivity > 0.80, Specificity > 0.80 | Pleural Effusion: ROC AUC 0.9864 (95% CI: 0.9815 - 0.9913), Sensitivity 94.29%, Specificity 95.72%; Pneumothorax: ROC AUC 0.9973 (95% CI: 0.9955 - 0.9992), Sensitivity 96.08%, Specificity 99.14% |
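    For reference, the sensitivity/specificity lower-bound checks in the table can be reproduced from confusion-matrix counts along the following lines. This sketch uses a normal-approximation (Wald) interval with hypothetical counts; the submission does not state which interval method was used.

```python
import math

def proportion_ci(successes: int, total: int, z: float = 1.96):
    """Point estimate and Wald 95% CI for a binomial proportion."""
    p = successes / total
    half = z * math.sqrt(p * (1 - p) / total)
    return p, max(0.0, p - half), min(1.0, p + half)

# Hypothetical confusion-matrix counts, not the study's actual data.
tp, fn, tn, fp = 445, 55, 1080, 120
sens, sens_lo, _ = proportion_ci(tp, tp + fn)  # sensitivity over positive cases
spec, spec_lo, _ = proportion_ci(tn, tn + fp)  # specificity over negative cases
print(f"sensitivity {sens:.4f}, lower bound {sens_lo:.4f}, meets > 0.85: {sens_lo > 0.85}")
print(f"specificity {spec:.4f}, lower bound {spec_lo:.4f}, meets > 0.85: {spec_lo > 0.85}")
```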

    2. Sample Size Used for the Test Set and Data Provenance

    • Clinical Studies (Pivotal Studies):

      • Total Sample Size: 1,708 anonymized chest radiographs.
        • 754 cases for pleural effusion.
        • 954 cases for pneumothorax.
      • Data Provenance:
        • NIH chest X-ray dataset (represents the US population).
        • India dataset collected from multiple institutions in India (6 sites for pleural effusion and 3 sites for pneumothorax).
      • Type: Retrospective (implied by "anonymized chest radiographs collected from datasets").
    • Nonclinical Internal Validation Test (for standalone performance validation):

      • Total Sample Size: 1,385 images.
      • Data Provenance: Not explicitly stated, but performed "internally" by the company. It's likely a mix of internal data or data acquired for internal development.
      • Type: Retrospective (implied by "images were collected").

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Clinical Studies (Pivotal Studies): Not explicitly stated how many experts established the ground truth for the clinical pivotal studies.
    • Nonclinical Internal Validation Test: "highly experienced board-certified radiologists" were used to classify images into positive and negative groups. The specific number of radiologists is not mentioned.

    4. Adjudication Method for the Test Set

    • The document does not explicitly describe an adjudication method (e.g., 2+1, 3+1). For the Nonclinical Internal Validation Test, it states "classified into positive and negative groups by highly experienced board-certified radiologists," which might imply a consensus or single-reader approach, but an explicit adjudication method is not detailed.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

    • No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done or reported in this document. The studies primarily focused on the standalone performance of the Lunit INSIGHT CXR Triage device and comparison to predicate device performance metrics rather than human reader improvement with AI assistance.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    • Yes, standalone performance was evaluated:
      • An internal nonclinical validation test was conducted specifically "to assess the standalone performance of Lunit INSIGHT CXR Triage."
      • The clinical pivotal studies also reported standalone performance metrics (ROC AUC, sensitivity, specificity) for the device.

    7. The Type of Ground Truth Used

    • For both the nonclinical internal validation test and the clinical pivotal studies, the ground truth was established by expert consensus/interpretation, specifically by "highly experienced board-certified radiologists" for the internal validation. For the clinical studies, "positive" cases contained at least one target finding, and "negative" cases did not.

    8. The Sample Size for the Training Set

    • The document does not provide the sample size for the training set. It only discusses the validation/test sets used for performance evaluation.

    9. How the Ground Truth for the Training Set Was Established

    • The document does not specify how the ground truth for the training set was established, as it does not detail the training process or dataset.