
510(k) Data Aggregation

    K Number: K241620
    Device Name: ChestView US
    Manufacturer:
    Date Cleared: 2025-02-27 (267 days)
    Product Code:
    Regulation Number: 892.2070
    Reference & Predicate Devices
    Predicate For: N/A
    Tags: AI/ML · SaMD · IVD (In Vitro Diagnostic) · Therapeutic · Diagnostic · PCCP Authorized · Third-party · Expedited review
    Intended Use

    ChestView US is a radiological Computer-Assisted Detection (CADe) software device that analyzes frontal and lateral chest radiographs of patients presenting with symptoms (e.g. dyspnea, cough, pain) or suspected of having findings related to regions of interest (ROIs) in the lungs, airways, mediastinum/hila and pleural space. The device uses machine learning techniques to identify the ROIs and produce boxes around them. The boxes are labeled with one of the following radiographic findings: Nodule, Pleural space abnormality, Mediastinum/Hila abnormality, and Consolidation.

    ChestView US is intended for use as a concurrent reading aid for radiologists and emergency medicine physicians. It does not replace the role of radiologists and emergency medicine physicians or of other diagnostic testing in the standard of care. ChestView US is for prescription use only and is indicated for adults only.

    Device Description

    ChestView US is a radiological Computer-Assisted Detection (CADe) software device intended to analyze frontal and lateral chest radiographs for suspicious regions of interest (ROIs): Nodule, Consolidation, Pleural Space Abnormality and Mediastinum/Hila Abnormality.

    The nodule ROI category was developed from images with focal nonlinear opacity with a generally spherical shape situated in the pulmonary interstitium.

    The consolidation ROI category was developed from images with an area of increased attenuation of lung parenchyma due to the replacement of air in the alveoli.

    The pleural space abnormality ROI category was developed from images with:

    • Pleural Effusion that is an abnormal presence of fluid in the pleural space
    • Pneumothorax that is an abnormal presence of air or gas in the pleural space that separates the parietal and the visceral pleura

    The mediastinum/hila abnormality ROI category was developed from images with enlargement of the mediastinum or the hilar region with a deformation of its contours.

    ChestView US can be deployed in the cloud and connected to several computing platforms and X-ray imaging platforms, such as radiographic systems or PACS. More precisely, ChestView US can be deployed in the cloud connected to a DICOM Source/Destination with a DICOM Viewer, i.e. a PACS.

    After the acquisition of the radiographs on the patient and their storage in the DICOM Source, the radiographs are automatically received by ChestView US from the user's DICOM Source through intermediate DICOM node(s) (for example, a specific Gateway, or a dedicated API). The DICOM Source can be the user's image storage system (for example, the Picture Archiving and Communication System, or PACS), or other radiological equipment (for example X-ray systems).

    Once received by ChestView US, the radiographs are automatically processed by the AI algorithm to identify regions of interest. Based on the processing result, ChestView US generates result files in DICOM format. These result files consist of annotated images with boxes drawn around the regions of interest on a copy of all images (as an overlay). ChestView US does not alter the original images, nor does it change the order of original images or delete any image from the DICOM Source.

    Once available, the result files are sent by ChestView US to the DICOM Destination through the same intermediate DICOM node(s). Similar to the DICOM Source, the DICOM Destination can be the user's image storage system (for example, the Picture Archiving and Communication System, or PACS), or other radiological equipment (for example X-ray systems). The DICOM Source and the DICOM Destination are not necessarily identical.

    The DICOM Destination can be used to visualize the result files provided by ChestView US or to transfer the results to another DICOM host for visualization. The users then use the results as a concurrent reading aid to provide their diagnosis.

    For each exam analyzed by ChestView US, a DICOM Secondary Capture is generated.

    If any ROI is detected by ChestView US, the output DICOM image includes a copy of the original images of the study and the following information:

    • Above the images, a header with the text "CHESTVIEW ROI" and the list of the findings detected in the image.
    • Around each ROI, a bounding box with a solid or dotted line depending on the confidence of the algorithm, with the type of ROI written above the box:
      • Dotted-line bounding box: the confidence degree of the AI algorithm associated with the possible finding is above the high-sensitivity operating point but below the high-specificity operating point; the region of interest is displayed as a dotted bounding box around the area of interest.
      • Solid-line bounding box: the confidence degree of the AI algorithm associated with the finding is above the high-specificity operating point; the region of interest is displayed as a solid bounding box around the area of interest.
    • Below the images, a footer with:
      • The scope of ChestView US, so that the user always has available the list of ROI types that are within the indications for use of the device, avoiding any risk of confusion or misinterpretation of the types of ROI detected by ChestView US.
      • The total number of regions of interest identified by ChestView US on the exam (sum of solid-line and dotted-line bounding boxes)
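    The two-operating-point display logic described above can be sketched as a simple threshold rule. The threshold values below are illustrative assumptions, not the device's actual operating points.

    ```python
    from typing import Optional

    # Assumed, illustrative operating points (the real values are not disclosed).
    HIGH_SENSITIVITY_OP = 0.30
    HIGH_SPECIFICITY_OP = 0.80

    def box_style(confidence: float) -> Optional[str]:
        """Map an AI confidence score to a bounding-box style.

        Returns "solid" at or above the high-specificity operating point,
        "dotted" between the two operating points, and None (no box drawn)
        below the high-sensitivity operating point.
        """
        if confidence >= HIGH_SPECIFICITY_OP:
            return "solid"
        if confidence >= HIGH_SENSITIVITY_OP:
            return "dotted"
        return None
    ```

    The high-sensitivity operating point trades specificity for catching more findings (dotted boxes), while the high-specificity operating point flags only high-confidence findings (solid boxes).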

    If no ROI is detected by ChestView US, the output DICOM image includes a copy of the original images of the study and the text "NO CHESTVIEW ROI", along with the scope of ChestView US, so that the user always has available the list of ROI types that are within the indications for use of the device, avoiding any risk of confusion or misinterpretation of the types of ROI detected by ChestView US.

    Finally, if processing of the exam by ChestView US is not possible, because the exam is outside the indications for use of the device or some information needed for processing is missing, the output DICOM image includes a copy of the original images of the study and, in a header, the text "OUT OF SCOPE" together with a caution message explaining why no result was provided by the device.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for ChestView US:

    1. Table of Acceptance Criteria and Reported Device Performance

    Standalone Performance (ChestView US)

    Acceptance criteria for these metrics are not explicitly stated in the document; reported performance with 95% bootstrap CIs:

    NODULE
    • AUC: 0.93 [0.921; 0.938]
    • High-sensitivity operating point: sensitivity 0.829 [0.801; 0.86], specificity 0.956 [0.948; 0.963]
    • High-specificity operating point: sensitivity 0.482 [0.455; 0.518], specificity 0.994 [0.99; 0.996]

    MEDIASTINUM/HILA ABNORMALITY
    • AUC: 0.922 [0.91; 0.934]
    • High-sensitivity operating point: sensitivity 0.793 [0.739; 0.832], specificity 0.975 [0.971; 0.98]
    • High-specificity operating point: sensitivity 0.535 [0.475; 0.592], specificity 0.992 [0.99; 0.994]

    CONSOLIDATION
    • AUC: 0.952 [0.947; 0.957]
    • High-sensitivity operating point: sensitivity 0.853 [0.822; 0.879], specificity 0.946 [0.938; 0.952]
    • High-specificity operating point: sensitivity 0.61 [0.583; 0.643], specificity 0.985 [0.981; 0.989]

    PLEURAL SPACE ABNORMALITY
    • AUC: 0.973 [0.97; 0.975]
    • High-sensitivity operating point: sensitivity 0.892 [0.87; 0.911], specificity 0.965 [0.958; 0.971]
    • High-specificity operating point: sensitivity 0.87 [0.85; 0.896], specificity 0.975 [0.97; 0.981]
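    As a reminder of how sensitivity and specificity at an operating point are computed, here is a minimal sketch: scores at or above the threshold count as positive calls, and the expert-panel label is the reference. The data in the test is made up for illustration.

    ```python
    def sens_spec(scores, labels, threshold):
        """Return (sensitivity, specificity) of scores thresholded at an operating point.

        labels: 1 = finding present, 0 = finding absent (per the reference standard).
        """
        tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= threshold)
        fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < threshold)
        tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < threshold)
        fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)
        return tp / (tp + fn), tn / (tn + fp)
    ```

    Raising the threshold moves from the high-sensitivity row to the high-specificity row: fewer findings are called, so sensitivity drops while specificity rises, exactly the pattern visible in the figures above.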

    MRMC Study Acceptance Criteria and Reported Performance (Improvement with AI Aid)

    The acceptance criterion was not stated as a numerical threshold; for every row it was that reader AUC "significantly improved" with AI aid. Reported AUC improvements:

    • Nodule, Emergency Medicine Physicians: 0.136, 95% CI [0.107, 0.17], p < 0.001
    • Nodule, Radiologists: 0.038, 95% CI [0.026, 0.052], p < 0.001
    • Mediastinum/Hila Abnormality, Emergency Medicine Physicians: 0.158, 95% CI [0.14, 0.178], p < 0.001
    • Mediastinum/Hila Abnormality, Radiologists: 0.057, 95% CI [0.039, 0.077], p < 0.001
    • Consolidation, Emergency Medicine Physicians: 0.099, 95% CI [0.083, 0.116], p < 0.001
    • Consolidation, Radiologists: 0.059, 95% CI [0.038, 0.079], p < 0.001
    • Pleural Space Abnormality, Emergency Medicine Physicians: 0.127, 95% CI [0.078, 0.18], p < 0.001
    • Pleural Space Abnormality, Radiologists: 0.034, 95% CI [0.019, 0.049], p < 0.001

    The acceptance criteria for the standalone performance are implied by the presentation of high AUC, sensitivity, and specificity metrics, suggesting that these values met an internal performance threshold deemed acceptable by the manufacturer and the FDA for market clearance. The MRMC study explicitly states that "Reader AUC estimates for both specialties significantly improved for all four categories (p-values < 0.001)," which serves as the acceptance criterion for the human-in-the-loop performance.
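    The 95% bootstrap CIs quoted for AUC are typically obtained by resampling cases with replacement and taking percentiles of the resampled statistic. The submission does not describe its exact resampling scheme; the sketch below is a generic percentile-bootstrap illustration with toy data.

    ```python
    import random

    def auc(scores, labels):
        """Rank-based AUC: probability a random positive case outscores a random negative."""
        pos = [s for s, y in zip(scores, labels) if y == 1]
        neg = [s for s, y in zip(scores, labels) if y == 0]
        wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
        return wins / (len(pos) * len(neg))

    def bootstrap_ci(scores, labels, n_boot=2000, alpha=0.05, seed=0):
        """Percentile bootstrap CI for AUC, resampling cases with replacement."""
        rng = random.Random(seed)
        n = len(scores)
        stats = []
        for _ in range(n_boot):
            idx = [rng.randrange(n) for _ in range(n)]
            ys = [labels[i] for i in idx]
            if 0 < sum(ys) < n:  # resample must contain both classes
                stats.append(auc([scores[i] for i in idx], ys))
        stats.sort()
        lo = stats[int(alpha / 2 * len(stats))]
        hi = stats[int((1 - alpha / 2) * len(stats)) - 1]
        return lo, hi
    ```

    With 3,884 cases, such resampling gives the narrow intervals seen in the standalone table.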

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Standalone Test Set: 3,884 chest radiograph cases.
    • Data Provenance (Standalone and MRMC): "representative of the intended use population." While the document does not explicitly state the country of origin or whether the data was retrospective or prospective, most such studies use retrospective data from diverse patient populations to represent real-world clinical scenarios. The use of "U.S. board-certified radiologists" for ground truth suggests U.S. data sources are likely.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Number of Experts: A "panel of U.S. board-certified radiologists" was used. The exact number is not specified.
    • Qualifications of Experts: U.S. board-certified radiologists. No specific experience levels (e.g., "10 years of experience") are mentioned.

    4. Adjudication Method for the Test Set

    The document does not explicitly state the adjudication method (e.g., 2+1, 3+1). It only mentions that a "panel of U.S. board-certified radiologists" assessed the presence or absence of ROIs. This typically implies a consensus-based approach, but the specific mechanics (e.g., how disagreements were resolved) are not provided.
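    For illustration only, one common consensus scheme is a simple majority vote over the panel's independent reads; the sketch below is hypothetical and not the method actually used, which the document does not disclose.

    ```python
    def majority_consensus(reads):
        """Return 1 if a strict majority of panel reads mark the ROI present, else 0.

        reads: per-reader labels, 1 = ROI present, 0 = absent. With an even
        panel, ties return 0 here; in practice an adjudicating reader would
        resolve them.
        """
        return 1 if sum(reads) * 2 > len(reads) else 0
    ```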

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Yes, an MRMC comparative effectiveness study was done.
    • Effect Size of Human Readers Improvement with AI vs. without AI Assistance (Difference in AUC):
      • Nodule Detection:
        • Emergency Medicine Physicians: 0.136 (95% CI [0.107, 0.17])
        • Radiologists: 0.038 (95% CI [0.026, 0.052])
      • Mediastinum/Hila Abnormality Detection:
        • Emergency Medicine Physicians: 0.158 (95% CI [0.14, 0.178])
        • Radiologists: 0.057 (95% CI [0.039, 0.077])
      • Consolidation Detection:
        • Emergency Medicine Physicians: 0.099 (95% CI [0.083, 0.116])
        • Radiologists: 0.059 (95% CI [0.038, 0.079])
      • Pleural Space Abnormality Detection:
        • Emergency Medicine Physicians: 0.127 (95% CI [0.078, 0.18])
        • Radiologists: 0.034 (95% CI [0.019, 0.049])

    6. Standalone (Algorithm Only without Human-in-the-Loop Performance)

    • Yes, a standalone clinical performance study was done. The results are presented in Table 2 (AUC) and Table 3 (Specificity/Sensitivity).

    7. Type of Ground Truth Used

    • Expert Consensus: The ground truth for both the standalone and MRMC studies was established by a "panel of U.S. board-certified radiologists" who assessed the presence or absence of ROIs.

    8. Sample Size for the Training Set

    The document does not specify the sample size used for the training set. It only mentions the "standalone clinical performance study on 3,884 chest radiograph cases representative of the intended use population" for testing.

    9. How the Ground Truth for the Training Set Was Established

    The document does not provide details on how the ground truth for the training set was established. It only describes the establishment of ground truth for the test set by a panel of U.S. board-certified radiologists.
