
510(k) Data Aggregation

    K Number: K232410
    Device Name: SmartChest
    Manufacturer: Milvue
    Date Cleared: 2024-05-10 (274 days)
    Regulation Number: 892.2080
    Intended Use

    SmartChest is a radiological computer-assisted triage and notification software that analyzes frontal chest X-ray images (Postero-Anterior (PA) or Antero-Posterior (AP)) of transitional adolescents (18-21 years old, treated as adults) and adults (≥22 years old) for the presence of suspected pleural effusion and/or pneumothorax. SmartChest uses an artificial intelligence algorithm to analyze the images for features suggestive of critical findings and provides case-level output to a PACS (or other DICOM storage platform) for worklist prioritization.

    As a passive-notification, prioritization-only software tool within the standard-of-care workflow, SmartChest does not send proactive alerts directly to a trained medical specialist.

    SmartChest is not intended to direct attention to a specific portion of an image. Its results are not intended to be used on a stand-alone basis for clinical decision-making.

    Device Description

    SmartChest is a radiological computer-assisted triage and notification software that analyzes frontal chest X-ray images (Postero-Anterior (PA) and/or Antero-Posterior (AP)) of transitional adolescents (18-21 years old, treated as adults) and adults (≥22 years old) for the presence of suspected pleural effusion and/or pneumothorax. The software uses AI-based image analysis algorithms to detect these findings.

    SmartChest provides case-level output to support worklist prioritization by appropriately trained medical specialists qualified to interpret chest radiographs. Images are automatically received from the user's image acquisition or storage systems (e.g., PACS or other DICOM storage platforms) and processed by SmartChest for analysis. After receiving the chest X-ray images, the device automatically analyzes them and identifies the pre-specified findings (pleural effusion and/or pneumothorax). The analysis results are then passively sent by SmartChest via a notification to the worklist software being used (PACS or other platforms).
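
    To make the flow concrete, here is a minimal sketch of a receive-analyze-notify pipeline of this kind, assuming pydicom for DICOM I/O. `run_model` is a hypothetical stand-in for the proprietary algorithm; none of this is Milvue's actual implementation.

```python
# Illustrative sketch of a receive -> analyze -> notify flow, assuming pydicom.
import pydicom

FRONTAL_VIEWS = {"PA", "AP"}  # the intended use covers frontal views only


def run_model(pixels):
    """Hypothetical placeholder for the AI analysis of one image."""
    return set()  # would return e.g. {"PLEURAL EFFUSION"} or {"PNEUMOTHORAX"}


def analyze_study(image_paths):
    """Analyze a received chest X-ray study and return a case-level status."""
    findings, processed = set(), 0
    for path in image_paths:
        ds = pydicom.dcmread(path)
        # Images outside the intended use (e.g., lateral views) are skipped.
        if ds.get("ViewPosition") not in FRONTAL_VIEWS:
            continue
        processed += 1
        findings |= run_model(ds.pixel_array)
    if processed == 0:
        return "NOT PROCESSED", findings
    return ("SUSPECTED FINDING" if findings else "CASE PROCESSED"), findings
```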

    The results are made available via a newly generated DICOM series (containing a secondary capture image), whose DICOM tags contain the following information:

    1. "SUSPECTED FINDING" or "CASE PROCESSED" if the algorithm ran successfully, "NOT PROCESSED" if the algorithm receives a study containing chest images that are not part of the intended use (lateral views or excluded age for example).

    2. "SUSPECTED PLEURAL EFFUSION" OR "SUSPECTED PNEUMOTHORAX" if one pre-specified finding category identified OR,

    3."SUSPECTED PLEURAL EFFUSION, PNEUMOTHORAX" if the two pre-specified finding categories identified

      1. The secondary capture image returned in the storage system indicates at the study-level:
    • The number of images received by SmartChest,
    • The number of images processed by SmartChest,
    • The status of the study: "NOT PROCESSED", "SUSPECTED FINDING" or "CASE PROCESSED".
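
    Taken together, the tag rules above form a small decision table. A minimal sketch of that mapping follows; the function name and its inputs are assumptions for illustration, not from the submission.

```python
def status_text(processed: bool, findings: set) -> str:
    """Map analysis results to the case-level tag text described above."""
    if not processed:
        return "NOT PROCESSED"   # study outside the intended use
    if not findings:
        return "CASE PROCESSED"  # ran successfully, nothing flagged
    if findings == {"PLEURAL EFFUSION", "PNEUMOTHORAX"}:
        return "SUSPECTED PLEURAL EFFUSION, PNEUMOTHORAX"
    return f"SUSPECTED {next(iter(findings))}"  # single finding category
```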

    The DICOM storage component may be a Picture Archiving and Communication System (PACS) or another local storage platform. This allows appropriately trained medical specialists to group suspicious exams together that may benefit from prioritization. Chest radiographs without an identified anomaly are placed in the worklist for routine review, which is the current standard of care.
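
    On the PACS side, such a case-level flag is typically used to float flagged exams to the top of the reading worklist while unflagged exams keep their routine order. A minimal sketch of that grouping (illustrative only; the `Exam` structure is an assumption):

```python
from typing import NamedTuple


class Exam(NamedTuple):
    accession: str
    status: str  # "SUSPECTED FINDING", "CASE PROCESSED", or "NOT PROCESSED"


def prioritize(worklist):
    """Stable sort: flagged exams first; all others keep their arrival order."""
    return sorted(worklist, key=lambda e: e.status != "SUSPECTED FINDING")
```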

    The device is not intended to be a rule-out device; for cases that have been processed without a notification for the pre-specified suspected findings, the absence of a notification should not be taken to mean that those findings are excluded. SmartChest does not alter the order of, nor remove, imaging exams from the interpretation queue. Unflagged cases should still be interpreted by medical specialists.

    The notification is contextual and does not provide any diagnostic information. The results are not intended to be used on a stand-alone basis for clinical decision-making. The summary image will display the following statement: "The product is not for Diagnostic Use-For Prioritization Only".

    AI/ML Overview

    The information provided is a 510(k) summary for the Milvue SmartChest device. Here's a breakdown of its acceptance criteria and the study proving it meets those criteria:

    1. Table of Acceptance Criteria and Reported Device Performance:

    The document doesn't explicitly state "acceptance criteria" as a separate section with specific numerical thresholds for sensitivity, specificity, and AUC that were set a priori. However, it reports the device's performance metrics for two distinct conditions: Pneumothorax and Pleural Effusion. The implication is that these reported performances met an acceptable level for substantial equivalence to the predicate device.

    Performance Metric             Pneumothorax                 Pleural Effusion
    ROC AUC                        0.989 [0.978; 0.997]         0.975 [0.960; 0.987]
    Sensitivity                    92.7% [95% CI: 87.4-96.2]    93.3% [95% CI: 88.1-96.4]
    Specificity                    97.3% [95% CI: 93.4-99.1]    90.0% [95% CI: 84.1-94.1]
    Mean Execution Time (local)    2.322 ± 0.267 seconds        2.288 ± 0.165 seconds
    Mean Execution Time (cloud)    28.542 ± 8.254 seconds       28.257 ± 7.226 seconds
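
    For context, sensitivity and specificity are standard confusion-matrix ratios, and the bracketed ranges are 95% confidence intervals. The submission does not say which interval method was used; the sketch below shows one common choice, the Wilson score interval, with counts invented purely for illustration.

```python
import math


def wilson_ci(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half


# Invented example counts (NOT the study's data):
tp, positives = 140, 151
lower, upper = wilson_ci(tp, positives)
print(f"sensitivity = {tp / positives:.1%}, 95% CI = [{lower:.1%}, {upper:.1%}]")
```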

    2. Sample Size and Data Provenance for the Test Set:

    • Sample Size for Each Study: 300 Chest X-Ray cases
    • Total Test Set Size (extrapolated based on two studies): 600 Chest X-Ray cases (300 for pneumothorax, 300 for pleural effusion).
    • Data Provenance: The test data was obtained from multiple institutions across the US, at sites different from the training data sites, ensuring independence. It included images from rural (49 for pneumothorax, 53 for pleural effusion) and urban (251 for pneumothorax, 247 for pleural effusion) sites in states including New York, North Carolina, Texas, and Washington. The data was retrospective.

    3. Number of Experts and Qualifications for Ground Truth - Test Set:

    • Number of Experts: Three.
    • Qualifications of Experts: All three were American Board of Radiology (ABR)-certified radiologists with a minimum of 5 years of experience.

    4. Adjudication Method for the Test Set:

    • The ground truth was established by three ABR-certified radiologists. The first two radiologists independently interpreted each case. The third radiologist independently reviewed cases where there was a disagreement between the first two. The final ground truth was determined by majority consensus (2+1 adjudication).
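
    The 2+1 scheme reduces to a simple majority vote. A minimal sketch, assuming binary per-reader labels (illustrative only, not the study's tooling):

```python
def adjudicate(reader1: bool, reader2: bool, reader3=None) -> bool:
    """2+1 adjudication: two primary reads; a third read breaks any tie."""
    if reader1 == reader2:
        return reader1  # primary readers agree; no adjudication needed
    if reader3 is None:
        raise ValueError("disagreement requires the third, adjudicating read")
    return reader3  # with the first two split, the majority equals the third read
```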

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

    • The document does not indicate that an MRMC comparative effectiveness study was done to measure human reader improvement with AI assistance. The study focuses solely on the standalone performance of the AI algorithm.

    6. Standalone Performance Study:

    • Yes, a standalone performance study was done. Two individual standalone performance assessment studies were conducted to evaluate the effectiveness of SmartChest for pneumothorax and pleural effusion separately.

    7. Type of Ground Truth Used:

    • The ground truth for the test set was established by expert consensus among three ABR-certified radiologists.

    8. Sample Size for the Training Set:

    • The training set was composed of 9,560 images.

    9. How the Ground Truth for the Training Set Was Established:

    • The document states that the training data was collected from an unfiltered stream of exams in four French institutions between October 2018 and December 2021. It lists the distribution of exams per pathology (No findings, Pleural Effusion, Pneumothorax).
    • While it explains where and when the data was collected and the distribution of findings, the document does not explicitly describe the detailed process by which the ground truth labels for the training set were established. It only mentions that the data was processed to fit the model's requirements and that the images were used to train the model. This typically implies that the original clinical reports or expert annotations associated with these studies were used to create the ground truth labels for training, but the specific expert qualifications or adjudication methods for the training set ground truth are not provided.