ChestView US is a radiological Computer-Assisted Detection (CADe) software device that analyzes frontal and lateral chest radiographs of patients presenting with symptoms (e.g., dyspnea, cough, pain) or suspected of findings related to regions of interest (ROIs) in the lungs, airways, mediastinum/hila, and pleural space. The device uses machine learning techniques to identify ROIs and produce boxes around them. The boxes are labeled with one of the following radiographic findings: Nodule, Pleural Space Abnormality, Mediastinum/Hila Abnormality, and Consolidation.
ChestView US is intended for use as a concurrent reading aid for radiologists and emergency medicine physicians. It does not replace the role of radiologists and emergency medicine physicians or of other diagnostic testing in the standard of care. ChestView US is for prescription use only and is indicated for adults only.
ChestView US is a radiological Computer-Assisted Detection (CADe) software device intended to analyze frontal and lateral chest radiographs for suspicious regions of interest (ROIs): Nodule, Consolidation, Pleural Space Abnormality and Mediastinum/Hila Abnormality.
The nodule ROI category was developed from images with focal nonlinear opacity with a generally spherical shape situated in the pulmonary interstitium.
The consolidation ROI category was developed from images with areas of increased attenuation of the lung parenchyma due to the replacement of air in the alveoli.
The pleural space abnormality ROI category was developed from images with:
- Pleural effusion: an abnormal presence of fluid in the pleural space
- Pneumothorax: an abnormal presence of air or gas in the pleural space that separates the parietal and visceral pleura
The mediastinum/hila abnormality ROI category was developed from images with enlargement of the mediastinum or the hilar region with a deformation of its contours.
ChestView US can be deployed in the cloud and connected to several computing and X-ray imaging platforms, such as radiographic systems or PACS. More precisely, ChestView US is deployed in the cloud and connected to a DICOM Source/Destination with a DICOM Viewer, i.e., a PACS.
After the radiographs are acquired from the patient and stored in the DICOM Source, ChestView US automatically receives them from the user's DICOM Source through intermediate DICOM node(s) (for example, a specific Gateway or a dedicated API). The DICOM Source can be the user's image storage system (for example, the Picture Archiving and Communication System, or PACS) or other radiological equipment (for example, X-ray systems).
Once received by ChestView US, the radiographs are automatically processed by the AI algorithm to identify regions of interest. Based on the processing result, ChestView US generates result files in DICOM format. These result files consist of annotated images with boxes drawn around the regions of interest on a copy of all images (as an overlay). ChestView US does not alter the original images, nor does it change the order of original images or delete any image from the DICOM Source.
Once available, the result files are sent by ChestView US to the DICOM Destination through the same intermediate DICOM node(s). Similar to the DICOM Source, the DICOM Destination can be the user's image storage system (for example, the Picture Archiving and Communication System, or PACS), or other radiological equipment (for example X-ray systems). The DICOM Source and the DICOM Destination are not necessarily identical.
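As a rough illustration of this receive-process-forward flow, the sketch below uses the open-source pynetdicom library to act as a DICOM Storage SCP toward the Source and a Storage SCU toward the Destination. The AE titles, addresses, and the `analyze_and_annotate` step are placeholders; the vendor's actual gateway and API are not described in this summary.

```python
# Minimal sketch of the receive-process-forward loop described above,
# using pynetdicom. Hostnames, ports, AE titles, and the processing
# step are illustrative assumptions, not the vendor's implementation.
from pynetdicom import AE, evt, AllStoragePresentationContexts
from pynetdicom.sop_class import SecondaryCaptureImageStorage

DESTINATION = ("pacs.example.org", 11112)  # DICOM Destination (e.g., a PACS)

def handle_store(event):
    """Called for each radiograph pushed by the DICOM Source."""
    ds = event.dataset
    ds.file_meta = event.file_meta
    result = analyze_and_annotate(ds)  # AI processing (hypothetical, not shown)
    forward(result)                    # send the annotated result onward
    return 0x0000                      # DICOM success status

def forward(ds):
    """Send the result file to the DICOM Destination via C-STORE."""
    ae = AE(ae_title="CHESTVIEW")
    ae.add_requested_context(SecondaryCaptureImageStorage)
    assoc = ae.associate(*DESTINATION)
    if assoc.is_established:
        assoc.send_c_store(ds)
        assoc.release()

# Listen for C-STORE requests from the DICOM Source.
ae = AE(ae_title="CHESTVIEW")
ae.supported_contexts = AllStoragePresentationContexts
ae.start_server(("0.0.0.0", 11112), evt_handlers=[(evt.EVT_C_STORE, handle_store)])
```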
The DICOM Destination can be used to visualize the result files provided by ChestView US or to transfer the results to another DICOM host for visualization. Users view the results there as a concurrent reading aid when providing their diagnosis.
For each exam analyzed by ChestView US, a DICOM Secondary Capture is generated.
If any ROI is detected by ChestView US, the output DICOM image includes a copy of the original images of the study and the following information:
- Above the images, a header with the text "CHESTVIEW ROI" and the list of the findings detected in the image.
- Around each ROI, a bounding box with a solid or dotted line depending on the confidence of the algorithm, and the type of ROI written above the box (the operating-point logic is sketched after this section):
- Dotted-line Bounding Box: identified region of interest when the confidence score of the AI algorithm for the possible finding is above the high-sensitivity operating point and below the high-specificity operating point; displayed as a dotted bounding box around the area of interest.
- Solid-line Bounding Box: identified region of interest when the confidence score of the AI algorithm for the finding is above the high-specificity operating point; displayed as a solid bounding box around the area of interest.
- Below the images, a footer with:
- The scope of ChestView US, so that the user always has available the list of ROI types covered by the device's indications for use and to avoid any risk of confusion or misinterpretation of the types of ROI detected by ChestView US.
- The total number of regions of interest identified by ChestView US on the exam (sum of solid-line and dotted-line bounding boxes)
If no ROI is detected by ChestView US, the output DICOM image includes a copy of the original images of the study and the text "NO CHESTVIEW ROI", together with the scope of ChestView US so that the user always has available the list of ROI types covered by the device's indications for use and to avoid any risk of confusion or misinterpretation of the types of ROI detected by ChestView US.

Finally, if ChestView US cannot process the exam because it is outside the indications for use of the device or because information required for processing is missing, the output DICOM image includes a copy of the original images of the study and, in a header, the text "OUT OF SCOPE" with a caution message explaining why no result was provided by the device.
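The dotted/solid rendering rule above amounts to comparing each detection's confidence score against the two operating points. The sketch below illustrates that decision logic in Python; the threshold values, the `Detection` type, and the exact assembly of the header strings are assumptions for illustration, since the device's actual operating points are not published in this summary.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical operating points -- the cleared device's actual
# thresholds are not published in this summary.
HIGH_SENSITIVITY_OP = 0.30
HIGH_SPECIFICITY_OP = 0.70

@dataclass
class Detection:
    finding: str   # e.g., "Nodule"
    score: float   # algorithm confidence in [0, 1]
    box: tuple     # (x0, y0, x1, y1) in image pixels

def box_style(score: float) -> Optional[str]:
    """Map a confidence score to the bounding-box style described above."""
    if score >= HIGH_SPECIFICITY_OP:
        return "solid"    # above the high-specificity operating point
    if score >= HIGH_SENSITIVITY_OP:
        return "dotted"   # possible finding, between the two operating points
    return None           # below both operating points: no box drawn

def render_header(detections: list) -> str:
    """Header text per the output description: findings list or no-ROI text."""
    drawn = [d for d in detections if box_style(d.score) is not None]
    if not drawn:
        return "NO CHESTVIEW ROI"
    findings = sorted({d.finding for d in drawn})
    return "CHESTVIEW ROI: " + ", ".join(findings)
```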
Here's a breakdown of the acceptance criteria and study details for ChestView US:
1. Table of Acceptance Criteria and Reported Device Performance
Standalone Performance (ChestView US)
Acceptance criteria were not explicitly stated for any standalone metric; the reported performance, with 95% bootstrap confidence intervals, was:

ROIs | AUC [95% CI] | Sensitivity @ High-Sensitivity OP [95% CI] | Specificity @ High-Sensitivity OP [95% CI] | Sensitivity @ High-Specificity OP [95% CI] | Specificity @ High-Specificity OP [95% CI]
---|---|---|---|---|---
NODULE | 0.93 [0.921; 0.938] | 0.829 [0.801; 0.86] | 0.956 [0.948; 0.963] | 0.482 [0.455; 0.518] | 0.994 [0.99; 0.996]
MEDIASTINUM/HILA ABNORMALITY | 0.922 [0.91; 0.934] | 0.793 [0.739; 0.832] | 0.975 [0.971; 0.98] | 0.535 [0.475; 0.592] | 0.992 [0.99; 0.994]
CONSOLIDATION | 0.952 [0.947; 0.957] | 0.853 [0.822; 0.879] | 0.946 [0.938; 0.952] | 0.61 [0.583; 0.643] | 0.985 [0.981; 0.989]
PLEURAL SPACE ABNORMALITY | 0.973 [0.97; 0.975] | 0.892 [0.87; 0.911] | 0.965 [0.958; 0.971] | 0.87 [0.85; 0.896] | 0.975 [0.97; 0.981]
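The 95% bootstrap confidence intervals above suggest case-level resampling, though the exact scheme is not described in the summary. A minimal sketch of a percentile bootstrap for sensitivity and specificity, under that assumption:

```python
import numpy as np

def bootstrap_ci(y_true, y_pred, metric, n_boot=2000, alpha=0.05, seed=0):
    """Case-level percentile bootstrap CI for a binary-classification metric.

    y_true: ground-truth labels (0/1); y_pred: binarized device output (0/1);
    metric: a function of (y_true, y_pred), e.g., sensitivity or specificity.
    """
    rng = np.random.default_rng(seed)
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    n = len(y_true)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)  # resample cases with replacement
        stats.append(metric(y_true[idx], y_pred[idx]))
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return metric(y_true, y_pred), (lo, hi)

def sensitivity(y, p):
    return (p[y == 1] == 1).mean()   # TP / (TP + FN)

def specificity(y, p):
    return (p[y == 0] == 0).mean()   # TN / (TN + FP)
```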
MRMC Study Acceptance Criteria and Reported Performance (Improvement with AI Aid)
ROI Category | Reader Type | Acceptance Criteria (AUC Improvement) | Reported AUC Improvement | 95% CI for AUC Improvement | P-value
---|---|---|---|---|---
Nodule | Emergency Medicine Physicians | No numerical threshold stated ("significantly improved") | 0.136 | [0.107, 0.17] | |
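The reported figure is the change in reader-averaged AUC with versus without AI aid. A full MRMC analysis uses methods such as Obuchowski-Rockette or Dorfman-Berbaum-Metz to account for reader and case variability; the sketch below computes only the point estimate of the per-reader paired AUC difference, with all function and variable names being illustrative assumptions:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def mean_auc_improvement(y_true, scores_unaided, scores_aided):
    """Average over readers of (aided AUC - unaided AUC) for one ROI category.

    y_true: (n_cases,) ground truth; scores_*: (n_readers, n_cases) reader
    confidence scores without / with ChestView US assistance. A full MRMC
    analysis (e.g., Obuchowski-Rockette) additionally models reader and case
    variability to produce the confidence interval and p-value.
    """
    deltas = [
        roc_auc_score(y_true, aided) - roc_auc_score(y_true, unaided)
        for unaided, aided in zip(scores_unaided, scores_aided)
    ]
    return float(np.mean(deltas))
```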
§ 892.2070 Medical image analyzer.
(a) Identification. Medical image analyzers, including computer-assisted/aided detection (CADe) devices for mammography breast cancer, ultrasound breast lesions, radiograph lung nodules, and radiograph dental caries detection, is a prescription device that is intended to identify, mark, highlight, or in any other manner direct the clinicians' attention to portions of a radiology image that may reveal abnormalities during interpretation of patient radiology images by the clinicians. This device incorporates pattern recognition and data analysis capabilities and operates on previously acquired medical images. This device is not intended to replace the review by a qualified radiologist, and is not intended to be used for triage, or to recommend diagnosis.

(b) Classification. Class II (special controls). The special controls for this device are:

(1) Design verification and validation must include:

(i) A detailed description of the image analysis algorithms including a description of the algorithm inputs and outputs, each major component or block, and algorithm limitations.

(ii) A detailed description of pre-specified performance testing methods and dataset(s) used to assess whether the device will improve reader performance as intended and to characterize the standalone device performance. Performance testing includes one or more standalone tests, side-by-side comparisons, or a reader study, as applicable.

(iii) Results from performance testing that demonstrate that the device improves reader performance in the intended use population when used in accordance with the instructions for use. The performance assessment must be based on appropriate diagnostic accuracy measures (e.g., receiver operator characteristic plot, sensitivity, specificity, predictive value, and diagnostic likelihood ratio). The test dataset must contain a sufficient number of cases from important cohorts (e.g., subsets defined by clinically relevant confounders, effect modifiers, concomitant diseases, and subsets defined by image acquisition characteristics) such that the performance estimates and confidence intervals of the device for these individual subsets can be characterized for the intended use population and imaging equipment.

(iv) Appropriate software documentation (e.g., device hazard analysis; software requirements specification document; software design specification document; traceability analysis; description of verification and validation activities including system level test protocol, pass/fail criteria, and results; and cybersecurity).

(2) Labeling must include the following:

(i) A detailed description of the patient population for which the device is indicated for use.

(ii) A detailed description of the intended reading protocol.

(iii) A detailed description of the intended user and user training that addresses appropriate reading protocols for the device.

(iv) A detailed description of the device inputs and outputs.

(v) A detailed description of compatible imaging hardware and imaging protocols.

(vi) Discussion of warnings, precautions, and limitations must include situations in which the device may fail or may not operate at its expected performance level (e.g., poor image quality or for certain subpopulations), as applicable.

(vii) Device operating instructions.

(viii) A detailed summary of the performance testing, including: test methods, dataset characteristics, results, and a summary of sub-analyses on case distributions stratified by relevant confounders, such as lesion and organ characteristics, disease stages, and imaging equipment.