510(k) Data Aggregation

ChestView US (267 days)
ChestView US is a radiological Computer-Assisted Detection (CADe) software device that analyzes frontal and lateral chest radiographs of patients presenting with symptoms (e.g. dyspnea, cough, pain) or suspected of having findings related to regions of interest (ROIs) in the lungs, airways, mediastinum/hila and pleural space. The device uses machine learning techniques to identify ROIs and produce boxes around them. The boxes are labeled with one of the following radiographic findings: Nodule, Pleural space abnormality, Mediastinum/Hila abnormality, and Consolidation.
ChestView US is intended for use as a concurrent reading aid for radiologists and emergency medicine physicians. It does not replace the role of radiologists and emergency medicine physicians or of other diagnostic testing in the standard of care. ChestView US is for prescription use only and is indicated for adults only.
ChestView US is a radiological Computer-Assisted Detection (CADe) software device intended to analyze frontal and lateral chest radiographs for suspicious regions of interest (ROIs): Nodule, Consolidation, Pleural Space Abnormality and Mediastinum/Hila Abnormality.
The nodule ROI category was developed from images with focal nonlinear opacity with a generally spherical shape situated in the pulmonary interstitium.
The consolidation ROI category was developed from images with an area of increased attenuation of the lung parenchyma due to the replacement of air in the alveoli.
The pleural space abnormality ROI category was developed from images with:
- Pleural effusion, which is an abnormal presence of fluid in the pleural space
- Pneumothorax, which is an abnormal presence of air or gas in the pleural space that separates the parietal and the visceral pleura
The mediastinum/hila abnormality ROI category was developed from images with enlargement of the mediastinum or the hilar region with a deformation of its contours.
ChestView US can be deployed in the cloud and connected to several computing platforms and X-ray imaging platforms such as radiographic systems or PACS. More precisely, ChestView US can be deployed in the cloud connected to a DICOM Source/Destination with a DICOM Viewer, i.e. a PACS.
After the acquisition of the radiographs on the patient and their storage in the DICOM Source, the radiographs are automatically received by ChestView US from the user's DICOM Source through intermediate DICOM node(s) (for example, a specific Gateway, or a dedicated API). The DICOM Source can be the user's image storage system (for example, the Picture Archiving and Communication System, or PACS), or other radiological equipment (for example X-ray systems).
Once received by ChestView US, the radiographs are automatically processed by the AI algorithm to identify regions of interest. Based on the processing result, ChestView US generates result files in DICOM format. These result files consist of annotated images with boxes drawn around the regions of interest on a copy of all images (as an overlay). ChestView US does not alter the original images, nor does it change the order of original images or delete any image from the DICOM Source.
Once available, the result files are sent by ChestView US to the DICOM Destination through the same intermediate DICOM node(s). Similar to the DICOM Source, the DICOM Destination can be the user's image storage system (for example, the Picture Archiving and Communication System, or PACS), or other radiological equipment (for example X-ray systems). The DICOM Source and the DICOM Destination are not necessarily identical.
The DICOM Destination can be used to visualize the result files provided by ChestView US or to transfer the results to another DICOM host for visualization. The users are then able to use them as a concurrent reading aid to provide their diagnosis.
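The routing pattern described above (radiographs received from a DICOM Source, processed, and a result file pushed to a DICOM Destination) can be sketched with pydicom/pynetdicom. This is an illustrative sketch only, not Gleamer's implementation; the destination address and the analyze_and_annotate() stub are assumptions.

```python
import copy

from pydicom.uid import generate_uid
from pynetdicom import AE, evt, AllStoragePresentationContexts
from pynetdicom.sop_class import SecondaryCaptureImageStorage

DEST_ADDR = ("dicom-destination.example.org", 11112)   # DICOM Destination (assumed)

def analyze_and_annotate(ds):
    """Placeholder for the AI step: copy the image, draw the bounding-box
    overlay into the copy, and repackage it as a Secondary Capture."""
    result = copy.deepcopy(ds)                          # never alter the original image
    # ... run detection and burn boxes/header/footer into result.PixelData here ...
    result.SOPClassUID = SecondaryCaptureImageStorage
    result.SOPInstanceUID = generate_uid()
    result.file_meta.MediaStorageSOPClassUID = result.SOPClassUID
    result.file_meta.MediaStorageSOPInstanceUID = result.SOPInstanceUID
    return result

def handle_store(event):
    """Called for every radiograph pushed by the DICOM Source."""
    ds = event.dataset
    ds.file_meta = event.file_meta
    result = analyze_and_annotate(ds)

    sender = AE()
    sender.add_requested_context(SecondaryCaptureImageStorage)
    assoc = sender.associate(*DEST_ADDR)
    if assoc.is_established:
        assoc.send_c_store(result)                      # forward result to the Destination
        assoc.release()
    return 0x0000                                       # accept the incoming C-STORE

# Listen for studies pushed by the DICOM Source (e.g. a PACS or X-ray system).
receiver = AE()
receiver.supported_contexts = AllStoragePresentationContexts
receiver.start_server(("0.0.0.0", 11112), evt_handlers=[(evt.EVT_C_STORE, handle_store)])
```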
For each exam analyzed by ChestView US, a DICOM Secondary Capture is generated.
If any ROI is detected by ChestView US, the output DICOM image includes a copy of the original images of the study and the following information:
- Above the images, a header with the text "CHESTVIEW ROI" and the list of the findings detected in the image.
- Around the ROI(s), a bounding box drawn with a solid or dotted line depending on the confidence of the algorithm, with the type of ROI written above the box (see the sketch after this output description):
- Dotted-line Bounding Box: identified region of interest whose AI-algorithm confidence for the possible finding is above the high-sensitivity operating point and below the high-specificity operating point; displayed as a dotted bounding box around the area of interest.
- Solid-line Bounding Box: identified region of interest whose AI-algorithm confidence for the finding is above the high-specificity operating point; displayed as a solid bounding box around the area of interest.
- Below the images, a footer with:
- The scope of ChestView US, so that the user always has available the list of ROI types that are within the device's indications for use, avoiding any risk of confusion or misinterpretation of the types of ROI detected by ChestView US.
- The total number of regions of interest identified by ChestView US on the exam (sum of solid-line and dotted-line bounding boxes)
If no ROI is detected by ChestView US, the output DICOM image includes a copy of the original images of the study and the text "NO CHESTVIEW ROI", together with the scope of ChestView US so that the user always has available the list of ROI types that are within the device's indications for use, avoiding any risk of confusion or misinterpretation of the types of ROI detected by ChestView US.

Finally, if the exam cannot be processed by ChestView US because it is outside the indications for use of the device or because information required for processing is missing, the output DICOM image includes a copy of the original images of the study and, in a header, the text "OUT OF SCOPE" together with a caution message explaining why no result was provided by the device.
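Below is a minimal sketch of the dotted/solid bounding-box rule referenced above. The per-finding operating-point values are illustrative assumptions; the actual thresholds are internal to the device and not stated in the summary.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical operating points per finding, expressed as (high-sensitivity,
# high-specificity) confidence thresholds; the real values are not disclosed.
OPERATING_POINTS = {
    "Nodule": (0.30, 0.70),
    "Consolidation": (0.25, 0.65),
    "Pleural space abnormality": (0.20, 0.60),
    "Mediastinum/Hila abnormality": (0.35, 0.75),
}

@dataclass
class Detection:
    finding: str          # one of the four ROI categories
    confidence: float     # algorithm confidence for this region
    box: tuple            # (x0, y0, x1, y1) in image coordinates

def box_style(det: Detection) -> Optional[str]:
    """Map a detection to the rendering rule described above."""
    high_sens, high_spec = OPERATING_POINTS[det.finding]
    if det.confidence >= high_spec:
        return "solid"    # above the high-specificity operating point
    if det.confidence >= high_sens:
        return "dotted"   # between the two operating points
    return None           # below the high-sensitivity operating point: not drawn

# Example: a consolidation scored at 0.55 falls between the two thresholds.
print(box_style(Detection("Consolidation", 0.55, (120, 340, 260, 470))))  # dotted
```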
Here's a breakdown of the acceptance criteria and study details for ChestView US:
1. Table of Acceptance Criteria and Reported Device Performance
Standalone Performance (ChestView US)
Acceptance criteria for the standalone metrics (AUC, sensitivity, specificity) are not explicitly stated in the submission; all intervals below are 95% bootstrap confidence intervals.

| ROIs | AUC [95% CI] | Sensitivity @ High-Sensitivity OP [95% CI] | Specificity @ High-Sensitivity OP [95% CI] | Sensitivity @ High-Specificity OP [95% CI] | Specificity @ High-Specificity OP [95% CI] |
|---|---|---|---|---|---|
| Nodule | 0.93 [0.921; 0.938] | 0.829 [0.801; 0.86] | 0.956 [0.948; 0.963] | 0.482 [0.455; 0.518] | 0.994 [0.99; 0.996] |
| Mediastinum/Hila Abnormality | 0.922 [0.91; 0.934] | 0.793 [0.739; 0.832] | 0.975 [0.971; 0.98] | 0.535 [0.475; 0.592] | 0.992 [0.99; 0.994] |
| Consolidation | 0.952 [0.947; 0.957] | 0.853 [0.822; 0.879] | 0.946 [0.938; 0.952] | 0.61 [0.583; 0.643] | 0.985 [0.981; 0.989] |
| Pleural Space Abnormality | 0.973 [0.97; 0.975] | 0.892 [0.87; 0.911] | 0.965 [0.958; 0.971] | 0.87 [0.85; 0.896] | 0.975 [0.97; 0.981] |
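For context, a 95% bootstrap confidence interval around a standalone AUC, as reported above, is commonly obtained by resampling cases with replacement. The sketch below shows the general idea with scikit-learn; it is not the sponsor's statistical code.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_with_bootstrap_ci(y_true, y_score, n_boot=2000, alpha=0.05, seed=0):
    """Point AUC plus a percentile bootstrap CI over resampled cases."""
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    point = roc_auc_score(y_true, y_score)
    rng = np.random.default_rng(seed)
    n, stats = len(y_true), []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)             # resample cases with replacement
        if len(np.unique(y_true[idx])) < 2:     # an AUC needs both classes present
            continue
        stats.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return point, (lo, hi)
```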
MRMC Study Acceptance Criteria and Reported Performance (Improvement with AI Aid)
Acceptance criterion (all rows): not stated as a numerical threshold; reader AUC was required to be "significantly improved" with the AI aid.

| ROI Category | Reader Type | Reported AUC Improvement | 95% Confidence Interval for AUC Improvement | P-value |
|---|---|---|---|---|
| Nodule | Emergency Medicine Physicians | 0.136 | [0.107, 0.17] | < 0.001 |
| Nodule | Radiologists | 0.038 | [0.026, 0.052] | < 0.001 |
| Mediastinum/Hila Abnormality | Emergency Medicine Physicians | 0.158 | [0.14, 0.178] | < 0.001 |
| Mediastinum/Hila Abnormality | Radiologists | 0.057 | [0.039, 0.077] | < 0.001 |
| Consolidation | Emergency Medicine Physicians | 0.099 | [0.083, 0.116] | < 0.001 |
| Consolidation | Radiologists | 0.059 | [0.038, 0.079] | < 0.001 |
| Pleural Space Abnormality | Emergency Medicine Physicians | 0.127 | [0.078, 0.18] | < 0.001 |
| Pleural Space Abnormality | Radiologists | 0.034 | [0.019, 0.049] | < 0.001 |
The acceptance criteria for the standalone performance are implied by the presentation of high AUC, sensitivity, and specificity metrics, suggesting that these values met an internal performance threshold deemed acceptable by the manufacturer and the FDA for market clearance. The MRMC study explicitly states that "Reader AUC estimates for both specialties significantly improved for all four categories (p-values < 0.001)," which serves as the acceptance criterion for the human-in-the-loop performance.
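The quantity tested in the MRMC study is the change in reader AUC with versus without the AI aid. Full MRMC analyses typically use methods (e.g. Obuchowski-Rousseau or Dorfman-Berbaum-Metz) that account for both reader and case variability; the simplified sketch below only conveys the paired-difference idea and is not the analysis used in the submission.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def mean_reader_auc_gain(y_true, unaided_scores, aided_scores):
    """Average per-reader AUC improvement (aided minus unaided).

    y_true: shape (cases,) ground-truth labels for one ROI category.
    unaided_scores, aided_scores: shape (readers, cases) suspicion scores.
    """
    gains = [
        roc_auc_score(y_true, aided) - roc_auc_score(y_true, unaided)
        for unaided, aided in zip(unaided_scores, aided_scores)
    ]
    return float(np.mean(gains))
```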
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Standalone Test Set: 3,884 chest radiograph cases.
- Data Provenance (Standalone and MRMC): "representative of the intended use population." While the document does not explicitly state the country of origin or whether the data was retrospective or prospective, most such studies use retrospective data from diverse patient populations to represent real-world clinical scenarios. The use of "U.S. board-certified radiologists" for ground truth suggests U.S. data sources are likely.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Number of Experts: A "panel of U.S. board-certified radiologists" was used. The exact number is not specified.
- Qualifications of Experts: U.S. board-certified radiologists. No specific experience levels (e.g., "10 years of experience") are mentioned.
4. Adjudication Method for the Test Set
The document does not explicitly state the adjudication method (e.g., 2+1, 3+1). It only mentions that a "panel of U.S. board-certified radiologists" assessed the presence or absence of ROIs. This typically implies a consensus-based approach, but the specific mechanics (e.g., how disagreements were resolved) are not provided.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Yes, an MRMC comparative effectiveness study was done.
- Effect Size of Human Readers Improvement with AI vs. without AI Assistance (Difference in AUC):
- Nodule Detection:
- Emergency Medicine Physicians: 0.136 (95% CI [0.107, 0.17])
- Radiologists: 0.038 (95% CI [0.026, 0.052])
- Mediastinum/Hila Abnormality Detection:
- Emergency Medicine Physicians: 0.158 (95% CI [0.14, 0.178])
- Radiologists: 0.057 (95% CI [0.039, 0.077])
- Consolidation Detection:
- Emergency Medicine Physicians: 0.099 (95% CI [0.083, 0.116])
- Radiologists: 0.059 (95% CI [0.038, 0.079])
- Pleural Space Abnormality Detection:
- Emergency Medicine Physicians: 0.127 (95% CI [0.078, 0.18])
- Radiologists: 0.034 (95% CI [0.019, 0.049])
6. Standalone (Algorithm Only without Human-in-the-Loop Performance)
- Yes, a standalone clinical performance study was done. The results are presented in Table 2 (AUC) and Table 3 (Specificity/Sensitivity).
7. Type of Ground Truth Used
- Expert Consensus: The ground truth for both the standalone and MRMC studies was established by a "panel of U.S. board-certified radiologists" who assessed the presence or absence of ROIs.
8. Sample Size for the Training Set
The document does not specify the sample size used for the training set. It only mentions the "standalone clinical performance study on 3,884 chest radiograph cases representative of the intended use population" for testing.
9. How the Ground Truth for the Training Set Was Established
The document does not provide details on how the ground truth for the training set was established. It only describes the establishment of ground truth for the test set by a panel of U.S. board-certified radiologists.
BoneMetrics US (247 days)
BoneMetrics US is a fully automated radiological image processing software device intended to aid users in the measurement of Cobb angles on frontal spine radiographs of individuals at least 4 years of age with suspected or present spinal deformities, such as scoliosis. It should not be used in place of a full patient evaluation or solely relied upon to make or confirm a diagnosis. The software device is to be used by healthcare professionals trained in radiology.
BoneMetrics US is intended to analyze radiographs using machine learning techniques to provide fully automated measurements of Cobb angles during the review of frontal spine radiographs. BoneMetrics US can be deployed in the cloud and connected to several computing platforms and X-ray imaging platforms such as radiographic systems or PACS. More precisely, BoneMetrics US can be deployed in the cloud connected to a DICOM Source/Destination with a DICOM Viewer, i.e. a PACS.

After the acquisition of the radiographs on the patient and their storage in the DICOM Source, the radiographs are automatically received by BoneMetrics US from the user's DICOM Source through intermediate DICOM node(s) (for example, a specific Gateway, or a dedicated API). The DICOM Source can be the user's image storage system (for example, the Picture Archiving and Communication System, or PACS), or other radiological equipment (for example, X-ray systems).

Once received by BoneMetrics US, the radiographs are automatically processed by the AI algorithm without requiring any user inputs. The algorithm identifies the keypoints corresponding to the corners of all the vertebrae seen on the images and calculates all possible angles between vertebrae. Only Cobb angles above 7° are retained. Based on the processing result, BoneMetrics US generates result files in DICOM format. These result files consist of annotated images with the measurements plotted on a copy of all images (as an overlay) and angle values displayed in degrees. BoneMetrics US does not alter the original images, nor does it change the order of original images or delete any image from the DICOM Source.

Once available, the result files are sent by BoneMetrics US to the DICOM Destination through the same intermediate DICOM node(s). Similar to the DICOM Source, the DICOM Destination can be the user's image storage system (for example, the Picture Archiving and Communication System, or PACS), or other radiological equipment (for example, X-ray systems). The DICOM Source and the DICOM Destination are not necessarily identical. The DICOM Destination can be used to visualize the result files provided by BoneMetrics US or to transfer the results to another DICOM host for visualization. The users are then able to use them as a concurrent reading aid to provide their diagnosis.

The displayed result for BoneMetrics US is a summary in a unique Secondary Capture with the following information: the image with the angle(s) in degrees drawn as an overlay (if any); a table with the angle measurement(s) and value(s) in degrees (if any); and, at the bottom, the "Gleamer" logo and the "BoneMetrics" mention.
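The measurement principle described above (vertebral-corner keypoints, all pairwise angles between vertebrae, and a 7° retention threshold) can be illustrated with a small geometric sketch. Only the 7° filter comes from the text; the keypoint format and the endplate-tilt computation below are assumptions made for illustration, not Gleamer's algorithm.

```python
import numpy as np

def endplate_tilt(left_corner, right_corner):
    """Tilt (degrees) of the line through the two corner keypoints of an endplate."""
    dx, dy = np.subtract(right_corner, left_corner)
    return np.degrees(np.arctan2(dy, dx))

def cobb_angles(endplates, min_angle=7.0):
    """endplates: list of (left_corner, right_corner) keypoint pairs, one per
    vertebral endplate, ordered cranially to caudally. Returns the retained
    Cobb angles as (upper index, lower index, angle in degrees)."""
    tilts = [endplate_tilt(l, r) for l, r in endplates]
    retained = []
    for i in range(len(tilts)):
        for j in range(i + 1, len(tilts)):
            angle = abs(tilts[i] - tilts[j])   # angle between the two endplate lines
            if angle > min_angle:              # only Cobb angles above 7° are kept
                retained.append((i, j, angle))
    return retained

# Example with synthetic keypoints: a tilted upper endplate vs. a level lower one.
example = [((0, 0), (10, 2)), ((0, 20), (10, 20)), ((0, 40), (10, 37))]
print(cobb_angles(example))
```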
Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:
1. Table of acceptance criteria and the reported device performance:
| Endpoint | Metric | Reported Mean Absolute Error (95% CI) | Acceptance Criteria (Upper bound of the MAE 95% CI) | Device Meets Criteria? |
|---|---|---|---|---|
| Cobb angle with the largest curvature (n = 212) | Mean Absolute Error (°) | 2.56° (2.0° - 3.28°) | < 6.34° | Yes (3.28° < 6.34°) |
| Minor Cobb angle (n = 189) | Mean Absolute Error (°) | 2.78° (2.29° - 3.33°) | < 6.34° | Yes (3.33° < 6.34°) |
Subgroup Analysis for Acceptance Criteria and Performance:
| Population | Endpoint | Metric | Reported Mean Absolute Error (95% CI) | Acceptance Criteria (Upper bound of the MAE 95% CI) | Device Meets Criteria? |
|---|---|---|---|---|---|
| Adults | Cobb angle with the largest curvature (n=100) | Mean Absolute Error (°) | 3.31° (2.21° - 4.87°) | < 6.34° | Yes (4.87° < 6.34°) |
| Adults | Minor Cobb angle (n=90) | Mean Absolute Error (°) | 2.91° (2.29° - 3.68°) | < 6.34° | Yes (3.68° < 6.34°) |
| Children | Cobb angle with the largest curvature (n=32) | Mean Absolute Error (°) | 1.34° (0.88° - 1.86°) | < 6.34° | Yes (1.86° < 6.34°) |
| Children | Minor Cobb angle (n=18) | Mean Absolute Error (°) | 1.95° (1.01° - 3.21°) | < 6.34° | Yes (3.21° < 6.34°) |
| Adolescent | Cobb angle with the largest curvature (n=80) | Mean Absolute Error (°) | 2.11° (1.71° - 2.56°) | < 6.34° | Yes (2.56° < 6.34°) |
| Adolescent | Minor Cobb angle (n=81) | Mean Absolute Error (°) | 2.83° (1.96° - 3.85°) | < 6.34° | Yes (3.85° < 6.34°) |
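The acceptance rule applied in the tables above is that the upper bound of the 95% CI of the mean absolute error must remain below 6.34°. Below is a minimal sketch of that check, assuming a percentile bootstrap for the CI (the summary does not state the exact CI method).

```python
import numpy as np

def mae_acceptance(device_deg, truth_deg, limit=6.34, n_boot=2000, seed=0):
    """Return (MAE, upper bound of the 95% bootstrap CI, pass/fail vs. the limit)."""
    err = np.abs(np.asarray(device_deg, float) - np.asarray(truth_deg, float))
    rng = np.random.default_rng(seed)
    boot_means = [
        err[rng.integers(0, len(err), len(err))].mean() for _ in range(n_boot)
    ]
    mae, upper = float(err.mean()), float(np.percentile(boot_means, 97.5))
    return mae, upper, upper < limit

# Toy example with synthetic angle measurements (degrees).
mae, upper, ok = mae_acceptance([23.1, 41.7, 15.2], [21.0, 44.0, 14.5])
print(mae, upper, ok)
```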
2. Sample size used for the test set and the data provenance:
- Test Set Sample Size: 345 frontal spine radiographs.
- Data Provenance: Obtained from US data providers. The text does not explicitly state whether the data were collected retrospectively or prospectively.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Number of Experts: Three (3) experts.
- Qualifications of Experts: Two US board-certified musculoskeletal radiologists and one US board-certified orthopedic surgeon.
4. Adjudication method for the test set:
- The ground truth was initially defined as the mean of the Cobb angles measured by the 3 ground truthers to establish a consensus-based ground truth.
- For cases with discrepancies exceeding a predetermined threshold, an adjudication process was implemented where the three experts mutually agreed on a value for the ground truth. This is a 3-expert consensus with adjudication.
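A compact sketch of this consensus-with-adjudication scheme follows; the numerical discrepancy threshold is hypothetical, since the document only refers to a "predetermined threshold".

```python
import numpy as np

DISCREPANCY_THRESHOLD_DEG = 5.0   # assumed value; the submission does not state it

def consensus_cobb(readings_deg):
    """readings_deg: the three experts' measurements (degrees) for one angle.

    Returns the mean when the experts agree closely enough, or None to flag
    the case for adjudication (the experts then agree on a value together)."""
    r = np.asarray(readings_deg, dtype=float)
    if r.max() - r.min() > DISCREPANCY_THRESHOLD_DEG:
        return None
    return float(r.mean())

print(consensus_cobb([24.0, 25.5, 23.0]))   # within threshold -> mean value
print(consensus_cobb([24.0, 32.0, 23.0]))   # flagged for adjudication -> None
```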
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance:
- No MRMC comparative effectiveness study was done in this submission. The study described is a "Clinical Standalone Performance Study" comparing the device's measurements against a ground truth.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:
- Yes, a standalone performance study was done. The text explicitly states, "A Clinical Standalone Performance Study was conducted..." The experts establishing ground truth were kept unaware of the outputs from BoneMetrics US.
7. The type of ground truth used:
- The ground truth was established by expert consensus (mean of three expert measurements with adjudication for discrepancies).
8. The sample size for the training set:
- The document does not specify the sample size for the training set. It only mentions "dedicated training of the algorithm for the indications and the patient population" as a control in the discussion of differences from the predicate device.
9. How the ground truth for the training set was established:
- The document does not specify how the ground truth for the training set was established. It only mentions "dedicated training of the algorithm" without details on the ground truth process for training data.