Search Results
Found 6 results
(265 days)
Imagen Technologies, Inc
Lung-CAD is a computer-assisted detection (CADe) software device that analyzes chest radiograph studies for lung hyperinflation. The device uses a deep learning algorithm to identify regions of interest (ROIs) with lung hyperinflation and produces boxes around the ROIs.
Lung-CAD is intended for use as a concurrent reading aid for physicians interpreting chest X-rays. The device is not intended for clinical diagnosis of any disease. It does not replace the role of other diagnostic testing in the standard of care for lung parenchymal findings. Lung-CAD is indicated for adults only.
Lung-CAD is computer-assisted detection (CADe) software designed to increase the accurate detection of lung hyperinflation. Lung-CAD's output is available for physicians interpreting chest radiographs as a concurrent reading aid. The device helps physicians more effectively identify lung hyperinflation. Lung-CAD does not replace the role of the physician or of other diagnostic testing in the standard of care and does not provide a diagnosis for any disease. Lung-CAD uses modern deep learning and computer vision techniques to analyze chest radiographs.
For each image within a study, Lung-CAD generates a DICOM Presentation State file (output overlay). If any region of interest (ROI) is detected by Lung-CAD in the study, the output overlay for each image includes "Lung hyperinflation". In addition, if ROI(s) are detected in an image, bounding boxes surrounding each detected ROI are included in the output overlay for that image and are labeled with the radiographic finding: "Lung hyperinflation". If no ROI is detected by Lung-CAD in the study, the output overlay for each image will include the text "No Lung-CAD ROI(s)" and no bounding boxes will be included. Regardless of whether an ROI is detected, the overlay includes text identifying the X-ray study as analyzed by Lung-CAD and a customer-configurable message containing a link or instructions for users to access labeling documents. The Lung-CAD overlay can be toggled on or off by the physician within their Picture Archiving and Communication System (PACS) viewer, allowing for concurrent review of the X-ray study.
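To make the output format concrete, here is a minimal, hypothetical sketch of how one labeled bounding box could be encoded as a DICOM Grayscale Softcopy Presentation State (GSPS) using pydicom. This is illustrative only, not Imagen's actual implementation: the ROI coordinates and file name are invented, and many attributes a conformant GSPS requires (referenced image sequence, graphic layer module, displayed area) are omitted for brevity.

```python
# Hypothetical sketch: a CADe bounding box and label encoded as a DICOM
# presentation state with pydicom. Coordinates and names are invented;
# mandatory GSPS modules are omitted to keep the example short.
from pydicom.dataset import Dataset, FileDataset, FileMetaDataset
from pydicom.uid import ExplicitVRLittleEndian, generate_uid

GSPS = "1.2.840.10008.5.1.4.1.1.11.1"  # Grayscale Softcopy Presentation State Storage

meta = FileMetaDataset()
meta.MediaStorageSOPClassUID = GSPS
meta.MediaStorageSOPInstanceUID = generate_uid()
meta.TransferSyntaxUID = ExplicitVRLittleEndian

ds = FileDataset("lung_cad_overlay.dcm", {}, file_meta=meta, preamble=b"\0" * 128)
ds.is_little_endian, ds.is_implicit_VR = True, False  # needed by pydicom 2.x writers
ds.SOPClassUID = GSPS
ds.SOPInstanceUID = meta.MediaStorageSOPInstanceUID
ds.Modality = "PR"
ds.ContentDescription = "Lung-CAD output overlay"

box = Dataset()                                  # one detected ROI
box.GraphicAnnotationUnits = "PIXEL"
box.GraphicDimensions = 2
box.GraphicType = "POLYLINE"
x0, y0, x1, y1 = 620.0, 410.0, 980.0, 760.0      # hypothetical ROI corners
box.GraphicData = [x0, y0, x1, y0, x1, y1, x0, y1, x0, y0]  # closed rectangle
box.NumberOfGraphicPoints = 5

label = Dataset()                                # finding label for the box
label.AnchorPointAnnotationUnits = "PIXEL"
label.AnchorPoint = [x0, y0]
label.AnchorPointVisibility = "N"
label.UnformattedTextValue = "Lung hyperinflation"

annotation = Dataset()
annotation.GraphicLayer = "LUNG-CAD"
annotation.GraphicObjectSequence = [box]
annotation.TextObjectSequence = [label]
ds.GraphicAnnotationSequence = [annotation]

ds.save_as("lung_cad_overlay.dcm")
```

Because the annotations live in a separate presentation state rather than being burned into the pixel data, a PACS viewer can toggle them on or off exactly as described above.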
Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) summary for Lung-CAD:
1. Table of Acceptance Criteria and Reported Device Performance
Explicit pass/fail acceptance criteria are not listed in this summary. The acceptance criteria are instead implicit in the reported performance metrics: the strong statistical significance and high performance values are presented as the demonstration of successful validation and substantial equivalence.
| Performance Metric | Acceptance Criteria (Implicit) | Reported Device Performance |
|---|---|---|
| **Standalone Performance** | | |
| Sensitivity (Lung-CAD) | High (e.g., above a certain threshold) | 0.898 (95% CI: 0.856, 0.929) |
| Specificity (Lung-CAD) | High (e.g., above a certain threshold) | 0.894 (95% CI: 0.885, 0.902) |
| AUC (Lung-CAD) | High (e.g., close to 1.0) | 0.964 (95% Bootstrap CI: 0.956, 0.972) |
| **Reader Study (MRMC) Performance** | | |
| Reader AUC Improvement (Aided vs. Unaided) | Statistically significant improvement | 0.0632 (95% CI: 0.0632, 0.0633) |
| Statistical Significance (Aided vs. Unaided) | p-value … | |
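The table quotes a bootstrap interval for the standalone AUC. A percentile bootstrap over cases is the usual construction; the sketch below shows one way such an interval might be computed, using invented labels and scores (the real study data are not public).

```python
# Minimal sketch of a percentile bootstrap CI for AUC, the style of interval
# ("95% Bootstrap CI") reported above. Labels and scores are placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
y = rng.integers(0, 2, 2000)                                   # ground-truth labels
scores = np.clip(0.3 * y + rng.normal(0.4, 0.2, 2000), 0, 1)   # hypothetical CAD scores

boot = []
for _ in range(2000):                      # resample cases with replacement
    idx = rng.integers(0, len(y), len(y))
    if y[idx].min() == y[idx].max():       # skip degenerate (single-class) resamples
        continue
    boot.append(roc_auc_score(y[idx], scores[idx]))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUC = {roc_auc_score(y, scores):.3f} (95% bootstrap CI: {lo:.3f}, {hi:.3f})")
```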
(267 days)
Imagen Technologies, Inc
Lung-CAD is a computer-assisted detection (CADe) software device that analyzes chest radiograph studies for interstitial thickening. The device uses a deep learning algorithm to identify regions of interest (ROIs) with interstitial thickening and produces boxes around the ROIs.
Lung-CAD is intended for use as a concurrent reading aid for physicians interpreting chest X-rays. The device is not intended for clinical diagnosis of any disease. It does not replace the role of other diagnostic testing in the standard of care for lung parenchymal findings. Lung-CAD is indicated for adults only.
Lung-CAD is computer-assisted detection (CADe) software designed to increase the accurate detection of interstitial thickening. Lung-CAD's output is available for physicians interpreting chest radiographs as a concurrent reading aid. The device helps physicians more effectively identify interstitial thickening. Lung-CAD does not replace the role of the physician or of other diagnostic testing in the standard of care and does not provide a diagnosis for any disease. Lung-CAD uses modern deep learning and computer vision techniques to analyze chest radiographs.
For each image within a study, Lung-CAD generates a DICOM Presentation State file (output overlay). If any ROI is detected by Lung-CAD in the study, the output overlay for each image includes "Interstitial thickening". In addition, if ROI(s) are detected in an image, bounding boxes surrounding each detected ROI are included in the output overlay for that image and are labeled with the radiographic finding: "Interstitial thickening". If no ROI is detected by Lung-CAD in the study, the output overlay for each image will include the text "No Lung-CAD ROI(s)" and no bounding boxes will be included. Regardless of whether an ROI is detected, the overlay includes text identifying the X-ray study as analyzed by Lung-CAD and a customer-configurable message containing a link or instructions for users to access labeling documents. The Lung-CAD overlay can be toggled on or off by the physician within their Picture Archiving and Communication System (PACS) viewer, allowing for concurrent review of the X-ray study.
Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:
Acceptance Criteria and Reported Device Performance
| Criteria | Reported Device Performance (Lung-CAD) |
|---|---|
| **Standalone Performance** | |
| Sensitivity | 0.913 (95% Wilson's CI: 0.850, 0.951) |
| Specificity | 0.866 (95% Wilson's CI: 0.856, 0.875) |
| Area Under the Curve (AUC) of ROC curve | 0.961 (95% Bootstrap CI: 0.948, 0.972) |
| **Clinical Performance (Reader Study)** | |
| Reader AUC improvement (Aided vs. Unaided) | 0.0797 (95% CI: 0.0797, 0.0798); statistically significant (p-value …) |
(347 days)
Imagen Technologies, Inc
Aorta-CAD is a computer-assisted detection (CADe) software device that analyzes chest radiograph studies for suspicious regions of interest (ROIs). The device uses a deep learning algorithm to identify ROIs and produces boxes around the ROIs. The boxes are labeled with one of the following radiographic findings: Aortic calcification or Dilated aorta.
Aorta-CAD is intended for use as a concurrent reading aid for physicians looking for ROIs with radiographic findings suggestive of Aortic Atherosclerosis or Aortic Ectasia. It does not replace the role of the physician or of other diagnostic testing in the standard of care. Aorta-CAD is indicated for adults only.
Aorta-CAD is computer-assisted detection (CADe) software designed for physicians to increase the accurate detection of findings on chest radiographs that are suggestive of chronic conditions in the aorta. The ROIs are labeled with one of the following radiographic findings: Aortic calcification or Dilated aorta. Aorta-CAD is intended for use as a concurrent reading aid for physicians looking for suspicious ROIs with radiographic findings suggestive of Aortic Atherosclerosis or Aortic Ectasia. Aorta-CAD's output is available for physicians as a concurrent reading aid and does not replace the role of the physician or of other diagnostic testing in the standard of care for the distinct conditions. Aorta-CAD uses modern deep learning and computer vision techniques to analyze chest radiographs.
For each image within a study, Aorta-CAD generates a DICOM Presentation State file (output overlay). If any ROI is detected by Aorta-CAD in the study, the output overlay for each image includes which radiographic finding(s) were identified and what chronic condition in the aorta is suggested by these findings, such as "Aortic calcification suggestive of Aortic Atherosclerosis." In addition, if ROI(s) are detected in an image, bounding boxes surrounding each detected ROI are included in the output overlay for that image and are labeled with the radiographic findings, such as "Aortic calcification". If no ROI is detected by Aorta-CAD in the study, the output overlay for each image will include the text "No Aorta-CAD ROI(s)" and no bounding boxes will be included. Regardless of whether an ROI is detected, the overlay includes text identifying the X-ray study as analyzed by Aorta-CAD and a customer-configurable message containing a link to, or instructions for users to access, labeling documents. The Aorta-CAD overlay can be toggled on or off by the physician within their Picture Archiving and Communication System (PACS) viewer, allowing for concurrent review of the X-ray study.
Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The document doesn't explicitly list "acceptance criteria" in a separate section with pass/fail thresholds. Instead, it describes "performance assessment" and "clinical study" results which implicitly serve as the demonstration that the device performs acceptably and is substantially equivalent. The key performance metrics are presented from the standalone and clinical studies.
| Metric (Implicit Acceptance Criteria) | Reported Device Performance (Standalone Study) | Reported Device Performance (Clinical Study - Aided vs. Unaided) |
|---|---|---|
| **Overall Standalone Performance** | | |
| Sensitivity | 0.910 (95% CI: 0.896, 0.922) | Not directly comparable (the MRMC study measures reader improvement; 0.910 refers to the algorithm only) |
| Specificity | 0.896 (95% CI: 0.889, 0.902) | Not directly comparable (algorithm-only figure) |
| AUC (Overall) | 0.974 (95% Bootstrap CI: 0.971, 0.977) | Not directly comparable (algorithm-only figure) |
| **Category-Specific Standalone Performance** | | |
| Aortic calcification suggestive of Aortic Atherosclerosis (AUC) | 0.972 (95% Bootstrap CI: 0.967, 0.976) | Reader AUC estimates significantly improved (p …) |
(137 days)
Imagen Technologies, Inc
Chest-CAD is a computer-assisted detection (CADe) software device that analyzes chest radiograph studies using machine learning techniques to identify, categorize, and highlight suspicious regions of interest (ROI). Any suspicious ROI identified by Chest-CAD is assigned to one of the following categories: Cardiac, Mediastinum/Hila, Lungs, Pleura, Bones, Soft Tissues, Hardware, or Other. The device is intended for use as a concurrent reading aid for physicians. Chest-CAD is indicated for adults only.
Chest-CAD is a computer-assisted detection (CADe) software device designed to assist physicians in identifying suspicious regions of interest (ROIs) in adult chest X-rays. Suspicious ROIs identified by Chest-CAD are assigned to one of the following categories: Cardiac, Mediastinum/Hila, Lungs, Pleura, Bones, Soft Tissues, Hardware, or Other. Chest-CAD detects suspicious ROIs by analyzing radiographs using deep learning algorithms for computer vision and provides relevant annotations to assist physicians with their interpretations.
For each image within a study, Chest-CAD generates a DICOM Presentation State file (output overlay). If any suspicious ROI is detected by Chest-CAD in the study, the output overlay for all images includes the text "ROI(s) Detected:" followed by a list of the category/categories for which suspicious ROI(s) were found, such as "Lungs, Bones". In addition, if suspicious ROI(s) are detected in the image, bounding boxes surrounding each detected suspicious ROI are included in the output overlay. If no suspicious ROI is detected by Chest-CAD in the study, the output overlay for each image will include the text "No ROI(s) Detected" and no bounding boxes will be included. Regardless of whether a suspicious ROI is detected, the overlay includes text identifying the X-ray study as analyzed by Chest-CAD and a customer-configurable message containing a link to, or instructions for users to access, labeling. The Chest-CAD overlay can be toggled on or off by the physician within their Picture Archiving and Communication System (PACS) viewer, allowing for concurrent review of the X-ray study.
Acceptance Criteria and Device Performance for Chest-CAD (K210666)
The Chest-CAD device is a computer-assisted detection (CADe) software that analyzes chest radiographs to identify, categorize, and highlight suspicious regions of interest (ROI). It is intended as a concurrent reading aid for physicians and is indicated for adults only.
1. Acceptance Criteria and Reported Device Performance
The acceptance criteria for Chest-CAD were established through both a standalone performance assessment and a multi-reader, multi-case (MRMC) comparative effectiveness study. No explicit numerical acceptance criteria were stated as targets for the standalone performance metrics (sensitivity, specificity, AUC). Instead, the performance demonstrated in the standalone test formed a basis for comparison and validation. For the MRMC study, the primary objective was to demonstrate superiority of aided reading over unaided reading in terms of AUC.
Table 1: Acceptance Criteria and Reported Device Performance
| Metric | Acceptance Criteria (Internal/Implicit) | Standalone Device Performance | MRMC Study Performance (Aided vs. Unaided) |
|---|---|---|---|
| **Standalone Performance** | | | |
| Overall Sensitivity | High sensitivity is expected | 0.908 (95% CI: 0.905, 0.911) | N/A |
| Overall Specificity | High specificity is expected | 0.887 (95% CI: 0.885, 0.889) | N/A |
| Overall AUC | High AUC is expected | 0.976 (95% CI: 0.975, 0.976) | N/A |
| **Clinical Performance (MRMC Study)** | | | |
| Reader AUC | Aided AUC > Unaided AUC | N/A | Aided: 0.894 (95% CI: 0.879, 0.909); Unaided: 0.836 (95% CI: 0.816, 0.856) |
| Reader Sensitivity | Aided Sensitivity > Unaided Sensitivity | N/A | Aided: 0.856 (95% CI: 0.850, 0.862); Unaided: 0.757 (95% CI: 0.750, 0.764) |
| Reader Specificity | Aided Specificity > Unaided Specificity | N/A | Aided: 0.870 (95% CI: 0.866, 0.873); Unaided: 0.843 (95% CI: 0.839, 0.847) |
2. Sample Sizes and Data Provenance
Standalone Test Set:
- Sample Size: 20,000 chest radiograph cases.
- Data Provenance: From 12 hospitals, outpatient centers, and specialty centers in the United States. The study was retrospective.
MRMC Study Test Set:
- Sample Size: 238 cases.
- Data Provenance: From 9 hospitals, outpatient centers, and specialty centers in the United States. The study was retrospective.
3. Number and Qualifications of Experts for Ground Truth (Test Set)
- Standalone Test Set: The number of experts is not explicitly stated. The document indicates that "Suspicious ROIs were manually annotated and categorized by board-certified radiologists before the images were used for benchmarking," implying that the ground truth was established by expert radiologists, but their number and years of experience are not detailed.
- MRMC Study Test Set: "Each case was previously evaluated by a panel of U.S. board-certified radiologists who assigned a ground truth binary label indicating the presence or absence of a suspicious ROI for each Chest-CAD category." The exact number of experts in the panel and their level of experience are not specified.
4. Adjudication Method (Test Set)
- The document implies a consensus-based adjudication method for the ground truth of the MRMC study cases, stating that "a panel of U.S. board-certified radiologists who assigned a ground truth binary label..." This strongly suggests that multiple radiologists reviewed the cases and reached a consensus for the ground truth. It does not explicitly state a 2+1 or 3+1 method but refers to a "panel" establishing the ground truth.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Yes, a fully-crossed MRMC retrospective reader study was conducted.
- Effect Size of Human Reader Improvement with AI vs. Without AI Assistance:
- Reader AUC: Improved from 0.836 (Unaided) to 0.894 (Aided). Effect size = 0.058 increase.
- Reader Sensitivity: Improved from 0.757 (Unaided) to 0.856 (Aided). Effect size = 0.099 increase.
- Reader Specificity: Improved from 0.843 (Unaided) to 0.870 (Aided). Effect size = 0.027 increase.
The study concluded that the accuracy of readers was superior when aided by Chest-CAD.
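In a fully-crossed design, every reader reads every case both unaided and aided, so these effect sizes are simply mean paired differences across readers. The sketch below illustrates that arithmetic with invented reader scores; the reader count and scores are hypothetical, and a real analysis would use an MRMC method such as Obuchowski-Rockette or Dorfman-Berbaum-Metz to obtain confidence intervals.

```python
# Illustrative sketch: per-reader AUC effect size in a fully-crossed MRMC design.
# Hypothetical inputs: y_true is the panel ground truth (0/1 per case);
# unaided[r] and aided[r] hold reader r's suspicion scores for the same cases.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_cases, n_readers = 238, 6            # 238 cases as in the MRMC test set; reader count assumed
y_true = rng.integers(0, 2, n_cases)
unaided = rng.random((n_readers, n_cases))         # placeholder unaided scores
aided = np.clip(unaided + 0.1 * y_true, 0, 1)      # placeholder "CAD-aided" scores

auc_unaided = np.array([roc_auc_score(y_true, s) for s in unaided])
auc_aided = np.array([roc_auc_score(y_true, s) for s in aided])

# Effect size = mean paired difference in reader AUC (aided minus unaided).
delta = auc_aided - auc_unaided
print(f"mean unaided AUC: {auc_unaided.mean():.3f}")
print(f"mean aided AUC:   {auc_aided.mean():.3f}")
print(f"effect size:      {delta.mean():.3f}")
```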
6. Standalone (Algorithm Only) Performance Study
- Yes, a standalone performance assessment was conducted.
- Reported Performance:
- Overall Sensitivity: 0.908 (95% CI: 0.905, 0.911)
- Overall Specificity: 0.887 (95% CI: 0.885, 0.889)
- Overall AUC: 0.976 (95% CI: 0.975, 0.976)
- AUC by Category: Ranged from 0.921 (Mediastinum/Hila) to 0.994 (Hardware).
- Sensitivity by Category: Ranged from 0.854 (Bones) to 0.967 (Hardware).
- Specificity by Category: Ranged from 0.830 (Mediastinum/Hila) to 0.960 (Hardware). (A sketch of how such per-category metrics are computed follows this list.)
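Per-category sensitivity, specificity, and AUC of this kind can be computed by treating each category as an independent binary task at a fixed operating point. Here is a minimal sketch: the category names come from the Chest-CAD labeling, but the labels, scores, and 0.5 operating point are invented for illustration.

```python
# Minimal sketch: per-category standalone metrics for a multi-category CADe.
# Category names are from the Chest-CAD labeling; labels, scores, and the
# operating point are placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score

categories = ["Cardiac", "Mediastinum/Hila", "Lungs", "Pleura",
              "Bones", "Soft Tissues", "Hardware", "Other"]
rng = np.random.default_rng(1)

for cat in categories:
    y = rng.integers(0, 2, 1000)                                  # ground truth per case
    score = np.clip(0.35 * y + rng.normal(0.4, 0.2, 1000), 0, 1)  # placeholder scores
    pred = score >= 0.5                                           # assumed operating point
    tp, fn = np.sum(pred & (y == 1)), np.sum(~pred & (y == 1))
    tn, fp = np.sum(~pred & (y == 0)), np.sum(pred & (y == 0))
    print(f"{cat:17s} sens={tp / (tp + fn):.3f} "
          f"spec={tn / (tn + fp):.3f} auc={roc_auc_score(y, score):.3f}")
```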
7. Type of Ground Truth Used (Test Set)
- Expert Consensus: For both the standalone testing and the MRMC study, the ground truth was established by board-certified radiologists who manually annotated and assigned binary labels for suspicious ROIs.
8. Sample Size for the Training Set
- The document does not explicitly state the sample size for the training set. It refers to "deep learning algorithms for computer vision" but does not detail the dataset used for training or validation of these algorithms.
9. How the Ground Truth for the Training Set was Established
- The document does not explicitly state how the ground truth for the training set was established. It only mentions that the device uses "machine learning techniques" and "deep learning algorithms." Typically, for such devices, the training set ground truth would also be established by expert radiologists, usually through a process similar to or more extensive than that used for the test sets. However, this is not detailed in the provided text.
(234 days)
Imagen Technologies, Inc.
FractureDetect (FX) is a computer-assisted detection and diagnosis (CAD) software device to assist clinicians in detecting fractures during the review of radiographs of the musculoskeletal system. FX is indicated for adults only.
FX is indicated for radiographs of the following industry-standard radiographic views and study types.
| Study Type (Anatomic Area of Interest+) | Radiographic View(s) Supported* |
|---|---|
| Ankle | Frontal, Lateral, Oblique |
| Clavicle | Frontal |
| Elbow | Frontal, Lateral |
| Femur | Frontal, Lateral |
| Forearm | Frontal, Lateral |
| Hip | Frontal, Frog Leg Lateral |
| Humerus | Frontal, Lateral |
| Knee | Frontal, Lateral |
| Pelvis | Frontal |
| Shoulder | Frontal, Lateral, Axillary |
| Tibia / Fibula | Frontal, Lateral |
| Wrist | Frontal, Lateral, Oblique |
*For the purposes of this table, "Frontal" is considered inclusive of both posteroanterior (PA) and anteroposterior (AP) views.
+Definitions of anatomic area of interest and radiographic views are consistent with the American College of Radiology (ACR) standards and guidelines.
FractureDetect (FX) is a computer-assisted detection and diagnosis (CAD) software device designed to assist clinicians in detecting fractures during the review of commonly acquired adult radiographs. FX does this by analyzing radiographs and providing relevant annotations, assisting clinicians in the detection of fractures within their diagnostic process at the point of care. FX was developed using robust scientific principles and industry-standard deep learning algorithms for computer vision.
FX creates, as its output, a DICOM overlay with annotations indicating the presence or absence of fractures. If any fracture is detected by FX, the output overlay is composed to include the text annotation "Fracture: DETECTED" and to include one or more bounding boxes surrounding any fracture site(s). If no fracture is detected by FX, the output overlay is composed to include the text annotation "Fracture: NOT DETECTED" and no bounding box is included. Whether or not a fracture is detected, the overlay includes a text annotation identifying the radiograph as analyzed by FX and instructions for users to access labeling. The FX overlay can be toggled on or off by the clinicians within their PACS viewer, allowing for uninhibited concurrent review of the original radiograph.
Here's a detailed breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:
Acceptance Criteria and Device Performance
| Acceptance Criteria | Reported Device Performance |
|---|---|
| **Standalone Performance** | |
| Overall Sensitivity | 0.951 (95% Wilson's CI: 0.940, 0.960) |
| Overall Specificity | 0.893 (95% Wilson's CI: 0.886, 0.898) |
| Overall Area Under the Curve (AUC) | 0.982 (95% Bootstrap CI: 0.979, 0.985) |
| AUC per Study Type: Ankle | 0.983 (0.972, 0.991) |
| AUC per Study Type: Clavicle | 0.962 (0.948, 0.975) |
| AUC per Study Type: Elbow | 0.964 (0.940, 0.982) |
| AUC per Study Type: Femur | 0.989 (0.983, 0.994) |
| AUC per Study Type: Forearm | 0.987 (0.977, 0.995) |
| AUC per Study Type: Hip | 0.982 (0.962, 0.995) |
| AUC per Study Type: Humerus | 0.983 (0.974, 0.991) |
| AUC per Study Type: Knee | 0.996 (0.993, 0.998) |
| AUC per Study Type: Pelvis | 0.982 (0.973, 0.989) |
| AUC per Study Type: Shoulder | 0.962 (0.938, 0.982) |
| AUC per Study Type: Tibia / Fibula | 0.994 (0.991, 0.997) |
| AUC per Study Type: Wrist | 0.992 (0.988, 0.996) |
| **MRMC Comparative Effectiveness (Reader Performance with vs. without AI)** | |
| Reader AUC (FX-Aided vs. FX-Unaided) | Improved from 0.912 to 0.952, a difference of 0.0406 (95% CI: 0.0127, 0.0685; p = 0.0043) |
| Reader Sensitivity (FX-Aided vs. FX-Unaided) | Improved from 0.819 (95% Wilson's CI: 0.794, 0.842) to 0.900 (95% Wilson's CI: 0.880, 0.917) |
| Reader Specificity (FX-Aided vs. FX-Unaided) | Improved from 0.890 (95% Wilson's CI: 0.879, 0.900) to 0.918 (95% Wilson's CI: 0.908, 0.927) |
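Several of the intervals above are Wilson score intervals for binomial proportions (sensitivity and specificity are proportions of correctly classified positives and negatives). As a reference, here is a minimal sketch of the Wilson interval; the counts passed in at the bottom are hypothetical.

```python
# Minimal sketch of Wilson's score interval, the CI style quoted for the
# sensitivity/specificity figures above ("95% Wilson's CI").
from math import sqrt
from scipy.stats import norm

def wilson_ci(successes: int, n: int, conf: float = 0.95) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion."""
    z = norm.ppf(1 - (1 - conf) / 2)
    p = successes / n
    center = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = (z / (1 + z**2 / n)) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# e.g., sensitivity = TP / (TP + FN) with hypothetical counts
print(wilson_ci(951, 1000))   # ~ (0.936, 0.963)
```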
Study Details
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size:
- Standalone Study: 11,970 radiographs.
- MRMC Reader Study: 175 cases.
- Data Provenance: Not explicitly stated, but the experts establishing ground truth are specified as U.S. board-certified, suggesting the data is likely from the U.S. There is no indication whether the data was retrospective or prospective, but for an FDA submission of this nature, historical retrospective data is common.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
- Number of Experts: A panel of three experts was used for the MRMC study's ground truth.
- Qualifications: "U.S. board-certified orthopedic surgeons or U.S. board-certified radiologists." Specific years of experience are not mentioned.
4. Adjudication Method for the Test Set
- Adjudication Method: A "panel of three" experts assigned a ground truth binary label (presence or absence of fracture). This implies a consensus-based adjudication. While not explicitly stated (e.g., 2-out-of-3, or further adjudication if there was disagreement), the phrasing suggests a collective agreement to establish the "ground truth." This is analogous to a 3-expert consensus, where the majority rules.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Was an MRMC study done? Yes.
- Effect Size (Improvement with AI vs. without AI assistance):
- Readers' AUC significantly improved by 0.0406 (from 0.912 to 0.952).
- Readers' sensitivity improved by 0.081 (from 0.819 to 0.900).
- Readers' specificity improved by 0.028 (from 0.890 to 0.918).
6. Standalone (Algorithm Only) Performance Study
- Was a standalone study done? Yes.
- Performance:
- Sensitivity: 0.951
- Specificity: 0.893
- Overall AUC: 0.982
- High accuracy across study types and potential confounders (image brightness, x-ray manufacturers).
7. Type of Ground Truth Used
- Standalone Study: The ground truth for the standalone study is not explicitly detailed but given the MRMC study, it's highly probable it also leveraged expert consensus, similar to the MRMC setup, for fracture detection.
- MRMC Study: Expert Consensus by a panel of three U.S. board-certified orthopedic surgeons or U.S. board-certified radiologists.
8. Sample Size for the Training Set
- The document does not explicitly state the sample size for the training set. It only mentions "robust scientific principles and industry-standard deep learning algorithms for computer vision" were used for development.
9. How the Ground Truth for the Training Set Was Established
- The document does not explicitly describe how the ground truth for the training set was established. It only mentions "Supervised Deep Learning" as the methodology, which implies labeled data was used for training, but the process of obtaining these labels is not detailed.
(108 days)
Imagen Technologies, Inc.
OsteoDetect analyzes wrist radiographs using machine learning techniques to identify and highlight distal radius fractures during the review of posterior-anterior (PA) and lateral (LAT) radiographs of adult wrists.
OsteoDetect is a software device designed to assist clinicians in detecting distal radius fractures during the review of posterior-anterior (PA) and lateral (LAT) radiographs of adult wrists. The software uses deep learning techniques to analyze wrist radiographs (PA and LAT views) for distal radius fracture in adult patients.
1. Table of Acceptance Criteria and Reported Device Performance
Standalone Performance
| Performance Metric | Acceptance Criteria (Implicit) | Reported Device Performance (Estimate) | 95% Confidence Interval |
|---|---|---|---|
| AUC of ROC | High | 0.965 | (0.953, 0.976) |
| Sensitivity | High | 0.921 | (0.886, 0.946) |
| Specificity | High | 0.902 | (0.877, 0.922) |
| PPV | High | 0.813 | (0.769, 0.850) |
| NPV | High | 0.961 | (0.943, 0.973) |
| Localization Accuracy (average pixel distance) | Small | 33.52 pixels (standard deviation: 30.03 pixels) | Not provided for the average distance itself |
| Generalizability (AUC for all subgroups) | High | ≥ 0.926 (lowest subgroup: post-surgical radiographs) | Not explicitly provided for all; individual subgroup CIs available in the source text |
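The summary does not define exactly how the "average pixel distance" was measured. Purely for illustration, the sketch below assumes Euclidean distance between the centers of predicted and ground-truth bounding boxes, with invented boxes in (x0, y0, x1, y1) form; the actual study may have used a different definition.

```python
# Hedged sketch of a localization-accuracy metric like "average pixel distance".
# Assumption: Euclidean distance between predicted and ground-truth box centers.
import numpy as np

def centers(boxes: np.ndarray) -> np.ndarray:
    """Center (cx, cy) of each box given as rows of (x0, y0, x1, y1)."""
    return np.stack([(boxes[:, 0] + boxes[:, 2]) / 2,
                     (boxes[:, 1] + boxes[:, 3]) / 2], axis=1)

pred = np.array([[100, 120, 220, 260], [400, 410, 520, 560]], dtype=float)
truth = np.array([[110, 130, 230, 250], [390, 400, 530, 570]], dtype=float)

dists = np.linalg.norm(centers(pred) - centers(truth), axis=1)
print(f"average pixel distance: {dists.mean():.2f} (std: {dists.std(ddof=1):.2f})")
```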
MRMC (Reader Study) Performance - Aided vs. Unaided Reads
| Performance Metric | Acceptance Criteria (Implicit: Superiority of Aided) | OD-Aided | OD-Unaided | 95% CI (OD-Aided) | 95% CI (OD-Unaided) | p-value for Difference |
|---|---|---|---|---|---|---|
| AUC of ROC | AUC_aided - AUC_unaided > 0 | 0.889 | 0.840 | Individual AUC CIs not given; CI for the difference: (0.019, 0.080) | Individual AUC CIs not given | 0.0056 |
| Sensitivity | Superior aided | 0.803 | 0.747 | (0.785, 0.819) | (0.728, 0.765) | Not explicitly given; non-overlapping CIs imply significance |
| Specificity | Superior aided | 0.914 | 0.889 | (0.903, 0.924) | (0.876, 0.900) | Not explicitly given; non-overlapping CIs imply significance |
| PPV | Superior aided | 0.883 | 0.844 | (0.868, 0.896) | (0.826, 0.859) | Not explicitly given; non-overlapping CIs imply significance |
| NPV | Superior aided | 0.853 | 0.814 | (0.839, 0.865) | (0.800, 0.828) | Not explicitly given; non-overlapping CIs imply significance |
2. Sample Size and Data Provenance for Test Set
Standalone Performance Test Set:
- Sample Size: 1000 images (500 PA, 500 LAT)
- Data Provenance: Retrospective. Randomly sampled from an existing validation database of consecutively collected images from patients receiving wrist radiographs at the (b) (4) from November 1, 2016 to April 30, 2017. The study population included images from the US.
MRMC (Reader Study) Test Set:
- Sample Size: 200 cases.
- Data Provenance: Retrospective. Randomly sampled from the same validation database used for the standalone performance study. The data includes cases from the US.
3. Number of Experts and Qualifications for Ground Truth
Standalone Performance Test Set and MRMC (Reader Study) Test Set:
- Number of Experts: Three.
- Qualifications: U.S. board-certified orthopedic hand surgeons.
4. Adjudication Method for Test Set
Standalone Performance Test Set:
- Adjudication Method (Binary Fracture Presence/Absence): Majority opinion of at least 2 of the 3 clinicians.
- Adjudication Method (Localization - Bounding Box): The union of the bounding box of each clinician identifying the fracture.
MRMC (Reader Study) Test Set:
- Adjudication Method: Majority opinion of three U.S. board-certified orthopedic hand surgeons. (Note: this was defined on a per-case basis, considering PA, LAT, and oblique images if available; a minimal sketch of these adjudication rules follows below.)
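The two adjudication rules described above (a 2-of-3 majority vote for the binary label, and the union of clinician bounding boxes for localization) are simple to express in code. A minimal sketch with hypothetical votes and boxes:

```python
# Minimal sketch of the adjudication rules above: 2-of-3 majority vote for the
# binary fracture label, and the union of clinician bounding boxes (x0, y0, x1, y1)
# as the localization ground truth. Votes and boxes are hypothetical.
import numpy as np

def majority_label(votes: list[int]) -> int:
    """Majority of binary expert labels (2-of-3 for a three-member panel)."""
    return int(sum(votes) * 2 > len(votes))

def union_box(boxes: np.ndarray) -> np.ndarray:
    """Smallest box containing every clinician's box."""
    return np.array([boxes[:, 0].min(), boxes[:, 1].min(),
                     boxes[:, 2].max(), boxes[:, 3].max()])

print(majority_label([1, 1, 0]))                       # -> 1 (fracture present)
print(union_box(np.array([[100, 110, 220, 260],
                          [ 95, 120, 230, 255]])))     # -> [ 95 110 230 260]
```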
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Was an MRMC study done? Yes.
- Effect Size (Improvement of Human Readers with AI vs. without AI assistance):
- The least squares mean difference between the AUC for OsteoDetect-aided and OsteoDetect-unaided reads is 0.049 (95% CI: 0.019, 0.080), a statistically significant improvement in diagnostic accuracy (AUC) when readers were aided by OsteoDetect.
- Sensitivity: Improved from 0.747 (unaided) to 0.803 (aided), an improvement of 0.056.
- Specificity: Improved from 0.889 (unaided) to 0.914 (aided), an improvement of 0.025.
6. Standalone (Algorithm Only) Performance Study
- Was a standalone study done? Yes.
7. Type of Ground Truth Used
Standalone Performance Test Set:
- Type of Ground Truth: Expert consensus (majority opinion of three U.S. board-certified orthopedic hand surgeons).
MRMC (Reader Study) Test Set:
- Type of Ground Truth: Expert consensus (majority opinion of three U.S. board-certified orthopedic hand surgeons).
8. Sample Size for Training Set
The document does not explicitly state the sample size for the training set. It mentions "randomly withheld subset of the model's training data" for setting the operating point, implying a training set existed, but its size is not provided.
9. How Ground Truth for Training Set Was Established
The document does not explicitly state how the ground truth for the training set was established. It only refers to a "randomly withheld subset of the model's training data" during the operating point setting.
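For context on what "setting the operating point" on a withheld subset typically involves, here is a hedged sketch: it picks the smallest score threshold that achieves a target sensitivity on held-out data. The selection rule, target value, and data are assumptions for illustration, not the documented procedure.

```python
# Hedged sketch: choosing an operating point on a withheld subset, as the
# summary mentions ("randomly withheld subset of the model's training data").
# The rule used here (first threshold meeting a target sensitivity) and all
# data are assumptions for illustration.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(7)
y = rng.integers(0, 2, 5000)                                   # withheld-subset labels
scores = np.clip(0.3 * y + rng.normal(0.4, 0.2, 5000), 0, 1)   # model scores

fpr, tpr, thresholds = roc_curve(y, scores)
target_sensitivity = 0.92                   # assumed target
i = np.argmax(tpr >= target_sensitivity)    # first ROC point meeting the target
print(f"threshold={thresholds[i]:.3f} sens={tpr[i]:.3f} spec={1 - fpr[i]:.3f}")
```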