BoneView 510(k) Summary
BoneView is intended to analyze radiographs using machine learning techniques to identify and highlight fractures during the review of radiographs of:
| Study Type (Anatomical Area of Interest) | Compatible Radiographic View(s) |
|---|---|
| Ankle | Frontal, Lateral, Oblique |
| Foot | Frontal, Lateral, Oblique |
| Knee | Frontal, Lateral |
| Tibia/Fibula | Frontal, Lateral |
| Femur | Frontal, Lateral |
| Wrist | Frontal, Lateral, Oblique |
| Hand | Frontal, Oblique |
| Elbow | Frontal, Lateral |
| Forearm | Frontal, Lateral |
| Humerus | Frontal, Lateral |
| Shoulder | Frontal, Lateral, Axillary |
| Clavicle | Frontal |
| Pelvis | Frontal |
| Hip | Frontal, Frog Leg Lateral |
| Ribs | Frontal Chest, Rib series |
| Thoracic Spine | Frontal, Lateral |
| Lumbosacral Spine | Frontal, Lateral |
BoneView is intended for use as a concurrent reading aid during the interpretation of radiographs. BoneView is for prescription use only and is indicated for adults only.
BoneView can be deployed on premises or in the cloud, and can be connected to various computing and X-ray imaging platforms such as X-ray radiographic systems or PACS. More precisely, BoneView can be deployed:
- In the cloud with a PACS as the DICOM Source
- On-premises with a PACS as the DICOM Source
- On-premises with an X-ray system as the DICOM Source
After the radiographs are acquired from the patient and stored in the DICOM Source, they are automatically received by BoneView from the user's DICOM Source through an intermediate DICOM node (for example, a specific gateway or a dedicated API). The DICOM Source can be the user's image storage system (for example, the Picture Archiving and Communication System, or PACS) or other radiological equipment (for example, X-ray systems).
Once received by BoneView, the radiographs are automatically processed by the AI algorithm to identify regions of interest. Based on the processing result, BoneView generates result files in DICOM format. These result files consist of a summary table and result images (annotations on a copy of the original images or annotations to be toggled on/off). BoneView does not alter the original images, nor does it change the order of original images or delete any image from the DICOM Source.
Once available, the result files are sent by BoneView to the DICOM Destination through the same intermediate DICOM node. Similar to the DICOM Source, the DICOM Destination can be the user's image storage system (for example, the Picture Archiving and Communication System, or PACS), or other radiological equipment (for example X-ray systems). The DICOM Source and the DICOM Destination are not necessarily identical.
The DICOM Destination can be used to visualize the result files provided by BoneView or to transfer the results to another DICOM host for visualization. Users can then use the results as a concurrent reading aid when making their diagnosis.
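The routing just described is a conventional DICOM store-and-forward pattern. As a minimal sketch of such an intermediate node, the snippet below uses the open-source pynetdicom library; the AE titles, port numbers, destination address, and the analyze() placeholder are illustrative assumptions, not details taken from the submission.

```python
# Minimal store-and-forward DICOM node sketch using pynetdicom.
# AE titles, ports, and addresses are illustrative assumptions.
from pynetdicom import AE, evt, StoragePresentationContexts

DEST_ADDR = ("pacs.example.local", 104)  # hypothetical DICOM Destination

def analyze(ds):
    """Placeholder for the AI processing step: a real system would run the
    detection model here and build the DICOM result objects (summary table
    plus annotated copies of the originals)."""
    return [ds]

def handle_store(event):
    """Receive a radiograph via C-STORE and forward the result files."""
    ds = event.dataset
    ds.file_meta = event.file_meta

    sender = AE(ae_title="FORWARDER")
    sender.requested_contexts = StoragePresentationContexts
    assoc = sender.associate(*DEST_ADDR)
    if assoc.is_established:
        for result in analyze(ds):
            assoc.send_c_store(result)
        assoc.release()
    return 0x0000  # Success status for the originating C-STORE

# Listen for images pushed from the DICOM Source (PACS or X-ray system).
ae = AE(ae_title="GATEWAY")
ae.supported_contexts = StoragePresentationContexts
ae.start_server(("0.0.0.0", 11112),
                evt_handlers=[(evt.EVT_C_STORE, handle_store)])
```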
The general layout of images processed by BoneView comprises:
(1) The "summary table": a first image, derived from the regions of interest detected in the result images, that displays the results of the overall study along with the Gleamer BoneView logo. The summary table can be configured to be present or not.
(2) The result images: provided for every image processed by BoneView (a rough rendering sketch follows this list), they contain:
- Around each region of interest (if any), a rectangle with a solid or dotted line depending on the confidence of the algorithm (see below)
- Around the entire image, a white frame showing that the image was processed by BoneView
- Below the image:
  - The Gleamer BoneView logo
  - The number of regions of interest displayed in the result image
  - (If any) a caution message, if the image was identified as not being part of the indications for use of BoneView
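As a rough illustration of this layout, the sketch below draws the white frame and the solid or dotted rectangles on a copy of an image using Pillow. Margins, line widths, colors, and the dashed-line rendering are assumptions; the submission does not specify rendering details.

```python
# Sketch of the result-image annotations described above, using Pillow.
# Frame size, line widths, and the dashed rendering are assumptions.
from PIL import Image, ImageDraw

def annotate(image, regions, frame=8):
    """Return a copy of `image` with a white frame and one box per region.
    Each region is (x0, y0, x1, y1, solid); solid=True means the finding
    cleared the high-specificity operating point."""
    w, h = image.size
    out = Image.new("RGB", (w + 2 * frame, h + 2 * frame), "white")
    out.paste(image, (frame, frame))  # white frame marks the image as processed
    draw = ImageDraw.Draw(out)
    for x0, y0, x1, y1, solid in regions:
        box = (x0 + frame, y0 + frame, x1 + frame, y1 + frame)
        if solid:
            draw.rectangle(box, outline="white", width=3)
        else:
            dashed_rectangle(draw, box)
    return out

def dashed_rectangle(draw, box, dash=6, width=3):
    """Approximate a dotted-line rectangle with short line segments."""
    x0, y0, x1, y1 = box
    for x in range(int(x0), int(x1), dash * 2):
        draw.line([(x, y0), (min(x + dash, x1), y0)], fill="white", width=width)
        draw.line([(x, y1), (min(x + dash, x1), y1)], fill="white", width=width)
    for y in range(int(y0), int(y1), dash * 2):
        draw.line([(x0, y), (x0, min(y + dash, y1))], fill="white", width=width)
        draw.line([(x1, y), (x1, min(y + dash, y1))], fill="white", width=width)
```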
The training of BoneView was performed on a training dataset of 44,649 radiographs, representing 151,096 images (52.4% male; age range [0-109], mean 42.4 +/- 24.6), covering all anatomical areas of interest in the Indications for Use and acquired on equipment from various manufacturers. BoneView was designed to address the problem of missed fractures, including subtle fractures, and thus detects fractures with high sensitivity. To this end, the display of findings is triggered by a "high-sensitivity operating point" (DOUBT FRACT), which enables the display of a dotted-line bounding box around the region of interest. Additionally, users need to be confident that when BoneView identifies a fracture, it actually is a fracture; for this purpose, additional information is presented to the user via a "high-specificity operating point" (FRACT).
These two operating points are implemented in the user interface as follows:
- Dotted-line bounding box: suspicious area / subtle fracture (when the confidence of the AI algorithm for the finding is above the "high-sensitivity operating point" and below the "high-specificity operating point"), displayed as a dotted bounding box around the area of interest
- Solid-line bounding box: definite or unequivocal fracture (when the confidence of the AI algorithm for the finding is above the "high-specificity operating point"), displayed as a solid bounding box around the area of interest
BoneView can provide 4 levels of results (a sketch of this logic follows the list):
- FRACT: BoneView identified at least one solid-line bounding box in the result images
- DOUBT FRACT: BoneView did not identify any solid-line bounding box in the result images but identified at least one dotted-line bounding box
- NO FRACT: BoneView did not identify any bounding box at all in the result images
- NOT AVAILABLE: BoneView identified that the original images are outside its Indications for Use
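In code, the two operating points reduce to two thresholds on each finding's confidence score, and the study-level result follows from the resulting box styles. The sketch below makes that mapping explicit; the numeric thresholds are placeholders, since the actual operating points are not disclosed in the document.

```python
# Two-threshold display logic as described above. The numeric values
# are placeholders; the actual operating points are not disclosed.
HIGH_SENSITIVITY = 0.30  # placeholder "high-sensitivity operating point"
HIGH_SPECIFICITY = 0.70  # placeholder "high-specificity operating point"

def box_style(confidence):
    """Map a finding's confidence score to a bounding-box style."""
    if confidence >= HIGH_SPECIFICITY:
        return "solid"   # definite or unequivocal fracture
    if confidence >= HIGH_SENSITIVITY:
        return "dotted"  # suspicious area / subtle fracture
    return None          # below both operating points: no box displayed

def study_result(confidences, in_indications=True):
    """Derive the 4-level study result from the per-finding confidences."""
    if not in_indications:
        return "NOT AVAILABLE"
    styles = {box_style(c) for c in confidences}
    if "solid" in styles:
        return "FRACT"
    if "dotted" in styles:
        return "DOUBT FRACT"
    return "NO FRACT"
```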
Here's a summary of the acceptance criteria and the study that proves the device meets them, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly present a table of acceptance criteria (i.e., predefined thresholds the device must meet). Instead, it reports the device's performance from standalone testing and a clinical study. I will present the reported performance, which implicitly served as the metrics used to demonstrate effectiveness.
Standalone Performance (High-Sensitivity Operating Point - DOUBT FRACT):
| Metric | Global Performance (95% CI) |
|---|---|
| Specificity | 0.811 [0.8 - 0.821] |
| Sensitivity | 0.928 [0.919 - 0.936] |
Standalone Performance (High-Specificity Operating Point - FRACT):
| Metric | Global Performance (95% CI) |
|---|---|
| Specificity | 0.932 [0.925 - 0.939] |
| Sensitivity | 0.841 [0.829 - 0.853] |
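For reference, these proportions follow directly from the 2x2 confusion matrix. The sketch below computes sensitivity, specificity, and a 95% Wilson score interval; the Wilson method is an assumption of convenience (the document does not state which interval method was used), and the example counts are back-calculated from the reported rates rather than taken from the document.

```python
# Sensitivity/specificity with 95% Wilson score intervals. The interval
# method and the example counts are assumptions, not from the document.
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

def sensitivity(tp, fn):
    return tp / (tp + fn), wilson_ci(tp, tp + fn)

def specificity(tn, fp):
    return tn / (tn + fp), wilson_ci(tn, tn + fp)

# Standalone test-set sizes reported above; per-cell counts are
# back-calculated from the reported rates for illustration.
n_pos, n_neg = 3886, 5032
tp = round(0.841 * n_pos)  # high-specificity operating point
tn = round(0.932 * n_neg)
print(sensitivity(tp, n_pos - tp))  # ~0.841, CI near [0.829, 0.853]
print(specificity(tn, n_neg - tn))  # ~0.932, CI near [0.925, 0.939]
```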
Clinical Study (Reader Performance with AI vs. Without AI Assistance):
| Metric | Unaided (95% CI) | Aided (95% CI) |
|---|---|---|
| Specificity | 0.906 [0.898-0.913] | 0.956 [0.951-0.960] |
| Sensitivity | 0.648 [0.640-0.656] | 0.752 [0.745-0.759] |
2. Sample Sizes Used for the Test Set and Data Provenance
- Standalone Performance Test Set:
  - Sample Size: 8,918 radiographs (n(positive) = 3,886, n(negative) = 5,032).
  - Data Provenance: The dataset was independent of the data used for model training and for establishing the device operating points. It covered the full set of anatomical areas of interest for adults (age range [21-113]; mean 52.5 +/- 19.8; 47.2% male). Images were sourced from various manufacturers (Agfa, Fujifilm, GE Healthcare, Kodak, Konica Minolta, Philips, Primax, Samsung, Siemens). No specific country of origin is mentioned, but the variety of manufacturers suggests a diverse dataset. The study description implies a retrospective analysis of existing radiographs.
- Clinical Study (MRMC) Test Set:
  - Sample Size: 480 cases (31.9% male; age range [21-93]; mean 59.2 +/- 16.4), covering all anatomical areas of interest listed in BoneView's Indications for Use.
  - Data Provenance: The dataset was independent of the data used for model training and for establishing the device operating points. Images were from various manufacturers (GE Healthcare, Kodak, Konica Minolta, Philips, Samsung). The study implies a retrospective analysis of existing radiographs.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications
- Standalone Performance Test Set: The document does not explicitly state how the ground truth was established for the standalone test set (e.g., number of experts). However, given the nature of the clinical study, it's highly probable that similar expert review was used.
- Clinical Study (MRMC) Test Set:
  - Number of Experts: A panel of three experts.
  - Qualifications: U.S. board-certified radiologists. The document does not specify their years of experience.
4. Adjudication Method for the Test Set
- Clinical Study (MRMC) Test Set: Ground truth was assigned by a panel of three U.S. board-certified radiologists. The method implies a consensus or majority rule (e.g., 2+1 or 3+1), as a "ground truth label indicating the presence or absence of a fracture and its location" was assigned per case. The specific adjudication method (e.g., majority vote, independent reads then consensus) is not detailed, but the use of a panel suggests a robust method to establish ground truth.
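For illustration, a simple 2-of-3 majority vote is one panel scheme consistent with this description. The sketch below shows that approach, with the caveat that the document does not confirm which adjudication method was actually used.

```python
# Illustrative 2-of-3 majority-vote adjudication; the document does not
# confirm that this was the scheme actually used.
from collections import Counter

def adjudicate(reads):
    """Return the majority label from three expert reads, where each read
    is 'fracture' or 'no fracture'."""
    assert len(reads) == 3, "panel of three experts"
    label, votes = Counter(reads).most_common(1)[0]
    return label  # with 3 binary reads, the top label always has >= 2 votes

print(adjudicate(["fracture", "fracture", "no fracture"]))  # -> fracture
```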
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Yes, an MRMC study was done.
- Effect Size of Human Readers' Improvement with AI vs. Without AI Assistance (based on the reported deltas):
  - Specificity Improvement: +5.0 percentage points (from 0.906 unaided to 0.956 aided).
  - Sensitivity Improvement: +10.4 percentage points (from 0.648 unaided to 0.752 aided).
- The study found that "the diagnostic accuracy of readers...is superior when aided by BoneView than when unaided."
6. Standalone (Algorithm Only) Performance
- Yes, a standalone performance study was done.
- The results are detailed in the "Bench Testing" section (7.4) and summarized in the tables above for both the "high-sensitivity" and "high-specificity" operating points. This evaluation used 8,918 radiographs and assessed the detection of fractures at both operating points.
7. Type of Ground Truth Used
- For the Clinical Study (MRMC) and likely for the Standalone Test Set: Expert consensus (a panel of three U.S. board-certified radiologists assigned the ground truth label for presence or absence and location of a fracture).
8. Sample Size for the Training Set
- Training Set Sample Size: 44,649 radiographs, representing 151,096 images.
- Patient Demographics for Training Set: 52.4% males, age range [0-109]; mean 42.4 +/- 24.6.
- The training data covered "all anatomical areas of interest in the Indications for Use and from various manufacturers."
9. How the Ground Truth for the Training Set Was Established
- The document states that the training of BoneView was performed on this dataset. However, it does not explicitly detail how the ground truth for this training set was established. It is implied that fractures were somehow labeled for the supervised deep learning methodology, but the process (e.g., specific number of radiologists, their qualifications, adjudication method) is not described for the training data.