BoneView 1.1-US (223 days)
BoneView 1.1-US is intended to analyze radiographs using machine learning techniques to identify and highlight fractures during the review of radiographs of: Ankle, Foot, Knee, Tibia/Fibula, Wrist, Hand, Elbow, Forearm, Humerus, Shoulder, Clavicle, Pelvis, Hip, Femur, Ribs, Thoracic Spine, Lumbosacral Spine. BoneView 1.1-US is intended for use as a concurrent reading aid during the interpretation of radiographs. BoneView 1.1-US is for prescription use only.
BoneView 1.1-US is a software-only device intended to assist clinicians in the interpretation of limb radiographs of children/adolescents, and of limb, pelvis, rib cage, and dorsolumbar vertebrae radiographs of adults. BoneView 1.1-US can be deployed on-premises or in the cloud and connected to several computing platforms and X-ray imaging platforms, such as X-ray radiographic systems or PACS. After the radiographs are acquired on the patient and stored in the DICOM Source, they are automatically received by BoneView 1.1-US from the user's DICOM Source through an intermediate DICOM node. Once received by BoneView 1.1-US, the radiographs are automatically processed by the AI algorithm to identify regions of interest. Based on the processing result, BoneView 1.1-US generates result files in DICOM format. These result files consist of a summary table and result images (annotations on a copy of the original images, or annotations that can be toggled on/off). BoneView 1.1-US does not alter the original images, nor does it change the order of the original images or delete any image from the DICOM Source. Once available, the result files are sent by BoneView 1.1-US to the DICOM Destination through the same intermediate DICOM node. The DICOM Destination can be used to visualize the result files provided by BoneView 1.1-US or to transfer the results to another DICOM host for visualization. The users are then able to use them as a concurrent reading aid to provide their diagnosis.
Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are not explicitly stated as numerical targets in a table. Instead, the study aims to demonstrate that the device performs with "high sensitivity and high specificity" and that its performance on children/adolescents is "similar" to that on adults. For the clinical study, the acceptance criteria are implicitly that the diagnostic accuracy of readers aided by BoneView is superior to that of readers unaided.
However, the document provides the performance metrics for both standalone testing and the clinical study.
Standalone Performance (Children/Adolescents Clinical Performance Study Dataset)
Operating Point | Metric | Value (95% Clopper-Pearson CI) | Description |
---|---|---|---|
High-sensitivity (DOUBT FRACT) | Sensitivity | 0.909 [0.889 - 0.926] | The probability that the device correctly identifies a fracture when a fracture is present. This operating point is designed to be highly sensitive to possible fractures, potentially including subtle ones, and is indicated by a dotted bounding box. |
High-sensitivity (DOUBT FRACT) | Specificity | 0.821 [0.796 - 0.844] | The probability that the device correctly identifies the absence of a fracture when no fracture is present. |
High-specificity (FRACT) | Sensitivity | 0.792 [0.766 - 0.817] | The probability that the device correctly identifies a fracture when a fracture is present. This operating point is designed to be highly specific, meaning it provides a high degree of confidence that a detected fracture is indeed a fracture, and is indicated by a solid bounding box. |
High-specificity (FRACT) | Specificity | 0.965 [0.952 - 0.976] | The probability that the device correctly identifies the absence of a fracture when no fracture is present. |
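The table cites Clopper-Pearson (exact binomial) intervals. As a minimal sketch of how such an interval is computed, the snippet below uses SciPy's beta distribution; the example counts (909 of 1,000 positives) are illustrative assumptions, since the document does not report per-class counts for the children/adolescents set.

```python
# Minimal sketch of an exact (Clopper-Pearson) binomial confidence interval,
# the interval type named in the table above. Example counts are assumed,
# not taken from the submission.
from scipy.stats import beta

def clopper_pearson(k: int, n: int, alpha: float = 0.05):
    """Exact two-sided CI for a binomial proportion with k successes out of n."""
    lo = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    hi = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return k / n, (lo, hi)

# Illustrative: 909 detected fractures out of 1,000 fracture-positive images
# reproduces a sensitivity of 0.909 with a CI close to the reported one.
print(clopper_pearson(909, 1000))  # ~(0.909, (0.890, 0.926))
```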
Comparative Standalone Performance (Children/Adolescents vs. Adult)
Operating Point | Dataset | Sensitivity (95% CI) | Specificity (95% CI) | 95% CI on the difference (Sensitivity) | 95% CI on the difference (Specificity) |
---|---|---|---|---|---|
High-sensitivity (DOUBT FRACT) | Adult clinical performance study | 0.928 [0.919 - 0.936] | 0.811 [0.8 - 0.821] | -0.019 [-0.039 - 0.001] | 0.010 [-0.016 - 0.037] |
High-sensitivity (DOUBT FRACT) | Children/adolescents clinical performance study | 0.909 [0.889 - 0.926] | 0.821 [0.796 - 0.844] | (see above) | (see above) |
High-specificity (FRACT) | Adult clinical performance study | 0.841 [0.829 - 0.853] | 0.932 [0.925 - 0.939] | -0.049 [-0.079 - -0.021] | 0.033 [0.019 - 0.046] |
High-specificity (FRACT) | Children/adolescents clinical performance study | 0.792 [0.766 - 0.817] | 0.965 [0.952 - 0.976] | (see above) | (see above) |
Each difference is children/adolescents minus adult and is reported once per operating-point pair (a sketch of one common way to compute such difference intervals follows the table).
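The submission does not name the method behind the difference intervals above. A common Wald-style interval for the difference of two independent proportions is sketched below; the per-group positive counts are illustrative assumptions.

```python
# Hypothetical sketch: Wald 95% CI for the difference of two independent
# proportions. The actual method used in the submission is not stated.
import math

def diff_ci(p1: float, n1: int, p2: float, n2: int, z: float = 1.96):
    """Return (p1 - p2) and its Wald CI for independent samples."""
    d = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return d, (d - z * se, d + z * se)

# Illustrative: children/adolescents vs. adult sensitivity at DOUBT FRACT,
# assuming ~1,000 and ~3,886 fracture-positive images respectively.
print(diff_ci(0.909, 1000, 0.928, 3886))  # ~(-0.019, (-0.039, 0.001))
```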
Clinical Study Performance (MRMC - Reader Performance with/without AI assistance)
Metric | Unaided Performance (95% bootstrap CI) | Aided Performance (95% bootstrap CI) | Absolute Increase |
---|---|---|---|
Specificity | 0.906 (0.898-0.913) | 0.956 (0.951-0.960) | +5.0 percentage points |
Sensitivity | 0.648 (0.640-0.656) | 0.752 (0.745-0.759) | +10.4 percentage points |
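The reader-study CIs are reported as bootstrap intervals, but the resampling unit and protocol are not described. The sketch below shows one plausible case-level percentile bootstrap for sensitivity, purely as an illustration.

```python
# Illustrative case-level percentile bootstrap for reader sensitivity.
# The submission's actual bootstrap protocol is not described.
import numpy as np

rng = np.random.default_rng(0)

def sensitivity_ci(truth, calls, n_boot=2000, alpha=0.05):
    """Percentile bootstrap CI for sensitivity, resampling positive cases."""
    truth = np.asarray(truth, dtype=bool)
    calls = np.asarray(calls, dtype=bool)
    pos = np.flatnonzero(truth)           # indices of fracture-positive cases
    stats = [calls[rng.choice(pos, size=pos.size, replace=True)].mean()
             for _ in range(n_boot)]
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return calls[pos].mean(), (lo, hi)
```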
2. Sample sizes used for the test set and data provenance:
- Standalone Performance Test Set:
- Children/Adolescents: 2,000 radiographs (52.8% males, age range [2 – 21]; mean 11.54 +/- 4.7). The anatomical areas of interest included all those in the Indications for Use for this population group.
- Adults (cited from predicate device K212365): 8,918 radiographs (47.2% males, age range [21 – 113]; mean 52.5 +/- 19.8). The anatomical areas of interest included all those in the Indications for Use for this population group.
- Clinical Study Test Set (MRMC): 480 cases (31.9% males, age range [21 – 93]; mean 59.2 +/- 16.4). These cases were from all anatomical areas of interest included in BoneView's Indications for Use.
- Data Provenance: The document states "various manufacturers" (e.g., Canon, Fujifilm, GE Healthcare, Konica Minolta, Philips, Primax, Samsung, Siemens for standalone data; GE Healthcare, Kodak, Konica Minolta, Philips, Samsung for clinical study data). The general context implies a European or North American source for the regulatory submission (France for the manufacturer, FDA for the review). It is explicitly stated that these datasets were independent of training data. The studies are described as retrospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Clinical Study (MRMC Test Set): Ground truth was established by a panel of three U.S. board-certified radiologists. No further details on their years of experience are provided, only their certification.
- Standalone Test Sets (Children/Adolescents & Adult): The document doesn't explicitly state the number or qualifications of experts used to establish ground truth for the standalone test sets. However, it indicates these datasets were used for "diagnostic performances," implying a definitive ground truth. Given the rigorous nature of FDA submissions, it's highly probable that board-certified radiologists or other qualified medical professionals established this ground truth.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- Clinical Study (MRMC Test Set): The ground truth was established by a panel of three U.S. board-certified radiologists. The method of adjudication (e.g., majority vote, discussion to consensus) is not explicitly detailed; the document states only that they "assigned a ground truth label." This suggests a consensus- or majority-based decision from the panel of three rather than a 2+1 or 3+1 scheme with a tie-breaker (a minimal majority-vote sketch follows this list).
- Standalone Test Sets: Not explicitly stated, though a panel or consensus method is standard for robust ground truth establishment.
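For illustration only, the snippet below shows a simple 2-of-3 majority-vote adjudication, one common way a three-reader panel could assign labels; the document does not confirm this was the scheme used.

```python
# Hypothetical 2-of-3 majority vote for a three-reader ground-truth panel.
# The actual adjudication scheme is not described in the document.
def majority_label(reads: list) -> bool:
    """True (fracture present) if at least two of three readers agree."""
    assert len(reads) == 3, "expects exactly three panel reads"
    return sum(bool(r) for r in reads) >= 2

print(majority_label([True, True, False]))   # True
print(majority_label([False, True, False]))  # False
```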
5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done, what was the effect size of how much human readers improved with AI vs. without AI assistance:
- Yes, a fully-crossed multi-reader, multi-case (MRMC) retrospective reader study was conducted.
- Effect Size of Improvement with AI Assistance:
- Specificity: Improved by +5.0 percentage points (from 0.906 unaided to 0.956 aided).
- Sensitivity: Improved by +10.4 percentage points (from 0.648 unaided to 0.752 aided).
- The study found that "the diagnostic accuracy of readers in the intended use population is superior when aided by BoneView than when unaided by BoneView."
- Subgroup analysis also found that "Sensitivity and Specificity were higher for Aided reads versus Unaided reads for all of the anatomical areas of interest."
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:
- Yes, standalone performance testing was conducted for both the children/adolescent population and the adult population (the latter referencing the predicate device's data). The results are provided in the tables under section 1.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Expert Consensus: The ground truth for the clinical MRMC study was established by a "panel of three U.S. board-certified radiologists who assigned a ground truth label indicating the presence of a fracture and its location." For the standalone testing, although not explicitly stated, it is commonly established by expert interpretation of the radiographs, often through consensus, to determine the presence or absence of fractures.
8. The sample size for the training set:
- The training of BoneView was performed on a training dataset of 44,649 radiographs, representing 151,096 images. This dataset covered all anatomical areas of interest in the Indications for Use and was sourced from various manufacturers.
9. How the ground truth for the training set was established:
- The document implies that the "training was performed on a training dataset... for all anatomical areas of interest." While it doesn't explicitly state how ground truth was established for this massive training set, it is standard practice for medical imaging AI that ground truth for training data is established through expert annotation (e.g., radiologists, orthopedic surgeons) of the images, typically through a labor-intensive review process.
BoneView (214 days)
BoneView is intended to analyze radiographs using machine learning techniques to identify and highlight fractures during the review of radiographs of:
Study Type (Anatomical Area of Interest) | Compatible Radiographic View(s) |
---|---|
Ankle | Frontal, Lateral, Oblique |
Foot | Frontal, Lateral, Oblique |
Knee | Frontal, Lateral |
Tibia/Fibula | Frontal, Lateral |
Femur | Frontal, Lateral |
Wrist | Frontal, Lateral, Oblique |
Hand | Frontal, Oblique |
Elbow | Frontal, Lateral |
Forearm | Frontal, Lateral |
Humerus | Frontal, Lateral |
Shoulder | Frontal, Lateral, Axillary |
Clavicle | Frontal |
Pelvis | Frontal |
Hip | Frontal, Frog Leg Lateral |
Ribs | Frontal Chest, Rib series |
Thoracic Spine | Frontal, Lateral |
Lumbosacral Spine | Frontal, Lateral |
BoneView is intended for use as a concurrent reading aid during the interpretation of radiographs. BoneView is for prescription use only and is indicated for adults only.
BoneView is intended to analyze radiographs using machine learning techniques to identify and highlight fractures during the review of radiographs.
BoneView can be deployed on-premises or in the cloud and be connected to several computing platforms and X-ray imaging platforms, such as X-ray radiographic systems or PACS. More precisely, BoneView can be deployed:
- In the cloud with a PACS as the DICOM Source
- On-premises with a PACS as the DICOM Source
- On-premises with an X-ray system as the DICOM Source
After the acquisition of the radiographs on the patient and their storage in the DICOM Source, the radiographs are automatically received by BoneView from the user's DICOM Source through an intermediate DICOM node (for example, a specific Gateway, or a dedicated API). The DICOM Source can be the user's image storage system (for example, the Picture Archiving and Communication System, or PACS), or other radiological equipment (for example X-ray systems).
Once received by BoneView, the radiographs are automatically processed by the AI algorithm to identify regions of interest. Based on the processing result, BoneView generates result files in DICOM format. These result files consist of a summary table and result images (annotations on a copy of the original images or annotations to be toggled on/off). BoneView does not alter the original images, nor does it change the order of original images or delete any image from the DICOM Source.
Once available, the result files are sent by BoneView to the DICOM Destination through the same intermediate DICOM node. Similar to the DICOM Source, the DICOM Destination can be the user's image storage system (for example, the Picture Archiving and Communication System, or PACS), or other radiological equipment (for example X-ray systems). The DICOM Source and the DICOM Destination are not necessarily identical.
The DICOM Destination can be used to visualize the result files provided by BoneView or to transfer the results to another DICOM host for visualization. The users are then able to use them as a concurrent reading aid to provide their diagnosis.
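As a minimal sketch of the described round trip (receive from the DICOM Source, process, send derived results to the DICOM Destination), the snippet below uses pydicom on a local folder. The detector stub, function names, and file layout are assumptions for illustration; they are not Gleamer's implementation, and a real deployment would use DICOM networking rather than folders.

```python
# Hypothetical folder-based sketch of the DICOM workflow described above.
# The AI call is a stub; names and layout are illustrative assumptions.
import copy
from pathlib import Path
from pydicom import dcmread
from pydicom.uid import generate_uid

def detect_fractures(pixels):
    """Placeholder for the AI detector; returns (x0, y0, x1, y1, score) boxes."""
    return []

def process_study(source_dir: Path, dest_dir: Path) -> None:
    dest_dir.mkdir(parents=True, exist_ok=True)
    for path in sorted(source_dir.glob("*.dcm")):
        ds = dcmread(path)                         # original is only read
        _boxes = detect_fractures(ds.pixel_array)  # stubbed AI call
        result = copy.deepcopy(ds)                 # annotate a copy, never the source
        result.SOPInstanceUID = generate_uid()     # new UID for the derived object
        result.file_meta.MediaStorageSOPInstanceUID = result.SOPInstanceUID
        result.SeriesDescription = "Result images (sketch)"
        result.save_as(dest_dir / f"result_{path.name}")  # for the DICOM Destination
```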
The general layout of images processed by BoneView comprises:
(1) The "summary table" – a first image, derived from the regions of interest detected in the subsequent result images, that displays the results of the overall study along with the Gleamer BoneView logo. This summary can be configured to be present or not.
(2) The result images – provided for all images processed by BoneView, they contain:
- Around the regions of interest (if any), a rectangle with a solid or dotted line depending on the confidence of the algorithm (see below)
- Around the entire image, a white frame showing that the image was processed by BoneView
- Below the image:
  - The Gleamer BoneView logo
  - The number of regions of interest displayed in the result image
  - (if any) A caution message, if the image was identified as not being part of the indications for use of BoneView
(A minimal rendering sketch of this layout follows the list.)
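The layout above (solid vs. dotted boxes drawn on a copy, plus a white frame) can be illustrated with a small Pillow sketch. This is an assumption-laden illustration, not the device's rendering code; Pillow has no dashed outline, so the dotted style is approximated with short segments.

```python
# Illustrative rendering of the described layout: boxes drawn on a COPY of
# the image, dotted vs. solid by confidence tier, plus a white outer frame.
from PIL import Image, ImageDraw, ImageOps

def annotate_copy(img: Image.Image, boxes) -> Image.Image:
    """boxes: iterable of (x0, y0, x1, y1, style), style 'solid' or 'dotted'."""
    out = img.copy().convert("RGB")              # never touch the original
    draw = ImageDraw.Draw(out)
    for x0, y0, x1, y1, style in boxes:
        if style == "solid":                     # high-specificity finding
            draw.rectangle([x0, y0, x1, y1], outline="white", width=3)
        else:                                    # high-sensitivity (doubt) finding
            _dotted_rect(draw, x0, y0, x1, y1)
    return ImageOps.expand(out, border=10, fill="white")  # white frame

def _dotted_rect(draw, x0, y0, x1, y1, dash=8):
    # Approximate a dotted rectangle with short line segments.
    for x in range(int(x0), int(x1), dash * 2):
        for y in (y0, y1):
            draw.line([(x, y), (min(x + dash, x1), y)], fill="white", width=3)
    for y in range(int(y0), int(y1), dash * 2):
        for x in (x0, x1):
            draw.line([(x, y), (x, min(y + dash, y1))], fill="white", width=3)
```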
The training of BoneView was performed on a training dataset of 44,649 radiographs, representing 151,096 images (52.4% males; age range [0 – 109]; mean 42.4 +/- 24.6), covering all anatomical areas of interest in the Indications for Use and sourced from various manufacturers. BoneView has been designed to address the problem of missed fractures, including subtle fractures, and thus detects fractures with high sensitivity. In this regard, the display of findings is triggered by a "high-sensitivity operating point" (DOUBT FRACT) that enables the display of a dotted-line bounding box around the region of interest. Additionally, users need to be confident that when BoneView identifies a fracture, it is actually a fracture. In this regard, additional information is presented to the user with a "high-specificity operating point" (FRACT).
These two operating points are implemented in the user interface as follows:

- Dotted-line bounding box: suspicious area / subtle fracture (when the level of confidence of the AI algorithm associated with the finding is above the "high-sensitivity operating point" and below the "high-specificity operating point"), displayed as a dotted bounding box around the area of interest
- Solid-line bounding box: definite or unequivocal fracture (when the level of confidence of the AI algorithm associated with the finding is above the "high-specificity operating point"), displayed as a solid bounding box around the area of interest

BoneView can provide 4 levels of results:

- FRACT: BoneView identified at least one solid-line bounding box on the result images
- DOUBT FRACT: BoneView did not identify any solid-line bounding box on the result images, but it identified at least one dotted-line bounding box
- NO FRACT: BoneView did not identify any bounding box at all in the result images
- NOT AVAILABLE: BoneView identified that the original images are outside its Indications for Use

(A minimal sketch of this triage logic follows the list.)
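A minimal sketch of the two-threshold triage is shown below. The threshold values are invented placeholders (the actual operating points are not disclosed), and the NOT AVAILABLE branch is assumed to be decided upstream from image metadata.

```python
# Sketch of the two operating points and the four result levels described
# above. Threshold values are placeholders, not the device's real settings.
from typing import List, Optional

HIGH_SENS_T = 0.30   # assumed high-sensitivity operating point (display threshold)
HIGH_SPEC_T = 0.80   # assumed high-specificity operating point

def box_style(confidence: float) -> Optional[str]:
    """Map a finding's confidence to 'solid', 'dotted', or None (not shown)."""
    if confidence >= HIGH_SPEC_T:
        return "solid"       # definite or unequivocal fracture
    if confidence >= HIGH_SENS_T:
        return "dotted"      # suspicious area / subtle fracture
    return None

def study_result(confidences: List[float]) -> str:
    """Aggregate per-finding styles into FRACT / DOUBT FRACT / NO FRACT."""
    styles = {box_style(c) for c in confidences}
    if "solid" in styles:
        return "FRACT"
    if "dotted" in styles:
        return "DOUBT FRACT"
    return "NO FRACT"        # NOT AVAILABLE is determined before scoring

print(study_result([0.85, 0.4]))  # FRACT
print(study_result([0.45]))       # DOUBT FRACT
```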
Here's a summary of the acceptance criteria and the study that proves the device meets them, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly present a table of acceptance criteria (i.e., predefined thresholds that the device must meet). Instead, it reports the device's performance from standalone testing and a clinical study. The reported performance is presented below; these metrics implicitly serve as the demonstration of effectiveness.
Standalone Performance (High-Sensitivity Operating Point - DOUBT FRACT):
Metric | Global Performance (95% CI) |
---|---|
Specificity | 0.811 [0.8 - 0.821] |
Sensitivity | 0.928 [0.919 - 0.936] |
Standalone Performance (High-Specificity Operating Point - FRACT):
Metric | Global Performance (95% CI) |
---|---|
Specificity | 0.932 [0.925 - 0.939] |
Sensitivity | 0.841 [0.829 - 0.853] |
Clinical Study (Reader Performance with AI vs. Without AI Assistance):
Metric | Unaided (95% CI) | Aided (95% CI) |
---|---|---|
Specificity | 0.906 [0.898-0.913] | 0.956 [0.951-0.960] |
Sensitivity | 0.648 [0.640-0.656] | 0.752 [0.745-0.759] |
2. Sample Sizes Used for the Test Set and Data Provenance
- Standalone Performance Test Set:
  - Sample Size: 8,918 radiographs (n(positive)=3,886, n(negative)=5,032); a back-of-envelope sketch using these counts follows this list.
  - Data Provenance: The dataset was independent of the data used for model training and for establishing the device operating points. It included the full set of anatomical areas of interest for adults (age range [21-113]; mean 52.5 +/- 19.8; 47.2% males). Images were sourced from various manufacturers (Agfa, Fujifilm, GE Healthcare, Kodak, Konica Minolta, Philips, Primax, Samsung, Siemens). No specific country of origin is mentioned, but the variety of manufacturers suggests a diverse dataset. The study description implies a retrospective analysis of existing radiographs.
- Clinical Study (MRMC) Test Set:
  - Sample Size: 480 cases (31.9% males; age range [21-93]; mean 59.2 +/- 16.4). It covered all anatomical areas of interest listed in BoneView's Indications for Use.
  - Data Provenance: The dataset was independent of the data used for model training and for establishing the device operating points. Images were from various manufacturers (GE Healthcare, Kodak, Konica Minolta, Philips, Samsung). The study implies a retrospective analysis of existing radiographs.
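Using the standalone class sizes above, the approximate confusion-matrix counts implied by the reported FRACT operating point can be back-computed; the rounding here is ours, for illustration only.

```python
# Back-of-envelope: confusion-matrix counts implied by the reported
# standalone metrics and class sizes (our rounding, illustrative only).
n_pos, n_neg = 3886, 5032        # positives/negatives in the standalone test set
sens, spec = 0.841, 0.932        # FRACT (high-specificity) operating point

tp = round(sens * n_pos)         # ~3268 fractures correctly flagged
fn = n_pos - tp                  # ~618 fractures missed at this threshold
tn = round(spec * n_neg)         # ~4690 non-fracture images correctly cleared
fp = n_neg - tn                  # ~342 false alarms
print(tp, fn, tn, fp)
```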
3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications
- Standalone Performance Test Set: The document does not explicitly state how the ground truth was established for the standalone test set (e.g., number of experts). However, given the nature of the clinical study, it's highly probable that similar expert review was used.
- Clinical Study (MRMC) Test Set:
- Number of Experts: A panel of three experts.
- Qualifications: U.S. board-certified radiologists. The document does not specify their years of experience.
4. Adjudication Method for the Test Set
- Clinical Study (MRMC) Test Set: Ground truth was assigned by a panel of three U.S. board-certified radiologists. The method implies a consensus or majority rule (e.g., 2+1 or 3+1), as a "ground truth label indicating the presence or absence of a fracture and its location" was assigned per case. The specific adjudication method (e.g., majority vote, independent reads then consensus) is not detailed, but the use of a panel suggests a robust method to establish ground truth.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Yes, an MRMC study was done.
- Effect Size of Human Readers' Improvement with AI vs. Without AI Assistance (based on the reported deltas):
- Specificity Improvement: +5.0 percentage points (from 0.906 unaided to 0.956 aided).
- Sensitivity Improvement: +10.4 percentage points (from 0.648 unaided to 0.752 aided).
- The study found that "the diagnostic accuracy of readers...is superior when aided by BoneView than when unaided."
6. Standalone (Algorithm Only) Performance
- Yes, a standalone performance study was done.
- The results are detailed in the "Bench Testing" section (7.4) and summarized in the table above for both "high-sensitivity operating point" and "high-specificity operating point." This evaluation used 8,918 radiographs and assessed the detection of fractures with high sensitivity and high specificity.
7. Type of Ground Truth Used
- For the Clinical Study (MRMC) and likely for the Standalone Test Set: Expert consensus (a panel of three U.S. board-certified radiologists assigned the ground truth label for presence or absence and location of a fracture).
8. Sample Size for the Training Set
- Training Set Sample Size: 44,649 radiographs, representing 151,096 images.
- Patient Demographics for Training Set: 52.4% males, age range [0-109]; mean 42.4 +/- 24.6.
- The training data covered "all anatomical areas of interest in the Indications for Use and from various manufacturers."
9. How the Ground Truth for the Training Set Was Established
- The document states that the training of BoneView was performed on this dataset. However, it does not explicitly detail how the ground truth for this training set was established. It is implied that fractures were somehow labeled for the supervised deep learning methodology, but the process (e.g., specific number of radiologists, their qualifications, adjudication method) is not described for the training data.