510(k) Data Aggregation

Search Results (4 results)

    K Number: K241620
    Device Name: ChestView US
    Manufacturer (Applicant): Gleamer SAS
    Date Cleared: 2025-02-27 (267 days)
    Regulation Number: 892.2070

    Intended Use

    ChestView US is a radiological Computer-Assisted Detection (CADe) software device that analyzes frontal and lateral chest radiographs of patients presenting with symptoms (e.g. dyspnea, cough, pain) or suspected of findings related to regions of interest (ROIs) in the lungs, airways, mediastinum/hila and pleural space. The device uses machine learning techniques to identify the ROIs and produce boxes around them. The boxes are labeled with one of the following radiographic findings: Nodule, Pleural space abnormality, Mediastinum/Hila abnormality, and Consolidation.

    ChestView US is intended for use as a concurrent reading aid for radiologists and emergency medicine physicians. It does not replace the role of radiologists and emergency medicine physicians or of other diagnostic testing in the standard of care. ChestView US is for prescription use only and is indicated for adults only.

    Device Description

    ChestView US is a radiological Computer-Assisted Detection (CADe) software device intended to analyze frontal and lateral chest radiographs for suspicious regions of interest (ROIs): Nodule, Consolidation, Pleural Space Abnormality and Mediastinum/Hila Abnormality.

    The nodule ROI category was developed from images with focal nonlinear opacity with a generally spherical shape situated in the pulmonary interstitium.

    The consolidation ROI category was developed from images with an area of increased attenuation of lung parenchyma due to the replacement of air in the alveoli.

    The pleural space abnormality ROI category was developed from images with:

    • Pleural Effusion: an abnormal presence of fluid in the pleural space
    • Pneumothorax: an abnormal presence of air or gas in the pleural space that separates the parietal and the visceral pleura

    The mediastinum/hila abnormality ROI category was developed from images with enlargement of the mediastinum or the hilar region with a deformation of its contours.

    ChestView US can be deployed in the cloud and connected to several computing platforms and X-ray imaging platforms, such as radiographic systems or PACS. More precisely, ChestView US can be deployed in the cloud connected to a DICOM Source/Destination with a DICOM Viewer, i.e. a PACS.

    After the acquisition of the radiographs on the patient and their storage in the DICOM Source, the radiographs are automatically received by ChestView US from the user's DICOM Source through intermediate DICOM node(s) (for example, a specific Gateway, or a dedicated API). The DICOM Source can be the user's image storage system (for example, the Picture Archiving and Communication System, or PACS), or other radiological equipment (for example X-ray systems).

    Once received by ChestView US, the radiographs are automatically processed by the AI algorithm to identify regions of interest. Based on the processing result, ChestView US generates result files in DICOM format. These result files consist of annotated images with boxes drawn around the regions of interest on a copy of all images (as an overlay). ChestView US does not alter the original images, nor does it change the order of original images or delete any image from the DICOM Source.

    Once available, the result files are sent by ChestView US to the DICOM Destination through the same intermediate DICOM node(s). Similar to the DICOM Source, the DICOM Destination can be the user's image storage system (for example, the Picture Archiving and Communication System, or PACS), or other radiological equipment (for example X-ray systems). The DICOM Source and the DICOM Destination are not necessarily identical.

    The DICOM Destination can be used to visualize the result files provided by ChestView US or to transfer the results to another DICOM host for visualization. The users are then able to use them as a concurrent reading aid to provide their diagnosis.
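    The Source-to-device-to-Destination flow described above is standard DICOM store-and-forward routing. As a purely illustrative sketch (not Gleamer's implementation), it can be expressed with the open-source pynetdicom library; the AE title, host, ports, and the placeholder AI step below are all hypothetical:

```python
import copy

from pynetdicom import AE, evt, AllStoragePresentationContexts

DEST_HOST, DEST_PORT = "pacs.example.org", 11112  # hypothetical DICOM Destination


def handle_store(event):
    """Receive an image from the DICOM Source, process it, forward the result."""
    ds = event.dataset
    ds.file_meta = event.file_meta
    result = copy.deepcopy(ds)  # the original image is never altered
    # ... AI inference and overlay drawing would happen here ...
    forward(result)
    return 0x0000  # DICOM Success status


def forward(ds):
    """Send the result file to the DICOM Destination (C-STORE as SCU)."""
    ae = AE(ae_title="CADE_NODE")
    ae.add_requested_context(ds.SOPClassUID)
    assoc = ae.associate(DEST_HOST, DEST_PORT)
    if assoc.is_established:
        assoc.send_c_store(ds)
        assoc.release()


# Listen for incoming studies from the DICOM Source (C-STORE as SCP).
ae = AE(ae_title="CADE_NODE")
ae.supported_contexts = AllStoragePresentationContexts
ae.start_server(("0.0.0.0", 11113), block=True,
                evt_handlers=[(evt.EVT_C_STORE, handle_store)])
```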

    For each exam analyzed by ChestView US, a DICOM Secondary Capture is generated.

    If any ROI is detected by ChestView US, the output DICOM image includes a copy of the original images of the study and the following information:

    • Above the images, a header with the text "CHESTVIEW ROI" and the list of the findings detected in the image.
    • Around the ROI(s), a bounding box with a solid or dotted line depending on the confidence of the algorithm and the type of ROI written above the box:
      • Dotted-line Bounding Box: identifies a region of interest when the confidence degree of the AI algorithm associated with the possible finding is above the "high-sensitivity operating point" and below the "high-specificity operating point"; displayed as a dotted bounding box around the area of interest.
      • Solid-line Bounding Box: identifies a region of interest when the confidence degree of the AI algorithm associated with the finding is above the "high-specificity operating point"; displayed as a solid bounding box around the area of interest.
    • Below the images, a footer with:
      • The scope of ChestView US, so that the user always has available the list of ROI types within the device's indications for use, avoiding any risk of confusion or misinterpretation of the types of ROI detected by ChestView US.
      • The total number of regions of interest identified by ChestView US on the exam (sum of solid-line and dotted-line bounding boxes)

    If no ROI is detected by ChestView US, the output DICOM image includes a copy of the original images of the study and the text "NO CHESTVIEW ROI", together with the scope of ChestView US, so that the user always has available the list of ROI types within the device's indications for use, avoiding any risk of confusion or misinterpretation of the types of ROI detected by ChestView US. Finally, if processing of the exam by ChestView US is not possible, because the exam is outside the indications for use of the device or some information needed for processing is missing, the output DICOM image includes a copy of the original images of the study and, in a header, the text "OUT OF SCOPE" with a caution message explaining why no result was provided by the device.
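    To make the dual-threshold display rule above concrete, here is a minimal, hypothetical Python sketch of the mapping from algorithm confidence to box style and header text. The threshold values are placeholders; the device's actual operating points are fixed per ROI category and are not disclosed in this summary:

```python
from dataclasses import dataclass

# Hypothetical operating points; the real, per-ROI values are not public.
HIGH_SENS_OP = 0.30
HIGH_SPEC_OP = 0.80

@dataclass
class Detection:
    finding: str       # e.g. "Nodule", "Consolidation"
    confidence: float  # AI confidence score, assumed in [0, 1]
    box: tuple         # (x0, y0, x1, y1) pixel coordinates

def box_style(det: Detection):
    """Map a detection's confidence to the display rule described above."""
    if det.confidence >= HIGH_SPEC_OP:
        return "solid"   # above high-specificity OP: solid-line bounding box
    if det.confidence >= HIGH_SENS_OP:
        return "dotted"  # between the two OPs: dotted-line bounding box
    return None          # below high-sensitivity OP: not displayed

def header_text(detections, in_scope=True):
    """Header line of the output DICOM Secondary Capture."""
    if not in_scope:
        return "OUT OF SCOPE"
    shown = sorted({d.finding for d in detections if box_style(d)})
    return ("CHESTVIEW ROI: " + ", ".join(shown)) if shown else "NO CHESTVIEW ROI"

dets = [Detection("Nodule", 0.91, (120, 80, 200, 160)),
        Detection("Consolidation", 0.45, (300, 310, 420, 400))]
print([box_style(d) for d in dets])  # ['solid', 'dotted']
print(header_text(dets))             # CHESTVIEW ROI: Consolidation, Nodule
```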

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for ChestView US:

    1. Table of Acceptance Criteria and Reported Device Performance

    Standalone Performance (ChestView US)

    Acceptance criteria are not explicitly stated for any metric in the summary; the reported standalone performance (all CIs are 95% bootstrap CIs) is:

    | ROI | AUC [95% CI] | Sensitivity @ High-Sensitivity OP [95% CI] | Specificity @ High-Sensitivity OP [95% CI] | Sensitivity @ High-Specificity OP [95% CI] | Specificity @ High-Specificity OP [95% CI] |
    |---|---|---|---|---|---|
    | Nodule | 0.93 [0.921; 0.938] | 0.829 [0.801; 0.86] | 0.956 [0.948; 0.963] | 0.482 [0.455; 0.518] | 0.994 [0.99; 0.996] |
    | Mediastinum/Hila Abnormality | 0.922 [0.91; 0.934] | 0.793 [0.739; 0.832] | 0.975 [0.971; 0.98] | 0.535 [0.475; 0.592] | 0.992 [0.99; 0.994] |
    | Consolidation | 0.952 [0.947; 0.957] | 0.853 [0.822; 0.879] | 0.946 [0.938; 0.952] | 0.61 [0.583; 0.643] | 0.985 [0.981; 0.989] |
    | Pleural Space Abnormality | 0.973 [0.97; 0.975] | 0.892 [0.87; 0.911] | 0.965 [0.958; 0.971] | 0.87 [0.85; 0.896] | 0.975 [0.97; 0.981] |
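    The 95% bootstrap CIs in the table can be reproduced generically with case-level resampling. Below is a short sketch assuming a percentile bootstrap over per-image labels (the summary does not describe the exact procedure); the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

def sensitivity(y_true, y_pred):
    """Fraction of truly positive images flagged by the device."""
    pos = y_true == 1
    return float((y_pred[pos] == 1).mean())

def specificity(y_true, y_pred):
    """Fraction of truly negative images left unflagged."""
    neg = y_true == 0
    return float((y_pred[neg] == 0).mean())

def bootstrap_ci(metric, y_true, y_pred, n_boot=2000, alpha=0.05):
    """Percentile bootstrap CI, resampling cases with replacement."""
    n = len(y_true)
    stats = [metric(y_true[i], y_pred[i])
             for i in (rng.integers(0, n, n) for _ in range(n_boot))]
    return tuple(np.quantile(stats, [alpha / 2, 1 - alpha / 2]))

# Synthetic demonstration data: 1,000 images, ~93% accurate predictions.
y_true = rng.integers(0, 2, 1000)
y_pred = np.where(rng.random(1000) < 0.93, y_true, 1 - y_true)
print(sensitivity(y_true, y_pred), bootstrap_ci(sensitivity, y_true, y_pred))
```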

    MRMC Study Acceptance Criteria and Reported Performance (Improvement with AI Aid)

    | ROI Category | Reader Type | Acceptance Criteria (AUC Improvement) | Reported AUC Improvement | 95% CI for AUC Improvement | P-value |
    |---|---|---|---|---|---|
    | Nodule | Emergency Medicine Physicians | Not stated as a numerical threshold ("significantly improved") | 0.136 | [0.107, 0.17] | (not reported) |

    K Number: K241593
    Device Name: BoneMetrics (US)
    Manufacturer (Applicant): Gleamer SAS
    Date Cleared: 2025-02-05 (247 days)
    Regulation Number: 892.2050

    Intended Use

    BoneMetrics US is a fully automated radiological image processing software device intended to aid users in the measurement of Cobb angles on frontal spine radiographs of individuals at least 4 years of age with suspected or present spinal deformities, such as scoliosis. It should not be used instead of a full patient evaluation or solely relied upon to make or confirm a diagnosis. The software device is to be used by healthcare professionals trained in radiology.

    Device Description

    BoneMetrics US is intended to analyze radiographs using machine learning techniques to provide fully automated measurements of Cobb angles during the review of frontal spine radiographs.

    BoneMetrics US can be deployed in the cloud and connected to several computing platforms and X-ray imaging platforms, such as radiographic systems or PACS. More precisely, BoneMetrics US can be deployed in the cloud connected to a DICOM Source/Destination with a DICOM Viewer, i.e. a PACS. After the acquisition of the radiographs on the patient and their storage in the DICOM Source, the radiographs are automatically received by BoneMetrics US from the user's DICOM Source through intermediate DICOM node(s) (for example, a specific Gateway, or a dedicated API). The DICOM Source can be the user's image storage system (for example, the Picture Archiving and Communication System, or PACS), or other radiological equipment (for example, X-ray systems).

    Once received by BoneMetrics US, the radiographs are automatically processed by the AI algorithm without requiring any user input. The algorithm identifies the keypoints corresponding to the corners of all the vertebrae seen on the images and calculates all possible angles between vertebrae. Only Cobb angles above 7° are retained. Based on the processing result, BoneMetrics US generates result files in DICOM format. These result files consist of annotated images with the measurements plotted on a copy of all images (as an overlay) and angle values displayed in degrees. BoneMetrics US does not alter the original images, nor does it change the order of original images or delete any image from the DICOM Source.

    Once available, the result files are sent by BoneMetrics US to the DICOM Destination through the same intermediate DICOM node(s). Similar to the DICOM Source, the DICOM Destination can be the user's image storage system (for example, the Picture Archiving and Communication System, or PACS), or other radiological equipment (for example, X-ray systems). The DICOM Source and the DICOM Destination are not necessarily identical. The DICOM Destination can be used to visualize the result files provided by BoneMetrics US or to transfer the results to another DICOM host for visualization. The users are then able to use them as a concurrent reading aid to provide their diagnosis.

    The displayed result for BoneMetrics US is a summary in a unique Secondary Capture with the following information:

    • The image with the angle(s) in degrees drawn as an overlay (if any)
    • A table with the angle measurement(s) and value(s) in degrees (if any)
    • At the bottom, the "Gleamer" logo and the "BoneMetrics" mention
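    As a worked illustration of the geometry only (keypoint detection is the ML component and is not shown), the Cobb angle between two endplate lines and the 7° retention rule described above might look like the following sketch; all names and coordinates are synthetic:

```python
import numpy as np

def endplate_angle(corner_left, corner_right):
    """Orientation (radians) of the line through two endplate corner keypoints."""
    dx, dy = np.subtract(corner_right, corner_left)
    return np.arctan2(dy, dx)

def cobb_angle(upper_endplate, lower_endplate):
    """Cobb angle in degrees between two endplate lines."""
    a = endplate_angle(*upper_endplate) - endplate_angle(*lower_endplate)
    a = abs(np.degrees(a)) % 180
    return min(a, 180 - a)  # angle between lines, not directed vectors

# Toy example: upper endplate tilted ~10 deg, lower ~-5 deg -> Cobb ~15 deg.
upper = ((0.0, 0.0), (10.0, 1.76))    # tan(10 deg) ~ 0.176
lower = ((0.0, 50.0), (10.0, 49.13))  # tan(-5 deg) ~ -0.087
angle = cobb_angle(upper, lower)
if angle > 7.0:  # per the description, only Cobb angles above 7 deg are retained
    print(f"Cobb angle: {angle:.1f} deg")
```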

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:

    1. Table of acceptance criteria and the reported device performance:

    | Endpoint | Metric | Reported Mean Absolute Error (95% CI) | Acceptance Criteria (Upper bound of the MAE 95% CI) | Device Meets Criteria? |
    |---|---|---|---|---|
    | Cobb angle with the largest curvature (n = 212) | Mean Absolute Error (°) | 2.56° (2.0° - 3.28°) | (not stated) | (not stated) |
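    The MAE endpoint with its 95% CI can likewise be checked with a percentile bootstrap over the 212 cases. A short sketch under that assumption; the angles are simulated, since the study data are not provided:

```python
import numpy as np

rng = np.random.default_rng(1)
truth = rng.uniform(10, 60, 212)        # simulated ground-truth Cobb angles (deg)
pred = truth + rng.normal(0, 3.2, 212)  # simulated device measurements (deg)

errors = np.abs(pred - truth)
mae = errors.mean()
# Percentile bootstrap of the MAE: resample the 212 per-case errors.
boot = [errors[rng.integers(0, 212, 212)].mean() for _ in range(2000)]
lo, hi = np.quantile(boot, [0.025, 0.975])
print(f"MAE {mae:.2f} deg (95% CI {lo:.2f}-{hi:.2f})")
```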

    K Number: K222176
    Device Name: BoneView
    Manufacturer (Applicant): Gleamer
    Date Cleared: 2023-03-02 (223 days)
    Regulation Number: 892.2090

    Intended Use

    BoneView 1.1-US is intended to analyze radiographs using machine learning techniques to identify and highlight fractures during the review of radiographs of: Ankle, Foot, Knee, Tibia/Fibula, Wrist, Hand, Elbow, Forearm, Humerus, Shoulder, Clavicle, Pelvis, Hip, Femur, Ribs, Thoracic Spine, Lumbosacral Spine. BoneView 1.1-US is intended for use as a concurrent reading aid during the interpretation of radiographs. BoneView 1.1-US is for prescription use only.

    Device Description

    BoneView 1.1-US is a software-only device intended to assist clinicians in the interpretation of limb radiographs of children/adolescents, and limb, pelvis, rib cage, and dorsolumbar vertebra radiographs of adults. BoneView 1.1-US can be deployed on-premises or in the cloud and connected to several computing platforms and X-ray imaging platforms, such as X-ray radiographic systems or PACS.

    After the acquisition of the radiographs on the patient and their storage in the DICOM Source, the radiographs are automatically received by BoneView 1.1-US from the user's DICOM Source through an intermediate DICOM node. Once received by BoneView 1.1-US, the radiographs are automatically processed by the AI algorithm to identify regions of interest. Based on the processing result, BoneView 1.1-US generates result files in DICOM format. These result files consist of a summary table and result images (annotations on a copy of the original images, or annotations to be toggled on/off). BoneView 1.1-US does not alter the original images, nor does it change the order of original images or delete any image from the DICOM Source.

    Once available, the result files are sent by BoneView 1.1-US to the DICOM Destination through the same intermediate DICOM node. The DICOM Destination can be used to visualize the result files provided by BoneView 1.1-US or to transfer the results to another DICOM host for visualization. The users are then able to use them as a concurrent reading aid to provide their diagnosis.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria are not explicitly stated as numerical targets in a table. Instead, the study aims to demonstrate that the device performs with "high sensitivity and high specificity" and that its performance on children/adolescents is "similar" to that on adults. For the clinical study, the acceptance criteria are implicitly that the diagnostic accuracy of readers aided by BoneView is superior to that of readers unaided.

    However, the document provides the performance metrics for both standalone testing and the clinical study.

    Standalone Performance (Children/Adolescents Clinical Performance Study Dataset)

    | Operating Point | Metric | Value (95% Clopper-Pearson CI) | Description |
    |---|---|---|---|
    | High-sensitivity (DOUBT FRACT) | Sensitivity | 0.909 [0.889 - 0.926] | Probability that the device correctly identifies a fracture when one is present. This operating point is designed to be highly sensitive to possible fractures, including subtle ones, and is indicated by a dotted bounding box. |
    | High-sensitivity (DOUBT FRACT) | Specificity | 0.821 [0.796 - 0.844] | Probability that the device correctly identifies the absence of a fracture when none is present. |
    | High-specificity (FRACT) | Sensitivity | 0.792 [0.766 - 0.817] | Probability that the device correctly identifies a fracture when one is present. This operating point is designed to be highly specific, so a detected fracture is very likely a true fracture, and is indicated by a solid bounding box. |
    | High-specificity (FRACT) | Specificity | 0.965 [0.952 - 0.976] | Probability that the device correctly identifies the absence of a fracture when none is present. |
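    The Clopper-Pearson intervals quoted above are the standard exact binomial CI, computable from the Beta distribution. A reference sketch (the submission's actual case counts are not given, so the example numbers are illustrative only):

```python
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact binomial CI from the Beta distribution: k successes in n trials."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

# Illustrative only: 909 detected out of 1,000 fracture-positive images gives
# an interval close to the 0.909 [0.889 - 0.926] row above.
print(clopper_pearson(909, 1000))
```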

    Comparative Standalone Performance (Children/Adolescents vs. Adult)

    | Operating Point | Dataset | Sensitivity (95% CI) | Specificity (95% CI) | 95% CI on the difference (Sensitivity) | 95% CI on the difference (Specificity) |
    |---|---|---|---|---|---|
    | High-sensitivity (DOUBT FRACT) | Adult clinical performance study | 0.928 [0.919 - 0.936] | 0.811 [0.8 - 0.821] | -0.019 [-0.039 - 0.001] | 0.010 [-0.016 - 0.037] |
    | High-sensitivity (DOUBT FRACT) | Children/adolescents clinical performance study | 0.909 [0.889 - 0.926] | 0.821 [0.796 - 0.844] | | |
    | High-specificity (FRACT) | Adult clinical performance study | 0.841 [0.829 - 0.853] | 0.932 [0.925 - 0.939] | -0.049 [-0.079 - -0.021] | 0.033 [0.019 - 0.046] |
    | High-specificity (FRACT) | Children/adolescents clinical performance study | 0.792 [0.766 - 0.817] | 0.965 [0.952 - 0.976] | | |

    Clinical Study Performance (MRMC - Reader Performance with/without AI assistance)

    | Metric | Unaided Performance (95% bootstrap CI) | Aided Performance (95% bootstrap CI) | Increase |
    |---|---|---|---|
    | Specificity | 0.906 (0.898-0.913) | 0.956 (0.951-0.960) | +5% |
    | Sensitivity | 0.648 (0.640-0.656) | 0.752 (0.745-0.759) | +10.4% |

    2. Sample sizes used for the test set and data provenance:

    • Standalone Performance Test Set:
      • Children/Adolescents: 2,000 radiographs (52.8% males, age range [2 – 21]; mean 11.54 +/- 4.7). The anatomical areas of interest included all those in the Indications for Use for this population group.
      • Adults (cited from predicate device K212365): 8,918 radiographs (47.2% males, age range [21 – 113]; mean 52.5 +/- 19.8). The anatomical areas of interest included all those in the Indications for Use for this population group.
    • Clinical Study Test Set (MRMC): 480 cases (31.9% males, age range [21 – 93]; mean 59.2 +/- 16.4). These cases were from all anatomical areas of interest included in BoneView's Indications for Use.
    • Data Provenance: The document states "various manufacturers" (e.g., Canon, Fujifilm, GE Healthcare, Konica Minolta, Philips, Primax, Samsung, Siemens for standalone data; GE Healthcare, Kodak, Konica Minolta, Philips, Samsung for clinical study data). The general context implies a European or North American source for the regulatory submission (France for the manufacturer, FDA for the review). It is explicitly stated that these datasets were independent of training data. The studies are described as retrospective.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Clinical Study (MRMC Test Set): Ground truth was established by a panel of three U.S. board-certified radiologists. No further details on their years of experience are provided, only their certification.
    • Standalone Test Sets (Children/Adolescents & Adult): The document doesn't explicitly state the number or qualifications of experts used to establish ground truth for the standalone test sets. However, it indicates these datasets were used for "diagnostic performances," implying a definitive ground truth. Given the rigorous nature of FDA submissions, it's highly probable that board-certified radiologists or other qualified medical professionals established this ground truth.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

    • Clinical Study (MRMC Test Set): The ground truth was established by a panel of three U.S. board-certified radiologists. The method of adjudication (e.g., majority vote, discussion to consensus) is not explicitly detailed, but it states they "assigned a ground truth label." This strongly suggests a consensus or majority-based method from the panel of three, rather than just 2+1 or 3+1 with a tie-breaker.
    • Standalone Test Sets: Not explicitly stated, though a panel or consensus method is standard for robust ground truth establishment.

    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, if so, what was the effect size of how much human readers improve with AI vs without AI assistance:

    • Yes, a fully-crossed multi-reader, multi-case (MRMC) retrospective reader study was conducted.
    • Effect Size of Improvement with AI Assistance:
      • Specificity: Improved by +5% (from 0.906 unaided to 0.956 aided).
      • Sensitivity: Improved by +10.4% (from 0.648 unaided to 0.752 aided).
      • The study found that "the diagnostic accuracy of readers in the intended use population is superior when aided by BoneView than when unaided by BoneView."
      • Subgroup analysis also found that "Sensitivity and Specificity were higher for Aided reads versus Unaided reads for all of the anatomical areas of interest."

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:

    • Yes, standalone performance testing was conducted for both the children/adolescent population and the adult population (the latter referencing the predicate device's data). The results are provided in the tables under section 1.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • Expert Consensus: The ground truth for the clinical MRMC study was established by a "panel of three U.S. board-certified radiologists who assigned a ground truth label indicating the presence of a fracture and its location." For the standalone testing, although not explicitly stated, it is commonly established by expert interpretation of the radiographs, often through consensus, to determine the presence or absence of fractures.

    8. The sample size for the training set:

    • The training of BoneView was performed on a training dataset of 44,649 radiographs, representing 151,096 images. This dataset covered all anatomical areas of interest in the Indications for Use and was sourced from various manufacturers.

    9. How the ground truth for the training set was established:

    • The document implies that the "training was performed on a training dataset... for all anatomical areas of interest." While it doesn't explicitly state how ground truth was established for this massive training set, it is standard practice for medical imaging AI that ground truth for training data is established through expert annotation (e.g., radiologists, orthopedic surgeons) of the images, typically through a labor-intensive review process.

    K Number: K212365
    Device Name: BoneView
    Manufacturer (Applicant): Gleamer
    Date Cleared: 2022-03-01 (214 days)
    Regulation Number: 892.2090

    Intended Use

    BoneView is intended to analyze radiographs using machine learning techniques to identify and highlight fractures during the review of radiographs of:

    | Study Type (Anatomical Area of Interest) | Compatible Radiographic View(s) |
    |---|---|
    | Ankle | Frontal, Lateral, Oblique |
    | Foot | Frontal, Lateral, Oblique |
    | Knee | Frontal, Lateral |
    | Tibia/Fibula | Frontal, Lateral |
    | Femur | Frontal, Lateral |
    | Wrist | Frontal, Lateral, Oblique |
    | Hand | Frontal, Oblique |
    | Elbow | Frontal, Lateral |
    | Forearm | Frontal, Lateral |
    | Humerus | Frontal, Lateral |
    | Shoulder | Frontal, Lateral, Axillary |
    | Clavicle | Frontal |
    | Pelvis | Frontal |
    | Hip | Frontal, Frog Leg Lateral |
    | Ribs | Frontal Chest, Rib series |
    | Thoracic Spine | Frontal, Lateral |
    | Lumbosacral Spine | Frontal, Lateral |

    BoneView is intended for use as a concurrent reading aid during the interpretation of radiographs. BoneView is for prescription use only and is indicated for adults only.

    Device Description

    BoneView is intended to analyze radiographs using machine learning techniques to identify and highlight fractures during the review of radiographs.

    BoneView can be deployed on-premises or in the cloud and connected to several computing platforms and X-ray imaging platforms, such as X-ray radiographic systems or PACS. More precisely, BoneView can be deployed:

    • In the cloud with a PACS as the DICOM Source
    • On-premises with a PACS as the DICOM Source
    • On-premises with an X-ray system as the DICOM Source

    After the acquisition of the radiographs on the patient and their storage in the DICOM Source, the radiographs are automatically received by BoneView from the user's DICOM Source through an intermediate DICOM node (for example, a specific Gateway, or a dedicated API). The DICOM Source can be the user's image storage system (for example, the Picture Archiving and Communication System, or PACS), or other radiological equipment (for example X-ray systems).

    Once received by BoneView, the radiographs are automatically processed by the AI algorithm to identify regions of interest. Based on the processing result, BoneView generates result files in DICOM format. These result files consist of a summary table and result images (annotations on a copy of the original images or annotations to be toggled on/off). BoneView does not alter the original images, nor does it change the order of original images or delete any image from the DICOM Source.

    Once available, the result files are sent by BoneView to the DICOM Destination through the same intermediate DICOM node. Similar to the DICOM Source, the DICOM Destination can be the user's image storage system (for example, the Picture Archiving and Communication System, or PACS), or other radiological equipment (for example X-ray systems). The DICOM Source and the DICOM Destination are not necessarily identical.

    The DICOM Destination can be used to visualize the result files provided by BoneView or to transfer the results to another DICOM host for visualization. The users are then able to use them as a concurrent reading aid to provide their diagnosis.

    The general layout of images processed by BoneView comprises:

    (1) The "summary table" – it is a first image that is derived from the detected regions of interest in the following result images and that displays the results of the overall study along with the Gleamer – BoneView logo. This summary can be configured to be present or not.

    (2) The result images – provided for all the images processed by BoneView, each containing:

    • Around the regions of interest (if any), a rectangle with a solid or dotted line depending on the confidence of the algorithm (see below)
    • Around the entire image, a white frame showing that the image was processed by BoneView
    • Below the image:
      • The Gleamer BoneView logo
      • The number of regions of interest displayed in the result image
      • (if any) A caution message, if the image was identified as not being within the indications for use of BoneView

    The training of BoneView was performed on a training dataset of 44,649 radiographs, representing 151,096 images (52.4% males; age range [0 – 109]; mean 42.4 +/- 24.6), covering all anatomical areas of interest in the Indications for Use and sourced from various manufacturers. BoneView has been designed to address the problem of missed fractures, including subtle fractures, and thus detects fractures with a high sensitivity. In this regard, the display of findings is triggered by a "high-sensitivity operating point" (DOUBT FRACT) that enables the display of a dotted-line bounding box around the region of interest. Additionally, users need to be confident that when BoneView identifies a fracture, it is actually a fracture. In this regard, additional information is introduced to the user with a "high-specificity operating point" (FRACT).

    These two operating points are implemented in the User Interface as follows:

    • Dotted-line Bounding Box: suspicious area / subtle fracture (when the level of confidence of the AI algorithm associated with the finding is above the "high-sensitivity operating point" and below the "high-specificity operating point"), displayed as a dotted bounding box around the area of interest

    • Solid-line Bounding Box: definite or unequivocal fracture (when the level of confidence of the AI algorithm associated with the finding is above the "high-specificity operating point"), displayed as a solid bounding box around the area of interest

    BoneView can provide 4 levels of results (see the sketch after this list):

    • FRACT: BoneView identified at least one solid-line bounding box on the result images
    • DOUBT FRACT: BoneView did not identify any solid-line bounding box on the result images, but identified at least one dotted-line bounding box
    • NO FRACT: BoneView did not identify any bounding box at all in the result images
    • NOT AVAILABLE: BoneView identified that the original images are outside its Indications for Use
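    A minimal, hypothetical sketch of the four-level result logic listed above. Box confidences and thresholds are placeholders; only the mapping from per-box confidences to the FRACT / DOUBT FRACT / NO FRACT / NOT AVAILABLE labels is illustrated:

```python
# Illustrative operating points; the validated values are not disclosed.
HIGH_SENS_OP, HIGH_SPEC_OP = 0.30, 0.80

def study_result(box_confidences, in_scope=True):
    """Collapse per-box confidences into the study-level BoneView label."""
    if not in_scope:
        return "NOT AVAILABLE"  # image outside the Indications for Use
    if any(c >= HIGH_SPEC_OP for c in box_confidences):
        return "FRACT"          # at least one solid-line bounding box
    if any(c >= HIGH_SENS_OP for c in box_confidences):
        return "DOUBT FRACT"    # only dotted-line bounding box(es)
    return "NO FRACT"           # no bounding box at all

assert study_result([0.9, 0.4]) == "FRACT"
assert study_result([0.5]) == "DOUBT FRACT"
assert study_result([0.1]) == "NO FRACT"
assert study_result([0.9], in_scope=False) == "NOT AVAILABLE"
```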

    AI/ML Overview

    Here's a summary of the acceptance criteria and the study that proves the device meets them, based on the provided text:


    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly present a table of acceptance criteria (i.e., predefined thresholds that the device must meet). Instead, it shows the reported performance of the device from standalone testing and a clinical study. I will present the reported performance, which implicitly are the metrics used to demonstrate effectiveness.

    Standalone Performance (High-Sensitivity Operating Point - DOUBT FRACT):

    | Metric | Global Performance (95% CI) |
    |---|---|
    | Specificity | 0.811 [0.8 - 0.821] |
    | Sensitivity | 0.928 [0.919 - 0.936] |

    Standalone Performance (High-Specificity Operating Point - FRACT):

    | Metric | Global Performance (95% CI) |
    |---|---|
    | Specificity | 0.932 [0.925 - 0.939] |
    | Sensitivity | 0.841 [0.829 - 0.853] |

    Clinical Study (Reader Performance with AI vs. Without AI Assistance):

    | Metric | Unaided (95% CI) | Aided (95% CI) |
    |---|---|---|
    | Specificity | 0.906 [0.898-0.913] | 0.956 [0.951-0.960] |
    | Sensitivity | 0.648 [0.640-0.656] | 0.752 [0.745-0.759] |

    2. Sample Sizes Used for the Test Set and Data Provenance

    1. Standalone Performance Test Set:

      • Sample Size: 8,918 radiographs (n(positive)=3,886, n(negative)=5,032).
      • Data Provenance: The dataset was independent of the data used for model training and establishment of device operating points. It included full anatomical areas of interest for adults (age range [21-113]; mean 52.5 +/- 19.8, 47.2% males). Images were sourced from various manufacturers (Agfa, Fujifilm, GE Healthcare, Kodak, Konica Minolta, Philips, Primax, Samsung, Siemens). No specific country of origin is mentioned, but the variety of manufacturers suggests a diverse dataset. The study description implies it's a retrospective analysis of existing radiographs.
    2. Clinical Study (MRMC) Test Set:

      • Sample Size: 480 cases (31.9% males, age range [21-93]; mean 59.2 +/- 16.4). It covered all anatomical areas of interest listed in BoneView's Indications for Use.
      • Data Provenance: The dataset was independent of the data used for model training and establishment of device operating points. Images were from various manufacturers (GE Healthcare, Kodak, Konica Minolta, Philips, Samsung). The study implies it's a retrospective analysis of existing radiographs.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications

    • Standalone Performance Test Set: The document does not explicitly state how the ground truth was established for the standalone test set (e.g., number of experts). However, given the nature of the clinical study, it's highly probable that similar expert review was used.
    • Clinical Study (MRMC) Test Set:
      • Number of Experts: A panel of three experts.
      • Qualifications: U.S. board-certified radiologists. The document does not specify their years of experience.

    4. Adjudication Method for the Test Set

    • Clinical Study (MRMC) Test Set: Ground truth was assigned by a panel of three U.S. board-certified radiologists. The method implies a consensus or majority rule (e.g., 2+1 or 3+1), as a "ground truth label indicating the presence or absence of a fracture and its location" was assigned per case. The specific adjudication method (e.g., majority vote, independent reads then consensus) is not detailed, but the use of a panel suggests a robust method to establish ground truth.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Yes, an MRMC study was done.
    • Effect Size of Human Readers' Improvement with AI vs. Without AI Assistance (based on the reported deltas):
      • Specificity Improvement: +5% increase (from 0.906 unaided to 0.956 aided).
      • Sensitivity Improvement: +10.4% increase (from 0.648 unaided to 0.752 aided).
      • The study found that "the diagnostic accuracy of readers...is superior when aided by BoneView than when unaided."

    6. Standalone (Algorithm Only) Performance

    • Yes, a standalone performance study was done.
    • The results are detailed in the "Bench Testing" section (7.4) and summarized in the table above for both "high-sensitivity operating point" and "high-specificity operating point." This evaluation used 8,918 radiographs and assessed the detection of fractures with high sensitivity and high specificity.

    7. Type of Ground Truth Used

    • For the Clinical Study (MRMC) and likely for the Standalone Test Set: Expert consensus (a panel of three U.S. board-certified radiologists assigned the ground truth label for presence or absence and location of a fracture).

    8. Sample Size for the Training Set

    • Training Set Sample Size: 44,649 radiographs, representing 151,096 images.
    • Patient Demographics for Training Set: 52.4% males, age range [0-109]; mean 42.4 +/- 24.6.
    • The training data covered "all anatomical areas of interest in the Indications for Use and from various manufacturers."

    9. How the Ground Truth for the Training Set Was Established

    • The document states that the training of BoneView was performed on this dataset. However, it does not explicitly detail how the ground truth for this training set was established. It is implied that fractures were somehow labeled for the supervised deep learning methodology, but the process (e.g., specific number of radiologists, their qualifications, adjudication method) is not described for the training data.
