BoneView 1.1-US is intended to analyze radiographs using machine learning techniques to identify and highlight fractures during the review of radiographs of: Ankle, Foot, Knee, Tibia/Fibula, Wrist, Hand, Elbow, Forearm, Humerus, Shoulder, Clavicle, Pelvis, Hip, Femur, Ribs, Thoracic Spine, Lumbosacral Spine. BoneView 1.1-US is intended for use as a concurrent reading aid during the interpretation of radiographs. BoneView 1.1-US is for prescription use only.
BoneView 1.1-US is a software-only device intended to assist clinicians in the interpretation of limb radiographs of children/adolescents, and of limb, pelvis, rib cage, and dorsolumbar vertebra radiographs of adults. BoneView 1.1-US can be deployed on-premise or in the cloud and connected to several computing platforms and X-ray imaging platforms such as X-ray radiographic systems or PACS. After the acquisition of the radiographs on the patient and their storage in the DICOM Source, the radiographs are automatically received by BoneView 1.1-US from the user's DICOM Source through an intermediate DICOM node. Once received by BoneView 1.1-US, the radiographs are automatically processed by the AI algorithm to identify regions of interest. Based on the processing result, BoneView 1.1-US generates result files in DICOM format. These result files consist of a summary table and result images (annotations on a copy of the original images, or annotations that can be toggled on/off). BoneView 1.1-US does not alter the original images, nor does it change the order of original images or delete any image from the DICOM Source. Once available, the result files are sent by BoneView 1.1-US to the DICOM Destination through the same intermediate DICOM node. The DICOM Destination can be used to visualize the result files provided by BoneView 1.1-US or to transfer the results to another DICOM host for visualization. The users then use the results as a concurrent reading aid when providing their diagnosis.
Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are not explicitly stated as numerical targets in a table. Instead, the study aims to demonstrate that the device performs with "high sensitivity and high specificity" and that its performance on children/adolescents is "similar" to that on adults. For the clinical study, the acceptance criteria are implicitly that the diagnostic accuracy of readers aided by BoneView is superior to that of readers unaided.
However, the document provides the performance metrics for both standalone testing and the clinical study.
Standalone Performance (Children/Adolescents Clinical Performance Study Dataset)
| Operating Point | Metric | Value (95% Clopper-Pearson CI) | Description | 
|---|---|---|---|
| High-sensitivity (DOUBT FRACT) | Sensitivity | 0.909 [0.889 - 0.926] | The probability that the device correctly identifies a fracture when a fracture is present. This operating point is designed to be highly sensitive to possible fractures, potentially including subtle ones, and is indicated by a dotted bounding box. | 
| High-sensitivity (DOUBT FRACT) | Specificity | 0.821 [0.796 - 0.844] | The probability that the device correctly identifies the absence of a fracture when no fracture is present. | 
| High-specificity (FRACT) | Sensitivity | 0.792 [0.766 - 0.817] | The probability that the device correctly identifies a fracture when a fracture is present. This operating point is designed to be highly specific, meaning it provides a high degree of confidence that a detected fracture is indeed a fracture, and is indicated by a solid bounding box. | 
| High-specificity (FRACT) | Specificity | 0.965 [0.952 - 0.976] | The probability that the device correctly identifies the absence of a fracture when no fracture is present. | 
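The confidence intervals above are exact (Clopper-Pearson) binomial intervals on a proportion. As a minimal illustrative sketch (not the sponsor's code), such an interval can be computed from the beta distribution; the 909/1,000 count below is back-calculated from the reported 0.909 sensitivity and the 1,000 fracture-positive examinations stated later in the document, purely for illustration.

```python
# Minimal sketch (not the sponsor's code): exact Clopper-Pearson interval for a proportion
# such as sensitivity (TP / (TP + FN)) or specificity (TN / (TN + FP)).
from scipy.stats import beta

def clopper_pearson(successes: int, n: int, alpha: float = 0.05) -> tuple[float, float]:
    """Return (lower, upper) exact binomial confidence bounds."""
    lower = 0.0 if successes == 0 else beta.ppf(alpha / 2, successes, n - successes + 1)
    upper = 1.0 if successes == n else beta.ppf(1 - alpha / 2, successes + 1, n - successes)
    return lower, upper

# Illustrative check: ~909 true positives out of the 1,000 fracture-positive examinations
# (back-calculated from the reported 0.909 sensitivity) gives roughly [0.889, 0.926].
print(clopper_pearson(909, 1000))
```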
Comparative Standalone Performance (Children/Adolescents vs. Adult)
| Operating Point | Dataset | Sensitivity (95% CI) | Specificity (95% CI) |
|---|---|---|---|
| High-sensitivity (DOUBT FRACT) | Adult clinical performance study dataset | 0.928 [0.919 - 0.936] | 0.811 [0.8 - 0.821] |
| High-sensitivity (DOUBT FRACT) | Children/adolescents clinical performance study dataset | 0.909 [0.889 - 0.926] | 0.821 [0.796 - 0.844] |
| High-sensitivity (DOUBT FRACT) | 95% CI on the difference | -0.019 [-0.039 - 0.001] | 0.010 [-0.016 - 0.037] |
| High-specificity (FRACT) | Adult clinical performance study dataset | 0.841 [0.829 - 0.853] | 0.932 [0.925 - 0.939] |
| High-specificity (FRACT) | Children/adolescents clinical performance study dataset | 0.792 [0.766 - 0.817] | 0.965 [0.952 - 0.976] |
| High-specificity (FRACT) | 95% CI on the difference | -0.049 [-0.079 - -0.021] | 0.033 [0.019 - 0.046] |
Clinical Study Performance (MRMC - Reader Performance with/without AI assistance)
| Metric | Unaided Performance (95% bootstrap CI) | Aided Performance (95% bootstrap CI) | Increase | 
|---|---|---|---|
| Specificity | 0.906 (0.898-0.913) | 0.956 (0.951-0.960) | +5% | 
| Sensitivity | 0.648 (0.640-0.656) | 0.752 (0.745-0.759) | +10.4% | 
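Note that the reported increases are absolute differences (0.956 − 0.906 = 0.050 and 0.752 − 0.648 = 0.104). The document gives 95% bootstrap CIs but does not describe the resampling scheme; the sketch below assumes a simple nonparametric bootstrap over cases on pooled reader decisions, with hypothetical array names, purely to illustrate how such intervals can be obtained.

```python
# Minimal sketch, assuming a simple nonparametric bootstrap over cases; the actual
# resampling scheme (e.g., over readers and/or cases) is not described in the document.
# `decisions` is a hypothetical array of shape (readers, cases) with 1 = "fracture called";
# `truth` holds the per-case ground-truth labels (1 = fracture present).
import numpy as np

rng = np.random.default_rng(0)

def pooled_sensitivity(decisions: np.ndarray, truth: np.ndarray) -> float:
    return decisions[:, truth == 1].mean()          # pooled over readers and positive cases

def pooled_specificity(decisions: np.ndarray, truth: np.ndarray) -> float:
    return (decisions[:, truth == 0] == 0).mean()   # pooled over readers and negative cases

def bootstrap_ci(decisions, truth, stat, n_boot=2000, alpha=0.05):
    n_cases = truth.shape[0]
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n_cases, n_cases)     # resample cases with replacement
        stats.append(stat(decisions[:, idx], truth[idx]))
    return tuple(np.quantile(stats, [alpha / 2, 1 - alpha / 2]))
```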
2. Sample sizes used for the test set and data provenance:
- Standalone Performance Test Set:
- Children/Adolescents: 2,000 radiographs (52.8% males, age range [2 – 21]; mean 11.54 +/- 4.7). The anatomical areas of interest included all those in the Indications for Use for this population group.
 - Adults (cited from predicate device K212365): 8,918 radiographs (47.2% males, age range [21 – 113]; mean 52.5 +/- 19.8). The anatomical areas of interest included all those in the Indications for Use for this population group.
 
 - Clinical Study Test Set (MRMC): 480 cases (31.9% males, age range [21 – 93]; mean 59.2 +/- 16.4). These cases were from all anatomical areas of interest included in BoneView's Indications for Use.
 - Data Provenance: The document states "various manufacturers" (e.g., Canon, Fujifilm, GE Healthcare, Konica Minolta, Philips, Primax, Samsung, Siemens for standalone data; GE Healthcare, Kodak, Konica Minolta, Philips, Samsung for clinical study data). The general context implies a European or North American source for the regulatory submission (France for the manufacturer, FDA for the review). It is explicitly stated that these datasets were independent of training data. The studies are described as retrospective.
 
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Clinical Study (MRMC Test Set): Ground truth was established by a panel of three U.S. board-certified radiologists. No further details on their years of experience are provided, only their certification.
 - Standalone Test Sets (Children/Adolescents & Adult): The document doesn't explicitly state the number or qualifications of experts used to establish ground truth for the standalone test sets. However, it indicates these datasets were used for "diagnostic performances," implying a definitive ground truth. Given the rigorous nature of FDA submissions, it's highly probable that board-certified radiologists or other qualified medical professionals established this ground truth.
 
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- Clinical Study (MRMC Test Set): The ground truth was established by a panel of three U.S. board-certified radiologists. The method of adjudication (e.g., majority vote, discussion to consensus) is not explicitly detailed, but it states they "assigned a ground truth label." This strongly suggests a consensus or majority-based method from the panel of three, rather than just 2+1 or 3+1 with a tie-breaker.
 - Standalone Test Sets: Not explicitly stated, though a panel or consensus method is standard for robust ground truth establishment.
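The exact adjudication rule for the three-radiologist panel is not described in the document. For illustration only, a simple case-level majority vote could look like the following sketch; the labels and tie-handling are assumptions, not the study's actual procedure.

```python
# Illustration only: one possible adjudication rule (simple majority of a three-reader panel).
# The labels and the tie-handling below are assumptions, not the study's documented procedure.
from collections import Counter

def majority_label(panel_labels: list[str]) -> str:
    """Return the label chosen by most panel members, e.g. 'fracture' / 'no fracture'."""
    label, count = Counter(panel_labels).most_common(1)[0]
    if 2 * count <= len(panel_labels):          # no strict majority -> send back for consensus
        return "consensus review required"
    return label

print(majority_label(["fracture", "fracture", "no fracture"]))  # -> 'fracture'
```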
 
5. Whether a multi-reader, multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improved with AI vs. without AI assistance:
- Yes, a fully-crossed multi-reader, multi-case (MRMC) retrospective reader study was conducted.
 - Effect Size of Improvement with AI Assistance:
- Specificity: Improved by +5% (from 0.906 unaided to 0.956 aided).
 - Sensitivity: Improved by +10.4% (from 0.648 unaided to 0.752 aided).
 - The study found that "the diagnostic accuracy of readers in the intended use population is superior when aided by BoneView than when unaided by BoneView."
 - Subgroup analysis also found that "Sensitivity and Specificity were higher for Aided reads versus Unaided reads for all of the anatomical areas of interest."
 
 
6. Whether standalone performance testing (i.e., algorithm-only, without human-in-the-loop) was done:
- Yes, standalone performance testing was conducted for both the children/adolescent population and the adult population (the latter referencing the predicate device's data). The results are provided in the tables under section 1.
 
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Expert Consensus: The ground truth for the clinical MRMC study was established by a "panel of three U.S. board-certified radiologists who assigned a ground truth label indicating the presence of a fracture and its location." For the standalone testing, although not explicitly stated, it is commonly established by expert interpretation of the radiographs, often through consensus, to determine the presence or absence of fractures.
 
8. The sample size for the training set:
- The training of BoneView was performed on a training dataset of 44,649 radiographs, representing 151,096 images. This dataset covered all anatomical areas of interest in the Indications for Use and was sourced from various manufacturers.
 
9. How the ground truth for the training set was established:
- The document implies that the "training was performed on a training dataset... for all anatomical areas of interest." While it doesn't explicitly state how ground truth was established for this massive training set, it is standard practice for medical imaging AI that ground truth for training data is established through expert annotation (e.g., radiologists, orthopedic surgeons) of the images, typically through a labor-intensive review process.
 
Gleamer
c/o Antoine Tournier
Head of Quality & Regulatory Affairs
5 Avenue du Général de Gaulle
Saint Mandé, 94160
FRANCE
March 2, 2023
Re: K222176
Trade/Device Name: BoneView 1.1-US
Regulation Number: 21 CFR 892.2090
Regulation Name: Radiological computer assisted detection and diagnosis software for fracture
Regulatory Class: Class II
Product Code: QBS
Dated: January 31, 2023
Received: February 1, 2023
Dear Antoine Tournier:
We have reviewed your Section 510(k) premarket notification of intent to market the device referenced above and have determined the device is substantially equivalent (for the indications for use stated in the enclosure) to legally marketed predicate devices marketed in interstate commerce prior to May 28, 1976, the enactment date of the Medical Device Amendments, or to devices that have been reclassified in accordance with the provisions of the Federal Food, Drug, and Cosmetic Act (Act) that do not require approval of a premarket approval application (PMA). You may, therefore, market the device, subject to the general controls provisions of the Act. Although this letter refers to your product as a device, please be aware that some cleared products may instead be combination products. The 510(k) Premarket Notification Database located at https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfpmn/pmn.cfm identifies combination product submissions.

The general controls provisions of the Act include requirements for annual registration, listing of devices, good manufacturing practice, labeling, and prohibitions against misbranding and adulteration. Please note: CDRH does not evaluate information related to contract liability warranties. We remind you, however, that device labeling must be truthful and not misleading.
If your device is classified (see above) into either class II (Special Controls) or class III (PMA), it may be subject to additional controls. Existing major regulations affecting your device can be found in the Code of Federal Regulations, Title 21, Parts 800 to 898. In addition, FDA may publish further announcements concerning your device in the Federal Register.
Please be advised that FDA's issuance of a substantial equivalence determination does not mean that FDA has made a determination that your device complies with other requirements of the Act or any Federal statutes and regulations administered by other Federal agencies. You must comply with all the Act's requirements, including, but not limited to: registration and listing (21 CFR Part 807); labeling (21 CFR Part 801 and Part 809); medical device reporting (reporting of medical device-related adverse events) (21 CFR Part 803) for devices or postmarketing safety reporting (21 CFR 4, Subpart B) for combination products (see https://www.fda.gov/combination-products/guidance-regulatory-information/postmarketing-safety-reportingcombination-products); good manufacturing practice requirements as set forth in the quality systems (QS) regulation (21 CFR Part 820) for devices or current good manufacturing practices (21 CFR 4, Subpart A) for combination products; and, if applicable, the electronic product radiation control provisions (Sections 531-542 of the Act); 21 CFR 1000-1050.
 
Also, please note the regulation entitled, "Misbranding by reference to premarket notification" (21 CFR Part 807.97). For questions regarding the reporting of adverse events under the MDR regulation (21 CFR Part 803), please go to https://www.fda.gov/medical-device-safety/medical-device-reportingmdr-how-report-medical-device-problems.
For comprehensive regulatory information about radiation-emitting products, including information about labeling regulations, please see Device Advice (https://www.fda.gov/medicaldevices/device-advice-comprehensive-regulatory-assistance) and CDRH Learn (https://www.fda.gov/training-and-continuing-education/cdrh-learn). Additionally, you may contact the Division of Industry and Consumer Education (DICE) to ask a question about a specific regulatory topic. See the DICE website (https://www.fda.gov/medical-device-advice-comprehensive-regulatoryassistance/contact-us-division-industry-and-consumer-education-dice) for more information or contact DICE by email (DICE@fda.hhs.gov) or phone (1-800-638-2041 or 301-796-7100).
Sincerely,

Jessica Lamb

Jessica Lamb, Ph.D.
Assistant Director
Imaging Software Team
DHT8B: Division of Radiological Imaging Devices and Electronic Products
OHT8: Office of Radiological Health
Office of Product Evaluation and Quality
Center for Devices and Radiological Health
Enclosure
Indications for Use
510(k) Number (if known)
Device Name
BoneView 1.1-US
Indications for Use (Describe)
BoneView 1.1-US is intended to analyze radiographs using machine learning techniques to identify and highlight fractures during the review of radiographs of:
| Study Type (Anatomical Area ofInterest) | Compatible Radiographic View(s) | Patient population* | 
|---|---|---|
| Ankle | Frontal, Lateral, Oblique | Adults & Children/Adolescents | 
| Foot | Frontal, Lateral, Oblique | Adults & Children/Adolescents | 
| Knee | Frontal, Lateral | Adults & Children/Adolescents | 
| Tibia/Fibula | Frontal, Lateral | Adults & Children/Adolescents | 
| Wrist | Frontal, Lateral, Oblique | Adults & Children/Adolescents | 
| Hand | Frontal, Oblique | Adults & Children/Adolescents | 
| Elbow | Frontal, Lateral | Adults & Children/Adolescents | 
| Forearm | Frontal, Lateral | Adults & Children/Adolescents | 
| Humerus | Frontal, Lateral | Adults & Children/Adolescents | 
| Shoulder | Frontal, Lateral, Axillary | Adults & Children/Adolescents | 
| Clavicle | Frontal | Adults & Children/Adolescents | 
| Pelvis | Frontal | Adults only | 
| Hip | Frontal, Frog Leg Lateral | Adults only | 
| Femur | Frontal, Lateral | Adults only | 
| Ribs | Frontal Chest, Rib series | Adults only | 
| Thoracic Spine | Frontal, Lateral | Adults only | 
| Lumbosacral Spine | Frontal, Lateral | Adults only | 
- Adults are patients aged above 21 years old and Children/Adolescents are patients aged from 2 to 21 years old.
 
BoneView 1.1-US is intended for use as a concurrent reading aid during the interpretation of radiographs. BoneView 1.1-US is for prescription use only.
Type of Use (Select one or both, as applicable)

[X] Prescription Use (Part 21 CFR 801 Subpart D)
[ ] Over-The-Counter Use (21 CFR 801 Subpart C)
CONTINUE ON A SEPARATE PAGE IF NEEDED.
This section applies only to requirements of the Paperwork Reduction Act of 1995.
DO NOT SEND YOUR COMPLETED FORM TO THE PRA STAFF EMAIL ADDRESS BELOW.
The burden time for this collection of information is estimated to average 79 hours per response, including the time to review instructions, search existing data sources, gather and maintain the data needed and complete and review the collection of information. Send comments regarding this burden estimate or any other aspect of this information collection, including suggestions for reducing this burden, to:
Department of Health and Human Services
Food and Drug Administration
Office of Chief Information Officer
Paperwork Reduction Act (PRA) Staff
PRAStaff@fda.hhs.gov
"An agency may not conduct or sponsor, and a person is not required to respond to, a collection of information unless it displays a currently valid OMB number."
Form Approved: OMB No. 0910-0120 Expiration Date: 06/30/2023 See PRA Statement below.
Date prepared: January 31, 2023
In accordance with 21 CFR 807.87(h) and 21 CFR 807.92 the 510(k) Summary for BoneView 1.1-US is provided below.
1. Submitter
| Submitter | GLEAMER SAS, 5 avenue du Général de Gaulle, 94160 Saint-Mandé - FRANCE |
|---|---|
| Primary Contact Person | Antoine Tournier, Head of Quality & Regulatory Affairs, Tel: 0033 6 15 81 23 45, Email: antoine.tournier@gleamer.ai |
| Secondary Contact Person | Christian Allouche, CEO, Tel: 0033 6 58 53 70 46, Email: christian@gleamer.ai |
2. Device
| Trade Name | BoneView 1.1-US | 
|---|---|
| 510(k) reference | K222176 | 
| Common Name | Radiological computer assisted detection/diagnosis software for fracture | 
| Regulation | 21 CFR 892.2090 | 
| Product Code | QBS | 
| Classification | Class II | 
3. Predicate Device
| Predicate Device | Gleamer BoneView | 
|---|---|
| 510(k) reference | K212365 | 
4. Device Description
BoneView 1.1-US is a software-only device intended to assist clinicians in the interpretation of:
- limb radiographs of children/adolescents, and
- limb, pelvis, rib cage, and dorsolumbar vertebra radiographs of adults.
 
BoneView 1.1-US can be deployed on-premise or in the cloud and connected to several computing platforms and X-ray imaging platforms such as X-ray radiographic systems or PACS. More precisely, BoneView 1.1-US can be deployed:
- In the cloud with a PACS as the DICOM Source
- On premise with a PACS as the DICOM Source
- On premise with an X-ray system as the DICOM Source
 
After the acquisition of the radiographs on the patient and their storage in the DICOM Source, the radiographs are automatically received by BoneView 1.1-US from the user's DICOM Source through an intermediate DICOM node (for example, a specific Gateway, or a dedicated API). The DICOM Source can be the user's image storage system (for example, the Picture Archiving and Communication System, or PACS), or other radiological equipment (for example X-ray systems).
Once received by BoneView 1.1-US, the radiographs are automatically processed by the AI algorithm to identify regions of interest. Based on the processing result, BoneView 1.1-US generates result files in DICOM format. These result files consist of a summary table and result images (annotations on a copy of the original images or annotations to be toggled on/off). BoneView 1.1-US does not alter the original images, nor does it change the order of original images or delete any image from the DICOM Source.
Once available, the result files are sent by BoneView 1.1-US to the DICOM Destination through the same intermediate DICOM node. Similar to the DICOM Source, the DICOM Destination can be the user's image storage system (for example, the Picture Archiving and Communication System, or PACS), or other radiological equipment (for example X-ray systems). The DICOM Source and the DICOM Destination are not necessarily identical.
The DICOM Destination can be used to visualize the result files provided by BoneView 1.1-US or to transfer the results to another DICOM host for visualization. The users then use the results as a concurrent reading aid when providing their diagnosis.
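The paragraphs above describe a store-and-forward DICOM round trip: originals arrive from the DICOM Source via an intermediate node, the algorithm produces new DICOM result objects, and those objects are forwarded to the DICOM Destination. The sketch below is a conceptual illustration of that flow, not Gleamer's implementation: only pydicom's dcmread/save_as are real API calls, while run_fracture_model, build_result_image, and send_to_destination are hypothetical stubs standing in for the AI inference, result-image rendering, and transfer steps.

```python
# Conceptual sketch of the described DICOM round trip; not Gleamer's implementation.
from copy import deepcopy
from pathlib import Path

import pydicom


def run_fracture_model(ds):
    """Hypothetical stub: would return detected regions of interest with confidence scores."""
    return []


def build_result_image(ds, regions):
    """Hypothetical stub: would return an annotated *copy* of the original image as a new DICOM object."""
    return deepcopy(ds)  # the original dataset is never modified


def send_to_destination(path):
    """Hypothetical stub: would forward the result object to the DICOM Destination (e.g., via C-STORE)."""


def process_study(incoming_dir: Path, outgoing_dir: Path) -> None:
    originals = [pydicom.dcmread(p) for p in sorted(incoming_dir.glob("*.dcm"))]
    results = [build_result_image(ds, run_fracture_model(ds)) for ds in originals]
    for out_ds in results:
        out_path = outgoing_dir / f"result_{out_ds.SOPInstanceUID}.dcm"
        out_ds.save_as(out_path)       # result files are written as new DICOM objects
        send_to_destination(out_path)  # originals stay untouched in the DICOM Source
```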
The general layout of images processed by BoneView comprises:
(1) The "summary table" – a first image, derived from the regions of interest detected in the result images that follow, which displays the results of the overall study along with the Gleamer – BoneView logo. This summary can be configured to be present or not.
(2) The result images – provided for all the images that were processed by BoneView, they contain:
- Around the Regions of Interest (if any), a rectangle with a solid or dotted line depending on the confidence of the algorithm (see below)
- Around the entire image, a white frame showing that the images were processed by BoneView
- Below the image:
  - The Gleamer BoneView logo
  - The number of Regions of Interest that are displayed in the result image
  - (if any) The caution message, if it was identified that the image was not part of the indications for use of BoneView
 
 
The training of BoneView was performed on a training dataset of 44,649 radiographs, representing 151,096 images (52.4% of males, with age: range [0 – 109]; mean 42.4 +/- 24.6), covering all anatomical areas of interest in the Indications for Use and acquired on equipment from various manufacturers. BoneView has been designed to solve the problem of missed fractures, including subtle fractures, and thus detects fractures with a high sensitivity. In this regard, the display of findings is triggered by a "high-sensitivity operating point" (DOUBT FRACT) that enables the display of a dotted-line bounding box around the region of interest. Additionally, users need to be confident that when BoneView identifies a fracture, it is actually a fracture. In this regard, additional information is provided to the user with a "high-specificity operating point" (FRACT).
These two operating points are implemented in the User Interface as follows:
- Dotted-line Bounding Box: suspicious area / subtle fracture (when the level of confidence of the AI algorithm associated with the finding is above the "high-sensitivity operating point" and below the "high-specificity operating point"), displayed as a dotted bounding box around the area of interest
- Solid-line Bounding Box: definite or unequivocal fracture (when the level of confidence of the AI algorithm associated with the finding is above the "high-specificity operating point"), displayed as a solid bounding box around the area of interest
 
BoneView can provide 4 levels of results:
- FRACT: BoneView identified at least one solid-line bounding box on the result images,
- DOUBT FRACT: BoneView did not identify any solid-line bounding box on the result images but identified at least one dotted-line bounding box in the result images,
- NO FRACT: BoneView did not identify any bounding box at all in the result images,
- NOT AVAILABLE: BoneView identified that the original images are out of its Indications for Use.
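The two operating points and the four result levels described above amount to a simple two-threshold rule on the per-region confidence scores. The sketch below illustrates that mapping; the numeric threshold values are placeholders, since the actual operating points are not disclosed in this summary.

```python
# Minimal sketch of the two-threshold logic described above; the actual operating-point
# values are not disclosed in the 510(k) summary, so the thresholds here are placeholders.
HIGH_SENSITIVITY_OP = 0.30   # placeholder "DOUBT FRACT" threshold
HIGH_SPECIFICITY_OP = 0.70   # placeholder "FRACT" threshold

def box_style(confidence: float) -> str | None:
    """Map a region-of-interest confidence score to a bounding-box style."""
    if confidence >= HIGH_SPECIFICITY_OP:
        return "solid"    # definite / unequivocal fracture
    if confidence >= HIGH_SENSITIVITY_OP:
        return "dotted"   # suspicious area / subtle fracture
    return None           # no box displayed

def study_result(confidences: list[float], in_indications: bool = True) -> str:
    """Map all region confidences in a study to one of the four result levels."""
    if not in_indications:
        return "NOT AVAILABLE"
    styles = {box_style(c) for c in confidences}
    if "solid" in styles:
        return "FRACT"
    if "dotted" in styles:
        return "DOUBT FRACT"
    return "NO FRACT"
```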
 
5. Intended use/Indications for use
BoneView 1.1-US is intended to analyze radiographs using machine learning techniques to identify and highlight fractures during the review of radiographs of:
| Study Type (Anatomical Area of Interest) | Compatible Radiographic View(s) | Patient population* | 
|---|---|---|
| Ankle | Frontal, Lateral, Oblique | Adults & Children/Adolescents | 
| Foot | Frontal, Lateral, Oblique | Adults & Children/Adolescents | 
| Knee | Frontal, Lateral | Adults & Children/Adolescents | 
| Tibia/Fibula | Frontal, Lateral | Adults & Children/Adolescents | 
| Wrist | Frontal, Lateral, Oblique | Adults & Children/Adolescents | 
| Hand | Frontal, Oblique | Adults & Children/Adolescents | 
| Elbow | Frontal, Lateral | Adults & Children/Adolescents | 
| Forearm | Frontal, Lateral | Adults & Children/Adolescents | 
| Humerus | Frontal, Lateral | Adults & Children/Adolescents | 
| Shoulder | Frontal, Lateral, Axillary | Adults & Children/Adolescents | 
| Clavicle | Frontal | Adults & Children/Adolescents | 
| Pelvis | Frontal | Adults only | 
| Hip | Frontal, Frog Leg Lateral | Adults only | 
| Femur | Frontal, Lateral | Adults only | 
| Ribs | Frontal Chest, Rib series | Adults only | 
| Thoracic Spine | Frontal, Lateral | Adults only | 
| Lumbosacral Spine | Frontal, Lateral | Adults only | 
- Adults are patients aged above 21 years old and Children/Adolescents are patients aged from 2 to 21 years old.
 
BoneView 1.1-US is intended for use as a concurrent reading aid during the interpretation of radiographs. BoneView 1.1-US is for prescription use only.
6. Substantial equivalence
| Features and Characteristics | Subject Device: Gleamer BoneView 1.1-US | Predicate Device: Gleamer BoneView 1.0-US |
|---|---|---|
| Regulation Information | | |
| Regulation Number/Name | 21 CFR 892.2090 / Radiological Computer Assisted Detection and Diagnosis Software for Fracture | Same |
| Product Code | QBS | Same |
| Regulation Description | A radiological computer assisted detection and diagnostic software for suspected fracture is an image processing device intended to aid in the detection, localization, and/or characterization of fracture on acquired medical images (e.g. radiography, MR, CT). The device detects, identifies, and/or characterizes fracture based on features or information extracted from images, and may provide information about the presence, location, and/or characteristics of the fracture to the user. Primary diagnostic and patient management decisions are made by the clinical user. | Same |
| Intended Use | The device is intended to aid in the detection, localization, and characterization of fractures on acquired medical images (per 21 CFR 892.2090 Radiological Computer Assisted Detection and Diagnosis Software For Fracture). | Same |
| Image Modality | 2D X-ray Images | Same |
| Clinical Finding and Clinical Output | Fracture. To inform the primary diagnostic and patient management decisions that are made by the clinical user. | Same |
| Mode of action | Image processing software using machine learning to aid in identifying and highlighting fractures during the review of radiographs. | Same |
| Patient population and Anatomic Areas of Interest | Adults (greater than 21 years of age) and Children/Adolescents (between 2 years of age and 21 years of age): Ankle, Foot, Knee, Tibia/Fibula, Wrist, Hand, Elbow, Forearm, Humerus, Shoulder, Clavicle. Adults (greater than 21 years of age) only: Pelvis, Hip, Femur, Ribs, Thoracic Spine, Lumbosacral Spine | Adults (greater than 21 years of age) only: Ankle, Foot, Knee, Tibia/Fibula, Femur, Wrist, Hand, Elbow, Forearm, Humerus, Shoulder, Clavicle, Pelvis, Hip, Ribs, Thoracic Spine, Lumbosacral Spine |
| Intended Users | The intended users of BoneView are clinicians with the authority to diagnose fractures in various settings including primary care (e.g., family practice, internal medicine), emergency medicine, urgent care, and specialty care (e.g. orthopedics), as well as radiologists who review radiographs across settings. | Same |
| Software and Technical Information | | |
| Machine Learning Methodology | Supervised Deep Learning | Same |
| Image Source | DICOM Source (e.g., imaging device, intermediate DICOM node, PACS system, etc.) | Same |
| Image Viewing | PACS system. Image annotations made on copy of original image or image annotations toggled on/off | Same |
| Deployment Platform | Deployment on-premise or on cloud and connection to several computing platforms and X-ray imaging platforms such as X-ray radiographic systems, or PACS | Same |
| Privacy | HIPAA Compliant | Same |
| Software Level of Concern | Moderate | Same |
7. Performance data
7.1. Biocompatibility Testing
As a standalone software, BoneView has no direct patient or user contacting components. Therefore, biocompatibility information is not required for this device.
7.2. Software Verification and Validation Testing
BoneView is a standalone software that is considered a moderate level of concern as per the guidance document from the FDA: "Guidance for the Content of Premarket Submissions for Software in Medical Devices". Indeed, a failure or latent design flaw of BoneView could directly result in minor injury to the patient or operator.
Consequently, software verification and validation testing were conducted and documented as per the requirements of the abovementioned FDA guidance document for a moderate level of concern device.
7.3. Electrical safety and Electromagnetic compatibility Testing
As a standalone software, BoneView is not subject to electromagnetic compatibility or electrical safety testing activities. Therefore, Electrical safety and Electromagnetic compatibility information is not required for this device.
7.4. Bench Testing
7.4.1. Testing for the children/adolescent population
In order to include the children and adolescents population in the indications for use of BoneView 1.1-US, Gleamer performed a standalone performance testing on a dataset of 2,000 radiographs (52.8% of males, with age: range [2 – 21]; mean 11.54 +/- 4.7) for all anatomical areas of interest in the Indications for Use for the children and adolescents population and from various manufacturers (Canon, Fujifilm, GE Healthcare, Konica Minolta, Philips, Primax, Samsung, Siemens). This dataset was independent of the data used for model training, tuning, and establishment of device operating points.
The overall goal of the conducted study was to compare the diagnostic performances of BoneView 1.1-US on the children/adolescents clinical performance study dataset to the diagnostic performances of BoneView on the adult clinical performance study dataset (included in the submission of the predicate device).
The results of the study demonstrated that BoneView 1.1-US detects fractures in radiographs with similar performances on the adult population and on the children/adolescents population:
Sensitivity (with 95% Clopper-Pearson CI) and Specificity (with 95% Clopper-Pearson CI) of BoneView 1.1-US at the examination-level at the high-sensitivity operating point on the children/adolescents clinical performance study dataset vs. the adult clinical performance study dataset

| Operating Point | Dataset | Sensitivity | Specificity |
|---|---|---|---|
| High-sensitivity operating point (DOUBT FRACT) | Adult clinical performance study dataset | 0.928 [0.919 - 0.936] | 0.811 [0.8 - 0.821] |
| | Children/adolescents clinical performance study dataset | 0.909 [0.889 - 0.926] | 0.821 [0.796 - 0.844] |
| | 95% confidence interval on the difference | -0.019 [-0.039 - 0.001] | 0.010 [-0.016 - 0.037] |

Sensitivity (with 95% Clopper-Pearson CI) and Specificity (with 95% Clopper-Pearson CI) of BoneView 1.1-US at the examination-level at the high-specificity operating point on the children/adolescents clinical performance study dataset vs. the adult clinical performance study dataset

| Operating Point | Dataset | Specificity | Sensitivity |
|---|---|---|---|
| High-specificity operating point (FRACT) | Adult clinical performance study dataset | 0.932 [0.925 - 0.939] | 0.841 [0.829 - 0.853] |
| | Children/adolescents clinical performance study dataset | 0.965 [0.952 - 0.976] | 0.792 [0.766 - 0.817] |
| | 95% confidence interval on the difference | 0.033 [0.019 - 0.046] | -0.049 [-0.079 - -0.021] |
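The summary does not state which method was used for the 95% confidence intervals on the sensitivity and specificity differences. As a rough illustration, a Wald (normal-approximation) interval for the difference of two independent proportions, using the stated sample sizes, reproduces the reported sensitivity-difference interval closely; this is an assumption about the method, not a statement of what was actually done.

```python
# Minimal sketch, assuming a Wald (normal-approximation) interval for the difference of two
# independent proportions; the 510(k) summary does not state which method was actually used.
from math import sqrt

def diff_ci(p1: float, n1: int, p2: float, n2: int, z: float = 1.96):
    """Approximate 95% CI for p1 - p2 (e.g., children/adolescents vs. adult sensitivity)."""
    diff = p1 - p2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff - z * se, diff + z * se

# Illustrative check: children/adolescents (1,000 positives) vs. adult (3,886 positives)
# sensitivity at the high-sensitivity operating point.
print(diff_ci(0.909, 1000, 0.928, 3886))   # roughly (-0.039, 0.001)
```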
In addition to demonstrating equivalence with the performances on the adult population, the results of the standalone testing demonstrated that BoneView detects fractures in radiographs with high sensitivity and high specificity:
Specificity (with 95% Clopper-Pearson CI) and Sensitivity (with 95% Clopper-Pearson CI) of BoneView at the examination-level at the high-sensitivity operating point and high-specificity operating point on the children/adolescents clinical performance study dataset

| Standalone Performance | High-sensitivity operating point: Specificity (95% Clopper-Pearson CI) | High-sensitivity operating point: Sensitivity (95% Clopper-Pearson CI) | High-specificity operating point: Specificity (95% Clopper-Pearson CI) | High-specificity operating point: Sensitivity (95% Clopper-Pearson CI) |
|---|---|---|---|---|
| Global (n positive = 1,000; n negative = 1,000) | 0.821 [0.796 - 0.844] | 0.909 [0.889 - 0.926] | 0.965 [0.952 - 0.976] | 0.792 [0.766 - 0.817] |
Specificity (with 95% Clopper-Pearson CI) and Sensitivity (with 95% Clopper-Pearson CI) of BoneView at the examination-level for the subgroup analysis of anatomical areas of interest at the high-sensitivity operating point and high-specificity operating point on the children/adolescents clinical performance study dataset

| Anatomical Area of Interest | High-sensitivity operating point (DOUBT FRACT): Specificity (95% Clopper-Pearson CI) | High-sensitivity operating point (DOUBT FRACT): Sensitivity (95% Clopper-Pearson CI) | High-specificity operating point (FRACT): Specificity (95% Clopper-Pearson CI) | High-specificity operating point (FRACT): Sensitivity (95% Clopper-Pearson CI) |
|---|---|---|---|---|
| Ankle (n positive = 88, n negative = 157) | TN=119, FP=38; 0.758 [0.683 - 0.823] | TP=75, FN=13; 0.852 [0.761 - 0.919] | TN=146, FP=11; 0.93 [0.878 - 0.965] | TP=57, FN=31; 0.648 [0.539 - 0.747] |
| Clavicle (n positive = 113, n negative = 45) | TN=36, FP=9; 0.8 [0.654 - 0.904] | TP=110, FN=3; 0.973 [0.924 - 0.994] | TN=44, FP=1; 0.978 [0.882 - 0.999] | TP=108, FN=5; 0.956 [0.9 - 0.985] |
| Elbow (n positive = 96, n negative = 120) | TN=88, FP=32; 0.733 [0.645 - 0.81] | TP=87, FN=9; 0.906 [0.829 - 0.956] | TN=118, FP=2; 0.983 [0.941 - 0.998] | TP=60, FN=36; 0.625 [0.52 - 0.722] |
| Foot (n positive = 151, n negative = 173) | TN=126, FP=47; 0.728 [0.656 - 0.793] | TP=129, FN=22; 0.854 [0.788 - 0.906] | TN=161, FP=12; 0.931 [0.882 - 0.964] | TP=113, FN=38; 0.748 [0.671 - 0.815] |
| Forearm (n positive = 65, n negative = 40) | TN=35, FP=5; 0.875 [0.732 - 0.958] | TP=59, FN=6; 0.908 [0.81 - 0.965] | TN=39, FP=1; 0.975 [0.868 - 0.999] | TP=53, FN=12; 0.815 [0.7 - 0.901] |
| Hand (n positive = 188, n negative = 160) | TN=142, FP=18; 0.887 [0.828 - 0.932] | TP=174, FN=14; 0.926 [0.878 - 0.959] | TN=154, FP=6; 0.963 [0.92 - 0.986] | TP=154, FN=34; 0.819 [0.757 - 0.871] |
| Humerus (n positive = 24, n negative = 12) | TN=8, FP=4; 0.667 [0.349 - 0.901] | TP=24, FN=0; 1.0 [0.858 - 1.0] | TN=11, FP=1; 0.917 [0.615 - 0.998] | TP=22, FN=2; 0.917 [0.73 - 0.99] |
| Knee (n positive = 43, n negative = 167) | TN=155, FP=12; 0.928 [0.878 - 0.962] | TP=36, FN=7; 0.837 [0.693 - 0.932] | TN=163, FP=4; 0.976 [0.94 - 0.993] | TP=20, FN=23; 0.465 [0.312 - 0.623] |
| Shoulder (n positive = 85, n negative = 103) | TN=82, FP=21; 0.796 [0.705 - 0.869] | TP=80, FN=5; 0.941 [0.868 - 0.981] | TN=101, FP=2; 0.981 [0.932 - 0.998] | TP=79, FN=6; 0.929 [0.853 - 0.974] |
| Tibia/Fibula (n positive = 58, n negative = 40) | TN=33, FP=7; 0.825 [0.672 - 0.927] | TP=50, FN=8; 0.862 [0.746 - 0.939] | TN=39, FP=1; 0.975 [0.868 - 0.999] | TP=43, FN=15; 0.741 [0.61 - 0.847] |
| Wrist (n positive = 141, n negative = 90) | TN=70, FP=20; 0.778 [0.678 - 0.859] | TP=136, FN=5; 0.965 [0.919 - 0.988] | TN=86, FP=4; 0.956 [0.89 - 0.988] | TP=127, FN=14; 0.901 [0.839 - 0.945] |
Additionally, the performance of BoneView 1.1-US on the children and adolescents population was validated for potential confounders including weight-bearing and non-weight bearing bone fractures and different X-ray system manufacturers.
7.4.2. Testing for adult population
BoneView 1.1-US uses the same AI algorithm as the predicate device, BoneView 1.0-US (K212365). Thus, the bench testing (standalone testing) on the adult population described in the 510(k) submission of the predicate device is still valid and applicable to BoneView 1.1-US and is provided here for reference.
Gleamer performed a standalone performance testing on a dataset of 8,918 radiographs (47.2% of males, with age: range [21 – 113]; mean 52.5 +/- 19.8) for all anatomical areas of interest in the Indications for Use and from various manufacturers (Agfa, Fujifilm, GE Healthcare, Kodak, Konica Minolta, Philips, Primax, Samsung, Siemens). This dataset was independent of the data used for model training, tuning, and establishment of device operating points.
The results of the standalone testing demonstrated that BoneView detects fractures in radiographs with high sensitivity and high specificity:
Specificity (with 95% Clopper-Pearson CI) and Sensitivity (with 95% Clopper-Pearson CI) of BoneView at the examination-level at the high-sensitivity operating point and high-specificity operating point on the merged datasets

| Standalone Performance | High-sensitivity operating point: Specificity (95% Clopper-Pearson CI) | High-sensitivity operating point: Sensitivity (95% Clopper-Pearson CI) | High-specificity operating point: Specificity (95% Clopper-Pearson CI) | High-specificity operating point: Sensitivity (95% Clopper-Pearson CI) |
|---|---|---|---|---|
| Global (n positive = 3,886; n negative = 5,032) | 0.811 [0.8 - 0.821] | 0.928 [0.919 - 0.936] | 0.932 [0.925 - 0.939] | 0.841 [0.829 - 0.853] |
Specificity (with 95% Clopper-Pearson CI) and Sensitivity (with 95% Clopper-Pearson CI) of BoneView at the examination-level for the subgroup analysis of anatomical areas of interest at the high-sensitivity operating point and high-specificity operating point on the merged datasets

| Anatomical Area of Interest | High-sensitivity operating point (DOUBT FRACT): Specificity (95% Clopper-Pearson CI) | High-sensitivity operating point (DOUBT FRACT): Sensitivity (95% Clopper-Pearson CI) | High-specificity operating point (FRACT): Specificity (95% Clopper-Pearson CI) | High-specificity operating point (FRACT): Sensitivity (95% Clopper-Pearson CI) |
|---|---|---|---|---|
| Ankle (n positive = 378, n negative = 805) | 0.784 [0.754 - 0.812] | 0.95 [0.923 - 0.969] | 0.897 [0.874 - 0.917] | 0.899 [0.865 - 0.928] |
| Clavicle (n positive = 147, n negative = 255) | 0.757 [0.699 - 0.808] | 0.905 [0.845 - 0.947] | 0.929 [0.891 - 0.958] | 0.83 [0.759 - 0.887] |
| Elbow (n positive = 145, n negative = 227) | 0.718 [0.655 - 0.776] | 0.924 [0.868 - 0.962] | 0.899 [0.852 - 0.935] | 0.531 [0.446 - 0.614] |
| Femur (n positive = 63, n negative = 161) | 0.733 [0.658 - 0.799] | 0.937 [0.845 - 0.982] | 0.944 [0.897 - 0.974] | 0.825 [0.709 - 0.909] |
| Foot (n positive = 985, n negative = 1,097) | 0.793 [0.768 - 0.817] | 0.934 [0.917 - 0.949] | 0.924 [0.907 - 0.939] | 0.874 [0.852 - 0.894] |
| Forearm (n positive = 94, n negative = 102) | 0.676 [0.577 - 0.766] | 0.989 [0.942 - 1.0] | 0.912 [0.839 - 0.959] | 0.851 [0.763 - 0.916] |
| Hand (n positive = 1,168, n negative = 1,003) | 0.809 [0.783 - 0.832] | 0.966 [0.954 - 0.975] | 0.917 [0.898 - 0.934] | 0.915 [0.898 - 0.931] |
| Hip (n positive = 145, n negative = 235) | 0.77 [0.711 - 0.822] | 0.938 [0.885 - 0.971] | 0.953 [0.918 - 0.976] | 0.793 [0.718 - 0.856] |
| Humerus (n positive = 114, n negative = 175) | 0.731 [0.659 - 0.796] | 0.904 [0.834 - 0.951] | 0.92 [0.869 - 0.956] | 0.833 [0.752 - 0.897] |
| Knee (n positive = 128, n negative = 1,045) | 0.889 [0.868 - 0.907] | 0.891 [0.823 - 0.939] | 0.975 [0.964 - 0.984] | 0.797 [0.717 - 0.863] |
| Lumbosacral Spine (n positive = 125, n negative = 209) | 0.737 [0.672 - 0.795] | 0.776 [0.693 - 0.846] | 0.947 [0.908 - 0.973] | 0.6 [0.509 - 0.687] |
| Pelvis (n positive = 230, n negative = 479) | 0.745 [0.704 - 0.784] | 0.887 [0.839 - 0.925] | 0.939 [0.914 - 0.959] | 0.743 [0.682 - 0.799] |
| Ribs (n positive = 252, n negative = 95) | 0.684 [0.581 - 0.776] | 0.753 [0.7 - 0.802] | 0.926 [0.854 - 0.97] | 0.488 [0.425 - 0.552] |
| Shoulder (n positive = 255, n negative = 586) | 0.782 [0.746 - 0.814] | 0.929 [0.891 - 0.958] | 0.947 [0.926 - 0.964] | 0.851 [0.801 - 0.892] |
| Thoracic Spine (n positive = 74, n negative = 105) | 0.676 [0.578 - 0.764] | 0.878 [0.782 - 0.943] | 0.905 [0.832 - 0.953] | 0.689 [0.571 - 0.792] |
| Tibia/Fibula (n positive = 72, n negative = 184) | 0.712 [0.641 - 0.776] | 0.972 [0.903 - 0.997] | 0.815 [0.751 - 0.869] | 0.931 [0.845 - 0.977] |
| Wrist (n positive = 573, n negative = 502) | 0.771 [0.732 - 0.807] | 0.97 [0.953 - 0.983] | 0.892 [0.862 - 0.918] | 0.934 [0.91 - 0.953] |
Additionally, the performance of BoneView was validated for potential confounders including weight-bearing and non-weight bearing bone fractures and different X-ray system manufacturers.
7.5. Animal Studies
No animal studies were conducted in support of the 510(k) submission of BoneView.
7.6. Clinical Studies
No clinical studies were conducted in support of the 510(k) submission of BoneView 1.1-US.
BoneView 1.1-US is based on the same AI algorithm as the predicate device, BoneView 1.0-US (K212365). Thus, the clinical performance described in the 510(k) submission of the predicate device is still valid and applicable to BoneView 1.1-US, for both the adult and the children/adolescent populations. The results are provided here for reference.
Gleamer conducted a fully-crossed multiple reader, multiple case (MRMC) retrospective reader study to determine the impact of BoneView on reader performance in diagnosing fractures. The primary objective of the study was to determine whether the diagnostic accuracy of readers aided by BoneView is superior to the diagnostic accuracy of readers unaided by BoneView as determined by the Specificity/Sensitivity pair (primary endpoint).
The clinical validation study design was the following:
- 24 clinical readers each evaluated a dataset of 480 cases (31.9% of males, with age: range [21 – 93]; mean 59.2 +/- 16.4) in BoneView's Indications for Use and from various manufacturers (GE Healthcare, Kodak, Konica Minolta, Philips, Samsung) under both Aided and Unaided conditions.
- This dataset was independent of the data used for model training and establishment of device operating points.
- Each case had been previously evaluated by a panel of three U.S. board-certified radiologists who assigned a ground truth label indicating the presence of a fracture and its location.
- Cases are from all the anatomical areas of interest included in BoneView's Indications for Use.
- The MRMC study consisted of two independent reading sessions separated by a washout period of at least one month in order to avoid memory bias.
- For each case, each reader was required to provide a determination of the presence of a fracture and provide its location.
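In a fully-crossed design like the one above, every reader reads every case under both the Aided and Unaided conditions. The sketch below shows one way such reads could be tabulated and pooled sensitivity/specificity computed per condition; the column names and toy data are hypothetical and do not reproduce the actual study analysis.

```python
# Minimal sketch of how a fully-crossed MRMC dataset could be tabulated and summarized;
# column names ("reader", "case", "condition", "call", "truth") are hypothetical.
import pandas as pd

def pooled_metrics(reads: pd.DataFrame) -> pd.DataFrame:
    """Pooled sensitivity/specificity per reading condition (Aided vs. Unaided)."""
    def summarize(g: pd.DataFrame) -> pd.Series:
        pos = g[g["truth"] == 1]
        neg = g[g["truth"] == 0]
        return pd.Series({
            "sensitivity": (pos["call"] == 1).mean(),
            "specificity": (neg["call"] == 0).mean(),
        })
    return reads.groupby("condition").apply(summarize)

# Each row is one read: one reader, one case, one condition, the reader's fracture call,
# and the panel ground truth (toy values only).
reads = pd.DataFrame({
    "reader":    [1, 1, 1, 1, 2, 2, 2, 2],
    "case":      [101, 101, 102, 102, 101, 101, 102, 102],
    "condition": ["Unaided", "Aided"] * 4,
    "call":      [0, 1, 0, 0, 1, 1, 1, 0],
    "truth":     [1, 1, 0, 0, 1, 1, 0, 0],
})
print(pooled_metrics(reads))
```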
 
The results of the study found that the diagnostic accuracy of readers in the intended use population is superior when aided by BoneView than when unaided by BoneView, as measured at the task of fracture detection using the Specificity/Sensitivity pair.
In particular, the study results demonstrated:
- Reader specificity improved significantly from 0.906 (95% bootstrap CI: 0.898-0.913) to 0.956 (95% bootstrap CI: 0.951-0.960), a +5% absolute increase in Specificity.
- Reader sensitivity improved significantly from 0.648 (95% bootstrap CI: 0.640-0.656) to 0.752 (95% bootstrap CI: 0.745-0.759), a +10.4% absolute increase in Sensitivity.
 
Additionally, subgroup analysis was carried out by anatomical areas of interest, listed in the Indications for Use. The subgroup analysis found that the Sensitivity and Specificity were higher for Aided reads versus Unaided reads for all of the anatomical areas of interest.
8. Conclusion
BoneView 1.1-US and the BoneView 1.0-US predicate device have the same intended use and technological characteristics. Only the indications for use differ, with the inclusion of children and adolescents in the intended patient population.
Performance testing was conducted to validate the performance of BoneView 1.1-US on the new patient population. The results of the testing show that the device performs as intended, and the difference in indications for use, namely the inclusion of the new patient population of children and adolescents, does not raise different questions of safety or effectiveness as compared with the predicate device.
Therefore, BoneView 1.1-US subject device and BoneView 1.0-US predicate device (K212365) are substantially equivalent.
§ 892.2090 Radiological computer-assisted detection and diagnosis software.
(a) Identification. A radiological computer-assisted detection and diagnostic software is an image processing device intended to aid in the detection, localization, and characterization of fracture, lesions, or other disease-specific findings on acquired medical images (e.g., radiography, magnetic resonance, computed tomography). The device detects, identifies, and characterizes findings based on features or information extracted from images, and provides information about the presence, location, and characteristics of the findings to the user. The analysis is intended to inform the primary diagnostic and patient management decisions that are made by the clinical user. The device is not intended as a replacement for a complete clinician's review or their clinical judgment that takes into account other relevant information from the image or patient history.

(b) Classification. Class II (special controls). The special controls for this device are:

(1) Design verification and validation must include:

(i) A detailed description of the image analysis algorithm, including a description of the algorithm inputs and outputs, each major component or block, how the algorithm and output affects or relates to clinical practice or patient care, and any algorithm limitations.

(ii) A detailed description of pre-specified performance testing protocols and dataset(s) used to assess whether the device will provide improved assisted-read detection and diagnostic performance as intended in the indicated user population(s), and to characterize the standalone device performance for labeling. Performance testing includes standalone test(s), side-by-side comparison(s), and/or a reader study, as applicable.

(iii) Results from standalone performance testing used to characterize the independent performance of the device separate from aided user performance. The performance assessment must be based on appropriate diagnostic accuracy measures (e.g., receiver operator characteristic plot, sensitivity, specificity, positive and negative predictive values, and diagnostic likelihood ratio). Devices with localization output must include localization accuracy testing as a component of standalone testing. The test dataset must be representative of the typical patient population with enrichment made only to ensure that the test dataset contains a sufficient number of cases from important cohorts (e.g., subsets defined by clinically relevant confounders, effect modifiers, concomitant disease, and subsets defined by image acquisition characteristics) such that the performance estimates and confidence intervals of the device for these individual subsets can be characterized for the intended use population and imaging equipment.

(iv) Results from performance testing that demonstrate that the device provides improved assisted-read detection and/or diagnostic performance as intended in the indicated user population(s) when used in accordance with the instructions for use. The reader population must be comprised of the intended user population in terms of clinical training, certification, and years of experience. The performance assessment must be based on appropriate diagnostic accuracy measures (e.g., receiver operator characteristic plot, sensitivity, specificity, positive and negative predictive values, and diagnostic likelihood ratio). Test datasets must meet the requirements described in paragraph (b)(1)(iii) of this section.

(v) Appropriate software documentation, including device hazard analysis, software requirements specification document, software design specification document, traceability analysis, system level test protocol, pass/fail criteria, testing results, and cybersecurity measures.
(2) Labeling must include the following:
(i) A detailed description of the patient population for which the device is indicated for use.
(ii) A detailed description of the device instructions for use, including the intended reading protocol and how the user should interpret the device output.
(iii) A detailed description of the intended user, and any user training materials or programs that address appropriate reading protocols for the device, to ensure that the end user is fully aware of how to interpret and apply the device output.
(iv) A detailed description of the device inputs and outputs.
(v) A detailed description of compatible imaging hardware and imaging protocols.
(vi) Warnings, precautions, and limitations must include situations in which the device may fail or may not operate at its expected performance level (e.g., poor image quality or for certain subpopulations), as applicable.

(vii) A detailed summary of the performance testing, including test methods, dataset characteristics, results, and a summary of sub-analyses on case distributions stratified by relevant confounders, such as anatomical characteristics, patient demographics and medical history, user experience, and imaging equipment.