K Number
K150217
Device Name
ORTHOPHOS SL
Date Cleared
2015-05-08

(98 days)

Product Code
Regulation Number
892.1750
Panel
RA
Reference & Predicate Devices
Intended Use

The X-ray system creates data for digital exposures in the maxillofacial area and in subareas for dentistry, including pediatric dentistry, for hard-tissue diagnostics within ENT medicine, and for carpus exposures.

Device Description

The device comprises image receptors for cephalometric exposures, 2D panoramic radiographs, and 3D volume exposures. The combination of sensors in the device varies with the installed options, and the regions of interest can be altered through collimation and different start and end angles of exposure. This allows a reduced dose depending on the program selected; the function is available with some volume, cephalometric, and panoramic programs.

Class I laser light localizers aid in positioning the patient's head, which may be fixed in place using bite blocks and adjustable forehead and temple supports.

Reconstructed images can be viewed from the obtained exposures. The reconstructed 3D volumes, simulated projection exposures, and panoramic/cephalometric data can be transferred to SIDEXIS (an FDA-cleared Sirona software application for the acquisition, administration, analysis, diagnosis, presentation, and transfer of medical/dental image data) and stored in the SIDEXIS database.

An operator control panel allows for height adjustment, selection of mode and program, and indication of machine states. A separate handheld push-button serves for exposure release and an optional remote control is available.

AI/ML Overview

The provided text describes a 510(k) premarket notification for the "ORTHOPHOS SL" dental X-ray system. The document focuses on demonstrating substantial equivalence to predicate devices rather than providing detailed acceptance criteria and a study proving the device meets them in the context of an AI/algorithm-driven device.

Therefore, many of the requested items (e.g., acceptance criteria table, sample size for test set, number of experts for ground truth, adjudication method, MRMC study, standalone performance, training set details) are not applicable or not present in this type of regulatory submission for a conventional medical imaging device without explicit AI involvement.

Based on the provided text, here's the information that can be extracted:

1. A table of acceptance criteria and the reported device performance

  • Not applicable in the AI sense. This document describes a traditional X-ray system. The "acceptance criteria" discussed are largely related to meeting established X-ray performance standards and demonstrating substantial equivalence to predicate devices. Specific quantitative performance metrics for diagnostic accuracy (e.g., sensitivity, specificity, AUC) against a defined ground truth, as would be expected for an AI device, are not detailed.
  • The study focuses on demonstrating that the device's technical specifications and image quality are comparable to predicate devices and comply with relevant international standards.

2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

  • Test Set Sample Size: "test phantoms" were used for evaluating exposure programs. "Sample clinical images" were provided. No specific numerical sample size for either is given.
  • Data Provenance: Not specified beyond "test phantoms" and "sample clinical images." It does not mention country of origin or whether clinical data was retrospective or prospective.

3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

  • "Oral Surgeon reports" were provided with the sample clinical images, "asserting the general diagnostic quality of the images."
  • Number of Experts: Not specified.
  • Qualifications: "Oral Surgeon" is mentioned as the expert type, but no specific experience level is provided.

4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

  • None explicitly mentioned. The "Oral Surgeon reports" seem to imply a singular assessment rather than an adjudicated consensus process.

5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance

  • No, an MRMC comparative effectiveness study was not done. This document is for a conventional X-ray system, not an AI-assisted diagnostic tool, so improvement of human readers with AI assistance is not applicable.

6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

  • Not applicable. This is a hardware X-ray device, not a standalone algorithm.

7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

  • For technical testing, "test phantoms" were used.
  • For clinical images, "Oral Surgeon reports asserting the general diagnostic quality" served as a form of expert assessment of image quality, not necessarily a definitive "ground truth" for specific disease presence/absence.

8. The sample size for the training set

  • Not applicable; no training set mentioned. This is a conventional X-ray system, not an AI model that undergoes training.

9. How the ground truth for the training set was established

  • Not applicable; no training set mentioned.

§ 892.1750 Computed tomography x-ray system.

(a) Identification. A computed tomography x-ray system is a diagnostic x-ray system intended to produce cross-sectional images of the body by computer reconstruction of x-ray transmission data from the same axial plane taken at different angles. This generic type of device may include signal analysis and display equipment, patient and equipment supports, component parts, and accessories.

(b) Classification. Class II.