K Number
K190162
Device Name
SmartCeph
Manufacturer
Ortho2, LLC
Date Cleared
2019-10-17

(259 days)

Product Code
Regulation Number
892.2050
Reference & Predicate Devices
Predicate For
Intended Use

SmartCeph software is designed for use by dental practices for cephalometric tracing and presenting patient images which are utilized by dental professionals to assist in treatment planning and case diagnosis. Results produced by the software's diagnostic and treatment planning tools are dependent on the interpretation of trained and licensed dental practitioners.

Device Description

SmartCeph is a software-only dental image device which allows the user to digitize landmarks on a patient's digital lateral cephalometric x-ray image, trace cephalometric structures, view cephalometric measurements, and superimpose images for analysis and presentation.

SmartCeph is imaging software designed for use in dentistry. The main SmartCeph software functionality includes image visualization, cephalometric tracing and measurements.

SmartCeph is used by dental professionals for the visualization of patient images retrieved from a dental cephalometric imaging device to assist in case diagnosis, review, and treatment planning for orthodontic and orthognathic applications. Once a suitable JPEG image (specifically, a lateral cephalometric x-ray) has been imported into the software, the software can be used to define a number of structures and landmarks that establish specific anatomical features. The positions of specific landmarks are used to render tracing lines and calculate measurements used in orthodontic treatment planning. If multiple x-ray images for which cephalometric data has been digitized are imported, the resulting tracings can be overlaid to indicate changes and/or differences in the anatomy. The software runs on standard Windows PC hardware and displays images on the PC's connected display/monitor.
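The document does not describe how SmartCeph computes its measurements internally; as a generic illustration, a cephalometric angular measurement can be derived from the pixel coordinates of digitized landmarks. The sketch below computes the SNA angle (the angle at Nasion between Sella and A-point, a standard cephalometric value) from invented landmark coordinates; all coordinate values are hypothetical.

```python
import math

def angle_deg(p1, vertex, p2):
    """Angle in degrees at `vertex`, formed by the rays toward p1 and p2."""
    v1 = (p1[0] - vertex[0], p1[1] - vertex[1])
    v2 = (p2[0] - vertex[0], p2[1] - vertex[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(v1[0], v1[1])
    n2 = math.hypot(v2[0], v2[1])
    return math.degrees(math.acos(dot / (n1 * n2)))

# Hypothetical pixel coordinates for three standard landmarks on a
# lateral cephalometric x-ray (not taken from the document):
sella = (120.0, 80.0)
nasion = (220.0, 70.0)
a_point = (230.0, 160.0)

# SNA angle: the angle measured at Nasion between Sella and A-point.
sna = angle_deg(sella, nasion, a_point)
```

Linear measurements follow the same pattern (Euclidean distance between two landmarks, scaled by the image's mm-per-pixel calibration); the actual landmark sets and analyses SmartCeph supports are not detailed in this summary.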

SmartCeph is a standalone product but is designed to work cooperatively with Ortho2's dental practice management software that is used for scheduling and billing.

AI/ML Overview

The provided text does not contain detailed information about acceptance criteria or a study demonstrating that the device meets such criteria. The document is a 510(k) summary for the SmartCeph device, primarily focusing on its substantial equivalence to a predicate device (Dolphin Imaging).

Here's what can be extracted and what is missing:

1. A table of acceptance criteria and the reported device performance

  • Acceptance Criteria: Not explicitly stated, nor presented in table format. The document generally implies that the device should perform in a manner "substantially equivalent" to the predicate device, particularly for cephalometric tracing and measurement.
  • Reported Device Performance: The document states that "Ortho2, LLC has conducted extensive non-clinical (bench) performance testing and validation and verification testing of SmartCeph. All the different components of the SmartCeph have been stress tested to ensure that SmartCeph provides all the capabilities necessary to operate in a manner substantially equivalent to the Dolphin Imaging predicate." However, specific performance metrics (e.g., accuracy, precision, error rates) or a comparison against predefined criteria are not provided.

2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

  • This information is not provided in the document. The document mentions "non-clinical (bench) performance testing" but does not detail the dataset used for this testing.

3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

  • This information is not provided. The document mentions that "Results produced by the software's diagnostic and treatment planning tools are dependent on the interpretation of trained and licensed dental practitioners," indicating human oversight, but doesn't specify how ground truth was established for testing.

4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

  • This information is not provided.

5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

  • A MRMC comparative effectiveness study is not mentioned. The purpose of the substantial equivalence claim is often to avoid such extensive clinical studies if the device is similar enough to a legally marketed predicate. The device description emphasizes cephalometric tracing and measurement, which might lend itself more to objective performance metrics rather than reader improvement with AI assistance, especially since the document specifies that the "Diagnosis is not performed by this software but by doctors and other qualified individuals."

6. If a standalone (i.e. algorithm-only, without human-in-the-loop performance) evaluation was done

  • The document implies a standalone evaluation in the "non-clinical (bench) performance testing" and states that "SmartCeph is a standalone product," although the latter describes how the product is distributed rather than algorithm-only performance. The performance metrics of this evaluation are not detailed. The device is described as "software-only" and performs landmark digitization, tracing, measurement display, and image superimposition.

7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

  • This information is not provided. Given the nature of cephalometric tracing, ground truth would typically involve expert manual annotations, but this is not confirmed in the text.

8. The sample size for the training set

  • This information is not provided. The document makes no mention of a training set or machine learning aspects, though a device performing "automatic structure drawing" (as mentioned in the comparison table) typically involves a training phase.

9. How the ground truth for the training set was established

  • This information is not provided.

In summary, the document states that "extensive non-clinical (bench) performance testing and validation and verification testing" was conducted to ensure substantial equivalence. However, it does not provide specific details of the acceptance criteria, the study design (sample size, data provenance, ground truth establishment, expert involvement, or adjudication methods), or the reported performance metrics.

§ 892.2050 Medical image management and processing system.

(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).