K Number: K243005
Manufacturer:
Date Cleared: 2025-05-30 (246 days)
Product Code:
Regulation Number: 892.2050
Panel: RA
Reference & Predicate Devices:
Predicate For: N/A
Intended Use

AudaxCeph Cephalogram Analysis Software is designed for use by specialized dental practices for storing and presenting patient images and assisting in treatment planning. Results produced by the software's treatment planning tools are dependent on the interpretation of trained and licensed practitioners.

Device Description

AudaxCeph Cephalogram Analysis Software is a software-only device intended for use by specialty dental professionals in the development of 2D dental treatment plans based on cephalogram landmarks. Cephalogram imagery must be captured separately and imported into the program in one of several common file formats (BMP, JPG, JPEG, PNG, TIFF). Landmarks are digitized, structures are traced, measurements can be taken, and various custom analysis types are possible. The software assists in orthodontic treatment planning by providing image analysis, superimpositions, and VTO (visualized treatment objective).

The software is offered in two client configurations: a desktop client hosted locally on a PC ("AudaxCeph Ultimate" and "Essentials"), and a cloud-hosted solution ("AudaxCeph WeDoCeph") accessible through a web browser. Both are intended for use only by specialty dental professionals and have no patient-interacting components or interfaces; all analysis capabilities of the AudaxCeph Cephalogram Analysis Software that inform clinical decisions are within the capabilities of the CephX Cephalometric Analysis Software predicate device.

AI/ML Overview

Here's a breakdown of the acceptance criteria and the study demonstrating that the AudaxCeph Cephalogram Analysis Software meets them, based on the provided FDA 510(k) Clearance Letter and Summary:

1. Acceptance Criteria and Reported Device Performance

| Parameter | Acceptance Criteria | Reported Device Performance |
|---|---|---|
| MRE (Mean Radial Error), Lateral (mm) | ≤ 1.5 | PASS |
| MRE (Mean Radial Error), Frontal (PA) (mm) | ≤ 2.5 | PASS |

Note: The document states "PASS" for the device performance, implying that the reported MRE values for both lateral and frontal (PA) aspects were within or below the specified acceptance criteria. The exact numerical values for the reported performance are not provided in this document.
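To make the acceptance check concrete, here is a minimal sketch of how MRE is typically computed and compared against the stated thresholds. The landmark names, coordinates, and function names below are illustrative assumptions; the document does not describe the actual computation or software interface.

```python
import math

# Hypothetical sketch: Mean Radial Error (MRE) is the average Euclidean
# distance (in mm) between automatically detected landmarks and their
# reference positions, compared against the acceptance criteria above.

def mean_radial_error(detected, reference):
    """Average radial (Euclidean) error over all matched landmarks, in mm."""
    errors = [math.dist(detected[name], reference[name]) for name in reference]
    return sum(errors) / len(errors)

# Acceptance thresholds (mm) from the 510(k) summary.
THRESHOLDS = {"lateral": 1.5, "frontal_pa": 2.5}

def passes(mre_mm, view):
    """True if the measured MRE meets the acceptance criterion for that view."""
    return mre_mm <= THRESHOLDS[view]

# Illustrative landmark sets (coordinates in mm; not from the document).
detected = {"Nasion": (10.2, 5.1), "Sella": (3.9, 4.0)}
reference = {"Nasion": (10.0, 5.0), "Sella": (4.0, 4.2)}
print(passes(mean_radial_error(detected, reference), "lateral"))
```

Under this formulation, a "PASS" simply means the computed MRE fell at or below 1.5 mm (lateral) or 2.5 mm (frontal PA).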

2. Sample Size Used for the Test Set and Data Provenance

The document does not explicitly state the specific sample size used for the test set. It only mentions an "accuracy study" that involved automatically detected landmarks reviewed by orthodontists.

Regarding data provenance: The document does not specify the country of origin of the data. It also does not explicitly state whether the study was retrospective or prospective.

3. Number of Experts Used to Establish Ground Truth and Qualifications

The document states that the accuracy study involved automatically detected landmarks reviewed by orthodontists.

  • Number of Experts: Not explicitly stated. The phrasing "reviewed by orthodontists" could imply multiple orthodontists, but a specific number is not provided.
  • Qualifications of Experts: The experts are identified as "orthodontists." While not explicitly stated, it is implied that these are trained and licensed practitioners in dental specialties, as the device's indications for use target "specialized dental practices." No information regarding years of experience is provided.

4. Adjudication Method for the Test Set

The document does not explicitly describe an adjudication method (e.g., 2+1, 3+1, none) for the test set's ground truth. It states that "automatically-detected landmarks [were] reviewed by orthodontists," which suggests expert review was part of the process, but the consensus-reaching or adjudication mechanism for discrepancies is not detailed.
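For context, a "2+1" scheme (one of the adjudication methods named above, though the document does not say whether any such scheme was used) can be sketched as follows. The tolerance value and function names are assumptions for illustration only.

```python
import math

# Hypothetical "2+1" adjudication sketch: two primary readers place each
# landmark independently; if they agree within a tolerance, the mean of
# their placements is the ground truth, otherwise a third reader
# (the adjudicator) resolves the discrepancy.

def adjudicate_2plus1(reader1, reader2, adjudicator, tol_mm=1.0):
    """Return the adjudicated (x, y) ground-truth position for one landmark."""
    if math.dist(reader1, reader2) <= tol_mm:
        # Readers agree: use the midpoint of their placements.
        return ((reader1[0] + reader2[0]) / 2,
                (reader1[1] + reader2[1]) / 2)
    # Readers disagree beyond tolerance: the adjudicator's placement wins.
    return adjudicator
```

A "3+1" scheme works analogously with three primary readers and majority or adjudicated resolution.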

5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

No. The document does not mention that a multi-reader multi-case (MRMC) comparative effectiveness study was done to evaluate how human readers improve with AI vs. without AI assistance. The study focuses on the accuracy of the automated landmark detection itself.

6. Standalone (Algorithm Only) Performance

Yes. The study described is a standalone performance evaluation of the algorithm's ability to detect landmarks automatically. The "Automatic Landmark Detection Accuracy Study Results" table directly addresses the performance of the algorithm without a human-in-the-loop component for the measurement itself, though human experts (orthodontists) were involved in establishing the ground truth against which the automatic detection was compared.

7. Type of Ground Truth Used

The type of ground truth used was expert consensus / expert review. The document states that "first an acceptance criteria study was performed to assess the accuracy of manual landmark placement. The results of that study, combined with a literature review of other studies, was used to establish the acceptance criteria upon which automatic landmark detection accuracy was evaluated. The accuracy study involved automatically-detected landmarks reviewed by orthodontists." This implies that the ground truth for landmark placement was based on established manual placement by experts (orthodontists) and potentially literature-derived standards.
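One common way to turn multiple expert reviews into a single reference standard is per-landmark coordinate averaging. The sketch below assumes that aggregation rule; the document does not state how many orthodontists annotated each case or how their placements were combined.

```python
# Hypothetical sketch: forming consensus ground truth by averaging each
# landmark's (x, y) coordinates across several expert annotations.
# The averaging rule itself is an assumption, not stated in the document.

def consensus_landmarks(annotations):
    """annotations: list of dicts {landmark_name: (x, y)}, one per expert."""
    consensus = {}
    for name in annotations[0]:
        xs = [a[name][0] for a in annotations]
        ys = [a[name][1] for a in annotations]
        consensus[name] = (sum(xs) / len(xs), sum(ys) / len(ys))
    return consensus
```

The resulting consensus positions would then serve as the reference against which automatically detected landmarks are scored (e.g., via MRE).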

8. Sample Size for the Training Set

The document does not provide information on the sample size used for the training set. It only discusses the "accuracy study" for the test set.

9. How the Ground Truth for the Training Set Was Established

The document does not provide information on how the ground truth for the training set was established. It only describes the process for establishing the acceptance criteria and evaluating the accuracy of the automatic landmark detection (which would relate to the test set).

§ 892.2050 Medical image management and processing system.

(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).