K Number
K190767
Device Name
AnatomicAligner
Date Cleared
2020-03-16

(357 days)

Product Code
Regulation Number
892.2050
Panel
DE
Reference & Predicate Devices
Predicate For
N/A
Intended Use

AnatomicAligner is software for pre-operative simulation of orthognathic surgical treatment options, based on imaging information from a medical scanner such as CT or MRI, in patients who have reached skeletal and dental maturity.

Device Description

AnatomicAligner is an image-processing software with a user-friendly interface for pre-operative simulation of orthognathic surgical treatment options based on imaging information from a medical scanner such as CT or MRI. The software planning results may be transferred to surgery.

AI/ML Overview

The provided document describes the FDA 510(k) clearance for the AnatomicAligner software, which is intended for pre-operative simulation of orthognathic surgical treatment options. However, the document explicitly states that "No clinical testing of AnatomicAligner was required or performed" (Section 1.8). Therefore, the information needed to answer many of the questions below regarding acceptance criteria, test sets, experts, and MRMC studies is not present in the provided text.

Based on the information available, here's a description of what is and isn't provided:

1. A table of acceptance criteria and the reported device performance:

The document states that "AnatomicAligner has successfully undergone extensive verification and validation testing to ensure that all requirements for the software have been met per FDA Guidance." It also mentions "These results provide objective evidence that the outputs of the software design activity meet all of the specified requirements for that activity."

However, the specific acceptance criteria (e.g., quantitative metrics like accuracy, precision, or specific user performance indicators) and the reported device performance against these criteria are NOT detailed in this document. The document implies that acceptance was based on successful completion of verification and validation testing in accordance with design controls.

2. Sample size used for the test set and the data provenance:

As no clinical testing was performed, there is no "test set" in the sense of patient data used for clinical performance evaluation. The document focuses on performance derived from verification and validation against software requirements.

3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

Not applicable, as no clinical test set requiring expert ground truth was used.

4. Adjudication method for the test set:

Not applicable, as no clinical test set requiring adjudication was used.

5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:

No, an MRMC comparative effectiveness study was NOT done. The document explicitly states "no clinical testing of AnatomicAligner was required or performed." The AnatomicAligner is described as image-processing software for "pre-operative simulation," suggesting it's a tool for planning rather than a diagnostic aid that would directly assist human readers in interpreting images.

6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:

The document states that "extensive verification and validation testing" was performed to ensure the software met its requirements. While this would involve "algorithm only" testing, the specific metrics and results are not detailed. The software's function is to assist in "pre-operative simulation," implying a human-in-the-loop process for planning, but the FDA clearance did not require human-in-the-loop clinical performance testing.

7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

Since no clinical testing was performed, there was no ground truth based on patient outcomes, pathology, or expert consensus on clinical cases. The ground truth for the "verification and validation testing" would have been derived from the design requirements of the software itself. This would typically involve:

  • Software requirements specifications: The "ground truth" would be that the software functions as designed according to its established requirements (e.g., correctly segments anatomical structures, accurately performs calculations, generates correct output files).
  • Test data or simulated data: Verification and validation would likely use synthetic and/or real (but not necessarily clinical trial-level) imaging data to confirm software functionality and accuracy against predefined expected outputs.
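To make the verification-testing concept above concrete, here is a minimal illustrative sketch (not AnatomicAligner's actual code; all function names and tolerance values are hypothetical assumptions) of a requirement-based check: a measurement computed on synthetic landmark data is compared against a predefined expected output within a tolerance derived from the software requirements.

```python
import math

def distance_mm(p1, p2):
    """Euclidean distance between two 3D landmark coordinates, in mm."""
    return math.dist(p1, p2)

def verify_measurement(computed, expected, tol_mm=0.5):
    """Pass/fail check of a computed value against a requirement-derived
    expected value and tolerance (both hypothetical here)."""
    return abs(computed - expected) <= tol_mm

# Synthetic test case: two landmarks placed exactly 40 mm apart,
# so the expected output of the measurement function is known a priori.
nasion = (0.0, 0.0, 0.0)
menton = (0.0, 0.0, 40.0)

measured = distance_mm(nasion, menton)
assert verify_measurement(measured, expected=40.0)
print("verification check passed")
```

The point of the sketch is only that, in such testing, "ground truth" is the predefined expected output for a controlled input, rather than expert annotation of clinical cases.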

8. The sample size for the training set:

The document does not provide information on the training set size, as it's a 510(k) submission for a device that did not require clinical performance data. While the software likely uses algorithms, details about their development (including training data) are not part of this summary, especially since no clinical claims are being made based on AI performance in diagnostic accuracy or human reader improvement.

9. How the ground truth for the training set was established:

Not applicable based on the provided document, as no information on a training set or its ground truth establishment is included in this 510(k) summary. Given the nature of the device (pre-operative simulation software), any underlying algorithms would have been trained with data and ground truth relevant to their specific tasks (e.g., image segmentation, 3D reconstruction, landmark identification), but these details are not part of FDA's evaluation criteria for this specific substantial equivalence claim.

§ 892.2050 Medical image management and processing system.

(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).