K Number
K183542
Date Cleared
2019-01-02

(13 days)

Product Code
Regulation Number
872.5470
Panel
DE
Reference & Predicate Devices
Intended Use

The Signature Orthodontic System is a treatment planning software and orthodontic appliance system used to correct malocclusions in orthodontic patients using patient-matched orthodontic appliances.

Device Description

The Signature Orthodontic System (SO System) is a treatment planning software (TPS) and orthodontic appliance system used to correct malocclusions in orthodontic patients using patient-matched orthodontic appliances. The SO System consists of patient-specific ceramic brackets, patient-specific bracket placement jigs, arch wire templates, and a TPS for viewing and modifying cases. Signature Orthodontics' (SO) operators and the orthodontists use the TPS to generate a prescription of their choosing. SO then manufactures the patient-specific brackets and placement jigs using proprietary additive manufacturing techniques. The orthodontist then bonds the brackets to the teeth using the optional placement jig and ligates wires to enable tooth movement. The SO System does not include the commercially available or patient-specific shaped arch wires, ligatures, or adhesives that affix the brackets to the teeth.
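
Purely to illustrate the workflow just described, and not Signature Orthodontics' actual data model, the case data that moves between the digital impression, the TPS prescription step, and manufacturing could be sketched as follows; every class and field name here is a hypothetical assumption:

```python
# Hypothetical data model for the workflow described above; class and field
# names are illustrative assumptions, not Signature Orthodontics' software.
from dataclasses import dataclass, field
from typing import List, Tuple

Vertex = Tuple[float, float, float]

@dataclass
class DigitalImpression:
    stl_path: str                     # patient's scanned arch (STL triangle mesh)

@dataclass
class BracketPrescription:
    tooth_id: str                     # e.g. a universal tooth number
    position: Vertex                  # planned bracket placement on the tooth surface
    torque_deg: float = 0.0

@dataclass
class TreatmentCase:
    patient_ref: str
    impression: DigitalImpression
    prescription: List[BracketPrescription] = field(default_factory=list)

# The orthodontist reviews and modifies the case in the TPS; SO then
# manufactures patient-specific brackets and placement jigs from it.
case = TreatmentCase("case-001", DigitalImpression("upper_arch.stl"))
case.prescription.append(BracketPrescription("UR1", (10.2, -4.1, 6.7)))
```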

AI/ML Overview

The provided document is a 510(k) summary for the Signature Orthodontic System, specifically focusing on a software change (replacement of Meshmixer 3.4 with Treatment Planning 3.1). Based on the information presented, here's an analysis of the acceptance criteria and the study proving the device meets them:

Overall Conclusion from the Document:

The document states that the new software component (Treatment Planning 3.1) performs functions "equivalent" to those of the previous off-the-shelf software (Meshmixer 3.4), and that the product's indications for use, product codes/regulations, sequence of treatment plan, and manufacturing method are "identical" to the predicate device's. The core of the equivalence claim rests on non-clinical performance testing of the software itself. The document explicitly states "No clinical performance testing was conducted."


1. Table of Acceptance Criteria and Reported Device Performance

The acceptance criteria are implicitly defined by the demonstration of "equivalence" to the predicate device's software function. The performance is reported as "Equivalent" for all tested functions.

| Function/Workflow | Predicate Device Performance (Meshmixer 3.4) | New Device Performance (TPS 3.1) | Acceptance Criteria Met? |
|---|---|---|---|
| 4.1 Diagnosis - viewing patient's digital impression | Rendering of impression using triangle meshes read from STL files | Rendering of impression using triangle meshes read from STL files | Equivalent (Test Report) |
| 4.1 Diagnosis - successful diagnosis of patient's malocclusion | Hide/show individual arches (requires multiple clicks) | Hide/show individual arches (single click) | Equivalent (Test Report) |
| 4.4 Data Handling - case data delivered securely and un-corrupted | Secure link used to download STL files, then File -> Open in device | Open secure link in web browser (removes the "download STL files" step) | Equivalent (Test Report) |
| 4.1 Diagnosis - viewing and measuring patient's digital impression | Rotate impression display | Rotate impression display | Equivalent (Test Report) |
| 4.1 Diagnosis - viewing and measuring patient's digital impression | Pan impression display | Pan impression display | Equivalent (Test Report) |
| 4.1 Diagnosis - viewing and measuring patient's digital impression | Zoom impression display | Zoom impression display | Equivalent (Test Report) |
| 4.1 Diagnosis - viewing and measuring patient's digital impression | Point-to-point measurement | N/A (not included in TPS 3.1 requirements specification) | No direct comparison; overall equivalence claimed based on other features |

Note on "Point-to-point measurement": TPS 3.1 does not include this feature; the overall claim of equivalence rests on the other listed functions. The submission does not explicitly explain why this difference does not affect equivalence, but it is presumably either because it is considered a minor functional change that does not impact the core safety and effectiveness of the device for its stated indications for use, or because the remaining features are considered sufficient.
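
Several of the functions compared above operate on triangle meshes read from STL files (rendering, rotate/pan/zoom, and, in the predicate only, point-to-point measurement). As a rough illustration only, and not code from either software package, a minimal sketch of reading a binary STL impression and taking a point-to-point measurement might look like this; the file name and function names are assumptions:

```python
# Minimal sketch (not code from Meshmixer 3.4 or TPS 3.1): reading triangle
# meshes from a binary STL file and making a point-to-point measurement.
# The file name and function names are hypothetical, for illustration only.
import math
import struct

def read_binary_stl(path):
    """Return a list of triangles, each a tuple of three (x, y, z) vertices."""
    triangles = []
    with open(path, "rb") as f:
        f.read(80)                                     # 80-byte header (ignored)
        (count,) = struct.unpack("<I", f.read(4))      # number of triangles
        for _ in range(count):
            data = struct.unpack("<12fH", f.read(50))  # normal, 3 vertices, attribute
            triangles.append((data[3:6], data[6:9], data[9:12]))
    return triangles

def point_to_point(p, q):
    """Euclidean distance between two landmark points (a measurement tool)."""
    return math.dist(p, q)

if __name__ == "__main__":
    mesh = read_binary_stl("upper_arch.stl")           # hypothetical impression file
    print(f"{len(mesh)} triangles loaded")
    print("distance:", point_to_point(mesh[0][0], mesh[0][1]))
```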


2. Sample Size Used for the Test Set and Data Provenance

  • Sample Size: The document does not specify a "sample size" in terms of patient data or number of cases for the performance testing. The validation testing was performed "in accordance with SO's design control activities for software and to the software's Test Plan." This suggests software validation processes (e.g., unit testing, integration testing, system testing) rather than a clinical study with a patient cohort; a minimal illustrative sketch of such a test follows this list.
  • Data Provenance: Not applicable, as this was non-clinical software performance testing against functional requirements, not testing with patient data.
  • Retrospective/Prospective: Not applicable.
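
As noted in the first bullet above, the evidence here is software verification and validation rather than a clinical study. A purely hypothetical sketch of the kind of functional-equivalence check such a Test Plan might contain is shown below; the function names, test values, and tolerance are assumptions, not details from the submission:

```python
# Hypothetical functional-equivalence check in the spirit of the software Test
# Plan described above. The two rotate functions stand in for a predicate
# (Meshmixer 3.4) viewing feature and its TPS 3.1 counterpart; none of this is
# drawn from the actual submission.
import math

def rotate_z_predicate(vertex, degrees):
    """Reference behavior: rotate a mesh vertex about the z-axis."""
    a = math.radians(degrees)
    x, y, z = vertex
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a),
            z)

def rotate_z_new(vertex, degrees):
    """Implementation under test (here identical by construction)."""
    return rotate_z_predicate(vertex, degrees)

def test_rotate_equivalence():
    vertex = (12.0, -3.5, 7.25)       # an arbitrary mesh vertex
    for degrees in (0, 15, 90, 180):
        old = rotate_z_predicate(vertex, degrees)
        new = rotate_z_new(vertex, degrees)
        assert all(math.isclose(a, b, abs_tol=1e-9) for a, b in zip(old, new))

if __name__ == "__main__":
    test_rotate_equivalence()
    print("rotation viewing function: equivalent")
```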

3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications of Those Experts

Not applicable. This testing was software performance validation, comparing the functionality of the new software to the old software, not a study requiring expert clinical interpretations or ground truth establishment based on medical images.


4. Adjudication Method for the Test Set

Not applicable. This was software performance validation against functional specifications, not a study requiring adjudication of expert readings.


5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study was Done

No, an MRMC comparative effectiveness study was explicitly NOT done. The document states: "No clinical performance testing was conducted on SO System brackets." This implies no human reader studies (with or without AI assistance) were performed. The FDA clearance is based on the substantial equivalence of the software component for its functional performance, not on demonstrating improved human reader performance.


6. If a Standalone (i.e. algorithm only without human-in-the loop performance) Was Done

No. The testing was functional validation of the software itself, akin to software quality assurance (QA) and verification/validation (V&V) activities. It was not a "standalone performance" study in the sense of evaluating an AI algorithm's diagnostic accuracy against a ground truth on a large set of real-world patient data in an isolated fashion. The software (Treatment Planning 3.1) is a tool within a larger system used by human operators and orthodontists.


7. The Type of Ground Truth Used

For the non-clinical performance testing of the software, the "ground truth" was the functional requirements and expected outputs based on the performance of the predicate software (Meshmixer 3.4). The testing aimed to show that the new software performed "equivalent" functions to the previously cleared predicate software.


8. The Sample Size for the Training Set

Not applicable. The document describes replacement software that was "developed by SO exclusively," implying it is rule-based or deterministic software, not a machine learning or AI algorithm that requires a "training set."


9. How the Ground Truth for the Training Set was Established

Not applicable, as no training set for a machine learning model was used.

§ 872.5470 Orthodontic plastic bracket.

(a) Identification. An orthodontic plastic bracket is a plastic device intended to be bonded to a tooth to apply pressure to a tooth from a flexible orthodontic wire to alter its position.

(b) Classification. Class II.