The LightForce Orthodontic System (LFO System) is a treatment planning software (TPS) and orthodontic appliance system used to correct malocclusions in orthodontic patients using patient-matched orthodontic appliances. The LFO System consists of patient-specific ceramic brackets, patient-specific bracket placement jigs, arch wire templates, and a TPS for viewing, measuring, and modifying cases and for submitting orders. LightForce Orthodontics' (LFO) operators and orthodontists use the TPS to generate a prescription of the orthodontist's choosing. LFO then manufactures the patient-specific brackets and placement jigs using proprietary additive manufacturing techniques. The orthodontist then bonds the brackets to the teeth using the optional placement jig and ligates wires to enable tooth movement. The LFO System does not include the commercially available or patient-specific shaped arch wires, ligatures, or adhesives that affix the brackets to the teeth.
The change is to upgrade the LFO System's Treatment Planning Software (TPS) from version 3.1 to version 4.0. TPS version 3.1 was originally cleared in K183542, and the LFO System was originally cleared in K181271. TPS version 4.0 provides an improved software architecture, updated hosting infrastructure, and user interface improvements that better match the intended use of the product, based on customer feedback and validation input.
A comparison of TPS 3.1 and TPS 4.0 showing that both versions provide the same features and functional workflows is offered as evidence of substantial equivalence. The replacement software is identical in performance and function to the previously used software.
The LightForce Orthodontic System (LFO System) is a treatment planning software (TPS) and orthodontic appliance system designed to correct malocclusions using patient-matched orthodontic appliances. The current submission (K200148) focuses on an upgrade to the Treatment Planning Software (TPS) from version 3.1 to version 4.0.
Here's an analysis of the acceptance criteria and the study that proves the device meets them:
1. A table of acceptance criteria and the reported device performance
The submission does not explicitly define quantitative "acceptance criteria" with specific thresholds that the device must meet for performance metrics. Instead, the study aims to demonstrate equivalence between the new TPS version 4.0 and the previously cleared TPS version 3.1. The performance criteria are functional aspects of the software, and the reported performance is simply "Equivalent (Test Report)".
Functional Area | Acceptance Criteria (Implied) | Reported Device Performance (LFO System TPS 4.0)
---|---|---
4.1 Diagnosis - Viewing | Viewing of the patient's digital impression is identical to TPS 3.1. | Equivalent (Test Report)
4.1 Diagnosis - Successful Diagnosis | Successful diagnosis of the patient's malocclusion is identical to TPS 3.1. | Equivalent (Test Report)
4.2 Planning - Successful Movement (Teeth) | Successful movement of the patient's teeth (within the software) is identical to TPS 3.1 or improved. | Equivalent (Test Report); the Feature Comparison table additionally notes "Improved movement accuracy by moving around desired axis" for TPS 4.0 relative to TPS 3.1 (see the rotation sketch after this table).
4.3 Planning - Successful Movement (Brackets) | Successful movement of brackets is identical to TPS 3.1. | Equivalent (Test Report)
4.4 Data Handling - Case Delivery | Case is delivered securely and uncorrupted, identical to TPS 3.1. | Equivalent (Test Report)
Software Features (e.g., Rotate, Zoom, Pan Impression, Hide/Show Arches) | These basic user interface functionalities are identical to TPS 3.1. | Identical (Feature Comparison table)
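The "moving around desired axis" improvement noted above is, geometrically, a rotation of a tooth about an arbitrary clinician-chosen axis rather than a fixed default axis. As a minimal sketch of that geometry only (not LFO's actual implementation; the function name, sample points, and axis values are hypothetical), the following applies Rodrigues' rotation formula to a set of 3D points:

```python
import numpy as np

def rotate_about_axis(points, origin, axis, angle_deg):
    """Rotate 3D points about an arbitrary axis passing through `origin`.

    Uses Rodrigues' rotation formula:
        v' = v*cos(t) + (k x v)*sin(t) + k*(k . v)*(1 - cos(t))
    where k is the unit rotation axis and t is the rotation angle.
    """
    k = np.asarray(axis, dtype=float)
    k /= np.linalg.norm(k)                         # normalize to a unit axis
    t = np.radians(angle_deg)
    v = np.asarray(points, dtype=float) - origin   # shift so the axis passes through 0
    rotated = (v * np.cos(t)
               + np.cross(k, v) * np.sin(t)
               + np.outer(v @ k, k) * (1 - np.cos(t)))
    return rotated + origin

# Example: tip two (toy) crown vertices 5 degrees about the tooth's long axis.
crown_points = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
moved = rotate_about_axis(crown_points, origin=np.zeros(3),
                          axis=[0.0, 0.0, 1.0], angle_deg=5.0)
```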
2. Sample size used for the test set and the data provenance
The document does not explicitly state the specific "sample size" for the test set used to validate TPS 4.0. The validation activities are described as "Validation testing of the TPS was performed in accordance with LFO's design control activities for software and to the software's Test Plan."
The data provenance (country of origin, retrospective or prospective) is not mentioned. Given that this is a software upgrade validation, the data likely came from internal testing rather than a clinical study of patients.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
The document does not specify the number of experts or their qualifications used to establish ground truth for the software validation. The testing seems to be internal validation against a "Test Plan" and comparison to functionality of the previous software version.
4. Adjudication method for the test set
The document does not describe an adjudication method for the test set results. The determination of "equivalence" likely relied on direct comparison of functionalities and outputs against predefined test cases within the "Test Plan."
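The submission does not describe the mechanics of that comparison. A conventional pattern for version-equivalence testing, offered here only as an illustrative sketch, is to export the planned result for the same case from both software versions and diff the outputs within a tolerance. The JSON export format, field names, and tolerance below are assumptions, not details from the submission:

```python
import json
import math

TOLERANCE_MM = 0.01  # assumed pass/fail tolerance; the actual Test Plan value is not public

def compare_exports(path_v31: str, path_v40: str) -> list[str]:
    """Compare planned tooth positions exported (hypothetically, as JSON)
    by TPS 3.1 and TPS 4.0 for the same case; return any discrepancies."""
    with open(path_v31) as f31, open(path_v40) as f40:
        old, new = json.load(f31), json.load(f40)
    failures = []
    for tooth_id, pos_old in old["tooth_positions"].items():
        pos_new = new["tooth_positions"][tooth_id]
        delta = math.dist(pos_old, pos_new)   # Euclidean deviation in mm
        if delta > TOLERANCE_MM:
            failures.append(f"tooth {tooth_id}: {delta:.3f} mm deviation")
    return failures
```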
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
No multi-reader multi-case (MRMC) comparative effectiveness study was done or reported in this submission. The submission is focused on a software upgrade for a treatment planning system and does not involve AI assistance for human readers in a diagnostic capacity.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
A standalone performance evaluation of the algorithm (TPS 4.0) against its predecessor (TPS 3.1) was performed. "Validation testing of the TPS was performed in accordance with LFO's design control activities for software and to the software's Test Plan." The results in Table 6-1 compare the performance of TPS 4.0 directly to TPS 3.1 across various functional areas. This is essentially a standalone comparison of the software versions.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The "ground truth" for the software validation appears to be the expected functional behavior and outputs as defined by the "Test Plan" for the software, and by direct comparison to the performance of the legally marketed predicate software (TPS 3.1). For example, the software's ability to "view patient's digital impression," "move patient's teeth," or "deliver case securely" would be verified against predetermined correct outcomes or the established behavior of the prior version. It is not based on expert consensus on clinical cases, pathological findings, or patient outcomes data.
8. The sample size for the training set
This submission pertains to the validation of a software upgrade (TPS 3.1 to TPS 4.0). It is not a submission for a de novo AI/ML algorithm that requires a training set in the traditional sense. The software's development would have involved internal testing and refinement, but a specific "training set" for an AI model is not applicable here, as the primary claim is functional equivalence and improvement of an existing treatment planning software.
9. How the ground truth for the training set was established
As noted above, this submission does not involve a "training set" for an AI/ML algorithm in the context of establishing ground truth for training. The software's capabilities are based on its design and implementation for treatment planning, not on learning from a labeled dataset.
§ 872.5470 Orthodontic plastic bracket.
(a) Identification. An orthodontic plastic bracket is a plastic device intended to be bonded to a tooth to apply pressure to a tooth from a flexible orthodontic wire to alter its position.
(b) Classification. Class II.