Search Results
Found 2 results
510(k) Data Aggregation
The LightForce Orthodontic System is a treatment planning software (TPS) and orthodontic appliance system used to correct malocclusions in orthodontic patients using patient-matched orthodontic appliances.
The LightForce Orthodontic System (LFO System) is a treatment planning software (TPS) and orthodontic appliance system used to correct malocclusions in orthodontic patients using patient-matched orthodontic appliances. The LFO System consists of patient-specific ceramic brackets, patient-specific bracket placement jigs, arch wire templates, and a TPS for viewing, measuring, and modifying cases and submitting orders. LightForce Orthodontics' (LFO) operators and the orthodontists use the TPS to generate a prescription of their choosing. LFO then manufactures the patient-specific brackets and placement jigs using proprietary additive manufacturing techniques. The orthodontist then bonds the brackets to the teeth using the optional placement jig and ligates wires to enable tooth movement. The LFO System does not contain commercially available or patient-specific shaped arch wires, ligatures, or adhesives that affix the brackets to the teeth.
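A minimal sketch of how such a case might be represented as it moves from viewing and modification through to order submission, assuming a hypothetical data model (none of these class or field names come from the submission):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class BracketPlacement:
    """Planned position and orientation of one patient-specific bracket (hypothetical schema)."""
    tooth_id: str        # e.g. "UR1" for the upper right central incisor
    position_mm: tuple   # (x, y, z) on the tooth surface, in mm
    rotation_deg: tuple  # (torque, tip, rotation) values

@dataclass
class TreatmentCase:
    """One orthodontic case as it might flow through a TPS of this kind."""
    case_id: str
    impression_file: str  # path or URL to the scanned digital impression
    brackets: List[BracketPlacement] = field(default_factory=list)
    prescription_notes: str = ""

    def submit_order(self) -> dict:
        """Bundle the case into an order payload for manufacturing."""
        return {
            "case_id": self.case_id,
            "impression": self.impression_file,
            "brackets": [vars(b) for b in self.brackets],
            "notes": self.prescription_notes,
        }
```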
The change is to upgrade LFO System's Treatment Planning Software (TPS) from version 3.1 to version 4.0. TPS version 3.1 was originally cleared in K183542 and the LFO System was originally cleared in K181271. TPS version 4.0 provides improved software architecture, updated hosting infrastructure and improvements to the user interface to better match the intended use of the product based on customer feedback and validation input.
A comparison of TPS 3.1 and TPS 4.0 showing that both versions provide the same features and functional workflows is offered as evidence of substantial equivalence. The replacement software is identical in performance and function to the previously used software.
The LightForce Orthodontic System (LFO System) is a treatment planning software (TPS) and orthodontic appliance system designed to correct malocclusions using patient-matched orthodontic appliances. The current submission (K200148) focuses on an upgrade to the Treatment Planning Software (TPS) from version 3.1 to version 4.0.
Here's an analysis of the acceptance criteria and the study that proves the device meets them:
1. A table of acceptance criteria and the reported device performance
The submission does not explicitly define quantitative "acceptance criteria" with specific thresholds that the device must meet for performance metrics. Instead, the study aims to demonstrate equivalence between the new TPS version 4.0 and the previously cleared TPS version 3.1. The performance criteria are functional aspects of the software, and the reported performance is simply "Equivalent (Test Report)".
Functional Area | Acceptance Criteria (Implied) | Reported Device Performance (LFO System TPS 4.0) |
---|---|---|
4.1 Diagnosis - Viewing | Viewing of patient's digital impression is identical to TPS 3.1. | Equivalent (Test Report) |
4.1 Diagnosis - Successful Diagnosis | Successful diagnosis of patient's malocclusion is identical to TPS 3.1. | Equivalent (Test Report) |
4.2 Planning - Successful Movement (Teeth) | Successful movement of patient's teeth (within software) is identical to TPS 3.1 or improved. | Equivalent (Test Report); the feature comparison also notes "Improved movement accuracy by moving around desired axis" for TPS 4.0 vs. TPS 3.1 (see the rotation sketch after this table). |
4.3 Planning - Successful Movement (Brackets) | Successful movement of brackets is identical to TPS 3.1. | Equivalent (Test Report) |
4.4 Data Handling - Case Delivery | Case delivered securely and uncorrupted is identical to TPS 3.1. | Equivalent (Test Report) |
Software Features (e.g., Rotate, Zoom, Pan Impression, Hide/Show arches) | These basic user interface functionalities are identical to TPS 3.1. | Identical (in feature comparison table) |
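The tooth-movement row above cites "Improved movement accuracy by moving around desired axis" in TPS 4.0. As a purely illustrative sketch of what rotating a point about an arbitrary, user-chosen axis looks like numerically (Rodrigues' rotation formula; this is not LFO's implementation):

```python
import numpy as np

def rotate_about_axis(point, axis, angle_deg, origin=(0.0, 0.0, 0.0)):
    """Rotate a 3D point about an arbitrary axis passing through `origin`.

    Rodrigues' rotation formula:
        v_rot = v*cos(t) + (k x v)*sin(t) + k*(k . v)*(1 - cos(t))
    where k is the unit axis direction and t the rotation angle.
    """
    p = np.asarray(point, dtype=float) - np.asarray(origin, dtype=float)
    k = np.asarray(axis, dtype=float)
    k = k / np.linalg.norm(k)
    t = np.radians(angle_deg)
    p_rot = (p * np.cos(t)
             + np.cross(k, p) * np.sin(t)
             + k * np.dot(k, p) * (1.0 - np.cos(t)))
    return p_rot + np.asarray(origin, dtype=float)

# Example: rotate a cusp tip 5 degrees about a tooth's long axis (z here).
print(rotate_about_axis((1.0, 0.0, 0.0), axis=(0, 0, 1), angle_deg=5.0))
```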
2. Sample size used for the test set and the data provenance
The document does not explicitly state the specific "sample size" for the test set used to validate TPS 4.0. The validation activities are described as "Validation testing of the TPS was performed in accordance with LFO's design control activities for software and to the software's Test Plan."
The data provenance (country of origin, retrospective/prospective) is not mentioned. Given it's a software upgrade validation, it's likely internal testing rather than clinical study data from patients.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
The document does not specify the number of experts or their qualifications used to establish ground truth for the software validation. The testing seems to be internal validation against a "Test Plan" and comparison to functionality of the previous software version.
4. Adjudication method for the test set
The document does not describe an adjudication method for the test set results. The determination of "equivalence" likely relied on direct comparison of functionalities and outputs against predefined test cases within the "Test Plan."
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
No multi-reader multi-case (MRMC) comparative effectiveness study was done or reported in this submission. The submission is focused on a software upgrade for a treatment planning system and does not involve AI assistance for human readers in a diagnostic capacity.
6. If a standalone performance study (i.e., algorithm only, without human-in-the-loop) was done
A standalone performance evaluation of the algorithm (TPS 4.0) against its predecessor (TPS 3.1) was performed. "Validation testing of the TPS was performed in accordance with LFO's design control activities for software and to the software's Test Plan." The results in Table 6-1 compare the performance of TPS 4.0 directly to TPS 3.1 across various functional areas. This is essentially a standalone comparison of the software versions.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The "ground truth" for the software validation appears to be the expected functional behavior and outputs as defined by the "Test Plan" for the software, and by direct comparison to the performance of the legally marketed predicate software (TPS 3.1). For example, the software's ability to "view patient's digital impression," "move patient's teeth," or "deliver case securely" would be verified against predetermined correct outcomes or the established behavior of the prior version. It is not based on expert consensus on clinical cases, pathological findings, or patient outcomes data.
8. The sample size for the training set
This submission pertains to the validation of a software upgrade (from TPS 3.1 to TPS 4.0). It is not a submission for a de novo AI/ML algorithm that requires a training set in the traditional sense. The software's development would have involved internal testing and refinement, but a specific "training set" for an AI model is not applicable here, as the primary claim is functional equivalence and improvement of an existing treatment planning software.
9. How the ground truth for the training set was established
As noted above, this submission does not involve a "training set" for an AI/ML algorithm in the context of establishing ground truth for training. The software's capabilities are based on its design and implementation for treatment planning, not on learning from a labeled dataset.
The Signature Orthodontic System is a treatment planning software and orthodontic appliance system used to correct malocclusions in orthodontic patients using patient-matched orthodontic appliances.
The Signature Orthodontic System (SO System) is a treatment planning software (TPS) and orthodontic appliance system used to correct malocclusions in orthodontic patients using patient-matched orthodontic appliances. The SO System consists of patient-specific ceramic brackets, patient-specific bracket placement jigs, arch wire templates, and a TPS for viewing and modifying cases. Signature Orthodontics' (SO) operators and the orthodontists use the TPS to generate a prescription of their choosing. SO then manufactures the patient-specific brackets and placement jigs using proprietary additive manufacturing techniques. The orthodontist then bonds the brackets to the teeth using the optional placement jig and ligates wires to enable tooth movement. The SO System does not contain commercially available or patient-specific shaped arch wires, ligatures, or adhesives that affix the brackets to the teeth.
The provided document is a 510(k) summary for the Signature Orthodontic System, specifically focusing on a software change (replacement of Meshmixer 3.4 with Treatment Planning 3.1). Based on the information presented, here's an analysis of the acceptance criteria and the study proving the device meets them:
Overall Conclusion from the Document:
The document states that the new software component (Treatment Planning 3.1) performs "equivalent" to the previous off-the-shelf software (Meshmixer 3.4) and that the product's indications for use, product codes/regulations, sequence of treatment plan, and manufacturing method are "identical" to the predicate device. The core of the equivalence claim rests on non-clinical performance testing of the software itself. It explicitly states "No clinical performance testing was conducted."
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are implicitly defined by the demonstration of "equivalence" to the predicate device's software function. The performance is reported as "Equivalent" for all tested functions.
Function/Workflow | Predicate Device Performance (Meshmixer 3.4) | New Device Performance (TPS 3.1) | Acceptance Criteria Met? |
---|---|---|---|
4.1 Diagnosis - viewing patient's digital impression | Rendering of impression using triangle meshes read from STL files | Rendering of impression using triangle meshes read from STL files | Equivalent (Test Report); see the STL reading sketch after this table |
4.1 Diagnosis - successful diagnosis of patient's malocclusion | Hide/show individual arches (requires multiple clicks) | Hide/show individual arches (single click) | Equivalent (Test Report) |
4.4 Data Handling - case data delivered securely and un-corrupted | Secure link used to download STL files, then File -> Open in device. | Open secure link in web browser (removes download STL files step) | Equivalent (Test Report) |
4.1 Diagnosis - viewing and measuring patient's digital impression | Rotate impression display | Rotate impression display | Equivalent (Test Report) |
4.1 Diagnosis - viewing and measuring patient's digital impression | Pan impression display | Pan impression display | Equivalent (Test Report) |
4.1 Diagnosis - viewing and measuring patient's digital impression | Zoom impression display | Zoom impression display | Equivalent (Test Report) |
4.1 Diagnosis - viewing and measuring patient's digital impression | Point-to-point measurement | N/A (not included in TPS 3.1 requirements specification) | (No direct comparison, but overall equivalence claimed based on other features) |
Note on "Point-to-point measurement": While TPS 3.1 does not include this feature, the overall claim is "equivalence" based on the other listed functions. The rationale for this difference not affecting equivalence is not explicitly detailed but might be because it's considered a minor functional change that doesn't impact the core safety and effectiveness of the device for its stated indications for use, or that other remaining features are sufficient.
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: The document does not specify a "sample size" in terms of patient data or number of cases for the performance testing. The validation testing was performed "in accordance with SO's design control activities for software and to the software's Test Plan." This suggests software validation processes (e.g., unit testing, integration testing, system testing) rather than a clinical study with a patient cohort.
- Data Provenance: Not applicable, as this was non-clinical software performance testing against functional requirements, not testing with patient data.
- Retrospective/Prospective: Not applicable.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications of Those Experts
Not applicable. This testing was software performance validation, comparing the functionality of the new software to the old software, not a study requiring expert clinical interpretations or ground truth establishment based on medical images.
4. Adjudication Method for the Test Set
Not applicable. This was software performance validation against functional specifications, not a study requiring adjudication of expert readings.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
No, an MRMC comparative effectiveness study was explicitly NOT done. The document states: "No clinical performance testing was conducted on SO System brackets." This implies no human reader studies (with or without AI assistance) were performed. The FDA clearance is based on the substantial equivalence of the software component for its functional performance, not on demonstrating improved human reader performance.
6. If a Standalone Performance Study (i.e., Algorithm Only, Without Human-in-the-Loop) Was Done
No. The testing was functional validation of the software itself, akin to software quality assurance (QA) and verification/validation (V&V) activities. It was not a "standalone performance" study in the sense of evaluating an AI algorithm's diagnostic accuracy against a ground truth on a large set of real-world patient data in an isolated fashion. The software (Treatment Planning 3.1) is a tool within a larger system used by human operators and orthodontists.
7. The Type of Ground Truth Used
For the non-clinical performance testing of the software, the "ground truth" was the functional requirements and expected outputs based on the performance of the predicate software (Meshmixer 3.4). The testing aimed to show that the new software performed "equivalent" functions to the previously cleared predicate software.
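One row above, "4.4 Data Handling - case data delivered securely and un-corrupted," is an example of an expected output that can be checked deterministically. A minimal sketch, assuming a SHA-256 digest is published alongside each case file (an assumption, not something the document states):

```python
import hashlib

def file_sha256(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a file, streaming it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_case_download(path, expected_sha256):
    """Return True if the downloaded case file matches the published digest."""
    return file_sha256(path) == expected_sha256.lower()

# Example (hypothetical values):
# ok = verify_case_download("case_001.stl", expected_sha256="<digest from the secure link>")
```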
8. The Sample Size for the Training Set
Not applicable. The document describes a replacement software that was "developed by SO exclusively," implying it's a rule-based or deterministic software, not a machine learning or AI algorithm that requires a "training set."
9. How the Ground Truth for the Training Set was Established
Not applicable, as no training set for a machine learning model was used.