K Number: K212770
Device Name: Vision
Manufacturer:
Date Cleared: 2021-12-21 (112 days)
Product Code:
Regulation Number: 872.5470
Panel: DE
Reference & Predicate Devices
Intended Use

The SoftSmile Vision is intended for use as a medical front-end device providing tools for management of orthodontic models, systematic inspection, detailed analysis, treatment simulation, and virtual design of a series of dental casts, which may be used for sequential aligner trays or retainers, based on 3D models of the patient's dentition before the start of an orthodontic treatment. It can also be applied during treatment to inspect and analyze the progress of the treatment, and at the end of treatment to evaluate whether the outcome is consistent with the planned/desired treatment objectives. The use of SoftSmile Vision requires the user to have the necessary training and domain knowledge in the practice of orthodontics, as well as to have received dedicated training in the use of the software.

Device Description

SoftSmile Vision is orthodontic planning and treatment simulation software for use by dental professionals. SoftSmile Vision imports patient 3-D digital scans and allows the user to assess the orthodontic treatment needs of the patient and develop a treatment plan. The output of the treatment plan may be downloaded as files in standard stereolithography (STL) format for fabrication of dental casts, which a manufacturer may then use to fabricate sequential aligner trays or retainers.
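The binary STL format referenced above is a simple fixed-layout container: an 80-byte header, a little-endian uint32 triangle count, then 50 bytes per triangle. As a hedged illustration of that layout (not SoftSmile's actual export pipeline), a minimal writer and reader might look like:

```python
import struct
import io

def write_binary_stl(triangles):
    """Serialize triangles (each a tuple of 3 (x, y, z) vertices) to binary STL bytes."""
    buf = io.BytesIO()
    buf.write(b"\x00" * 80)                       # 80-byte header (unused here)
    buf.write(struct.pack("<I", len(triangles)))  # uint32 triangle count
    for tri in triangles:
        buf.write(struct.pack("<3f", 0.0, 0.0, 0.0))   # facet normal (recomputable)
        for vx, vy, vz in tri:
            buf.write(struct.pack("<3f", vx, vy, vz))  # three vertices
        buf.write(struct.pack("<H", 0))                # attribute byte count
    return buf.getvalue()

def read_triangle_count(data):
    """Return the triangle count declared at byte offset 80 of a binary STL blob."""
    (count,) = struct.unpack_from("<I", data, 80)
    return count

tri = [((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))]
blob = write_binary_stl(tri)
print(read_triangle_count(blob))  # 1
```

Each facet record is 12 floats' worth of data plus a 2-byte attribute field, so a well-formed file is exactly 84 + 50 × n bytes long, which makes declared-count-versus-size checks a cheap sanity test on exported files.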

AI/ML Overview

Here's an analysis of the provided text, focusing on acceptance criteria and the study proving device performance:

1. Table of Acceptance Criteria and Reported Device Performance

The provided FDA 510(k) summary does not state specific, quantifiable acceptance criteria, nor does it tabulate them against reported device performance. Instead, it asserts that acceptance criteria established during software verification and validation were met. The only form of "performance" discussed is the software functioning as intended and being substantially equivalent to the predicate device.

Therefore, a table with specific performance metrics cannot be generated from the given text.

The document states:

  • "All test results met acceptance criteria, demonstrating the Vision software performs as intended, raises no new or different questions of safety or effectiveness and is substantially equivalent to the predicate device."

This is a general statement of compliance rather than a detailed report of performance against predefined thresholds.

2. Sample Size Used for the Test Set and Data Provenance

The document does not specify the sample size used for the test set. It also does not mention the country of origin of the data or whether the data was retrospective or prospective.

3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

The document does not specify the number of experts used to establish ground truth or their qualifications. The study described is primarily focused on software verification and validation, not clinical performance reviewed against expert-derived ground truth.

4. Adjudication Method for the Test Set

The document does not mention any adjudication method (e.g., 2+1, 3+1, none) for a test set. This type of method is typically used when human readers or experts are involved in establishing ground truth for evaluating diagnostic or predictive devices, which is not the focus of the described "study."

5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, and its effect size

No. The document does not mention a Multi-Reader Multi-Case (MRMC) comparative effectiveness study. The focus is on software validation relative to a predicate device, not on comparing human performance with and without AI assistance. Therefore, there is no effect size reported for human readers improving with AI.

6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done

Yes, inferentially. The "study" described is a "Software and integration verification and validation testing." This type of testing assesses the algorithm's performance and functionality in a standalone manner, ensuring it operates as designed, without human intervention during the core processing. The statement "demonstrating the Vision software performs as intended" implies standalone evaluation of the software's functions.

7. The Type of Ground Truth Used

The document does not explicitly state the type of ground truth used in the context of clinical outcomes or expert consensus. Given the nature of a software verification and validation study, the "ground truth" would likely be defined by:

  • Software requirements specifications: The expected behavior and output of the software.
  • Predicate device behavior: The established functionality and output of the legally marketed predicate device (ULab Systems UDesign (K171295)).
  • Engineering specifications: Correctness of calculations, data handling, and file exports according to predefined digital standards.
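Engineering-spec checks of this kind are typically automated as unit tests over the exported geometry. As a hedged sketch (hypothetical, not SoftSmile's actual test suite), one common validity criterion for a printable dental-cast mesh is that it be watertight, i.e., every edge is shared by exactly two triangles:

```python
from collections import Counter

def is_watertight(triangles):
    """True if every edge is shared by exactly two triangles (closed surface)."""
    edge_uses = Counter()
    for a, b, c in triangles:
        for u, v in ((a, b), (b, c), (c, a)):
            edge_uses[frozenset((u, v))] += 1  # undirected edge
    return all(n == 2 for n in edge_uses.values())

# A tetrahedron (closed surface) vs. a single triangle (open surface)
v = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
tetra = [(v[0], v[1], v[2]), (v[0], v[1], v[3]),
         (v[0], v[2], v[3]), (v[1], v[2], v[3])]
print(is_watertight(tetra))      # True
print(is_watertight(tetra[:1]))  # False
```

In a V&V context, the "ground truth" for such a test is the requirement itself (closed, manifold export geometry), not an expert-labeled dataset, which is consistent with the absence of expert-adjudicated ground truth in this submission.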

8. The Sample Size for the Training Set

The document does not mention a training set or its sample size. This is consistent with the device being a "front-end" software for treatment planning and simulation, rather than a machine learning model that requires a large training dataset for inference.

9. How the Ground Truth for the Training Set Was Established

Since no training set is mentioned, there is no information on how its ground truth might have been established.

§ 872.5470 Orthodontic plastic bracket.

(a) Identification. An orthodontic plastic bracket is a plastic device intended to be bonded to a tooth to apply pressure to a tooth from a flexible orthodontic wire to alter its position.

(b) Classification. Class II.