CEREC Ortho Software is intended for use with image data acquired from handheld intraoral 3D cameras and desktop laboratory scanners to create 3D virtual models to be used for data acquisition and modeling analysis for orthodontic patients and conditions. The CEREC Ortho Software 3D model data can be exported to orthodontic design software to aid in the design of orthodontic appliances.
The CEREC Ortho Software is stand-alone software that utilizes images of the patient's intraoral anatomy from intraoral cameras and/or desktop laboratory scanners to create a 3D virtual dental model that can be used in the same manner as a traditional physical dental model.
The CEREC Ortho Software facilitates the segmentation and editing of the 3D virtual digital model as well as analysis that can be used in secondary orthodontic treatment planning. The software allows measurement and jaw analysis to be performed, including Bolton, Nance, and Moyers analyses. The models and analyses produced by the proposed CEREC Ortho Software can be exported to an orthodontic laboratory or directly to orthodontic appliance manufacturers for use in orthodontic treatment planning and design of orthodontic appliances.
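The Bolton analysis named above reduces to simple arithmetic on mesiodistal tooth widths: the overall ratio divides the summed widths of the twelve mandibular teeth (first molar to first molar) by the summed widths of the twelve maxillary teeth, and the anterior ratio does the same for the six canine-to-canine teeth, with Bolton's published reference means of roughly 91.3% and 77.2%, respectively. The sketch below (Python, with hypothetical width values that are not taken from the submission) shows only that arithmetic, not the software's actual implementation.

```python
# Minimal sketch of a Bolton tooth-size analysis.
# Mesiodistal widths (mm) are hypothetical illustrative values, not data from
# the 510(k) submission. Lists run from first molar to first molar.

def bolton_ratio(mandibular_widths, maxillary_widths):
    """Bolton ratio (%) = sum(mandibular widths) / sum(maxillary widths) * 100."""
    return sum(mandibular_widths) / sum(maxillary_widths) * 100.0

# Hypothetical per-arch mesiodistal widths in millimetres.
mandibular_12 = [10.4, 7.0, 7.2, 6.8, 5.9, 5.3, 5.3, 5.9, 6.8, 7.2, 7.0, 10.4]
maxillary_12  = [10.1, 6.9, 7.6, 7.9, 6.6, 8.5, 8.5, 6.6, 7.9, 7.6, 6.9, 10.1]

overall = bolton_ratio(mandibular_12, maxillary_12)
anterior = bolton_ratio(mandibular_12[3:9], maxillary_12[3:9])  # canine-to-canine

# Bolton's reference means: ~91.3% overall, ~77.2% anterior.
print(f"Overall Bolton ratio:  {overall:.1f}% (reference mean ~91.3%)")
print(f"Anterior Bolton ratio: {anterior:.1f}% (reference mean ~77.2%)")
```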
The document provides information on the CEREC Ortho Software (K171122) and its substantial equivalence to a predicate device. Here's a breakdown based on your request:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly present a table of acceptance criteria with corresponding performance metrics in a pass/fail format. However, it states that testing was conducted "to verify the accuracy of the measurement functions of the CEREC Ortho Software as well as the trueness and precision of optical impressions produced using CEREC optical impression systems and the CEREC Omnicam intra-oral scanner (510(k)-exempt under 21 CFR 872.3661)," and concludes that "the results support substantial equivalence." This implies that the device met internal performance criteria for accuracy, trueness, and precision, which were deemed sufficient to support substantial equivalence.
Therefore, the table cannot be fully completed as requested due to the absence of specific numeric acceptance criteria and detailed performance results in the provided text. However, we can infer the tested aspects:
| Acceptance Criteria (Implied) | Reported Device Performance (Implied) |
|---|---|
| Accuracy of measurement functions | Verified (supports substantial equivalence) |
| Trueness of optical impressions | Verified (supports substantial equivalence) |
| Precision of optical impressions | Verified (supports substantial equivalence) |
| Conformity with IEC 62304 | Achieved (software validated) |
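The submission does not say how trueness and precision were quantified. A common convention in scanner metrology (following ISO 5725-style definitions) is to treat trueness as the deviation of the mean of repeated measurements from a reference and precision as the scatter of those repeated measurements; the sketch below illustrates that convention with hypothetical per-scan deviation values, not results from the submission.

```python
# Illustrative sketch: trueness and precision of repeated digital impressions,
# using hypothetical per-scan deviations (µm) of each scan from a reference
# model. These are NOT data from the 510(k) submission.
from statistics import mean, stdev

# Mean signed deviation from the reference model for each repeated scan (µm).
scan_deviations_um = [12.1, 9.8, 14.3, 11.0, 10.7, 13.5, 12.9, 11.6]

trueness_um = mean(scan_deviations_um)    # closeness of the mean to the reference
precision_um = stdev(scan_deviations_um)  # scatter of repeated scans (sample SD)

print(f"Trueness (mean deviation):  {trueness_um:.1f} µm")
print(f"Precision (std. deviation): {precision_um:.1f} µm")
```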
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document mentions testing to verify the accuracy of the software's measurement functions and the trueness and precision of optical impressions produced with CEREC optical impression systems and the CEREC Omnicam intra-oral scanner. However, it does not specify the sample size used for these tests, the country of origin of the data, or whether the data were collected retrospectively or prospectively.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
The document does not provide any information about experts used to establish ground truth for the test set or their qualifications. The non-clinical performance data section focuses on software verification and on the accuracy, trueness, and precision of impressions, suggesting a technical assessment rather than expert evaluation of diagnostic outcomes.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
The document does not describe any adjudication method for the test set, as it does not detail the process of establishing ground truth by experts.
5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance
The document does not mention a multi-reader multi-case (MRMC) comparative effectiveness study. It focuses on the standalone performance of the software in creating and analyzing 3D models. The software aids in orthodontic treatment planning but does not directly assess human reader performance improvements.
6. If a standalone study (i.e., algorithm-only performance without a human in the loop) was done
Yes, a standalone performance assessment was done. The "Non-Clinical Performance Data" section describes testing to verify the accuracy of the software's measurement functions and the trueness and precision of optical impressions produced with CEREC optical impression systems and the CEREC Omnicam intra-oral scanner. This testing evaluates the algorithm's output (measurements, and the trueness and precision of the resulting models) directly, without a human in the loop.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The type of "ground truth" implicitly used for the non-clinical performance testing would be physical measurements or established standards for accuracy, trueness, and precision of dental impressions and model measurements. For example, to verify the accuracy of measurement functions, the software's measurements would likely be compared against known, perhaps physically measured, distances or angles on a reference model or object.
8. The sample size for the training set
The document does not explicitly state a sample size for a training set. Because this is a 510(k) submission for software that creates 3D virtual models for analysis, rather than a deep-learning model for image interpretation, the concept of a "training set" in the machine-learning sense may not directly apply. The software likely relies on algorithms and computational geometry for model creation and analysis, which are developed and validated through engineering verification rather than large-scale data training.
9. How the ground truth for the training set was established
Given that a "training set" in the machine learning context is not explicitly mentioned, and the software's function is more about model creation and measurement based on scans, the concept of ground truth for a training set in the typical AI sense is not addressed. The underlying algorithms would be developed and refined based on established principles of dental anatomy, imaging, and metrology.
§ 872.5470 Orthodontic plastic bracket.
(a) Identification. An orthodontic plastic bracket is a plastic device intended to be bonded to a tooth to apply pressure to a tooth from a flexible orthodontic wire to alter its position.
(b) Classification. Class II.