ORTOMED is intended for use by specialized dental practices for capturing, storing and presenting patient images and radiographs and to aid in cephalometric analysis, orthodontic and orthognathic surgery treatment planning and communication as well as case follow-up. Results produced by the software tools are to be interpreted by trained and licensed dental practitioners.
Ortomed is an imaging software application designed for use by specialized dental practices for capturing, storing and presenting patients' dental images, and for assisting in treatment planning and case diagnosis, specifically cephalometric tracing for orthodontic and orthognathic cases. Results produced by the software's diagnostic and treatment planning tools must be interpreted by a trained, licensed practitioner. The software provides image management through the Gesden database and the Gesimag image management suite, together with specific cephalometric analysis functions and treatment simulation. Features and capabilities include:
- Cephalometric landmarking and analysis: lateral, frontal, models and soft-tissue profile
- Analysis of models: Bolton, Moyers, dentoalveolar discrepancy analysis
- Lateral tracing superimposition, overlays on patient images
- CO/CR conversion, growth forecasts, simulation of facial profile growth
- VTO (Visual Treatment Objective) treatment simulation for orthodontic cases
- STO (Surgical Treatment Objective) treatment simulation for orthognathic surgical cases
- Simulation and display of VTO/STO treatments using warping and morphing
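To make the cephalometric analysis feature concrete, the sketch below shows the kind of computation such a tool performs: deriving an angular measurement (here SNA, the angle at Nasion between Sella and A-point) from digitized landmark coordinates. The function name and coordinate values are illustrative assumptions, not taken from the ORTOMED implementation.

```python
import math

def angle_at(vertex, p1, p2):
    """Angle (degrees) at `vertex` formed by the rays toward p1 and p2."""
    v1 = (p1[0] - vertex[0], p1[1] - vertex[1])
    v2 = (p2[0] - vertex[0], p2[1] - vertex[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(dot / norm))

# Hypothetical landmark coordinates (pixels) on a lateral cephalogram.
sella, nasion, a_point = (100.0, 100.0), (200.0, 100.0), (195.0, 160.0)

# SNA: the angle at Nasion between Sella and A-point; values near 82
# degrees are conventionally read as a normal maxillary position.
sna = angle_at(nasion, sella, a_point)
```

In a real tracing, the landmark coordinates would come from the practitioner's on-screen landmarking rather than being hard-coded.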
This document is a 510(k) premarket notification from INFOMED SERVICIOS INFORMATICOS S.L. for their device, ORTOMED. The FDA's letter states that the device is substantially equivalent to legally marketed predicate devices for its stated indications for use.
The document discusses the device's intended use, its comparison to predicate devices (Dolphin Imaging 11.5 and CS Orthodontic Imaging/CS OMS Imaging Software), and a summary of non-clinical data used to support the submission.
Crucially, this document is a 510(k) premarket notification. It does NOT describe a study that explicitly proves the device meets specific acceptance criteria based on quantifiable performance metrics with a defined test set, ground truth, or expert review process as would be expected for a typical AI/ML device approval.
Instead, the submission relies on demonstrating substantial equivalence to existing predicate devices. This means that the testing performed, primarily non-clinical, aimed to show that the new device has similar technological characteristics and performs comparably to the predicate devices, and that it introduces no new or increased risks.
Therefore, the requested information regarding acceptance criteria, sample sizes, expert ground-truth establishment, adjudication methods, MRMC studies, standalone performance, and training-set details is not explicitly present, nor applicable in the way it would be for a typical performance-based evaluation of an AI/ML diagnostic tool.
Here's how to address your request based on the provided text, highlighting what is available and what is not:
1. A table of acceptance criteria and the reported device performance
This document does not present a table of specific quantitative acceptance criteria or reported performance metrics in terms of accuracy, sensitivity, specificity, etc., for automated tasks. The regulatory pathway is substantial equivalence, which focuses on demonstrating the new device is as safe and effective as existing legally marketed devices, rather than meeting novel performance thresholds.
The document focuses on comparing features and functionalities with predicate devices to establish equivalence. The "performance" assessment is qualitative in this context, ensuring the software tools produce results that are to be "interpreted by trained and licensed dental practitioners," similar to how predicate devices function.
2. Sample sizes used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
The document primarily relies on "non-clinical test data" for "adequate safety and performance." It does not specify a distinct "test set" in the context of a clinical performance study with patient data. As this is a substantial equivalence submission, rigorous patient-data-based testing with a defined test set and provenance (like retrospective or prospective cohort studies) is typically not required unless new clinical questions or significant technological changes are present. The document focuses on software validation and verification.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)
No information is provided on expert involvement for "ground truth" establishment, as this type of performance study (e.g., for diagnostic accuracy) was not described. The device's results are intended to be "interpreted by trained and licensed dental practitioners," implying human oversight rather than automated diagnostic claims requiring ground truth for validation.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
Not applicable, as no clinical performance test set with a need for ground truth adjudication is described.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
There is no mention of an MRMC study or any study evaluating the effect of AI assistance on human readers. The device is a "Picture archiving and communications system" and "Radiological Image Processing System" with tools for cephalometric analysis and treatment simulation, which imply computational assistance but not necessarily an AI-driven diagnostic aid that would typically warrant an MRMC study for assessing human performance improvement.
6. If a standalone performance study (i.e., algorithm only, without human-in-the-loop) was done
The document does not describe standalone algorithm performance for diagnostic claims. The Indications for Use explicitly state: "Results produced by the software tools are to be interpreted by trained and licensed dental practitioners." This indicates that the device is an aid to a human practitioner, not a standalone diagnostic tool.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
Ground truth, in the context of a performance study for an AI/ML diagnostic, is not discussed because no such study is presented for ORTOMED. The "ground truth" for the software's functionality would be its adherence to specified design requirements and accurate execution of its algorithms (e.g., correct calculation of measurements), which are verified through software testing.
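As an illustration of the kind of software verification alluded to here (confirming that a measurement routine matches a hand-computed expected value), the sketch below assumes a hypothetical `bolton_ratio` helper. The 91.3 % overall-ratio norm is Bolton's published figure, but the function and test values are invented for illustration.

```python
def bolton_ratio(mandibular_widths_mm, maxillary_widths_mm):
    """Overall Bolton ratio: total mandibular tooth width divided by
    total maxillary tooth width, expressed as a percentage."""
    return 100.0 * sum(mandibular_widths_mm) / sum(maxillary_widths_mm)

# Verification-style check against a hand-computed expectation:
# 84.0 mm / 91.3 mm * 100 is approximately 92.0 %. (Bolton's published
# norm for the overall ratio is 91.3 %.)
ratio = bolton_ratio([84.0], [91.3])
assert abs(ratio - 92.0) < 0.1
```

A verification suite in the sense of IEC 62304 would consist of many such checks, each tying an algorithm's output to an independently computed expected result.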
8. The sample size for the training set
Not applicable. This is not a description of an AI/ML model development where a "training set" of data would be used to train a machine learning algorithm. The document describes a software system with computational tools (e.g., for cephalometric analysis and treatment simulation) that are based on predefined algorithms and rules, not learned from data.
9. How the ground truth for the training set was established
Not applicable for the same reason as point 8.
Summary of Acceptance Criteria and Study (Based on Provided Document):
The "acceptance criteria" for ORTOMED, as implied by this 510(k) submission, are primarily centered on demonstrating substantial equivalence to predicate devices in terms of:
- Intended Use: Matching the intended use of existing, legally marketed devices.
- Technological Characteristics: Possessing similar fundamental technological features and functionalities.
- Safety and Performance: Demonstrating that the device is as safe and effective as the predicate devices, and that it does not introduce new or increased risks.
The "Study" that Proves Acceptance:
The "study" described is the 510(k) submission process itself, which includes:
- Non-clinical Data: The documentation states, "Non-clinical test data are submitted to support this premarket notification and to establish the decision concerning adequate safety and performance of the predicate device."
- Design and Development Compliance: The device "has been designed, developed, tested, verified and validated according to documented procedures and specific protocols in line with the following FDA guidance documents":
- Guidance for the Submission of Premarket Notifications for Medical Imaging Management Devices
- Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices.
- Risk Management: "Design and development included identification, evaluation and control of potential hazards as per standard ISO 14971."
- Software Lifecycle Processes: "Integration, verification and validation testing have been successfully completed following standard ISO 62304."
In conclusion, this document primarily details a regulatory submission for substantial equivalence based on
non-clinical software verification and validation, rather than a clinical performance study with predefined acceptance criteria and a detailed statistical analysis of AI/ML performance.
§ 892.2050 Medical image management and processing system.
(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).