ImplantMax Software is a stand-alone, Windows-based software application that supports diagnosis and treatment planning for dental implantation. It is designed for qualified dental practitioners, including dentists and lab technicians. The software imports the patient's medical image dataset in DICOM format from medical CT or dental CBCT scanners and converts it into a 3D dataset for pre-operative planning and simulation of dental implant placement.
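As an illustration only (the document does not describe the vendor's implementation), the import step amounts to reading a DICOM slice series, ordering the slices along the scan axis, and stacking them into a volume of CT numbers. A minimal sketch using pydicom and numpy, with the hypothetical helper name `load_ct_volume`:

```python
import numpy as np
import pydicom
from pathlib import Path

def load_ct_volume(dicom_dir: str) -> tuple[np.ndarray, float]:
    """Stack a DICOM CT/CBCT series into a 3D volume of CT numbers (HU)."""
    slices = [pydicom.dcmread(p) for p in sorted(Path(dicom_dir).glob("*.dcm"))]
    # Order slices along the scan axis using the z component of ImagePositionPatient.
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    volume = np.stack([s.pixel_array.astype(np.int16) for s in slices])
    # Convert stored pixel values to CT numbers via the DICOM rescale tags.
    volume = volume * float(slices[0].RescaleSlope) + float(slices[0].RescaleIntercept)
    # Slice spacing (mm) from consecutive slice positions.
    spacing = abs(float(slices[1].ImagePositionPatient[2]) -
                  float(slices[0].ImagePositionPatient[2]))
    return volume, spacing
```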
The planned implant position can be exported and displayed as position data for each joint of the articulated arm of ImplantMax Workstation. The user can manually set the position of the ImplantMax Workstation arm to match the setting values displayed by ImplantMax Software; once set to this position, ImplantMax Workstation is used to fabricate the surgical template. Please note that ImplantMax Workstation is not part of this 510(k) premarket notification.
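The document does not specify the Workstation arm's kinematic chain, so any concrete mapping from implant pose to joint settings is an assumption. Purely as a sketch of the kind of transformation involved, the hypothetical `pose_to_arm_settings` below expresses a planned pose (position plus axis direction) as a set of displayable setting values:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ImplantPose:
    """Planned implant placement: apex position (mm) and implant axis direction."""
    position_mm: np.ndarray  # (x, y, z) in the image coordinate system
    direction: np.ndarray    # vector along the implant axis

def pose_to_arm_settings(pose: ImplantPose) -> dict[str, float]:
    """Express an implant pose as illustrative arm setting values.

    The real mapping depends on the Workstation's joint geometry, which the
    document does not describe; translation plus two orientation angles are
    used here only as a stand-in.
    """
    d = pose.direction / np.linalg.norm(pose.direction)
    tilt = float(np.degrees(np.arccos(np.clip(d[2], -1.0, 1.0))))  # angle from vertical
    rotation = float(np.degrees(np.arctan2(d[1], d[0])))           # azimuth in the xy-plane
    x, y, z = (float(v) for v in pose.position_mm)
    return {"x_mm": x, "y_mm": y, "z_mm": z, "tilt_deg": tilt, "rotation_deg": rotation}
```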
The patient population is the general public.
ImplantMax is a software interface for the transfer and visualization of 3D medical imaging information from a medical CT (computed tomography) or dental CBCT (cone-beam computed tomography) scanner. It is designed to support the diagnosis and treatment planning for dental implantation with 3D graphic representation. To facilitate the planning, the following major software functions are provided.
(1) MPR (multiplanar reconstruction), panoramic, and 3D image reconstruction for analyzing anatomical condition (see the sketch after this list),
(2) Graphic visualization interface for placing the implant in mandible or maxilla images,
(3) Nerve module for assisting users in distinguishing the inferior alveolar nerve (passing through the mandibular canal),
(4) Simulation of different sized implants,
(5) Adjustment of implant location, including position and direction,
(6) Alignment function for multiple implants,
(7) Simulation of abutment and crown,
(8) Measurement tools for measuring length and angle,
(9) Bone quality (indicated by CT number) displayed in the area around the implant.
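To make items (1), (8), and (9) concrete, the following sketch (illustrative only, not the product's code; all function names are hypothetical) shows orthogonal MPR slice extraction, length and angle measurement, and local bone-quality estimation on a CT-number volume:

```python
import numpy as np

def mpr_slices(volume: np.ndarray, i: int, j: int, k: int):
    """Three orthogonal MPR planes through voxel (i, j, k).

    A full implementation would also resample oblique and curved
    (panoramic) planes, which is omitted here.
    """
    return volume[i, :, :], volume[:, j, :], volume[:, :, k]  # axial, coronal, sagittal

def length_mm(p, q, spacing_mm):
    """Euclidean distance between two voxel coordinates, scaled by voxel spacing (mm)."""
    return float(np.linalg.norm((np.asarray(q) - np.asarray(p)) * np.asarray(spacing_mm)))

def angle_deg(u, v):
    """Angle between two direction vectors (e.g., two implant axes)."""
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))

def bone_quality_hu(volume: np.ndarray, center, radius: int) -> float:
    """Mean CT number (HU) in a cubic neighborhood around a planned implant site."""
    i, j, k = center
    region = volume[max(i - radius, 0):i + radius + 1,
                    max(j - radius, 0):j + radius + 1,
                    max(k - radius, 0):k + radius + 1]
    return float(region.mean())
```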
The implant position can be exported and displayed as the joint positions of the articulated arm of ImplantMax Workstation, which is a separate, manually operated device for fabricating the surgical template.
The document describes the ImplantMax Software device and its performance testing. Here's a breakdown of the information based on the requested criteria:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly state quantitative acceptance criteria in a table format with corresponding reported device performance metrics like sensitivity, specificity, accuracy, or AUC. Instead, it describes general verification and validation activities.
However, based on the descriptions, we can infer the types of criteria and the confirmation of their fulfillment:
| Acceptance Criterion (Inferred) | Reported Device Performance (as stated in document) |
| --- | --- |
| Functional Verification: Complete integrated system functions as per design inputs. | "The functional testing was performed to verify the complete integrated system against the design inputs listed in the SDS document. The testing results confirmed that the software is performed as designed." |
| Image Reconstruction Accuracy: Correction of image reconstruction algorithm meets acceptable standards. | "Performance test of image reconstruction verification was executed using an acrylic phantom with markers of known position. The testing results confirmed that the correction of the image reconstruction algorithm is verified to the acceptable criterion." |
| Software Stability: Software maintains stability during long-term operation. | "The stability test was performed to confirm that the software is stable under long-time operation." |
| Safety and Effectiveness (User Needs): Software performs to meet user needs for safety and effectiveness. | "The pre-clinical testing using partially edentulous and fully edentulous patient data was performed to verify safety and effectiveness as intended. The testing results confirmed that the software is performed to meet the user needs listed in SRS document." |
| Usability: User interface (UI) design is acceptable for safety and effectiveness. | "The usability test was also performed to validate user interface (UI) design. The test was conducted by an independent usability lab... The testing results confirmed that the UI design is acceptable for safety and effectiveness per IEC 62366:2007 'Medical devices – Application of usability engineering to medical devices'." |
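The image reconstruction row implies a comparison between marker positions measured in the reconstructed volume and their known physical positions in the phantom. The document states no numeric threshold, but the comparison itself would plausibly take a form like this sketch (the function name is hypothetical):

```python
import numpy as np

def reconstruction_error_mm(known_mm: np.ndarray, measured_mm: np.ndarray) -> dict[str, float]:
    """Per-marker position errors between known phantom marker coordinates
    and the corresponding coordinates measured in the reconstructed volume.

    Both arrays are (N, 3) and expressed in the same coordinate frame.
    """
    errors = np.linalg.norm(measured_mm - known_mm, axis=1)
    return {
        "mean_mm": float(errors.mean()),
        "max_mm": float(errors.max()),
        "rms_mm": float(np.sqrt((errors ** 2).mean())),
    }
```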
2. Sample size used for the test set and the data provenance
- Test set sample size:
- For the "pre-clinical testing" for safety and effectiveness: "partially edentulous and fully edentulous patient data." A specific number of cases is not provided.
- For the "usability test": "Ten experienced dentists."
- Data provenance: Not explicitly stated (e.g., country of origin). Both tests appear to be prospective in nature, as they are described as "pre-clinical testing" and a "usability test" conducted by an "independent usability lab."
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Number of experts: This information is not directly provided for the "pre-clinical testing" where "patient data" was used. For the "usability test," ten experienced dentists participated to evaluate the user interface.
- Qualifications of experts: For the usability test, the experts were "ten experienced dentists from the dentistry department of two university hospitals." For the "pre-clinical testing," it's not stated how "safety and effectiveness as intended" or "user needs" were evaluated by experts against a ground truth.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
The document does not specify any adjudication method for establishing ground truth or evaluating performance in the pre-clinical or usability testing.
5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
No MRMC comparative effectiveness study is mentioned comparing human readers with AI assistance vs. without AI assistance. The testing focuses on the standalone performance and usability of the software itself.
6. If standalone performance testing (i.e., algorithm only, without human-in-the-loop) was done
Yes, the "functional testing," "image reconstruction verification," and "stability test" are all examples of standalone algorithm performance testing. The "pre-clinical testing" also evaluates the software's performance (though involving patient data) seemingly in a standalone manner to verify safety and effectiveness "as intended" and against "user needs."
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- For functional testing: "design inputs listed in the SDS document."
- For image reconstruction verification: "an acrylic phantom with markers of known position." This implies a physical phantom with precisely known dimensions/positions.
- For pre-clinical testing (safety and effectiveness): "user needs listed in SRS document." The ultimate ground truth for "safety and effectiveness" with patient data is not explicitly detailed but would implicitly refer to established clinical standards or expert judgment based on medical imaging and dental planning principles.
- For usability testing: Adherence to IEC 62366:2007 'Medical devices – Application of usability engineering to medical devices,' as judged by experienced dentists.
8. The sample size for the training set
The document does not provide any information regarding a training set sample size. This suggests the software may not rely on machine learning models that require explicit training sets in the modern sense (e.g., deep learning), or if it does, the details are omitted. It is described as an "Image Processing System" that performs functions like MPR, 3D reconstruction, graphic visualization, nerve module assistance, simulation, and measurement tools, which are often rule-based or algorithmic, rather than learned.
9. How the ground truth for the training set was established
Since no training set information is provided, this question cannot be answered from the document.
§ 892.2050 Medical image management and processing system.
(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).