Materialise Dental's SimPlant Go software is indicated for use as medical front-end software that can be used by medically trained people for the purpose of visualizing gray value images. It is intended for use as a pre-operative software program for simulating/evaluating dental implant placement and surgical treatment options.
SimPlant Go allows the individual patient's CT image to be assessed three-dimensionally, so that anatomical structures can be examined without patient contact or surgical insult. It includes features for dental implant treatment simulation. Additional information about the exact geometry of the tooth surfaces can be visualized together with the CT data, and periodontic procedures with dental implants can be simulated. The output file is intended to be used in conjunction with diagnostic tools and expert clinical judgment.
The provided text describes the 510(k) submission for SimPlant Go, a pre-operative software program for simulating and evaluating dental implant placement. The document focuses on demonstrating that SimPlant Go is substantially equivalent to a predicate device (SimPlant 2011; K110300) rather than presenting a performance study with specific acceptance criteria and detailed quantitative results.
Here's an analysis based on the information provided:
1. Table of Acceptance Criteria and Reported Device Performance
The submission does not provide a table of acceptance criteria with specific quantitative performance metrics (e.g., accuracy, sensitivity, specificity, or error margins) for SimPlant Go. Instead, it describes various software validation and testing activities and concludes with a qualitative statement of equivalence.
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Quantitative Performance Metrics (e.g., specific accuracy, error rates) | Not explicitly stated in quantitative terms. |
| Functional Equivalence to Predicate Device | "Compared to the predicate device SimPlant 2011, SimPlant Go yielded an identical output when using identical input data." This is the primary "performance" claim, asserting that its functionality produces the same results as the predicate under the same conditions. |
| Software Testing Completion | Unit testing, peer code review, integration testing, IR testing, smoke testing, formal testing, acceptance testing, alpha testing, and beta testing were all completed. Results are on file at Materialise Dental. |
| Design Validation | Performed by an external usability company (Macadamian) through interviews and usability tests. "All validation criteria were met." |
| Beta Validation (Usability) | Performed with 28 external users and 5 internal users. "All validation criteria were met." |
| Clinical Case Planning Validation | Performed; "All validation criteria were met." |
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size:
- For design validation by Macadamian: The text mentions "interviews and usability tests were performed" but does not specify the number of cases or data points used in these tests.
- For beta validation: "additional usability tests were performed with in total 28 external users and 5 internal users." This refers to the number of users, not necessarily the number of clinical cases or data sets they evaluated. The number of cases is not specified.
- For clinical case planning validation: The text states "Clinical case planning in SimPlant GO was validated" but does not specify the number of clinical cases used.
- Data Provenance: Not explicitly stated. The nature of the device (pre-operative planning software) suggests that the data would be medical images (CT scans). The country of origin and whether the data was retrospective or prospective are not mentioned.
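The submission does not identify the imaging data, but for orientation the following minimal Python sketch shows where provenance information typically lives in a CT DICOM header. The file path is a hypothetical placeholder, the snippet assumes the pydicom package, and nothing here reflects data actually used in the SimPlant Go validation.

```python
# Hedged illustration only: a hypothetical look at DICOM header fields that
# bear on data provenance (modality, scanner vendor, institution, study date).
import pydicom


def summarize_provenance(dicom_path: str) -> dict:
    """Collect a few standard DICOM attributes relevant to data provenance."""
    # stop_before_pixels avoids loading pixel data we do not need here
    ds = pydicom.dcmread(dicom_path, stop_before_pixels=True)
    return {
        "Modality": ds.get("Modality", ""),
        "Manufacturer": ds.get("Manufacturer", ""),
        "InstitutionName": ds.get("InstitutionName", ""),
        "StudyDate": ds.get("StudyDate", ""),
    }


if __name__ == "__main__":
    # Hypothetical path; substitute a real slice from the study being audited.
    print(summarize_provenance("data/case01/slice0001.dcm"))
```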
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
The document does not describe the establishment of a "ground truth" for the test set in the traditional sense of expert consensus on image interpretation or pathology. The validation activities focus on software functionality and usability.
- Design validation involved an external usability company, Macadamian. Their qualifications are not detailed beyond being a "usability company."
- Beta validation involved "external users" and "internal users." Their qualifications are not specified, though the device is intended for "medically trained people."
4. Adjudication Method for the Test Set
No explicit adjudication method (e.g., 2+1, 3+1) is mentioned, as the validation focuses on comparing the software's output to the predicate device and usability, rather than expert interpretation of medical findings against a "ground truth."
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No multi-reader multi-case (MRMC) comparative effectiveness study is mentioned. The submission primarily focuses on functional equivalence to a predicate device and usability, not on comparing human reader performance with and without AI assistance.
6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study
The validation conducted appears to be a form of standalone testing in that the software's output was compared to the predicate device's output. The statement "Compared to the predicate device SimPlant 2011, SimPlant Go yielded an identical output when using identical input data" suggests an algorithm-only comparison for functional correctness. However, this is presented within the context of demonstrating substantial equivalence, not as a standalone performance study with specific metrics like sensitivity or specificity.
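As an illustration of what such an output-equivalence check might look like in practice, the sketch below compares a plan exported from the candidate software against one exported from the predicate for the same input study. The file paths and the byte-for-byte comparison criterion are assumptions made for illustration; the submission does not describe how Materialise Dental actually performed the comparison.

```python
# Illustrative sketch only: the SimPlant export format, file names, and the
# byte-for-byte equivalence criterion below are hypothetical placeholders.
import hashlib
from pathlib import Path


def file_digest(path: Path) -> str:
    """Return the SHA-256 digest of an exported planning file."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def outputs_identical(predicate_out: Path, candidate_out: Path) -> bool:
    """True when predicate and candidate produced byte-identical output files."""
    return file_digest(predicate_out) == file_digest(candidate_out)


if __name__ == "__main__":
    # Hypothetical exports: the same CT study planned in SimPlant 2011
    # (predicate) and in SimPlant Go (candidate).
    predicate = Path("plans/case01_simplant2011.out")
    candidate = Path("plans/case01_simplantgo.out")
    print("identical output" if outputs_identical(predicate, candidate) else "outputs differ")
```

In practice a field-by-field comparison with documented tolerances would usually be preferable to raw byte equality, since timestamps or metadata embedded in an export can differ without any clinically meaningful difference in the plan.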
7. Type of Ground Truth Used
The primary "ground truth" for demonstrating performance is implicitly the output of the predicate device (SimPlant 2011) when subjected to identical input data. For usability testing, the "ground truth" would be user feedback and whether "validation criteria were met" as determined by the usability studies. There is no mention of pathology, outcomes data, or consensus from clinical experts for specific diagnostic or planning accuracy.
8. Sample Size for the Training Set
The document does not mention a separate "training set" or its sample size. This suggests that the device, being an updated version of existing planning software (SimPlant 2011), likely relies on well-established algorithms and logic rather than a machine learning model that requires a distinct training phase with a large labeled dataset. The testing focused on ensuring the new C# implementation produced identical results and maintained usability.
9. How the Ground Truth for the Training Set Was Established
Since no training set is mentioned, the method for establishing its ground truth is not applicable or described in the provided text.
§ 892.2050 Medical image management and processing system.
(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).
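For context on the functions the regulation lists, here is a minimal Python sketch of a gray-value threshold segmentation followed by a simple quantitative volume measurement. The synthetic volume, threshold, and voxel spacing are illustrative assumptions and are unrelated to SimPlant Go's actual algorithms.

```python
# Minimal sketch of the kind of processing described in 892.2050(a):
# threshold segmentation of a gray-value volume plus a semi-automated
# volume measurement. All numbers below are illustrative assumptions.
import numpy as np


def segment_by_threshold(volume: np.ndarray, threshold: float) -> np.ndarray:
    """Return a boolean mask of voxels at or above the gray-value threshold."""
    return volume >= threshold


def segmented_volume_mm3(mask: np.ndarray, spacing_mm: tuple) -> float:
    """Quantitative measurement: total volume of segmented voxels in mm^3."""
    voxel_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    return float(mask.sum()) * voxel_mm3


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ct = rng.normal(loc=200.0, scale=400.0, size=(64, 64, 64))  # synthetic gray values
    bone_mask = segment_by_threshold(ct, threshold=700.0)        # illustrative bone-like threshold
    print(f"segmented volume: {segmented_volume_mm3(bone_mask, (0.4, 0.4, 0.4)):.1f} mm^3")
```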