SICAT Function is a software application for the visualization and segmentation of imaging information of the oral-maxillofacial region. The imaging data originates from medical scanners such as CT or CBCT scanners. It is also used as a software system to aid qualified dental professionals with the evaluation of dental treatment options. The dental professionals' planning data may be exported from SICAT Function and used as input data for CAD or Rapid Prototyping Systems.
SICAT Function is a pure software device. SICAT Function is a software application for the visualization and segmentation of imaging information of the oral-maxillofacial region. The imaging data originates from medical scanners such as CT or Cone Beam CT (CBCT) scanners. This information can be complemented by imaging information from optical impression systems and jaw tracking devices. The additional information about the exact geometry of the tooth surfaces and the mandibular movement can be visualized together with the radiological data. SICAT Function is also used as a software system to aid qualified dental professionals with the evaluation and planning of dental treatment options. The dental professionals' treatment planning information may be exported from SICAT Function to be used as input data for the manufacturing of therapeutic devices such as oral appliances.
The provided text describes the regulatory submission for "SICAT Function," a software device. While it outlines the device's intended use, comparison with predicate devices, and a list of verification and validation activities, it does not contain specific acceptance criteria or a detailed study report proving the device meets those criteria with quantitative performance metrics, sample sizes, or ground truth establishment details.
The document primarily focuses on demonstrating substantial equivalence to predicate devices based on design, material, and function, rather than presenting a standalone performance study with defined acceptance criteria.
However, based on the provided text, here's an attempt to answer the questions, highlighting where information is not available:
1. Acceptance Criteria and Reported Device Performance
The document does not explicitly state quantitative acceptance criteria for the SICAT Function's performance. Instead, it relies on demonstrating that the device "passed all verification and validation activities" and "safety and effectiveness of the product has been demonstrated in the context of its intended use" as compared to predicate devices.
Acceptance Criteria (Implicit from the text):
- Successful correct import, registration, and visualization of jaw motion data.
- Demonstrated safety and effectiveness of image segmentation features.
- Substantial equivalence in design, material, and function to predicate devices for visualization, segmentation, and evaluation of dental treatment options.
Reported Device Performance:
| Feature/Metric | Reported Performance |
|---|---|
| Jaw Motion Data Handling | "Special bench testing has been performed with non-clinical data: to verify the correct import, registration and visualization of jaw motion data." (Page 5) The conclusion states: "SICAT Function passed all verification and validation activities and that safety and effectiveness of the product has been demonstrated in the context of its intended use." (Page 6) |
| Image Segmentation | "Special bench testing has been performed with non-clinical data: to verify the safety and effectiveness of image segmentation features." (Page 5) The conclusion states: "SICAT Function passed all verification and validation activities and that safety and effectiveness of the product has been demonstrated in the context of its intended use." (Page 6) |
| Overall Performance | "It is believed to perform as well as the predicate devices for the visualization and segmentation of imaging information and the evaluation of dental treatment options." (Page 6) |
Missing Information:
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Not Available. The document mentions "Special bench testing has been performed with non-clinical data" (Page 5) but does not specify the sample size (number of cases/patients), the type or origin of this non-clinical data, or whether it was retrospective or prospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Not Available. The document does not describe how ground truth was established for the "special bench testing."
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Not Available. There is no mention of any adjudication method for establishing ground truth or evaluating test results.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI assistance versus without it
- No Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done. The document primarily describes software verification and validation, along with bench testing. There is no indication of a study involving human readers or comparing performance with and without AI assistance. The device is described as "aid[ing] qualified dental professionals" (Page 1) and as a "pure software device" (Page 2), implying it is a tool, but no human-in-the-loop performance study is reported.
6. If a standalone performance study (i.e., algorithm only, without human-in-the-loop) was done
- Yes, implicitly. The "Special bench testing" (Page 5) for jaw motion data and image segmentation appears to be a standalone performance evaluation of the algorithms/software features, as it doesn't mention human interaction. The overall verification and validation activities also support a standalone software evaluation. However, specific quantitative metrics are not provided.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Not Available directly. For the "special bench testing" with "non-clinical data" (Page 5), the method for establishing ground truth is not described. Given the nature of a software device for visualization and segmentation, ground truth would likely be based on idealized or known synthetically generated data for bench testing, or expert manual segmentation/annotation on real data, but this is not specified.
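Although the submission reports no quantitative metrics, segmentation bench testing against a synthetically generated ground truth is typically scored with an overlap metric such as the Dice coefficient. The sketch below is purely illustrative of what such an acceptance metric could look like; the function name, the masks, and the pass threshold are assumptions, not taken from the submission:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice overlap between a predicted and a reference binary mask."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    # Two empty masks agree perfectly by convention.
    return 1.0 if denom == 0 else 2.0 * intersection / denom

# Synthetic "known" ground truth: a filled 32x32 square in a 64x64 grid.
truth = np.zeros((64, 64), dtype=bool)
truth[16:48, 16:48] = True

# A hypothetical algorithm output, shifted down by two pixels.
pred = np.zeros((64, 64), dtype=bool)
pred[18:50, 16:48] = True

score = dice_coefficient(pred, truth)
print(round(score, 4))  # 0.9375 for this synthetic pair

# A bench test might then apply a pre-specified acceptance threshold,
# e.g. Dice >= 0.9 (hypothetical value, not from the document).
assert score >= 0.9
```

Against a synthetic phantom the reference mask is known exactly, which is why such data is convenient for bench testing even though it says little about clinical performance.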
8. The sample size for the training set
- Not Applicable / Not Available. The document does not mention the use of a "training set" in the context of machine learning or AI development. The device is described as pure software for visualization and segmentation, which may rely on deterministic algorithms rather than models that require training data. If any machine learning components are present, the training set size is not disclosed.
9. How the ground truth for the training set was established
- Not Applicable / Not Available. As no training set is mentioned, the method for establishing its ground truth is also not provided.
§ 892.2050 Medical image management and processing system.
(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).