DTX Studio Clinic is a software program for the acquisition, management, transfer and analysis of dental and craniomaxillofacial image information, and can be used to provide design input for dental restorative solutions.
It displays and enhances digital images from various sources to support the diagnostic process and treatment planning. It stores and provides these images within the system or across computer systems at different locations.
It can be used to support guided implant surgery whereby the results can be exported. DTX Studio Clinic is a computer assisted detection (CADe) device that analyses intraoral radiographs to identify and localize dental findings, which include caries, calculus, periapical radiolucency, root canal filling deficiency, discrepancy at the margin of an existing restoration and bone loss. The DTX Studio Clinic CADe functionality is indicated for use by dentists for the concurrent review of bitewing and periapical radiographs of permanent teeth in patients 15 years of age or older.
DTX Studio™ Clinic is a software interface for dental/medical practitioners used to analyze 2D and 3D imaging data, in a timely fashion, for the treatment of dental, craniomaxillofacial and related conditions. DTX Studio Clinic displays and processes imaging data from different devices (e.g., intraoral/extraoral X-rays, (CB)CT scanners, and intraoral and extraoral cameras).
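As background, the classification regulation reproduced at the end of this summary cites DICOM as a voluntary standard for devices of this type. The sketch below is a minimal, purely illustrative example of reading a 2D radiograph with the open-source pydicom library; the file name is hypothetical, and this is not DTX Studio Clinic code.

```python
# Illustrative sketch only (not DTX Studio Clinic code): read a 2D
# intraoral radiograph from a DICOM file and normalize it for display.
# The input file name is hypothetical.
import numpy as np
import pydicom

ds = pydicom.dcmread("bitewing_example.dcm")  # hypothetical input file
print(ds.Modality, ds.get("PhotometricInterpretation"))

# Convert the raw pixel data to floats scaled to [0, 1].
pixels = ds.pixel_array.astype(np.float32)
span = pixels.max() - pixels.min()
normalized = (pixels - pixels.min()) / span if span > 0 else pixels * 0.0
```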
DTX Studio Clinic features an AI-powered Focus Area Detection algorithm that analyzes 2D intraoral radiographs for potential dental findings or image artifacts. Detected focus areas can subsequently be converted into diagnostic findings after approval by the user (a hypothetical sketch of this workflow follows the findings list below).
The following dental findings can be detected by the device:
• Caries
• Discrepancy at margin of an existing restoration
• Periapical radiolucency
• Root canal filling deficiency
• Bone loss
• Calculus
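The summary does not disclose the software's internal data model, but the approve-to-convert workflow it describes can be made concrete with a small, purely hypothetical sketch (all names below are invented for illustration):

```python
# Purely hypothetical data model (all names invented): a detected focus
# area becomes a diagnostic finding only after explicit user approval.
from dataclasses import dataclass
from enum import Enum
from typing import Optional, Tuple

class FindingType(Enum):
    CARIES = "caries"
    MARGIN_DISCREPANCY = "discrepancy at margin of an existing restoration"
    PERIAPICAL_RADIOLUCENCY = "periapical radiolucency"
    ROOT_CANAL_DEFICIENCY = "root canal filling deficiency"
    BONE_LOSS = "bone loss"
    CALCULUS = "calculus"

@dataclass
class FocusArea:
    bounding_box: Tuple[int, int, int, int]  # (x, y, width, height) in pixels
    suggested_type: FindingType
    approved: bool = False

def to_finding(area: FocusArea) -> Optional[FindingType]:
    """Convert a focus area to a diagnostic finding only if the user approved it."""
    return area.suggested_type if area.approved else None
```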
The provided text is the 510(k) summary for the DTX Studio Clinic (4.0) device, focusing on its substantial equivalence to predicate devices, particularly regarding its AI-powered "Focus Area Detection" functionality. While the document mentions verification and validation activities, it does not provide a detailed breakdown of the acceptance criteria and performance study results addressed in the questions below.
Specifically, it states:
- "A comparative analysis between the output of the focus area detection algorithm of the output of the primary predicate device (K221921) was performed. The test results show that for each intraoral x-ray image, the focus area detection is executed, and the same number of focus areas were detected compared to the predicate device. All detected bounding boxes were identical and the acceptance criteria was met."
This statement indicates that the new device's AI performance for "Focus Area Detection" was evaluated on whether its output is identical to that of the primary predicate device (K221921), not against independent clinical performance criteria such as sensitivity, specificity, or AUC measured against an expert-established ground truth. In effect, this is an equivalence study in which the benchmark is the previously cleared AI, rather than a de novo clinical performance evaluation.
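Conceptually, the described acceptance test reduces to an exact per-image comparison of detection outputs. The following sketch is a hypothetical illustration of such a check, not the manufacturer's actual test harness:

```python
# Hypothetical illustration of the equivalence check described above:
# for every image, both algorithm versions must detect the same number
# of focus areas with identical bounding boxes.
from typing import Dict, List, Tuple

Box = Tuple[int, int, int, int]  # (x, y, width, height) in pixels

def outputs_identical(subject: Dict[str, List[Box]],
                      predicate: Dict[str, List[Box]]) -> bool:
    """Return True only if both devices produce identical detections per image."""
    if subject.keys() != predicate.keys():
        return False
    return all(sorted(subject[img]) == sorted(predicate[img]) for img in subject)

# Toy data: one image with one identical detection in each output.
assert outputs_identical({"img_001": [(10, 20, 64, 48)]},
                         {"img_001": [(10, 20, 64, 48)]})
```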
Therefore, many of the requested details about the study are not present in this document. I will fill in the table and address the questions based on the information available, highlighting what is not provided.
Acceptance Criteria and Device Performance Study Details for DTX Studio Clinic (4.0) Focus Area Detection
Based on the provided 510(k) summary, the evaluation of the DTX Studio Clinic (4.0)'s AI-powered "Focus Area Detection" functionality was primarily a comparative analysis against its primary predicate device (DTX Studio Clinic 3.0, K221921), rather than a standalone clinical performance study establishing specific diagnostic metrics against a human-expert ground truth. The acceptance criterion appears to be identical output to the predicate device.
1. Table of Acceptance Criteria and Reported Device Performance
| Feature/Metric | Acceptance Criteria (as per document) | Reported Device Performance (as per document) |
|---|---|---|
| Focus Area Detection Output | For each intraoral x-ray image, the focus area detection must be executed, and the same number of focus areas must be detected compared to the predicate device (K221921). All detected bounding boxes must be identical to the predicate. | "The test results show that for each intraoral x-ray image, the focus area detection is executed, and the same number of focus areas were detected compared to the predicate device. All detected bounding boxes were identical and the acceptance criteria was met." Additionally, "There are no functional or technical differences between the Focus Area detection in DTX Studio Clinic 3.0 (primary predicate - K221921) and the current subject device." |
2. Sample size used for the test set and the data provenance
- Sample Size for Test Set: Not explicitly stated. The document refers to "each intraoral x-ray image" without specifying the total number of images used in the comparative analysis.
- Data Provenance: Not explicitly stated (e.g., country of origin). The document implies the use of intraoral x-ray images, but their source, whether retrospective or prospective, or geographical origin, is not detailed.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Not applicable / Not provided. The ground truth for this evaluation was the output of the predicate AI device (K221921), not a human expert consensus, so the document does not describe any human experts establishing a ground truth for this comparative test. The predicate device's original clearance would have been based on such studies.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
- Not applicable / None specified. Since the comparison was between the subject device's AI output and the predicate AI device's output, there's no mention of human adjudication.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, if so, what was the effect size of how much human readers improve with AI vs without AI assistance
- No. An MRMC comparative effectiveness study was not described for the DTX Studio Clinic (4.0) for this submission. The evaluation was a technical comparison of the AI output to an existing cleared AI, not a study on human-AI augmented performance.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done
- Yes, in spirit, but not as a de novo clinical performance study. The described test was a standalone comparison of the algorithm's output (DTX Studio Clinic 4.0) against another algorithm's output (predicate K221921) to confirm identical functionality. It was not a performance study measuring the algorithm's diagnostic accuracy (e.g., sensitivity, specificity) against a clinical gold standard. The document emphasizes that "There are no functional or technical differences between the Focus Area detection in DTX Studio Clinic 3.0 (primary predicate - K221921) and the current subject device."
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- The output of the predicate AI device (K221921). The "ground truth" for verifying the DTX Studio Clinic (4.0)'s "Focus Area Detection" functionality was the identical output of the legally marketed predicate device (K221921), which previously underwent its own validation.
8. The sample size for the training set
- Not provided. The document describes the "Focus Area Detection" as an "AI-powered" algorithm using "supervised machine learning," but it does not specify the sample size of the training set used for this algorithm. This information would typically be part of the original K221921 submission.
9. How the ground truth for the training set was established
- Not provided. The document states "supervised machine learning" was used, which implies a labeled training dataset. However, the method for establishing the ground truth (e.g., expert annotations, pathology confirmation) for that training data is not detailed in this 510(k) summary. This information would likely have been part of the K221921 submission when the AI algorithm was first cleared.
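For context only: "supervised machine learning" implies a training set of images with labeled findings. A hypothetical annotation record might look like the sketch below; the actual schema and provenance of the training data are not disclosed in this summary.

```python
# Hypothetical annotation record for a supervised detection training set.
# The actual schema and provenance of the K221921 training data are not
# disclosed in this 510(k) summary.
training_example = {
    "image_id": "img_0001",
    "labels": [
        {"finding": "caries", "bbox": [112, 88, 40, 32]},    # (x, y, w, h)
        {"finding": "calculus", "bbox": [240, 150, 28, 22]},
    ],
}
```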
§ 892.2050 Medical image management and processing system.
(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).