510(k) Data Aggregation
The X-Guide® Surgical Navigation System is a computerized navigational system intended to provide assistance in both the preoperative planning phase and the intra-operative surgical phase of dental implantation procedures and/or endodontic access procedures.
The system provides software to preoperatively plan dental implantation procedures and/or endodontic access procedures and provides navigational guidance of the surgical instruments.
The device is intended for use for partially edentulous adult and geriatric patients who need dental implants as a part of their treatment plan. The device is also intended for endodontic access procedures (i.e., apicoectomies and/or access of calcified canals) where a CBCT is deemed appropriate as part of their treatment plan.
The X-Guide® Surgical Navigation System is a cart-mounted mobile system that uses video technology to track the position and movement of a surgical instrument (Dental Hand-Piece) during surgical procedures.
The X-Guide® Surgical Navigation System consists of a Mobile Cart, equipped with an LCD Monitor, Boom Arm, Navigation Assembly, Keyboard, Mouse and an Electronics Enclosure.
The Electronics Enclosure contains the system power supplies, data processing hardware, and electronics control circuitry for coordinating operation of the X-Guide® Surgical Navigation System.
An LCD Monitor, Keyboard, and Mouse serve as the main user interface for the surgeon. The Go-Button serves as an additional form of input by providing virtual buttons that a user can activate by touching them with the surgical instrument tip.
The Boom Arm allows the operator to manipulate the Navigation Assembly position for optimal distance and alignment to patterns located within the surgical region (Navi-Zone) for tracking purposes.
The Navigation Assembly contains two cameras oriented in a stereo configuration, along with blue lighting for illuminating the patterns and mitigating ambient lighting noise.
This electro-optical device is designed to improve dental surgical procedures by providing the surgeon with accurate surgical tool placement and guidance with respect to a surgical plan built upon computed tomography (CT) scan data.
The surgical process occurs in two stages. Stage 1 is the pre-planning of the surgical procedure. The dental surgeon plans the surgical procedure in the X-Guide System Planning Software. A virtual implant or endodontic trajectory is aligned and oriented to the desired location in the CT scan, allowing the dental surgeon to avoid interfering with critical anatomical structures during surgery. Once an implant or trajectory has been optimally positioned, the plan is transferred to the X-Guide Surgical Navigation System in preparation for surgery.
In Stage 2 the system provides accurate guidance of the dental surgical instruments according to the preoperative plan.
As the dental surgeon moves the surgical instrument around the patient anatomy, 2D barcode tracking patterns on the Handpiece Tracker and the Patient Tracker are detected by visible light cameras in a stereo configuration and processed by data processing hardware to precisely and continuously track the motion of the dental handpiece and the surgically-relevant portion of the patient.
The relative motion of the dental handpiece and the patient anatomy, captured by the tracking hardware, is combined with patient-specific calibration data. This enables a 3D graphical representation of the handpiece to be animated and depicted in precise location and orientation relative to a 3D depiction of the implant target, along with depictions of the patient anatomy, and other features defined in the surgical plan. This provides continuous visual feedback that enables the dental surgeon to maneuver the dental handpiece into precise alignment.
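The stereo-tracking step described above boils down to triangulation: each camera view contributes two linear constraints on a pattern corner's 3D position. A minimal sketch of linear (DLT) triangulation follows; the camera matrices and coordinates are an invented toy rig for illustration, not X-Guide values.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from a stereo pair.

    P1, P2 : 3x4 camera projection matrices
    x1, x2 : 2D pixel coordinates of the same pattern corner in each view
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Homogeneous solution: right singular vector for the smallest singular value
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    """Project a 3D point through a 3x4 camera matrix to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Toy stereo rig: identical intrinsics, second camera shifted 0.1 m along x
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

X_true = np.array([0.02, -0.01, 0.5])  # a pattern corner 0.5 m in front
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

With exact (noise-free) correspondences the linear solution recovers the point exactly; in practice the stereo baseline, calibration quality, and pattern-detection noise set the tracking accuracy.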
During execution of the surgical procedure, the X-Guide® Surgical Navigation System correlates between the surgical plan and the surgeon's actual performance. If significant deviations in navigation between the plan and the system performance occur, the system will alert the user.
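The alerting behavior can be pictured as a tolerance check against the plan. The sketch below is purely illustrative: the summary does not publish X-Guide's actual tolerances or alert logic, and the thresholds here are invented.

```python
# Hypothetical plan tolerances (illustrative values, not X-Guide specifications)
ANGULAR_TOL_DEG = 5.0
DEPTH_TOL_MM = 1.0

def check_deviation(angle_deg, depth_mm):
    """Return alert messages for deviations beyond the (hypothetical) tolerances."""
    alerts = []
    if abs(angle_deg) > ANGULAR_TOL_DEG:
        alerts.append(f"angular deviation {angle_deg:.1f} deg exceeds {ANGULAR_TOL_DEG} deg")
    if abs(depth_mm) > DEPTH_TOL_MM:
        alerts.append(f"depth deviation {depth_mm:.1f} mm exceeds {DEPTH_TOL_MM} mm")
    return alerts

within_plan = check_deviation(2.0, 0.5)   # no alerts
off_plan = check_deviation(7.0, 0.5)      # one angular alert
```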
The provided text describes the X-Guide Surgical Navigation System, which includes a new feature: Automatic Image Processing (AIP) software integration (IconiX) using machine learning. This software is designed to segment and identify anatomical structures in maxillofacial CT scans and IntraOral Scans (IOS).
Here's an analysis of the acceptance criteria and the study information based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The FDA 510(k) summary does not explicitly list "acceptance criteria" in a quantitative, pass/fail format with reported performance for EACH of the new ML-driven features. Instead, it states that "software verification and validation testing were conducted and documented" and that the "combined testing and analysis of results provides assurance that the device performs as intended."
However, the "Technology Performance Characteristics" table (pages 12-14) implicitly presents several performance characteristics that would have acceptance criteria for the base device, which are maintained. For the new ML features, the validation tests described aim to demonstrate "correct segmentations and visualizations," "automatically create a pan curve," "register (superimpose) the IOS over the CT," and "generate the X-Guide SurfiX."
Given the information, a table focusing on the new ML features would look like this:
| Acceptance Criteria (Implied from Validation Test Descriptions) | Reported Device Performance (Implied from Submission Outcome) |
|---|---|
| Machine Learning Outputs Validation: | Met: The device received 510(k) clearance, implying that the FDA found sufficient evidence that the ML software outputs "correct segmentations and visualizations for the expected patient population." |
| - Correct segmentation and identification of anatomical structures in CT (Teeth, Maxilla bone, Mandible bone, Maxillary Sinuses, Mandibular Nerve Canal) | (Details not explicitly provided in the summary, but implied to be sufficient for clearance.) |
| - Correct segmentation and identification of anatomical structures in IOS (Teeth, Gingiva) | (Details not explicitly provided in the summary, but implied to be sufficient for clearance.) |
| Machine Learning Software Verification: | Met: The device received 510(k) clearance, implying that the FDA found sufficient evidence that the ML software "meets specifications and requirements when integrated with the X-Guide System software." |
| - Ability to automatically create a pan curve to fit the arch (minimum of two teeth per sextant required) | (Details not explicitly provided in the summary, but implied to be sufficient for clearance.) The new software provides automatic pan curve creation where the predicate required manual marking. This functionality is considered similar to reference devices that also auto-generate pan curves. |
| - Ability to register (superimpose) the IOS over the CT automatically | (Details not explicitly provided in the summary, but implied to be sufficient for clearance.) The new software provides automatic IOS to CT registration where the predicate required manual point-matching. This functionality is considered similar to a reference device that also combines surface models from intraoral and CBCT scans. |
| - Ability to generate the X-Guide SurfiX from segmented teeth and bone for X-Mark Registration or Refinement | (Details not explicitly provided in the summary, but implied to be sufficient for clearance.) The new software provides automatic Surface Definition (SurfiX) where the predicate required manual selection. |
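The summary never names the metric behind "correct segmentations," but overlap scores such as the Dice coefficient are the conventional way to compare a predicted mask against a reference delineation. A minimal sketch, using toy masks rather than any data from the submission:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice overlap between a predicted and a reference binary mask (1.0 = identical)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 1.0 if total == 0 else 2.0 * intersection / total

# Hypothetical check: an ML mandible mask vs. a manually delineated reference
truth = np.zeros((4, 4), dtype=bool); truth[1:3, 1:3] = True  # 4 voxels
pred = np.zeros((4, 4), dtype=bool);  pred[1:3, 1:4] = True   # 6 voxels, 4 overlapping
score = dice_coefficient(pred, truth)  # 2*4 / (6+4) = 0.8
```

A validation protocol of this kind would fix a per-structure Dice threshold as the acceptance criterion; no such threshold is disclosed in the summary.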
2. Sample Size Used for the Test Set and Data Provenance
The 510(k) summary does not explicitly state the sample size used for the test set. It mentions "varied CT data" for training (page 5) but does not provide specifics for the validation/test set.
Similarly, the data provenance (e.g., country of origin, retrospective or prospective) for the test set is not specified in the provided document.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
The document does not specify the number or qualifications of experts used to establish ground truth for the test set. It mentions that users can "view and confirm the correctness and completeness of [ML] results and, if desired, replace or augment them with conventional tools/methods" (page 5), implying a human expert review process is part of the clinical workflow, but this does not detail how ground truth for the test set was established for regulatory validation.
4. Adjudication Method for the Test Set
The document does not describe an adjudication method (e.g., 2+1, 3+1) for the test set.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
The document explicitly states: "No clinical studies were performed for the submission of this 510(k)." (page 19) Therefore, no MRMC study was conducted, and no effect size regarding human reader improvement with AI assistance is provided.
6. Standalone (Algorithm Only) Performance Study
The summary describes "Machine Learning Outputs Validation" and "Machine Learning Software Verification" (page 20).
- Machine Learning Outputs Validation: "This validation test demonstrates that the machine learning software outputs correct segmentations and visualizations for the expected patient population." This suggests an assessment of the algorithm's performance in generating segmentations in a standalone context (i.e., whether the outputs themselves were correct compared to ground truth).
- Machine Learning Software Verification: "This verification test demonstrates that the machine learning software meets specifications and requirements when integrated with the X-Guide System software..." This part focuses on the integrated performance.
While the details of the "Machine Learning Outputs Validation" are not provided, its description implies a standalone assessment of the ML algorithm's output accuracy against some form of ground truth.
7. Type of Ground Truth Used
The document does not explicitly state the type of ground truth used for validating the machine learning outputs (e.g., expert consensus, pathology, outcomes data).
8. Sample Size for the Training Set
The document mentions that the machine learning software is "trained on varied CT data" (page 5) but does not specify the sample size for the training set.
9. How the Ground Truth for the Training Set Was Established
The document does not describe how the ground truth for the training set was established.
DTX Studio Clinic is a software program for the acquisition, management, transfer and analysis of dental and craniomaxillofacial image information, and can be used to provide design input for dental restorative solutions. It displays and enhances digital images from various sources to support the diagnostic process and treatment planning. It stores and provides these images within the system or across computer systems at different locations.
DTX Studio Clinic is a software interface for dental/medical practitioners used to analyze 2D and 3D imaging data, in a timely fashion, for the treatment of dental, craniomaxillofacial and related conditions. DTX Studio Clinic displays and processes imaging data from different devices (i.e. Intraoral and extraoral X-rays, (CB)CT scanners, intraoral scanners, intraoral and extraoral cameras).
Here's a breakdown of the acceptance criteria and study information for the DTX Studio Clinic device, based on the provided text:
Important Note: The provided text is a 510(k) summary, which focuses on demonstrating substantial equivalence to a predicate device, not necessarily a comprehensive clinical study report. Therefore, some information requested (like specific sample sizes for test sets, the number and qualifications of experts for ground truth, adjudication methods, MRMC study effect sizes, and detailed information about training sets) is not explicitly stated in this document. The focus here is on software validation and verification.
Acceptance Criteria and Reported Device Performance
The document does not explicitly state numerical "acceptance criteria" in the format of a table with specific metrics (e.g., sensitivity, specificity, accuracy thresholds). Instead, the acceptance is based on demonstrating that the DTX Studio Clinic software performs its intended functions reliably and safely, analogous to the predicate and reference devices, as verified through software validation and engineering testing.
The "reported device performance" is primarily described through the software's functionality and its successful verification and validation.
| Feature/Criterion | Reported Device Performance (DTX Studio Clinic) | Comments (Based on 510(k) Summary) |
|---|---|---|
| Clinical Use | Supports diagnostic and treatment planning for craniomaxillofacial anatomical area. | "Primarily the same" as the predicate device CliniView (K162799). Differences in wording do not alter therapeutic use. |
| Image Data Import & Acquisition | Acquires/imports DICOM, 2D/3D images (CBCT, OPG/panorex, intra-oral X-ray, cephalometric, clinical pictures). Also imports STL, NXA, PLY files from intraoral/optical scanners. Directly acquires images from supported modalities or allows manual import. Imports from 3rd party PMS systems via VDDS or OPP protocol. | Similar to CliniView, with additional capabilities (STL, NXA, PLY, broader PMS integration). Subject device does not control imaging modalities directly for acquisition settings, distinguishing it from CliniView. |
| Data Visualization & Management | Displays and enhances digital images. Provides image filters, annotations, distance/angular measurements, volume and surface area measurements (for segmentation). Stores data locally or in DTX Studio Core database. Comparison of 3D images and 2D intraoral images in the same workspace. | Core functionality is similar to CliniView. Enhanced features include volume/surface area measurements and comparison of different image types within the same workspace. |
| Airway Volume Segmentation | Allows volume segmentation of indicated airway, volume measurements, and constriction point determinations. | Similar to reference device DentiqAir (K183676), but specifically limited to airway (unlike DentiqAir's broader anatomical segmentation). |
| Automatic Image Sorting (IOR) | Algorithm for automatic sorting of acquired or imported intra-oral X-ray images to an FMX template. Detects tooth numbers (FDI or Universal). | This is a workflow improvement feature, not for diagnosis or image enhancement. |
| Intraoral Scanner Module (ioscan) | Dedicated intraoral scanner workspace for acquisition of 3D intraoral models (STL, NXA, PLY). Supports dental optical impression systems. | Classified as NOF, 872.3661 (510(k) exempt). Does not impact substantial equivalence. |
| Alignment of Intra-oral/Cast Scans with (CB)CT Data | Imports 3D intraoral models or dental cast scans (STL/PLY) and aligns them with imported CB(CT) data for accurate implant planning. | Similar to reference device DTX Studio Implant (K163122). |
| Implant Planning | Functionality for implant planning treatment. Adds dental implant shapes to imported 3D data, allowing user definition of position, orientation, type, and dimensions. | Similar to reference device DTX Studio Implant (K163122), which also adds implants and computes surgical templates. |
| Virtual Tooth Setup | Calculates and visualizes a 3D tooth shape for a missing tooth position based on indicated landmarks and loaded intra-oral scan. Used for prosthetic visualization and input for implant position. | A new feature not explicitly present in the predicate devices but supported by the overall diagnostic and planning workflow. |
| Software Validation & Verification | Designed and manufactured under Quality System Regulations (21 CFR § 820, ISO 13485:2016). Conforms to EN IEC 62304:2006. Risk management (ISO 14971:2012), verification testing performed. Software V&V testing conducted as per FDA guidance for "Moderate Level of Concern." Requirements for features have been met. | Demonstrated through extensive software engineering and quality assurance processes, not clinical performance metrics. |
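The summary does not describe how the automatic IOS-to-(CB)CT alignment is computed. One standard building block for point-based rigid registration, once corresponding landmarks are available, is the Kabsch/Procrustes least-squares solution; the sketch below illustrates that technique on synthetic data and is not DTX Studio Clinic's actual method.

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst.

    src, dst : (N, 3) arrays of corresponding points (e.g. matched tooth
    landmarks on an intraoral scan and on a CBCT-derived surface).
    """
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_mean).T @ (dst - dst_mean)   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t

# Synthetic check: rotate + translate a landmark set, then recover the motion
rng = np.random.default_rng(0)
src = rng.normal(size=(6, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
dst = src @ R_true.T + t_true
R, t = rigid_align(src, dst)
```

Surface-based registration pipelines typically wrap a solver like this inside an iterative correspondence search (e.g. ICP); the summary gives no detail on which approach the device uses.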
Study Information
- Sample sizes used for the test set and the data provenance:
- Not explicitly stated in the provided text. The document mentions "verification testing" and "validation testing" but does not detail the specific sample sizes of images or patient cases used for these tests.
- Data Provenance: The document does not specify the country of origin of the data or whether it was retrospective or prospective. It focuses on the software's functionality and its comparison to predicate devices, rather than the performance on specific clinical datasets.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Not explicitly stated in the provided text. The 510(k) summary primarily addresses software functionality verification and validation, not a diagnostic accuracy study involving expert ground truth.
- Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- Not explicitly stated in the provided text.
- Whether a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- No, an MRMC comparative effectiveness study was not done or reported. The document explicitly states: "No clinical data was used to support the decision of substantial equivalence." This type of study would involve clinical data and human readers.
- Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- Yes, in spirit. The software validation and verification described are for the algorithm and software functionalities operating independently. While the device does not make autonomous diagnoses (it "supports the diagnostic process and treatment planning"), its individual features (like airway segmentation, image sorting, virtual tooth setup) are tested in a standalone manner in terms of their computational correctness and adherence to specifications. However, this is distinct from standalone clinical performance (e.g., an AI algorithm making a diagnosis without human input). The document focuses on the technical performance of the software.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- For the software verification and validation, the implicit "ground truth" would be the software's functional specifications and requirements. For features like measurements or segmentation, this would likely involve mathematical correctness checks or comparison to pre-defined anatomical models or manually delineated reference segmentations. It is not based on expert consensus, pathology, or outcomes data in a clinical diagnostic sense, as no clinical data was used for substantial equivalence.
- The sample size for the training set:
- Not explicitly stated in the provided text. The document describes a medical device software for image management and analysis, not a machine learning model that typically requires a large 'training set' in the deep learning sense. If any features (like the automatic image sorting or virtual tooth setup) utilize machine learning, the details of their training (including sample size) are not provided in this 510(k) summary.
- How the ground truth for the training set was established:
- Not explicitly stated in the provided text. As mentioned above, details about training sets are absent. If machine learning is involved in certain features, the ground truth would typically be established by expert annotation or curated datasets, but this is not detailed here.
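Verification against functional specifications, as described above, typically reduces to asserting that a software output matches an analytically known answer. A hypothetical example for a distance-measurement tool; the function name and voxel sizes are invented for illustration and do not appear in the submission.

```python
import math

def measure_distance_mm(p, q, voxel_size_mm):
    """Hypothetical measurement-tool routine: Euclidean distance between two
    voxel coordinates, reported in millimetres after scaling by voxel size."""
    return math.dist(
        [c * s for c, s in zip(p, voxel_size_mm)],
        [c * s for c, s in zip(q, voxel_size_mm)],
    )

# Verification-style check against the requirement's analytic answer:
# voxels (0,0,0) and (3,4,0) at 0.25 mm isotropic spacing -> 5 * 0.25 = 1.25 mm
result = measure_distance_mm((0, 0, 0), (3, 4, 0), (0.25, 0.25, 0.25))
```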