Search Results

Found 2 results

510(k) Data Aggregation

    K Number: K251835
    Manufacturer:
    Date Cleared: 2025-10-10 (116 days)
    Product Code:
    Regulation Number: 872.4120
    Panel: Dental
    Reference & Predicate Devices
    Predicate For: N/A
    Why did this record match?
    Reference Devices: K233925

    Intended Use

    The Yomi Robotic System (Yomi) is a computerized robotic navigational system intended to provide assistance in both the planning (pre-operative) and the surgical (intra-operative) phases of dental implantation surgery. The system provides software to preoperatively plan dental implantation procedures and provides robotic navigational guidance of the surgical instruments. The system can also be used for planning and performing guided bone reduction (also known as alveoplasty) of the mandible and/or maxilla. Yomi is intended for use in partially edentulous and fully edentulous adult patients who qualify for dental implants.

    When YomiPlan software is used for preplanning on third party PCs, it is intended to perform the planning (pre-operative) phase of dental implantation surgery. Yomi Plan provides pre-operative planning for dental implantation procedures using the Yomi Robotic System. The output of Yomi Plan is to be used with the Yomi Robotic System.

    Device Description

    Yomi Robotic System is a dental stereotaxic instrument and a powered surgical device for bone cutting. Yomi Robotic System is a computerized navigational system intended to provide assistance in both the planning (pre-operative) and the surgical (intra-operative) phases of dental implantation surgery. The system provides software to preoperatively plan dental implantation procedures and provides navigational guidance of the surgical instruments. The Yomi Robotic System is intended for use in partially edentulous and fully edentulous adult patients who qualify for dental implants.

    The Yomi Robotic System allows the user to plan the surgery virtually in YomiPlan, cleared for use alone on third-party PCs for preplanning. The operative plan is based on a cone beam computed tomography (CBCT) scan of the patient, which is used to create a 3-D model of the patient anatomy in the planning software. The plan is used for the system to provide physical, visual, and audible feedback to the surgeon during the implant site preparation. The Yomi robotic arm holds and guides a standard FDA-cleared third party powered bone cutting instrument.

    The patient tracking portion of Yomi is comprised of linkages from the patient to Yomi, which include the Patient Splint (YomiLink Teeth or YomiLink Bone), Tracker End Effector (TEE), and the Patient Tracker (PT). In cases where YomiLink Teeth is utilized, it is attached to the contralateral side of the patient's mouth over stable teeth using on-label dental materials prior to the presurgical CBCT scan. In cases where YomiLink Bone is utilized, it is placed using bone screws prior to the presurgical CBCT scan (appropriate local anesthesia is required), or after the scan when using the subject YomiLink Arch device.

    The subject of this submission is the integration of algorithms that provide automatic segmentation of the maxillary sinuses, inferior alveolar nerve, and maxillary and mandibular bone. The integrated software, Relu Creator, was cleared in K233925. The software is not adaptive: it is trained at the manufacturer (Relu), and the weights are locked.
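    "Locked weights" here means the model only performs inference in the field and is never retrained on patient data. A minimal illustrative sketch of that contract (hypothetical class and parameter names, not Relu's actual implementation):

```python
class LockedSegmentationModel:
    """Illustrative non-adaptive model: weights are fixed at load time."""

    def __init__(self, weights):
        # Weights are set once, at "manufacture" time, and never changed.
        self._weights = dict(weights)

    def predict(self, voxel_intensity):
        # Toy inference: threshold the intensity with a locked parameter.
        return "bone" if voxel_intensity >= self._weights["bone_threshold"] else "background"

    def update_weights(self, new_weights):
        # A locked model rejects any attempt to adapt on-device.
        raise PermissionError("Model weights are locked; no field learning.")


model = LockedSegmentationModel({"bone_threshold": 0.7})
print(model.predict(0.9))  # bone
print(model.predict(0.2))  # background
```

    The point of the sketch is only the shape of the guarantee: inference is available, adaptation is not.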

    Additionally, since the most recent clearance of Yomi Robotic System (K231018), minor modifications to the Yomi System include the following:
    • Planning software improvements
    • Restorative planning – Features to support customized crown design
    • Dual arch planning – Feature to enable the end user to plan multiple arches in a single case and a single scan
    • Patient work volume guidance improvements – Added guidance for the angulation of the patient chair
    • Added patient proximity for baseline
    • YomiLink Bone (YLB) planning – improved placement of the YLB
    • Added proximity threshold lower limit value
    • Improved alignment between CT scans and imported .stl objects
    • Added ability for user to designate soft tissue thickness to assist in bone reduction planning
    • Added max depth information to the implant cursor hover info
    • VTK Off-the-Shelf software version update
    • Added model details to implant selection
    • Added restorative planning case feedback option
    • Added additional implant models to the implant library

    • Control software and behavior improvements
    • Updates to handpiece interaction gestures, and optimization of the response of the control software to guide arm joint limits, singularities and potential wrist / base collisions.

    • Hardware improvements
    • Tracker Arm Joint

    • Accessory improvements
    • Updates to the YomiLink Teeth and intraoral fiducial array

    • Minor bug fixes

    All other aspects of the Yomi Robotic System remain unchanged from prior clearances.

    AI/ML Overview

    The provided FDA 510(k) clearance letter for the Yomi Robotic System focuses on the substantial equivalence of the modified device to its predicate. While it mentions the integration of an automatic segmentation algorithm (Relu Creator, K233925), it does not contain the detailed acceptance criteria or the specific study that proves the device meets those criteria for the automatic segmentation algorithm.

    The document primarily describes:

    • The indications for use.
    • A comparison of technological characteristics between the subject device (Yomi Robotic System with Automatic Segmentation Algorithm) and its predicate (Yomi Robotic System K231018) and a reference device (Relu Creator K233925).
    • General statements about software, cybersecurity, and usability verification and validation testing, but without specific performance metrics or study details.

    Therefore, many of the requested details about acceptance criteria, specific performance results, sample sizes, expert qualifications, and ground truth establishment for the automatic segmentation algorithm are not present in the provided text. The document refers to the Relu Creator (K233925) as having been cleared, implying its own performance evaluations would have been submitted in that separate clearance.

    Here's a breakdown of the information that can be extracted or inferred, and what is missing:

    1. Table of Acceptance Criteria and Reported Device Performance

    Not explicitly provided for the automatic segmentation algorithm (Relu Creator) in this document. The document states:

    • "Yomi Plan 2.7 with Automatic Segmentation Algorithm functionality was successfully verified and user validated."
    • "The software has been successfully verified to perform with the PC specifications of the Yomi Robotic System."
    • "All changes have been successfully verified and, therefore, not considered to affect the overall safety and efficacy profile of Yomi Plan."
    • "The combined testing and analysis of results provides assurance that the device performs as intended."

    These are general assurances of performance and validation but do not provide specific quantitative acceptance criteria or reported device performance metrics for the automatic segmentation algorithm itself.

    2. Sample size used for the test set and the data provenance

    Not provided in this document. The document mentions "Software verification and validation testing" and "User Validation testing" but does not specify the sample size of cases or the provenance (country of origin, retrospective/prospective) of the data used for testing the automatic segmentation algorithm.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    Not provided in this document.

    4. Adjudication method for the test set

    Not provided in this document.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

    Not provided in this document. The document does not mention an MRMC study or any results comparing human reader performance with and without AI assistance from the segmentation algorithm. The automatic segmentation algorithm is integrated into the planning software to assist (presumably by providing pre-segmented anatomy), but its impact on human reader performance is not quantified here.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    Not explicitly detailed for the segmentation algorithm's performance. The document states, "The integrated software, Relu Creator, was cleared in K233925." This implies that the Relu Creator, which performs the automatic segmentation, underwent its own standalone performance evaluation as part of its original clearance (K233925). This current 510(k) focuses on its integration into the Yomi Robotic System, not its primary standalone performance evaluation.

    7. The type of ground truth used

    Not explicitly provided in this document for the automatic segmentation algorithm. For image segmentation algorithms, ground truth is typically established through manual segmentation by experts, often on a pixel/voxel level, sometimes validated by pathology or clinical outcomes. The document does not specify which method was used for the Relu Creator.

    8. The sample size for the training set

    Not provided in this document. The document states, "The software is not adaptive, it is trained at the manufacturer (Relu), and the weights are locked." This confirms that training occurred, but the size of the training dataset is not mentioned.

    9. How the ground truth for the training set was established

    Not provided in this document. Similar to item 7, the method for establishing ground truth for the training data is not detailed.


    K Number: K243989
    Manufacturer:
    Date Cleared: 2025-05-23 (148 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices
    Predicate For: N/A
    Why did this record match?
    Reference Devices: K233925

    Intended Use

    Second Opinion® 3D is a radiological automated image processing software device intended to identify and mark clinically relevant anatomy in dental CBCT radiographs; specifically Dentition, Maxilla, Mandible, Inferior Alveolar Canal and Mental Foramen (IAN), Maxillary Sinus, Nasal space, and airway. It should not be used in lieu of full patient evaluation or solely relied upon to make or confirm a diagnosis.

    It is designed to aid health professionals to review CBCT radiographs of patients 12 years of age or older as a concurrent and second reader.

    Device Description

    Second Opinion® 3D is a radiological automated image processing software device intended to identify clinically relevant anatomy in CBCT radiographs. It should not be used in lieu of full patient evaluation or solely relied upon to make or confirm a diagnosis.

    It is designed to aid dental health professionals to identify clinically relevant anatomy on CBCT radiographs of permanent teeth in patients 12 years of age or older as a concurrent and second reader.

    Second Opinion® 3D consists of three parts:

    • Application Programming Interface ("API")
    • Machine Learning Modules ("ML Modules")
    • Client User Interface (UI) ("Client")

    The processing sequence for an image is as follows:

    1. Images are uploaded by user
    2. Images are sent for processing via the API
    3. The API routes images to the ML modules
    4. The ML modules produce detection output
    5. The UI renders the detection output

    The API serves as a conduit for passing imagery and metadata between the user interface and the machine learning modules. The API sends imagery to the machine learning modules for processing and subsequently receives metadata generated by the machine learning modules which is passed to the interface for rendering.

    Second Opinion® 3D uses machine learning to identify areas of interest such as Individual teeth, including implants and bridge pontics; Maxillary Complex; Mandible; Inferior Alveolar Canal and Mental Foramen (defined as IAN); Maxillary Sinus; Nasal Space; Airway. Images received by the ML modules are processed yielding detections which are represented as metadata. The final output is made accessible to the API for the purpose of sending to the UI for visualization. Masks are displayed as overlays atop the original CBCT radiograph which indicate to the practitioner a clinically relevant anatomy. The clinician can toggle over the image to highlight a particular anatomy.
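    The five-step processing sequence above can be sketched as a simple pipeline. All function names and the detection output here are illustrative placeholders, not Second Opinion® 3D's actual API:

```python
def route_to_ml_modules(image):
    # Stand-in for the ML modules: produce detection metadata for an image.
    # (The real modules run neural-network segmentation here.)
    return {"image_id": image["id"], "detections": ["Mandible", "Maxillary Sinus"]}

def api_process(image):
    # Steps 2-4: the API routes the image to the ML modules and
    # receives back the metadata they generate.
    return route_to_ml_modules(image)

def render_in_ui(metadata):
    # Step 5: the UI renders each detection as a mask overlay the
    # clinician can toggle.
    return [f"overlay:{name}" for name in metadata["detections"]]

# Step 1: the user uploads an image; the rest of the pipeline follows.
uploaded = {"id": "cbct-001", "voxels": []}
rendered = render_in_ui(api_process(uploaded))
print(rendered)  # ['overlay:Mandible', 'overlay:Maxillary Sinus']
```

    The API's role as a pure conduit is what the sketch preserves: it moves imagery one way and metadata the other, with no processing of its own.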

    AI/ML Overview

    Here's an analysis of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) clearance letter for Second Opinion® 3D:

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria are implicitly defined by the statistically significant accuracy thresholds for each anatomy segment that the device aims to identify. While explicit numerical thresholds for "passing" are not provided directly in the table, the text states, "Dentition, Maxilla, Mandible, IAN space, Sinus, Nasal space, and airway passed their individually associated threshold." The performance is reported in terms of the mean Dice Similarity Coefficient (DSC) with a 95% Confidence Interval (CI).

    ID | Anatomy Name      | Acceptance Criteria (Implied)      | Reported Performance (Mean DSC, 95% CI) | Passes?
    1  | Dentition         | Statistically significant accuracy | 0.86 (0.83, 0.89)                       | Yes
    2  | Maxillary Complex | Statistically significant accuracy | 0.91 (0.91, 0.92)                       | Yes
    3  | Mandible          | Statistically significant accuracy | 0.97 (0.97, 0.97)                       | Yes
    4  | IAN Canal         | Statistically significant accuracy | 0.76 (0.74, 0.78)                       | Yes
    5  | Maxillary Sinus   | Statistically significant accuracy | 0.97 (0.97, 0.98)                       | Yes
    6  | Nasal Space       | Statistically significant accuracy | 0.90 (0.89, 0.91)                       | Yes
    7  | Airway            | Statistically significant accuracy | 0.95 (0.94, 0.96)                       | Yes
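    The Dice Similarity Coefficient reported above measures volumetric overlap between the algorithm's segmentation and a reference segmentation: DSC = 2|A ∩ B| / (|A| + |B|). A minimal sketch over sets of voxel coordinates (toy data, not from the submission):

```python
def dice_coefficient(pred_voxels, truth_voxels):
    """DSC = 2*|A ∩ B| / (|A| + |B|) for two sets of voxel coordinates."""
    pred, truth = set(pred_voxels), set(truth_voxels)
    if not pred and not truth:
        return 1.0  # both masks empty: vacuous perfect agreement
    return 2 * len(pred & truth) / (len(pred) + len(truth))

# Toy 2-D "masks": the prediction misses one true voxel and adds one extra,
# so the overlap is 3 voxels out of 4 + 4 total.
truth = {(0, 0), (0, 1), (1, 0), (1, 1)}
pred = {(0, 0), (0, 1), (1, 0), (2, 2)}
print(dice_coefficient(pred, truth))  # 0.75
```

    A DSC of 1.0 means identical masks and 0.0 means no overlap, which is why thin tubular structures like the IAN canal (0.76) score lower than large solid ones like the mandible (0.97): small boundary errors cost proportionally more overlap.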

    2. Sample Size and Data Provenance for the Test Set

    • Sample Size: 100 images
    • Data Provenance: Anonymized images representing patients across the United States. It is a retrospective dataset, as it consists of pre-existing images.

    3. Number of Experts and Qualifications for Ground Truth Establishment

    The document does not explicitly state the "number of experts" or their specific "qualifications" (e.g., "radiologist with 10 years of experience") used to establish the ground truth for the test set. It only mentions that the images were "clinically validated."

    4. Adjudication Method for the Test Set

    The document does not specify an adjudication method (such as 2+1, 3+1, or none) for establishing the ground truth on the test set.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was an MRMC study done? No, the document describes a "standalone bench performance study" for the device's segmentation accuracy, not a comparative study with human readers involving AI assistance.
    • Effect size of human readers with AI vs. without AI assistance: Not applicable, as no MRMC study was performed or reported.

    6. Standalone (Algorithm Only Without Human-in-the-Loop) Performance

    • Was a standalone study done? Yes, the study described is a standalone bench performance study of the algorithm's segmentation accuracy. The reported Dice Similarity Coefficient scores reflect the algorithm's performance without human intervention after the initial image processing.

    7. Type of Ground Truth Used

    The ground truth used for the bench testing was established through "clinical validation" of the anatomical structures. Given that the performance metric is Dice Similarity Coefficient (a measure of overlap with a reference segmentation), the ground truth was most likely expert consensus segmentation or an equivalent high-fidelity reference segmentation created by qualified professionals. The term "clinically validated" implies expert review and agreement.
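    The document does not say how the reference segmentations were produced. One common way to build an expert-consensus ground truth, sketched here with toy data as an assumption rather than the sponsor's actual method, is a per-voxel majority vote across readers:

```python
from collections import Counter

def majority_vote_mask(expert_masks):
    """Per-voxel consensus: keep a voxel if more than half the experts marked it."""
    counts = Counter(v for mask in expert_masks for v in mask)
    quorum = len(expert_masks) / 2
    return {voxel for voxel, n in counts.items() if n > quorum}

# Three hypothetical readers segment the same structure slightly differently.
reader_a = {(0, 0), (0, 1), (1, 1)}
reader_b = {(0, 0), (0, 1)}
reader_c = {(0, 0), (1, 1), (2, 2)}
consensus = majority_vote_mask([reader_a, reader_b, reader_c])
print(sorted(consensus))  # [(0, 0), (0, 1), (1, 1)]
```

    Other schemes (e.g., a single senior reader adjudicating disagreements, or probabilistic label fusion) are equally plausible; the letter simply does not specify.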

    8. Sample Size for the Training Set

    The document does not explicitly state the sample size for the training set. It mentions the use of "machine learning techniques" and "neural network algorithms, developed from open-source models using supervised machine learning techniques," implying a training phase, but the size of the dataset used for this phase is not provided.

    9. How the Ground Truth for the Training Set Was Established

    The document states that the technology utilizes "supervised machine learning techniques." This implies that the ground truth for the training set was established through manual labeling or segmentation by human experts which then served as the 'supervision' for the machine learning models during their training phase. However, the exact methodology (e.g., number of experts, specific process) is not detailed.

