
Search Results

Found 2 results

510(k) Data Aggregation

    K Number
    K200078
    Device Name
    VARO Plan
    Date Cleared
    2020-04-10

    (87 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    N/A
    AI/ML | SaMD | IVD (In Vitro Diagnostic) | Therapeutic | Diagnostic | PCCP Authorized | Third Party | Expedited Review
    Intended Use

    VARO Plan Software is a stand-alone Windows-based software application to support the treatment planning for dental implantation. It is designed for qualified dental practitioners, including dentists and lab technicians. The software imports the medical image dataset of the patient in DICOM format from medical CT or dental CBCT scanners for pre-operative planning and simulation of dental placement. It is intended for use as pre-operative planning software for the placement of dental implant(s) based on imported CT image data, optionally aligned to an optical 3D surface scan. Virtual crowns can be used for optimized implant positioning under the prosthetic aspect. The digital three-dimensional model of a surgical guide for guided surgery can be designed based on the approved implant position. This 3D data can be exported to manufacture a separate physical product. Indications of the dental implants do not change with guided surgery compared to conventional surgery. Use of the software requires that the user has the necessary medical training in implantology and surgical dentistry.

    Device Description

    VARO Plan is a pure software device.

    VARO Plan is a dental implant surgical guide design software that is used to design surgical procedure guidelines for implanting one or more dental implants based on CT and tray data. An implant library, which includes certified implants, and sleeve libraries are provided. The guide model designed in accordance with the established dental implant operation plan can be exported as STL files.

    The following are the major functions of VARO Plan:

    • Patient Management and Surgical Plan Management
    • Data Management and Matching
    • Crown Model Management and Mesh Edit
    • Panoramic Screen Generation and Nerve Setting
    • Implant Simulation
    • Surgical Guide Design
    • Results Output
    • Report
    • Project Information Management

    AI/ML Overview

    Here is a summary of the acceptance criteria and study information for the VARO Plan device, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    Acceptance Criteria Category | Specific Criteria | Reported Device Performance
    --- | --- | ---
    Accuracy of Measurements on CT Datasets (using phantom) | Average Absolute Difference < 2% | Met
    Length (phantom) | Maximum Absolute Difference < 2% | Met
    Angle (phantom) | Maximum Absolute Difference < 2% | Met
    HU (Hounsfield Units) (phantom) | Maximum Absolute Difference < 2% | Met
    Surgical Guide Model (phantom) | Generated output surgical guide matches user input requirements (guide thickness, offset from teeth to guide, offset from sleeve to guide) | Met
    Accuracy of Implant Library Sizes | Average Absolute Difference < 2% | Met
    Diameter (implant library) | Maximum Absolute Difference < 2% | Met
    Length (implant library) | Maximum Absolute Difference < 2% | Met
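All of the quantitative criteria above reduce to the same check: compare the software's measurements against the phantom's known true values and require the average and maximum absolute percentage differences to stay under 2%. A minimal sketch of that check (the phantom values below are hypothetical, not from the submission):

```python
def percent_differences(measured, true):
    """Average and maximum absolute percentage differences between
    software measurements and the phantom's known true values."""
    diffs = [abs(m - t) / t * 100.0 for m, t in zip(measured, true)]
    return sum(diffs) / len(diffs), max(diffs)

def meets_criterion(measured, true, limit_pct=2.0):
    """True if both the average and the maximum difference are under the limit."""
    avg, mx = percent_differences(measured, true)
    return avg < limit_pct and mx < limit_pct

# Hypothetical phantom lengths in mm (illustrative values only)
true_mm = [10.00, 20.00, 30.00]
measured_mm = [10.05, 19.90, 30.20]
print(meets_criterion(measured_mm, true_mm))  # prints True
```

The same comparison applies to angles, HU values, and the implant library dimensions; only the units differ.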

    2. Sample Size Used for the Test Set and Data Provenance

    The provided text only mentions accuracy testing using a phantom. It does not specify a sample size in terms of the number of patient cases or images, as the tests were conducted with a physical phantom.

    • Sample Size for Test Set: Not specified in terms of distinct "cases" or "patients." The testing used a phantom.
    • Data Provenance: Not applicable for phantom testing in terms of country of origin or retrospective/prospective. The data was generated from a phantom.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    The ground truth for the phantom-based accuracy testing is the known true values of the phantom itself, so no human experts were used to establish it:

    • Number of Experts: 0 (Ground truth established by known physical phantom properties).
    • Qualifications of Experts: Not applicable.

    4. Adjudication Method for the Test Set

    Not applicable, as the ground truth was based on the known physical properties of the phantom, not on expert consensus requiring adjudication.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was an MRMC study done? No.
    • Effect Size: Not applicable, as no MRMC study was performed. The device is a planning and design software, not an AI-assisted diagnostic tool for human readers.

    6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study

    Yes, the studies described are standalone performance tests of the software's accuracy in measurements and guide generation against known phantom values. The device itself is described as a "standalone Windows-based software application."

    7. Type of Ground Truth Used

    Phantom-based true values: The ground truth for the accuracy tests was derived from the known physical dimensions, angles, and Hounsfield Units (HU) of a phantom, as well as the predefined design requirements for the surgical guide model.

    8. Sample Size for the Training Set

    The document does not provide information about a training set size. The VARO Plan is described as a "pure software device" for dental implant planning and surgical guide design. This type of software, especially for its primary functions listed (MPR, panoramic generation, implant simulation, surgical guide design based on user input), might not involve a machine learning model that requires a distinct "training set" in the conventional sense. The verification and validation activities mentioned are typical for software engineering rather than AI/ML model development.

    9. How the Ground Truth for the Training Set Was Established

    Not applicable, as no training set or machine learning model requiring such a ground truth establishment process is described in the provided information.


    K Number
    K183676
    Device Name
    DentiqAir
    Date Cleared
    2019-12-19

    (356 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    N/A
    AI/ML | SaMD | IVD (In Vitro Diagnostic) | Therapeutic | Diagnostic | PCCP Authorized | Third Party | Expedited Review
    Intended Use

    DentiqAir is a software application for the visualization of imaging information of the oral-maxillofacial region. The imaging data originates from medical scanners such as CT or CBCT scanners. The dental professionals' planning data may be exported from DentiqAir and used as input data for CAD or Rapid Prototyping Systems.

    Device Description

    DentiqAir is a pure software device applied for the visualization of imaging information of the ear-nose-throat (ENT) region and oral-maxillofacial region. The imaging data originates from medical scanners such as CT or Cone Beam CT (CBCT) scanners. This information can be complemented by the imaging information from optical impression systems. The medical professionals' input information and planning data may be exported from DentiqAir to be used for CAD or Rapid Prototyping Systems.

    AI/ML Overview

    The provided text describes the 510(k) premarket notification for the DentiqAir device. While it mentions performance tests, it does not include a detailed table of acceptance criteria and reported device performance for all features, nor does it provide a full breakdown of the test set, expert involvement, or MRMC study results typically found in comprehensive performance studies for AI/ML-driven devices.

    However, based on the non-clinical testing section, we can infer some information regarding the performance and acceptance criteria for specific functionalities.

    Here's a breakdown of the requested information based on the provided text:

    1. Table of Acceptance Criteria and the Reported Device Performance:

    The document primarily focuses on accuracy tests for measurements made using the device against phantom data. It does not provide performance metrics for segmentation accuracy (e.g., Dice score, Jaccard index) which might be expected for segmentation features.

    Feature Tested | Acceptance Criteria | Reported Device Performance
    --- | --- | ---
    Length | Average and maximum absolute difference less than 2% of the true value | "The testing results support that the subject device is substantially equivalent to the predicate or reference devices." (Implicitly met the criteria)
    Angle | Average and maximum absolute difference less than 2% of the true value | "The testing results support that the subject device is substantially equivalent to the predicate or reference devices." (Implicitly met the criteria)
    HU (Hounsfield Unit) | Average and maximum absolute difference less than 2% of the true value | "The testing results support that the subject device is substantially equivalent to the predicate or reference devices." (Implicitly met the criteria)
    Volume | Less than the true value and more than the Dolphin Imaging average (for airway volume, based on context) | "The testing results support that the subject device is substantially equivalent to the predicate or reference devices." (Implicitly met the criteria)

    Note: The phrasing "less than the true value and more than the Dolphin Imaging average" for Volume is ambiguous about exact numerical targets, but it implies a comparative target against a predicate device's expected performance. The document states that the test results support substantial equivalence, implying these criteria were met.
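Taken literally, the Volume criterion brackets the device's measurement between the predicate's average and the phantom's true value. One possible reading, as a sketch (the function name, threshold semantics, and values are assumptions, not from the document):

```python
def airway_volume_criterion_met(measured_mm3, true_value_mm3, dolphin_avg_mm3):
    """One reading of the Volume criterion: the measured airway volume
    must be less than the phantom's true value but greater than the
    Dolphin Imaging (predicate) average for the same dataset."""
    return dolphin_avg_mm3 < measured_mm3 < true_value_mm3

# Hypothetical volumes in cubic millimetres (illustrative values only)
print(airway_volume_criterion_met(9800.0, 10000.0, 9500.0))  # prints True
```

Under this reading the criterion is comparative rather than a fixed percentage bound, which is consistent with the document's substantial-equivalence framing.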

    2. Sample Size Used for the Test Set and the Data Provenance:

    • Sample Size: The document states that accuracy tests were conducted "from loaded CT datasets using phantom." It does not specify the number of CT datasets or phantoms used for this testing.
    • Data Provenance: The data used was from "phantom" studies, meaning simulated or controlled anatomical models, not human patient data. The country of origin is not explicitly stated for the phantom data, but the submitter is from the Republic of Korea. It is retrospective in nature, as it's a pre-market submission based on completed testing.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts:

    • Number of Experts: Not applicable. The ground truth for the phantom accuracy tests was established by the known true values of the phantom itself, not by expert consensus.
    • Qualifications of Experts: N/A as expert consensus was not the method for establishing ground truth for the stated performance tests.

    4. Adjudication Method for the Test Set:

    • Adjudication Method: Not applicable. Given the ground truth for the measurement accuracy tests was based on the known physical properties of the phantom, no human adjudication was necessary for these specific performance metrics.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study and Effect Size of AI Assistance:

    • MRMC Study: No. The document explicitly states: "Clinical testing is not a requirement and has not been performed." The performance tests described are strictly non-clinical and focus on software functionality and measurement accuracy. This type of study would be highly relevant for devices intended to assist human readers in diagnosis or treatment planning.

    6. Standalone (Algorithm-Only, Without Human-in-the-Loop) Performance Study:

    • Standalone Performance: For the measurement accuracy tests (Length, Angle, HU, Volume), the described testing does reflect standalone performance as it compares the device's measurements directly to the phantom's true values, without human intervention in the measurement process itself.
    • The software also includes "segmentation features," and the document states, "Performance testing has been used to validate the safety and effectiveness of the DentiqAir segmentation features in comparison to the predicate devices." However, no specific quantitative standalone performance metrics (e.g., Dice coefficient, precision, recall) for segmentation are provided in this summary.

    7. The Type of Ground Truth Used:

    • Ground Truth: For the measurement accuracy tests, the ground truth was based on the known physical properties (true values) of the phantom used for testing.

    8. The Sample Size for the Training Set:

    • Training Set Sample Size: Not provided. The document is a 510(k) summary for a software device. While it mentions "software verification and validation testing activities" including "code review, integration review, and dynamic tests," and "performance tests," it does not discuss the training or development of any AI/ML models within the software or the data used for such purposes. The device's segmentation algorithm is mentioned as "Water Shed (a type of graph-cut algorithm)," which is a traditional image processing algorithm rather than a deep learning model requiring a specific training set with labeled data in the sense of modern AI/ML.
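The submission names "Water Shed" as the segmentation algorithm but gives no implementation details. For illustration, a generic marker-based watershed can be realised as a priority flood: seed pixels grow outward in order of increasing image intensity until the basins meet. A minimal pure-Python sketch on synthetic data (an assumption for exposition, not DentiqAir's actual implementation):

```python
import heapq

def watershed(image, markers):
    """Minimal marker-based watershed via priority flooding.

    `image` is a 2-D list of intensities (e.g. a CT slice or its gradient)
    and `markers` a same-shaped list with positive integer labels at seed
    pixels and 0 elsewhere. Unlabeled pixels are flooded from the seeds in
    order of increasing intensity, so basins grow outward until they meet.
    """
    h, w = len(image), len(image[0])
    labels = [row[:] for row in markers]
    heap = [(image[y][x], y, x)
            for y in range(h) for x in range(w) if markers[y][x]]
    heapq.heapify(heap)
    while heap:
        _, y, x = heapq.heappop(heap)
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and labels[ny][nx] == 0:
                labels[ny][nx] = labels[y][x]   # claim pixel for this basin
                heapq.heappush(heap, (image[ny][nx], ny, nx))
    return labels

# Toy 3×5 "slice": a high-intensity ridge in the middle column
image = [[0, 0, 9, 0, 0] for _ in range(3)]
markers = [[0] * 5 for _ in range(3)]
markers[1][0], markers[1][4] = 1, 2   # two seed labels, left and right
labels = watershed(image, markers)    # basins 1 and 2 grow until they meet
```

Because the flood order depends only on intensity, the two basins expand through the low-intensity regions first and only cross the ridge last, which is the behaviour that makes watershed useful for separating adjacent anatomical structures.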

    9. How the Ground Truth for the Training Set Was Established:

    • Ground Truth for Training Set: Not applicable. As the document does not describe the use of an AI/ML model that requires a labeled training set in the contemporary sense, the establishment of ground truth for a training set is not discussed. The Water Shed algorithm does not require labeled training data in the same way a deep learning model would.