
510(k) Data Aggregation

    K Number: K223816
    Date Cleared: 2023-04-07 (108 days)
    Regulation Number: 892.1750
    Predicate For: N/A
    Intended Use

    The DENTAL CT SCANNER AXR is designed to obtain 2D and 3D radiological images of the oral anatomy, including teeth, maxillofacial areas, oral structures, carpal images and head-neck bone regions. This system is exclusively for dental use and should be handled only by qualified health professionals.

    Device Description

    The Dental CT Scanner AXR is a complete 4-in-1 dental imaging system capable of generating panoramic, cephalometric and tomographic images using the cone beam computed tomography (CBCT) technique. The AXR90 has a maximum kVp of 90, while the AXR120 has a maximum kVp of 120. The digital acquisition process uses an X-ray sensor and automatic image processing to speed diagnosis and improve clinic workflow.

    AI/ML Overview

    Here's an analysis of the provided text regarding the acceptance criteria and study for the Dental CT Scanner AXR.

    It's important to note that the provided text is a 510(k) summary, which focuses on demonstrating substantial equivalence to a predicate device rather than providing a detailed clinical study report for novel AI algorithms. Therefore, specific details common in AI/ML performance studies, such as effects of AI assistance on human readers, detailed ground truth establishment for a large test set, and precise metrics for algorithm-only performance against acceptance criteria, are not present in this type of document. The "device" in this context refers to the entire CT scanner, not a specific AI component for interpretation.


    Acceptance Criteria and Device Performance Study for Dental CT Scanner AXR

    1. Table of Acceptance Criteria and Reported Device Performance

    | Acceptance Criteria Category | Specific Criteria | Reported Device Performance | Comments |
    |---|---|---|---|
    | Safety | General Electrical Safety (IEC 60601-1) | All tests passed | Met |
    | Safety | Electromagnetic Compatibility (IEC 60601-1-2) | All tests passed | Met |
    | Safety | Radiation Safety (IEC 60601-1-3, IEC 60601-2-63) | All tests passed | Met |
    | Safety | Biocompatibility (EN ISO 10993-1) | All tests passed | Met (for irritation, sensitization, cytotoxicity) |
    | Risk Analysis & Software Validation | | Performed according to FDA guidance for moderate level of concern | Met |
    | Cybersecurity | | Complied with FDA guidance recommendations | Met |
    | Performance | Image Evaluation | Images found to be equivalent or better than predicate device | Met (qualitative assessment) |
    | Manufacturing/Quality | Connection to Software | 100% tested | Met |
    | Manufacturing/Quality | Exposure Accuracy | 100% tested | Met |
    | Manufacturing/Quality | Tube Voltage and Exposure Time | 100% tested | Met |
    | Manufacturing/Quality | Reproducibility | 100% tested | Met |
    | Manufacturing/Quality | Beam Quality | 100% tested | Met |
    | Manufacturing/Quality | Leakage Radiation | 100% tested | Met |
    | Manufacturing/Quality | Tube Efficiency | 100% tested | Met |

    2. Sample Size Used for the Test Set and Data Provenance

    The provided 510(k) summary does not mention a specific "test set" in the context of an AI algorithm's performance evaluation against ground truth. The performance data presented refers to the overall system's image quality and technical specifications.

    • Sample Size: Not applicable in the context of an AI test set. The document states that "Each unit manufactured is 100% tested" for certain technical parameters. For image evaluation, a single statement is made: "Dental images were compared to the images obtained on the predicate device." This suggests a qualitative comparison rather than a quantitative study on a defined test set.
    • Data Provenance: Not specified. Based on the manufacturer's location (Brazil), it's likely the "dental images" used for comparison were generated internally or through clinical partners in Brazil. The data is implicitly retrospective as it compares images from the new device to a predicate, not a prospective clinical trial.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    • Number of Experts: At least two. Image evaluation was performed by "both licensed dentist and a USA Board Certified Radiologist"; the exact number of licensed dentists is not specified (e.g., one or multiple).
    • Qualifications:
      • "licensed dentist" (general qualification)
      • "USA Board Certified Radiologist" (specific high-level qualification in radiology)

    4. Adjudication Method for the Test Set

    No formal adjudication method (e.g., 2+1, 3+1) is described. The text states that "Dental images were compared to the images obtained on the predicate device and found to be equivalent or better," implying a consensus or agreement was reached by the experts during their evaluation, but no structured adjudication process is detailed.

    5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study Was Done

    No, a Multi Reader Multi Case (MRMC) comparative effectiveness study was not explicitly done. The document does not describe any study comparing human readers with and without AI assistance, nor does it provide an effect size for human reader improvement. The "Mult Slice" functionality is a software feature that enhances image quality for the reader by allowing virtual adjustment of the cutting plane, but it's not described as an AI-assisted diagnostic tool requiring an MRMC study.

    6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done

    This device is not described as having a standalone artificial intelligence component that performs diagnostic tasks without human-in-the-loop. The "Mult Slice" panoramic software functionality described is an image processing feature, not a diagnostic AI algorithm. Therefore, no standalone algorithm-only performance study was conducted or is relevant based on the provided information.

    7. The Type of Ground Truth Used

    The concept of "ground truth" for a diagnostic AI is not directly applicable here. The evaluation of the device relied on:

    • Technical Standards Compliance: Successful completion of tests against established international safety and performance standards (IEC, ANSI/AAMI, EN ISO).
    • Expert Qualitative Image Comparison: Subjective assessment by a licensed dentist and a USA Board Certified Radiologist, comparing images from the new device to those from the predicate device.

    8. The Sample Size for the Training Set

    Not applicable. This document describes a medical imaging device (CT scanner) demonstrating substantial equivalence to a predicate, not an AI/ML algorithm that requires a training set. The "Mult Slice" function is described as a software functionality rather than a machine learning model that would be trained on data.

    9. How the Ground Truth for the Training Set Was Established

    Not applicable, as no training set for an AI/ML algorithm is described in this submission.


    K Number: K210820
    Date Cleared: 2021-08-10 (144 days)
    Regulation Number: 892.1750
    Intended Use

    The EAGLE EDGE AXR90 and AXR120 are CBCT / Panoramic / Cephalometric X-Ray units designed to obtain 2D and 3D radiological images of the oral anatomy, including teeth, maxillofacial areas, oral structures, carpal images and head-neck bone regions. This system is exclusively for dental use and should be handled only by qualified health professionals.

    Device Description

    EAGLE EDGE Models AXR90 and AXR120 are complete 3-in-1 dental imaging systems capable of generating panoramic, cephalometric and tomographic images using the cone beam computed tomography (CBCT) technique. The AXR90 has a maximum kVp of 90, while the AXR120 has a maximum kVp of 120. The digital acquisition process uses one or more X-ray sensors and automatic image processing to speed diagnosis and improve clinic workflow.

    AI/ML Overview

    The provided document does not contain a detailed study proving the device meets specific acceptance criteria with reported device performance metrics in a table. It primarily outlines the non-clinical and clinical testing performed to establish substantial equivalence to a predicate device.

    However, based on the information provided, here's a breakdown of what can be extracted and inferred regarding acceptance criteria and the study:

    1. Table of Acceptance Criteria and Reported Device Performance:

    The document does not present a formal table of acceptance criteria with corresponding device performance metrics. Instead, it offers a qualitative assessment of imaging equivalence and software feature effectiveness.

    | Acceptance Criteria (Inferred from Testing) | Reported Device Performance |
    |---|---|
    | Safety Standards (Electrical, EMC) | All standards tests passed. |
    | Biocompatibility | All tests passed. |
    | Software Validation | Performed according to FDA guidance for moderate level of concern. |
    | Cybersecurity | Compliance with FDA guidance recommendations. |
    | Image Quality (equivalence to predicate) | Images found to be equivalent or better than predicate device. |
    | Effectiveness of Motion Artifact Reduction Software | Found to be effective. |
    | Effectiveness of Metal Artifact Reduction Software | Found to be effective. |

    2. Sample Size and Data Provenance:

    • Test Set Sample Size: The document does not specify a numerical sample size for the "Sample X-Ray images" taken for evaluation. It only states that images were taken "across the different operational modes".
    • Data Provenance: The document indicates that the evaluation was performed by a "USA Board Certified Radiologist," implying the evaluation took place in the USA, but the country of origin of the patients in the "Sample X-Ray images" is not explicitly stated. The manufacturer is based in Brazil (Alliage S/A Indústrias Médico Odontológica, Rodovia Abrão Assed, ..., Ribeirão Preto, São Paulo, Brazil), so it is possible the images were generated there. The study is retrospective in the sense that the images were taken and then evaluated, but it is not explicitly labeled as such.

    3. Number of Experts and Qualifications:

    • Number of Experts: At least two. The document states, "Image evaluation was performed by both licensed dentist and a USA Board Certified Radiologist," and "These images were evaluated by an American Board of Radiology certified radiologist." This indicates one radiologist and at least one licensed dentist.
    • Qualifications of Experts:
      • "USA Board Certified Radiologist" or "American Board of Radiology certified radiologist."
      • "licensed dentist."

    4. Adjudication Method:

    The document does not describe a formal adjudication method (like 2+1 or 3+1). It states that image evaluation was performed by multiple individuals ("both licensed dentist and a USA Board Certified Radiologist"), but it doesn't detail how disagreements, if any, were resolved or if a consensus mechanism was employed.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

    A full MRMC comparative effectiveness study, comparing human readers with and without AI assistance to quantify improvement, was not reported. The study focused on assessing the device's image quality and the effectiveness of its built-in software features (motion and metal artifact reduction) in a standalone capacity.

    6. Standalone Performance:

    Yes, a standalone performance assessment was done. The "Sample X-Ray images" taken by the device were evaluated by a radiologist to determine image quality and the effectiveness of the software features (motion and metal artifact reduction). This evaluation assessed the software's output directly, without a human-in-the-loop component measuring improvement in diagnostic decision-making.

    7. Type of Ground Truth:

    The ground truth used for evaluating image quality and software effectiveness appears to be expert consensus/opinion. The "USA Board Certified Radiologist" and "licensed dentist" provided their professional opinions on the equivalence of images to the predicate and the effectiveness of the artifact reduction software. There is no mention of pathology or outcomes data as ground truth.

    8. Sample Size for the Training Set:

    The document does not specify a sample size for the training set. It mentions "software validation" but doesn't detail any machine learning models or their training data. The entire submission focuses on the X-ray units and their imaging capabilities, not on an AI diagnostic algorithm.

    9. How Ground Truth for Training Set was Established:

    Since a training set for an AI model is not explicitly mentioned, the method for establishing its ground truth is not provided. The document describes validations for safety, EMC, biocompatibility, and software in general, but not the specific training process for an AI component with an associated ground truth.

