510(k) Data Aggregation

    K Number: K231396
    Manufacturer: (not stated)
    Date Cleared: 2024-01-31 (261 days)
    Product Code: (not stated)
    Regulation Number: 892.2050
    Reference & Predicate Devices: (not stated)
    Predicate For: (not stated)
    Intended Use

    CEPHX - Cephalometric Analysis Software is indicated for use by dentists who provide orthodontic treatment for image analysis, simulation, profilogram, VTO (Visual Treatment Objective), and patient consultation. Results produced by the software's diagnostic, treatment planning, and simulation tools are dependent on the interpretation of trained and licensed practitioners or dentists. The device is only for use on patients 14 years old and above.

    Device Description

    CEPHX – Cephalometric Analysis Software uses cephalometric x-ray images to help dentists study the relationships between bone and soft-tissue landmarks, and it can be used to diagnose facial abnormalities throughout orthodontic treatment.

    As a first step, the user uploads a 2D cephalometric image and the software generates 99 cephalometric landmark points. The landmarks are important points in a lateral radiographic view of the teeth, jaws, and base of the skull, and are used in multiple cephalometric analyses, which have proven to be a useful aid in basic orthodontic differential diagnosis. Once the landmark points are created, the user can generate a full report by choosing from a large selection of built-in analysis methods or create a custom analysis using the Analysis Wizard. The report can be printed or downloaded according to the user's selection.
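    To make the analysis step concrete, here is a minimal sketch of how a single cephalometric measurement can be derived from landmark coordinates. The landmark names, coordinates, and the angle_at helper are illustrative assumptions, not taken from the 510(k) summary; the actual software computes many such values across its 99 detected points.

        import numpy as np

        def angle_at(vertex, p1, p2):
            """Angle (degrees) formed at `vertex` by the rays toward p1 and p2."""
            v1, v2 = p1 - vertex, p2 - vertex
            cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
            return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))

        # Hypothetical landmark coordinates (mm) on a lateral cephalogram;
        # real values would come from the software's auto-detected points.
        landmarks = {
            "sella":   np.array([0.0, 0.0]),
            "nasion":  np.array([68.0, 8.0]),
            "a_point": np.array([70.0, -38.0]),
        }

        # SNA: the angle at Nasion between Sella and A-point, a standard
        # measure of maxillary position used in many cephalometric analyses.
        sna = angle_at(landmarks["nasion"], landmarks["sella"], landmarks["a_point"])
        print(f"SNA = {sna:.1f} degrees")

    Built-in analysis methods are, in essence, curated sets of such angular and linear measurements compared against reference norms.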

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the CEPHX - Cephalometric Analysis Software, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    For the primary endpoint, the acceptance criterion was that at least 85% of cases meet the "pass" criteria, where a "pass" required the 21 clinically significant landmarks detected automatically by the AI algorithm to lie within 2.0 mm of the manually detected landmarks.

    Acceptance Criteria                                | Reported Device Performance
    At least 85% of cases meeting the "pass" criteria  | 99% of landmarks were identified, meeting the study endpoint.
    "Pass" defined as a ≤ 2.0 mm margin per landmark   | 99% of landmarks met the ≤ 2.0 mm margin.

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size: The exact number of cases (images) in the test set is not stated. The summary only describes comparing the AI-generated landmarks with manually detected landmarks across the 21 clinically significant landmarks per case, so the test set must have been large enough to support the reported 99% figure (see the illustrative sketch after this list).
    • Data Provenance: The document does not state whether the data was retrospective or prospective, nor its country of origin.
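    Purely as an illustrative aside (the document gives no sample-size justification), one standard way to ask whether a test set of a given size supports the 85% endpoint is a one-sided Clopper-Pearson lower confidence bound on the case pass rate. The counts below are hypothetical.

        from scipy.stats import beta

        def pass_rate_lower_bound(passes, n_cases, alpha=0.05):
            """One-sided (1 - alpha) Clopper-Pearson lower bound on the pass rate."""
            if passes == 0:
                return 0.0
            return float(beta.ppf(alpha, passes, n_cases - passes + 1))

        # Hypothetical counts: 98 of 100 cases pass the 2.0 mm criterion.
        lb = pass_rate_lower_bound(98, 100)
        print(f"95% lower bound: {lb:.3f} -> >= 85% supported: {lb >= 0.85}")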

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • Number of Experts: Three (3)
    • Qualifications: "Three experienced orthodontic specialists." Specific years of experience are not mentioned.

    4. Adjudication Method (e.g. 2+1, 3+1, none) for the Test Set

    The document states only that the AI's output was "compared to three experienced orthodontic specialists." It does not describe a specific adjudication method such as 2+1 or 3+1 (where disagreements are resolved by an additional reader). Most likely the AI's landmarks were compared against the collective or individual manual markings of all three specialists; if a strict consensus process was used, it is not explicitly described.
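    If a consensus ground truth was used, one common construction (assumed here for illustration, not confirmed by the document) is to combine the three specialists' coordinates per landmark:

        import numpy as np

        def consensus_landmarks(reader_points):
            """Combine several readers' annotations into one ground truth.

            reader_points: shape (readers, landmarks, 2), coordinates in mm.
            The per-landmark mean is used here; a median would be more robust
            to a single outlying reader. Neither choice is confirmed by the
            510(k) summary.
            """
            return reader_points.mean(axis=0)

        # Hypothetical: three specialists marking 21 landmarks on one image.
        rng = np.random.default_rng(1)
        true_points = rng.uniform(0, 150, size=(21, 2))
        readers = true_points + rng.normal(0, 0.4, size=(3, 21, 2))

        print(consensus_landmarks(readers).shape)  # (21, 2)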

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and If So, the Effect Size of How Much Human Readers Improve With vs. Without AI Assistance

    No, an MRMC comparative effectiveness study involving human readers with vs. without AI assistance was not conducted or reported. The study focused solely on the standalone performance of the AI algorithm against human-established ground truth.

    6. If a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Study Was Done

    Yes, a standalone performance study was done. The document explicitly states: "The device's stand-alone performance was established against the ground truth..."

    7. The Type of Ground Truth Used

    The ground truth used was expert consensus / manual annotation. Specifically, it was established by "manually generated landmarks by orthodontic specialists." The AI's performance was then compared to these manually detected landmarks.

    8. The Sample Size for the Training Set

    The sample size for the training set is not mentioned in the provided document. The performance data section only discusses the verification study (testing).

    9. How the Ground Truth for the Training Set Was Established

    Information on how the ground truth for the training set was established is not provided in this document. The document only details the ground truth methodology for the test set.
