Found 2 results

510(k) Data Aggregation

    K Number: K250198
    Device Name: Laon Ortho
    Manufacturer: LAON MEDI Inc.
    Date Cleared: 2025-04-23 (90 days)
    Product Code:
    Regulation Number: 872.5470
    Reference & Predicate Devices
    Reference & Predicate Devices
    Why did this record match? Applicant Name (Manufacturer): LAON MEDI Inc.

    Intended Use

    The Laon Ortho is intended for use as a medical front-end device providing tools for management of orthodontic models, systematic inspection, detailed analysis, treatment simulation and virtual appliance design options based on 3D models of the patient's dentition before the start of an orthodontic treatment.

    The use of the Laon Ortho requires the user to have the necessary training and domain knowledge in the practice of orthodontics, as well as to have received a dedicated training in the use of the software.

    Device Description

    Laon Ortho is a PC-based software that sets up virtual orthodontics via digital impressions. It automatically segments the crown and the gum in a simple manner and provides basic model analysis to assist digital orthodontic procedures.

    AI/ML Overview

    The provided FDA 510(k) clearance letter and summary for Laon Ortho primarily focus on demonstrating substantial equivalence to predicate devices, particularly concerning its design, functionality, and intended use. While it mentions "verification and validation (V&V) testing" and "performance test," it does not provide granular details about the specific acceptance criteria for AI performance, the study design, sample sizes, ground truth establishment methods, or expert qualifications that would typically be associated with rigorous clinical or non-clinical performance studies for AI/ML devices.

    The key takeaway is that the clearance appears to be based on the equivalence of the "Automatic Simulation Mode" to the "Manual Mode" in achieving the same treatment planning, rather than a standalone AI performance study against a clinical ground truth.

    Therefore, the sections below extract what is available and note what is not present.

    Here's the breakdown based on the provided document:


    Acceptance Criteria and Device Performance for Laon Ortho

    The document states: "The results of the verification and validation (V&V) testing showed that the Automatic Mode achieves the same treatment planning as the existing workflow." This implies the "acceptance criteria" for the Automatic Simulation Mode (the new feature) was equivalence to the existing manual workflow for treatment planning. However, the specific metrics for "same treatment planning" are not detailed.

    1. Table of Acceptance Criteria and Reported Device Performance

    Acceptance Criterion (Implied): The Automatic Mode achieves the "same treatment planning" as the existing workflow.
    Reported Performance: "The results of the verification and validation (V&V) testing showed that the Automatic Mode achieves the same treatment planning as the existing workflow."

    Acceptance Criterion (Implied): The device meets all performance test criteria.
    Reported Performance: "Through the performance test, it was confirmed that Laon Ortho meets all performance test criteria and that all functions work without errors."

    Note: The document does not specify quantitative metrics (e.g., accuracy, precision, F1-score, or specific measurement deviations) for "same treatment planning" or "meets all performance test criteria."
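    Since the summary names no metric for "same treatment planning," the sketch below illustrates, purely as an assumption, what such a quantitative equivalence check could look like: comparing the final planned tooth positions from the two modes against a displacement tolerance. The 0.5 mm threshold and the `plans_equivalent` function are hypothetical and do not appear in the submission.

    ```python
    import math

    # Illustrative assumption: "same treatment planning" is operationalized as
    # every planned tooth position agreeing within a fixed tolerance. The 0.5 mm
    # threshold is invented for this sketch, not taken from the 510(k) summary.
    TOLERANCE_MM = 0.5

    def plans_equivalent(manual, automatic, tol_mm=TOLERANCE_MM):
        """True if every planned tooth position deviates by less than tol_mm.

        manual, automatic: lists of (x, y, z) positions in millimetres,
        one tuple per tooth, in the same order.
        """
        deviations = [
            math.dist(m, a)  # Euclidean distance between the two planned positions
            for m, a in zip(manual, automatic, strict=True)
        ]
        return max(deviations) < tol_mm

    # Toy data: three teeth, the automatic plan differs by at most 0.3 mm
    manual = [(0.0, 0.0, 0.0), (1.0, 2.0, 0.0), (3.0, 1.0, 0.5)]
    auto = [(0.1, 0.0, 0.0), (1.0, 2.2, 0.0), (3.0, 1.0, 0.8)]
    print(plans_equivalent(manual, auto))  # True under the assumed 0.5 mm tolerance
    ```

    A real V&V protocol would also need to define which landmarks are compared and how angular (torque/rotation) agreement is scored; none of that is described in the document.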


    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size: Not explicitly stated. The document mentions "verification and validation (V&V) testing" and "performance test" but does not provide the number of cases or scans used for these tests.
    • Data Provenance: Not explicitly stated. The company is based in South Korea, but the origin (e.g., country, specific clinics) of the data used for V&V testing is not mentioned. It also doesn't explicitly state if the data was retrospective or prospective, though performance testing often uses existing (retrospective) data.

    3. Number of Experts and Their Qualifications for Ground Truth

    • Number of Experts: Not explicitly stated. The document states, "The use of the Laon Ortho requires the user to have the necessary training and domain knowledge in the practice of orthodontics, as well as to have received a dedicated training in the use of the software." This refers to the user of the software, not the experts who established the ground truth for V&V.
    • Qualifications of Experts: Not explicitly stated. It's highly probable that orthodontic experts were involved in evaluating if the Automatic Mode achieved "the same treatment planning," but their specific number, roles, and qualifications (e.g., years of experience, board certification) are not detailed in this summary.

    4. Adjudication Method for the Test Set

    • Adjudication Method: Not explicitly stated. Given the lack of detail on the "same treatment planning" assessment, the method for resolving discrepancies among evaluators (if multiple were used) is unknown from this document.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was an MRMC study done? No, not explicitly stated or implied. The summary explicitly notes "Clinical Test Summary: Not Applicable." This indicates that a rigorous human-in-the-loop study, such as an MRMC study comparing human readers with and without AI assistance, was not performed or submitted as part of this 510(k). The focus was on the internal equivalence of the AI-driven "Automatic Mode" to the device's own "Manual Mode."
    • Effect Size: N/A, as no MRMC study was conducted.

    6. Standalone (Algorithm Only) Performance Study

    • Was a standalone study done? Yes, in an indirect sense, but against an internal benchmark. The "Automatic Simulation Mode" is an algorithm that performs treatment planning. The V&V testing confirmed that this algorithm's output ("Automatic Mode") aligns with the output of the "existing workflow" (presumably the manual or previously cleared aspects of the device). However, this is not a standalone study against an independent, external clinical ground truth (e.g., pathology, clinical outcomes). It's more of a functional validation against an established internal process. The document does not provide standalone quantitative performance metrics (e.g., sensitivity, specificity, accuracy) for the algorithm itself.
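    To make concrete what a standalone quantitative metric for the automatic crown/gum segmentation might look like, the sketch below computes a Dice similarity coefficient between a predicted mask and a reference mask. This is a common segmentation metric in general, but nothing here is taken from the submission; the function and toy data are illustrative only.

    ```python
    # Illustrative sketch of a standalone segmentation metric the summary does
    # not report: the Dice similarity coefficient between the algorithm's mask
    # and a reference (e.g., expert-annotated) mask.

    def dice_coefficient(pred, ref):
        """Dice similarity between two equal-length binary label sequences.

        1.0 means perfect overlap; 0.0 means no overlap at all.
        """
        if len(pred) != len(ref):
            raise ValueError("masks must have the same length")
        intersection = sum(p and r for p, r in zip(pred, ref))
        total = sum(pred) + sum(ref)
        if total == 0:
            return 1.0  # both masks empty: treat as perfect agreement
        return 2.0 * intersection / total

    # Toy voxel labels (1 = crown, 0 = not crown)
    pred = [1, 1, 1, 0, 0, 1]
    ref = [1, 1, 0, 0, 0, 1]
    print(round(dice_coefficient(pred, ref), 3))  # 0.857
    ```

    Reporting a metric like this against expert-annotated scans is what a standalone performance study would typically contain; the 510(k) summary provides no such figures.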

    7. Type of Ground Truth Used

    • Type of Ground Truth: The "ground truth" used for evaluating the "Automatic Simulation Mode" was its ability to achieve "the same treatment planning as the existing workflow." This implies that the accepted output or method of the "existing workflow" served as the reference. It is an internal ground truth based on the device's established manual capabilities, rather than an independent clinical ground truth like pathology, surgical findings, or long-term patient outcomes.

    8. Sample Size for the Training Set

    • Training Set Sample Size: Not stated. The document refers to "Software Validation" and "Performance Testing" but provides no information about the size or characteristics of the data used to train the "Automatic Simulation Mode" algorithm.

    9. How the Ground Truth for the Training Set Was Established

    • Ground Truth Establishment for Training: Not stated. Since the training set details are omitted, the method for establishing its ground truth is also not provided.

    K Number: K232564
    Device Name: Align Studio
    Manufacturer: Laon Medi Inc.
    Date Cleared: 2024-03-12 (201 days)
    Product Code:
    Regulation Number: 872.5470
    Reference & Predicate Devices
    Reference & Predicate Devices
    Why did this record match? Applicant Name (Manufacturer): Laon Medi Inc.

    Intended Use

    The Align Studio is intended for use as a medical front-end device providing tools for management of orthodontic models, systematic inspection, detailed analysis, treatment simulation and virtual appliance design options based on 3D models of the patient's dentition before the start of an orthodontic treatment.

    The use of the Align Studio requires the user to have the necessary training and domain knowledge in the practice of orthodontics, as well as to have received a dedicated training in the use of the software.

    Device Description

    Align Studio is a PC-based software that sets up virtual orthodontics via digital impressions. It automatically segments the crown and the gum in a simple manner and provides basic model analysis to assist digital orthodontic procedures.

    AI/ML Overview

    The provided document, an FDA 510(k) summary for "Align Studio," does not contain detailed information about specific acceptance criteria, a comprehensive study proving the device meets those criteria, or the methodology (e.g., sample size, expert qualifications, ground truth establishment) typically associated with such studies for AI/ML-based medical devices.

    Instead, this document focuses on demonstrating substantial equivalence to predicate devices (Ortho System and CEREC Ortho Software) rather than presenting a detailed performance study against predefined acceptance criteria for an AI-powered system. The Non-Clinical Test Summary section briefly mentions "software validation" and "performance testing" but without quantifiable metrics or specific methodologies. It states that "Align Studio meets all performance test criteria and that all functions work without errors" and "test results support the conclusion that actual device performance satisfies the design intent and is equivalent to its predicate device."

    Therefore, the table and most of the questions below cannot be populated, as the required information is not present in the provided text.

    Here's what can be extracted based on the limited information provided:

    1. A table of acceptance criteria and the reported device performance
    The document does not provide a table of acceptance criteria with quantifiable performance metrics specific to an AI/ML system's output. It broadly states the device "meets all performance test criteria" and "functions work without errors." The focus is on functional equivalence to predicate devices rather than specific quantitative performance targets for an AI component.

    2. Sample size used for the test set and the data provenance
    Not specified. The document does not detail the test set used for performance evaluation, nor its size or origin (country, retrospective/prospective).

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
    Not specified. The document doesn't describe the establishment of a ground truth for a test set, which would typically involve expert review for AI/ML performance evaluation.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
    Not specified, as a detailed ground truth establishment process is not described.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, the effect size of how much human readers improve with AI vs. without AI assistance
    No MRMC comparative effectiveness study is mentioned. The submission focuses on substantial equivalence based on device features and intended use, not on human reader performance with AI assistance.

    6. If a standalone study (i.e., algorithm-only performance without human-in-the-loop) was done
    The "Non-Clinical Test Summary" section mentions "Performance Testing" which could imply standalone testing, but no specific metrics for an algorithm-only performance (e.g., segmentation accuracy, measurement precision without human interaction) are provided. The device is described as "PC-based software" for "virtual orthodontics" that "automatically segments the crown and the gum," implying an algorithm performing actions. However, the document does not detail the standalone performance metrics for this automated segmentation or other AI features.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
    Not specified. Given the lack of detailed performance study information, the type of ground truth used is not described.

    8. The sample size for the training set
    Not specified. The document does not provide details on the training set used for any AI/ML components within the "Align Studio" software.

    9. How the ground truth for the training set was established
    Not specified. Without information on a training set, the method of establishing its ground truth is also not provided.

    Summary of available information regarding software validation and performance:

    • Software Validation: "Align Studio contains Basic Documentation Level software [that] was designed and developed according to a software development process and was verified and validated."
    • Performance Testing: "Through the performance test, it was confirmed that Align Studio meets all performance test criteria and that all functions work without errors. Test results support the conclusion that actual device performance satisfies the design intent and is equivalent to its predicate device."
    • Clinical Studies: "No clinical studies were considered necessary and performed."

    This filing relies on demonstrating substantial equivalence to already cleared predicate devices based on shared technological characteristics and intended use, rather than presenting a novel performance study for an AI/ML component with specific acceptance criteria and detailed clinical validation results.
