
510(k) Data Aggregation

    K Number
    K250198
    Device Name
    Laon Ortho
    Manufacturer
    Date Cleared
    2025-04-23

    (90 days)

    Product Code
    PNN
    Regulation Number
    872.5470
    Reference & Predicate Devices
    Intended Use

    The Laon Ortho is intended for use as a medical front-end device providing tools for management of orthodontic models, systematic inspection, detailed analysis, treatment simulation and virtual appliance design options based on 3D models of the patient's dentition before the start of an orthodontic treatment.

    The use of the Laon Ortho requires the user to have the necessary training and domain knowledge in the practice of orthodontics, as well as to have received a dedicated training in the use of the software.

    Device Description

    Laon Ortho is a PC-based software that sets up virtual orthodontics via digital impressions. It automatically segments the crown and the gum in a simple manner and provides basic model analysis to assist digital orthodontic procedures.

    AI/ML Overview

    The provided FDA 510(k) clearance letter and summary for Laon Ortho primarily focus on demonstrating substantial equivalence to predicate devices, particularly concerning its design, functionality, and intended use. While it mentions "verification and validation (V&V) testing" and "performance test," it does not provide granular details about the specific acceptance criteria for AI performance, the study design, sample sizes, ground truth establishment methods, or expert qualifications that would typically be associated with rigorous clinical or non-clinical performance studies for AI/ML devices.

    The key takeaway is that the clearance appears to be based on the equivalence of the "Automatic Simulation Mode" to the "Manual Mode" in achieving the same treatment planning, rather than a standalone AI performance study against a clinical ground truth.

    Therefore, I will extract what is available and highlight what is not present.

    Here's the breakdown based on the provided document:


    Acceptance Criteria and Device Performance for Laon Ortho

    The document states: "The results of the verification and validation (V&V) testing showed that the Automatic Mode achieves the same treatment planning as the existing workflow." This implies the "acceptance criterion" for the Automatic Simulation Mode (the new feature) was equivalence to the existing manual workflow for treatment planning. However, the specific metrics for "same treatment planning" are not detailed.

    1. Table of Acceptance Criteria and Reported Device Performance

    Acceptance Criterion (Implied): Automatic Mode achieves the "same treatment planning" as the existing workflow.
    Reported Performance: "The results of the verification and validation (V&V) testing showed that the Automatic Mode achieves the same treatment planning as the existing workflow."

    Acceptance Criterion (Implied): Meets all performance test criteria.
    Reported Performance: "Through the performance test, it was confirmed that Laon Ortho meets all performance test criteria and that all functions work without errors."

    Note: The document does not specify quantitative metrics (e.g., accuracy, precision, F1-score, or specific measurement deviations) for "same treatment planning" or "meets all performance test criteria."
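For context, the quantitative metrics named above (accuracy, precision, F1-score) are standard classification metrics computed from a confusion matrix. The sketch below is purely illustrative, with hypothetical counts; the 510(k) summary reports no such numbers for Laon Ortho.

```python
# Standard classification metrics that a quantitative acceptance
# criterion might reference. The counts passed in are hypothetical,
# for illustration only; no data of this kind appears in the submission.
def classification_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)  # also called sensitivity
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * precision * recall / (precision + recall)
    return {"precision": precision, "recall": recall,
            "accuracy": accuracy, "f1": f1}

print(classification_metrics(tp=90, fp=10, fn=5, tn=95))
```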


    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size: Not explicitly stated. The document mentions "verification and validation (V&V) testing" and "performance test" but does not provide the number of cases or scans used for these tests.
    • Data Provenance: Not explicitly stated. The company is based in South Korea, but the origin (e.g., country, specific clinics) of the data used for V&V testing is not mentioned. It also doesn't explicitly state if the data was retrospective or prospective, though performance testing often uses existing (retrospective) data.

    3. Number of Experts and Their Qualifications for Ground Truth

    • Number of Experts: Not explicitly stated. The document states, "The use of the Laon Ortho requires the user to have the necessary training and domain knowledge in the practice of orthodontics, as well as to have received a dedicated training in the use of the software." This refers to the user of the software, not the experts who established the ground truth for V&V.
    • Qualifications of Experts: Not explicitly stated. It's highly probable that orthodontic experts were involved in evaluating if the Automatic Mode achieved "the same treatment planning," but their specific number, roles, and qualifications (e.g., years of experience, board certification) are not detailed in this summary.

    4. Adjudication Method for the Test Set

    • Adjudication Method: Not explicitly stated. Given the lack of detail on the "same treatment planning" assessment, the method for resolving discrepancies among evaluators (if multiple were used) is unknown from this document.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was an MRMC study done? No, not explicitly stated or implied. The summary explicitly notes "Clinical Test Summary: Not Applicable." This indicates that a rigorous human-in-the-loop study, such as an MRMC study comparing human readers with and without AI assistance, was not performed or submitted as part of this 510(k). The focus was on the internal equivalence of the AI-driven "Automatic Mode" to the device's own "Manual Mode."
    • Effect Size: N/A, as no MRMC study was conducted.

    6. Standalone (Algorithm Only) Performance Study

    • Was a standalone study done? Yes, in an indirect sense, but against an internal benchmark. The "Automatic Simulation Mode" is an algorithm that performs treatment planning. The V&V testing confirmed that this algorithm's output ("Automatic Mode") aligns with the output of the "existing workflow" (presumably the manual or previously cleared aspects of the device). However, this is not a standalone study against an independent, external clinical ground truth (e.g., pathology, clinical outcomes). It's more of a functional validation against an established internal process. The document does not provide standalone quantitative performance metrics (e.g., sensitivity, specificity, accuracy) for the algorithm itself.

    7. Type of Ground Truth Used

    • Type of Ground Truth: The "ground truth" used for evaluating the "Automatic Simulation Mode" was its ability to achieve "the same treatment planning as the existing workflow." This implies that the accepted output or method of the "existing workflow" served as the reference. It is an internal ground truth based on the device's established manual capabilities, rather than an independent clinical ground truth like pathology, surgical findings, or long-term patient outcomes.

    8. Sample Size for the Training Set

    • Training Set Sample Size: Not stated. The document refers to "Software Validation" and "Performance Testing" but provides no information about the size or characteristics of the data used to train the "Automatic Simulation Mode" algorithm.

    9. How the Ground Truth for the Training Set Was Established

    • Ground Truth Establishment for Training: Not stated. Since the training set details are omitted, the method for establishing its ground truth is also not provided.

    K Number
    K241153
    Date Cleared
    2024-10-11

    (168 days)

    Product Code
    PNN
    Regulation Number
    872.5470
    Reference & Predicate Devices
    Intended Use

    The Progressive Orthodontics App is indicated for use as a front-end software tool for management of orthodontic models, systematic inspection, detailed analysis, treatment simulation and virtual appliance design options, which may be used for sequential aligner trays or retainers. These applications are based on 3D scans of the patient's dentition before the start of an orthodontic treatment. It can also be applied during the treatment to inspect and analyze the progress of the treatment. It can be used at the end of the treatment to evaluate if the outcome is consistent with the planned/desired treatment objectives.

    The use of the Progressive Orthodontics App requires the user to have the necessary training and domain knowledge in the practice of orthodontics, as well as to have received a dedicated training in the use of the software.

    Device Description

    The Progressive Orthodontics App is orthodontic appliance design and treatment simulation software for use by dental professionals in a healthcare facility to assist in diagnosis and design solutions for patients. Digital scans (3D) of a patient's dentition can be loaded into the software and the dental professional can then review different treatment plans and simulations for each individual patient and decide on the most appropriate treatment. This is a software only device. Physical outputs such as aligners are not included in the scope of the clearance.

    AI/ML Overview

    The provided text is a 510(k) summary for the Progressive Orthodontics App. This document focuses on demonstrating substantial equivalence to a predicate device rather than presenting a standalone study with specific performance metrics against acceptance criteria. Therefore, several requested pieces of information are not explicitly detailed in this type of submission.

    Here's an attempt to answer your questions based on the provided text, along with indications where the information is not present:


    1. Table of acceptance criteria and reported device performance

    The document does not explicitly state quantitative acceptance criteria or specific performance metrics in a table. It only states that "All test results met acceptance criteria" (Page 7) and "The software passed the testing and performed per its intended use" (Page 8) after verification and validation.

    2. Sample size used for the test set and data provenance

    This information is not present in the provided document. The 510(k) summary mentions "integration, verification, and validation testing" (Page 8) but does not detail the size of the dataset used for these tests, nor its provenance (e.g., country of origin, retrospective/prospective).

    3. Number of experts used to establish the ground truth for the test set and their qualifications

    This information is not present. The document doesn't describe the process of establishing ground truth for any testing performed.

    4. Adjudication method for the test set

    This information is not present. No details are provided on how disagreements in ground truth (if experts were involved) were resolved.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and the effect size

    No, a multi-reader, multi-case (MRMC) comparative effectiveness study comparing human readers with and without AI assistance was not done or at least not reported in this 510(k) summary. The document focuses on demonstrating substantial equivalence through software verification and validation, not on clinical comparative effectiveness.

    6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done

    The document states, "The Progressive Orthodontics App is an orthodontic appliance design and treatment simulation software is for use by dental professionals in a healthcare facility to assist in diagnosis and design solutions for patients." (Page 4). It further states that "The use of the Progressive Orthodontics App requires the user to have the necessary training and domain knowledge in the practice of orthodontics, as well as to have received a dedicated training in the use of the software." (Page 5). This strongly implies human-in-the-loop performance, as the software is a "front-end software tool" to assist professionals. While "standalone software module" is listed as a principle of operation (Page 6), this refers to the software existing independently, not necessarily operating without human interpretation or control. No "algorithm only" performance data is presented.

    7. The type of ground truth used

    The document does not specify the type of ground truth used for any testing. Since the product is a design and simulation tool, not a diagnostic one providing automated interpretations, the concept of "ground truth" for its performance evaluation (e.g., accuracy of measurements, validity of simulations) would likely involve comparison to established orthodontic principles, validated physical models, or expert consensus on design parameters, but this is not detailed.

    8. The sample size for the training set

    This information is not present. The 510(k) summary does not mention any "training set" for the software, suggesting that if machine learning or AI models are involved, their development and training details were not part of this specific submission summary. The product is described more as a design and simulation tool rather than an AI-driven image analysis or diagnostic tool.

    9. How the ground truth for the training set was established

    This information is not present, as no training set is mentioned in the document.


    K Number
    K233625
    Device Name
    RAYDENT SW
    Manufacturer
    Date Cleared
    2024-05-16

    (185 days)

    Product Code
    PNN
    Regulation Number
    872.5470
    Reference & Predicate Devices
    Intended Use

    RAYDENT SW is a software designed to assist dental professionals in planning patient treatment devices. The software performs simulations based on patient images, allowing reference to treatment plans, and is used as a tool to design treatment devices based on 3D mesh data. Treatment devices include prosthetic devices (Veneer, Crown, Bridge, In/Onlay) and orthodontic devices (Clear Aligner).

    To use RAYDENT SW, users must have the necessary education and domain knowledge in orthodontic practice and receive dedicated training in the use of the software.

    Device Description

    RAYDENT is a software that provides tools to simulate treatment plans based on patient images generated by compatible scanners and design treatment devices based on appropriate three-dimensional images. It allows dental offices to acquire patient data in conjunction with software on compatible imaging equipment and utilize the acquired images to create treatment plans and devices for skilled dentists and oral and maxillofacial specialists.

    AI/ML Overview

    The document K233625 is a 510(k) Premarket Notification for the device "RAYDENT SW," a software designed to assist dental professionals in planning patient treatment devices. As such, the document provides information on the device's intended use, comparison to predicate devices, and a summary of performance testing. However, it does NOT include detailed information about acceptance criteria or a specific study proving the device meets those criteria, particularly not in the context of an AI/ML-enabled medical device performance study.

    The document states that RAYDENT SW includes "Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices: YES" in its comparison table (Page 7). However, the "Performance Testing" section (Page 9) does not describe an AI/ML-specific performance study with acceptance criteria, sample sizes, ground truth establishment, or human-in-the-loop evaluation. It merely states that "Software, hardware, and integration and validation testing was performed in accordance with the FDA Guidance Document 'Guidance for the Content of Premarket Submissions for Device Software Functions' and 'Guidance for the Cybersecurity in Medical Devices: Quality System Considerations and Content of Premarket Submissions'." It then concludes that "All test results have been reviewed and approved, showing the RAYDENT SW to be substantially equivalent to the predicate devices."

    Therefore, based on the provided text, I cannot extract the information required to answer your prompt about the acceptance criteria and a study proving the device meets those criteria in the context of AI/ML performance.

    To answer your specific points:

    1. A table of acceptance criteria and the reported device performance: Not found in the provided document. The document mentions general validation testing but no specific performance metrics or acceptance criteria for an AI component.
    2. Sample size used for the test set and the data provenance: Not found.
    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not found.
    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set: Not found.
    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance: Not found. The document explicitly states "Clinical testing is not a requirement and has not been performed" (Page 9), implying no such MRMC study was conducted.
    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done: Not explicitly detailed for an AI component. The general performance testing is mentioned, but without specifics for the AI.
    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc): Not found.
    8. The sample size for the training set: Not found.
    9. How the ground truth for the training set was established: Not found.

    The document focuses on substantiating equivalence primarily through comparison of indications for use, technological characteristics, and general software/hardware validation, rather than an in-depth AI/ML performance study.


    K Number
    K232564
    Device Name
    Align Studio
    Manufacturer
    Date Cleared
    2024-03-12

    (201 days)

    Product Code
    PNN
    Regulation Number
    872.5470
    Reference & Predicate Devices
    Intended Use

    The Align Studio is intended for use as a medical front-end device providing tools for management of orthodontic models, systematic inspection, detailed analysis, treatment simulation and virtual appliance design options based on 3D models of the patient's dentition before the start of an orthodontic treatment.

    The use of the Align Studio requires the user to have the necessary training and domain knowledge in the practice of orthodontics, as well as to have received a dedicated training in the use of the software.

    Device Description

    Align Studio is a PC-based software that sets up virtual orthodontics via digital impressions. It automatically segments the crown and the gum in a simple manner and provides basic model analysis to assist digital orthodontic procedures.

    AI/ML Overview

    The provided document, an FDA 510(k) summary for "Align Studio," does not contain detailed information about specific acceptance criteria, a comprehensive study proving the device meets those criteria, or the methodology (e.g., sample size, expert qualifications, ground truth establishment) typically associated with such studies for AI/ML-based medical devices.

    Instead, this document focuses on demonstrating substantial equivalence to predicate devices (Ortho System and CEREC Ortho Software) rather than presenting a detailed performance study against predefined acceptance criteria for an AI-powered system. The Non-Clinical Test Summary section briefly mentions "software validation" and "performance testing" but without quantifiable metrics or specific methodologies. It states that "Align Studio meets all performance test criteria and that all functions work without errors" and "test results support the conclusion that actual device performance satisfies the design intent and is equivalent to its predicate device."

    Therefore, I cannot populate the table or answer most of the questions as the required information is not present in the provided text.

    Here's what can be extracted based on the limited information provided:

    1. A table of acceptance criteria and the reported device performance
    The document does not provide a table of acceptance criteria with quantifiable performance metrics specific to an AI/ML system's output. It broadly states the device "meets all performance test criteria" and "functions work without errors." The focus is on functional equivalence to predicate devices rather than specific quantitative performance targets for an AI component.

    2. Sample size used for the test set and the data provenance
    Not specified. The document does not detail the test set used for performance evaluation, nor its size or origin (country, retrospective/prospective).

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
    Not specified. The document doesn't describe the establishment of a ground truth for a test set, which would typically involve expert review for AI/ML performance evaluation.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
    Not specified, as a detailed ground truth establishment process is not described.

    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
    No MRMC comparative effectiveness study is mentioned. The submission focuses on substantial equivalence based on device features and intended use, not on human reader performance with AI assistance.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
    The "Non-Clinical Test Summary" section mentions "Performance Testing" which could imply standalone testing, but no specific metrics for an algorithm-only performance (e.g., segmentation accuracy, measurement precision without human interaction) are provided. The device is described as "PC-based software" for "virtual orthodontics" that "automatically segments the crown and the gum," implying an algorithm performing actions. However, the document does not detail the standalone performance metrics for this automated segmentation or other AI features.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
    Not specified. Given the lack of detailed performance study information, the type of ground truth used is not described.

    8. The sample size for the training set
    Not specified. The document does not provide details on the training set used for any AI/ML components within the "Align Studio" software.

    9. How the ground truth for the training set was established
    Not specified. Without information on a training set, the method of establishing its ground truth is also not provided.

    Summary of available information regarding software validation and performance:

    • Software Validation: "Align Studio contains Basic Documentation Level software was designed and developed according to a software development process and was verified and validated."
    • Performance Testing: "Through the performance test, it was confirmed that Align Studio meets all performance test criteria and that all functions work without errors. Test results support the conclusion that actual device performance satisfies the design intent and is equivalent to its predicate device."
    • Clinical Studies: "No clinical studies were considered necessary and performed."

    This filing relies on demonstrating substantial equivalence to already cleared predicate devices based on shared technological characteristics and intended use, rather than presenting a novel performance study for an AI/ML component with specific acceptance criteria and detailed clinical validation results.


    K Number
    K233616
    Manufacturer
    Date Cleared
    2024-01-11

    (59 days)

    Product Code
    PNN
    Regulation Number
    872.5470
    Reference & Predicate Devices
    Intended Use

    The Clevaligner software is intended for use as a medical device standalone software, providing tools for management of orthodontic models, systematic inspection, detailed analysis, treatment simulation and virtual design of a series of dental casts, which may be used for sequential aligner trays or retainers, based on 3D models of the patient's dentition before the start of an orthodontic treatment. It can also be applied during the treatment to inspect and analyze the progress of the treatment. It can be used at the end of the treatment to evaluate if the outcome is consistent with the planned/desired treatment objectives. The use of Clevaligner Software requires the user to have the necessary training and domain knowledge in the practice of orthodontics, as well as to have received a dedicated training in the use of the software.

    Device Description

    Clevaligner Software imports patient 3-D digital scans and provides the orthodontic treatment planning of the patient under dental professional supervision.

    The Clevaligner Software performs the automatic segmentation of the 3-D digital scans, achieves an automatic design of an ideal arch form, which is approved by orthodontists using a 2D software interface.

    The Clevaligner Software provides an initial to ideal final stage treatment plan, with each step of the treatment plan path presented in the 3D software interface.

    Every 3D digital model from the path treatment plan, generated by the Clevaligner Software, can be exported for fabrication of orthodontic appliances, either to an orthodontic laboratory or directly to orthodontic appliance manufacturers for further use in orthodontic treatment.

    AI/ML Overview

    The provided text does not contain the detailed acceptance criteria and study results in the format requested. While it refers to "Software verification and validation testing" and states that "All test results met acceptance criteria," it does not specify what those acceptance criteria were or what the reported device performance was against those criteria.

    Specifically, the document is a 510(k) summary for the Clevaligner Software (V1.0.0), a Class II orthodontic software. It focuses on demonstrating substantial equivalence to a predicate device (SoftSmile, Inc. Vision K212770).

    Here's a breakdown of what can be extracted and what is missing based on your request:

    Information Extracted from the Document:

    • Device Name: Clevaligner Software (V1.0.0)
    • Intended Use: "intended for use as a medical device standalone software, providing tools for management of orthodontic models, systematic inspection, detailed analysis, treatment simulation and virtual design of a series of dental casts, which may be used for sequential aligner trays or retainers, based on 3D models of the patient's dentition before the start of an orthodontic treatment. It can also be applied during the treatment to inspect and analyze the progress of the treatment. It can be used at the end of the treatment to evaluate if the outcome is consistent with the planned/desired treatment objectives." (Page 6)
    • Type of Study (Non-Clinical): Software testing, including verification and validation, conducted in accordance with IEC 62304 and FDA Guidance Documents ("General Principles of Software Validation" and "Cybersecurity in Medical Devices"). An "aligner manufacturing validation" was also performed to demonstrate that the digital aligners planned by the software match the fabrication of aligner trays/retainers with an acceptable level of accuracy. (Page 10)
    • Clinical Testing: "Clinical testing has not been performed and is not required since the proposed device, Clevaligner Software, is a stand-alone medical device software which is used without direct patient contact." (Page 10)
    • Documentation Level: Basic Documentation level as per FDA Guidance for the Content of Premarket Submissions for Device Software Functions. (Page 10)
    • Cybersecurity Analysis: Performed in accordance with FDA Guidance. (Page 10)

    Missing Information (Crucial for your request):

    1. A table of acceptance criteria and the reported device performance: This is the most significant missing piece. The document merely states "All test results met acceptance criteria" (Page 10) but does not list them or the specific performance metrics achieved.
    2. Sample size used for the test set and the data provenance: Not specified for the software verification/validation tests. For the "aligner manufacturing validation," it mentions "digital aligners treatment planned" but no specific sample size or origin.
    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not mentioned.
    4. Adjudication method for the test set: Not mentioned.
    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance: Not done, as it states clinical testing was not performed. The software is for "virtual design" and "treatment planning," not direct interpretation by humans that would be assisted by AI in the diagnostic sense.
    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done: Yes, non-clinical software testing was done, but the specific metrics are not detailed. The "aligner manufacturing validation" also seems to be a standalone performance check.
    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.): Not specified for the software testing. For the aligner manufacturing validation, the "ground truth" implicitly would be the physical accuracy of the 3D printed models compared to the software's digital design, but the method for assessing this "acceptable level of accuracy" isn't detailed.
    8. The sample size for the training set: Not mentioned. This device likely uses machine learning for features like "automatic segmentation" and "automatic design of an ideal arch form," but no details on training data are provided.
    9. How the ground truth for the training set was established: Not mentioned.

    Conclusion:

    The provided FDA 510(k) summary confirms the device's regulatory clearance based on substantial equivalence, but it does not provide the specific technical details of the acceptance criteria nor the performance data that would typically be found in a detailed study report. The document indicates that thorough non-clinical software testing and validation were performed, and all acceptance criteria were met, but the specific details of these criteria and the results are not included in this summary. The "aligner manufacturing validation" hints at a performance metric related to accuracy in aligner fabrication, but again, without specific numbers or methods.


    K Number
    K232549
    Device Name
    NemoCast
    Date Cleared
    2023-11-21

    (90 days)

    Product Code
    Regulation Number
    872.5470
    Reference & Predicate Devices
    Why did this record match?
    Product Code :

    PNN

    Intended Use

    NemoCast is intended for use as a medical front-end device providing tools for management of orthodontic models, systematic inspection, detailed analysis, treatment simulation and virtual appliance design options (Export of Models, Indirect Bonding Transfer Media) based on 3D models of the patient's dentition before the start of an orthodontic treatment. It can also be applied during the treatment to inspect and analyze the progress of the treatment. It can be used at the end of the treatment to evaluate if the outcome is consistent with the planned/desired treatment objectives.

    The use of the NemoCast requires the user to have the necessary training and domain knowledge in the practice of orthodontics, as well as to have received a dedicated training in the use of the software.

    Device Description

    NemoCast is a software system used for the management of 3D scanned orthodontic models of patients; orthodontic diagnosis by measuring, analyzing, inspecting and visualizing 3D scanned orthodontic models; virtual planning of orthodontic treatments by simulating tooth movements; virtual placement of orthodontic brackets on the 3D models; and design of orthodontic appliances based on 3D scanned orthodontic models, including transfer methods for indirect bonding of brackets. Output includes STL models (also called dental casts) for thermoforming aligners, STL files for directly printing aligners, and Indirect Bonding Transfer Trays (also called orthodontic bracket placement trays). The device has no patient contact.
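    The STL output mentioned above is a simple fixed-layout binary format: an 80-byte header, a little-endian uint32 facet count, then 50 bytes per triangle (a float32 normal, three float32 vertices, and a uint16 attribute field). A minimal sketch of a binary STL writer, purely illustrative and not the vendor's export code:

    ```python
    import struct

    def write_binary_stl(path, triangles):
        """Write triangles (each: ((nx, ny, nz), (v1, v2, v3)) with
        3-float vertices) as a binary STL file."""
        with open(path, "wb") as f:
            f.write(b"\x00" * 80)                       # 80-byte header (unused)
            f.write(struct.pack("<I", len(triangles)))  # facet count
            for normal, verts in triangles:
                f.write(struct.pack("<3f", *normal))    # facet normal
                for v in verts:
                    f.write(struct.pack("<3f", *v))     # three vertices
                f.write(struct.pack("<H", 0))           # attribute byte count

    # One-triangle example "dental cast" file.
    tri = (((0.0, 0.0, 1.0),
            ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))),)
    write_binary_stl("cast.stl", tri)
    ```

    The resulting file is 80 + 4 + 50 × (facet count) bytes, which is also a quick sanity check when validating exports.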

    AI/ML Overview

    This document (K232549) is a 510(k) premarket notification for the device "NemoCast," an orthodontic software. It establishes substantial equivalence to existing predicate devices, particularly 3Shape Ortho System™ (K152086) and a previous version of NemoCast (K193003).

    The key takeaway is that this is primarily a software validation and substantial equivalency claim, rather than a study proving new clinical performance. The manufacturer is demonstrating that their current software performs similarly to a previously cleared version and a predicate device.

    Here's an analysis of the provided information regarding acceptance criteria and performance study:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly present a "table of acceptance criteria" with corresponding "reported device performance" in the typical format of a clinical study summary. Instead, the acceptance criteria are implicitly met by demonstrating substantial equivalence to predicate devices. The "reported device performance" is the functionality of NemoCast itself, and the "study" is the comparison against the predicate devices.

    The "Comparison of Intended Use and Technological Characteristics with the reference Device" table (pages 7-8) serves as the primary evidence of meeting "acceptance criteria" for substantial equivalence. It lists various features and functions, and for each, it aims to show "None" under "Differences" or a difference that does not affect safety and effectiveness.

    Here's a condensed version of how that table functions as a de facto "acceptance criteria" and "performance report":

    | Feature | Acceptance Criteria (Implied by Predicate) | NemoCast Performance (Reported) |
    |---|---|---|
    | Product Code | PNN, LLZ | PNN, LLZ |
    | Common Name | Orthodontic Software | Orthodontic Software |
    | Classification Name | Orthodontic Plastic Bracket | Orthodontic Plastic Bracket |
    | Regulation Number | 21 CFR 872.5470 | 21 CFR 872.5470 |
    | Supported anatomic areas | Maxilla and Mandible | Maxilla and Mandible |
    | Use by dental professionals in orthodontic treatment planning | Yes (NemoCast K193003: "only before treatment") | Yes (NemoCast K232549: "before, during, after treatment"); difference noted: broadened scope compared to reference device, but aligned with primary predicate |
    | Management of patients and models | Yes | Yes |
    | Inspection, measurement and analysis of orthodontic models | Yes | Yes |
    | Treatment simulation | Yes | Yes |
    | Virtual appliance preparation, handling and export | Yes | Yes |
    | Provide digital file and device output | Yes (STL files for dental casts) | Yes (STL files for dental casts, plus Indirect Bonding Transfer Media) |
    | Supported PC formats | Windows | Windows |
    | Creating, editing, deleting and copying patient data | Yes | Yes |
    | Creating, editing, deleting and copying case data | Yes | Yes |
    | Surface scan from intra-oral scanner | Yes | Yes |
    | Surface scan from STL, PLY, OBJ file | Yes (predicate: STL only) | Yes (STL, PLY, OBJ); difference noted: broader import formats, stated not to affect security/safety |
    | CT image data | DICOM | DICOM |
    | 2D overlay | PNG, JPG, BMP | PNG, JPG, BMP |
    | Aligning surface scan and CT image | Yes | Yes |
    | Aligning cephalometric images | Yes | Yes |
    | Alignment of surface scan with 2D overlays | Yes | Yes |
    | Ability to check/adjust DICOM visibility | Yes | Yes |
    | DICOM scan segmentation | Yes | Yes |
    | Occlusal orientation | Yes (reference device) / No (predicate device) | Yes; difference noted: feature present in subject and reference, not in primary predicate |
    | Segmenting teeth roots | Yes (reference device) / No (predicate device) | Yes; difference noted: feature present in subject and reference, not in primary predicate |
    | DICOM orientation | Yes | Yes |
    | 2D measurement toolbox | Yes | Yes |
    | 3D measurement toolbox | Yes | Yes |
    | Arch shape analysis | Yes | Yes |
    | Wire length analysis | Yes | Yes |
    | Tooth width analysis | Yes | Yes |
    | Bolton analysis | Yes (predicate device) / No (reference device) | Yes |
    | Space analysis | Yes | Yes |
    | Overjet/overbite analysis | Yes (predicate device) / No (reference device) | Yes |
    | Occlusion map | Yes | Yes |
    | Treatment analysis and report generation | Yes | Yes |
    | 2D & 3D simulation | Yes | Yes |
    | Orthodontic appliance search | Yes | Yes |
    | Orthodontic appliance virtual preparation | Yes | Yes |
    | Orthodontic appliance design | Yes | Yes |
    | Orthodontic appliance export | Yes | Yes |
    | Virtual articulator | Yes | Yes |
    | Intended User | Dental Professionals | Dental Professionals |
    | Intended Patient Population | Patients requiring orthodontic treatment (predicate) / Adults requiring orthodontic treatment (reference) | Patients requiring orthodontic treatment; difference noted: aligned with primary predicate |
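    Several of the analyses listed for this device, such as the Bolton analysis, are deterministic arithmetic over measured tooth widths rather than learned models. A minimal sketch of the Bolton ratio calculation (the anterior ratio ideally falls near 77.2%, the overall 12-tooth ratio near 91.3%), using hypothetical tooth widths:

    ```python
    def bolton_ratio(mandibular_widths, maxillary_widths):
        """Bolton ratio: sum of mandibular mesiodistal tooth widths divided
        by the maxillary sum, as a percentage. The anterior ratio uses the
        6 anterior teeth per arch; the overall ratio uses 12 teeth."""
        return 100.0 * sum(mandibular_widths) / sum(maxillary_widths)

    # Hypothetical anterior widths in mm, canine to canine:
    lower = [5.0, 5.5, 6.0, 6.0, 5.5, 5.0]   # sums to 33.0
    upper = [7.5, 6.5, 8.5, 8.5, 6.5, 7.5]   # sums to 45.0
    print(round(bolton_ratio(lower, upper), 1))  # → 73.3
    ```

    Validating such a function reduces to checking it against hand-computed values, which is consistent with the document's framing of software verification rather than clinical study.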

    2. Sample size used for the test set and the data provenance

    The document states: "The performance testing remains unchanged from the company's own reference device submission, NemoCast K193003. The performance testing for the subject device is being leveraged from the company's own reference device including: design verification and validation testing."

    This implies that the sample size and data provenance for the current 510(k) submission are not new. They are relying on previous testing. The document does not explicitly state the sample size (number of cases/patients) or the data provenance (e.g., country of origin, retrospective/prospective) for the test set. This information would typically be found in the original K193003 submission or internal validation reports, which are not detailed here.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    This information is not provided in the document. As this is a software substantial equivalence submission leveraging previous testing, details about the ground truth establishment for the test set (number and qualifications of experts) are not specified here.

    4. Adjudication method for the test set

    This information is not provided in the document.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with vs. without AI assistance

    No, a MRMC comparative effectiveness study demonstrating improved human reader performance with AI assistance was not done for this submission. The "NemoCast" device is described as a "medical front-end device providing tools for management... analysis, treatment simulation, and virtual appliance design." It is a planning and design software, not an AI-assisted diagnostic tool that would typically involve a human-in-the-loop MRMC study for assessing reader improvement.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    The document mentions "The software is thoroughly tested in accordance with a documented test plan. This test plan is derived from the specifications and ensures that all controls and features are functioning properly. The software is validated together with end-users." This general statement indicates functional and validation testing that would assess standalone performance (software functioning as intended). However, specific quantifiable metrics of "algorithm-only" performance (e.g., accuracy of automatic measurements) are not reported in this summary. The focus is on the functional equivalence of the tools provided within the software.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    The document does not explicitly state the type of "ground truth" used for the underlying validation of the software's measurements or simulations. Given the nature of orthodontic planning software, ground truth for features like measurements or segmentations would typically be established through:

    • Expert consensus: Manual measurements or segmentations performed by experienced orthodontists on 3D models.
    • Physical measurements/phantom data: Validation against known physical dimensions or phantom models.

    However, these details are not provided in this 510(k) summary.
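    If expert-consensus segmentations were used as ground truth, agreement between an automatic and a manual tooth segmentation is commonly scored with a Dice coefficient over the labeled mesh vertices. A minimal sketch of that metric, not taken from the submission:

    ```python
    def dice(auto_labels, expert_labels):
        """Dice overlap between an automatic segmentation and an expert
        one, each given as a set of mesh-vertex indices assigned to a
        tooth. 1.0 is perfect agreement, 0.0 is no overlap."""
        auto, expert = set(auto_labels), set(expert_labels)
        if not auto and not expert:
            return 1.0  # both empty: trivially identical
        return 2.0 * len(auto & expert) / (len(auto) + len(expert))

    print(dice({1, 2, 3, 4}, {2, 3, 4, 5}))  # → 0.75
    ```

    An acceptance criterion would then be a threshold on this score (e.g. a minimum mean Dice across teeth), though no such threshold is reported in the summary.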

    8. The sample size for the training set

    This information is not provided. As this is a software product, not necessarily one relying on a large deep learning model needing a defined "training set" in the context of AI development for image interpretation, this detail might not be applicable or explicitly stated. If there are features utilizing machine learning (which is not explicitly detailed but possible for functions like segmentation), the training set details are not included. The submission is focused on demonstrating functional equivalence.

    9. How the ground truth for the training set was established

    This information is not provided and is likely not applicable in the context of this 510(k) submission, as it focuses on demonstrating substantial equivalence of a software tool rather than a novel AI algorithm with a distinct training phase. If machine learning components exist, the process for establishing their "training ground truth" is not disclosed here.

    In summary, the provided text details a 510(k) submission for substantial equivalence. It does not contain the detailed clinical study information (like sample sizes, expert qualifications, adjudication methods, or MRMC study results) that would typically accompany a submission for a novel diagnostic AI device where independent performance validation is the primary focus. Instead, it relies on demonstrating that the updated software (NemoCast K232549) is functionally equivalent to its predicate devices (NemoCast K193003, 3Shape Ortho System™ K152086) and that any differences do not impact safety or effectiveness.


    K Number
    K232429
    Manufacturer
    Date Cleared
    2023-10-13

    (63 days)

    Product Code
    Regulation Number
    872.5470
    Reference & Predicate Devices
    Why did this record match?
    Product Code :

    PNN

    Intended Use

    Titan Dental Design is intended for use as a medical front-end device providing tools for management of orthodontic models, systematic inspection, detailed analysis, treatment simulation and virtual appliance design options (Export of Models, Indirect Bonding Transfer Media, Sequential aligners) based on 3D models of the patient's dentition before the start of an orthodontic treatment. It can also be applied during the treatment to inspect and analyze the progress of the treatment. It can be used at the end of the treatment to evaluate if the outcome is consistent with the planned/desired treatment objectives.

    The use of Titan Dental Design requires the user to have the necessary training and domain knowledge in the practice of orthodontics, as well as to have received a dedicated training in the use of the software.

    Device Description

    Titan Dental Design by ClearAdvance, LLC is an orthodontic appliance design and treatment planning software. This software is for use by dentists, orthodontists, and trained health care professionals to diagnose and design treatment solutions for patients. The Titan Dental Design software allows users to upload digital scans of patient's dentition to the system, manipulate and move teeth in the dentition scan to create treatment plans for malocclusion, and export or send treatment planning files for physical production of orthodontic appliances such as thermoplastic aligners with the use of 3D printers.

    AI/ML Overview

    The provided text is a 510(k) summary for the "Titan Dental Design" software. It focuses on demonstrating substantial equivalence to predicate devices rather than providing detailed acceptance criteria and performance data from a specific study, especially not one that involves human readers or clinical outcomes.

    Therefore, many of the requested details about acceptance criteria, study design, expert involvement, and performance metrics (especially those related to AI assistance or standalone performance) are not present in this document.

    This document describes a software device that provides tools for managing orthodontic models, systematic inspection, detailed analysis, treatment simulation, and virtual appliance design options. It is not an AI-assisted diagnostic tool in the sense of detecting or identifying conditions from medical images, which is typically where the detailed performance metrics you've asked for (e.g., sensitivity, specificity, MRMC studies) are most relevant. This software is more of a design and planning tool for orthodontists.

    Here's what can be extracted and what cannot:

    1. A table of acceptance criteria and the reported device performance

    The document does not provide a formal table of quantitative acceptance criteria for performance metrics (such as sensitivity, specificity, or precision), nor does it report specific numerical performance data against these criteria. Instead, it relies on demonstrating substantial equivalence to predicate devices by comparing indications for use, technological characteristics, and stating that software validation testing was successfully performed.

    The "Summary of Performance Data and Substantial Equivalence" section states:
    "Utilizing FDA Guidance document 'Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices' (issued May 11, 2015) the Proposed Device, Titan Dental Design underwent appropriate integration, verification, and validation testing. The software passed the testing and performed per its intended use."

    This is the reported "performance" - a qualitative statement confirming it "passed testing" and "performed per its intended use," rather than specific quantitative metrics.

    2. Sample size used for the test set and the data provenance

    The document does not specify the sample size used for any test set or the provenance (e.g., country of origin, retrospective/prospective) of any data used for testing.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    The document does not mention the use of experts to establish a "ground truth" for a test set, as its validation appears to be primarily software-centric (integration, verification, validation) rather than based on a clinical performance study with human readers.

    4. Adjudication method for the test set

    Not applicable, as no clinical test set with human adjudication is described.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with vs. without AI assistance

    No such study is mentioned or implied. This device is described as a "medical front-end device providing tools for management... treatment simulation and virtual appliance design options," not an AI-assisted diagnostic tool that aids human readers in interpretation.

    6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done

    The document describes the device as "Stand Alone Software" under "Technological Features," but this refers to the software's operational independence (i.e., it doesn't require other specific hardware or software beyond OS/RAM/CPU) rather than a "standalone performance study" in the context of an AI algorithm's diagnostic accuracy. The entire device functions as "algorithm only" in the sense that it is software, but performance data like sensitivity/specificity are not provided.

    7. The type of ground truth used

    Not explicitly stated. The validation appears to be against functional requirements and intended use, rather than a clinical ground truth like pathology or outcomes data.

    8. The sample size for the training set

    The document does not mention a training set size, implying that this is not a machine learning model that requires a distinct "training set." It's more of a rule-based or calculational software tool for dental design.

    9. How the ground truth for the training set was established

    Not applicable, as no training set or machine learning components are detailed.


    In summary, the provided FDA document is a 510(k) clearance letter and summary for a dental software device that functions as a design and planning tool, not an AI-driven diagnostic or assistive technology. Therefore, it does not contain the detailed performance study information typically found in submissions for AI/ML-based diagnostic devices, such as acceptance criteria based on accuracy metrics, test set characteristics, expert ground truthing, or MRMC study results.


    K Number
    K223518
    Device Name
    iOrtho
    Date Cleared
    2023-06-13

    (202 days)

    Product Code
    Regulation Number
    872.5470
    Reference & Predicate Devices
    Why did this record match?
    Product Code :

    PNN

    Intended Use

    iOrtho is intended for use as a medical front-end device providing tools for management of orthodontic cases, systematic inspection, detailed analysis, treatment simulation and virtual appliance design options (Export of Models, Indirect Bonding Transfer Media, Sequential aligners) based on 3D models of the patient's dentition before the start of an orthodontic treatment. It can also be applied during the treatment to inspect and analyze the progress of the treatment. It can be used at the end of the treatment to evaluate if the outcome is consistent with the planned/desired treatment objectives.

    The use of iOrtho requires the user to have the necessary training and domain knowledge in the practice of orthodontics, as well as to have received a dedicated training in the use of the software.

    Device Description

    iOrtho (hereafter referred to as "Proposed Device") includes modifications to the currently marketed software included in K203688, cleared October 8, 2021 (hereafter referred to as "Reference Device"). The Proposed Device is an orthodontic appliance design and treatment simulation software for use by dental professionals to aid in diagnosis and in designing solutions for patients. Digital scans (3D) of a patient's dentition can be loaded into the software, and the dental professional can then create treatment plans for each individual patient and their needs. The system can be used to fabricate 3D dental models using standard stereolithographic (STL) files for use in 3D printers. These models can then be used as a template for thermoforming aligners or retainers by Angel Align technicians.

    AI/ML Overview

    The provided text is a 510(k) summary for the iOrtho device. It describes the device, its intended use, and compares it to predicate and reference devices to demonstrate substantial equivalence. However, it does not contain information about acceptance criteria or a specific study that proves the device meets those criteria in detail.

    The summary states:
    "Utilizing FDA Guidance document "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices" (issued May 11, 2015) the Proposed Device, iOrtho underwent appropriate integration, verification, and validation testing. The software passed the testing and performed per its intended use."

    And:
    "The software has been designed, integrated, verified, and validated in accordance with IEC 62304-Medical device software – software life cycle processes."

    These statements confirm that testing was performed and passed, and that it followed relevant standards and guidance. However, the document does not provide the specific acceptance criteria, the detailed results (e.g., in a table), sample sizes, ground truth establishment methods, or expert qualifications that are typically found in a clinical or performance study summary.

    Therefore, I cannot fulfill your request for a table of acceptance criteria and reported device performance, or details about the study, as this information is not present in the provided document. The 510(k) summary focuses on demonstrating substantial equivalence through comparison of specifications and general statements about passing validation testing, rather than presenting a detailed performance study with specific metrics and methodologies.


    K Number
    K212173
    Manufacturer
    Date Cleared
    2022-01-25

    (197 days)

    Product Code
    Regulation Number
    872.5470
    Reference & Predicate Devices
    Why did this record match?
    Product Code :

    PNN

    Intended Use

    The HDH Treatment Planning System is intended for use as a medical front-end device providing tools for management of orthodontic models, systematic inspection, detailed analysis, treatment simulation and virtual design of a series of dental casts, which may be used for sequential aligner trays or retainers, based on 3D models of the patient's dentition. The use of the HDH Treatment Planning System requires the necessary training and domain knowledge in the practice of orthodontics, as well as a dedicated training in the use of the software.

    Device Description

    The HDH Treatment Planning system is a software system for orthodontic diagnosis and treatment simulation utilized by dental professionals. The software imports patient 3D digital scans serving as a base for diagnosing the orthodontic treatment needs, analyzing, inspecting, measuring, and simulating tooth movements, and allows the user to develop a virtual treatment plan. The output of the treatment plan may be downloaded as files in STL format, a standard stereolithographic file format, or OBJ format, a standard 3D image format, for fabrication of dental casts, which may be used to fabricate sequential aligner trays or retainers.
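    The "simulating tooth movements" described above amounts, at its core, to applying rigid transforms (a rotation plus a translation) to each tooth's vertices in the 3D model. A minimal illustrative sketch, under assumed conventions and not the HDH implementation:

    ```python
    import numpy as np

    def move_tooth(vertices, angle_deg=5.0, translation=(0.2, 0.0, 0.0)):
        """Apply a rigid transform to a tooth's (n, 3) vertex array:
        rotate about the z-axis by angle_deg degrees, then translate
        (millimetres). Axis choice and units are hypothetical."""
        t = np.radians(angle_deg)
        rz = np.array([[np.cos(t), -np.sin(t), 0.0],
                       [np.sin(t),  np.cos(t), 0.0],
                       [0.0,        0.0,       1.0]])
        return vertices @ rz.T + np.asarray(translation)

    # A 90-degree rotation carries the point (1, 0, 0) to (0, 1, 0).
    crown = np.array([[1.0, 0.0, 0.0]])
    moved = move_tooth(crown, angle_deg=90.0, translation=(0.0, 0.0, 0.0))
    ```

    A treatment plan is then a per-tooth sequence of such transforms, staged into the incremental steps that each aligner in the series realizes.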

    AI/ML Overview

    The provided text is related to the FDA clearance of the "HDH Treatment Planning System," a software device used for orthodontic treatment planning. The application focuses on demonstrating "substantial equivalence" to a predicate device, rather than proving a specific performance level against pre-defined acceptance criteria for a medical imaging AI.

    Therefore, the document does not contain the detailed information typically found in a study proving a device meets acceptance criteria for an AI/ML medical device, such as:

    • A table of acceptance criteria and reported device performance (in terms of clinical metrics like sensitivity, specificity, AUC, etc.).
    • Sample sizes for test sets used to assess clinical performance.
    • The number and qualifications of experts used for ground truth establishment.
    • Adjudication methods.
    • Results of multi-reader multi-case (MRMC) studies.
    • Standalone (algorithm-only) performance.
    • Type of ground truth used (pathology, outcomes data).
    • Sample size for the training set.
    • How ground truth for the training set was established.

    Instead, the document states:

    • Performance Data: "Software verification and validation testing was performed in accordance with the FDA Guidance Document 'Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices' (issued May 11, 2005). All test results met acceptance criteria, demonstrating the HDH Treatment Planning System performs as intended and is substantially equivalent to the predicate devices." (Page 5)

    This statement indicates that the "acceptance criteria" referred to are likely related to software verification and validation (e.g., functional testing, performance under various loads, error handling, etc.), rather than clinical performance metrics for an AI system. The basis of clearance is "substantial equivalence" to a predicate device (3Shape Ortho System K152086), meaning it performs similarly and raises no new safety or effectiveness concerns.

    In summary, because this is a 510(k) submission for a non-AI/ML software device (as indicated by the application date and the "substantial equivalence" pathway description), the detailed information requested for a study proving an AI device meets acceptance criteria is not present. The "acceptance criteria" here refer to software engineering and validation standards, not clinical performance benchmarks for an AI.


    K Number
    K212770
    Device Name
    Vision
    Manufacturer
    Date Cleared
    2021-12-21

    (112 days)

    Product Code
    Regulation Number
    872.5470
    Reference & Predicate Devices
    Why did this record match?
    Product Code :

    PNN

    Intended Use

    The SoftSmile Vision is intended for use as a medical front-end device providing tools for management of orthodontic models, systematic inspection, detailed analysis, treatment simulation and virtual design of a series of dental casts, which may be used for sequential aligner trays or retainers, based on 3D models of the patient's dentition before the start of an orthodontic treatment. It can also be applied during the treatment to inspect and analyze the progress of the treatment. It can be used at the end of the treatment to evaluate if the outcome is consistent with the planned/desired treatment objectives. The use of SoftSmile Vision requires the user to have the necessary training and domain knowledge in the practice of orthodontics, as well as to have received a dedicated training in the use of the software.

    Device Description

    SoftSmile Vision is orthodontic planning and treatment simulation software for use by dental professionals. SoftSmile Vision imports patient 3-D digital scans and allows the user to plan the orthodontic treatment needs of the patient and develop a treatment plan. The output of the treatment plan may be downloaded as files in standard stereolithographic (STL) format for fabrication of dental casts, which may be used to fabricate by a manufacturer sequential aligner trays or retainers.

    AI/ML Overview

    Here's an analysis of the provided text, focusing on acceptance criteria and the study proving device performance:

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided FDA 510(k) summary does not explicitly state specific, quantifiable acceptance criteria or a direct table comparing them to reported device performance. Instead, it relies on a statement of meeting acceptance criteria established during software verification and validation. The primary form of "performance" discussed is the software functioning as intended and being substantially equivalent to the predicate device.

    Therefore, a table with specific performance metrics cannot be generated from the given text.

    The document states:

    • "All test results met acceptance criteria, demonstrating the Vision software performs as intended, raises no new or different questions of safety or effectiveness and is substantially equivalent to the predicate device."

    This is a general statement of compliance rather than a detailed report of performance against predefined thresholds.

    2. Sample Size Used for the Test Set and Data Provenance

    The document does not specify the sample size used for the test set. It also does not mention the country of origin of the data or whether the data was retrospective or prospective.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    The document does not specify the number of experts used to establish ground truth or their qualifications. The study described is primarily focused on software verification and validation, not clinical performance reviewed against expert-derived ground truth.

    4. Adjudication Method for the Test Set

    The document does not mention any adjudication method (e.g., 2+1, 3+1, none) for a test set. This type of method is typically used when human readers or experts are involved in establishing ground truth for evaluating diagnostic or predictive devices, which is not the focus of the described "study."

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, and its effect size

    No. The document does not mention a Multi-Reader Multi-Case (MRMC) comparative effectiveness study. The focus is on software validation relative to a predicate device, not on comparing human performance with and without AI assistance. Therefore, there is no effect size reported for human readers improving with AI.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done

    Yes, inferentially. The "study" described is a "Software and integration verification and validation testing." This type of testing assesses the algorithm's performance and functionality in a standalone manner, ensuring it operates as designed, without human intervention during the core processing. The statement "demonstrating the Vision software performs as intended" implies standalone evaluation of the software's functions.

    7. The Type of Ground Truth Used

    The document does not explicitly state the type of ground truth used in the context of clinical outcomes or expert consensus. Given the nature of a software verification and validation study, the "ground truth" would likely be defined by:

    • Software requirements specifications: The expected behavior and output of the software.
    • Predicate device behavior: The established functionality and output of the legally marketed predicate device (ULab Systems UDesign (K171295)).
    • Engineering specifications: Correctness of calculations, data handling, and file exports according to predefined digital standards.
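    Verification against engineering specifications of this kind is typically automated: each requirement says that a known input must produce a specified output, which becomes an assertion in a test suite. A minimal illustrative sketch, using a hypothetical overjet function and an assumed coordinate convention:

    ```python
    def overjet(upper_incisor_edge, lower_incisor_edge):
        """Horizontal (anterior-posterior; y-axis under this assumed
        convention) distance in mm between the upper and lower incisal
        edges -- one of the analyses such software reports."""
        return upper_incisor_edge[1] - lower_incisor_edge[1]

    # Spec-style verification: a known input must yield the specified output.
    assert overjet((0.0, 2.5, 0.0), (0.0, 0.0, 0.0)) == 2.5
    print("overjet spec check passed")
    ```

    Passing such checks is what general statements like "all test results met acceptance criteria" most plausibly refer to in this submission.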

    8. The Sample Size for the Training Set

    The document does not mention a training set or its sample size. This is consistent with the device being a "front-end" software for treatment planning and simulation, rather than a machine learning model that requires a large training dataset for inference.

    9. How the Ground Truth for the Training Set Was Established

    Since no training set is mentioned, there is no information on how its ground truth might have been established.

