
510(k) Data Aggregation

    K Number: K220648
    Device Name: OMF ASP System
    Manufacturer:
    Date Cleared: 2022-08-11 (157 days)
    Product Code:
    Regulation Number: 872.4120
    Panel: Dental
    Reference & Predicate Devices:

    Intended Use

    The OMF ASP System is intended for use as a software system and image segmentation system for the transfer of imaging information from a medical scanner such as a CT-based system. The input data file is processed by the OMF ASP System, and the result is an output data file that may then be provided as digital models or used as input to an additive manufacturing portion of the system that produces physical outputs including anatomical models, templates, and surgical guides for use in maxillofacial surgery. The OMF ASP System is also intended as a pre-operative software tool for simulating/evaluating surgical treatment options.
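    As a rough illustration of this file-in, file-out workflow (this is not code from the submission; SimpleITK, scikit-image, the HU threshold, and all names below are assumptions), a minimal segmentation-to-mesh sketch might look like:

```python
# Illustrative sketch only: read a CT-derived volume, threshold it to a bone
# mask, and extract a surface mesh for downstream additive manufacturing.
# The HU threshold and file format are assumptions, not values from the 510(k).
import numpy as np
import SimpleITK as sitk
from skimage import measure

def segment_bone(volume_path: str, hu_threshold: float = 300.0):
    """Threshold a CT volume (e.g. NIfTI exported from DICOM) to a bone mask."""
    image = sitk.ReadImage(volume_path)
    voxels = sitk.GetArrayFromImage(image)       # (z, y, x) array of HU values
    mask = (voxels >= hu_threshold).astype(np.uint8)
    return mask, image.GetSpacing()              # spacing is (x, y, z) in mm

def mask_to_mesh(mask: np.ndarray, spacing):
    """Extract a triangle mesh from the binary mask for export (e.g. to STL)."""
    # marching_cubes expects spacing in the array's (z, y, x) order, so reverse it
    verts, faces, _normals, _values = measure.marching_cubes(
        mask, level=0.5, spacing=spacing[::-1])
    return verts, faces
```

    A real system of this kind would additionally handle DICOM series import, surgeon review and editing of the segmentation, and export to a printable format.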

    Device Description

    The Oromaxillofacial Advanced Surgical Planning (OMF ASP) System utilizes Commercial Off-the-Shelf (COTS) software to manipulate 3D medical images (from CT-based systems) with surgeon input, and to produce digital and physical patient-specific outputs including surgical plans, anatomic models, templates, and surgical guides for planning and performing maxillofacial surgeries.

    The system utilizes medical imaging, such as CT-based imaging data of the patient's anatomy, to create surgical plans with input from the physician to inform surgical decisions and guide the surgical procedure. The system produces a variety of patient-specific outputs for the maxillofacial region, including anatomic models (physical and digital), physical surgical templates and/or guides, and patient-specific case reports. The system utilizes additive manufacturing to create patient-specific guides and anatomical models.

    AI/ML Overview

    The provided text describes the OMF ASP System and its substantial equivalence to a predicate device, but it does not contain the detailed information necessary to fully answer all aspects of your request regarding acceptance criteria and the study proving the device meets them.

    Specifically, the document states:

    • "All acceptance criteria for design validation were met."
    • "All acceptance criteria for performance testing were met."
    • "All acceptance criteria for software verification testing were met."
    • "All acceptance criteria for the cleaning validation were met."
    • "All acceptance criteria for the steam sterilization validation were met."
    • "All acceptance criteria for biocompatibility were met and the testing adequality addresses biocompatibility for the output devices and their intended use."

    However, it does not explicitly state what those specific acceptance criteria were (e.g., quantifiable metrics like accuracy, sensitivity, specificity, or specific error margins for measurements). It also does not provide the reported device performance in measurable terms against those criteria.

    Therefore, I can only provide a partial answer based on the available information.


    1. A table of acceptance criteria and the reported device performance

    Based on the provided text, specific quantifiable acceptance criteria and reported device performance (e.g., specific accuracy percentages, dimensions, etc.) are not detailed. The document only broadly states that "All acceptance criteria... were met" for various validation tests.

    | Acceptance Criteria Category | Nature of Acceptance Criteria (as implied) | Reported Device Performance (as implied) |
    |---|---|---|
    | Design Validation | Device designs conform to user needs and intended use for maxillofacial surgeries; identical to predicate in indications, design envelope, worst-case configuration, and post-processing conditions. | All acceptance criteria were met. |
    | Performance Testing | Manufacturing process assessment; operator repeatability within the digital workflow; digital and physical outputs verified against design specifications. | All acceptance criteria were met. |
    | Software Verification | Compliance with FDA guidance for software in medical devices (Moderate Level of Concern). | All acceptance criteria were met. |
    | Cleaning Validation | Bioburden, protein, and hemoglobin levels within acceptable limits post-cleaning (in accordance with AAMI TIR30). | All acceptance criteria were met. |
    | Sterilization Validation | Sterility assurance level (SAL) of 10^-6 for a dynamic-air-removal cycle (in accordance with ISO 17665-1; see the note after the table). | All acceptance criteria were met. |
    | Biocompatibility Testing | Compliance with ISO 10993-1, -5, -10, and -11 requirements for biological safety. | All acceptance criteria were met. |
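    As general background for the sterilization row (standard overkill arithmetic, not stated in the document): with decimal-reduction time D at the cycle temperature, survivors follow the curve below, so reaching an SAL of 10^-6 from a biological-indicator population of 10^6 requires a 12-log reduction, i.e. an exposure time of at least 12D.

```latex
% Survivor curve with decimal-reduction time D (general background, assumed):
N(t) = N_0 \cdot 10^{-t/D}
% Overkill requirement:
10^{6} \cdot 10^{-t/D} \le 10^{-6} \;\Longrightarrow\; t \ge 12D
```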

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    The document mentions that "Cases used for testing were representative of the reconstruction procedures within the subject device's intended use" for performance testing, but it does not specify the sample size used for any test set, nor the data provenance (country of origin, retrospective or prospective collection).

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    The document does not provide any information regarding the number or qualifications of experts used to establish ground truth for any test sets.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    The document does not provide any information regarding an adjudication method for test sets.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI versus without AI assistance

    A multi-reader multi-case (MRMC) comparative effectiveness study comparing human readers with and without AI assistance was not conducted or reported. The document states, "No clinical data were provided in order to demonstrate substantial equivalence." The device is described as a "software system and image segmentation system for the transfer of imaging information" and a "pre-operative software tool for simulating/evaluating surgical treatment options," indicating it is primarily a planning and manufacturing aid rather than an AI diagnostic tool that assists human readers with interpretation, the setting that would typically necessitate an MRMC study.

    6. If a standalone study (i.e., algorithm-only performance without a human in the loop) was done

    The document describes the OMF ASP System as utilizing "Commercial Off-the-Shelf (COTS) software to manipulate 3D medical images (CT-based systems) with surgeon input," and notes that its primary function is to "produce digital and physical patient-specific outputs including surgical plans, anatomic models, templates, and surgical guides." It also mentions "operator repeatability within the digital workflow" during performance testing. This suggests a human-in-the-loop process for surgical planning and model generation rather than a standalone algorithm evaluation. The performance testing that verified digital and physical outputs against design specifications may have involved some standalone assessment, but it is not explicitly described as an "algorithm only" performance study; a sketch of what such a verification check could look like follows.
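    The following is a hypothetical sketch only (NumPy/SciPy, the 0.5 mm tolerance, and the function names are assumptions, not details from the submission) of comparing a produced surface against its design geometry:

```python
# Hypothetical dimensional check of a produced output against its design
# specification; the tolerance value is illustrative, not from the 510(k).
import numpy as np
from scipy.spatial import cKDTree

TOLERANCE_MM = 0.5  # assumed dimensional tolerance

def max_surface_deviation(produced_pts: np.ndarray, design_pts: np.ndarray) -> float:
    """Largest nearest-neighbor distance (mm) from produced points to the design surface."""
    tree = cKDTree(design_pts)               # index the design surface points
    distances, _ = tree.query(produced_pts)  # per-point deviation in mm
    return float(distances.max())

def meets_spec(produced_pts: np.ndarray, design_pts: np.ndarray) -> bool:
    """Pass/fail: every produced point lies within tolerance of the design."""
    return max_surface_deviation(produced_pts, design_pts) <= TOLERANCE_MM
```

    A physical-output check would compare a 3D scan of the printed guide or model against the planned geometry in the same way.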

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    The document does not explicitly state the type of ground truth used for any of the validation processes. It refers to "design specifications" against which outputs were verified, but the origin of these specifications (e.g., derived from expert consensus, anatomical measurements, etc.) is not detailed.

    8. The sample size for the training set

    The document does not mention or provide any information about a training set or its sample size. This is consistent with the description of the system using "Commercial Off-the-Shelf (COTS) software" rather than a custom-developed AI algorithm that would typically require a specific training phase.

    9. How the ground truth for the training set was established

    Since no training set is mentioned (see point 8), this information is not applicable and not provided.
