
Search Results

Found 5 results

510(k) Data Aggregation

    K Number
    K222072
    Date Cleared
    2022-08-08

    (25 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices :

    K110430

    Intended Use

    Dolphin Imaging 12.0 software is designed for use by specialized dental practices for capturing, storing and presenting patient images and assisting in treatment planning and case diagnosis. Results produced by the software's diagnostic and treatment planning tools are dependent on the interpretation of trained and licensed practitioners.

    Device Description

    Dolphin Imaging 12.0 software provides imaging, diagnostics, and case presentation capabilities for dental specialty professionals. The Dolphin Imaging 12.0 suite of software products is a collection of modules that together provide a comprehensive toolset for the dental specialty practitioner. Users can easily manage 2D/3D images and x-rays; accurately diagnose and plan treatment; quickly communicate and present cases to patients; and work efficiently with colleagues on multidisciplinary cases. The following functionalities make up the medical device modules:
    Cephalometric Tracing: Digitize landmarks on a patient's radiograph, trace cephalometric structures, view cephalometric measurements, superimpose images for analysis and perform custom analysis.
    Treatment Simulation (VTO): Simulate orthodontic and surgical treatment results using Visual Treatment Objective (VTO) and growth features.
    Arnett/Gunson FAB Analyses: Perform face, airway, bite (FAB) analysis and simulate treatment for orthodontic and surgical cases based on the methodologies of Dr. William Arnett.
    McLaughlin Dental VTO: Analyze and evaluate orthodontic and surgical visual treatment objective (VTO) based on the theories of Dr. Richard McLaughlin.
    Implanner™: Plan dental implant procedures in 2D.
    Dolphin 3D: Plan, diagnose and present orthodontic and surgical cases, airway analysis, study models, implant planning and surgery treatment simulation in 3D.
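
    The tracing and analysis modules listed above ultimately reduce to distances and angles computed between digitized landmarks. As background only (this is not Dolphin's implementation), here is a minimal sketch of one such angular measurement; the landmark names and 2D coordinates are hypothetical:

```python
import numpy as np

def angle_at(vertex, p1, p2):
    """Angle in degrees at `vertex`, formed by the rays toward p1 and p2."""
    v1 = np.asarray(p1, float) - np.asarray(vertex, float)
    v2 = np.asarray(p2, float) - np.asarray(vertex, float)
    cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Hypothetical digitized landmarks (x, y) in image coordinates.
sella, nasion, a_point = (100.0, 80.0), (160.0, 70.0), (165.0, 130.0)
print(f"SNA angle ~ {angle_at(nasion, sella, a_point):.1f} degrees")
```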

    AI/ML Overview

    The provided text is a 510(k) Premarket Notification from the FDA for "Dolphin Imaging 12.0." This document primarily focuses on establishing substantial equivalence to a predicate device (Dolphin Imaging 11.5) rather than presenting detailed acceptance criteria and a study proving device performance against such criteria for a novel AI/ML medical device.

    Therefore, the requested information regarding acceptance criteria and a study proving the device meets those criteria, particularly in the context of AI/ML performance metrics (like accuracy, sensitivity, specificity, MRMC studies, standalone performance, and ground truth establishment), is not present in the provided document.

    The document states: "No clinical testing was required to support substantial equivalence." This indicates that no new performance studies (clinical or non-clinical, beyond basic software and system testing) were conducted or needed for this specific 510(k) clearance, as the changes were deemed moderate and the device maintained the same "key medical device functionality" as its predicate.

    The "Non-Clinical Performance Testing" section lists general software and system testing (Performance Testing, Manual Testing/Integration Testing, System and Regression testing) and adherence to recognized standards (Usability, Software Life Cycle, DICOM, Risk Management). These are standard validation practices for software modifications, not performance studies as typically understood for AI/ML devices with specific numerical acceptance criteria.

    To directly answer your request based on the provided text:

    Acceptance Criteria and Study Proving Device Meets Acceptance Criteria

    No specific acceptance criteria related to a numerical performance metric (e.g., accuracy, sensitivity, AUC) for a diagnostic AI/ML algorithm are mentioned or detailed in this 510(k) summary.

    No study proving the device meets specific performance-based acceptance criteria (as would be typical for an AI/ML algorithm) is described. The 510(k) submission primarily relies on demonstrating substantial equivalence to a previously cleared predicate device due to software usability enhancements and system updates, rather than a new AI-driven diagnostic capability.


    However, if we were to infer the closest thing to "acceptance criteria" and "proof" from the document's context of substantial equivalence and safe and effective functionality, it would be:

    • Acceptance Criteria (Implied): The Dolphin Imaging 12.0 software operates with the same core medical device functionalities (listed in the "Medical Device Features" table) as the predicate (Dolphin Imaging 11.5) without introducing new safety or efficacy concerns. Usability enhancements are functional and do not degrade existing performance. Compliance with specified industry standards (IEC, DICOM, ISO) is met.
    • Study Proving Acceptance (Implied): The "Non-Clinical Performance Testing" which included "Performance Testing," "Manual Testing/Integration Testing," and "System and Regression testing," alongside adherence to recognized standards, served to demonstrate that the updated software continued to function as intended and comparably to the predicate, with the added usability enhancements.

    Responding to your specific numbered points, recognizing that this document is not for a novel AI/ML algorithm performance study:

    1. A table of acceptance criteria and the reported device performance:

      • Acceptance Criteria: Not explicitly stated as numerical performance targets. Implicitly, the device must maintain the "Medical Device Features" as the predicate and perform comparably, without issues.
      • Reported Device Performance: No quantitative performance metrics (e.g., accuracy, sensitivity) are reported. The "performance" is demonstrated through verification that the software functions as expected and complies with relevant standards.
    | Acceptance Criterion (Implied) | Reported Device Performance (as evident from clearance) |
    |---|---|
    | Maintains all "key medical device functionality" of predicate. | Confirmed via internal testing and substantial equivalence claim. |
    | Usability enhancements function as intended. | Confirmed via internal testing. |
    | No new safety or efficacy concerns compared to predicate. | Concluded by FDA based on submission. |
    | Compliance with IEC 62366 (Usability Engineering). | Stated as compliant. |
    | Compliance with ANSI/AAMI/IEC 62304 (Software Life Cycle). | Stated as compliant. |
    | Compliance with NEMA PS 3.1-3.20 (DICOM). | Stated as compliant. |
    | Compliance with ISO 14971 (Risk Management). | Stated as compliant. |
    2. Sample size used for the test set and the data provenance: Not applicable/not provided. This was a software upgrade submission, not an AI/ML diagnostic performance study requiring a test set of patient data.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not applicable/not provided. No ground truth was established from clinical data for a diagnostic performance study.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set: Not applicable/not provided.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance: No MRMC study was performed or required. The device is described as "assisting in treatment planning and case diagnosis," with results "dependent on the interpretation of trained and licensed practitioners," implying a human-in-the-loop system, but no study on human performance improvement with the updated software is detailed.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done: No standalone performance study was performed or required.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.): Not applicable/not provided for a diagnostic performance study. The "ground truth" for the software's functionality was its adherence to specifications and its comparable behavior to the predicate device.

    8. The sample size for the training set: Not applicable/not provided. This is not an AI/ML model being trained on a dataset.

    9. How the ground truth for the training set was established: Not applicable/not provided.


    K Number
    K200572
    Device Name
    Planmeca Romexis
    Manufacturer
    Date Cleared
    2020-12-02

    (272 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices :

    K061035, K141570, K133320, K110430

    Intended Use

    Planmeca Romexis is a medical imaging software intended for dental and medical care as a tool for displaying and visualizing dental and medical 2D and 3D image files from imaging devices, such as projection radiography and CBCT. It is intended for use by radiologists, clinicians, referring physicians and other qualified individuals to retrieve, process, render, diagnose, review, store, print, and distribute images of both adult and pediatric patients.

    Planmeca Romexis is also a preoperative software used for dental implant planning. Based on the planned implant position, a model of a surgical guide for guided implant surgery can be designed; the designed objects can be exported to manufacture a separate physical product.

    Planmeca Romexis is also a preoperative software for simulating surgical treatment options.

    Planmeca Romexis is also intended to be used for monitoring, storing and displaying mandibular jaw positions and movements relative to the maxilla.

    Additionally, Planmeca Romexis includes monitoring features for Planmeca devices for maintenance purposes. The software is designed to work as a stand-alone application or as an accessory to Planmeca dental unit products on a standard PC.

    The software is for use by authorized healthcare professionals. Use of the software for implant planning requires that the user has the necessary medical training in implantology and surgical dentistry. Use of the software for surgical treatment planning requires that the user has the necessary medical training in maxillofacial surgery.

    Indications of the dental implants do not change with guided surgery compared to conventional surgery.

    Device Description

    Planmeca Romexis is a modular imaging software for dental and medical use. It is divided into modules to provide user access to different workflow steps involving different diagnostic views of images. A patient management screen with search capabilities lets users find patients and identify the correct patient file before starting work with a patient. After creating or selecting a patient, new images can be acquired using select Planmeca X-ray units.

    Planmeca Romexis is capable of processing and displaying 2D images in different formats and 3D CBCT images in DICOM format. 3D CBCT images can be viewed in near real-time multiplanar reconstruction (MPR) views. 2D and 3D image browsers are provided to allow user access to relevant images. Typical image enhancement filters and tools are available to assist the user in making a diagnosis, but the original exposure is always kept in the database for reference.

    Images can be exported to files or writable media, printed to paper or DICOM media, or transferred securely to other users using Planmeca online services. Interfaces to select external software are provided to facilitate exchange of patient information and images or data between the software and 3rd-party applications.
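
    As background on the MPR views mentioned in the device description above: axial DICOM slices are stacked into a 3D volume and re-sliced along orthogonal planes. Below is a minimal sketch using pydicom and NumPy; the directory path is hypothetical, and this is not Planmeca's implementation:

```python
import glob
import numpy as np
import pydicom

# Read an axial CBCT series (path is illustrative) and sort slices by z position.
slices = [pydicom.dcmread(f) for f in glob.glob("cbct_series/*.dcm")]
slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))

# Stack into a 3D volume with axes (z, row, column).
volume = np.stack([s.pixel_array.astype(np.int16) for s in slices])

# Orthogonal MPR re-slices through the volume centre.
zc, yc, xc = (n // 2 for n in volume.shape)
axial = volume[zc, :, :]      # original slice orientation
coronal = volume[:, yc, :]    # re-sliced front to back
sagittal = volume[:, :, xc]   # re-sliced left to right
print(axial.shape, coronal.shape, sagittal.shape)
```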

    AI/ML Overview

    The provided text describes Planmeca Romexis, a medical imaging software. However, it does not contain specific acceptance criteria or details of a study proving the device meets acceptance criteria. Instead, it focuses on demonstrating substantial equivalence to predicate devices for regulatory clearance.

    Therefore, I cannot directly answer your request based on the provided text. The document lists the following:

    • Indications for Use: The software's intended uses (displaying 2D/3D images, dental implant planning, surgical treatment simulation, jaw movement monitoring, device maintenance).
    • Technological Characteristics Comparison: A table comparing features of Planmeca Romexis with predicate and reference devices, highlighting similarities in operating environment, functionalities, image files, and major features.
    • Non-Clinical Test Results: A general statement about quality assurance measures applied during development (Risk Analysis, Requirements Reviews, Design Reviews, Performance testing, Safety testing, Final acceptance testing, Bench testing). It concludes that testing confirmed stability, operation as designed, hazard evaluation, and risk reduction. It also states that bench testing compared images rendered by Planmeca Romexis with predicate software and confirmed they are "equally effective in performing the essential functions and provide substantially equivalent clinical data."

    Missing Information:

    The document does not provide:

    1. A table of specific acceptance criteria and reported device performance against those criteria.
    2. Sample sizes for a test set, data provenance, or details about retrospective/prospective nature of a study.
    3. Number or qualifications of experts for ground truth establishment.
    4. Adjudication method for a test set.
    5. MRMC comparative effectiveness study details, including effect size.
    6. Details about a standalone (algorithm only) performance study.
    7. Type of ground truth used (expert consensus, pathology, outcomes data).
    8. Sample size for the training set.
    9. How ground truth for the training set was established.

    In summary, while the document confirms non-clinical testing was performed to establish substantial equivalence, it does not detail the specific acceptance criteria or the study methodology (sample sizes, ground truth, expert involvement, etc.) that would prove the device quantitatively meets such criteria.


    K Number
    K183676
    Device Name
    DentiqAir
    Date Cleared
    2019-12-19

    (356 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices :

    K110430

    Intended Use

    DentiqAir is a software application for the visualization of imaging information of the oral-maxillofacial region. The imaging data originates from medical scanners such as CT or CBCT scanners. The dental professionals' planning data may be exported from DentiqAir and used as input data for CAD or Rapid Prototyping Systems.

    Device Description

    DentiqAir is a pure software device applied for the visualization of imaging information of the ear-nose-throat (ENT) region and oral-maxillofacial region. The imaging data originates from medical scanners such as CT or Cone Beam - CT (CBCT) scanners. This information can be complemented by the imaging information from optical impression systems. The medical professionals' input information and planning data may be exported from Dentiq Air to be used for CAD or Rapid Prototyping Systems.

    AI/ML Overview

    The provided text describes the 510(k) premarket notification for the DentiqAir device. While it mentions performance tests, it does not include a detailed table of acceptance criteria and reported device performance for all features, nor does it provide a full breakdown of the test set, expert involvement, or MRMC study results typically found in comprehensive performance studies for AI/ML-driven devices.

    However, based on the non-clinical testing section, we can infer some information regarding the performance and acceptance criteria for specific functionalities.

    Here's a breakdown of the requested information based on the provided text:

    1. Table of Acceptance Criteria and the Reported Device Performance:

    The document primarily focuses on accuracy tests for measurements made using the device against phantom data. It does not provide performance metrics for segmentation accuracy (e.g., Dice score, Jaccard index) which might be expected for segmentation features.

    | Feature Tested | Acceptance Criteria | Reported Device Performance |
    |---|---|---|
    | Length | Average and maximum absolute difference less than 2% compared to true value | "The testing results support that the subject device is substantially equivalence to the predicate or reference devices." (Implicitly met the criteria) |
    | Angle | Average and maximum absolute difference less than 2% compared to true value | "The testing results support that the subject device is substantially equivalence to the predicate or reference devices." (Implicitly met the criteria) |
    | HU (Hounsfield Unit) | Average and maximum absolute difference less than 2% compared to true value | "The testing results support that the subject device is substantially equivalence to the predicate or reference devices." (Implicitly met the criteria) |
    | Volume | Less than True Value and more than Dolphin Imaging average (for Airway volume, based on context) | "The testing results support that the subject device is substantially equivalence to the predicate or reference devices." (Implicitly met the criteria) |

    Note: The phrasing "Less than True Value and more than Dolphin Imaging average" for Volume is a bit ambiguous regarding exact numerical targets, but it implies a comparative target against a predicate device's expected performance. The document states that the test results support substantial equivalence, implying these criteria were met.
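
    To make the stated criterion concrete: for each feature, both the average and the maximum absolute difference between the device's measurements and the phantom's known true value must stay below 2% of that true value. A minimal sketch of such a check follows; the measurement values are hypothetical:

```python
import numpy as np

def within_tolerance(measured, true_value, tol=0.02):
    """True if both the mean and max absolute relative error are below `tol` (2%)."""
    rel_err = np.abs(np.asarray(measured, float) - true_value) / abs(true_value)
    return rel_err.mean() < tol and rel_err.max() < tol

# Hypothetical repeated length measurements (mm) of a 40.0 mm phantom feature.
print(within_tolerance([39.6, 40.3, 40.1, 39.8], 40.0))  # True: max error is 1%
```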

    2. Sample Size Used for the Test Set and the Data Provenance:

    • Sample Size: The document states that accuracy tests were conducted "from loaded CT datasets using phantom." It does not specify the number of CT datasets or phantoms used for this testing.
    • Data Provenance: The data used was from "phantom" studies, meaning simulated or controlled anatomical models, not human patient data. The country of origin is not explicitly stated for the phantom data, but the submitter is from the Republic of Korea. It is retrospective in nature, as it's a pre-market submission based on completed testing.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts:

    • Number of Experts: Not applicable. The ground truth for the phantom accuracy tests was established by the known true values of the phantom itself, not by expert consensus.
    • Qualifications of Experts: N/A as expert consensus was not the method for establishing ground truth for the stated performance tests.

    4. Adjudication Method for the Test Set:

    • Adjudication Method: Not applicable. Given the ground truth for the measurement accuracy tests was based on the known physical properties of the phantom, no human adjudication was necessary for these specific performance metrics.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done and, If So, the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance:

    • MRMC Study: No. The document explicitly states: "Clinical testing is not a requirement and has not been performed." The performance tests described are strictly non-clinical and focus on software functionality and measurement accuracy. This type of study would be highly relevant for devices intended to assist human readers in diagnosis or treatment planning.

    6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) was done:

    • Standalone Performance: For the measurement accuracy tests (Length, Angle, HU, Volume), the described testing does reflect standalone performance as it compares the device's measurements directly to the phantom's true values, without human intervention in the measurement process itself.
    • The software also includes "segmentation features," and the document states, "Performance testing has been used to validate the safety and effectiveness of the DentiqAir segmentation features in comparison to the predicate devices." However, no specific quantitative standalone performance metrics (e.g., Dice coefficient, precision, recall) for segmentation are provided in this summary.
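
    For reference, the Dice coefficient and Jaccard index mentioned above are overlap measures between a predicted segmentation mask and a ground-truth mask; the sketch below is illustrative only and not part of the submission:

```python
import numpy as np

def dice_and_jaccard(pred: np.ndarray, truth: np.ndarray):
    """Overlap metrics for two boolean masks of the same shape."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    dice = 2.0 * intersection / (pred.sum() + truth.sum())
    jaccard = intersection / union
    return dice, jaccard

# Toy example: 3 overlapping voxels out of 4 predicted and 4 true.
pred = np.array([1, 1, 1, 1, 0, 0], dtype=bool)
truth = np.array([0, 1, 1, 1, 1, 0], dtype=bool)
print(dice_and_jaccard(pred, truth))  # (0.75, 0.6)
```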

    7. The Type of Ground Truth Used:

    • Ground Truth: For the measurement accuracy tests, the ground truth was based on the known physical properties (true values) of the phantom used for testing.

    8. The Sample Size for the Training Set:

    • Training Set Sample Size: Not provided. The document is a 510(k) summary for a software device. While it mentions "software verification and validation testing activities" including "code review, integration review, and dynamic tests," and "performance tests," it does not discuss the training or development of any AI/ML models within the software or the data used for such purposes. The device's segmentation algorithm is mentioned as "Water Shed (a type of graph-cut algorithm)," which is a traditional image processing algorithm rather than a deep learning model requiring a specific training set with labelled data in the sense of modern AI/ML.

    9. How the Ground Truth for the Training Set Was Established:

    • Ground Truth for Training Set: Not applicable. As the document does not describe the use of an AI/ML model that requires a labelled training set in the contemporary sense, the establishment of ground truth for a training set is not discussed. The Water Shed algorithm does not require labeled training data in the same way a deep learning model would.
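
    As background on the algorithm family discussed in points 8 and 9: watershed is a classical, rule-based segmentation method that grows regions from seed markers rather than learning from labelled data. A minimal scikit-image sketch on a synthetic image (illustrative only, not DentiqAir's implementation):

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

# Synthetic binary image: two overlapping disks standing in for anatomy.
yy, xx = np.ogrid[:80, :80]
image = ((yy - 30) ** 2 + (xx - 30) ** 2 < 15 ** 2) | ((yy - 45) ** 2 + (xx - 50) ** 2 < 15 ** 2)

# Seed markers at local maxima of the distance transform, then flood with watershed.
distance = ndi.distance_transform_edt(image)
coords = peak_local_max(distance, min_distance=10, labels=image)
markers = np.zeros(image.shape, dtype=int)
markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
labels = watershed(-distance, markers, mask=image)
print(np.unique(labels))  # 0 = background, 1..N = separated regions
```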

    K Number
    K152086
    Manufacturer
    Date Cleared
    2016-04-28

    (276 days)

    Product Code
    Regulation Number
    872.5470
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices :

    K110430

    Intended Use

    3Shape Ortho System™ is intended for use as a medical front-end device providing tools for management of orthodontic models, systematic inspection, detailed analysis, treatment simulation and virtual appliance design options (Custom metal bands, Export of Models, Indirect Bonding Transfer Media) based on 3D models of the patient's dentition before the start of an orthodontic treatment. It can also be applied during the treatment to inspect and analyze the progress of the treatment. It can be used at the end of the treatment to evaluate whether the outcome is consistent with the planned treatment objectives.

    The use of the Ortho System™ requires the user to have the necessary training and domain knowledge in the practice of orthodontics, as well as to have received a dedicated training in the use of the software.

    Device Description

    3Shape's Ortho System™ is a software system used for the management of 3D scanned orthodontic models of the patients, orthodontic diagnosis by measuring, analyzing, inspecting and visualizing 3D scanned orthodontic models, virtual planning of orthodontic treatments by simulating tooth movements, virtual placement of orthodontic brackets on the 3D models and design of orthodontic appliances based on 3D scanned orthodontic models, including transfer methods for indirect bonding of brackets. Output includes only Export Model (also called dental casts), Custom Metal Bands (also called metal bands), and Indirect Bonding Transfer Trays (also called orthodontic bracket placement trays). All devices are to be fabricated from FDA cleared materials.

    The device has no patient contact.

    AI/ML Overview

    The provided FDA 510(k) summary for the 3Shape Ortho System™ does not contain acceptance criteria or a study proving the device meets said criteria in the traditional sense of a performance study with defined metrics (sensitivity, specificity, accuracy, etc.) against a specific ground truth.

    Instead, this submission focuses on demonstrating substantial equivalence to predicate devices through a comparison of intended use, technical characteristics, and nonclinical testing that confirms the software acts as intended.

    Here's a breakdown of the requested information based on the provided text, highlighting what is implicitly or explicitly stated and what is absent:


    1. Table of Acceptance Criteria and Reported Device Performance

    As noted, the document does not present a table of specific quantitative acceptance criteria (e.g., "accuracy > 95%") nor does it report performance against such criteria. The "performance" assessment is primarily qualitative, focusing on whether the device's features match or are equivalent to the predicate devices.

    | Acceptance Criteria Category | Specific Acceptance Criteria (from document) | Reported Device Performance (from document) |
    |---|---|---|
    | Functional Equivalence | The device must demonstrate the same intended uses and technical characteristics as predicate devices. | The 3Shape Ortho System™ Software is found to have the same intended use as the primary predicate OrthoCAD iQ (K082207) and similar technical characteristics to both OrthoCAD iQ and reference predicate Dolphin Imaging (K110430) across supported anatomic areas, patient/case management, study material collection/alignment/measurement/analysis, treatment simulation, and virtual appliance design. A detailed feature comparison table is provided (Page 5). |
    | Safety and Effectiveness | The device must be as safe and effective as the predicate devices. | Based on nonclinical testing (software, hardware, and integration V&V) and comparison to predicates, the Ortho System™ is found to be "as safe and effective as the predicate devices." (Page 7). Risk management procedures related to device hazards were also validated. |
    | Software Verification and Validation (V&V) | All verification and validation activities should be performed according to FDA guidance specific to software in medical devices. | Software, hardware, and integration V&V testing was performed according to the "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices" (Issued on May 11, 2005). All test results were reviewed and approved. |


    2. Sample Size Used for the Test Set and the Data Provenance

    • Sample Size: Not explicitly stated as a separate "test set" in the context of clinical performance evaluation. The "nonclinical testing" involved software, hardware, and integration V&V, which typically uses various test cases and scenarios rather than a dataset of patient cases for clinical performance evaluation.
    • Data Provenance: Not mentioned. Since it's a nonclinical V&V, synthetic data or internal test data would likely be used, but this is not specified. It's not a retrospective or prospective study on patient data.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts

    • This information is not provided. As "clinical testing is not a requirement and has not been performed" (Page 7), there was no need for expert ground truth establishment for a patient-level test set.

    4. Adjudication Method for the Test Set

    • This information is not provided as there was no patient-level test set requiring ground truth adjudication.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done and, If So, the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance

    • A MRMC comparative effectiveness study was not performed. The document explicitly states: "Clinical testing is not a requirement and has not been performed." (Page 7). The device is a "medical front-end device providing tools" rather than an AI-driven diagnostic aid for human readers.

    6. If a Standalone (i.e. algorithm only, without human-in-the-loop) Performance Evaluation Was Done

    • A standalone performance study in the sense of accuracy metrics (sensitivity, specificity, etc.) was not performed. The V&V testing ensures the software functions as designed (e.g., measurements are calculated correctly, models display properly), but not against a "ground truth" to measure diagnostic or predictive performance. The device is a tool to be used by a trained orthodontist.

    7. The Type of Ground Truth Used

    • For the nonclinical V&V, the "ground truth" would be the expected output or behavior of the software based on its design specifications and requirements. This is not explicitly detailed but is intrinsic to software testing. There's no use of expert consensus, pathology, or outcomes data, as no clinical study was conducted.

    8. The Sample Size for the Training Set

    • This device is described as a "software system" programmed in Delphi that applies "digital imaging tools." While it mentions "encrypted libraries of the bracket geometry provided by the manufacturers," this refers to pre-programmed data, not a "training set" for a machine learning model. There is no mention of a training set in the context of machine learning, as this does not appear to be an AI/ML device in the modern sense (e.g., deep learning classification).

    9. How the Ground Truth for the Training Set Was Established

    • Since there's no explicit mention of a training set or machine learning, this information is not applicable or provided.

    K Number
    K152661
    Device Name
    HICAT Air
    Manufacturer
    Date Cleared
    2015-11-16

    (60 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices :

    K110430

    Intended Use

    HICAT Air is a software application for:

    • Aiding diagnosis in the ear-nose-throat region
    • Aiding treatment planning in the ear-nose-throat region
    • Aiding comparisons of different treatment options
    • Aiding treatment planning for oral appliances
    Device Description

    HICAT Air is a pure software device.

    HICAT Air is a software application for the visualization and segmentation of imaging information of the ear-nose-throat (ENT) region.

    The imaging data originates from medical scanners such as CT or Cone Beam – CT (CBCT) scanners.

    This information can be complemented by the imaging information from optical impression systems. The additional information about the exact geometry of the tooth surfaces can be visualized together with the radiological data.

    HICAT Air is also used as a software system to aid qualified medical professionals with diagnosis, followed by the evaluation, comparison and planning of ENT treatment options.

    The medical professionals' input information and planning data may be exported from HICAT Air to be used as input data for CAD or Rapid Prototyping Systems for the manufacturing of therapeutic devices such as oral devices.

    AI/ML Overview

    Here's a breakdown of the requested information based on the provided text. Unfortunately, much of the detailed quantitative information for acceptance criteria and study particulars for HICAT Air is not explicitly stated in this 510(k) summary. The document focuses on demonstrating substantial equivalence to predicate devices rather than providing a detailed performance study like a clinical trial.

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly list specific quantitative acceptance criteria (e.g., minimum accuracy percentages, sensitivity, specificity) for the device's performance that were "met." Instead, it relies on demonstrating that HICAT Air's capabilities are comparable to or an improvement upon the predicate devices, particularly for visualization, segmentation accuracy, and measurement accuracy.

    Below is a table summarizing the reported device performance as described in the comparison to predicate devices, which can be inferred as the "criteria" it aims to meet or exceed based on the predicates.

    | Feature / Metric | Acceptance Criteria (Implied / Predicate-based) | Reported HICAT Air Performance |
    |---|---|---|
    | Overall Length Measurement Accuracy | 100 µm (based on Primary Predicate: SICAT Function) | 100 µm (algorithms identical to SICAT Function) |
    | Overall Angular Measurement Accuracy | 1 degree (based on Primary Predicate: SICAT Function) | 1 degree (algorithms identical to SICAT Function) |
    | Segmentation Algorithms | Segmentation of anatomical structures using Water Shed (graph-cut algorithm) (based on Primary Predicate: SICAT Function) and segmentation for Dolphin Imaging 11.5 (algorithm unknown but present); airway segmentation present in Dolphin Imaging 11.5. | Yes, using a segmentation wizard. Algorithm: Water Shed (a type of graph-cut algorithm), identical to SICAT Function; segmentation of the airway using the segmentation wizard. |
    | Airway Analysis (volume, cross-section, min cross-section) | Present in Reference Predicate: Dolphin Imaging 11.5 (analyze airway by drawing border, program fills and displays airway space, reports volume, locates and measures most constricted spot). | Yes (calculation of airway volume, airway cross-section dimensions, position and size of the area with minimum cross section in the airway). |
    | Imaging Data Visualization Region | ENT region including the oral-maxillofacial region (based on Reference Predicate: Dolphin Imaging 11.5) and oral-maxillofacial region (based on Primary Predicate: SICAT Function). | ENT region including the oral-maxillofacial region. |
    | Safety and Effectiveness | Demonstrated through verification and validation activities and substantial equivalence to predicate devices, ensuring no adverse impact from minor differences. | All verification and validation activities passed; safety and effectiveness demonstrated in context of indications for use. |

    Note: The phrase "Algorithms identical to SICAT Function" is frequently used, implying that the performance metrics of SICAT Function are directly applicable to HICAT Air for those features.
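
    For context on the airway metrics listed in the table: given a binary airway segmentation and the scan's voxel spacing, the airway volume and the most constricted axial cross-section reduce to simple voxel counts. The sketch below is hypothetical and not HICAT Air's implementation:

```python
import numpy as np

def airway_metrics(mask: np.ndarray, spacing_mm=(0.3, 0.3, 0.3)):
    """Volume (mm^3) and minimum axial cross-sectional area (mm^2) of a boolean
    airway mask ordered (z, y, x), given the voxel spacing in mm."""
    dz, dy, dx = spacing_mm
    volume_mm3 = mask.sum() * dz * dy * dx
    # Per-slice cross-sectional area; ignore slices that contain no airway.
    areas = mask.reshape(mask.shape[0], -1).sum(axis=1) * dy * dx
    occupied = areas[areas > 0]
    min_area_mm2 = float(occupied.min()) if occupied.size else 0.0
    return volume_mm3, min_area_mm2
```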

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size: The document mentions "Special bench testing has been performed with non-clinical data" to verify segmentation performance and calculation of geometric dimensions. However, it does not specify the sample size for this test set.
    • Data Provenance: The document states "non-clinical data" was used for special bench testing. There is no information provided on the country of origin of this data, nor if it was retrospective or prospective. It implies simulated or idealized data given the "non-clinical" description.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    The document does not specify the number of experts used to establish ground truth for the test set, nor does it detail their qualifications. The tests appear to be bench tests against pre-defined or simulated "ground truth" rather than expert-derived ground truth on clinical cases.

    4. Adjudication Method for the Test Set

    The document does not describe any adjudication method (e.g., 2+1, 3+1, none) for the test set. Given the "non-clinical data" and "bench testing" description, it's likely that a human adjudication process for complex clinical opinions was not part of this specific testing.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    An MRMC comparative effectiveness study was not explicitly described in the provided text. The submission focuses on demonstrating substantial equivalence through comparison of features, technical characteristics, and performance metrics (like measurement accuracy) to predicate devices, along with internal verification and validation activities. There is no mention of human readers improving with or without AI assistance, or any effect size calculation.

    6. Standalone Performance Study

    A standalone performance study (algorithm only without human-in-the-loop performance) was performed. The "Special bench testing" mentioned falls under this category, focusing on "segmentation performance of anatomical structures of the airway" and "correct calculation of geometric dimensions of the airway by the airway analysis tool." These tests aim to verify the algorithm's capabilities independently.

    7. Type of Ground Truth Used (for standalone testing)

    For the standalone "Special bench testing," the ground truth likely involved known or simulated anatomical structures with precise geometric dimensions to verify segmentation and measurement accuracy. The term "non-clinical data" supports this interpretation, suggesting synthetic data, phantoms, or highly characterized datasets where definitive measurements and segmentations are pre-established rather than derived from expert consensus on complex, variable clinical cases or pathology.

    8. Sample Size for the Training Set

    The document does not provide any information regarding a training set sample size. As HICAT Air is described primarily as a "radiological visualization software for diagnosis and treatment planning" with identical algorithms to a predicate for many features, and its segmentation logic is based on a "Water Shed (a type of graph-cut algorithm)," it is possible that it does not involve a machine learning model that requires a "training set" in the conventional sense, or if it does, the details are not included in this summary.

    9. How Ground Truth for the Training Set Was Established

    Since no training set is mentioned or implied for a machine learning model that would require one, there is no information provided on how ground truth for a training set was established.

