
510(k) Data Aggregation

    K Number: K241065
    Device Name: ChecQ (AC100)
    Manufacturer:
    Date Cleared: 2025-03-21 (337 days)
    Product Code:
    Regulation Number: 872.4200
    Reference & Predicate Devices:
    Intended Use

    ChecQ is indicated for use in measuring the stability of implants in the oral cavity and maxillofacial region.

    Device Description

    The ChecQ is a dental implant stability analyzer comprising a main unit and a charging cradle. It uses Resonance Frequency Analysis (RFA) to measure implant stability non-invasively. It works with the ChecQPEG, which attaches to the dental implant, and measures stability by analyzing the resonant frequencies generated by magnetic field stimulation from the probe tip. The result is displayed on the main unit's screen as an Implant Stability Quotient (ISQ) ranging from 1 to 99.

    The RFA method applies a transverse force to the implant using magnetism, measures the resulting movement, and analyzes the resonance frequency of the response. The resonance frequency depends on the bone-implant gap and is converted by an established formula into ISQ scores (1-99) for clinical interpretation.

    Mechanically, the ChecQ converts voltage pulses generated by the DAC of the MCU into magnetic pulses, which induce vibrations in the ChecQPEG magnets. The device captures the free vibrations as magnetic pulses, processes them through an FFT to extract the natural frequency, and calculates the ISQ value from this frequency.
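    The signal chain described above (capture free vibration, FFT, peak pick, map frequency to ISQ) can be sketched as follows. This is a hypothetical illustration only: the actual frequency band, sampling parameters, and the proprietary frequency-to-ISQ formula are not given in the document, so the band `[f_min, f_max]` and the linear mapping below are assumptions.

```python
import numpy as np

def isq_from_signal(signal, sample_rate_hz, f_min=1000.0, f_max=10000.0):
    """Estimate an ISQ-style score from a captured free-vibration signal.

    Hypothetical sketch: the real ChecQ frequency-to-ISQ mapping is not
    disclosed; here we assume a linear map from an assumed resonance band
    [f_min, f_max] onto the 1-99 ISQ scale.
    """
    # FFT of the captured vibration; keep positive frequencies only.
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate_hz)
    # Natural frequency = location of the dominant spectral peak.
    f_natural = freqs[np.argmax(spectrum)]
    # Linear map of frequency onto the 1-99 ISQ scale (assumed form).
    isq = 1 + 98 * (f_natural - f_min) / (f_max - f_min)
    return f_natural, int(round(np.clip(isq, 1, 99)))

# Usage: a synthetic damped oscillation at 5 kHz, sampled at 50 kHz.
t = np.arange(0, 0.05, 1 / 50000)
signal = np.exp(-50 * t) * np.sin(2 * np.pi * 5000 * t)
f, isq = isq_from_signal(signal, 50000)
```

    A real implementation would also need windowing, noise rejection, and calibration against reference fixtures, none of which are described in the summary.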

    AI/ML Overview

    The provided text relates to the FDA 510(k) clearance for the ChecQ (AC100) device, a dental implant stability analyzer. However, the document establishes equivalence to a predicate device through non-clinical testing; it does not provide detailed acceptance criteria or a study demonstrating that the device meets such criteria via a multi-reader multi-case (MRMC) study or a standalone algorithm performance study.

    The document emphasizes "technological comparison" and "non-clinical testing" for substantial equivalence. It does not describe a clinical study in the format typically used to assess the performance of an AI/ML medical device, which would involve a test set, ground truth experts, adjudication, and performance metrics like sensitivity, specificity, or AUC.

    Therefore, many of the requested details cannot be extracted from the provided text.

    Here is what can be inferred or explicitly stated based on the given information:

    1. A table of acceptance criteria and the reported device performance

    The document does not specify quantitative acceptance criteria in terms of clinical performance metrics (e.g., sensitivity, specificity, accuracy) for the device. Instead, it focuses on demonstrating equivalence to a predicate device through non-clinical testing.

    The reported device performance is described qualitatively:

    • "The results confirmed that ChecQ exhibited equivalent performance, establishing its functional equivalence with the predicate device."
    • "Additional testing was conducted to evaluate the potential effects of the barrier sleeve on ISQ measurements, confirming that the presence of the barrier sleeve did not impact measurement accuracy."

    2. Sample size used for the test set and the data provenance

    The document mentions "multiple measurements using standardized test jigs" for comparative testing against the predicate device (Osstell Beacon). This indicates an in-vitro, non-clinical test using artificial setups rather than a test set derived from patient data. Therefore, there's no information on:

    • Sample size: Not applicable in the context of patient data; the "test set" consists of measurements on test jigs.
    • Data provenance: Not applicable as it's not human patient data. The testing appears to be conducted by the manufacturer, presumably in South Korea given the applicant's origin.
    • Retrospective or Prospective: Not applicable as it's not a clinical study.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    Not applicable. The "ground truth" for the non-clinical comparison was established by the measurements from the predicate device (Osstell Beacon) and potentially engineering specifications for dimensional tests. There were no human experts involved in establishing ground truth for a clinical test set.

    4. Adjudication method for the test set

    Not applicable, as there was no clinical test set requiring expert adjudication.

    5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI assistance versus without

    No. The document explicitly states that the "ChecQ dental implant stability analyzer underwent comparative testing against the predicate device, Osstell Beacon, to assess its accuracy. The self-test methodology involved multiple measurements using standardized test jigs..." This is a non-clinical comparison of the device's output to a known standard (the predicate device) or engineering specifications, not an MRMC study with human readers.
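    The comparative-testing approach the document describes (paired measurements on standardized test jigs against the predicate device) can be illustrated with a minimal sketch. The acceptance threshold `max_mean_diff`, the summary statistics chosen, and the sample readings are all invented for illustration; the submission states no quantitative criterion.

```python
import statistics

def equivalence_summary(checq_isq, predicate_isq, max_mean_diff=2.0):
    """Summarize paired ISQ readings from two analyzers on the same jigs.

    Hypothetical sketch: max_mean_diff is an assumed threshold, since the
    510(k) summary reports only a qualitative equivalence conclusion.
    """
    # Per-jig differences between the subject and predicate readings.
    diffs = [a - b for a, b in zip(checq_isq, predicate_isq)]
    mean_diff = statistics.mean(diffs)
    sd_diff = statistics.stdev(diffs) if len(diffs) > 1 else 0.0
    return {
        "mean_diff": mean_diff,
        "sd_diff": sd_diff,
        "equivalent": abs(mean_diff) <= max_mean_diff,
    }

# Usage: simulated paired readings on five standardized jigs (invented data).
checq = [62, 71, 55, 68, 74]
beacon = [61, 72, 56, 67, 73]
summary = equivalence_summary(checq, beacon)
```

    A fuller analysis would typically use a Bland-Altman style agreement assessment rather than a single mean-difference cutoff.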

    6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    Yes, in a limited sense: a "standalone" performance comparison was done, but it compared the device's measurement (an ISQ score) against the measurement of another device (the Osstell Beacon) on artificial test jigs, not against a clinical ground truth for diagnostic accuracy. The ChecQ reports the Implant Stability Quotient (ISQ), ranging from 1 to 99, directly.

    7. The type of ground truth used

    The "ground truth" for the non-clinical comparison was the output of the predicate device (Osstell Beacon) under the same test conditions, and engineering specifications for mechanical properties and dimensions. It was not expert consensus, pathology, or outcomes data from patients.

    8. The sample size for the training set

    Not applicable. This device is an electro-mechanical measurement device, not an AI/ML algorithm that requires a "training set" in the context of machine learning. The device determines ISQ values based on resonance frequency analysis and established formulas, not through learned patterns from a training dataset.

    9. How the ground truth for the training set was established

    Not applicable, for the same reason as point 8.
