
510(k) Data Aggregation

    K Number: K112380
    Device Name: CLEARVISION
    Manufacturer: (not stated)
    Date Cleared: 2011-11-22 (97 days)
    Product Code: (not stated)
    Regulation Number: 872.1800
    Intended Use

    ClearVision is intended to be used by dentists and other qualified professionals for producing diagnostic x-ray radiographs of dentition, jaws and other oral structures.

    Device Description

    ClearVision is a digital imaging system for dental radiographic application. The product is to be used for routine dental radiographic examinations such as bitewings, periapicals, etc. Two different sized sensors (size 1 and size 2) are utilized to image different anatomy and for different patient sizes. The CMOS sensor connects directly to a USB connection in a PC without the need for an intermediate electrical interface. ClearVision works with a standard dental intraoral x-ray source without any connection to the x-ray source. ClearVision captures an image automatically upon sensing the production of x-ray and after the x-ray is complete, transfers the image to an imaging software program on the PC. Disposable sheaths are used with each use to prevent cross-contamination between patients.
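    The capture-and-transfer behavior described above can be pictured as a simple event sequence: wait for x-ray exposure to be sensed, read out the frame once the exposure is complete, and hand the image to imaging software on the PC. The sketch below is purely illustrative; the class and function names (Sensor, wait_for_xray, read_frame, capture_and_transfer) are invented for this example and do not reflect ClearVision's actual driver or API.

```python
# Illustrative sketch of the auto-triggered capture workflow described above.
# All names are hypothetical; the real ClearVision interface is not described
# in the 510(k) summary.
import random
import time


class Sensor:
    """Stand-in for a USB-attached CMOS intraoral sensor."""

    def __init__(self, size: int):
        self.size = size  # size 1 or size 2, per the device description

    def wait_for_xray(self) -> None:
        # The real device triggers itself when it senses x-ray exposure;
        # here we simply simulate a short wait.
        time.sleep(0.1)

    def read_frame(self) -> list[list[int]]:
        # Simulated pixel data; the real sensor streams this over USB.
        return [[random.randint(0, 4095) for _ in range(4)] for _ in range(3)]


def capture_and_transfer(sensor: Sensor) -> list[list[int]]:
    """Capture automatically after exposure, then hand off to imaging software."""
    sensor.wait_for_xray()       # exposure detected
    frame = sensor.read_frame()  # image read out after exposure completes
    # In practice the frame would be passed to an imaging program on the PC.
    return frame


if __name__ == "__main__":
    image = capture_and_transfer(Sensor(size=2))
    print(f"captured {len(image)}x{len(image[0])} test frame")
```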

    AI/ML Overview

    The following is an analysis of the provided 510(k) summary for the ClearVision Digital Sensor System.

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria for the ClearVision Digital Sensor System are derived from its comparison to predicate devices (Schick CDR and Gendex GXS-700) and general engineering requirements. The reported device performance indicates equivalency or superiority to these predicates.

    | Acceptance Criteria Category | Specific Criteria/Test | Predicate A (Schick CDR) Performance | Predicate B (Gendex GXS-700) Performance | ClearVision Sensor Performance | Met? |
    |---|---|---|---|---|---|
    | Imaging Performance | Image line pair phantom | -- | -- | Equivalent to GXS-700, superior to Schick CDR | Yes |
    | Imaging Performance | Image aluminum step wedge | -- | -- | Equivalent to GXS-700, superior to Schick CDR | Yes |
    | Imaging Performance | Image tooth phantom | -- | -- | Equivalent to GXS-700, superior to Schick CDR | Yes |
    | Electrical Safety | IEC 60601-1 compliance | -- | -- | Meets requirements | Yes |
    | EMI/EMC | IEC 60601-1-2 compliance | -- | -- | Meets requirements | Yes |
    | Durability | Sensor housing and cable mechanical testing | -- | -- | Met all specified requirements | Yes |
    | Reliability | Consistent image capture and transfer over extended life | -- | -- | Completely reliable | Yes |
    | Image Quality Consistency | Consistent over expected lifetime exposures to radiation | -- | -- | Meets requirements | Yes |
    | Hermetic Classification | IP67 per IEC 60529 | -- | -- | Meets requirements | Yes |

    Note: The document states the device was "found to be equivalent to the Gendex GXS-700 in all three tests and superior to the Schick sensor in all three imaging tests," implying that the GXS-700's performance served as the primary equivalency benchmark for ClearVision's imaging performance.
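    For readers who prefer the table in a machine-checkable form, it can be restated as a small checklist data structure. This is a hypothetical sketch only; the entries mirror the table above, and the Criterion type is invented for illustration.

```python
# Hypothetical restatement of the acceptance-criteria table as data.
# Categories and outcomes are taken from the table above; the structure
# itself is not part of the submission.
from dataclasses import dataclass


@dataclass
class Criterion:
    category: str
    test: str
    result: str
    met: bool


criteria = [
    Criterion("Imaging Performance", "Line pair phantom",
              "Equivalent to GXS-700, superior to Schick CDR", True),
    Criterion("Imaging Performance", "Aluminum step wedge",
              "Equivalent to GXS-700, superior to Schick CDR", True),
    Criterion("Imaging Performance", "Tooth phantom",
              "Equivalent to GXS-700, superior to Schick CDR", True),
    Criterion("Electrical Safety", "IEC 60601-1", "Meets requirements", True),
    Criterion("EMI/EMC", "IEC 60601-1-2", "Meets requirements", True),
    Criterion("Durability", "Housing and cable mechanical testing",
              "Met all specified requirements", True),
    Criterion("Reliability", "Image capture/transfer over extended life",
              "Completely reliable", True),
    Criterion("Image Quality Consistency", "Lifetime radiation exposures",
              "Meets requirements", True),
    Criterion("Hermetic Classification", "IP67 per IEC 60529",
              "Meets requirements", True),
]

# Every row in the table above reports "Yes" in the Met? column.
assert all(c.met for c in criteria)
```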

    2. Sample Size Used for the Test Set and Data Provenance

    The document does not explicitly state a numerical sample size for the test set used in the imaging performance comparison. It mentions using "each sensor to image a line pair phantom, an aluminum step wedge, and a tooth phantom," which implies at least one image of each phantom per sensor.

    The data provenance is not explicitly stated. Given the context of a 510(k) submission and the nature of the tests (imaging phantoms), this was most likely prospective bench data generated in a controlled laboratory or engineering setting, presumably in the United States where the company is based. There is no mention of patient data or clinical trials.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    The document does not describe the use of human experts to establish ground truth for the test set. The tests performed ("imaging a line pair phantom, an aluminum step wedge, and a tooth phantom") are objective, physical measurements against established standards for image quality and resolution (e.g., line pairs, step wedge density differences). Therefore, the "ground truth" would be inherent in the physical phantoms themselves and the objective metrics used to evaluate the images.

    4. Adjudication Method for the Test Set

    No adjudication method is described, as the evaluation methods appear to be objective and quantitative (e.g., measuring line pairs, density differences). Human interpretation or consensus for ground truth was not mentioned or implied.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size

    No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done. The study described is a technical performance comparison of the device against predicate devices using physical phantoms, not a clinical study involving human readers or patient cases. Therefore, there is no effect size related to human reader improvement with or without AI assistance.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    Yes, the described imaging performance comparison of the ClearVision sensor is a standalone assessment. The evaluation focuses solely on the device's ability to capture and produce images from phantoms, without any involvement of a human interpreter in the loop for diagnostic decision-making during the testing process. The device itself is a digital sensor, not an AI algorithm.

    7. The Type of Ground Truth Used

    The ground truth used for the imaging performance tests consisted of the objective, physical standards embodied by the phantoms themselves (a brief sketch of such metrics follows this list):

    • Line pair phantom: Provides known spatial frequencies (lines per millimeter) to assess resolution.
    • Aluminum step wedge: Provides known material thicknesses/densities to assess contrast and dynamic range.
    • Tooth phantom: Likely provides a realistic but standardized representation of dental anatomy to assess overall image quality and detail capture.
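    To make the objectivity of these phantom-based ground truths concrete, the sketch below shows the kind of simple metrics they support: mean pixel values across wedge steps, and Michelson modulation across a line-pair group. The functions and values are assumptions for illustration; the 510(k) summary does not describe the exact metrics or thresholds used.

```python
# Minimal sketch of objective phantom-based image-quality metrics.
# Illustrative only; not the actual ClearVision acceptance metrics.
import numpy as np


def step_wedge_contrast(image: np.ndarray, step_columns: list[slice]) -> list[float]:
    """Mean pixel value per wedge step; monotonic steps indicate usable dynamic range."""
    return [float(image[:, cols].mean()) for cols in step_columns]


def line_pair_modulation(profile: np.ndarray) -> float:
    """Michelson modulation of an intensity profile across a line-pair group."""
    p_max, p_min = profile.max(), profile.min()
    return float((p_max - p_min) / (p_max + p_min))


if __name__ == "__main__":
    # Synthetic 3-step wedge: three bands of increasing brightness.
    wedge = np.hstack([np.full((10, 10), v) for v in (50.0, 120.0, 200.0)])
    print(step_wedge_contrast(wedge, [np.s_[0:10], np.s_[10:20], np.s_[20:30]]))

    # Synthetic line-pair profile: alternating dark/bright bars.
    bars = np.tile(np.array([30.0, 220.0]), 8)
    print(round(line_pair_modulation(bars), 3))
```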

    8. The Sample Size for the Training Set

    The document describes a physical medical device (a digital X-ray sensor), not an AI/machine learning algorithm. Therefore, there is no training set in the context of an algorithm or AI model. The device's "training" or development would have involved engineering design, prototyping, and iterative testing to meet specifications, but not a dataset for training an algorithm.

    9. How the Ground Truth for the Training Set Was Established

    Since this is a physical device and not an AI algorithm, the concept of a "training set" and establishing ground truth for it is not applicable. The device's "ground truth" during its development would have been established through engineering specifications, material properties, and performance targets derived from scientific principles and a comparison to existing technologies.
