
510(k) Data Aggregation

    K Number: K250716
    Device Name: IMAGE ONE
    Date Cleared: 2025-07-23 (135 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices:
    Applicant Name (Manufacturer): Infomed Software, S.L.
    Intended Use

    IMAGE ONE is dental imaging software that is intended to provide viewer and image processing tools for maxillofacial radiographic images. These tools are available to view and interpret a series of DICOM compliant dental radiology images and are meant to be used by trained medical professionals such as radiologists and dentists.

    IMAGE ONE is intended for use as software to acquire, view, and save 2D and 3D image files, to load DICOM project files from panorama, cephalometric, and intra-oral imaging equipment, and to provide 3D visualization and 2D analysis.

    Device Description

    IMAGE ONE is image management software which runs in a browser, with a user interface that allows the dentist to access and display patients' images or videos taken during the course of clinical practice.

    IMAGE ONE is cloud-based software which runs in a web browser with no access to hardware devices.

    The only way that IMAGE ONE may retrieve files is by requesting them from the file system of the operating system.

    There are two scenarios involving these files:

    • In the first, the files were generated in the past by an external system, in which case IMAGE ONE allows the user to import and catalog them immediately.
    • In the second, the files do not yet exist and will be generated during a clinical procedure by an external system. The user can ask IMAGE ONE to wait for the external acquisition process to complete, then import and catalog the files automatically.
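    The two scenarios above can be sketched as a small import routine. This is an illustrative sketch, not code from the submission; the `catalog` function and the polling parameters are assumptions:

```python
import os
import time

def catalog(path: str) -> None:
    """Illustrative stand-in for adding a file to the image catalog."""
    print(f"cataloged {path}")

def import_existing(paths: list[str]) -> list[str]:
    """Scenario 1: files already generated in the past by an external system."""
    imported = []
    for path in paths:
        if os.path.exists(path):
            catalog(path)
            imported.append(path)
    return imported

def wait_and_import(path: str, timeout_s: float = 30.0, poll_s: float = 0.5) -> bool:
    """Scenario 2: wait for an external acquisition to produce the file, then catalog it."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if os.path.exists(path):
            catalog(path)
            return True
        time.sleep(poll_s)
    return False
```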

    IMAGE ONE software supports image and video standards commonly used in dentistry to date. These include X-ray radiographic images in PNG, JPEG, or BMP format, 3D models in STL format obtained from intraoral scans, intraoral video in MP4 format, and computed tomography images in DICOM format.
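    As an illustration of routing the formats listed above, a minimal classifier keyed on file extension; the category names are hypothetical, not from the submission:

```python
from pathlib import Path

# The formats named in the 510(k) summary, mapped to illustrative media categories.
MEDIA_TYPES = {
    ".png": "2d_radiograph",
    ".jpeg": "2d_radiograph",
    ".jpg": "2d_radiograph",
    ".bmp": "2d_radiograph",
    ".stl": "3d_model",        # intraoral scan
    ".mp4": "intraoral_video",
    ".dcm": "ct_dicom",        # computed tomography
}

def classify(filename: str) -> str:
    """Return the media category for a file, or 'unsupported'."""
    return MEDIA_TYPES.get(Path(filename).suffix.lower(), "unsupported")

print(classify("scan.PNG"))  # 2d_radiograph
```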

    In addition to its image management functions, the software also offers, through its user interface, a visualization enhancement function (zoom and filter functions) for X-ray radiographic images in PNG, JPEG, or BMP format. A distance measuring function is also included, but it is strictly restricted to X-ray radiographic images in PNG, JPEG, or BMP format that display a calibrator.
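    The calibrator-based distance function implies a simple pixel-to-millimetre conversion. A minimal sketch under that assumption (the submission gives no formulas or function names):

```python
import math

def mm_per_pixel(calibrator_px: float, calibrator_mm: float) -> float:
    """Scale factor derived from a calibrator of known physical size visible in the image."""
    return calibrator_mm / calibrator_px

def distance_mm(p1: tuple[float, float], p2: tuple[float, float], scale: float) -> float:
    """Euclidean distance between two image points, converted to millimetres."""
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1]) * scale

# A calibrator known to be 10 mm long spans 100 px, so 1 px = 0.1 mm:
scale = mm_per_pixel(100.0, 10.0)
print(distance_mm((0, 0), (30, 40), scale))  # 5.0
```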

    AI/ML Overview

    The provided FDA 510(k) clearance letter for the IMAGE ONE device (K250716) is a summary of the clearance process, not a comprehensive study report. It therefore contains limited detail on the specific acceptance criteria and the full study that proves the device meets them. Medical device 510(k) submissions typically include extensive verification and validation documentation, which is only summarized in the publicly available clearance letter.

    Based on the provided text, here's what can be extracted and what information is missing:


    1. Table of Acceptance Criteria and Reported Device Performance

    The document states that "SW verification/validation and the measurement accuracy test were conducted to establish the performance, functionality and reliability characteristics of the modified devices. The device passed all of the tests based on pre-determined Pass/Fail criteria."

    However, the specific "pre-determined Pass/Fail criteria" (acceptance criteria) and the quantitative "reported device performance" are not explicitly detailed in the provided 510(k) summary. These would typically be found in the full testing and validation reports submitted to the FDA, not in the public summary.

    The "Performance Data" section merely states:

    Feature/Test                     | Acceptance Criteria (Implicit)                       | Reported Device Performance (Summary)
    ---------------------------------|------------------------------------------------------|--------------------------------------
    Software Verification/Validation | All tests based on pre-determined Pass/Fail criteria | Device passed all tests
    Measurement Accuracy Test        | All tests based on pre-determined Pass/Fail criteria | Device passed all tests
    Functionality                    | Established through SW V&V                           | Established through SW V&V
    Reliability                      | Established through SW V&V                           | Established through SW V&V

    Missing Information: The specific numerical or qualitative thresholds for "Pass/Fail criteria" for accuracy, functionality, and reliability are not provided. For example, for measurement accuracy, it doesn't state "measurement error must be less than X mm."
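    If such a threshold were published, the pass/fail check would reduce to comparing measurement error against a tolerance. A hypothetical sketch (both the rule and the numbers are assumptions, not values from the submission):

```python
def passes_accuracy(measured_mm: float, true_mm: float, tolerance_mm: float) -> bool:
    """Hypothetical pass/fail rule: absolute measurement error within tolerance."""
    return abs(measured_mm - true_mm) <= tolerance_mm

def run_accuracy_test(cases: list[tuple[float, float]], tolerance_mm: float) -> bool:
    """Apply the rule to (measured, true) pairs; the test passes only if every case passes."""
    return all(passes_accuracy(m, t, tolerance_mm) for m, t in cases)

# Example with a hypothetical 0.5 mm tolerance:
print(run_accuracy_test([(10.1, 10.0), (9.8, 10.0)], tolerance_mm=0.5))  # True
```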


    2. Sample Size Used for the Test Set and Data Provenance

    The document does not specify the sample size used for the test set.

    Data Provenance: The document does not specify the country of origin of the data or whether the data was retrospective or prospective.


    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications of Experts

    The document does not specify the number of experts used or their qualifications for establishing ground truth for the test set. It only states that the device is meant to be used by trained medical professionals such as radiologists and dentists. This implies that the ground truth would likely be established by such professionals, but no details are given about the validation process for the test set.


    4. Adjudication Method for the Test Set

    The document does not specify any adjudication method (e.g., 2+1, 3+1, none) used for the test set.


    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

    The document does not mention a Multi-Reader Multi-Case (MRMC) comparative effectiveness study. The focus of this 510(k) is on the software's functionality and measurement accuracy rather than a comparative study of human readers with and without AI assistance. The device is described as "viewer and image processing tools" and "software to acquire, view, and save," implying it's a tool to assist, not an AI for diagnosis that would typically require MRMC studies.


    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    The document suggests that the performance evaluation was primarily focused on the device's technical functionality and accuracy. The measurement accuracy test could be considered a form of standalone performance evaluation for that specific feature. However, it is not explicitly labeled as "standalone" performance for an AI algorithm. Given the description ("viewer and image processing tools"), the device's primary function seems to be assisting human interpretation rather than providing autonomous diagnostic outputs.


    7. The Type of Ground Truth Used

    The document does not explicitly state the type of ground truth used. However, for a "measurement accuracy test" on X-ray images, common ground truth methods would involve:

    • Physical measurements: Using a known, precisely measured object (like a caliper) on the image as a reference.
    • Expert Consensus: For features beyond simple measurements, radiologists/dentists providing a consensus reading.

    The document heavily emphasizes the use of a "caliper of known size" for distance measurement, implying that physical measurement references (calipers) would form a key part of the ground truth for this specific function.


    8. The Sample Size for the Training Set

    The document does not mention a training set sample size or any details about a training set. This is consistent with the nature of the device, which is described as image viewing and processing software with specific tools (zoom, filters, linear measurement). The description does not indicate that it is an AI/ML device that requires a dedicated training set beyond standard software development and testing practices. The "Performance Data" section refers to "SW verification/validation and the measurement accuracy test," which are typical for traditional software, not necessarily AI model training.


    9. How the Ground Truth for the Training Set Was Established

    Since the document does not mention a training set (as the device is not described as a machine learning model), the method for establishing ground truth for a training set is not applicable based on the provided information.
