
510(k) Data Aggregation

    K Number: K160788
    Device Name: RIOScan (RPS500)
    Manufacturer:
    Date Cleared: 2016-04-15 (24 days)
    Product Code:
    Regulation Number: 872.1800
    Reference & Predicate Devices
    Reference Devices:

    K143000

    Intended Use

    This system is a digital intraoral dental radiographic imaging system intended for use by dentists and dental sub-specialists. The system captures, displays, and stores diagnostic intraoral radiographic images.

    Device Description

    RIOScan (Model RPS500) is a computed radiography system for dental intraoral applications. Imaging plates (i.e., storage phosphor plates) are exposed in the same way as traditional x-ray film. The exposed plates are then fed into a small computed radiography unit and scanned with a laser. The scanned image data from the plates are digitized, and the images are displayed on a monitor and saved to a computer. The RPS500 can scan exposed imaging plates at various speeds, sizes, and resolutions. Once an image is scanned, the image data are automatically erased from the plate and the plate is ejected for reuse. The RIOScan digital scanner does not have a wireless option for data transmission.
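The plate workflow described above (expose, scan, display/save, erase, eject) can be sketched as a simple state machine. This is an illustrative sketch only; the state names are invented here and are not taken from the RIOScan software:

```python
from enum import Enum, auto

class PlateState(Enum):
    """Hypothetical lifecycle states for a storage phosphor plate."""
    EXPOSED = auto()   # plate holds a latent x-ray image
    SCANNED = auto()   # laser readout complete, image digitized and saved
    ERASED = auto()    # residual image automatically cleared
    EJECTED = auto()   # plate returned to the user for reuse

_ORDER = [PlateState.EXPOSED, PlateState.SCANNED,
          PlateState.ERASED, PlateState.EJECTED]

def advance(state: PlateState) -> PlateState:
    """Move a plate one step through the scan/erase/eject cycle."""
    i = _ORDER.index(state)
    return _ORDER[min(i + 1, len(_ORDER) - 1)]
```

After ejection the plate re-enters the cycle by being exposed again, which is what makes the plates reusable in place of single-use film.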

    AI/ML Overview

    The provided document is a 510(k) Premarket Notification for a dental intraoral radiographic imaging system (RIOScan, Model RPS500). It focuses on establishing substantial equivalence to previously marketed predicate devices rather than providing a detailed study of the device's performance against specific acceptance criteria for a novel AI/medical imaging algorithm.

    Therefore, much of the requested information regarding acceptance criteria, sample sizes, expert involvement, and ground truth establishment for an AI performance study is not available in this document. This submission primarily relies on non-clinical performance data and comparisons of technical specifications to predicate devices.

    Here's an attempt to answer the questions based on the available information, noting when information is absent:


    1. A table of acceptance criteria and the reported device performance

    The document doesn't explicitly state "acceptance criteria" in the context of an AI algorithm's performance on a test set (e.g., sensitivity, specificity thresholds). Instead, for this imaging device, the performance is demonstrated through technical specifications and comparisons to predicate devices.

    | Acceptance Criteria (Implied / Comparison Point) | RIOScan (RPS500) Performance | Predicate 1 (DIGORA Optime) | Predicate 2 (FireCR Dental) |
    |---|---|---|---|
    | MTF (Modulation Transfer Function) @ 3 lp/mm | More than 35% | 32% | 34% |
    | DQE (Detective Quantum Efficiency) @ 3 lp/mm | More than 10% | 10% @ 2.4 lp/mm | 10% @ 2.6 lp/mm |
    | Resolution (theoretical) | HS: 9 lp/mm; HR: 16 lp/mm; SHR: 21 lp/mm | 16.7 lp/mm | HS: 7.8 lp/mm; HR: 14.3 lp/mm |
    | Pixel size (selectable) | 24 µm (high speed), 32 µm (super resolution), 56 µm (super high resolution) | 30 µm (super resolution), 60 µm (high resolution) | 35 µm (super resolution), 64 µm (high resolution) |
    | Image data bit depth | 14 bit | 14 bit | 16 bit |

    Note: The document explicitly states: "Base on the Non-Clinical Test report, Even though the pixel size and active area of predicate detectors are different, the diagnostic image quality of RPS500 detector is equal or better than that of predicate device and there is no significant difference in efficiency and safety." This serves as the overarching "acceptance."
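The "theoretical resolution" figures quoted for these scanners follow directly from the pixel pitch: assuming Nyquist-limited sampling, a detector with pitch p (in µm) resolves at most 1000 / (2p) line pairs per mm. A minimal sketch (this relation is standard sampling theory, not something stated in the 510(k) itself):

```python
def nyquist_lp_per_mm(pixel_um: float) -> float:
    """Nyquist-limited resolution (lp/mm) for a pixel pitch given in micrometers."""
    return 1000.0 / (2.0 * pixel_um)

# The pitches listed for the RPS500 reproduce its quoted resolutions:
for pitch_um in (56, 32, 24):
    print(f"{pitch_um} µm pitch -> {nyquist_lp_per_mm(pitch_um):.1f} lp/mm")
```

This yields roughly 9, 16, and 21 lp/mm for the 56, 32, and 24 µm pitches, and the same formula gives 16.7 lp/mm for DIGORA Optime's 30 µm pitch, matching the table.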

    2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

    • Sample Size: This information is not provided as the submission relies on non-clinical (engineering/bench) testing rather than a clinical human-subject study or an image-based test set for an AI algorithm.
    • Data Provenance: Not applicable, as the evaluation is based on technical specifications and non-clinical tests.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)

    • This information is not provided and is not applicable to this type of submission, which focuses on device specifications and non-clinical performance, not diagnostic accuracy requiring expert interpretation.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set

    • Not applicable. No multi-reader adjudication method was used as there was no clinical image test set or AI algorithm evaluation requiring human interpretation.

    5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance

    • No, an MRMC comparative effectiveness study was not done. The document states: "In the case of the RioScan system, clinical images are not necessary to establish substantial equivalence, and the non-clinical performance data alone show that the device works as intended." There is no AI component discussed in this submission that would assist human readers.

    6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    • No. This submission is for a medical imaging device (scanner), not an AI algorithm. The performance evaluation is based on the scanner's physical and technical capabilities (e.g., MTF, DQE) rather than an algorithm's output.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    • Not applicable in the context of diagnostic accuracy. For the technical performance aspects, the "ground truth" would be established by industry standards for measuring MTF, DQE, resolution, etc., using specific optical benches and phantoms.
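For context on that bench-test "ground truth": a metric like MTF is typically derived from a measured edge or line spread function. The sketch below is illustrative of the general technique (magnitude of the Fourier transform of a normalized line spread function), not of the RIOScan test protocol:

```python
import numpy as np

def mtf_from_lsf(lsf, pixel_mm):
    """MTF from a sampled line spread function (LSF).

    Normalizing the LSF area to 1 makes MTF(0) = 1; frequencies come out
    in cycles/mm (lp/mm) because the sample spacing is given in mm.
    """
    lsf = np.asarray(lsf, dtype=float)
    lsf = lsf / lsf.sum()
    mtf = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(len(lsf), d=pixel_mm)
    return freqs, mtf

# Synthetic example: a Gaussian LSF sampled at a 24 µm (0.024 mm) pitch.
x = np.arange(-32, 32) * 0.024
lsf = np.exp(-x**2 / (2 * 0.05**2))
freqs, mtf = mtf_from_lsf(lsf, 0.024)
```

A real bench measurement would use a slit or slanted-edge phantom per the relevant IEC standards rather than a synthetic Gaussian.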

    8. The sample size for the training set

    • Not applicable. This submission is for hardware (a scanner), not a machine learning algorithm. Therefore, there is no "training set."

    9. How the ground truth for the training set was established

    • Not applicable. As there is no training set for an AI algorithm, no ground truth was established for it.