
510(k) Data Aggregation

    K Number: K965203
    Device Name: RADX SYSTEM
    Date Cleared: 1997-03-17 (81 days)
    Product Code:
    Regulation Number: 872.1800
    Reference Devices: K914376
    Intended Use

    The RADx System digitizes and processes dental radiographs to perform longitudinal radiographic analysis using the digital subtraction technique. This technique is helpful in the detection of hard tissue changes, including pathologies, as well as the resolution of those same pathologies.

    Device Description

    The RADx System enables the dental practitioner to take a longitudinal series of X-rays using the long-cone paralleling technique, digitize the resulting images, and perform digital subtraction analysis to detect very small changes in bone density. The RADx System consists of a high-resolution digital image scanner and the RADx software program. Also required are an IBM-compatible personal computer, a long cone, and a parallel aiming system. The ability of the RADx System to create spatially registered images for comparison without the use of rigid projection geometry allows longitudinal radiographic analysis using equipment readily available to the average dental practitioner.
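    The core operation described here, digital subtraction, amounts to a pixel-wise difference between two spatially registered radiographs, with small differences suppressed as noise. A minimal sketch (the threshold value and array sizes are invented for illustration; the actual RADx implementation is not described in the summary):

    ```python
    import numpy as np

    def digital_subtraction(baseline, follow_up, threshold=10):
        """Subtract two spatially registered grayscale radiographs.

        Density gains/losses appear as positive/negative differences;
        values within +/- threshold are treated as noise and zeroed.
        """
        diff = follow_up.astype(np.int16) - baseline.astype(np.int16)
        diff[np.abs(diff) < threshold] = 0
        return diff

    # Toy example: one pixel in a uniform 4x4 region loses density.
    baseline = np.full((4, 4), 120, dtype=np.uint8)
    follow_up = baseline.copy()
    follow_up[1, 2] = 90               # simulated bone-density loss
    change_map = digital_subtraction(baseline, follow_up)
    print(change_map[1, 2])            # -30
    ```

    The subtraction is only meaningful if the two images are first brought into spatial register, which is why the summary's performance claims center on the registration (warping) step.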

    AI/ML Overview

    The provided text describes the RADx System, a longitudinal radiographic analysis system that digitizes dental X-rays and performs digital subtraction analysis to detect small changes in bone densities. However, the document is a 510(k) summary from 1997 and lacks the detailed information required to fully answer all aspects of your request, particularly regarding specific acceptance criteria, a formal study with statistical data, and modern regulatory study requirements.

    Here's an attempt to extract and infer information based on the provided text, highlighting where information is unavailable:

    1. Table of Acceptance Criteria and Reported Device Performance

    The 510(k) summary does not explicitly list quantitative acceptance criteria in a dedicated table format, nor does it provide specific reported device performance metrics such as sensitivity, specificity, accuracy, or AUC. Instead, it makes a general claim of achieving performance "well within the range reported in the literature for accepted manual registration methods."

    • Registration Error
      Acceptance criterion (inferred from text): reduction of registration error to a level "well within the range reported in the literature for accepted manual registration methods."
      Reported performance (inferred from text): achieved; the summary states the "warping algorithm reduces registration error to a level that is well within the range reported in the literature for accepted manual registration methods and either alignment stent or long source-to-object projection techniques."
    • Suitability of Commercially Available Scanners
      Acceptance criterion (inferred from text): ability to adequately digitize X-rays for longitudinal analysis.
      Reported performance (inferred from text): verified; "Non-clinical tests were conducted to evaluate the suitability of commercially available scanners for digitizing x-rays."
    • System Functionality
      Acceptance criterion (inferred from text): ability to digitize and process dental radiographs and perform digital subtraction analysis to detect hard tissue changes.
      Reported performance (inferred from text): demonstrated by the system's intended function and the general conclusion of substantial equivalence.
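    The summary does not describe the warping algorithm itself. One common family of landmark-based registration methods fits a global transform (e.g., affine) between corresponding points by least squares; the sketch below shows that idea under that assumption, with invented landmark coordinates, and is not a reconstruction of the RADx method:

    ```python
    import numpy as np

    def fit_affine(src, dst):
        """Least-squares affine transform mapping src landmarks onto dst.

        src, dst: (N, 2) arrays of corresponding (x, y) points, N >= 3.
        Returns a 2x3 matrix A such that [x, y, 1] @ A.T approximates dst.
        """
        n = src.shape[0]
        X = np.hstack([src, np.ones((n, 1))])    # homogeneous coords, (N, 3)
        coeffs, *_ = np.linalg.lstsq(X, dst, rcond=None)
        return coeffs.T                           # (2, 3)

    # A pure translation of (5, -3) is recovered exactly.
    src = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.]])
    dst = src + np.array([5., -3.])
    A = fit_affine(src, dst)
    mapped = np.hstack([src, np.ones((4, 1))]) @ A.T
    print(np.allclose(mapped, dst))               # True
    ```

    A warping algorithm in the summary's sense would go beyond a single global transform (local deformation), but the least-squares fit illustrates the basic mechanism of aligning images without rigid projection geometry.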

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size: Not explicitly stated. The document refers to "non-clinical tests" but does not give the number of images or cases used in these tests.
    • Data Provenance: Not explicitly stated. Given the era and the nature of non-clinical tests, it likely involved a limited number of test images, potentially from internal sources or publicly available datasets at the time, if any. Whether they were retrospective or prospective is not mentioned.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications

    • Number of Experts: Not mentioned.
    • Qualifications: Not mentioned. It is unclear whether expert review was even part of the "non-clinical tests" beyond internal engineering and scientific evaluation. The reference to "accepted manual registration methods" implies a comparison to human performance, but no formal ground-truth establishment for the test data is described.

    4. Adjudication Method for the Test Set

    • Not mentioned. Given the non-clinical nature of the tests described, a formal adjudication process with multiple experts is unlikely to have been documented in this summary.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done

    • No. The document does not describe an MRMC study. The "non-clinical tests" were focused on the technical performance of the warping algorithm and scanner suitability, not on human reader performance with or without AI assistance.
    • Effect Size of Human Readers Improvement: Not applicable, as no MRMC study was conducted or reported.

    6. If a Standalone Study (i.e., Algorithm Only, Without Human-in-the-Loop Performance) Was Done

    • Yes, implicitly. The "non-clinical tests" primarily focused on the technical performance of the RADx software and scanner. The evaluation of the "warping algorithm" and its ability to reduce registration error to a given level without relying on human interpretation of specific outcomes indicates a standalone assessment of key algorithmic components. However, this was not a full standalone diagnostic performance study (e.g., sensitivity/specificity for disease detection) but rather a technical validation of a processing step.

    7. The Type of Ground Truth Used

    • Inferred based on "registration error": The ground truth for the "registration error" would likely have been based on precisely measured known anatomical landmarks or fiducials on the X-ray images, or perhaps a gold standard manual registration performed by highly skilled individuals under controlled conditions.
    • For the "suitability of commercially available scanners," the ground truth would involve objective image quality metrics or visual assessment against established standards for diagnostic image quality.
    • There's no mention of pathology, outcomes data, or formal expert consensus creating a ground truth for diagnostic accuracy, as the tests focused on technical image processing rather than diagnostic performance per se.
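    A registration-error ground truth of the kind inferred above is typically summarized as a root-mean-square distance between matched fiducial points in the reference and registered images. A minimal sketch (the point coordinates are invented for illustration):

    ```python
    import numpy as np

    def registration_rmse(fiducials_ref, fiducials_reg):
        """RMS distance (in pixels) between matched fiducial point sets."""
        d = np.linalg.norm(fiducials_ref - fiducials_reg, axis=1)
        return float(np.sqrt(np.mean(d ** 2)))

    # Three fiducials, each displaced by 0.5 px after registration.
    ref = np.array([[10., 10.], [50., 12.], [30., 40.]])
    reg = ref + np.array([[0.5, 0.], [0., 0.5], [0.3, 0.4]])
    print(registration_rmse(ref, reg))   # 0.5
    ```

    A criterion like the summary's ("well within the range reported in the literature") would then be a comparison of this statistic against published values for manual registration.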

    8. The Sample Size for the Training Set

    • Not mentioned. As the document primarily describes "non-clinical tests" for a relatively early image processing system, machine learning paradigms with large training sets as understood today were almost certainly not in use in 1997. The "warping algorithm" would have been developed through algorithmic design and validation, possibly using smaller, controlled datasets, but not a "training set" in the modern AI sense.

    9. How the Ground Truth for the Training Set Was Established

    • Not applicable as a "training set" in the modern AI sense is not described. If there were data used for algorithm development or initial parameter tuning, the ground truth would likely have been established through methods relevant to image processing, such as highly accurate manual measurements or synthetically generated data with known transformations.

    Conclusion:

    The 510(k) summary for the RADx System from 1997 provides a high-level overview of its intended use and a general statement of equivalency based on "non-clinical tests." It emphasizes the technical capability of the system's image registration (warping algorithm) and scanner suitability. However, it significantly predates current rigorous requirements for AI/ML device validation and thus lacks detailed information regarding:

    • Quantitative acceptance criteria for diagnostic performance (sensitivity, specificity, etc.)
    • Specific sample sizes for test and training sets
    • Details on expert involvement for ground truth establishment
    • Formal MRMC studies or dedicated standalone performance studies for diagnostic accuracy.

    The information provided is more aligned with the technical validation of an image processing method rather than a comprehensive clinical or AI performance study.
