
510(k) Data Aggregation

    K Number
    K020888
    Date Cleared
    2002-11-07

    (234 days)

    Product Code
    Regulation Number
    886.1120
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    The Fundus AutoImager™ is an automated ocular fundus imaging device that allows for the rapid capture, storage, manipulation and transmission of images of the eye, especially the retina area, as an aid in diagnosing or monitoring diseases of the eye that may be observed and photographed.

    Device Description

    The Fundus AutoImager utilizes video cameras for alignment, focus, and tracking of the patient's pupils, and a high-resolution (1k x 1k) charge-coupled device (CCD) camera for focus and tracking of the fields of interest on the patient's fundus. The system acquires stereo pairs of images that are displayed on a video monitor. The operator can select monochrome or color imaging, and the images can be stored to disk, printed, or sent to a remote location via the Internet.

    AI/ML Overview

    This document is a 510(k) Summary for the Fundus AutoImager™ by Visual Pathways, Inc. It describes the device, its intended use, technological characteristics, and claims of substantial equivalence to predicate devices. The 510(k) lacks much of the detailed information needed to fully answer the questions below; for many of the requested items, the document either does not provide the specific information or does not allow it to be extracted.

    Here's an analysis of the provided text against your requested information:

    1. A table of acceptance criteria and the reported device performance

    The document does not explicitly state quantitative acceptance criteria in terms of accuracy, sensitivity, or specificity for the Fundus AutoImager™. Instead, the evaluation focuses on qualitative aspects and operational performance compared to predicate devices.

    | Acceptance Criteria (Implicit) | Reported Device Performance (as described in the document) |
    |---|---|
    | Image Quality: quality of images obtained | Confirmed to produce images of comparable quality to the Zeiss FF450 fundus camera. |
    | Ease of Operation: minimal operator training and intervention | Requires minimal operator training and intervention during the imaging process; improved performance through more automated, faster, and simpler operation. |
    | Time Required for Imaging: rapid capture of images | Shorter time to acquire a single stereo pair of images; shorter time to acquire multiple-field stereo imaging (e.g., NIH standard seven-field stereo imaging for diabetic retinopathy). |
    | Safety: patient and operator safety | Presents no new issues in regard to patient or operator safety. |
    | Substantial Equivalence: similar intended use and technological characteristics to predicate devices | Determined to be substantially equivalent to legally marketed predicate devices (Zeiss FF450 VISUPAC and Topcon Medical Systems' ImageNet Digital Ophthalmic Imaging System) in intended use, indications, and overall technological characteristics. |

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    • Sample Size for Test Set: Approximately 130 patients.
    • Data Provenance:
      • Country of Origin: Not explicitly stated, but the evaluation was directed by Stephen Fransen, M.D., Chief Scientific Officer of Inoveon Corp., Oklahoma City, OK, suggesting a US-based study.
      • Retrospective or Prospective: The study describes acquiring multiple field images "on approximately one hundred thirty patients," which implies a prospective data collection rather than retrospective analysis of existing images.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    • Number of Experts: At least one, Stephen Fransen, M.D. The text also mentions "Dr. Fransen and his team," which suggests more than one person was involved, but the exact number is not specified.
    • Qualifications of Experts: Stephen Fransen, M.D., is a board-certified ophthalmologist and retinal specialist. No specific experience length is mentioned for Dr. Fransen or his team.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    The document does not describe a formal adjudication method for establishing ground truth or resolving discrepancies. The evaluation was "directed" by Dr. Fransen, and he and his team "evaluated" the Fundus AutoImager and the images produced. This primarily indicates a single expert (or principal expert with a team) assessment rather than a formal adjudication process (e.g., multi-reader consensus, 2+1, 3+1).

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs. without AI assistance

    The document describes a comparative evaluation between the Fundus AutoImager™ and a predicate device (Zeiss FF450 fundus camera) regarding image quality and operational aspects. However, this was not an MRMC comparative effectiveness study designed to measure the improvement of human readers with AI assistance. The Fundus AutoImager™ itself is described as an "automated ocular fundus imaging device" that aids in diagnosing or monitoring diseases, but the study focuses on its performance in image acquisition and quality compared to another fundus camera, not on how its images improve human diagnostic performance. The effect size of human readers improving with AI vs. without AI assistance is therefore not addressed in this document.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    This device is an imaging system (ophthalmic camera), not an AI algorithm intended for automated diagnosis or interpretation. The document describes the device's ability to capture, store, manipulate, and transmit images as an aid in diagnosing or monitoring diseases, implying human interpretation is still within the loop. Therefore, a standalone (algorithm only) performance study as commonly understood for AI diagnostics was not performed, as it's not applicable to this type of device based on the provided description.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    The ground truth used for evaluating the images was based on the expert assessment/consensus of Stephen Fransen, M.D., a board-certified ophthalmologist and retinal specialist, and his team. The evaluation considered "the quality of images obtained" and their utility for diagnosis/monitoring.

    8. The sample size for the training set

    The document describes a clinical evaluation of the performance of the Fundus AutoImager and does not mention a separate "training set" in the context of an AI model development. The device is a hardware imaging system with automated features, not a conventional machine learning model that undergoes training on a large dataset. Therefore, the sample size for a training set is not applicable or not provided in this context.

    9. How the ground truth for the training set was established

    Since no training set for an AI model is described or applicable to the device as presented, information on how training-set ground truth was established is not provided.
