
510(k) Data Aggregation

    K Number
    K223357
    Device Name
    EyeArt v2.2.0
    Manufacturer
    Date Cleared
    2023-06-16

    (226 days)

    Product Code
    Regulation Number
    886.1100
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    EyeArt is indicated for use by healthcare providers to automatically detect more than mild diabetic retinopathy and vision-threatening diabetic retinopathy (severe non-proliferative diabetic retinopathy or proliferative diabetic retinopathy and/or diabetic macular edema) in eyes of adults diagnosed with diabetes who have not been previously diagnosed with diabetic retinopathy. EyeArt is indicated for use with Canon CR-2 AF, Canon CR-2 Plus AF, and Topcon NW400 cameras.

    Device Description

    EyeArt is a software as a medical device that consists of three components - Client, Server, and Analysis Computation Engine. A retinal fundus camera, used to capture retinal fundus images of the patient, is connected to a computer where the EyeArt Client software is installed. The EyeArt Client software provides a graphical user interface (GUI) that allows the EyeArt operator to transfer the appropriate fundus images to and receive results from the remote EyeArt Analysis Computation Engine through the EyeArt Server. The EyeArt Analysis Computation Engine is installed on remote computer(s) in a secure data center and uses artificial intelligence algorithms to analyze the fundus images and return results. EyeArt is intended to be used with retinal fundus images of resolution 1.69 megapixels or higher captured using one of the indicated retinal fundus cameras (Canon CR-2 AF, Canon CR-2 Plus AF, and Topcon NW400) with 45 degrees field of view. EyeArt is specified for use with two retinal fundus images per eye: optic nerve head (ONH) centered and macula centered. For each patient eye, the EyeArt results separately indicate whether "more than mild diabetic retinopathy (mtmDR)" and "vision-threatening diabetic retinopathy (vtDR)" are detected.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:

    Acceptance Criteria and Device Performance

    The provided document does not explicitly list pre-defined acceptance criteria in a separate table with specific numerical thresholds for sensitivity, specificity, etc. However, the performance metrics reported in the tables, together with the concluding statements ("The results of this prospective study support a determination of substantial equivalence between EyeArt v2.2.0 and EyeArt v2.1.0 and support the addition of the Topcon NW400 camera to the IFU statement" and "The results of this retrospective study support a determination of substantial equivalence between EyeArt v2.2.0 and EyeArt v2.1.0"), imply that the demonstrated performance met the FDA's expectations for substantial equivalence to the predicate device.

    The reported device performance for EyeArt v2.2.0 with the new Topcon NW400 camera and previous Canon cameras for detecting "more than mild diabetic retinopathy (mtmDR)" and "vision-threatening diabetic retinopathy (vtDR)" is summarized below. It's important to note that the comparison is implicit against the performance of the predicate device (EyeArt v2.1.0) and general expectations for diagnostic devices.

    Table of Reported Device Performance (Prospective Study EN-01b):

    | Metric | mtmDR (Canon CR-2 AF/Plus AF) | mtmDR (Topcon NW400) | vtDR (Canon CR-2 AF/Plus AF) | vtDR (Topcon NW400) |
    | --- | --- | --- | --- | --- |
    | Sensitivity | 95.9% [90.4% - 100%] (70/73) | 94.4% [88.3% - 98.8%] (68/72) | 96.8% [90.0% - 100%] (30/31) | 96.8% [89.5% - 100%] (30/31) |
    | Specificity | 86.4% [81.2% - 91.1%] (216/250) | 91.1% [86.8% - 94.8%] (226/248) | 91.7% [87.7% - 95.2%] (266/290) | 91.6% [87.5% - 95.1%] (263/287) |
    | PPV | 67.3% [55.9% - 77.4%] (70/104) | 75.6% [64.6% - 85.4%] (68/90) | 55.6% [39.2% - 72.0%] (30/54) | 55.6% [38.0% - 72.1%] (30/54) |
    | NPV | 98.6% [96.9% - 100%] (216/219) | 98.3% [96.4% - 99.6%] (226/230) | 99.6% [98.8% - 100%] (266/267) | 99.6% [98.5% - 100%] (263/264) |
    | Best-case Sensitivity | 95.9% [90.5% - 100.0%] (71/74) | 94.6% [88.6% - 98.8%] (70/74) | 96.8% [90.9% - 100.0%] (30/31) | 96.8% [89.5% - 100.0%] (30/31) |
    | Worst-case Sensitivity | 94.6% [88.9% - 98.8%] (70/74) | 91.9% [84.8% - 97.3%] (68/74) | 96.8% [90.0% - 100.0%] (30/31) | 96.8% [89.5% - 100.0%] (30/31) |
    | Best-case Specificity | 86.5% [81.3% - 91.2%] (218/252) | 91.3% [87.1% - 94.9%] (230/252) | 91.8% [87.9% - 95.3%] (269/293) | 91.8% [87.8% - 95.1%] (269/293) |
    | Worst-case Specificity | 85.7% [80.6% - 90.3%] (216/252) | 89.7% [85.05% - 93.3%] (226/252) | 90.8% [86.7% - 94.7%] (266/293) | 89.8% [85.7% - 93.6%] (263/293) |
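    The sensitivity, specificity, PPV, and NPV figures above are simple proportions over the eye-level counts shown in parentheses. The following sketch recomputes the mtmDR Canon column from those counts; the Wilson score interval shown is an assumption for illustration, since the submission does not state which interval method produced the bracketed confidence bounds.

```python
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """Approximate 95% Wilson score interval for a binomial proportion k/n.
    (The submission's intervals may use a different, e.g. exact, method.)"""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return max(0.0, centre - half), min(1.0, centre + half)

def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, PPV, and NPV from 2x2 confusion counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# mtmDR on Canon cameras (prospective study EN-01b): 70/73 positive eyes
# detected, 216/250 negative eyes correctly called negative.
m = diagnostic_metrics(tp=70, fn=3, tn=216, fp=34)
print(round(m["sensitivity"] * 100, 1))  # 95.9
print(round(m["specificity"] * 100, 1))  # 86.4
print(round(m["ppv"] * 100, 1))          # 67.3
print(round(m["npv"] * 100, 1))          # 98.6
```

    Note how PPV (70/104) is much lower than sensitivity: it depends on disease prevalence in the study population, not just on the algorithm.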

    Study Information:

    1. Sample Sizes and Data Provenance:

    • Test Set (Prospective Study EN-01b):

      • Accuracy Analysis: 336 eyes from 171 participants.
      • Precision Analysis: 264 eyes from 132 participants.
      • Data Provenance: Prospective, multi-center clinical study conducted in the United States (implied by FDA submission and context).
    • Test Set (Retrospective Study EN-01):

      • Accuracy Analysis: 1310 eyes from 655 participants.
      • Data Provenance: Retrospective, utilizing data already collected from the EyeArt pivotal multi-center clinical study (Protocol EN-01). Geographic origin not explicitly stated but likely United States due to the FDA context.

    2. Number of Experts and Qualifications for Ground Truth:

    • Number of Experts: Not explicitly stated as a count of individual experts, but the reference standard was determined by "experienced and certified graders" at the University of Wisconsin Reading Center (WRC).
    • Qualifications of Experts: "experienced and certified graders" at the University of Wisconsin Reading Center (WRC).

    3. Adjudication Method for the Test Set:

    • The text states the ground truth was determined by "experienced and certified graders at the University of Wisconsin Reading Center (WRC)." It does not specify a numerical adjudication method (e.g., 2+1, 3+1). It implies a consensus or designated "grader" process, but the specifics of how disagreements (if any) were resolved are not detailed.

    4. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

    • No, an MRMC comparative effectiveness study comparing human readers with AI vs. without AI assistance was not reported. The studies focused on the standalone diagnostic accuracy of the EyeArt device against a clinical reference standard (WRC grading).

    5. Standalone Performance (Algorithm Only):

    • Yes, standalone performance was done. The reported sensitivity, specificity, PPV, and NPV are measures of the EyeArt algorithm's performance in automatically detecting DR without direct human interpretation of the images in the diagnostic loop. The "EyeArt operator" described in the device description assists in image acquisition and receiving results, but the detection itself is algorithmic.

    6. Type of Ground Truth Used:

    • Expert Consensus / Clinical Reference Standard: The ground truth was established by a "Clinical reference standard (CRS)" which was determined by "experienced and certified graders at the University of Wisconsin Reading Center (WRC)" based on dilated 4-widefield stereo fundus images per the Early Treatment for Diabetic Retinopathy Study (ETDRS) severity scale. This is a form of expert consensus derived from a clinical gold standard imaging modality.

    7. Sample Size for the Training Set:

    • Not specified in the provided text. The document states that the six sites for the prospective study "did not contribute data used for training or development of EyeArt," implying a separate training dataset was used, but its size is not disclosed.

    8. How Ground Truth for Training Set was Established:

    • Not explicitly stated in the provided text. Given the consistency in methodology for the test set, it is highly probable that the ground truth for the training set was established similarly, using expert grading by ophthalmology specialists or a reading center. However, this is an inference, not directly stated for the training set.

    K Number
    K200667
    Device Name
    EyeArt
    Manufacturer
    Date Cleared
    2020-08-03

    (143 days)

    Product Code
    Regulation Number
    886.1100
    Reference & Predicate Devices
    Predicate For
    Intended Use

    EyeArt is indicated for use by healthcare providers to automatically detect more than mild diabetic retinopathy and visionthreatening diabetic retinopathy (severe non-proliferative diabetic retinopathy or proliferative diabetic retinopathy and/or diabetic macular edema) in eyes of adults diagnosed with diabetes who have not been previously diagnosed with more than mild diabetic retinopathy. EyeArt is indicated for use with Canon CR-2 Plus AF cameras in both primary care and eye care settings.

    Device Description

    EyeArt is a software as a medical device that consists of several components - Client, Server, and Analysis Computation Engine. A retinal fundus camera, used to capture retinal fundus images of the patient, is connected to a computer where the EyeArt Client software is installed. The EyeArt Client software provides a graphical user interface (GUI) that allows the EyeArt operator to transfer the appropriate fundus images to and receive results from the remote EyeArt Analysis Computation Engine through the EyeArt Server. The EyeArt Analysis Computation Engine is installed on remote computer(s) in a secure data center and uses artificial intelligence algorithms to analyze the fundus images and return results. EyeArt is intended to be used with color fundus images of resolution 1.69 megapixels or higher captured using one of the indicated color fundus cameras (Canon CR-2 AF and Canon CR-2 Plus AF) with 45 degrees field of view. EyeArt is specified for use with two color fundus images per eye: optic nerve head (ONH) centered and macula centered.

    For each patient eye, the EyeArt results separately indicate whether more than mild diabetic retinopathy (mtmDR) and vision-threatening diabetic retinopathy (vtDR) are detected. More than mild diabetic retinopathy is defined as the presence of moderate non-proliferative diabetic retinopathy or worse on the International Clinical Diabetic Retinopathy (ICDR) severity scale and/or the presence of diabetic macular edema. Vision-threatening diabetic retinopathy is defined as the presence of severe non-proliferative diabetic retinopathy or proliferative diabetic retinopathy on the ICDR severity scale and/or the presence of diabetic macular edema.
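    The two definitions above can be encoded directly as threshold rules on the ICDR severity scale plus DME status. A minimal sketch (the scale strings and function signature are illustrative, not from the submission):

```python
# ICDR severity levels in increasing order of severity.
ICDR = ["no DR", "mild NPDR", "moderate NPDR", "severe NPDR", "PDR"]

def classify(icdr_level: str, dme_present: bool):
    """Return (mtmDR, vtDR) flags per the definitions above:
    mtmDR = moderate NPDR or worse, and/or DME;
    vtDR  = severe NPDR or PDR, and/or DME."""
    rank = ICDR.index(icdr_level)
    mtmdr = dme_present or rank >= ICDR.index("moderate NPDR")
    vtdr = dme_present or rank >= ICDR.index("severe NPDR")
    return mtmdr, vtdr

print(classify("moderate NPDR", False))  # (True, False)
print(classify("mild NPDR", True))       # (True, True): DME alone triggers both
```

    Note that vtDR-positive eyes are by construction a subset of mtmDR-positive eyes, since both flags include DME and severe NPDR/PDR sits above moderate NPDR on the scale.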

    AI/ML Overview

    Here's a breakdown of the EyeArt device's acceptance criteria and the study that proves it meets them, based on the provided text:

    Acceptance Criteria and Reported Device Performance

    Device: EyeArt (v2.1.0)
    Indication for Use: To automatically detect more than mild diabetic retinopathy (mtmDR) and vision-threatening diabetic retinopathy (vtDR) in adults diagnosed with diabetes who have not been previously diagnosed with more than mild diabetic retinopathy.

    | Metric | Acceptance Criteria (implicit by achieving high performance) | Reported Device Performance (worst case across cohorts/outcomes) |
    | --- | --- | --- |
    | Sensitivity (mtmDR) | High (e.g., above 90%) | 92.9% (Enrichment-permitted, Primary Care) |
    | Specificity (mtmDR) | High (e.g., above 85%) | 85.2% (Enrichment-permitted, Ophthalmology) |
    | Sensitivity (vtDR) | High (e.g., near 90% or 100% for smaller groups) | 88.9% (Sequential, Ophthalmology) |
    | Specificity (vtDR) | High (e.g., above 89%) | 89.8% (Enrichment-permitted, Ophthalmology) |
    | Imageability | High (e.g., above 95%) | 96.5% (Enrichment-permitted, Ophthalmology, Sequence P1/P2/P3) |
    | Intra-operator Repeatability (OA, mtmDR) | High (e.g., above 90%) | 93.5% (Cohort P2) |
    | Intra-operator Repeatability (OA, vtDR) | High (e.g., above 96%) | 96.8% (Cohort P2) |
    | Inter-operator Reproducibility (OA, mtmDR) | High (e.g., above 90%) | 90.3% (Cohort P1) |
    | Inter-operator Reproducibility (OA, vtDR) | High (e.g., above 96%) | 96.8% (Cohort P1) |

    Note: The document does not explicitly state numerical acceptance criteria thresholds. The reported performance values are the actual outcomes from the clinical study, which are implicitly considered acceptable for substantial equivalence based on the FDA's clearance. The "worst case" reported here refers to the lowest performance observed across the different cohorts for each metric. Confidence intervals are provided in the tables and are generally tight, indicating reliability.


    Study Details Proving Device Meets Acceptance Criteria

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Size:
      • Clinical Efficacy Study: 655 participants (after exclusions from an initial 942 screened), comprising 1290 eyes (assuming 2 eyes per subject, though analyses are eye-level). These 655 participants were divided into two main cohorts:
        • Sequential Enrollment: 235 subjects (45 in primary care, 190 in ophthalmology)
        • Enrichment-Permitted Enrollment: 420 subjects (335 in primary care, 85 in ophthalmology)
      • Precision (Repeatability/Reproducibility) Study: 62 subjects (31 subjects each at 2 US primary care sites), resulting in 186 pairs of images for repeatability analysis (Cohort P1) and 62 subjects for Cohort P2 (3 repeats each).
    • Data Provenance: Prospective, multi-center pivotal clinical trial conducted across 11 US study sites (primary care centers and general ophthalmology centers).

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Number of Experts: At least 2 independent graders and an additional adjudication grader (more experienced) were used for each subject's images.
    • Qualifications of Experts: Experienced and certified graders at the Fundus Photography Reading Center (FPRC). They were certified to grade according to the Early Treatment Diabetic Retinopathy Study (ETDRS) severity scale. Specific experience levels (e.g., "10 years of experience") are not detailed beyond "experienced and certified."

    4. Adjudication Method for the Test Set

    • Method: 2+1 adjudication. Each subject's images were independently graded by 2 experienced and certified graders. In case of significant differences (determined using prespecified significance levels) between the two independent gradings, a more experienced adjudication grader graded the same images to establish the final ground truth.
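    As a sketch, the 2+1 flow reduces to: accept concordant (or non-significantly different) grades, otherwise defer to the senior adjudicator. The significance rule and the handling of concordant grades below are placeholder assumptions, since the study's prespecified significance levels are not reproduced in this document.

```python
def adjudicate_2plus1(grade_a, grade_b, differ_significantly, adjudicator):
    """2+1 adjudication: two independent grades; a more experienced grader
    decides only when the first two differ significantly.

    `differ_significantly` and `adjudicator` are hypothetical callables
    standing in for the study's unpublished rules."""
    if differ_significantly(grade_a, grade_b):
        return adjudicator()
    # How concordant grades were combined is not stated in the document;
    # returning the first grade is an assumption for this sketch.
    return grade_a

# Hypothetical example with ETDRS levels, where any exact disagreement
# triggers adjudication and the adjudicator assigns level 43.
final = adjudicate_2plus1(35, 53, lambda a, b: a != b, lambda: 43)
print(final)  # 43
```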

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was it done? No, a comparative effectiveness study evaluating how much human readers improve with AI vs. without AI assistance was not reported in this document. The study primarily evaluated the standalone performance of the EyeArt device against expert human grading (FPRC reference standard).

    6. Standalone (i.e., algorithm only without human-in-the-loop performance)

    • Was it done? Yes, the entire clinical testing section (Section D) describes the standalone performance of the EyeArt algorithm. The EyeArt results (positive, negative, or ungradable for mtmDR and vtDR) for each eye were compared directly to the clinical reference standard established by FPRC grading.

    7. Type of Ground Truth Used

    • Type: Expert Consensus Grading (adjudicated) from the Fundus Photography Reading Center (FPRC). This grading was based on dilated 4-wide field stereo fundus imaging and applied the Early Treatment for Diabetic Retinopathy Study (ETDRS) severity scale.
      • mtmDR Ground Truth: Positive if ETDRS level was 35 or greater (but not equal to 90) or clinically significant macular edema (CSME) grade was CSME present. Negative if ETDRS levels were 10-20 and CSME grade was CSME absent.
      • vtDR Ground Truth: Positive if ETDRS level was 53 or greater (but not equal to 90) or CSME grade was CSME present. Negative if ETDRS levels were 10-47 and CSME grade was CSME absent.
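    These threshold rules can be written down explicitly. A sketch, where level 90 is the ETDRS "cannot grade" code; returning None for eyes matching neither the positive nor the negative rule is an assumption about how such eyes were handled:

```python
def mtmdr_ground_truth(etdrs: int, csme_present: bool):
    """mtmDR ground truth: positive if ETDRS >= 35 (excluding 90) or CSME
    present; negative if ETDRS 10-20 with CSME absent; None otherwise."""
    if csme_present or (etdrs >= 35 and etdrs != 90):
        return True
    if 10 <= etdrs <= 20 and not csme_present:
        return False
    return None

def vtdr_ground_truth(etdrs: int, csme_present: bool):
    """vtDR ground truth: positive if ETDRS >= 53 (excluding 90) or CSME
    present; negative if ETDRS 10-47 with CSME absent; None otherwise."""
    if csme_present or (etdrs >= 53 and etdrs != 90):
        return True
    if 10 <= etdrs <= 47 and not csme_present:
        return False
    return None

print(mtmdr_ground_truth(35, False))  # True
print(vtdr_ground_truth(47, False))   # False: 47 is mtmDR+ but vtDR-
```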

    8. Sample Size for the Training Set

    • The document does not specify the sample size for the training set. It mentions that the EyeArt Analysis Computation Engine uses "an ensemble of clinically aligned machine learning (deep learning) algorithms" but provides no details on their training data.

    9. How the Ground Truth for the Training Set Was Established

    • The document does not specify how the ground truth for the training set was established. While the "clinically aligned framework" is mentioned, the specific methodology for annotating or establishing ground truth for the training data is not detailed in this submission summary.