
510(k) Data Aggregation

    K Number
    K240058
    Device Name
    AEYE-DS
    Manufacturer
    Date Cleared
    2024-04-23

    (106 days)

    Product Code
    Regulation Number
    886.1100
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    The AEYE-DS is indicated for use by health care providers to automatically detect more than mild diabetic retinopathy (mtmDR) in adults diagnosed with diabetes who have not been previously diagnosed with diabetic retinopathy. The AEYE-DS is indicated for use with the Topcon NW400 camera and the Optomed Aurora camera.

    Device Description

    AEYE-DS is a retinal diagnostic software device that incorporates an algorithm to evaluate retinal images for diagnostic screening to identify retinal diseases or conditions. Specifically, the AEYE-DS is designed to perform diagnostic screening for the condition of more-than-mild diabetic retinopathy (mtmDR).

    The AEYE-DS is comprised of 5 software components: (1) Client; (2) Service; (3) Analytics; (4) Reporting and Archiving; and (5) System Security.

    The AEYE-DS device operates as follows: a fundus camera is used to obtain retinal images. The fundus camera is attached to a computer on which the Client module/software is installed. The Client module/software guides the user through image acquisition and enables the user to interact with the server-based analysis software over a secure internet connection. Using the Client module/software, users identify the fundus images per eye to be dispatched to the Service module/software. The Service module/software, installed on a server hosted at a secure datacenter, receives the fundus images and transfers them to the Analytics module/software. The Analytics module/software, which runs alongside the Service module/software, processes the fundus images and returns information on image quality and the presence or absence of mtmDR to the Service module/software. The Service module/software then returns the results to the Client module/software.
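
The Client → Service → Analytics round trip described above can be sketched as a minimal pipeline. This is an illustrative sketch only; the function names and record fields below are hypothetical and are not taken from the AEYE-DS software.

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class AnalysisResult:
    # Hypothetical result shape: the Analytics module reports image
    # quality and the presence/absence of mtmDR.
    image_quality_ok: bool
    mtmdr_detected: bool


def analytics_module(images: List[Dict]) -> AnalysisResult:
    # Stand-in for the Analytics module/software: grade image quality
    # first, then screen for more-than-mild diabetic retinopathy.
    quality_ok = all(img["gradable"] for img in images)
    detected = quality_ok and any(img["suspicious"] for img in images)
    return AnalysisResult(image_quality_ok=quality_ok, mtmdr_detected=detected)


def service_module(images: List[Dict]) -> AnalysisResult:
    # Stand-in for the Service module/software: receives images from the
    # Client, forwards them to Analytics, and relays the result back.
    return analytics_module(images)


def client_submit(images_per_eye: List[Dict]) -> AnalysisResult:
    # Stand-in for the Client module/software: the user selects the
    # fundus images per eye to dispatch to the Service module.
    return service_module(images_per_eye)


result = client_submit([
    {"gradable": True, "suspicious": True},
    {"gradable": True, "suspicious": False},
])
print(result)
```

In the real device the Service module also handles transport and security; here it is collapsed to a function call purely to show the direction of data flow.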

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study that proves the AEYE-DS device meets them, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document primarily focuses on establishing substantial equivalence to a predicate device (AEYE-DS K221183), rather than explicitly listing pre-defined, quantitative acceptance criteria for each metric in the same way one might find in a clinical trial protocol. However, we can infer the implicitly accepted performance by comparing the subject device's results to the predicate's and demonstrating robust performance across two studies. The table below presents the key performance metrics reported for the subject device (AEYE-DS K240058 with Optomed Aurora camera) and the predicate device (AEYE-DS K221183 with Topcon NW400 camera).

| Metric | Acceptance Criteria (Implied by Predicate Performance) | AEYE-DS (K240058) with Optomed Aurora, Study 1 | AEYE-DS (K240058) with Optomed Aurora, Study 2 |
|---|---|---|---|
| Sensitivity | ≥ 93% | 92% [79%; 97%] (Fundus-based & Multi-modality-based) | 93% [80%; 97%] (Fundus-based); 90% [77%; 96%] (Multi-modality-based) |
| Specificity | ≥ 91% | 94% [90%; 96%] (Fundus-based & Multi-modality-based) | 89% [85%; 92%] (Fundus-based & Multi-modality-based) |
| Imageability | ≥ 99% | 99% [98%; 100%] | 99% [97%; 100%] |
| PPV | ≥ 60% | 68% [54%; 79%] | 53% [41%; 64%] |
| NPV | ≥ 99% | 99% [96%; 100%] | 99% [97%; 100%] (Fundus-based); 98% [96%; 99%] (Multi-modality-based) |

    Note: While PPV in Study 2 (53%) for the subject device is below the predicate's performance (60%), the document attributes this to the actual prevalence of mtmDR+ patients in the study's diabetic population (i.e., 12%), stating that the robustness of the studies is demonstrated by the similar PPV and NPV results across both studies despite this. The overall conclusion is substantial equivalence.
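
The prevalence dependence noted here can be checked directly with Bayes' rule. Plugging the Study 2 point estimates from the table (multi-modality sensitivity 90%, specificity 89%) into the stated 12% prevalence reproduces a PPV near 53% while NPV stays near 98%. This is a back-of-envelope check, not a calculation from the submission's raw counts.

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)


def npv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Negative predictive value via Bayes' rule."""
    true_neg = specificity * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    return true_neg / (true_neg + false_neg)


# Study 2 point estimates at the reported 12% mtmDR+ prevalence
print(round(ppv(0.90, 0.89, 0.12), 2))  # → 0.53
print(round(npv(0.90, 0.89, 0.12), 2))  # → 0.98
```

This is why a lower PPV in a lower-prevalence population does not by itself indicate degraded algorithm performance: sensitivity and specificity are prevalence-independent, while PPV is not.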

    2. Sample Sizes Used for the Test Set and Data Provenance

    • Study 1 Sample Size: 317 subjects
    • Study 2 Sample Size: 362 subjects
    • Data Provenance: Both studies were prospective, multi-center, single-arm, blinded studies conducted at study sites in the United States.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    The ground truth was established by an independent reading center. While the exact number of experts (readers) is not specified, their role in determining the severity of retinopathy and clinically significant diabetic macular edema (DME) according to the Early Treatment for Diabetic Retinopathy Study severity (ETDRS) scale implies a high level of expertise, typical of ophthalmic specialists or certified graders.

    4. Adjudication Method for the Test Set

    The document states that the "Reading Center diagnostic results formed the reference standard (ground truth) for the study." It does not explicitly describe an adjudication method (e.g., 2+1, 3+1) among multiple readers within the reading center. It implies a single, definitive determination by the reading center.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No multi-reader multi-case (MRMC) comparative effectiveness study was done. The studies were designed to evaluate the standalone performance of the AEYE-DS device, not to compare its performance in assisting human readers. The device is intended to "automatically detect" mtmDR.

    6. Standalone (Algorithm Only) Performance

    Yes, a standalone (algorithm only) performance evaluation was done. The reported sensitivity, specificity, PPV, and NPV values are for the AEYE-DS device's automated detection of mtmDR.

    7. Type of Ground Truth Used

    The ground truth used was expert consensus / standardized clinical assessment based on:

    • Dilated four widefield color fundus images
    • Lens photography for media opacity assessment
    • Macular optical coherence tomography (OCT) imaging
    • Severity determination according to the Early Treatment for Diabetic Retinopathy Study (ETDRS) scale by an independent reading center.

    8. Sample Size for the Training Set

    The document does not explicitly state the sample size for the training set. The clinical studies (Study 1 and Study 2) are described as the basis for the performance evaluation of the device (i.e., the test set performance). The training of the AI model would have occurred prior to these validation studies.

    9. How the Ground Truth for the Training Set Was Established

    The document does not explicitly describe how the ground truth for the training set was established. However, it is standard practice for AI models in medical imaging to be trained on large datasets where ground truth is established by experienced clinical experts (e.g., ophthalmologists, retina specialists) thoroughly reviewing and annotating images, often with consensus protocols, similar to the method described for the test set's ground truth (ETDRS grading by a reading center). Given the device's predicate status and the detailed description of the ground truth for the test sets, it is highly probable that a rigorous, expert-based process was applied to the training data as well.


    K Number
    K223357
    Device Name
    EyeArt v2.2.0
    Manufacturer
    Date Cleared
    2023-06-16

    (226 days)

    Product Code
    Regulation Number
    886.1100
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    EyeArt is indicated for use by healthcare providers to automatically detect more than mild diabetic retinopathy and vision-threatening diabetic retinopathy (severe non-proliferative diabetic retinopathy or proliferative diabetic retinopathy and/or diabetic macular edema) in eyes of adults diagnosed with diabetes who have not been previously diagnosed with diabetic retinopathy. EyeArt is indicated for use with Canon CR-2 AF, Canon CR-2 Plus AF, and Topcon NW400 cameras.

    Device Description

    EyeArt is a software as a medical device that consists of three components - Client, Server, and Analysis Computation Engine. A retinal fundus camera, used to capture retinal fundus images of the patient, is connected to a computer where the EyeArt Client software is installed. The EyeArt Client software provides a graphical user interface (GUI) that allows the EyeArt operator to transfer the appropriate fundus images to and receive results from the remote EyeArt Analysis Computation Engine through the EyeArt Server. The EyeArt Analysis Computation Engine is installed on remote computer(s) in a secure data center and uses artificial intelligence algorithms to analyze the fundus images and return results. EyeArt is intended to be used with retinal fundus images of resolution 1.69 megapixels or higher captured using one of the indicated retinal fundus cameras (Canon CR-2 AF, Canon CR-2 Plus AF, and Topcon NW400) with 45 degrees field of view. EyeArt is specified for use with two retinal fundus images per eye: optic nerve head (ONH) centered and macula centered. For each patient eye, the EyeArt results separately indicate whether "more than mild diabetic retinopathy (mtmDR)" and "vision-threatening diabetic retinopathy (vtDR)" are detected.
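
The stated input constraints (two fields per eye, resolution of at least 1.69 megapixels, 45-degree field of view) suggest a simple client-side pre-flight check before images are sent for analysis. The sketch below is illustrative, assuming a plain dict per image; it is not the actual EyeArt Client validation logic, and all names are hypothetical.

```python
from typing import Dict, List

# Requirements as stated in the device description.
REQUIRED_FIELDS = {"onh_centered", "macula_centered"}
MIN_MEGAPIXELS = 1.69
REQUIRED_FOV_DEGREES = 45


def validate_eye_exam(images: List[Dict]) -> List[str]:
    """Return a list of problems; an empty list means the exam can be submitted.

    Each image is a dict with 'field', 'width', 'height', 'fov_degrees'.
    """
    problems = []
    fields = {img["field"] for img in images}
    if fields != REQUIRED_FIELDS:
        problems.append(
            f"expected fields {sorted(REQUIRED_FIELDS)}, got {sorted(fields)}"
        )
    for img in images:
        megapixels = img["width"] * img["height"] / 1e6
        if megapixels < MIN_MEGAPIXELS:
            problems.append(
                f"{img['field']}: {megapixels:.2f} MP is below the "
                f"{MIN_MEGAPIXELS} MP minimum"
            )
        if img["fov_degrees"] != REQUIRED_FOV_DEGREES:
            problems.append(f"{img['field']}: field of view must be 45 degrees")
    return problems


exam = [
    {"field": "onh_centered", "width": 1536, "height": 1152, "fov_degrees": 45},
    {"field": "macula_centered", "width": 1536, "height": 1152, "fov_degrees": 45},
]
print(validate_eye_exam(exam))  # → []
```

Rejecting non-conforming images before upload keeps the analysis engine operating within its validated input envelope.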

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:

    Acceptance Criteria and Device Performance

    The provided document does not explicitly list pre-defined acceptance criteria in a separate table with specific numerical thresholds for sensitivity, specificity, etc. However, the performance metrics reported in the tables and the concluding statement ("The results of this prospective study support a determination of substantial equivalence between EyeArt v2.2.0 and EyeArt v2.1.0 and support the addition of the Topcon NW400 camera to the IFU statement" and "The results of this retrospective study support a determination of substantial equivalence between EyeArt v2.2.0 and EyeArt v2.1.0") imply that the demonstrated performance met the FDA's expectations for substantial equivalence to the predicate device.

    The reported device performance for EyeArt v2.2.0 with the new Topcon NW400 camera and previous Canon cameras for detecting "more than mild diabetic retinopathy (mtmDR)" and "vision-threatening diabetic retinopathy (vtDR)" is summarized below. It's important to note that the comparison is implicit against the performance of the predicate device (EyeArt v2.1.0) and general expectations for diagnostic devices.

    Table of Reported Device Performance (Prospective Study EN-01b):

| Metric | mtmDR (Canon CR-2 AF/Plus AF) | mtmDR (Topcon NW400) | vtDR (Canon CR-2 AF/Plus AF) | vtDR (Topcon NW400) |
|---|---|---|---|---|
| Sensitivity | 95.9% [90.4% - 100%] (70/73) | 94.4% [88.3% - 98.8%] (68/72) | 96.8% [90.0% - 100%] (30/31) | 96.8% [89.5% - 100%] (30/31) |
| Specificity | 86.4% [81.2% - 91.1%] (216/250) | 91.1% [86.8% - 94.8%] (226/248) | 91.7% [87.7% - 95.2%] (266/290) | 91.6% [87.5% - 95.1%] (263/287) |
| PPV | 67.3% [55.9% - 77.4%] (70/104) | 75.6% [64.6% - 85.4%] (68/90) | 55.6% [39.2% - 72.0%] (30/54) | 55.6% [38.0% - 72.1%] (30/54) |
| NPV | 98.6% [96.9% - 100%] (216/219) | 98.3% [96.4% - 99.6%] (226/230) | 99.6% [98.8% - 100%] (266/267) | 99.6% [98.5% - 100%] (263/264) |
| Best-case Sens. (mtmDR) | 95.9% [90.5% - 100.0%] (71/74) | 94.6% [88.6% - 98.8%] (70/74) | 96.8% [90.9% - 100.0%] (30/31) | 96.8% [89.5% - 100.0%] (30/31) |
| Worst-case Sens. (mtmDR) | 94.6% [88.9% - 98.8%] (70/74) | 91.9% [84.8% - 97.3%] (68/74) | 96.8% [90.0% - 100.0%] (30/31) | 96.8% [89.5% - 100.0%] (30/31) |
| Best-case Spec. (mtmDR) | 86.5% [81.3% - 91.2%] (218/252) | 91.3% [87.1% - 94.9%] (230/252) | 91.8% [87.9% - 95.3%] (269/293) | 91.8% [87.8% - 95.1%] (269/293) |
| Worst-case Spec. (mtmDR) | 85.7% [80.6% - 90.3%] (216/252) | 89.7% [85.05% - 93.3%] (226/252) | 90.8% [86.7% - 94.7%] (266/293) | 89.8% [85.7% - 93.6%] (263/293) |

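Each cell in the table above pairs a point estimate with its numerator/denominator, so the proportions can be recomputed directly from the counts. Below is a minimal sketch using the Wilson score interval; the submission most likely used an exact (Clopper-Pearson) method, so the interval endpoints will differ slightly from the bracketed values in the table.

```python
import math


def proportion_with_wilson_ci(successes: int, n: int, z: float = 1.96):
    """Point estimate plus a two-sided 95% Wilson score interval."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return p, center - half, center + half


# Sensitivity for mtmDR with the Topcon NW400 in EN-01b: 68/72 (from the table)
p, lo, hi = proportion_with_wilson_ci(68, 72)
print(f"sensitivity = {p:.1%}, 95% Wilson CI [{lo:.1%}, {hi:.1%}]")
```

Recomputing the point estimates this way is a quick sanity check that a table's percentages and counts are mutually consistent.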
    Study Information:

    1. Sample Sizes and Data Provenance:

    • Test Set (Prospective Study EN-01b):

      • Accuracy Analysis: 336 eyes from 171 participants.
      • Precision Analysis: 264 eyes from 132 participants.
      • Data Provenance: Prospective, multi-center clinical study conducted in the United States (implied by FDA submission and context).
    • Test Set (Retrospective Study EN-01):

      • Accuracy Analysis: 1310 eyes from 655 participants.
      • Data Provenance: Retrospective, utilizing data already collected from the EyeArt pivotal multi-center clinical study (Protocol EN-01). Geographic origin not explicitly stated but likely United States due to the FDA context.

    2. Number of Experts and Qualifications for Ground Truth:

    • Number of Experts: Not explicitly stated as a count of individual experts, but the reference standard was determined by "experienced and certified graders" at the University of Wisconsin Reading Center (WRC).
    • Qualifications of Experts: "experienced and certified graders" at the University of Wisconsin Reading Center (WRC).

    3. Adjudication Method for the Test Set:

    • The text states the ground truth was determined by "experienced and certified graders at the University of Wisconsin Reading Center (WRC)." It does not specify a numerical adjudication method (e.g., 2+1, 3+1). It implies a consensus or designated "grader" process, but the specifics of how disagreements (if any) were resolved are not detailed.

    4. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

    • No, an MRMC comparative effectiveness study comparing human readers with AI vs. without AI assistance was not reported. The studies focused on the standalone diagnostic accuracy of the EyeArt device against a clinical reference standard (WRC grading).

    5. Standalone Performance (Algorithm Only):

    • Yes, standalone performance was done. The reported sensitivity, specificity, PPV, and NPV are measures of the EyeArt algorithm's performance in automatically detecting DR without direct human interpretation of the images in the diagnostic loop. The "EyeArt operator" described in the device description assists in image acquisition and receiving results, but the detection itself is algorithmic.

    6. Type of Ground Truth Used:

    • Expert Consensus / Clinical Reference Standard: The ground truth was established by a "Clinical reference standard (CRS)" which was determined by "experienced and certified graders at the University of Wisconsin Reading Center (WRC)" based on dilated 4-widefield stereo fundus images per the Early Treatment for Diabetic Retinopathy Study (ETDRS) severity scale. This is a form of expert consensus derived from a clinical gold standard imaging modality.

    7. Sample Size for the Training Set:

    • Not specified in the provided text. The document states that the six sites for the prospective study "did not contribute data used for training or development of EyeArt," implying a separate training dataset was used, but its size is not disclosed.

    8. How Ground Truth for Training Set was Established:

    • Not explicitly stated in the provided text. Given the consistency in methodology for the test set, it is highly probable that the ground truth for the training set was established similarly, using expert grading by ophthalmology specialists or a reading center. However, this is an inference, not directly stated for the training set.

    K Number
    K221183
    Device Name
    AEYE-DS
    Manufacturer
    Date Cleared
    2022-11-10

    (199 days)

    Product Code
    Regulation Number
    886.1100
    Reference & Predicate Devices
    Predicate For
    Intended Use

    The AEYE-DS device is indicated for use by health care providers to automatically detect more than mild diabetic retinopathy (mtmDR) in adults diagnosed with diabetes who have not been previously diagnosed with diabetic retinopathy. The AEYE-DS is indicated for use with the Topcon NW400.

    Device Description

    AEYE-DS is a retinal diagnostic software device that incorporates an algorithm to evaluate ophthalmic images for diagnostic screening to identify retinal diseases or conditions. Specifically, the AEYE-DS is designed to perform diagnostic screening for the condition of more-than-mild diabetic retinopathy (mtmDR).

    The AEYE-DS is comprised of 5 software components: (1) Client; (2) Service; (3) Analytics; (4) Reporting and Archiving; and (5) System Security.

    The AEYE-DS device is based on the main technological principle of Artificial Intelligence (AI) software as a medical device. The software as a medical device uses artificial intelligence technology to analyze specific disease features from fundus retinal images for diagnostic screening of diabetic retinopathy.

    The AEYE-DS device operates as follows: a fundus camera is used to obtain retinal images. The fundus camera is attached to a computer on which the Client module/software is installed. The Client module/software guides the user through image acquisition and enables the user to interact with the server-based analysis software over a secure internet connection. Using the Client module/software, users identify the fundus images per eye to be dispatched to the Service module/software. The Service module/software, installed on a server hosted at a secure datacenter, receives the fundus images and transfers them to the Analytics module/software. The Analytics module/software, which runs alongside the Service module/software, processes the fundus images and returns information on image quality and the presence or absence of mtmDR to the Service module/software. The Service module/software then returns the results to the Client module/software.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study that proves the AEYE-DS device meets them, based on the provided text:


    1. Table of Acceptance Criteria and Reported Device Performance

    The pivotal clinical study evaluated two configurations: 1 image per eye (macula-centered) and 2 images per eye (macula-centered and optic disc-centered). The acceptance criteria for both sensitivity and specificity were pre-defined performance goals.

    Acceptance Criteria and Performance (1 Image Per Eye)

| Metric | Acceptance Criteria (Lower One-Sided 97.5% CI Bound) | Reported Device Performance (Lower One-Sided 97.5% CI Bound) | Met? |
|---|---|---|---|
| Sensitivity | ≥ 82% | 83.3% | Yes |
| Specificity | ≥ 87% | 88.22% | Yes |

    Acceptance Criteria and Performance (2 Images Per Eye)

| Metric | Acceptance Criteria (Lower One-Sided 97.5% CI Bound) | Reported Device Performance (Lower One-Sided 97.5% CI Bound) | Met? |
|---|---|---|---|
| Sensitivity | ≥ 82% | 85.63% | Yes |
| Specificity | ≥ 87% | 85.18% | No |

    Additional Performance Metrics (for both 1 and 2 images per eye)

| Metric | 1 Image Per Eye Performance | 2 Images Per Eye Performance |
|---|---|---|
| Imageability | 99.1% [CI: 97.8%; 99.7%] | 99.1% [CI: 97.8%; 99.7%] |
| PPV | 60.23% [CI: 49.78%; 69.82%] | 54% [CI: 44.26%; 63.44%] |
| NPV | 98.93% [CI: 97.28%; 99.58%] | 99.17% [CI: 97.59%; 99.72%] |

    Note: While the specificity for 2 images per eye was slightly below the pre-defined performance goal, the document states that this "does not involve any risks" as sensitivity was high and mtmDR+ subjects would not be missed.
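
The acceptance rule used in this study (lower one-sided 97.5% confidence bound at or above the performance goal) can be sketched numerically. A one-sided 97.5% lower bound coincides with the lower limit of a two-sided 95% interval; the Wilson score version is used here for a self-contained example, and the counts below are hypothetical, not the submission's actual counts.

```python
import math


def lower_bound_975(successes: int, n: int, z: float = 1.96) -> float:
    """One-sided 97.5% lower confidence bound (Wilson score) for a proportion.

    The study likely used an exact method, so values are approximate.
    """
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half


def meets_goal(successes: int, n: int, goal: float) -> bool:
    """Pass/fail check against a pre-defined performance goal."""
    return lower_bound_975(successes, n) >= goal


# Hypothetical counts: 54 of 57 mtmDR+ subjects correctly detected.
print(meets_goal(54, 57, 0.82))  # → True  (sensitivity goal of 82%)
print(meets_goal(54, 57, 0.87))  # → False (a stricter 87% goal would fail)
```

The key point is that the check is applied to the confidence bound, not the point estimate, so a high observed rate in a small sample can still fail the goal.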


    2. Sample Size and Data Provenance

    • Test Set Sample Size:
      • Pivotal Clinical Study: 531 subjects screened and enrolled.
        • For the 1 image per eye analysis, there were 57 mtmDR+ and 405 mtmDR- fully analyzable subjects. The total number of fully analyzable subjects is 462.
        • For the 2 images per eye analysis, the exact number of fully analyzable subjects is not explicitly stated in the summary, but the sensitivity and specificity values are provided for a certain number of images, suggesting the same or a very similar subject pool.
      • Precision Study: 22 participants.
    • Data Provenance: Prospective, multi-center, single-arm, blinded study conducted at 8 study sites in the United States (7 sites) and Israel (1 site). Enrollment from October 2020 through November 2021.

    3. Number of Experts and Qualifications for Ground Truth

    • Number of Experts: Not explicitly stated as a number of individual experts. The professional images (dilated four widefield stereo color fundus images, lens photography, and macular OCT) were sent to an "independent reading center."
    • Qualifications of Experts: The reading center determined the severity of retinopathy and diabetic macular edema (DME) according to the Early Treatment for Diabetic Retinopathy Study (ETDRS) severity scale. This implies that the experts were highly qualified in retinal imaging and diabetic retinopathy grading, typically ophthalmologists or trained graders with specific expertise in ETDRS.

    4. Adjudication Method for the Test Set

    The document does not explicitly describe an adjudication method like 2+1 or 3+1. It states that "The Reading Center diagnostic results formed the reference standard (ground truth) for the study." This suggests that the Reading Center's determination was considered the definitive ground truth, implying a consensus or expert-driven process within the center to establish this standard.


    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No, a multi-reader multi-case (MRMC) comparative effectiveness study was not done. The study focused on the standalone performance of the AEYE-DS device against an expert-determined ground truth, not on how human readers' performance might improve with AI assistance.


    6. Standalone Performance Study (Algorithm Only)

    Yes, a standalone (algorithm only) performance study was conducted. The "Clinical Performance Data" section describes how the AEYE-DS device automatically processed fundoscopy images and produced a diagnostic result ("more than mild DR (mtmDR) detected" or "more than mild DR not detected"). These results were then compared to the "reference standard (ground truth)" established by the independent reading center, directly assessing the algorithm's performance without human intervention in the diagnosis.


    7. Type of Ground Truth Used

    The ground truth used was expert consensus / expert reading of multi-modal imaging data. Specifically, it was established by an independent reading center based on:

    • Dilated four widefield stereo color fundus images.
    • Lens photography for media opacity assessment.
    • Macular optical coherence tomography (OCT) imaging.
    • Severity of retinopathy and DME determined according to the Early Treatment for Diabetic Retinopathy Study (ETDRS) severity scale.

    8. Sample Size for the Training Set

    The document does not explicitly state the sample size for the training set. The clinical study described is the pivotal clinical study for validation, not the training of the AI model.


    9. How the Ground Truth for the Training Set Was Established

    The document does not explicitly state how the ground truth for the training set was established, as it focuses on the performance claims from the pivotal clinical study. However, given that it's an AI/ML device, it can be inferred that a similar process of expert grading of images would have been used for the training data, likely by ophthalmologists or trained graders applying recognized clinical standards (e.g., ETDRS).


    K Number
    K213037
    Device Name
    IDx-DR v2.3
    Date Cleared
    2022-06-17

    (269 days)

    Product Code
    Regulation Number
    886.1100
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    IDx-DR is indicated for use by healthcare providers to automatically detect more than mild diabetic retinopathy (mtmDR) in adults diagnosed with diabetes who have not been previously diagnosed with diabetic retinopathy. IDx-DR is indicated for use with the Topcon NW400.

    Device Description

    The IDx-DR device is an autonomous, artificial intelligence (AI)-based system for the automated detection of more than mild diabetic retinopathy (mtmDR). It consists of several component parts: IDx-DR Analysis, IDx-DR Client, and IDx-DR Service. The IDx-DR Analysis software analyzes patient images and determines exam quality and the presence/absence of mtmDR. The IDx-DR Client is a software application running on a computer connected to the fundus camera, allowing users to transfer images and receive results. The IDx-DR Service comprises a general exam analysis service delivery software package with a webserver front-end, database, and logging system, and is responsible for device cybersecurity. The system workflow involves image acquisition using the Topcon NW400, transfer to IDx-DR Service, analysis by IDx-DR Analysis System, and display of results on the IDx-DR Client.

    AI/ML Overview

    The provided text describes a 510(k) submission for IDx-DR v2.3, a diabetic retinopathy detection device. The submission aims to demonstrate substantial equivalence to a predicate device (IDx-DR v2.0).

    Here's an analysis of the acceptance criteria and the study that proves the device meets them:

    1. A table of acceptance criteria and the reported device performance

    The document implicitly uses the performance of the predicate device (IDx-DR v2.0) as the acceptance criteria for the new version (IDx-DR v2.3). The study's goal is to show that IDx-DR v2.3 performs comparably to or better than IDx-DR v2.0. The primary endpoints are sensitivity, specificity, and "diagnosability." Secondary endpoints are positive predictive value (PPV) and negative predictive value (NPV).

    Here's a table comparing the performance of the subject device (IDx-DR v2.3) and the predicate device (IDx-DR v2.0) based on "final submission" images, which are the most relevant for diagnostic performance. The document presents ranges for performance, but for the sake of clarity, I've used the point estimates presented in the tables for both the subject and predicate devices. The values in parentheses are the 95% Confidence Intervals.

| Characteristic | Predicate Device (IDx-DR v2.0) | Subject Device (IDx-DR v2.3) |
|---|---|---|
| Primary Endpoints | | |
| Diagnosability (Final Sub.) | 96.35% (94.86%, 97.51%) | 95.18% (93.51%, 96.52%) |
| Sensitivity | 87.37% (81.93%, 91.66%) | 87.69% (82.24%, 91.95%) |
| Specificity | 89.53% (86.85%, 91.83%) | 90.07% (87.42%, 92.32%) |
| Secondary Endpoints | | |
| Positive Predictive Value | 72.69% (66.56%, 78.25%) | 73.71% (67.55%, 79.25%) |
| Negative Predictive Value | 95.70% (93.71%, 97.20%) | 95.84% (93.87%, 97.32%) |

    The document concludes that "The results of the clinical study support a determination of substantial equivalence between IDx-DR v2.3 and IDx-DR v2.0." This implies that the observed performance of IDx-DR v2.3 falls within an acceptable range, demonstrating non-inferiority or similar performance to the predicate device. Specific numerical acceptance thresholds (e.g., "must be at least X%") are not explicitly stated, but the comparison to the existing cleared device acts as the benchmark.

    2. Sample size used for the test set and the data provenance

    • Sample Size for Test Set: Data from 892 participants from the pivotal study of the predicate device were used. Of these, images from 850 participants were available for analysis and were diagnosable by the clinical reference standard, making them evaluable for performance.
    • Data Provenance: The data was retrospectively collected from the pivotal study of the predicate device ("IDx-DR v2.0"; Abràmoff et al. Digital Medicine 2018;1:39). The country of origin is not explicitly stated in the provided text.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    The document refers to a "clinical reference standard" and states that IDx-DR has "the ability to perform analysis on the specific disease features that are important to a retina specialist for diagnostic screening of diabetic retinopathy." However, the exact number of experts, their specific qualifications (e.g., number of years of experience, board certification), and their role in establishing the ground truth for the test set are not explicitly detailed in the provided text. It mentions an article by Abràmoff et al. (2018), which likely describes the ground truth establishment for the original pivotal study.

    4. Adjudication method for the test set

    The adjudication method used to establish the clinical reference standard for the test set is not explicitly stated in the provided text. It mentions a "clinical reference standard" but does not detail how it was established (e.g., 2+1, 3+1, etc.).

    5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with AI vs. without AI assistance

    No, a multi-reader multi-case (MRMC) comparative effectiveness study was not conducted. The study evaluated the standalone performance of the algorithm (IDx-DR v2.3) by comparing it against the clinical reference standard, and then comparing its performance to the predicate algorithm (IDx-DR v2.0). There is no mention of human readers assisting the AI, nor is there any data on human reader improvement with AI assistance.

    6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    Yes, a standalone (algorithm only) performance study was conducted. The study assesses the ability of IDx-DR v2.3 to automatically detect more than mild diabetic retinopathy (mtmDR) and compares its sensitivity, specificity, and diagnosability to the predicate device's standalone performance.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    The ground truth used for the test set is referred to as a "clinical reference standard" by which participants were "diagnosable." This strongly implies expert consensus by retina specialists, as suggested by the mention of the algorithm identifying "specific disease features that are important to a retina specialist." However, the exact methodology is not detailed within this document.

    8. The sample size for the training set

    The document does not provide information regarding the sample size of the training set used for IDx-DR v2.3. The provided study is a retrospective validation of the modified algorithm using a pre-existing dataset.

    9. How the ground truth for the training set was established

    The document does not provide information on how the ground truth for the training set was established. It focuses solely on the clinical performance testing (validation) of the device using a pre-existing test set.


    K Number
    K203629
    Device Name
    IDx-DR
    Date Cleared
    2021-06-10

    (181 days)

    Product Code
    Regulation Number
    886.1100
    Reference & Predicate Devices
    Predicate For
    Intended Use

    IDx-DR is indicated for use by healthcare providers to automatically detect more than mild diabetic retinopathy in adults diagnosed with diabetes who have not been previously diagnosed with diabetic retinopathy. IDx-DR is indicated for use with the Topcon NW400.

    Device Description

    The IDx-DR device consists of several component parts. A camera is attached to a computer, where IDx-DR client is installed. Guided by the Client, users acquire two fundus images per eye to be dispatched to IDx-Service. IDx-Service is installed on a server hosted at a secure datacenter. From IDx-Service, images are transferred to IDx-DR Analysis. No information other than the fundus images is required to perform the analysis. IDx-DR Analysis, which runs on dedicated servers hosted in the same secure datacenter as IDx-Service, processes the fundus images and returns information on the exam quality and the presence or absence of mtmDR to IDx-Service. IDx-Service then transports the results to the IDx-DR Client that displays them to the user.

    AI/ML Overview

    Here's an analysis of the acceptance criteria and study information for the IDx-DR device, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided 510(k) summary (K203629) states that the device modifications do not affect clinical performance and refers to the predicate device (DEN180001) for clinical trial details. Therefore, the acceptance criteria and reported device performance are identical to the predicate device. To provide complete information, one would need to refer to the DEN180001 submission. However, based solely on the provided document K203629, the table would look like this:

    | Acceptance Criterion | Reported Device Performance (from K203629) |
    | --- | --- |
    | Auto-detect more than mild diabetic retinopathy (mtmDR) | Not explicitly stated in K203629. K203629 states: "The device modifications do not affect clinical performance." Performance is considered "Equivalent" to predicate device DEN180001. |
    | Refer to an eye care professional for mtmDR detected | Not explicitly stated in K203629. K203629 states: "The device modifications do not affect clinical performance." Performance is considered "Equivalent" to predicate device DEN180001. |
    | Rescreen in 12 months for mtmDR not detected | Not explicitly stated in K203629. K203629 states: "The device modifications do not affect clinical performance." Performance is considered "Equivalent" to predicate device DEN180001. |
    | Insufficient image quality identified | Implied as an output, but no performance metric given. K203629 states: "The device modifications do not affect clinical performance." Performance is considered "Equivalent" to predicate device DEN180001. |

    Important Note: To get the actual numerical acceptance criteria (e.g., sensitivity, specificity thresholds) and the reported performance values, the DEN180001 submission would need to be reviewed. This document explicitly avoids providing those details for the current submission.

    2. Sample Size Used for the Test Set and Data Provenance

    Since the current submission (K203629) states that "The determination of substantial equivalence is not based on an assessment of clinical performance data" and refers to DEN180001 for clinical trial details, this information is not available in the provided text.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    This information is not provided in the K203629 document. It would be found in the clinical trial details for the predicate device (DEN180001).

    4. Adjudication Method for the Test Set

    This information is not provided in the K203629 document. It would be found in the clinical trial details for the predicate device (DEN180001).

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance

    A Multi-Reader Multi-Case (MRMC) comparative effectiveness study comparing human readers with AI assistance versus without AI assistance is not mentioned in the furnished K203629 document. The document explicitly states that the substantial equivalence determination is not based on new clinical performance data and refers to the predicate device's clinical trial.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Study Was Done

    The K203629 document describes the IDx-DR Analysis component as "Software that analyzes the patient's images and determines exam quality and the presence/absence of diabetic retinopathy." This implies a standalone algorithmic assessment. However, the performance metrics of this specific version of the standalone algorithm are not presented in this document, as it relies on the predicate device's clinical performance. The "Outputs" section of Table 1 supports the standalone nature of the output, as it directly states the detection of DR and referral decisions.

    7. The Type of Ground Truth Used

    This information is not provided in the K203629 document. It would be found in the clinical trial details for the predicate device (DEN180001). Typically, for diabetic retinopathy, ground truth is established by a panel of expert ophthalmologists or retina specialists through consensus reading of images, potentially correlated with other clinical findings.

    8. The Sample Size for the Training Set

    The document does not specify the sample size for the training set. It mentions "Future algorithm improvements will be made under a consistent medically relevant framework" and "A protocol was provided to mitigate the risk of algorithm changes," but no details on training data for the current or previous versions are given.

    9. How the Ground Truth for the Training Set Was Established

    The document does not provide details on how the ground truth for the training set was established.


    K Number
    K200667
    Device Name
    EyeArt
    Manufacturer
    Date Cleared
    2020-08-03

    (143 days)

    Product Code
    Regulation Number
    886.1100
    Reference & Predicate Devices
    Predicate For
    AI/ML, SaMD, IVD (In Vitro Diagnostic), Therapeutic, Diagnostic, Is PCCP Authorized, Third-party, Expedited Review
    Intended Use

    EyeArt is indicated for use by healthcare providers to automatically detect more than mild diabetic retinopathy and vision-threatening diabetic retinopathy (severe non-proliferative diabetic retinopathy or proliferative diabetic retinopathy and/or diabetic macular edema) in eyes of adults diagnosed with diabetes who have not been previously diagnosed with more than mild diabetic retinopathy. EyeArt is indicated for use with Canon CR-2 Plus AF cameras in both primary care and eye care settings.

    Device Description

    EyeArt is a software as a medical device that consists of several components - Client, Server, and Analysis Computation Engine. A retinal fundus camera, used to capture retinal fundus images of the patient, is connected to a computer where the EyeArt Client software is installed. The EyeArt Client software provides a graphical user interface (GUI) that allows the EyeArt operator to transfer the appropriate fundus images to and receive results from the remote EyeArt Analysis Computation Engine through the EyeArt Server. The EyeArt Analysis Computation Engine is installed on remote computer(s) in a secure data center and uses artificial intelligence algorithms to analyze the fundus images and return results. EyeArt is intended to be used with color fundus images of resolution 1.69 megapixels or higher captured using one of the indicated color fundus cameras (Canon CR-2 AF and Canon CR-2 Plus AF) with 45 degrees field of view. EyeArt is specified for use with two color fundus images per eye: optic nerve head (ONH) centered and macula centered.

    For each patient eye, the EyeArt results separately indicate whether more than mild diabetic retinopathy (mtmDR) and vision-threatening diabetic retinopathy (vtDR) are detected. More than mild diabetic retinopathy is defined as the presence of moderate non-proliferative diabetic retinopathy or worse on the International Clinical Diabetic Retinopathy (ICDR) severity scale and/or the presence of diabetic macular edema. Vision-threatening diabetic retinopathy is defined as the presence of severe non-proliferative diabetic retinopathy or proliferative diabetic retinopathy on the ICDR severity scale and/or the presence of diabetic macular edema.
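The mtmDR and vtDR definitions above are simple threshold rules on the ICDR severity scale plus a DME flag. A minimal sketch of that mapping (the enum encoding of ICDR levels is an illustrative assumption, not part of the submission):

```python
from enum import IntEnum

class ICDR(IntEnum):
    """International Clinical Diabetic Retinopathy severity scale levels
    (illustrative integer encoding)."""
    NO_DR = 0
    MILD_NPDR = 1
    MODERATE_NPDR = 2
    SEVERE_NPDR = 3
    PDR = 4

def is_mtmdr(icdr: ICDR, dme_present: bool) -> bool:
    """mtmDR: moderate NPDR or worse on the ICDR scale, and/or DME."""
    return icdr >= ICDR.MODERATE_NPDR or dme_present

def is_vtdr(icdr: ICDR, dme_present: bool) -> bool:
    """vtDR: severe NPDR or PDR on the ICDR scale, and/or DME."""
    return icdr >= ICDR.SEVERE_NPDR or dme_present
```

Note that every vtDR-positive eye is also mtmDR-positive under these definitions, since the vtDR threshold sits strictly above the mtmDR threshold on the same scale.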

    AI/ML Overview

    Here's a breakdown of the EyeArt device's acceptance criteria and the study that proves it meets them, based on the provided text:

    Acceptance Criteria and Reported Device Performance

    Device: EyeArt (v2.1.0)
    Indication for Use: To automatically detect more than mild diabetic retinopathy (mtmDR) and vision-threatening diabetic retinopathy (vtDR) in adults diagnosed with diabetes who have not been previously diagnosed with more than mild diabetic retinopathy.

    | Metric | Acceptance Criteria (implicit: achieving high performance) | Reported Device Performance (worst case across cohorts/outcomes) |
    | --- | --- | --- |
    | Sensitivity (mtmDR) | High (e.g., above 90%) | 92.9% (Enrichment-permitted, Primary Care) |
    | Specificity (mtmDR) | High (e.g., above 85%) | 85.2% (Enrichment-permitted, Ophthalmology) |
    | Sensitivity (vtDR) | High (e.g., near 90% or 100% for smaller groups) | 88.9% (Sequential, Ophthalmology) |
    | Specificity (vtDR) | High (e.g., above 89%) | 89.8% (Enrichment-permitted, Ophthalmology) |
    | Imageability | High (e.g., above 95%) | 96.5% (Enrichment-permitted, Ophthalmology, Sequence P1/P2/P3) |
    | Intra-operator Repeatability (OA, mtmDR) | High (e.g., above 90%) | 93.5% (Cohort P2) |
    | Intra-operator Repeatability (OA, vtDR) | High (e.g., above 96%) | 96.8% (Cohort P2) |
    | Inter-operator Reproducibility (OA, mtmDR) | High (e.g., above 90%) | 90.3% (Cohort P1) |
    | Inter-operator Reproducibility (OA, vtDR) | High (e.g., above 96%) | 96.8% (Cohort P1) |

    Note: The document does not explicitly state numerical acceptance criteria thresholds. The reported performance values are the actual outcomes from the clinical study, which are implicitly considered acceptable for substantial equivalence based on the FDA's clearance. The "worst case" reported here refers to the lowest performance observed across the different cohorts for each metric. Confidence intervals are provided in the tables and are generally tight, indicating reliability.
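The sensitivities and specificities above are binomial proportions with confidence intervals. A minimal sketch of how such point estimates and Wilson score intervals are computed (the counts below are illustrative only, not taken from the submission):

```python
from math import sqrt

def sensitivity(tp: int, fn: int) -> float:
    """True positive rate: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True negative rate: TN / (TN + FP)."""
    return tn / (tn + fp)

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score confidence interval for a binomial proportion
    (z = 1.96 gives an approximate 95% interval)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Illustrative counts (hypothetical, chosen only to reproduce the headline rates):
tp, fn, tn, fp = 130, 10, 380, 66
print(f"sensitivity = {sensitivity(tp, fn):.1%}")  # 92.9%
print(f"specificity = {specificity(tn, fp):.1%}")  # 85.2%
lo, hi = wilson_ci(tp, tp + fn)
print(f"approx. 95% CI for sensitivity: {lo:.1%} - {hi:.1%}")
```

Tight intervals like those reported in the tables are a function of sample size: the wider the denominator per cohort, the narrower the Wilson interval around the observed rate.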


    Study Details Proving Device Meets Acceptance Criteria

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Size:
      • Clinical Efficacy Study: 655 participants (after exclusions from an initial 942 screened), comprising 1290 eyes (assuming 2 eyes per subject, though analyses are eye-level). These 655 participants were divided into two main cohorts:
        • Sequential Enrollment: 235 subjects (45 in primary care, 190 in ophthalmology)
        • Enrichment-Permitted Enrollment: 420 subjects (335 in primary care, 85 in ophthalmology)
      • Precision (Repeatability/Reproducibility) Study: 62 subjects (31 subjects each at 2 US primary care sites), resulting in 186 pairs of images for repeatability analysis (Cohort P1) and 62 subjects for Cohort P2 (3 repeats each).
    • Data Provenance: Prospective, multi-center pivotal clinical trial conducted across 11 US study sites (primary care centers and general ophthalmology centers).

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Number of Experts: At least 2 independent graders and an additional adjudication grader (more experienced) were used for each subject's images.
    • Qualifications of Experts: Experienced and certified graders at the Fundus Photography Reading Center (FPRC). They were certified to grade according to the Early Treatment for Diabetic Retinopathy Study severity (ETDRS) scale. Specific experience levels (e.g., "10 years of experience") are not detailed beyond "experienced and certified."

    4. Adjudication Method for the Test Set

    • Method: 2+1 adjudication. Each subject's images were independently graded by 2 experienced and certified graders. In case of significant differences (determined using prespecified significance levels) between the two independent gradings, a more experienced adjudication grader graded the same images to establish the final ground truth.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was it done? No, a comparative effectiveness study evaluating how much human readers improve with AI vs. without AI assistance was not reported in this document. The study primarily evaluated the standalone performance of the EyeArt device against expert human grading (FPRC reference standard).

    6. Standalone (i.e., algorithm only without human-in-the-loop performance)

    • Was it done? Yes, the entire clinical testing section (Section D) describes the standalone performance of the EyeArt algorithm. The EyeArt results (positive, negative, or ungradable for mtmDR and vtDR) for each eye were compared directly to the clinical reference standard established by FPRC grading.

    7. Type of Ground Truth Used

    • Type: Expert Consensus Grading (adjudicated) from the Fundus Photography Reading Center (FPRC). This grading was based on dilated 4-wide field stereo fundus imaging and applied the Early Treatment for Diabetic Retinopathy Study (ETDRS) severity scale.
      • mtmDR Ground Truth: Positive if ETDRS level was 35 or greater (but not equal to 90) or clinically significant macular edema (CSME) grade was CSME present. Negative if ETDRS levels were 10-20 and CSME grade was CSME absent.
      • vtDR Ground Truth: Positive if ETDRS level was 53 or greater (but not equal to 90) or CSME grade was CSME present. Negative if ETDRS levels were 10-47 and CSME grade was CSME absent.
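The ground-truth rules above are explicit thresholds on the ETDRS level plus a CSME flag, and can be encoded directly. This sketch follows the stated definitions; the "indeterminate" branch (e.g. level 90, ungradable, without CSME) is an assumption about how cases outside both rules would be labeled:

```python
def mtmdr_ground_truth(etdrs: int, csme_present: bool) -> str:
    """Eye-level mtmDR label: positive if ETDRS >= 35 (excluding level 90)
    or CSME present; negative if ETDRS 10-20 and CSME absent."""
    if csme_present or (etdrs >= 35 and etdrs != 90):
        return "positive"
    if 10 <= etdrs <= 20:
        return "negative"
    return "indeterminate"

def vtdr_ground_truth(etdrs: int, csme_present: bool) -> str:
    """Eye-level vtDR label: positive if ETDRS >= 53 (excluding level 90)
    or CSME present; negative if ETDRS 10-47 and CSME absent."""
    if csme_present or (etdrs >= 53 and etdrs != 90):
        return "positive"
    if 10 <= etdrs <= 47:
        return "negative"
    return "indeterminate"
```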

    8. Sample Size for the Training Set

    • The document does not specify the sample size for the training set. It mentions that the EyeArt Analysis Computation Engine uses "an ensemble of clinically aligned machine learning (deep learning) algorithms" but provides no details on their training data.

    9. How the Ground Truth for the Training Set Was Established

    • The document does not specify how the ground truth for the training set was established. While the "clinically aligned framework" is mentioned, the specific methodology for annotating or establishing ground truth for the training data is not detailed in this submission summary.

    K Number
    DEN180001
    Device Name
    IDx-DR
    Manufacturer
    Date Cleared
    2018-04-11

    (89 days)

    Product Code
    Regulation Number
    886.1100
    Type
    Direct
    Reference & Predicate Devices
    N/A
    Predicate For
    N/A
    AI/ML, SaMD, IVD (In Vitro Diagnostic), Therapeutic, Diagnostic, Is PCCP Authorized, Third-party, Expedited Review
    Intended Use

    IDx-DR is indicated for use by health care providers to automatically detect more than mild diabetic retinopathy (mtmDR) in adults diagnosed with diabetes who have not been previously diagnosed with diabetic retinopathy. IDx-DR is indicated for use with the Topcon NW400.

    Device Description

    The IDx-DR consists of several components. A fundus camera is attached to a computer, where the IDx-DR Client is installed. The Client allows the user to interact with the server-based analysis software over a secure internet connection. Using the Client, users identify two fundus images per eye to be dispatched to IDx-Service, which is installed on a server hosted at a secure datacenter. IDx-DR Analysis, which runs inside IDx-Service, processes the fundus images and returns information on the image quality and the presence or absence of mtmDR to IDx-Service. IDx-Service then returns the results to the IDx-DR Client.

    AI/ML Overview

    Acceptance Criteria and Device Performance for IDx-DR

    This document details the acceptance criteria for the IDx-DR device and summarizes the study conducted to demonstrate its performance.

    1. Acceptance Criteria and Reported Device Performance

    The primary outcomes for the IDx-DR study were sensitivity and specificity for detecting more than mild diabetic retinopathy (mtmDR). Pre-defined performance thresholds were established, and the study results demonstrate the device met these criteria.

    | Metric | Acceptance Criterion (Threshold) | Reported Device Performance (Full Analyzable Set) | 95% Confidence Interval (Reported) |
    | --- | --- | --- | --- |
    | Sensitivity | 85.0% | 87.4% | 81.9% - 92.9% |
    | Specificity | 82.5% | 89.5% | 86.9% - 93.1% |
    | Imageability | Not explicitly stated | 96.1% | 94.0% - 96.8% |
    | Positive Predictive Value (PPV) | Not explicitly stated | 72.7% | (implicitly provided as 173/238) |
    | Negative Predictive Value (NPV) | Not explicitly stated | 95.7% | (implicitly provided as 556/581) |

    Note: The reported performance also includes enrichment-corrected sensitivity and specificity, which were also high and met the thresholds.
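The PPV and NPV rows follow directly from the implied fractions. A quick arithmetic check (the TP/FP/TN/FN split below is inferred from the stated fractions 173/238 and 556/581, not reported explicitly in the submission):

```python
def ppv(tp: int, fp: int) -> float:
    """Positive predictive value: TP / (TP + FP)."""
    return tp / (tp + fp)

def npv(tn: int, fn: int) -> float:
    """Negative predictive value: TN / (TN + FN)."""
    return tn / (tn + fn)

# Inferred counts: 173/238 positives correct -> TP=173, FP=65;
# 556/581 negatives correct -> TN=556, FN=25.
print(f"PPV = {ppv(173, 65):.1%}")  # 72.7%
print(f"NPV = {npv(556, 25):.1%}")  # 95.7%
```

Unlike sensitivity and specificity, PPV and NPV depend on disease prevalence in the study population, which is one reason the enrichment strategy (targeting elevated HbA1c) matters when interpreting them.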

    2. Sample Size and Data Provenance for Test Set

    • Sample Size (Test Set): 819 participants were fully analyzable in the pivotal clinical study.
    • Data Provenance: The data was collected prospectively from 10 primary care sites across the United States. The target population was adults diagnosed with diabetes who had not been previously diagnosed with diabetic retinopathy. The study population was enriched by targeting enrollment of subjects with elevated Hemoglobin A1c (HbA1C) levels.

    3. Number and Qualifications of Experts for Ground Truth (Test Set)

    • Number of Experts: Three experienced and validated readers.
    • Qualifications of Experts: The readers were certified by the Fundus Photography Reading Center (FPRC) and had expertise in evaluating the severity of retinopathy and diabetic macular edema (DME) according to the Early Treatment for Diabetic Retinopathy Study (ETDRS) scale and Diabetic Retinopathy Clinical Research Network (DRCR) grading paradigm.

    4. Adjudication Method for the Test Set

    The adjudication method used for establishing the ground truth from the FPRC readers was a majority voting paradigm for the four widefield stereo image pairs.
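A majority-vote reference standard over an odd panel can be sketched in a few lines. The grade labels and tie handling here are illustrative assumptions; with three readers and a binary grade, a strict majority always exists:

```python
from collections import Counter

def majority_vote(grades: list[str]) -> str:
    """Return the grade assigned by a strict majority of readers;
    raise if no strict majority exists (possible for multi-level grades
    or even panel sizes)."""
    winner, count = Counter(grades).most_common(1)[0]
    if count <= len(grades) // 2:
        raise ValueError("no strict majority among readers")
    return winner

# Three-reader panel, binary mtmDR grade:
assert majority_vote(["mtmDR+", "mtmDR+", "mtmDR-"]) == "mtmDR+"
```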

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No explicit MRMC comparative effectiveness study involving human readers' improvement with AI vs. without AI assistance was reported. The study focused on the standalone performance of the IDx-DR device against an expert-derived reference standard.

    6. Standalone (Algorithm Only) Performance

    Yes, a standalone performance study was conducted. The reported sensitivity, specificity, PPV, and NPV values are for the IDx-DR algorithm operating autonomously, without human-in-the-loop assistance during the diagnostic process.

    7. Type of Ground Truth Used (Test Set)

    The ground truth used was expert consensus based on comprehensive ophthalmic imaging (dilated four-widefield stereo color fundus photography and macular optical coherence tomography (OCT) imaging) read by three experienced and validated readers at the Fundus Photography Reading Center (FPRC). The severity of retinopathy and DME was determined according to the ETDRS scale and DRCR grading paradigm, using a majority voting paradigm.

    8. Sample Size for the Training Set

    The document does not explicitly state the sample size used for the training set. It describes the clinical study as a pivotal clinical study with 900 enrolled patients, which formed the basis for evaluating the device's performance, but it does not specify what portion (if any) of this dataset was used for training or validation during the development phase. The language focuses on the "analyzable fraction" of participants for the primary outcomes, implying this was the test set.

    9. How the Ground Truth for the Training Set Was Established

    The document does not provide details on how the ground truth was established for the training set. It primarily describes the methodology for establishing the ground truth for the test set used in the pivotal clinical study. It mentions that IDx has provided a full characterization of the technical parameters of the software, including a description of the algorithms, and that IDx will make future algorithm improvements under a consistent medically relevant framework. However, the details of training data ground truth establishment are not discussed.

