K Number
K203594
Device Name
EyeCTester
Date Cleared
2022-09-07

(637 days)

Product Code
Regulation Number
886.1330
Panel
OP
Reference & Predicate Devices
Predicate For
N/A
Intended Use

The EyeCTester Model iOS is a mobile software as a medical device (SaMD) app for adults ages 22 and above. It is intended to serve as an aid to eyecare providers to monitor and detect central visual distortions (maculopathies) as well as central 10-degree visual field scotomas.

EyeCTester is not intended to screen or diagnose, but it is intended to alert a healthcare provider of any changes in a patient's central visual status. The device is intended for both at-home and clinical use. The EyeCTester is indicated to be used only with compatible mobile devices.

Device Description

The EyeCTester - Monitoring Application comprises a survey and a series of tests to provide remote monitoring of the patient's visual parameters from the patient's home. The EyeCTester - Monitoring Application is available to patients with a prescription from the healthcare provider managing their vision. Prescribed, routine patient testing via the app allows providers to monitor vision health in the interim period between visits to the physician's clinical practice. The tool is not intended to replace the need for an in-person eye exam with a professional eye care provider. EyeCTester is not intended to diagnose the patient; diagnosis and management are the responsibility of the qualified team prescribing and interpreting the app measurements.

Prior to regular, at-home use, patients will receive training in the clinic and will then undergo training within the app and verify their understanding of how to complete testing. Pop-ups will appear throughout the app reminding the patient that the app's purpose is to monitor changes in vision, not to provide a diagnosis.

The app uses an Amsler Grid test to detect changes in visual distortions / scotomas / field cuts within the central 10 degrees of vision. The test prompts patients to outline and define distorted and/or missing areas on a series of grids while staring at a pulsating fixation point. At the end of the test, the grids are combined to display the patient's responses. The results of the assessments are summarized in a report that is provided to the prescribing healthcare provider to be interpreted and evaluated for any changes over time.
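The summary does not describe the app's internals, but the combining step it mentions (merging a series of grid responses into one display for the provider) can be sketched in a few lines. The sketch below is a hypothetical illustration only: the grid size, cell coordinates, and `combine_grids` function are assumptions, not details from the submission.

```python
# Hypothetical sketch of combining per-test Amsler grid responses into one
# composite map. Each response is a list of (row, col) cells the patient
# marked as distorted or missing; the composite counts how often each cell
# was flagged across the series of grids.

GRID_SIZE = 10  # assumed: one cell per degree across the central 10 degrees


def combine_grids(responses):
    """Combine several per-test grids into a composite count map."""
    composite = [[0] * GRID_SIZE for _ in range(GRID_SIZE)]
    for marked_cells in responses:
        for row, col in marked_cells:
            composite[row][col] += 1
    return composite


# Example: two test runs that both flag a small paracentral area.
runs = [
    [(4, 5), (4, 6), (5, 5)],
    [(4, 5), (5, 5)],
]
composite = combine_grids(runs)
print(composite[4][5])  # cell flagged in both runs -> 2
```

A count map like this would let the reviewing provider see which areas were flagged consistently versus only once, which matches the document's emphasis on evaluating changes over time.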

AI/ML Overview

The provided text describes the EyeCTester - Monitoring Application, a mobile software as a medical device (SaMD) app intended to aid eyecare providers in monitoring and detecting central visual distortions (maculopathies) and central 10-degree visual field scotomas. The device is not intended for screening or diagnosis but to alert healthcare providers of changes in a patient's central visual status.

Here's an analysis of the acceptance criteria and the study that proves the device meets them, based on the provided text:

Acceptance Criteria and Reported Device Performance

The acceptance criteria are not explicitly laid out in a table format with specific quantitative thresholds. Instead, the document describes the results of clinical testing as demonstrating repeatability, consistency with the paper analogue, and a low human error rate. The overall conclusion is that the device "is demonstrated to be as safe and effective and perform as well as the identified predicate device."

Thus, the implicit acceptance criteria are:

  1. Repeatability: The device should consistently provide similar results when a test is performed multiple times on the same subject.
  2. Consistency with Standard of Care (Paper Amsler Grid): The device's results should be comparable to those obtained using the traditional paper Amsler Grid.
  3. Low Human Error Rate: Users should be able to complete the test with minimal errors.
  4. Safety and Effectiveness: The device should be safe for its intended users and effective in its stated purpose (aiding in monitoring and detection of visual distortions/scotomas).
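The summary reports repeatability and consistency with the paper grid qualitatively, without formulas or thresholds. As an illustration of how such agreement between two sets of marked grid cells could be quantified, the sketch below uses the Dice coefficient; the metric choice and the example values are assumptions, not figures from the submission.

```python
# Hypothetical sketch: quantifying agreement between two Amsler grid results
# (e.g., an app-based run vs. a paper-grid run) with the Dice coefficient.
# This metric is not named in the 510(k) summary; it is one common choice
# for overlap between two sets of marked cells.

def dice_agreement(cells_a, cells_b):
    """Dice coefficient between two sets of marked (row, col) cells."""
    a, b = set(cells_a), set(cells_b)
    if not a and not b:
        return 1.0  # both runs clean: treat as perfect agreement
    return 2 * len(a & b) / (len(a) + len(b))


app_run = [(4, 5), (4, 6), (5, 5)]
paper_run = [(4, 5), (5, 5), (5, 6)]
print(round(dice_agreement(app_run, paper_run), 2))  # 0.67
```

A score of 1.0 would mean identical markings across the two methods, 0.0 no overlap at all; a study could report such a statistic per subject to support a repeatability or consistency claim.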

Based on the "Clinical Testing" section, here's how the device performed against these implicit criteria:

| Acceptance Criterion | Reported Device Performance |
| --- | --- |
| Repeatability | "These studies demonstrated that a patient may use the EyeCTester application to gather psychophysiological measurements of central and paracentral vision parameters related to normative functioning of the human visual system, and these measurements are useful for remote monitoring in between clinic visits." "The data from this clinical study provide affirmation that the EyeCTester app-based test is repeatable and consistent with the results of the paper analogue." |
| Consistency with Standard of Care | "Clinical testing was performed to validate the visual parameter assessments of the EyeCTester compared to the standard of care modality (paper Amsler Grid)..." "The data from this clinical study provide affirmation that the EyeCTester app-based test is repeatable and consistent with the results of the paper analogue. The Amsler Grid testing was consistent and repeatable across the two testing methods." |
| Low Human Error Rate | "An additional metric assessed during the study was rate of human error in completing the test. It was confirmed that the human error rate was below 1% in the EyeCTester app test." |
| Safety and Effectiveness (Overall Summary) | "Overall, the clinical evaluations support the use of the EyeCTester for testing patients using the Amsler Grid test." "Clinical and Human Factors testing demonstrate that the device performs as intended." "The EyeCTester - Monitoring Application is demonstrated to be as safe and effective and perform as well as the identified predicate device." |

Study Details from the Provided Text:

  1. Sample size used for the test set and the data provenance:

    • Sample Size: The document does not specify the exact number of participants (sample size) in the clinical studies. It mentions "participants without visual defects" in the Human Factors study and "healthy volunteers and patients with neuro-ophthalmic, retinal, and other diseases" in the clinical study.
    • Data Provenance: The clinical studies were "prospective, randomized, cross-over, 2-site studies." The country of origin is not explicitly stated, but the submission is to the U.S. FDA, and the owner and consultant are based in Houston, Texas, suggesting a U.S. context.
  2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • The document does not specify the number or qualifications of experts used to establish the ground truth for the test set. It mentions the study compared the device to the "standard of care modality (paper Amsler Grid)" and involved "eyecare providers" for interpretation, but doesn't detail expert involvement in ground truth establishment for the comparative study itself.
  3. Adjudication method for the test set:

    • The document does not describe any specific adjudication method (e.g., 2+1, 3+1) for the test set results. The comparison was against the "paper Amsler Grid," implying a direct comparison of results rather than a complex multi-reader adjudication process for ground truth.
  4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:

    • No, an MRMC comparative effectiveness study involving human readers and AI assistance is not described. The study focuses on the device's performance compared to the paper Amsler Grid, not on how AI assistance improves human reader performance. The device itself is described as an "aid to eyecare providers" and provides a "report that is provided to the prescribing healthcare provider to be interpreted and evaluated." It acts as a tool for monitoring, not necessarily as an AI assisting in the interpretation process itself, though it gathers data that providers interpret.
  5. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:

    • The "Clinical Testing" section describes a study to "validate the visual parameter assessments of the EyeCTester compared to the standard of care modality (paper Amsler Grid)." This implies evaluating the device's output against a known standard.
    • While the device is intended to be an "aid" to providers and not for diagnosis, the clinical testing seems to evaluate the data collected by the device itself and its consistency with the paper Amsler Grid. The "human error rate" mentioned is in completing the test by the patient, not necessarily the algorithm's performance. The "report" is then provided to the provider for interpretation. Therefore, it appears a standalone performance evaluation of the data generation by the algorithm was conducted, assessed by its repeatability and consistency with the paper grid.
  6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • The ground truth for the clinical study was established by comparing the EyeCTester results to the "standard of care modality (paper Amsler Grid)." This suggests the paper Amsler Grid's results served as the reference or ground truth for the comparison. It is not explicitly stated whether this ground truth was further validated by expert consensus, pathology, or outcomes data.
  7. The sample size for the training set:

    • The document does not mention a training set sample size. It describes "software development and testing" and "human factors validation study" and "clinical testing," but no specifics on a separate training phase or dataset for an AI model. Given the description of the device as primarily an Amsler Grid test, it might not involve complex machine learning models that require distinct training sets, or if it does, the details are not provided. The comparison here is against a traditional method.
  8. How the ground truth for the training set was established:

    • Since a training set is not explicitly mentioned, the method for establishing its ground truth is also not provided.

§ 886.1330 Amsler grid.

(a) Identification. An Amsler grid is a device that is a series of charts with grids of different sizes that are held at 30 centimeters distance from the patient and intended to rapidly detect central and paracentral irregularities in the visual field.

(b) Classification. Class I (general controls). The device is exempt from the premarket notification procedures in subpart E of part 807 of this chapter, subject to the limitations in § 886.9. The device is also exempt from the current good manufacturing practice requirements of the quality system regulation in part 820 of this chapter, with the exception of § 820.180, with respect to general requirements concerning records, and § 820.198, with respect to complaint files.