Search Results

Found 6 results

510(k) Data Aggregation

    K Number
    K203594
    Device Name
    EyeCTester
    Date Cleared
    2022-09-07

    (637 days)

    Product Code
    Regulation Number
    886.1330
    Reference & Predicate Devices
    Why did this record match?
    510k Summary Text (Full-text Search) :

    Houston, Texas 77071

    Re: K203594

    Trade/Device Name: EyeCTester Regulation Number: 21 CFR 886.1330 Regulation
    Class I
    Classification Number(s): 21 CFR 886.1330
    Regulation Number(s): 21 CFR 886.1330

    Intended Use

    The EyeCTester Model iOS is a mobile software as a medical device (SaMD) app for adults ages 22 and above. It is intended to serve as an aid to eyecare providers to monitor and detect central visual distortions (maculopathies) as well as central 10-degree visual field scotomas.

    EyeCTester is not intended to screen or diagnose, but it is intended to alert a healthcare provider of any changes in a patient's central visual status. The device is intended for both at-home and clinical use. The EyeCTester is indicated to be used only with compatible mobile devices.

    Device Description

    The EyeCTester - Monitoring Application comprises a survey and a series of tests that provide remote monitoring of the patient's visual parameters from the patient's home. The EyeCTester - Monitoring Application is available to patients with a prescription from the healthcare provider managing their vision. Prescribed, routine patient testing via the app allows providers to monitor vision health in the interim period between visits to the physician's clinical practice. The tool is not intended to replace the need for an in-person eye exam with a professional eye care provider. EyeCTester is not intended to diagnose the patient; diagnosis and management are the responsibility of the qualified team prescribing and interpreting the app measurements.

    Prior to regular, at-home use, patients will receive training in the clinic and will then undergo training within the app and verify their understanding of how to complete testing. Pop-ups will appear throughout the app reminding the patient that the app's purpose is to monitor changes in vision, not to provide a diagnosis.

    The app uses an Amsler Grid test to detect changes in visual distortions / scotomas / field cuts within the central 10 degrees of vision. The test prompts patients to outline and define distorted and/or missing areas on a series of grids while staring at a pulsating fixation point. At the end of the test, the grids are combined to display the patient's responses. The results of the assessments are summarized in a report that is provided to the prescribing healthcare provider to be interpreted and evaluated for any changes over time.
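The grid-combination step described above can be sketched as a simple union of the per-grid responses. This is an illustration only: the summary does not specify the grid resolution or combination rule, so the boolean-mask representation and the OR rule here are assumptions.

```python
# Hypothetical sketch of the "grids are combined" step: union (logical OR)
# of the cells a patient marked as distorted or missing across the series
# of Amsler grids. Grid size and the OR rule are assumptions; the app's
# actual representation is not described in the summary.

def combine_grids(grids):
    """OR together equally sized boolean masks of marked grid cells."""
    rows, cols = len(grids[0]), len(grids[0][0])
    return [[any(g[r][c] for g in grids) for c in range(cols)]
            for r in range(rows)]

g1 = [[False, True], [False, False]]   # distortion marked top-right
g2 = [[False, False], [True, False]]   # distortion marked bottom-left
print(combine_grids([g1, g2]))         # both marks survive in the summary map
```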

    AI/ML Overview

    The provided text describes the EyeCTester - Monitoring Application, a mobile software as a medical device (SaMD) app intended to aid eyecare providers in monitoring and detecting central visual distortions (maculopathies) and central 10-degree visual field scotomas. The device is not intended for screening or diagnosis but to alert healthcare providers of changes in a patient's central visual status.

    Here's an analysis of the acceptance criteria and the study that proves the device meets them, based on the provided text:

    Acceptance Criteria and Reported Device Performance

    The acceptance criteria are not explicitly laid out in a table format with specific quantitative thresholds. Instead, the document describes the results of clinical testing as demonstrating repeatability, consistency with the paper analogue, and a low human error rate. The overall conclusion is that the device "is demonstrated to be as safe and effective and perform as well as the identified predicate device."

    Thus, the implicit acceptance criteria are:

    1. Repeatability: The device should consistently provide similar results when a test is performed multiple times on the same subject.
    2. Consistency with Standard of Care (Paper Amsler Grid): The device's results should be comparable to those obtained using the traditional paper Amsler Grid.
    3. Low Human Error Rate: Users should be able to complete the test with minimal errors.
    4. Safety and Effectiveness: The device should be safe for its intended users and effective in its stated purpose (aiding in monitoring and detection of visual distortions/scotomas).

    Based on the "Clinical Testing" section, here's how the device performed against these implicit criteria:

    | Acceptance Criterion | Reported Device Performance |
    |---|---|
    | Repeatability | "These studies demonstrated that a patient may use the EyeCTester application to gather psychophysiological measurements of central and paracentral vision parameters related to normative functioning of the human visual system, and these measurements are useful for remote monitoring in between clinic visits." "The data from this clinical study provide affirmation that the EyeCTester app-based test is repeatable and consistent with the results of the paper analogue." |
    | Consistency with Standard of Care | "Clinical testing was performed to validate the visual parameter assessments of the EyeCTester compared to the standard of care modality (paper Amsler Grid)..." "The Amsler Grid testing was consistent and repeatable across the two testing methods." |
    | Low Human Error Rate | "An additional metric assessed during the study was rate of human error in completing the test. It was confirmed that the human error rate was below 1% in the EyeCTester app test." |
    | Safety and Effectiveness (Overall Summary) | "Overall, the clinical evaluations support the use of the EyeCTester for testing patients using the Amsler Grid test." "Clinical and Human Factors testing demonstrate that the device performs as intended." "The EyeCTester - Monitoring Application is demonstrated to be as safe and effective and perform as well as the identified predicate device." |

    Study Details from the Provided Text:

    1. Sample size used for the test set and the data provenance:

      • Sample Size: The document does not specify the exact number of participants (sample size) in the clinical studies. It mentions "participants without visual defects" in the Human Factors study and "healthy volunteers and patients with neuro-ophthalmic, retinal, and other diseases" in the clinical study.
      • Data Provenance: The clinical studies were "prospective, randomized, cross-over, 2-site studies." The country of origin is not explicitly stated, but the submission is to the U.S. FDA, and the owner and consultant are based in Houston, Texas, suggesting a U.S. context.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • The document does not specify the number or qualifications of experts used to establish the ground truth for the test set. It mentions the study compared the device to the "standard of care modality (paper Amsler Grid)" and involved "eyecare providers" for interpretation, but doesn't detail expert involvement in ground truth establishment for the comparative study itself.
    3. Adjudication method for the test set:

      • The document does not describe any specific adjudication method (e.g., 2+1, 3+1) for the test set results. The comparison was against the "paper Amsler Grid," implying a direct comparison of results rather than a complex multi-reader adjudication process for ground truth.
    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:

      • No, an MRMC comparative effectiveness study involving human readers and AI assistance is not described. The study focuses on the device's performance compared to the paper Amsler Grid, not on how AI assistance improves human reader performance. The device itself is described as an "aid to eyecare providers" and provides a "report that is provided to the prescribing healthcare provider to be interpreted and evaluated." It acts as a tool for monitoring, not necessarily as an AI assisting in the interpretation process itself, though it gathers data that providers interpret.
    5. If a standalone study (i.e., algorithm only, without human-in-the-loop performance) was done:

      • The "Clinical Testing" section describes a study to "validate the visual parameter assessments of the EyeCTester compared to the standard of care modality (paper Amsler Grid)." This implies evaluating the device's output against a known standard.
      • While the device is intended to be an "aid" to providers and not for diagnosis, the clinical testing seems to evaluate the data collected by the device itself and its consistency with the paper Amsler Grid. The "human error rate" mentioned is in completing the test by the patient, not necessarily the algorithm's performance. The "report" is then provided to the provider for interpretation. Therefore, it appears a standalone performance evaluation of the data generation by the algorithm was conducted, assessed by its repeatability and consistency with the paper grid.
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

      • The ground truth for the clinical study was established by comparing the EyeCTester results to the "standard of care modality (paper Amsler Grid)." This suggests the paper Amsler Grid's results served as the reference or ground truth for the comparison. It is not explicitly stated whether this ground truth was further validated by expert consensus, pathology, or outcomes data.
    7. The sample size for the training set:

      • The document does not mention a training set sample size. It describes "software development and testing" and "human factors validation study" and "clinical testing," but no specifics on a separate training phase or dataset for an AI model. Given the description of the device as primarily an Amsler Grid test, it might not involve complex machine learning models that require distinct training sets, or if it does, the details are not provided. The comparison here is against a traditional method.
    8. How the ground truth for the training set was established:

      • Since a training set is not explicitly mentioned, the method for establishing its ground truth is also not provided.

    K Number
    K180895
    Device Name
    Alleye
    Date Cleared
    2018-06-27

    (83 days)

    Product Code
    Regulation Number
    886.1330
    Reference & Predicate Devices
    Why did this record match?
    510k Summary Text (Full-text Search) :

    San Diego, CA 92104

    Re: K180895 Trade/Device Name: Alleye Regulation Number: 21 CFR 886.1330 Regulation
    NAME OF SUBJECT DEVICE

    Alleye

    COMMON NAME

    Grid, Amsler

    DEVICE CLASSIFICATION

    21 CFR 886.1330

    Intended Use

    The Alleye is a mobile medical software application indicated for the detection and characterization of metamorphopsia, a visual distortion, in patients with age-related macular degeneration (AMD) and as an aid in the monitoring of the progression of this condition in respect of metamorphopsia. It is intended to be used by patients who have the capability to regularly perform a simple self-test at home.

    Device Description

    Alleye is a digital technology visual function test, consisting of two different items: a mobile app for patients and a web interface for eye care professionals. Alleye implements an alignment hyperacuity task that helps patients with age-related macular degeneration (AMD) to assess their vision at home. This allows the timely detection of significant changes in vision function, enabling the regular monitoring of the disease progression and/or monitoring the visual function associated with ongoing treatments. Two elements provide feedback about the Alleye test: a score and a colored circle. The score value reflects the visual performance in the dots alignment, whereas the color indicates whether the performance has worsened considerably (red) or remained stable or improved (green) compared to the previous test.
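The score-plus-color feedback described above amounts to a simple decision rule. The sketch below is hypothetical: the summary does not disclose Alleye's scoring algorithm or what counts as "worsened considerably," so the `DROP_THRESHOLD` value is an invented placeholder.

```python
# Hypothetical reconstruction of the red/green feedback rule: the summary
# says red means performance "worsened considerably" versus the previous
# test, green means stable or improved. Alleye's actual scoring algorithm
# and threshold are not disclosed; DROP_THRESHOLD is an invented placeholder.

DROP_THRESHOLD = 10  # assumed score drop that counts as considerable worsening

def feedback_color(current_score: float, previous_score: float) -> str:
    """Return the feedback circle color for the latest Alleye-style test."""
    if previous_score - current_score >= DROP_THRESHOLD:
        return "red"    # flag considerable worsening to the professional
    return "green"      # stable or improved

print(feedback_color(70, 85))
print(feedback_color(86, 85))
```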

    AI/ML Overview

    The provided text describes the Alleye device, a mobile medical software application indicated for the detection and characterization of metamorphopsia in patients with age-related macular degeneration (AMD) and as an aid in monitoring the progression of this condition.

    Here's an analysis of the acceptance criteria and the studies conducted:

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria for Alleye are not explicitly stated as quantitative targets in the provided document. Instead, the document focuses on demonstrating substantial equivalence to a predicate device (myVisionTrack Model 0005) through various clinical and non-clinical studies. The device performance is described in terms of its ability to detect worsening visual function and its reliability.

    | Acceptance Criteria (Implied) | Reported Device Performance |
    |---|---|
    | Substantially equivalent to predicate device. | Alleye is deemed substantially equivalent to K143211 (myVisionTrack) based on similar intended use and technological characteristics (mobile app for hyperacuity self-testing). |
    | Functional: reliability in metamorphopsia testing. | Test-Retest Study: evaluated the reliability of metamorphopsia testing with Alleye. |
    | Clinical efficacy: aid in monitoring disease progression/need for injection. | Longitudinal Study: demonstrated that patient self-testing with Alleye allows the detection of worsening visual function prior to follow-up visits, aiding in monitoring for the need of an injection. |
    | Usability: user-friendliness for elderly AMD patients. | Usability Study: oral feedback and System Usability Scale (SUS) results confirmed user-friendliness for elderly subjects with AMD. |
    | Software quality: compliance with medical device software standards. | Software Verification: complied with IEC 62304 Medical Device Software - Software Life Cycle Processes. |
    | Safety and effectiveness: does not raise new issues. | Based on validation and clinical evaluation, the device does not raise new issues of safety or effectiveness compared to the predicate. It is considered safe and effective for its intended use. |
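For context on the usability row: the standard System Usability Scale cited there maps ten 1-5 Likert responses to a 0-100 score. Odd-numbered items contribute (response - 1), even-numbered items (5 - response), and the sum is scaled by 2.5. The responses below are invented; the document reports no raw SUS data.

```python
# Reference implementation of the standard System Usability Scale (SUS)
# score cited in the usability row. Odd-numbered items contribute
# (response - 1), even-numbered items (5 - response); the sum is scaled
# by 2.5 to give 0-100. The example responses are invented.

def sus_score(responses):
    """responses: answers (1-5) to the ten standard SUS items, in order."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly ten responses")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)  # i=0 is item 1 (odd)
                for i, r in enumerate(responses))
    return total * 2.5

print(sus_score([5, 1, 5, 2, 4, 1, 5, 1, 4, 2]))  # 90.0 on the 0-100 scale
```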

    2. Sample Size Used for the Test Set and Data Provenance

    • Test-Retest Study:
      • Sample Size: 26 healthy subjects and 60 subjects with age-related macular degeneration (AMD).
      • Data Provenance: Not explicitly stated, but the study was conducted to evaluate reliability. It implies clinical, prospective data.
    • Clinical Evaluation – Longitudinal Study:
      • Sample Size: 60 patients diagnosed with wet AMD who were undergoing pro re nata intravitreal injection (IVI) of anti-VEGF for treatment. They completed at least a 3-month follow-up, providing 1506 Alleye measurements over 150 follow-up periods.
      • Data Provenance: Not explicitly stated, but implies prospective clinical data from patients undergoing treatment.
    • Usability Study:
      • Sample Size: Elderly subjects with AMD (specific number not provided, but at least two clinical studies were conducted).
      • Data Provenance: Implies prospective data collection through user interaction and feedback.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    The document does not explicitly state the number or qualifications of experts used to establish ground truth for the test sets.

    • In the Longitudinal Study, the "need of an injection at the next follow-up visit" served as a clinical outcome, which would typically be determined by an ophthalmologist or retina specialist based on their clinical assessment (e.g., OCT scans, visual acuity changes, signs of active disease). However, the specific methodology for establishing this "ground truth" (i.e., whether an injection was truly needed) and the role of experts in that determination is not detailed.
    • For the Test-Retest Study, the "reliability of the metamorphopsia testing" itself is the outcome, suggesting a measure of internal consistency within the device's readings rather than comparison against an external expert-determined ground truth for metamorphopsia.

    4. Adjudication Method for the Test Set

    The document does not describe any specific adjudication method (e.g., 2+1, 3+1) for establishing ground truth within the described studies. Clinical decisions in the longitudinal study would likely follow standard medical practice by the treating ophthalmologists.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and If So, the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance

    No MRMC comparative effectiveness study involving human readers with and without AI assistance is described in the provided text. The Alleye device is positioned as a patient self-testing tool for monitoring, not as an AI-assisted diagnostic tool for human readers.

    6. If a Standalone (i.e., Algorithm Only Without Human-in-the-Loop Performance) Was Done

    Yes, the studies described investigate the standalone performance of the Alleye application (algorithm only) as a patient self-testing tool.

    • The Test-Retest Study evaluated the reliability of the Alleye's metamorphopsia testing.
    • The Longitudinal Study evaluated the Alleye's ability to detect worsening visual function.
    • The Usability Study evaluated the app's user-friendliness.

    In all these cases, the "performance" refers to the Alleye device's capabilities without direct human interpretive input during the testing process itself. The interpretation of the Alleye results by eye care professionals is an intended use, but the studies focus on the patient-facing component.

    7. The Type of Ground Truth Used (Expert Consensus, Pathology, Outcomes Data, etc.)

    • Test-Retest Study: The ground truth for this study was the internal consistency/reproducibility of the Alleye's own measurements over time in the same individuals. It's about how reliably the device detects metamorphopsia, not necessarily how accurately it matches an external ground truth for metamorphopsia presence.
    • Clinical Evaluation – Longitudinal Study: The ground truth appears to be clinical outcomes data related to the "need for an injection at the next follow-up visit," which would be a physician's clinical decision based on a comprehensive assessment (e.g., visual acuity, OCT, physical exam). The study shows Alleye's ability to predict or indicate this need, implying these clinical decisions serve as the reference.
    • Usability Study: The ground truth for usability was user feedback (oral feedback and System Usability Scale scores) from the target patient population.

    8. The Sample Size for the Training Set

    The document does not provide information about the sample size used for the training set of the Alleye algorithm. As a Predicate device (myVisionTrack) is mentioned and the focus is on substantial equivalence, it's possible that the "algorithm" for hyperacuity testing is based on established principles rather than requiring extensive de novo machine learning training data in the context of this submission. However, if machine learning was used, the training data details are not disclosed here.

    9. How the Ground Truth for the Training Set Was Established

    Since information about a specific training set and its sample size is not provided, the method for establishing its ground truth is also not detailed in this document.


    K Number
    K143211
    Date Cleared
    2015-03-20

    (130 days)

    Product Code
    Regulation Number
    886.1330
    Reference & Predicate Devices
    Why did this record match?
    510k Summary Text (Full-text Search) :

    Reading, MA 01864

    Re: K143211

    Trade/Device Name: myVisionTrack® Model 0005 Regulation Number: 21 CFR 886.1330
    Common Name: Home vision function monitor
    Regulation Number: 21 CFR 886.1330

    Intended Use

    The myVisionTrack® Model 0005 is intended for the detection of central 3 degrees metamorphopsia (visual distortion) in patients with maculopathy, including age-related macular degeneration and diabetic retinopathy, and as an aid in monitoring progression of disease factors causing metamorphopsia. It is intended to be used by patients who have the capability to regularly perform a simple self-test at home. The myVisionTrack® Model 0005 is not intended to diagnose; diagnosis is the responsibility of the prescribing eye-care professional.

    Device Description

    The myVisionTrack® Model 0005 is a vision function test provided as a downloadable app on to the user's supplied cell phone or tablet. The myVisionTrack® Model 0005 implements a shape discrimination hyperacuity (SDH) vision test which allows patients to perform their own vision test at home. If a significant worsening of vision function is detected the physician will be notified and provided access to the vision self-test results so that they can decide whether the patient needs to be seen sooner than their next already scheduled appointment.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study information for the myVisionTrack® Model 0005, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided document primarily focuses on demonstrating substantial equivalence to a predicate device (myVisionTrack® Model 0003) rather than defining explicit, quantitative acceptance criteria for the new device. However, based on the comparative study, we can infer the performance expectation.

    | Acceptance Criteria (Inferred) | Reported Device Performance |
    |---|---|
    | myVisionTrack® Model 0005 performance not significantly different from myVisionTrack® Model 0003 performance. | Cross-sectional study concluded that the performance of Model 0005 (4AFC) is not significantly different from Model 0003 (3AFC). |
    | Test variability across different device platforms (iPod Touch, iPad Air, iPhone 6+) should be comparable to or smaller than the inherent mVT™ test variability over time (0.10 logRM). | Test variability across different devices was comparable to or smaller than 0.10 logRM. Mean results for iPod Touch, iPad Air, and iPhone 6+ were -2.11 logRM, -2.07 logRM, and -2.07 logRM, respectively (F=0.047, p>0.95), indicating no significant difference. |
    | Self-test usability: users should effectively self-test and find the device user-friendly. | 100% of participants completed the self-test with a training demo. 90% met the criteria for completing without issues. 90% understood accessing the "More" screen. 80% completed the self-test in |
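The platform comparison reported here (F = 0.047, p > 0.95) is a one-way ANOVA across device means. Below is a minimal sketch of the F statistic such a comparison produces; the per-subject logRM scores are invented, since the submission's raw data are not included in the document.

```python
# One-way ANOVA F statistic across device platforms, sketching the kind of
# comparison the summary reports (F = 0.047, p > 0.95 for iPod Touch,
# iPad Air, iPhone 6+). The per-subject logRM scores below are invented.

def f_statistic(groups):
    """F = between-group mean square / within-group mean square."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Invented per-subject scores: near-identical platform means, sizable
# within-platform spread -> a small F, i.e. no significant platform effect.
ipod   = [-2.21, -2.01, -2.15, -2.07]
ipad   = [-2.17, -1.97, -2.12, -2.02]
iphone = [-2.12, -2.02, -2.14, -2.00]
print(round(f_statistic([ipod, ipad, iphone]), 2))
```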

    K Number
    K121738
    Date Cleared
    2013-02-22

    (254 days)

    Product Code
    Regulation Number
    886.1330
    Reference & Predicate Devices
    Why did this record match?
    510k Summary Text (Full-text Search) :

    Richardson, TX 75082

    Re: K121738

    Trade/Device Name: myVisionTrack™ Model 0003 Regulation Number: 21 CFR 886.1330
    886.1605; K050350; Product Code: HPT

    • Amsler Grid, a Class I Exempt Preamendments Medical Device (21 CFR 886.1330)
      Regulation Number: 21 CFR 886.1330
    Intended Use

    The myVisionTrack™ is intended for the detection of central 3 degrees metamorphopsia (visual distortion) in patients with maculopathy, including age-related macular degeneration and diabetic retinopathy, and as an aid in monitoring progression of disease factors causing metamorphopsia. It is intended to be used by patients who have the capability to regularly perform a simple self-test at home. The myVisionTrack™ is not intended to diagnose; diagnosis is the responsibility of the prescribing eye-care professional.

    Device Description

    The myVisionTrack™ is a vision function test provided on a commercially available cell phone. The myVisionTrack™ implements a shape discrimination hyperacuity (SDH) vision test which allows patients to perform their own vision test at home. This enables regular monitoring of disease progression, and for timely detection of significant changes in vision function. If a significant worsening of vision function is detected the physician will be notified and provided access to the vision self-test results so that they can decide whether the patient needs to be seen sooner than their next already scheduled appointment.

    AI/ML Overview

    The provided document describes the myVisionTrack™ Model 0003, a device intended for the detection and characterization of central 3 degrees metamorphopsia in patients with maculopathy, including age-related macular degeneration and diabetic retinopathy, and as an aid in monitoring progression of disease factors causing metamorphopsia.

    However, the document does not explicitly state specific acceptance criteria (e.g., sensitivity, specificity thresholds) for the device's performance. Instead, it focuses on demonstrating substantial equivalence to predicate devices and verifying that patients can effectively use the device for self-monitoring.

    Here's an analysis based on the information provided, highlighting what is available and what is not:

    1. Table of Acceptance Criteria and Reported Device Performance

    As noted above, no explicit acceptance criteria thresholds (like specific sensitivity or specificity values) are provided in the document. The study's conclusion is that the device is "as safe, as effective and performs at least as safely and effectively as the predicate devices."

    The document mentions that the study "did show a significant difference between those patients with mild-to-moderate non-proliferative DR (NPDR) and those with very severe NPDR or proliferative DR (PDR), whereas traditional clinic-based visual acuity and contrast sensitivity tests were not able to detect a significant difference." This implies a performance benefit in detecting disease severity, but no specific metrics or acceptance criteria are given for this.

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: 36 individuals.
    • Data Provenance: The study was a "6-month Clinical Study" and "6-month longitudinal study" performed by VAS (Vital Art and Science Incorporated). The document does not explicitly state the country of origin but implies it was conducted by the submitter (Vital Art and Science Incorporated, Richardson, TX, USA). It was a prospective longitudinal study.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications

    • Number of Experts: Not explicitly stated.
    • Qualifications of Experts: Not explicitly stated. The ground truth was based on "clinical judgment" and "traditional clinic-based visual acuity and contrast sensitivity tests" for assessing disease condition (NPDR vs. PDR). It is reasonable to assume these judgments were made by qualified ophthalmologists or eye care professionals, but specific numbers and qualifications are not detailed.

    4. Adjudication Method for the Test Set

    • Adjudication Method: Not explicitly stated. The document mentions "clinical judgment" for assessing the disease condition, but how multiple experts (if any) arrived at a consensus is not described.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • MRMC Study: No, a multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly described or conducted in the context of human readers improving with or without AI assistance. The study focused on the device's ability to monitor changes and patient compliance.

    6. Standalone Performance (Algorithm Only without Human-in-the-loop Performance)

    • Standalone Performance: Yes, the described study appears to be a standalone performance study. The myVisionTrack™ device "implements a shape discrimination hyperacuity (SDH) vision test which allows patients to perform their own vision test at home." The results are then analyzed by the device's algorithm, and "If a significant worsening of vision function is detected the physician will be notified." The study tested "effective self-monitoring... using myVisionTrack™" and demonstrated its ability to detect differences in disease states and generate notifications based on a "0.2 logMAR notification rule." This rule is an algorithmic threshold applied to the device's output.
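The "0.2 logMAR notification rule" named above is a threshold on change from baseline. The sketch below assumes that a baseline score already exists and that higher logMAR means worse vision; the document does not define how the baseline is computed.

```python
# Sketch of the "0.2 logMAR notification rule" named in the text: notify
# the physician when the self-test worsens by 0.2 logMAR or more relative
# to baseline. Assumptions: higher logMAR = worse vision, and a baseline
# score already exists; how the baseline is set is not described.

NOTIFY_DELTA = 0.2  # logMAR worsening that triggers a notification

def should_notify(baseline_logmar: float, current_logmar: float) -> bool:
    """True if the change from baseline meets the notification threshold."""
    return current_logmar - baseline_logmar >= NOTIFY_DELTA

print(should_notify(0.30, 0.55))  # worsened by ~0.25 -> notify
print(should_notify(0.30, 0.35))  # small change -> no notification
```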

    7. Type of Ground Truth Used

    • Type of Ground Truth: The ground truth for defining "significant change of disease condition" was based on clinical judgment by medical professionals and comparisons to traditional clinic-based visual acuity and contrast sensitivity tests.

    8. Sample Size for the Training Set

    • Sample Size for Training Set: The document does not specify a separate training set or its sample size. The description of the clinical study appears to relate to the validation of the device's performance, not necessarily the training of its algorithm (which is a "shape discrimination hyperacuity test"). The algorithms are described as using an "adaptive staircase algorithm," which is a testing methodology, not typically a machine learning training process that requires a training set in the conventional sense.

    9. How the Ground Truth for the Training Set Was Established

    • Establishment of Ground Truth for Training Set: Not applicable, as a distinct training set for a machine learning algorithm is not described. The "shape discrimination hyperacuity test" is a known psychophysical testing method. The "adaptive staircase algorithm" is a method for efficiently finding thresholds. The document states "Numerous published studies have shown that patients with AMD and other forms of maculopathy have significantly poorer results as compared to normal subjects on the shape discrimination test," indicating that the underlying principle is well-established in scientific literature.
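The adaptive staircase named above is a standard psychophysical procedure for finding a stimulus threshold. Here is a generic 1-up/2-down staircase sketch (which converges near the 70.7%-correct level), not VAS's proprietary implementation; the step size and reversal count are illustrative.

```python
# Generic 1-up/2-down adaptive staircase, a textbook version of the
# threshold-finding procedure named in the summary (NOT VAS's proprietary
# implementation). Step size and reversal count are illustrative.

def staircase(respond, start=1.0, step=0.1, max_reversals=8):
    """Estimate the stimulus level at threshold.

    respond(level) -> True if the observer answers correctly; lower
    levels are harder. Two correct in a row -> step down; one wrong ->
    step up. The estimate is the mean of the reversal levels.
    """
    level, direction = start, 0
    streak, reversals, reversal_levels = 0, 0, []
    while reversals < max_reversals:
        if respond(level):
            streak += 1
            if streak == 2:                  # two correct -> make it harder
                streak = 0
                if direction == +1:          # was moving up: a reversal
                    reversals += 1
                    reversal_levels.append(level)
                direction = -1
                level = max(level - step, step)
        else:                                # wrong -> make it easier
            streak = 0
            if direction == -1:              # was moving down: a reversal
                reversals += 1
                reversal_levels.append(level)
            direction = +1
            level += step
    return sum(reversal_levels) / len(reversal_levels)

# Deterministic simulated observer with a true threshold of 0.5: the
# estimate oscillates around it and averages out close to the threshold.
print(round(staircase(lambda lvl: lvl >= 0.5), 2))
```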

    K Number
    K050350
    Manufacturer
    Date Cleared
    2005-04-29

    (77 days)

    Product Code
    Regulation Number
    886.1605
    Reference & Predicate Devices
    Why did this record match?
    510k Summary Text (Full-text Search) :

    under K014044: Product Code HOQ

    • Amsler Grid, a Class I Exempt Preamendments Medical Device (21 CFR 886.1330
    Intended Use

    The PreView PHP™ is intended for use in the detection and monitoring of the progression of Age-related Macular Degeneration (AMD) including, but not limited to, the detection of choroidal neovascularization (CNV).

    The PreView Preferential Hyperacuity Perimeter (PreView PHP™) is intended for use in The detection and characterization of central and paracentral metamorphopsia (visual distortion) in patients with age-related macular degeneration , as an aid in monitoring progression of disease factors causing metamorphopsia including but not limited to progression of assouse ration (CNV). It is intended to be used in the office of a licensed eye care practitioner in patients with stable fixation.

    Device Description

    The PreView PHP™ system is an interactive, software-driven device that presents a series of horizontal and vertical linear images to the macular region of the eye to detect abnormalities of the central and paracentral visual field, in order to detect and monitor progression of age-related macular degeneration, including detection of choroidal neovascularization. The changes in macular and near-macular function identified by the device enable the reader to detect intermediate and advancing changes in macular degeneration and associated diseases, providing the capability for earlier intervention.

    The PreView PHP™ is a specialized perimeter that applies the concept of static, automated perimetry to the detection of visual field defects. The device incorporates the theory of hyperacuity, defined as the ability to perceive a difference in the relative spatial localization of points in the central field. Because of hyperacuity, distortions or misalignments within the central and paracentral field can be mapped with greater accuracy. The device monitors the progressive changes associated with advancing macular degeneration and differentiates the stages of AMD, including but not limited to choroidal neovascularization.

    The PreView PHP™ system is designed for use with standard off-the-shelf PC units in the practitioner's office. It is designed to detect advancing changes of AMD-related lesions, including but not limited to choroidal neovascularization.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the PreView PHP™ device, based on the provided text:

    Acceptance Criteria and Device Performance

    Acceptance Criteria | Reported Device Performance
    Sensitivity to detect recent onset of CNV ≥ 80% | 82% (95% CI: 72% to 92%)
    Specificity to differentiate recent onset of CNV from intermediate stage AMD ≥ 80% | 88% (95% CI: 81% to 96%)
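    For context on how endpoints of this kind are typically computed: sensitivity and specificity are binomial proportions, each reported with a confidence interval. The sketch below uses the Wald (normal-approximation) interval and hypothetical counts chosen only to match the reported 82%/88% point estimates; the study's actual counts and CI method are not stated in the summary.

```python
import math

def sens_spec(tp, fn, tn, fp, z=1.96):
    """Point estimates and Wald 95% CIs for sensitivity and specificity.
    tp/fn: diseased subjects correctly/incorrectly classified;
    tn/fp: non-diseased subjects correctly/incorrectly classified."""
    def prop_ci(k, n):
        p = k / n
        half = z * math.sqrt(p * (1 - p) / n)   # normal-approximation half-width
        return p, max(0.0, p - half), min(1.0, p + half)
    return {"sensitivity": prop_ci(tp, tp + fn),
            "specificity": prop_ci(tn, tn + fp)}

# Hypothetical counts (41/50 = 82%, 44/50 = 88%) -- illustrative only
result = sens_spec(tp=41, fn=9, tn=44, fp=6)
```

    Note that the interval width depends on the (unreported) sample size, which is why the same point estimate can carry very different confidence bounds.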

    Study Details:

    1. Sample Size used for the test set and data provenance:

      • The document does not explicitly state the sample size of the test set patients. However, it does mention that the study was a "prospective, comparative, concurrent, non-randomized multicenter study."
      • Data Provenance: The document states the manufacturer is Notal Vision, Inc. from Tel Aviv, Israel, and the clinical investigation was also conducted in Israel. Based on the mention of "multicenter study," it suggests data was collected from multiple sites.
    2. Number of experts used to establish the ground truth for the test set and qualifications of those experts:

      • The document does not provide information on the specific number of experts or their qualifications that established the ground truth for the test set.
    3. Adjudication method for the test set:

      • The document does not specify an adjudication method.
    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done. If so, what was the effect size of how much human readers improve with AI vs without AI assistance:

      • The provided text does not describe an MRMC comparative effectiveness study where human readers' performance with and without AI assistance was evaluated. The study focused on the standalone performance of the PreView PHP™ device.
    5. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:

      • Yes, a standalone study was done. The clinical investigation assessed the ability of the Preferential Hyperacuity Perimeter (PreView PHP™) itself to detect recent onset of choroidal neovascularization (CNV) and differentiate it from an intermediate stage of AMD. The reported sensitivity and specificity are for the device's performance directly.
    6. The type of ground truth used:

      • The ground truth for the test set was based on untreated CNV from AMD diagnosed within the last 60 days or an intermediate stage of AMD (at least 1 large druse or at least 20 medium-size drusen) with no evidence of geographic atrophy or other macular diseases. This implies a clinical diagnosis by ophthalmologists, likely supported by other diagnostic methods (which are not explicitly detailed in the provided text for ground truth establishment, though the referenced predicate devices include technologies like the Heidelberg Retina Angiograph FA/ICGA).
    7. The sample size for the training set:

      • The document does not specify a separate training set or its sample size. The clinical investigation appears to be a validation study.
    8. How the ground truth for the training set was established:

      • As no separate training set is explicitly mentioned, how its ground truth was established is not provided. The device likely predates, or was developed concurrently with, modern machine learning paradigms that require distinct training and test sets.

    K Number
    K014044
    Manufacturer
    Date Cleared
    2002-03-04

    (87 days)

    Product Code
    Regulation Number
    886.1330
    Reference & Predicate Devices
    N/A
    Intended Use

    The Notal Vision Macular Computerized Psychophysical Test (MCPT) is indicated for the early detection of central and paracentral irregularities (vision abnormalities) in the visual field, most commonly associated with macular degeneration.

    Device Description

    The device is a computerized interactive software device that presents the user with a series of images oriented in vertical and horizontal planes on an imaging screen, designed to identify irregularities in the central and paracentral macular field of the human visual system. The device analyzes the image presentation in the process of identifying visual field abnormalities and stores the analysis on a computer server. The device is housed in a PC computer.

    AI/ML Overview

    The Notal Vision Macular Computerized Psychophysical Test (MCPT) is indicated for the early detection of central and paracentral irregularities (vision abnormalities) in the visual field, most commonly associated with macular degeneration.

    1. Table of Acceptance Criteria and Reported Device Performance:

    The document doesn't explicitly state quantitative "acceptance criteria" for the MCPT in terms of specific sensitivity or specificity targets. Instead, the study's primary objective was to demonstrate that MCPT is equal to or better than the Amsler grid in detecting AMD-related retinal abnormalities. The reported performance compared the MCPT directly against the Amsler Grid.

    Metric | MCPT Performance | Amsler Grid Performance | P-Value (MCPT vs Amsler) | Conclusion (based on P-Value)
    Overall Sensitivity | 68.4% | 25.6% | 0.0000001 | MCPT significantly better
    Overall Specificity | 81.8% | 100% | (Implicitly worse) | Amsler grid better

    Note: While the overall P-value for sensitivity strongly favors MCPT, the P-value for Category 1 (healthy subjects) shows MCPT having significantly lower specificity compared to the Amsler Grid.
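    A P-value of this kind compares two proportions (the sensitivity of MCPT vs. the Amsler grid). The sketch below shows a two-sided, two-proportion z-test with pooled standard error; the counts are hypothetical, back-calculated only from the reported percentages and the 117 AMD subjects in Categories 2–5, and the study's actual statistical method is not stated in the document.

```python
import math

def two_prop_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test with pooled standard error.
    Returns (z, p_value)."""
    p1, p2 = x1 / n1, x2 / n2
    pool = (x1 + x2) / (n1 + n2)                       # pooled proportion
    se = math.sqrt(pool * (1 - pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))         # two-sided tail probability
    return z, p_value

# Hypothetical counts: 80/117 ≈ 68.4% (MCPT) vs 30/117 ≈ 25.6% (Amsler)
z, p = two_prop_z(80, 117, 30, 117)
```

    With proportions this far apart at this sample size, the test yields a vanishingly small p-value, consistent in magnitude with the very small P-value reported in the table.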

    2. Sample Size Used for the Test Set and Data Provenance:

    • Test Set Sample Size: The document does not explicitly state a total sample size for the test set. However, a breakdown of categories is provided:
      • Category 1 (Healthy): 33 subjects
      • Category 2 (AMD): 51 subjects
      • Category 3 (AMD): 20 subjects
      • Category 4 (AMD): 27 subjects
      • Category 5 (AMD): 19 subjects
      • Total subjects across all categories = 150 subjects.
    • Data Provenance: The study was a "prospective single blinded randomized clinical investigation conducted at multiple investigational sites." The document does not specify the country of origin of the data.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts:

    The document does not provide information on the number of experts used to establish ground truth or their qualifications. The categories are described as "individuals with defined age-related macular degeneration (AMD) at different stages, and the control subjects were those with normal healthy eyes," implying a clinical diagnosis as ground truth, but not the specific experts or their process.

    4. Adjudication Method for the Test Set:

    The document does not describe any specific adjudication method for the test set. The categories (AMD at different stages and healthy controls) were "defined," suggesting a pre-established clinical diagnosis, but the process of reaching that definition is not detailed.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

    Yes, a comparative effectiveness study was done. The study compared the MCPT to the Amsler Grid.

    • Effect Size of Human Reader Improvement with AI vs. without AI Assistance: This study compares an automated device (MCPT) against a traditional manual test (the Amsler grid); it is not a reader study evaluating human performance with and without AI assistance. Therefore, an effect size for human-reader improvement cannot be determined from this document. The study focuses on the standalone performance of the MCPT and the Amsler grid themselves.

    6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study:

    Yes, a standalone study was performed. The "MCPT" (Macular Computerized Psychophysical Test) is described as an "interactive software device" that "analyzes the image presentation in the process of identifying visual field abnormalities." The results presented for MCPT are its direct performance, acting as an algorithm-only assessment, which is then compared against the Amsler grid.

    7. Type of Ground Truth Used:

    The ground truth used was clinical diagnosis/classification of "defined age-related macular degeneration (AMD) at different stages" and "normal healthy eyes." This falls under clinical diagnosis rather than pathology or outcomes data specifically.

    8. Sample Size for the Training Set:

    The document does not mention a separate training set or its sample size. The clinical investigation described appears to be for validation/testing.

    9. How the Ground Truth for the Training Set Was Established:

    Since no training set is described, information on how its ground truth was established is not available in the provided document.

