
510(k) Data Aggregation

    K Number: K222484
    Device Name: Retitrack
    Date Cleared: 2023-05-09 (265 days)
    Product Code:
    Regulation Number: 886.1510
    Reference & Predicate Devices
    Why did this record match?
    510(k) Summary Text (full-text search) matched on: "Ophthalmoscope, Laser Scanning and Perimeter (MYC and HPT); 21 CFR 886.1570, Class II; 21 CFR 886.1605"

    Intended Use

    The Retitrack™ is intended for recording, viewing, and analyzing temporal characteristics of fixation and saccadic responses when viewing a visual stimulus. The Retitrack™ is intended for use by healthcare practitioners in healthcare settings (e.g., physician's office, clinic, laboratory).

    Device Description

    The Retitrack™ is a monocular, bench-top saccadometer that incorporates scanning laser ophthalmoscope (SLO) technology and eye tracking software to record, view, measure, and analyze eye motion. The Retitrack™ is comprised of an optical head containing an illumination system and an optical system; a base unit with a computer, electronics, and a power distribution system; connections for external input/output devices (e.g., monitor, keyboard, mouse, and storage media); a patient forehead and chin rest; and operational software.

    The Retitrack™ interacts with the patient by directing light from an infrared (840 nm) superluminescent diode (SLD) into the patient's eye. The only parts of the device that contact the patient are the forehead and chin rest with adjustable temple pads and an optional attachable head strap to stabilize the patient's head.

    The Retitrack™ uses the SLD light to scan the patient's retina in two dimensions while the patient is viewing a visual stimulus. The optical imaging system detects the reflected (or returned) light from the retina and creates high-resolution, digital retinal video sequences over time. The eye tracking software uses eye motion corrected frames to measure the translational retinal movement over time. The device displays the analysis of the eye motion results and saves the retinal video and a report. The Retitrack™ does not provide a diagnosis or treatment recommendation.
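The summary does not disclose the Retitrack's tracking algorithm; frame-to-frame translational retinal movement of the kind described above is commonly estimated with phase correlation, sketched below under that assumption. All names and the synthetic data are illustrative.

```python
import numpy as np

def estimate_shift(frame_a, frame_b):
    """Estimate the (row, col) translation of frame_a relative to
    frame_b via phase correlation (FFT-based image registration)."""
    cross_power = np.fft.fft2(frame_a) * np.conj(np.fft.fft2(frame_b))
    cross_power /= np.abs(cross_power) + 1e-12  # keep phase, drop magnitude
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Convert wrapped peak indices to signed shifts
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

# Synthetic "retinal frame" shifted by (3, -5) pixels with wrap-around
rng = np.random.default_rng(0)
frame = rng.random((64, 64))
shifted = np.roll(frame, shift=(3, -5), axis=(0, 1))
print(estimate_shift(shifted, frame))  # → (3, -5)
```

In practice the recovered shift per frame, accumulated over the video, yields the translational movement trace analyzed by the eye-tracking software.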

    The Retitrack™ has separate tests that measure fixation stability (including microsaccades and drift) and visually guided horizontal saccade tracking. The Retitrack™ can be programmed by the user with specific visual stimuli presentations, including a single fixed stimulus to measure fixation stability or two alternating stimuli in different orientations to measure horizontal saccades. For the fixation stability test, the Retitrack™ analyzes the fixation responses, including microsaccade amplitude, microsaccade frequency, microsaccade velocity, drift velocity, and drift ratio over time. For the saccade tracking tests, the Retitrack™ analyzes the saccadic responses, including duration, amplitude, target accuracy, latency, and velocity.
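The saccadic metrics listed above (duration, amplitude, velocity) are conventionally extracted from an eye-position trace with a velocity-threshold detector; the sketch below illustrates that generic approach, not the Retitrack's actual algorithm. The 30 deg/s threshold and the synthetic trace are illustrative assumptions.

```python
import numpy as np

def detect_saccades(position_deg, fs_hz, vel_thresh=30.0):
    """Flag saccade samples in a 1-D gaze trace (degrees) by a simple
    velocity threshold (deg/s) and report per-saccade metrics.
    The 30 deg/s default is illustrative, not a device specification."""
    velocity = np.gradient(position_deg) * fs_hz          # deg/s
    moving = np.abs(velocity) > vel_thresh
    # Find contiguous runs of supra-threshold samples
    edges = np.diff(moving.astype(int))
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    saccades = []
    for s, e in zip(starts, ends):
        saccades.append({
            "onset_s": s / fs_hz,                         # latency proxy
            "duration_ms": 1000.0 * (e - s) / fs_hz,
            "amplitude_deg": abs(position_deg[e] - position_deg[s]),
            "peak_velocity_dps": np.max(np.abs(velocity[s:e])),
        })
    return saccades

# Synthetic trace: fixation at 0°, then a 10° rightward saccade at t = 0.2 s
fs = 1000.0
t = np.arange(0, 0.5, 1 / fs)
pos = np.where(t < 0.2, 0.0, np.minimum((t - 0.2) * 200.0, 10.0))  # 200 deg/s ramp
saccs = detect_saccades(pos, fs)
print(len(saccs), round(saccs[0]["amplitude_deg"], 1))
```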

    AI/ML Overview

    The provided text describes the Retitrack™ device and its performance testing to demonstrate substantial equivalence to a predicate device. However, it does not explicitly state "acceptance criteria" in the format of a table or provide specific values for the device to meet. Instead, it describes various performance tests and their outcomes, implying that successful completion of these tests serves as the criteria for acceptance.

    Therefore, the following information is extracted and presented based on what is available in the text, and where specific acceptance criteria are not provided, the reported performance is described as the outcome of the validation.

    Acceptance Criteria and Device Performance

    Since explicit quantitative acceptance criteria for all aspects are not provided, the "Acceptance Criteria" column will describe the objective of the test, and the "Reported Device Performance" will detail the findings.

    Acceptance Criteria (Objective of Test) | Reported Device Performance
    Verify compliance with safety standards (e.g., IEC 60601-1, IEC 60601-1-2, IEC 60825-1, ANSI Z80.36) | The device demonstrated compliance with all listed standards, including IEC 60601-1:2005 + AMD1:2012 + AMD2:2020, IEC 60601-1-2:2014, IEC 60825-1:2014, and ANSI Z80.36-2021. It is classified as a Group 1 scanning instrument (light hazard ≤ 1.32 mW at the eye) and a Class 1 laser product.
    Software verification and validation (function, GUI, analysis algorithm, usability) | Software functions, the graphical user interface (GUI), the analysis algorithm, and usability were verified and validated with representative intended users in a simulated use environment. (No specific metrics provided; success implied.)
    Eye movement measurement accuracy and tracking performance (bench testing) | Accuracy and tracking performance were demonstrated. (No specific metrics provided; success implied.) The reported datasets comprised 200 videos for fixation stability and more than 300 videos for horizontal saccade tracking.
    1. Sample Size Used for the Test Set and Data Provenance:

      • Sample Size: The reported datasets comprised 200 videos for fixation stability and more than 300 videos for horizontal saccade tracking.
      • Data Provenance: The document does not specify the country of origin of the human-subject data, nor does it explicitly state whether the study was retrospective or prospective; the description of "human subjects" and of data "recorded... while pupil videos were recorded simultaneously" implies prospective data collection.
    2. Number of Experts Used to Establish Ground Truth for the Test Set, and Their Qualifications:

      • The document does not provide information on the number of experts used or their qualifications for establishing ground truth for the test set. The ground truth appears to be based on the device's ability to accurately measure expected responses, or on comparative analysis with another tracking method, rather than on expert consensus about a diagnosis or interpretation.
    3. Adjudication Method for the Test Set:

      • The document does not specify any adjudication method for the test set. The validation appears to rely on quantitative measurement comparisons and correlations rather than on subjective interpretations requiring adjudication.
    4. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

      • No. An MRMC comparative effectiveness study of human readers with versus without AI assistance was not done. The device is an "Eye Movement Monitor," and the reported studies focus on its measurement accuracy and equivalence to other tracking methods, not on assisting human interpretation of images or data.
    5. Standalone Performance (Algorithm Only, Without Human-in-the-Loop):

      • Yes, standalone performance was assessed. The device itself is an automated measurement tool. The performance tests described (e.g., "Fixation and saccade measurements were successfully measured for all subjects," "linear relationship... found between the expected response and the measured retinal response," "good agreement between the pupil and retinal tracking methods") refer to the algorithm's direct measurement capabilities, without human interpretation as part of the primary output.
    6. Type of Ground Truth Used:

      • The ground truth appears to be based on:
        • Expected responses: for saccade amplitude and velocity, the device's measurements were compared against the "expected response" (likely defined by the stimulus presented).
        • A comparative method: for retinal versus pupil tracking, the reference for comparison was the "pupil videos... processed with a standalone pupil tracking algorithm."
      • This is not the typical "expert consensus" or "pathology" ground truth seen for diagnostic imaging devices; it is an engineering and physiological measurement validation.
    7. Sample Size for the Training Set:

      • The document does not provide information on the sample size used for the training set.
    8. How the Ground Truth for the Training Set Was Established:

      • The document does not provide information on how the ground truth for an implied training set (if any was used in developing the analysis algorithm) was established.
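The "good agreement between the pupil and retinal tracking methods" quoted above is typically quantified with a correlation coefficient and Bland-Altman statistics; the submission does not state which statistics were used, so the following is a generic sketch on simulated traces.

```python
import numpy as np

def agreement_stats(trace_a, trace_b):
    """Pearson correlation plus Bland-Altman bias and 95% limits of
    agreement between two simultaneous eye-position traces."""
    a, b = np.asarray(trace_a), np.asarray(trace_b)
    r = np.corrcoef(a, b)[0, 1]
    diff = a - b
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)   # 95% limits of agreement
    return r, bias, (bias - half_width, bias + half_width)

# Simulated traces: a slow drift recorded by two methods with sensor noise
rng = np.random.default_rng(1)
retinal = np.cumsum(rng.normal(0, 0.05, 1000))       # retinal-tracking trace
pupil = retinal + rng.normal(0, 0.02, 1000)          # same motion, noisier sensor
r, bias, (lo, hi) = agreement_stats(retinal, pupil)
print(round(r, 3))
```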

    K Number: K153181
    Device Name: MAIA
    Manufacturer:
    Date Cleared: 2016-06-08 (218 days)
    Product Code:
    Regulation Number: 886.1570
    Reference & Predicate Devices
    Why did this record match?
    510(k) Summary Text (full-text search) matched on table fragments citing FDA Regulation Numbers 886.1605 and 886.1120 (reference and predicate device entries).

    Intended Use

    The Centervue MAIA is intended for:
    • measuring macular sensitivity,
    • measuring fixation stability and the locus of fixation,
    • providing infrared retinal imaging, and
    • aiding visual rehabilitation.
    It contains a reference database that is a quantitative tool for the comparison of macular sensitivity to a database of known normal subjects.

    Device Description

    A previous version of the CenterVue MAIA, a device for macular integrity assessment, was cleared by FDA under K133758 on 23 April 2014. The present submission relates to a revised version of the MAIA device in which the only difference from the device cleared under K133758 is in the software: a new function called "Fixation Training" (FT) has been introduced to aid visual rehabilitation of patients with unstable fixation. The FT is independent of the functions available in the device cleared under K133758 and does not interfere with or modify the original functions in any way. No other design changes are introduced by this revision to the MAIA device.
    The FT is intended for visual rehabilitation, to help Vision Rehabilitation Specialists train patients with unstable fixation to improve their fixation stability.
    An FT session consists of asking the patient to move his or her gaze according to the trainer's instructions and to an audible signal, so as to attempt fixation of the internal visual target using a specific retinal area identified by the trainer ahead of the training session. The center of this area is called the Fixation Training Target (FTT).
    During the FT session, the MAIA retinal tracker continuously determines the position of the fixation point and provides audible feedback to the patient in the form of pulses at a certain repetition frequency. The number of pulses per second (i.e., the repetition frequency) is inversely proportional to the distance between the patient's current fixation point and the FTT; when that distance falls below one degree, the sound becomes continuous. Optionally, before starting the FT session, operators can replace the continuous sound with an MP3 audio file.
    The MAIA device interacts with the patient by directing illumination into the patient's eye. The chin-rest and head-rest are the only parts of the device that contact the patient. The chin-rest includes a patient proximity sensor and is motorized for height adjustment. The biocompatibility of the patient-contacting materials, which are the same as those used in the previous version of the subject device (K133758), has been established.
    The MAIA device operates as a 'stand-alone' device and does not need to interface with other medical devices.
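The feedback rule described above (pulse rate inversely proportional to distance, continuous tone under one degree) can be sketched as follows; the `gain` constant is an illustrative assumption, not a documented MAIA parameter.

```python
def feedback_rate(distance_deg, gain=10.0, continuous_below_deg=1.0):
    """Map the distance (degrees) between the patient's fixation point
    and the FTT to an audible pulse rate (pulses/s): inversely
    proportional to distance, becoming a continuous tone below one
    degree. `gain` is an illustrative constant."""
    if distance_deg < continuous_below_deg:
        return float("inf")            # continuous tone
    return gain / distance_deg         # pulses per second

print(feedback_rate(5.0))   # → 2.0
print(feedback_rate(0.5))   # → inf
```

As the patient's fixation point approaches the FTT, the pulse rate rises smoothly until the tone becomes continuous, giving the patient real-time auditory guidance.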

    AI/ML Overview

    Here's an analysis of the provided text regarding the MAIA device, focusing on acceptance criteria and study details:

    The document is a 510(k) premarket notification for the CenterVue MAIA device, which is an ophthalmoscope and perimeter. The submission is for a revised version of a previously cleared device (K133758), with the only change being the introduction of a new "Fixation Training" (FT) software function.

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly present a table of acceptance criteria with corresponding performance results for the new Fixation Training (FT) software function. Instead, it relies on demonstrating that the new software does not alter the safety or effectiveness of the previously cleared device and meets relevant software and risk management standards.

    However, the comparison table (Table 1) provides performance specifications for various features of the MAIA device and its predicates. While not "acceptance criteria" in the sense of a numerical pass/fail for the specific new feature, these are the performance characteristics being maintained or compared.

    Here's a table based on the provided predicate comparison, focusing on relevant aspects mentioned:

    Feature/Criterion | PD1 (K133758) | PD2 (K061768) | Subject Device (MAIA with FT) | Comparison / Implied Acceptance
    Fixed hardware / existing functionality:
    Retinal imaging system | Line Scanning Ophthalmoscope | Fundus camera | Line Scanning Ophthalmoscope | Same as PD1; performance assumed maintained.
    Background luminance (perimetry) | 4 asb | 4 asb | 4 asb | Same as PD1 and PD2; performance assumed maintained.
    Stimuli size | Goldmann III | Goldmann I-V | Goldmann III | Same as PD1; performance assumed maintained.
    Minimum pupil size | 2.5 mm | 4.0 mm | 2.5 mm | Same as PD1; performance assumed maintained.
    Maximum luminance | 1000 asb | 400 asb | 1000 asb | Same as PD1; performance assumed maintained.
    Stimuli dynamic range | 36 dB | 20 dB | 36 dB | Same as PD1; performance assumed maintained.
    Imaging field | 36° x 36° | 45° circular (diameter) | 36° x 36° | Same as PD1; performance maintained.
    Imaging and tracking speed | 25 Hz | 25 Hz | 25 Hz | Same as PD1 and PD2; performance maintained.
    Imaging resolution | 1024 x 1024 | 768 x 576 | 1024 x 1024 | Same as PD1; performance maintained.
    Perimetry field | 30° x 30° | 40° circular (diameter) | 30° x 30° | Same as PD1; performance maintained.
    Perimetric grids | 10° macular, 6° macular, 10-2, customizable within field | Customizable within field | 10° macular, 6° macular, 10-2, customizable within field | Same as PD1; equivalent to PD2; performance maintained.
    Imaging wavelength for eye tracking | 850 nm | > 800 nm | 850 nm | Same as PD1; equivalent to PD2; performance maintained.
    New Fixation Training (FT) software:
    Means for identification of FTT | Not available | Manually, by eye practitioner using IR retinal image | Manually, by eye practitioner using IR retinal image | Same as PD2; implied acceptance of this method.
    Fixation stability indices | P1, P2 and BCEA | P1, P2 and BCEA | P1, P2 and BCEA | Same as PD1 and PD2; assumed still accurately calculated after FT use.
    Feedback to patient during FT | Not available | Repetition frequency of audible pulses | Repetition frequency of audible pulses | Same as PD2; implied acceptance of this feedback mechanism.
    Software standards compliance | IEC 60601-1:2005, IEC 60601-1-2:2007, ISO 12866:1999, ISO 15004-1:2006, ISO 15004-2:2007, ISO 14971:2007, ISO 62304:2006 | Not explicitly stated for FT | ISO 62304:2006, ISO 14971:2007 | Compliance with these software life-cycle and risk management standards for the FT function.
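The fixation stability index BCEA listed in the comparison is, in the standard literature definition, the area of the ellipse enclosing a given proportion of fixation points under a bivariate-normal assumption: BCEA(P) = -2 ln(1 - P) · π · σx · σy · √(1 - ρ²). The device's exact implementation is not described in the summary; the sketch below uses the textbook formula on simulated fixation data.

```python
import numpy as np

def bcea(x_deg, y_deg, p=0.682):
    """Bivariate Contour Ellipse Area (deg^2) enclosing proportion `p`
    of fixation points, assuming a bivariate normal distribution:
    BCEA = -2 ln(1 - p) * pi * sx * sy * sqrt(1 - rho^2)."""
    x, y = np.asarray(x_deg), np.asarray(y_deg)
    sx, sy = x.std(ddof=1), y.std(ddof=1)
    rho = np.corrcoef(x, y)[0, 1]
    chi2 = -2.0 * np.log(1.0 - p)          # chi-square quantile, 2 dof
    return chi2 * np.pi * sx * sy * np.sqrt(1.0 - rho ** 2)

# Simulated fixation scatter (degrees); smaller BCEA = steadier fixation
rng = np.random.default_rng(2)
fix_x = rng.normal(0, 0.5, 5000)
fix_y = rng.normal(0, 0.3, 5000)
print(round(bcea(fix_x, fix_y), 2))
```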

    The crucial "study" for the new FT function is its compliance with software development and risk management standards, rather than a clinical performance study with numerical criteria. The document states: "the Fixation Training software meets the requirements of: ISO 62304: 2006, ISO 14971: 2007." This is the primary demonstration of its acceptability.

    2. Sample Size Used for the Test Set and Data Provenance

    The document does not describe a specific clinical test set with a sample size for the new Fixation Training (FT) software. The submission focuses on the software's compliance with standards and its technological similarity to a feature in a secondary predicate device (Nidek MP1).

    The core of the submission is that the FT function is independent, does not interfere with existing functions, and no other design changes were made. Therefore, the detailed studies for the underlying hardware and existing functions (macular sensitivity, fixation stability measurement, retinal imaging) from the K133758 clearance are implicitly reused.

    There is no mention of country of origin for any new data, nor of whether such data were retrospective or prospective, as no new clinical data are presented for the FT feature.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    Not Applicable. Since no new clinical test set is described for the FT function, there is no mention of experts establishing a ground truth for such a set. The acceptance of the FT function hinges on its compliance with international software and risk management standards and its functionality being similar to an existing predicate device.

    4. Adjudication Method for the Test Set

    Not Applicable. No new clinical test set requiring adjudication is described for the FT function.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done

    No. The document does not describe an MRMC comparative effectiveness study for the new Fixation Training (FT) software. The submission is centered on substantial equivalence to predicate devices and software standard compliance, not on demonstrating improved human reader performance with or without AI assistance. The FT function itself is for patient rehabilitation, not for aiding human readers in diagnosis.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was Done

    The Fixation Training (FT) is a software feature that provides real-time feedback to the patient based on fixation point tracking. While the tracking itself is an algorithm, the FT functionality is described with the patient in the loop, acting on the audible feedback. The document states: "the MAIA retinal tracker continuously determines the position of the fixation point and provides an audible feedback to the patient." This is an algorithm-driven feature intended for use with a human (patient) in the loop (for visual rehabilitation). It is not a standalone diagnostic algorithm for interpretation by an expert.

    7. The Type of Ground Truth Used

    For the new Fixation Training (FT) software:

    • Software Design/Functionality: The "ground truth" for accepting the FT feature is its compliance with ISO 62304: 2006 (Medical device software – Software life cycle processes) and ISO 14971: 2007 (Medical devices – Application of risk management to medical devices). This represents a ground truth that the software is developed safely and effectively according to recognized standards.
    • Functional Equivalence: The comparison to the Nidek MP1 (PD2) for the visual rehabilitation/fixation training aspects also acts as a form of "ground truth" for functional equivalence, showing that similar technology is already marketed and cleared.

    For the core device functions (macular sensitivity, fixation stability measurement, retinal imaging), the ground truth for their original clearance (K133758) would have been established through clinical data, expert consensus, and comparison to other cleared devices, but these details are not provided in this document as it's a submission for an updated feature, not the initial clearance.

    8. The Sample Size for the Training Set

    Not Applicable / Not Provided. The document does not describe any machine learning or AI algorithm that would require a "training set" in the conventional sense for the new Fixation Training (FT) feature. The FT function appears to be based on deterministic algorithms for tracking and feedback, rather than a learned model.

    9. How the Ground Truth for the Training Set Was Established

    Not Applicable / Not Provided. As no training set is described, there's no mention of how its ground truth was established.


    K Number: K152729
    Manufacturer:
    Date Cleared: 2016-06-06 (258 days)
    Product Code:
    Regulation Number: 886.1120
    Reference & Predicate Devices
    Why did this record match?
    510(k) Summary Text (full-text search) matched on: "www.nidek.com"; "Date Prepared: September 4, 2015"; "Classification: 21 CFR § 886.1120, Class II; 21 CFR § 886.1605"

    Intended Use

    The Microperimeter MP-3 is indicated for use as: Color retinography; Fixation examiner; Fundus-related microperimetry.

    Device Description

    The Microperimeter MP-3 performs the following basic functions: Color retinography; Fixation examiner; Fundus-related microperimetry.

    AI/ML Overview

    The provided text does not contain detailed acceptance criteria or a study that proves the device meets specific acceptance criteria in the way typically expected for a medical device efficacy study (e.g., sensitivity, specificity, accuracy targets).

    Instead, the document is a 510(k) premarket notification summary for the Nidek Microperimeter MP-3, asserting substantial equivalence to a predicate device (MP-1 MICROPERIMETER). The "testing" referred to is primarily bench testing to demonstrate that the modified device (MP-3) meets its functional specifications and performance requirements, complies with applicable international standards for safety and electromagnetic compatibility, and performs "as well as" the predicate device.

    Here's an attempt to answer your questions based on the provided text, highlighting where information is not available:


    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state quantitative acceptance criteria or detailed performance metrics. It focuses on the device's functional integrity and compliance with safety standards, and equivalence in performance to the predicate device.

    Acceptance Criteria (Implied) | Reported Device Performance
    Functional specifications | Meets functional specifications for color retinography, fixation examination, and fundus-related microperimetry.
    Performance requirements | Meets performance requirements.
    Safety standards compliance | Complies with IEC 60601-1, IEC 60601-1-2, ISO 15004-1, ISO 15004-2, ISO 12866, and ISO 10940; in particular, light-hazard compliance with ISO 15004-1 and ISO 15004-2, plus voluntary compliance with ISO 12866.
    Equivalence to predicate | Performs "as well as" the predicate device (MP-1 MICROPERIMETER); minor differences (automatic alignment/focusing, broader background/stimulus luminance ranges) raise no new safety or efficacy issues.
    Intended use/indications | The intended use and indications for use (color retinography, fixation examination, fundus-related microperimetry) are unchanged.
    Fundamental scientific technology | The fundamental scientific technology is unaltered.
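The stimulus and background luminance ranges mentioned above are conventionally specified in apostilbs (asb), while perimetric stimulus intensity is expressed in decibels of attenuation, with L = L_max · 10^(-dB/10). The sketch below illustrates that standard conversion; the assumption that 0 dB corresponds to the instrument's maximum luminance is the usual convention, not a value taken from this summary.

```python
def stimulus_luminance_asb(attenuation_db, max_asb=1000.0):
    """Convert perimetric attenuation in dB to stimulus luminance in
    apostilbs: L = L_max * 10**(-dB/10). Assumes 0 dB equals the
    instrument's maximum luminance (illustrative convention)."""
    return max_asb * 10 ** (-attenuation_db / 10.0)

print(stimulus_luminance_asb(0))    # → 1000.0
print(round(stimulus_luminance_asb(36), 3))  # dimmest stimulus over a 36 dB range
```

A 36 dB dynamic range therefore spans roughly four orders of magnitude of luminance, which is why microperimeters quote attenuation in dB rather than raw luminance.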

    2. Sample size used for the test set and the data provenance

    The document refers to "bench testing" and "all necessary safety tests" and "all the necessary performance tests." It does not specify a sample size, test set, or data provenance (e.g., country of origin, retrospective/prospective clinical data). This suggests that the testing was likely internal engineering and quality assurance testing rather than a clinical study with human subjects.


    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    This information is not provided. Given that the testing mentioned is primarily "bench testing" and "functional specifications," it's unlikely that external experts were involved in establishing "ground truth" in a clinical sense.


    4. Adjudication method for the test set

    This information is not provided.


    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    There is no indication of an MRMC comparative effectiveness study, AI assistance, or human reader improvement in the provided text. The device described does not appear to be an AI-driven diagnostic aid that would typically involve such a study design. It's a diagnostic instrument for acquiring images and microperimetry data.


    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    This concept is not applicable to the device described. The Microperimeter MP-3 is an instrument operated directly by a human. The "automatic alignment and focusing" mentioned are features of the device's operation, not a standalone AI algorithm generating interpretations.


    7. The type of ground truth used

    The concept of "ground truth" in a clinical diagnostic sense (e.g., pathology, outcomes data) is not explicitly addressed. The testing focused on verifying the device's functional integrity, compliance with technical standards, and performance against its own specifications and the predicate device's established performance. For example, light hazard compliance would be against ISO standards, and image acquisition would be verified against internal specifications for image quality.


    8. The sample size for the training set

    This information is not applicable as the device is not described as having an AI component that would require a "training set."


    9. How the ground truth for the training set was established

    This information is not applicable for the same reason as above.


    K Number: K150320
    Device Name: COMPASS
    Manufacturer:
    Date Cleared: 2015-06-30 (141 days)
    Product Code:
    Regulation Number: 886.1570
    Reference & Predicate Devices
    Why did this record match?
    510(k) Summary Text (full-text search) matched on table fragments citing "FDA Regulation Number: 21 CFR 886.1570; 886.1605" and "Classification Name: Perimeter, Automatic, AC-powered" (Regulation Number 886.1605).

    Intended Use

    The CenterVue COMPASS is intended for taking digital images of a human retina without the use of a mydriatic agent and for measuring retinal sensitivity, fixation stability and the locus of fixation. It contains a reference database that is a quantitative tool for the comparison of retinal sensitivity to a database of known normal subjects.

    Device Description

    The CenterVue COMPASS is a scanning ophthalmoscope combined with an automatic perimeter that allows the acquisition of images of the retina, as well as the measurement of retinal threshold sensitivity and the analysis of fixation. The device works with a dedicated software application, operates as a standalone unit, integrates a dedicated tablet, a joystick, a push-button and is provided with an external power supply. COMPASS operates in non-mydriatic conditions, i.e. without the need of pharmacological dilation and is intended for prescription use only.

    The Centervue COMPASS device operates on the following principles:

    • An anterior segment alignment system is included, which uses two infrared LEDs with a centroid wavelength of 940 nm and two cameras; the LEDs illuminate the external eye by diffusion, while the cameras allow a stereoscopic reconstruction of the pupil's position, used for automated alignment via pupil tracking;
    • An infrared imaging system captures live monochromatic images of the central retina over a circular field of view of 60° in diameter, using a horizontal line from an infrared LED with a centroid wavelength of 850 nm and an oscillating mirror that scans the line to uniformly illuminate the retina; these images are in turn used for auto-focusing and for tracking eye movements, providing a measure of the patient's fixation characteristics and allowing active compensation of the position of perimetric stimuli;
    • A concurrent color imaging system allows the capture of color images of the central retina over a circular field of view of 60° in diameter, using a white LED and a blue LED combined to obtain white light that illuminates the retina via the same scan mechanism;
    • A fixation system projects onto the retina a fixation target obtained from a green LED;
    • A stimuli projector projects onto the retina white-light Goldmann stimuli at variable intensity, allowing measurement of threshold sensitivity at multiple locations according to the patient's subjective response to the light stimulus projected at a given location.

    The COMPASS device interacts with the patient by directing infrared, white, blue and green wavelength illumination into the patient's eye and by recording a patient's confirmation that a certain light stimulus has been perceived or not.
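Threshold sensitivity estimation from subjective yes/no responses, as described above, is commonly implemented with a 4-2 dB staircase: step toward dimmer stimuli while the patient responds, shrink the step after each reversal. The COMPASS's actual thresholding strategy is not detailed in the summary, so the following is a generic sketch of that classic strategy; the starting level and simulated patient are illustrative.

```python
def staircase_threshold(sees_stimulus, start_db=25, steps=(4, 2)):
    """4-2 dB staircase sketch. `sees_stimulus(level)` returns True if
    the patient perceives a stimulus at `level` dB of attenuation
    (higher dB = dimmer). Step 4 dB until the first reversal, then
    2 dB until the second; return the dimmest level seen."""
    level = start_db
    step_idx = 0
    last_response = sees_stimulus(level)
    last_seen = level if last_response else None
    while step_idx < len(steps):
        # Seen -> go dimmer (more attenuation); not seen -> go brighter
        level += steps[step_idx] if last_response else -steps[step_idx]
        response = sees_stimulus(level)
        if response:
            last_seen = level
        if response != last_response:   # reversal: shrink the step
            step_idx += 1
        last_response = response
    return last_seen

# Simulated patient who sees any stimulus at or below 30 dB attenuation
result = staircase_threshold(lambda db: db <= 30)
print(result)
```

The final estimate converges to within the smallest step size of the simulated threshold, which is the usual precision/test-time trade-off of staircase perimetry.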

    AI/ML Overview

    Here's an analysis of the acceptance criteria and the supporting study for the CenterVue COMPASS device, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The FDA clearance letter (K150320) primarily focuses on establishing "substantial equivalence" to predicate devices, rather than explicit numerical acceptance criteria for clinical performance that might be found in a performance goal document for a novel device. However, the clinical study serves to demonstrate this equivalence. The key performance comparison is between the CenterVue COMPASS and the Humphrey HFA-II.

    Acceptance Criteria (Implied for Substantial Equivalence to HFA-II) | Reported Device Performance (CenterVue COMPASS)
    Equivalence in retinal threshold sensitivity measurements for both normal and pathological subjects compared to the Humphrey HFA-II | Mean threshold differences between COMPASS and HFA-II in both subject groups (normal and pathological) were equivalent to those reported for the Humphrey HFA between the SITA Standard and full-threshold strategies.
    No significant adverse events during clinical testing | No adverse event was reported during the study.

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size:
      • 200 normal subjects
      • 120 subjects with pathology affecting the visual field (specifically glaucoma)
      • Total: 320 subjects
    • Data Provenance: The document does not explicitly state the country of origin. It indicates the manufacturer is in Padova, Italy, and the study was conducted to support FDA clearance in the USA, suggesting the study likely occurred in conjunction with the manufacturer's operations or clinical sites. The study is presented as prospective clinical testing ("Measurements have been obtained...").

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of those Experts

    The document does not specify the number of experts or their qualifications for establishing the "ground truth" for the test set (i.e., whether subjects were truly "normal" or had "glaucoma"). It only states that subjects were categorized as "normal" or with "pathology affecting the visual field (in particular glaucoma)." This implies a clinical diagnosis was used, but the specific process or number of experts for this diagnosis is not detailed.

    4. Adjudication Method for the Test Set

    The document does not describe an adjudication method for the test set in terms of expert review or consensus. The study compares the performance of the COMPASS directly to the predicate device (Humphrey HFA-II) on the same subjects, rather than assessing the COMPASS's ability to classify against a pre-established ground truth determined by multiple experts.

    5. Whether a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done and, If So, the Effect Size of Human Reader Improvement with AI vs. without AI Assistance

    No, an MRMC comparative effectiveness study was not done. This device is primarily a diagnostic instrument for measuring retinal sensitivity and imaging, not an AI-assisted diagnostic aid for interpretation by human readers. The clinical study compares the device's measurements to another device, not human performance.

    6. Whether a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Study Was Done

    Yes, the clinical study presents data on the standalone performance of the CenterVue COMPASS device in measuring retinal threshold sensitivity. It directly compares the measurements obtained by the COMPASS to those obtained by the Humphrey HFA-II. The device operates as a standalone unit for acquiring images and measuring retinal sensitivity.

    7. The Type of Ground Truth Used

    The "ground truth" in this context is the measurement of retinal threshold sensitivity as determined by the accepted standard, the Humphrey HFA-II. The study aims to demonstrate that the COMPASS's measurements are "equivalent" to those of the HFA-II, specifically that the mean differences in thresholds are comparable to known differences within the HFA-II platform (SITA Standard vs. full threshold). The classification of subjects as "normal" or with "glaucoma" would have been based on clinical diagnosis, implicitly serving as a form of "expert consensus" or "clinical diagnosis" ground truth for subject selection, but not for the specific performance metric being evaluated (threshold sensitivity differences).

    8. The Sample Size for the Training Set

    The document describes a "reference database" that was developed to serve as a quantitative tool for comparison of retinal sensitivity to known normal subjects.

    • Reference Database Sample Size: 200 eyes of 200 normal subjects.
    • The age range of this population was 20 - 86 years (mean 50.6 ± 15.2 years).

    9. How the Ground Truth for the Training Set was Established

    The ground truth for the "training set" (referred to as the "reference database" in the document) was established by obtaining threshold sensitivity data from 200 subjects confirmed to be "normal." The specific criteria or expert qualifications for determining "normalcy" are not detailed in this summary, but it implies a clinical assessment of individuals free from visual field pathology. The perimetric settings used to gather this data are listed (24-2 grid, 4-2 strategy, Goldmann III stimulus, etc.).
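    A reference database of this kind is typically used to flag measured sensitivities that fall below the normal range. A minimal sketch of that idea, with a hypothetical percentile cutoff and invented normative values (the document does not describe the device's actual comparison algorithm):

    ```python
    import statistics

    def flag_low_sensitivity(value_db, normal_values_db, percentile=5):
        """Flag a threshold sensitivity that falls below the given
        percentile of a normative distribution (rough sketch)."""
        cutoff = statistics.quantiles(normal_values_db, n=100)[percentile - 1]
        return value_db < cutoff, cutoff

    # Hypothetical normative thresholds (dB), evenly spread 28.0-32.95 dB
    normals = [28.0 + 0.05 * i for i in range(100)]
    low, cutoff = flag_low_sensitivity(24.0, normals)
    ```

    In practice, normative perimetry databases are stratified (e.g., by age and test location) rather than pooled as in this toy example.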


    K Number
    K143211
    Date Cleared
    2015-03-20

    (130 days)

    Product Code
    Regulation Number
    886.1330
    Reference & Predicate Devices
    Why did this record match?
    510k Summary Text (Full-text Search) :

    Model 0003 supplied by Vital Art and Science Incorporated; Perimeter, Automatic, AC-Powered, 21 CFR 886.1605

    Intended Use

    The myVisionTrack® Model 0005 is intended for the detection of central 3 degrees metamorphopsia (visual distortion) in patients with maculopathy, including age-related macular degeneration and diabetic retinopathy, and as an aid in monitoring progression of disease factors causing metamorphopsia. It is intended to be used by patients who have the capability to regularly perform a simple self-test at home. The myVisionTrack® Model 0005 is not intended to diagnose; diagnosis is the responsibility of the prescribing eye-care professional.

    Device Description

    The myVisionTrack® Model 0005 is a vision function test provided as a downloadable app on to the user's supplied cell phone or tablet. The myVisionTrack® Model 0005 implements a shape discrimination hyperacuity (SDH) vision test which allows patients to perform their own vision test at home. If a significant worsening of vision function is detected the physician will be notified and provided access to the vision self-test results so that they can decide whether the patient needs to be seen sooner than their next already scheduled appointment.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study information for the myVisionTrack® Model 0005, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided document primarily focuses on demonstrating substantial equivalence to a predicate device (myVisionTrack® Model 0003) rather than defining explicit, quantitative acceptance criteria for the new device. However, based on the comparative study, we can infer the performance expectation.

    Acceptance Criterion (Inferred): myVisionTrack® Model 0005 performance not significantly different from myVisionTrack® Model 0003 performance.
    Reported Performance: Cross-sectional study concluded that the performance of Model 0005 (4AFC) is not significantly different from Model 0003 (3AFC).

    Acceptance Criterion (Inferred): Test variability across different device platforms (iPod Touch, iPad Air, iPhone 6+) should be comparable to or smaller than the inherent mVT™ test variability over time (0.10 logRM).
    Reported Performance: Test variability across different devices was comparable to or smaller than 0.10 logRM. Mean results for iPod Touch, iPad Air, and iPhone 6+ were -2.11 logRM, -2.07 logRM, and -2.07 logRM, respectively (F=0.047, p>0.95), indicating no significant difference.

    Acceptance Criterion (Inferred): Self-test usability: users should effectively self-test and find the device user-friendly.
    Reported Performance: 100% of participants completed the self-test with a training demo. 90% met the criteria for completing without issues. 90% understood accessing the "More" screen. 80% completed self-test in
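    The F statistic quoted above (F=0.047, p>0.95) is the output of a one-way ANOVA across device platforms. A minimal sketch of that computation, with invented logRM scores rather than the study's data:

    ```python
    from statistics import mean

    def one_way_f(groups):
        """One-way ANOVA F statistic across groups (sketch only)."""
        k = len(groups)
        n_total = sum(len(g) for g in groups)
        grand = mean([x for g in groups for x in g])
        ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
        ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
        return (ss_between / (k - 1)) / (ss_within / (n_total - k))

    # Hypothetical logRM scores on three device platforms, three sessions each
    scores = [
        [-2.10, -2.00, -2.20],   # e.g. iPod Touch
        [-2.10, -2.10, -2.00],   # e.g. iPad Air
        [-2.00, -2.20, -2.10],   # e.g. iPhone 6+
    ]
    F = one_way_f(scores)
    ```

    An F near zero, as in the study, indicates that between-platform variation is small relative to session-to-session variation on the same platform.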

    K Number
    K133758
    Manufacturer
    Date Cleared
    2014-04-23

    (133 days)

    Product Code
    Regulation Number
    886.1570
    Reference & Predicate Devices
    Why did this record match?
    510k Summary Text (Full-text Search) :

    FDA Regulation Number 1: 21 CFR 886.1605

    Intended Use

    The Macular Integrity Assessment (MAIA™) is intended for measuring macular sensitivity, fixation stability and the locus of fixation, as well as providing infrared retinal imaging. It contains a reference database that is a quantitative tool for the comparison of macular sensitivity to a database of known normal subjects.

    Device Description

    MAIA™ integrates in one device an automated perimeter and an ophthalmoscope, providing:

    • images of the central retina over a field of view of 36° x 36°, acquired under infrared illumination and a confocal imaging set-up;
    • recordings of eye movements obtained by "tracking" retinal details in the live retinal video, acquired at 25 fps, providing a measure of a patient's fixation capabilities;
    • measurements of differential light sensitivity (or threshold sensitivity) at multiple locations in the macula, obtained as in fundus perimetry by recording a patient's subjective response (see / do not see) to a light stimulus projected at a certain location on the retina.

    MAIA™ works with no pupil dilation (non-mydriatic). MAIA™ integrates a computer for control and data processing and a touch-screen display, and it is provided with a power cord and a push-button. MAIA™ works with a dedicated software application running on a custom Linux O.S.

    MAIA™ is composed of:

    1. An optical head;
    2. A chin-rest and head-rest;
    3. A base, including a touch-screen display.

    The optical head comprises:

    1. An infrared source at 845 nm (SLD);
    2. A line-scanning confocal imaging system of the retina. The line, generated by means of an anamorphic lens, is scanned on the retina while the back-reflected light is de-scanned and revealed by a linear CCD sensor;
    3. A projection system comprising visible LEDs to generate Goldmann stimuli and background at controlled luminance values;
    4. A fixation target in the shape of a red circle (two different dimensions available);
    5. An auto-focus system.

    The base of the MAIA™ includes:

    1. A 3-axis robot that moves the optical head;
    2. An embedded PC that hosts the control software and related interface ports;
    3. The power supply.
    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided text:

    Acceptance Criteria and Device Performance

    The provided document doesn't explicitly state "acceptance criteria" in a quantitative performance metric table. Instead, it details a precision study to demonstrate the consistency and reliability of the device's measurements for macular sensitivity. The study's results are presented as "Precision results:" and "Individual Grid Point Results:". These values represent the device's demonstrated performance in terms of repeatability and reproducibility.

    Table of Acceptance Criteria (Implied by Precision Study) and Reported Device Performance

    Performance Metric | Implied Acceptance Criteria (Typically "Good" or "Acceptable" Precision) | Normal Subjects (Overall Mean) | Pathology Subjects (Overall Mean)
    Overall Mean Sensitivity | N/A (baseline for comparison) | 29.7 dB | 23.5 dB
    Overall Std Deviation | N/A (variety of subjects) | 1.14 dB | 4.23 dB
    Repeatability SD* | Low (indicating consistent results within a session) | 0.42 dB | 0.75 dB
    Reproducibility SD** | Low (indicating consistent results across different operators/devices) | 0.96 dB | 0.75 dB

    Table of Individual Grid Point Results (Implied Acceptance Criteria and Reported Device Performance)

    Group | Parameter | Implied Acceptance Criteria (Typically "Good" or "Acceptable" Precision) | Repeatability SD | Reproducibility SD
    Normal | Minimum | Low | 0.94 | 1.06
    Normal | Median | Low | 1.40 | 1.80
    Normal | Maximum | Low (e.g., typically expected to be within a certain range for diagnostic utility) | 2.43 | 2.70
    Pathology | Minimum | Low | 1.33 | 1.33
    Pathology | Median | Low | 2.36 | 2.43
    Pathology | Maximum | Low (e.g., typically expected to be within a certain range for diagnostic utility) | 3.16 | 3.24

    * Repeatability SD: Estimate of the standard deviation among measurements taken on the same subject using the same operator and device in the same testing session with repositioning.
    ** Reproducibility SD: Estimate of the standard deviation among measurements taken on the same subject using different operators and devices, including repeatability.

    The document implicitly suggests that these precision values are acceptable and demonstrate that the device performs consistently, which is a key aspect of meeting its intended purpose for measuring macular sensitivity.
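    Repeatability and reproducibility SDs of the kind tabulated above are conventionally estimated from a variance-components decomposition of replicate measurements. A minimal balanced-design sketch using a one-way random-effects layout and invented data (not the study's actual data or its exact statistical method):

    ```python
    from statistics import mean

    def precision_sds(groups):
        """Repeatability and reproducibility SDs from replicate measurements
        grouped by operator/device condition (balanced one-way
        random-effects model; sketch only)."""
        n = len(groups[0])            # replicates per condition
        k = len(groups)               # number of conditions
        means = [mean(g) for g in groups]
        grand = mean(means)
        # Within-condition (repeatability) variance, pooled across conditions.
        s2_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g) / (k * (n - 1))
        # Between-condition variance component (floored at zero).
        s2_means = sum((m - grand) ** 2 for m in means) / (k - 1)
        s2_between = max(s2_means - s2_within / n, 0.0)
        sr = s2_within ** 0.5                 # repeatability SD
        sR = (s2_within + s2_between) ** 0.5  # reproducibility SD
        return sr, sR

    # Hypothetical sensitivities (dB): 3 replicates under 2 operator/device setups
    sr, sR = precision_sds([[29.5, 29.9, 29.7], [30.4, 30.8, 30.6]])
    ```

    By construction the reproducibility SD is at least as large as the repeatability SD, which matches the study's definition of reproducibility as "including repeatability."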


    Study Details

    1. Sample size used for the test set and the data provenance:

    • Test Set Sample Size: 24 subjects (12 with normal eyes and 12 with retinal pathologies). Each subject was tested on one eye only.
      • Each subject/eye was tested 3 times within a session.
    • Data Provenance: The subjects were enrolled at two different clinical sites. The document doesn't specify countries, but the manufacturer is based in Italy. The study appears to be prospective for the purpose of this precision testing.

    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Number of Experts: One "ophthalmologist" per site (implied by "Diagnosis of retinal pathology was made by a complete eye examination by an ophthalmologist"). The total number of ophthalmologists across the two sites is not explicitly stated but would be at least two (one per site).
    • Qualifications of Experts: Ophthalmologists. No specific years of experience are provided, but they conducted a "complete eye examination" including "dilated funduscopic examination and pertinent history."

    3. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

    • The document does not describe an adjudication method for the diagnoses of pathology. It states that "Diagnosis of retinal pathology was made by a complete eye examination by an ophthalmologist". This suggests a single expert's diagnosis was used to classify subjects into normal or pathology groups, rather than a consensus or adjudication process for the test set.

    4. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with AI vs. without AI assistance:

    • No, a multi-reader multi-case (MRMC) comparative effectiveness study comparing human readers with and without AI assistance was not done. This submission is for a perimetry device that measures visual function, not an AI diagnostic tool that assists human readers in interpreting images. The closest related activity is the precision study of the device itself.

    5. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done:

    • The device performs automated perimetry, which involves a patient's subjective response ("subjective response (see / do not see) to a light stimulus"). Therefore, it's not a purely standalone algorithm without human-in-the-loop in the context of interpretation, but the measurements themselves are automated. The precision study evaluates the standalone performance of the device's measurement capabilities without human interpretation variability being a primary variable (though operator influence is considered in reproducibility).

    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • For classifying subjects into "normal" or "pathology" groups for the precision study, the ground truth was based on a clinical diagnosis by an ophthalmologist, specifically a "complete eye examination by an ophthalmologist, including dilated funduscopic examination and pertinent history."
    • For the data collected within the study itself (macular sensitivity measurements), the device produces its own quantitative data which is then assessed for precision rather than against an external ground truth for each specific measurement.

    7. The sample size for the training set:

    While not explicitly called a "training set" for an AI model, the document refers to a "Reference Database" which is analogous to a training or normalisation set.

    • Reference Database Sample Size: 494 eyes of 270 normal subjects.

    8. How the ground truth for the training set was established:

    • For the "Reference Database" (normal subjects), the ground truth was established by defining "normal subjects" from whom threshold sensitivity data was obtained. The criteria for being considered "normal" are not explicitly detailed beyond being "normal subjects," but typically this implies healthy individuals without ocular pathology. This data was used to create a "reference database that is a quantitative tool for the comparison of macular sensitivity to a database of known normal subjects."

    K Number
    K130648
    Date Cleared
    2013-07-23

    (134 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    510k Summary Text (Full-text Search) :

    Classification: 21 CFR 892.2050; 21 CFR 886.1605

    Intended Use

    FORUM Glaucoma Workplace is a FORUM software application intended for the management, display, and analysis of visual field data. The FORUM Glaucoma Workplace is indicated as an aid to the detection, measurement, and management of progression of visual field loss.

    Device Description

    FORUM Glaucoma Workplace is a FORUM software application that provides a means to review and analyze data from various visual field examinations to identify statistically significant and progressive visual field loss. FORUM Glaucoma Workplace utilizes Humphrey® Field Analyzer (HFA) algorithms and databases including STATPAC and Guided Progression Analysis (GPA) to process visual field data and generate visual field reports. GPA compares visual field test results of follow-up tests to an established baseline over time and determines if there is statistically significant change.

    The following are the main functionalities of FORUM Glaucoma Workplace:

    • Data retrieval and report storage
    • Managing, analyzing, and displaying visual field exams
    • Creation of visual field reports

    FORUM Glaucoma Workplace retrieves HFA visual field test data from the FORUM Archive, uses the HFA algorithms and databases to process the visual field raw data, then generates and displays visual field reports. The reports generated by FORUM Glaucoma Workplace are stored as DICOM Encapsulated PDFs in the FORUM Archive.
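    In spirit, GPA's comparison of follow-up exams against an established baseline can be illustrated with a toy point-wise change check. The 3 dB drop and three-point rule below are invented for illustration; the actual STATPAC/GPA statistics are proprietary and far more involved:

    ```python
    def flag_progression(baseline, followup, drop_db=3.0, min_points=3):
        """Toy point-wise progression check: find test points whose
        sensitivity fell by at least `drop_db` dB from baseline and flag
        possible progression when `min_points` or more did.
        Illustrative only -- not the proprietary GPA/STATPAC statistics."""
        worsened = [i for i, (b, f) in enumerate(zip(baseline, followup))
                    if b - f >= drop_db]
        return len(worsened) >= min_points, worsened

    # Hypothetical point sensitivities (dB) at six test locations
    baseline = [30, 29, 31, 28, 30, 27]
    followup = [26, 29, 27, 28, 25, 27]
    flagged, points = flag_progression(baseline, followup)
    ```

    Real GPA additionally accounts for expected test-retest variability at each location and distinguishes "Possible" from "Likely" progression across consecutive follow-ups.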

    AI/ML Overview

    Here's an analysis of the acceptance criteria and study information for the FORUM Glaucoma Workplace device, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided text does not explicitly state quantitative (e.g., sensitivity, specificity, accuracy) acceptance criteria with numerical targets. Instead, the acceptance criteria are generally framed around demonstrating functional equivalence to predicate devices and adherence to design specifications. The performance testing focused on verifying that the device performs as intended and that its generated reports are equivalent to those from predicate devices.

    Criterion (implicit): Functional Equivalence - Management, display, and analysis of visual field data.
    Performance: FORUM Glaucoma Workplace provides management, analysis, and display of visual field exams, creating reports. It utilizes Humphrey® Field Analyzer (HFA) algorithms and databases (STATPAC and GPA) to process visual field data and generate reports. It retrieves HFA visual field test data, processes it, generates and displays reports. The reports contain the same information as previously provided on the HFA instrument and utilize the same algorithms and databases.

    Criterion (implicit): Functional Equivalence - Detection, measurement, and management of progression of visual field loss (GPA functionality).
    Performance: FORUM Glaucoma Workplace contains the same GPA algorithms and databases as offered in the Humphrey® Field Analyzer II and II-i. It enables progression analyses, compares follow-up test results to baselines, and determines statistically significant change, providing "Possible Progression" or "Likely Progression" messages consistent with the predicate. It offers the same GPA report types (Full GPA, GPA Summary, GPA Last Three Follow-up, SFA GPA).

    Criterion (implicit): Report Equivalence - Generated visual field reports (Single Field Analysis, Overview, GPA) match those of predicate devices.
    Performance: The visual field reports (Single Field Analysis, Overview, and Guided Progression Analysis) generated on the HFA II-i were compared to the reports generated by FORUM Glaucoma Workplace using the same test data to verify that the results contained in both reports were equivalent. This comparison was successful.

    Criterion (implicit): Software Performance - Reliability, stability, and proper operation across supported operating systems.
    Performance: Verification and validation activities, including tests accompanying development (code inspections), module and integration testing, and system verification, were performed. The client and server operating systems were evaluated. Results determined suitability for various Windows client and server operating systems, including Windows XP, Windows 7, Windows Server 2003, and Windows Server 2008 R2. "Verification and validation activities were successfully completed and prove that the product FORUM Glaucoma Workplace meets its requirements and performs as intended."

    Criterion (implicit): Usability/Clinical Functionality - Meets user requirements in a clinical environment.
    Performance: Validation of clinical functionalities was completed by ophthalmologists using the software with representative data and executing test cases simulating clinical use. They completed questionnaires rating various aspects of the software. (No specific rating results are provided, but the overall conclusion indicates successful completion.)

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Description: The verification testing involved comparing visual field reports generated on the HFA II-i to reports generated by FORUM Glaucoma Workplace using the same test data. Clinical functionality validation used "representative data (sample data that is representative of true clinical cases)."
    • Sample Size for Test Set: Not explicitly stated. The document mentions "the same test data" for report comparison and "representative data" for clinical validation, but no specific number of cases or patients is provided for either.
    • Data Provenance:
      • For the report comparison: The data originated from the HFA II-i, which is a predicate device.
      • For clinical functionality validation: "representative data (sample data that is representative of true clinical cases)". The country of origin of this specific data is not stated or implied. However, the validation participants were ophthalmologists in "two countries."
      • Retrospective or Prospective: Not explicitly stated, but the mention of "same test data" and "representative data" suggests it was likely retrospective (pre-existing data).

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications

    • Number of Experts: Unclear how many individual ophthalmologists participated in the clinical functionality validation. It states "ophthalmologists in two countries."
    • Qualifications of Experts: Ophthalmologists. No specific experience level (e.g., "10 years of experience") is provided.

    4. Adjudication Method for the Test Set

    • Adjudication Method: Not explicitly described. For the report comparison, it states "to verify that the results contained in both reports were equivalent," implying direct comparison. For clinical validation, ophthalmologists "completed questionnaires rating the various aspects of the software," which doesn't suggest a formal adjudication process for establishing a ground truth.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was an MRMC study done? No. The submission describes functional equivalence testing and clinical validation, but not a comparative effectiveness study designed to measure human reader performance with and without AI assistance.
    • Effect Size: Not applicable, as no MRMC study was conducted.

    6. Standalone Performance Study (Algorithm Only Without Human-in-the-Loop Performance)

    • Was a standalone study done? Yes, implicitly. The core of the "Performance Data" section describes verification that the FORUM Glaucoma Workplace's algorithms and data processing produce results "equivalent" to the predicate HFA II-i algorithms. This involved comparing the outputs of the software (reports) directly against the predicate device's outputs using the "same test data." This is a form of standalone performance assessment, as it focuses on the algorithm's output matching a known, accepted standard.

    7. Type of Ground Truth Used

    • The ground truth for the comparison of reports and algorithms was based on the outputs and accepted performance of predicate devices (Humphrey® Field Analyzer II and II-i, and their GPA/STATPAC algorithms). Essentially, the "ground truth" was established by the existing, legally marketed and deemed safe/effective predicate technologies.

    8. Sample Size for the Training Set

    • Not applicable / Not explicitly stated. The FORUM Glaucoma Workplace primarily implements existing, validated HFA algorithms (STATPAC and GPA). The text does not describe a new machine learning algorithm that would require a distinct "training set" in the conventional sense of AI/ML development. It leverages established algorithms and databases.

    9. How the Ground Truth for the Training Set Was Established

    • Not applicable / Not explicitly stated. As there's no mention of a new machine learning model being trained by the applicant, the concept of a "training set ground truth" is not relevant in this submission. The algorithms themselves (STATPAC, GPA) were developed and validated years prior by the manufacturer of the Humphrey Field Analyzer.

    K Number
    K121043
    Manufacturer
    Date Cleared
    2013-02-22

    (323 days)

    Product Code
    Regulation Number
    886.1570
    Reference & Predicate Devices
    Why did this record match?
    510k Summary Text (Full-text Search) :

    Classification Name: Optical coherence tomography and microperimeter (21 C.F.R. § 886.1570 & § 886.1605)

    Intended Use

    The Optos spectral OCT/SLO with Microperimetry is indicated for use for in vivo viewing, axial cross section, and three dimensional imaging and measurement of posterior ocular structures including retina, macula, retinal nerve fiber layer and optic disk. It is used as a diagnostic device to aid in the detection and management of ocular diseases affecting the posterior of the eye. In addition, cornea sclera and conjunctiva can be imaged with the system by changing the focal position.

    The additional microperimetry functionality is indicated for use as a fixation examiner locating the patient's fixation site and by using the patient's subjective answer to light stimuli, generating a sensitivity map of the inspected retinal region.

    Device Description

    The Optos OCT/SLO Microperimeter is a non-contact, non-invasive, high-resolution device that is an add-on module to the FDA-cleared Optos Optical Coherence Tomography/Scanning Laser Ophthalmoscope (OCT/SLO). The OCT/SLO is a computerized instrument that employs non-invasive, low-coherence interferometry to acquire simultaneous high-resolution cross-sectional OCT and confocal images of ocular structure, including retinal nerve fiber layer, macula and optic disc of the eye. The light source used for this is a super luminescent diode (SLD).

    The microperimetry test runs simultaneously with the confocal ophthalmoscope (SLO) and provides real-time tracking of retinal motion and patient fixation during the exam. Additionally, the patient's subjective response by depressing a button to light stimuli generates a sensitivity map for the inspected retinal region. The variable light stimuli are generated by an organic light emitting diode (OLED).

    AI/ML Overview

    The provided document describes two clinical evaluations for the Optos Spectral OCT/SLO Microperimeter device: a Precision Study and an Agreement Study.

    Here's an analysis of the acceptance criteria and the studies that prove the device meets them:

    1. Table of Acceptance Criteria and Reported Device Performance:

    The document implicitly defines acceptance criteria through the provided "Repeatability SD Limit*" which is derived from ISO 5725-1 and ISO 5725-6. The "Agreement study" also describes an acceptable range of agreement with the predicate device.

    Repeatability SD (Normal):
    • Acceptance criterion (implicit): Repeatability SD ≤ 1.49 dB (upper 95% limit for the difference between repeated results, based on 2.8 × Repeatability SD)
    • Precision study result: 0.531 dB
    • Agreement study: not applicable

    Repeatability SD (Pathology):
    • Acceptance criterion (implicit): Repeatability SD ≤ 1.91 dB (upper 95% limit for the difference between repeated results, based on 2.8 × Repeatability SD)
    • Precision study result: 0.682 dB
    • Agreement study: not applicable

    Agreement (Attenuation):
    • Acceptance criterion (implicit): within ± 1 attenuation step, or ± 2 steps at the attenuation scale extremes, when compared to the predicate device (Nidek MP-1)
    • Precision study: not applicable
    • Agreement study result: approximates to ± 1 attenuation step (excluding extremes), ± 2 steps at extremes. Note: systematic difference of 1.5 steps (3 dB) for normal eyes and 0.25 step (0.5 dB) for diseased eyes. Agreement was not consistent across all values, with ranges provided in the description. Measurements are not interchangeable.
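    The 1.49 dB and 1.91 dB limits quoted above follow the ISO 5725 convention that the 95% limit on the absolute difference between two repeated results is about 2.8 (approximately 1.96 · √2) times the repeatability SD; multiplying the reported SDs by 2.8 reproduces the tabulated limits:

    ```python
    def repeatability_limit(sd_db):
        """95% limit on the absolute difference between two repeated
        results: about 1.96 * sqrt(2), conventionally rounded to 2.8,
        times the repeatability SD (ISO 5725 convention)."""
        return 2.8 * sd_db

    limit_normal = repeatability_limit(0.531)      # ~1.49 dB
    limit_pathology = repeatability_limit(0.682)   # ~1.91 dB
    ```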

    2. Sample Sizes and Data Provenance:

    • Precision Study:

      • Sample Size (Test Set): 12 subjects total (4 normal and 8 with pathology: early/intermediate AMD, Geographic Atrophy, Diabetic Retinopathy, Macular Edema, Retinal Vein Occlusion, Central Serous Retinopathy, Pattern Dystrophy, Epiretinal Membrane, or Macular Hole; likely 1 eye per subject), with 3 replicates each.
      • Data Provenance: Not explicitly stated, but given the "in-house" nature of the agreement study and the UK submitter, it's likely originating from the UK or a similar region. It appears to be a prospective study based on the "replicates with repositioning at the start of each test" methodology.
    • Agreement Study:

      • Sample Size (Test Set): 40 eyes (20 normal, 20 diseased).
      • Data Provenance: "An in-house study was conducted," implying a prospective study conducted by the manufacturer. Country of origin not explicitly stated, but likely the UK given the submitter.

    3. Number and Qualifications of Experts for Ground Truth:

    • The document does not specify the number of experts or their qualifications for establishing ground truth in either study. Instead, the studies focus on quantitative measurements and their consistency/agreement, rather than a subjective assessment of clinical findings by experts. The "diseased" and "normal" classifications infer a pre-existing diagnosis, but the process of confirming these diagnoses for the study subjects is not detailed.

    4. Adjudication Method:

    • The document does not mention any adjudication method for establishing ground truth in either study. The precision study focuses on repeatability of measurements, and the agreement study compares device measurements to a predicate device.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

    • No MRMC comparative effectiveness study is reported in the provided text. The studies focus on device performance (precision, agreement) and not on human reader improvement with or without AI assistance.

    6. Standalone Performance (Algorithm Only):

    • Yes, the studies describe standalone performance. Both the Precision Study and the Agreement Study evaluate the device's ability to produce consistent and comparable measurements without explicit human interpretation being part of the primary outcome assessment. The "patient's subjective answer to light stimuli" is input for the microperimetry functionality, but the reported performance metrics are related to the device's measurement and tracking capabilities.

    7. Type of Ground Truth Used:

    • The ground truth for the Precision Study is implicitly defined by the physical state of the eye (normal vs. specific pathologies). The study measures the device's consistency in reporting attenuation values for these known states.
    • The ground truth for the Agreement Study is the measurement obtained from the predicate device (Nidek MP-1 Microperimeter). The study assesses how closely the Optos device's measurements align with those of an already marketed device.

    8. Sample Size for the Training Set:

    • The document does not mention a training set or its sample size. The studies described are clinical evaluations for a 510(k) submission, typically focused on verification and validation of the finished device and its algorithms, rather than the development and training of new machine learning models.

    9. How the Ground Truth for the Training Set was Established:

    • As no training set is mentioned, this information is not applicable.

    K Number
    K121738
    Date Cleared
    2013-02-22

    (254 days)

    Product Code
    Regulation Number
    886.1330
    Reference & Predicate Devices
    Why did this record match?
    510k Summary Text (Full-text Search) :

    CFR 886.1605; K091579; Product Code: HPT

    • PreView PHP™ supplied by Notal Vision; Perimeter, Automatic, AC-Powered, 21 CFR Part 886.1605; K05350; Product Code: HPT
    • Amsler Grid, a Class I Exempt Preamendments device, 21 CFR 886.1330 | 21 CFR 886.1605
    Intended Use

    The myVisionTrack™ is intended for the detection of central 3 degrees metamorphopsia (visual distortion) in patients with maculopathy, including age-related macular degeneration and diabetic retinopathy, and as an aid in monitoring progression of disease factors causing metamorphopsia. It is intended to be used by patients who have the capability to regularly perform a simple self-test at home. The myVisionTrack™ is not intended to diagnose; diagnosis is the responsibility of the prescribing eye-care professional.

    Device Description

    The myVisionTrack™ is a vision function test provided on a commercially available cell phone. It implements a shape discrimination hyperacuity (SDH) vision test that allows patients to perform their own vision test at home, enabling regular monitoring of disease progression and timely detection of significant changes in vision function. If a significant worsening of vision function is detected, the physician is notified and given access to the self-test results so that they can decide whether the patient needs to be seen sooner than their next scheduled appointment.

    AI/ML Overview

    The provided document describes the myVisionTrack™ Model 0003, a device intended for the detection and characterization of central 3 degrees metamorphopsia in patients with maculopathy, including age-related macular degeneration and diabetic retinopathy, and as an aid in monitoring progression of disease factors causing metamorphopsia.

    However, the document does not explicitly state specific acceptance criteria (e.g., sensitivity, specificity thresholds) for the device's performance. Instead, it focuses on demonstrating substantial equivalence to predicate devices and verifying that patients can effectively use the device for self-monitoring.

    Here's an analysis based on the information provided, highlighting what is available and what is not:

    1. Table of Acceptance Criteria and Reported Device Performance

    As noted above, no explicit acceptance criteria thresholds (like specific sensitivity or specificity values) are provided in the document. The study's conclusion is that the device is "as safe, as effective and performs at least as safely and effectively as the predicate devices."

    The document mentions that the study "did show a significant difference between those patients with mild-to-moderate non-proliferative DR (NPDR) and those with very severe NPDR or proliferative DR (PDR), whereas traditional clinic-based visual acuity and contrast sensitivity tests were not able to detect a significant difference." This implies a performance benefit in detecting disease severity, but no specific metrics or acceptance criteria are given for this.

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: 36 individuals.
    • Data Provenance: The study was a "6-month Clinical Study" and "6-month longitudinal study" performed by VAS (Vital Art and Science Incorporated). The document does not explicitly state the country of origin but implies it was conducted by the submitter (Vital Art and Science Incorporated, Richardson, TX, USA). It was a prospective longitudinal study.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications

    • Number of Experts: Not explicitly stated.
    • Qualifications of Experts: Not explicitly stated. The ground truth was based on "clinical judgment" and "traditional clinic-based visual acuity and contrast sensitivity tests" for assessing disease condition (NPDR vs. PDR). It is reasonable to assume these judgments were made by qualified ophthalmologists or eye care professionals, but specific numbers and qualifications are not detailed.

    4. Adjudication Method for the Test Set

    • Adjudication Method: Not explicitly stated. The document mentions "clinical judgment" for assessing the disease condition, but how multiple experts (if any) arrived at a consensus is not described.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • MRMC Study: No, a multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly described or conducted in the context of human readers improving with or without AI assistance. The study focused on the device's ability to monitor changes and patient compliance.

    6. Standalone Performance (Algorithm Only without Human-in-the-loop Performance)

    • Standalone Performance: Yes, the described study appears to be a standalone performance study. The myVisionTrack™ device "implements a shape discrimination hyperacuity (SDH) vision test which allows patients to perform their own vision test at home." The results are then analyzed by the device's algorithm, and "If a significant worsening of vision function is detected the physician will be notified." The study tested "effective self-monitoring... using myVisionTrack™" and demonstrated its ability to detect differences in disease states and generate notifications based on a "0.2 logMAR notification rule." This rule is an algorithmic threshold applied to the device's output.
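    The submission states only the threshold of the notification rule (0.2 logMAR), not how it is implemented. A minimal sketch of how such a rule could be applied is shown below, assuming higher logMAR means worse vision and that change is measured against a stored baseline; the function name and the direction convention are assumptions, not the vendor's actual code:

```python
NOTIFICATION_THRESHOLD = 0.2  # logMAR; worsening at or beyond this triggers a notification

def should_notify(baseline_logmar: float, current_logmar: float) -> bool:
    """Higher logMAR means worse vision, so a positive change is a worsening."""
    return (current_logmar - baseline_logmar) >= NOTIFICATION_THRESHOLD

# Example: a change from 0.30 to 0.55 logMAR is a worsening of 0.25 -> notify
```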

    7. Type of Ground Truth Used

    • Type of Ground Truth: The ground truth for defining "significant change of disease condition" was based on clinical judgment by medical professionals and comparisons to traditional clinic-based visual acuity and contrast sensitivity tests.

    8. Sample Size for the Training Set

    • Sample Size for Training Set: The document does not specify a separate training set or its sample size. The description of the clinical study appears to relate to the validation of the device's performance, not necessarily the training of its algorithm (which is a "shape discrimination hyperacuity test"). The algorithms are described as using an "adaptive staircase algorithm," which is a testing methodology, not typically a machine learning training process that requires a training set in the conventional sense.
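    For illustration, an adaptive staircase of the general kind mentioned here can be sketched as a simple 1-up/1-down procedure that converges on a perceptual threshold; the actual staircase parameters used by the myVisionTrack™ are not described in the document, so everything below is a generic sketch:

```python
def run_staircase(respond, start=1.0, step=0.1, floor=0.0, reversals_needed=6):
    """Simple 1-up/1-down adaptive staircase.

    `respond(level)` returns True if the stimulus at `level` was detected.
    The level decreases after a correct response and increases after an
    incorrect one; the threshold estimate is the mean level at reversals.
    """
    level, last_correct, reversal_levels = start, None, []
    while len(reversal_levels) < reversals_needed:
        correct = respond(level)
        if last_correct is not None and correct != last_correct:
            reversal_levels.append(level)
        last_correct = correct
        level = max(floor, level - step) if correct else level + step
    return sum(reversal_levels) / len(reversal_levels)

# Simulated deterministic observer with a true threshold of 0.5
estimate = run_staircase(lambda level: level >= 0.5)
```

    With the deterministic observer above, the staircase oscillates around the true threshold and the reversal average lands close to it.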

    9. How the Ground Truth for the Training Set Was Established

    • Establishment of Ground Truth for Training Set: Not applicable, as a distinct training set for a machine learning algorithm is not described. The "shape discrimination hyperacuity test" is a known psychophysical testing method. The "adaptive staircase algorithm" is a method for efficiently finding thresholds. The document states "Numerous published studies have shown that patients with AMD and other forms of maculopathy have significantly poorer results as compared to normal subjects on the shape discrimination test," indicating that the underlying principle is well-established in scientific literature.

    K Number
    K092187
    Device Name
    MAIA, MODEL 1
    Manufacturer
    Date Cleared
    2010-05-27

    (310 days)

    Product Code
    Regulation Number
    886.1605
    Reference & Predicate Devices
    Why did this record match?
    510k Summary Text (Full-text Search) :

    K092187

    Trade/Device Name: Macular Integrity Assessment (MAIA™) Device; Regulation Number: 21 CFR 886.1605

    Intended Use

    The Macular Integrity Assessment (MAIA™) is indicated for measuring macular sensitivity, fixation stability and the locus of fixation, as well as providing infrared retinal imaging. It contains a reference database that is a quantitative tool for the comparison of macular sensitivity to a database of known normal subjects.

    Device Description

    The MAIA™ is a confocal, line scanning, infrared ophthalmoscope, combined with a system for visible light projection to obtain perimetric measurements using "fundus perimetry" (also "microperimetry"). MAIA™ integrates an automated perimeter and an ophthalmoscope in one device, providing:

    • images of the central retina over a field of view of 36° x 36°, acquired under infrared illumination;
    • recordings of eye movements obtained by "tracking" retinal details in the live retinal images, with a quantitative analysis of fixation characteristics;
    • measurements of differential light sensitivity (or threshold sensitivity) at multiple locations in the macula, obtained by recording a patient's subjective response (see / don't see) to a light stimulus projected at a certain location on the retina;
    • comparison of measured threshold sensitivity with a reference database obtained from normal subjects, indicating whether measured thresholds are above or below certain percentiles.

    MAIA 100 works with no pupil dilation (non-mydriatic). MAIA™ integrates a computer for control and data processing and a touch-screen display, and is provided with a power cord and a push-button. MAIA™ runs a dedicated software application on a custom Linux O.S.

    AI/ML Overview

    1. Acceptance Criteria and Reported Device Performance

    The acceptance criteria for the MAIA™ device are not explicitly stated in terms of specific performance thresholds (e.g., "accuracy must be >X%"). Instead, the study focuses on demonstrating the precision (repeatability and reproducibility) of the device's measurements for macular sensitivity. The reported device performance is presented as standard deviations (SD) for both overall mean thresholds and individual grid point thresholds.

    | Metric | Acceptance Criteria (Implicit: demonstrate acceptable precision) | Reported Performance (Normal Eyes) | Reported Performance (Pathology Eyes) |
    |---|---|---|---|
    | Overall Mean Threshold | - | 29.7 dB (Mean) | 23.5 dB (Mean) |
    | Overall Standard Deviation | - | 1.14 dB | 4.23 dB |
    | Repeatability SD* | - | 0.42 dB | 0.75 dB |
    | Reproducibility SD** | - | 0.96 dB | 0.75 dB |
    | Individual Grid Point Results | | | |
    | Repeatability SD (Minimum) | - | 0.94 | 1.33 |
    | Repeatability SD (Median) | - | 1.40 | 2.36 |
    | Repeatability SD (Maximum) | - | 2.43 | 3.16 |
    | Reproducibility SD (Minimum) | - | 1.06 | 1.33 |
    | Reproducibility SD (Median) | - | 1.80 | 2.43 |
    | Reproducibility SD (Maximum) | - | 2.70 | 3.24 |
    * Estimate of the standard deviation among measurements taken by the same operator on the same device in the same testing session, with repositioning.
    ** Estimate of the standard deviation among measurements taken on the same subject by different operators and devices, including repeatability.

    The study's conclusion states that "all testing deemed necessary was conducted on the MAIA™ to ensure that the device is safe and effective for its intended use," implying that these precision results met the internal acceptance benchmarks for demonstrating substantial equivalence.

    2. Sample Size Used for the Test Set and Data Provenance

    The "test set" in this context refers to the subjects used in the precision study.

    • Sample Size:
      • Normal Subjects: 12 subjects (each tested on one eye only).
      • Pathology Subjects: 12 subjects (each tested on one eye only).
      • Each subject/eye was tested 3 times within a session (3 repeated measures).
    • Data Provenance: The subjects were enrolled at two different clinical sites. The document does not specify the country of origin of these clinical sites, but the company is based in Italy. The study appears to be prospective, as subjects were enrolled for the purpose of this precision study.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    The document does not describe the establishment of ground truth for the test set (the precision study participants) in terms of expert consensus for specific macular sensitivity measurements. Instead, for the pathology group, the "diagnosis of retinal pathology was made by a complete eye examination by an ophthalmologist, including dilated funduscopic examination and pertinent history." The number and specific qualifications (e.g., years of experience) of these ophthalmologists are not specified.

    4. Adjudication Method for the Test Set

    No adjudication method is described for the test set. The study focuses on the device's precision in measuring macular sensitivity, rather than on a diagnostic performance where multiple expert opinions would need to be adjudicated.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No multi-reader multi-case (MRMC) comparative effectiveness study was done. The document does not mention human readers or AI assistance in the context of the MAIA™ device's operation or evaluation in this submission. The device itself is an automated perimeter and ophthalmoscope.

    6. Standalone (Algorithm Only) Performance

    The device itself is a standalone algorithm-based system for measuring macular sensitivity and fixation. The precision study evaluates the performance of this system independently. There is no "human-in-the-loop" component described that would alter or assist the device's primary measurements of macular sensitivity and fixation.

    7. Type of Ground Truth Used

    • For the Precision Study (Test Set):
      • For normal subjects, the implication is that they had no known retinal pathology.
      • For pathology subjects, the ground truth for their pathological status was established by "a complete eye examination by an ophthalmologist, including dilated funduscopic examination and pertinent history." The specific values of macular sensitivity measured by the MAIA™ are the output being evaluated for precision, not compared against an external "ground truth" measurement for sensitivity.
    • For the Reference Database (Training Set - described in section 9): The ground truth for the reference database was established by measuring threshold sensitivity in subjects deemed "normal subjects" (see point 9).

    8. Sample Size for the Training Set

    The document mentions a "reference database" that serves as the equivalent of a training or reference set for the device's normative comparison.

    • Sample Size: 494 eyes of 270 normal subjects.

    9. How the Ground Truth for the Training Set Was Established

    The "ground truth" for the reference database (training set) was established by measuring threshold sensitivity data from:

    • "Normal subjects": These subjects were enrolled at 4 different clinical sites.
    • Age Range: 21-86 years (mean 43, std. dev. 15).
    • Recruitment: Among the clinics' personnel and relatives of the clinics' regular patients.

    The implication is that these subjects were screened and determined to be without ocular pathology affecting macular sensitivity, thus providing a "normal" baseline for comparison. The specific criteria for deeming a subject "normal" (e.g., visual acuity, fundus examination results) are not detailed beyond "normal subjects."
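    The normative comparison described above (flagging measured thresholds that fall below certain percentiles of the reference database) can be sketched generically. The MAIA™'s actual percentile computation is not specified in the document, so the method (linear-interpolation percentile) and all values below are assumptions for illustration:

```python
def percentile_flag(threshold_db, normal_values, lower_pct=5.0):
    """Return (is_below, cutoff): whether a measured threshold falls below
    the given percentile of the normative data (linear interpolation)."""
    data = sorted(normal_values)
    pos = (lower_pct / 100.0) * (len(data) - 1)
    low, frac = int(pos), pos - int(pos)
    high = min(low + 1, len(data) - 1)
    cutoff = data[low] + frac * (data[high] - data[low])
    return threshold_db < cutoff, cutoff

# Hypothetical normative thresholds (dB) from normal eyes
normals = [26.0, 27.0, 27.5, 28.0, 28.5, 29.0, 29.5, 30.0, 30.5, 31.0]
flagged, cutoff = percentile_flag(24.0, normals)
```

    A measured threshold below the cutoff would be reported as outside the normal range for that percentile.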

