510(k) Data Aggregation (133 days)
CENTERVUE MACULAR INTEGRITY ASSESSMENT
The Macular Integrity Assessment (MAIA™) is intended for measuring macular sensitivity, fixation stability and the locus of fixation, as well as providing infrared retinal imaging. It contains a reference database that is a quantitative tool for the comparison of macular sensitivity to a database of known normal subjects.
MAIA™ integrates in one device an automated perimeter and an ophthalmoscope, providing:
- images of the central retina over a 36° x 36° field of view, acquired under infrared illumination with a confocal imaging set-up;
- recordings of eye movements, obtained by "tracking" retinal details in the live retinal video (acquired at 25 fps), providing a measure of a patient's fixation capabilities;
- measurements of differential light sensitivity (or threshold sensitivity) at multiple locations in the macula, obtained as in fundus perimetry by recording a patient's subjective response (see / do not see) to a light stimulus projected at a given location on the retina.
MAIA™ works with no pupil dilation (non-mydriatic).
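As a hedged illustration of how a threshold sensitivity might be estimated from see / do-not-see responses, the sketch below implements a simple 4-2 dB staircase, a common strategy in automated perimetry. The 4-2 strategy, the 0-36 dB range, and the function names are assumptions for illustration; the source does not describe MAIA's actual thresholding algorithm.

```python
def staircase_4_2(sees, start_db=25, min_db=0, max_db=36):
    """Estimate threshold sensitivity (dB) with a 4-2 dB staircase.

    `sees(level_db)` is the patient's response: True if the stimulus
    presented at `level_db` of attenuation is seen (higher dB = dimmer).
    The 4-2 strategy and the 0-36 dB range are illustrative assumptions;
    the source does not state MAIA's actual algorithm.
    """
    level, step = start_db, 4
    last = sees(level)
    for _ in range(50):  # safety bound for this sketch
        # Seen -> dim the stimulus (raise attenuation); not seen -> brighten it.
        level += step if last else -step
        level = max(min_db, min(max_db, level))
        resp = sees(level)
        if resp != last:          # a reversal in the response
            if step == 2:         # second reversal: stop and report
                return level
            step = 2              # first reversal: refine the step, 4 -> 2
        last = resp
    return level
```

For example, a simulated patient who sees every stimulus at 20 dB of attenuation or less converges to within one coarse step of that threshold.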
MAIA™ integrates a computer for control and data processing and a touch-screen display; it is supplied with a power cord and a push-button. MAIA™ runs a dedicated software application on a custom Linux OS.
MAIA is composed of:
- An optical head;
- A chin-rest and head-rest;
- A base, including a touch-screen display.

The optical head comprises:
- An infrared source at 845 nm (SLD);
- A line-scanning confocal imaging system of the retina. The line, generated by means of an anamorphic lens, is scanned on the retina while the back-reflected light is de-scanned and revealed by a linear CCD sensor;
- A projection system comprising visible LEDs to generate Goldmann stimuli and background at controlled luminance values;
- A fixation target in the shape of a red circle (two different dimensions available);
- An auto-focus system.

The base of the MAIA includes:
- A 3-axis robot that moves the optical head;
- An embedded PC that hosts the control software and related interface ports;
- The power supply.
Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided text:
Acceptance Criteria and Device Performance
The provided document doesn't explicitly state "acceptance criteria" in a quantitative performance metric table. Instead, it details a precision study to demonstrate the consistency and reliability of the device's measurements for macular sensitivity. The study's results are presented as "Precision results:" and "Individual Grid Point Results:". These values represent the device's demonstrated performance in terms of repeatability and reproducibility.
Table of Acceptance Criteria (Implied by Precision Study) and Reported Device Performance
| Performance Metric | Implied Acceptance Criteria | Reported Performance (Normal Subjects, Overall Mean) | Reported Performance (Pathology Subjects, Overall Mean) |
|---|---|---|---|
| Overall Mean Sensitivity | N/A (baseline for comparison) | 29.7 dB | 23.5 dB |
| Overall Std Deviation | N/A (reflects subject variety) | 1.14 dB | 4.23 dB |
| Repeatability SD* | Low (consistent results within a session) | 0.42 dB | 0.75 dB |
| Reproducibility SD** | Low (consistent results across operators/devices) | 0.96 dB | 0.75 dB |
Table of Individual Grid Point Results (Implied Acceptance Criteria and Reported Device Performance)

| Group / Parameter | Implied Acceptance Criteria | Repeatability SD (dB) | Reproducibility SD (dB) |
|---|---|---|---|
| Normal: Minimum | Low | 0.94 | 1.06 |
| Normal: Median | Low | 1.40 | 1.80 |
| Normal: Maximum | Low (within a range consistent with diagnostic utility) | 2.43 | 2.70 |
| Pathology: Minimum | Low | 1.33 | 1.33 |
| Pathology: Median | Low | 2.36 | 2.43 |
| Pathology: Maximum | Low (within a range consistent with diagnostic utility) | 3.16 | 3.24 |
* Repeatability SD: Estimate of the standard deviation among measurements taken on the same subject by the same operator and device in the same testing session, with repositioning.
** Reproducibility SD: Estimate of the standard deviation among measurements taken on the same subject using different operators and devices, including repeatability.
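These two quantities can be illustrated with a simple variance-components computation. The sketch below assumes a balanced design in which each subject has repeated measurements under several operator/device settings; the submission's actual statistical model is not given, so treat this as an illustrative decomposition, not the study's method.

```python
import statistics

def precision_sds(data):
    """Illustrative repeatability / reproducibility SD estimates.

    `data[subject][setting]` is a list of repeated measurements (dB) for
    one subject under one operator/device setting. Repeatability variance
    is the pooled within-setting variance; reproducibility variance adds
    the between-setting variance (ignoring the small s_r^2/n correction
    a full ISO 5725-style analysis would apply).
    """
    within, between = [], []
    for settings in data.values():
        means = []
        for reps in settings.values():
            within.append(statistics.variance(reps))
            means.append(statistics.mean(reps))
        if len(means) > 1:
            between.append(statistics.variance(means))
    s_r2 = statistics.mean(within)               # repeatability variance
    s_l2 = statistics.mean(between) if between else 0.0
    return s_r2 ** 0.5, (s_r2 + s_l2) ** 0.5     # (repeatability SD, reproducibility SD)
```

By construction the reproducibility SD can never be smaller than the repeatability SD, which is why each is reported separately in the tables above.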
The document implicitly suggests that these precision values are acceptable and demonstrate that the device performs consistently, which is a key aspect of meeting its intended purpose for measuring macular sensitivity.
Study Details
1. Sample size used for the test set and the data provenance:
- Test Set Sample Size: 24 subjects (12 with normal eyes and 12 with retinal pathologies). Each subject was tested on one eye only.
- Each subject/eye was tested 3 times within a session.
- Data Provenance: The subjects were enrolled at two different clinical sites. The document doesn't specify countries, but the manufacturer is based in Italy. The study appears to be prospective for the purpose of this precision testing.
2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Number of Experts: One "ophthalmologist" per site (implied by "Diagnosis of retinal pathology was made by a complete eye examination by an ophthalmologist"). The total number of ophthalmologists across the two sites is not explicitly stated but would be at least two (one per site).
- Qualifications of Experts: Ophthalmologists. No specific years of experience are provided, but they conducted a "complete eye examination" including "dilated funduscopic examination and pertinent history."
3. Adjudication method (e.g. 2+1, 3+1, none) for the test set:
- The document does not describe an adjudication method for the diagnoses of pathology. It states that "Diagnosis of retinal pathology was made by a complete eye examination by an ophthalmologist". This suggests a single expert's diagnosis was used to classify subjects into normal or pathology groups, rather than a consensus or adjudication process for the test set.
4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of human reader improvement with AI assistance vs. without:
- No, a multi-reader multi-case (MRMC) comparative effectiveness study comparing human readers with and without AI assistance was not done. This submission is for a perimetry device that measures visual function, not an AI diagnostic tool that assists human readers in interpreting images. The closest related activity is the precision study of the device itself.
5. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
- The device performs automated perimetry, which inherently involves the patient's subjective response ("see / do not see" to a light stimulus). It is therefore not a purely standalone algorithm: a human (the patient) is in the loop by design. The measurements themselves, however, are automated, and the precision study evaluates the device's measurement performance directly, with operator influence entering only through the reproducibility estimate.
6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- For classifying subjects into "normal" or "pathology" groups for the precision study, the ground truth was based on a clinical diagnosis by an ophthalmologist, specifically a "complete eye examination by an ophthalmologist, including dilated funduscopic examination and pertinent history."
- For the data collected within the study itself (macular sensitivity measurements), the device produces its own quantitative data which is then assessed for precision rather than against an external ground truth for each specific measurement.
7. The sample size for the training set:
While not explicitly called a "training set" for an AI model, the document refers to a "Reference Database" which is analogous to a training or normalisation set.
- Reference Database Sample Size: 494 eyes of 270 normal subjects.
8. How the ground truth for the training set was established:
- For the "Reference Database" (normal subjects), the ground truth was established by defining "normal subjects" from whom threshold sensitivity data was obtained. The criteria for being considered "normal" are not explicitly detailed beyond being "normal subjects," but typically this implies healthy individuals without ocular pathology. This data was used to create a "reference database that is a quantitative tool for the comparison of macular sensitivity to a database of known normal subjects."
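As an illustration of how a reference database of normals can be used as a quantitative comparison tool, the sketch below flags a patient's mean sensitivity that falls below a percentile cutoff of a Gaussian fit to normative data. The normative mean/SD reused here (29.7 / 1.14 dB) come from the precision study's normal group, and the 5% cutoff, function name, and Gaussian model are assumptions; the source does not describe how MAIA's reference database actually flags results.

```python
from statistics import NormalDist

def flag_sensitivity(patient_mean_db, norm_mean_db=29.7, norm_sd_db=1.14, cutoff_pct=5.0):
    """Compare a patient's mean macular sensitivity to a normative database.

    Computes the patient's percentile under a Gaussian fitted to the
    normal group's mean/SD and flags values below `cutoff_pct`. This is
    an illustrative assumption, not MAIA's documented method.
    """
    pct = NormalDist(norm_mean_db, norm_sd_db).cdf(patient_mean_db) * 100
    return pct, pct < cutoff_pct   # (percentile, below-normal flag)
```

For example, a patient at the normal-group mean sits near the 50th percentile and is not flagged, while a mean of 23.5 dB (the pathology-group average) falls far below the 5th percentile and is flagged.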