510(k) Data Aggregation
(273 days)
The TrueField Analyzer is an automated perimeter used to aid in the measurement of visual field abnormalities; it is indicated for the assessment of visual field abnormalities.
The TrueField Analyzer is an automated perimeter that is used to aid in measurement of visual field abnormalities. It is an objective device that monitors involuntary responses in the patient's pupils to a series of multi-focal visual stimuli presented to the eyes. The system presents stimuli and monitors the pupil responses in both eyes independently and concurrently.
The device includes:
- A bilateral image display system that provides an individual visual stimulus to each of the patient's eyes (both eyes are stimulated concurrently and independently)
- A pair of video cameras that monitor the patient's pupils, again concurrently and independently
- A personal computer running the Windows XP Professional Service Pack 2 operating system
- The TrueField Software system, which automatically manages stimulus presentation and video data acquisition (ensuring synchronization between the display and video image acquisition), as well as data analysis, storage, and presentation of results for review
The TrueField Analyzer uses a fundamentally different technology from the predicate device. It combines standard multi-focal stimulus and analysis technology (as used in other perimetry devices, for example K003442 and K983983) with computerized pupil monitoring (for example K920937), allowing the device to objectively measure a patient's visual field map. On this basis it is substantially equivalent to the predicate device (K954167).
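The submission does not detail the "regression based multifocal analysis," but the general multifocal technique it references can be sketched as follows. In this hypothetical illustration (all data and parameters are invented for the example), each field region is driven by its own pseudo-random binary stimulus sequence, the measured pupil trace is modeled as a weighted sum of those sequences, and ordinary least squares recovers a per-region response gain; a region with near-zero gain would correspond to a field defect.

```python
import numpy as np

# Hypothetical sketch of multifocal analysis by regression (not the
# actual TrueField algorithm): each of 44 field regions gets its own
# pseudo-random binary stimulus sequence; the pupil trace is modeled
# as a weighted sum of those sequences plus noise.

rng = np.random.default_rng(0)
n_regions, n_samples = 44, 2000

S = rng.integers(0, 2, size=(n_samples, n_regions)).astype(float)  # stimulus design matrix
true_gain = rng.uniform(0.5, 1.5, n_regions)                       # per-region sensitivity
true_gain[5] = 0.0                                                 # simulate a field defect
y = S @ true_gain + rng.normal(0, 0.1, n_samples)                  # noisy pupil response

# Ordinary least squares recovers the per-region gains simultaneously,
# because each region's stimulus sequence is statistically independent.
gain_hat, *_ = np.linalg.lstsq(S, y, rcond=None)
defect = int(np.argmin(gain_hat))
print(defect)  # the region with the lowest estimated response
```

The key property this relies on is that independent stimulus sequences let all regions be tested concurrently, which is consistent with the short total test time the table below reports.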
Based on the provided 510(k) summary for the TrueField Analyzer, here's a detailed breakdown regarding acceptance criteria and the study (or lack thereof) that supports its performance:
1. Table of Acceptance Criteria and Reported Device Performance
The provided text does not explicitly state specific quantitative acceptance criteria (e.g., sensitivity, specificity, accuracy targets) for the TrueField Analyzer's performance in diagnosing visual field abnormalities. Instead, the performance data section focuses on demonstrating the device's conformance to product specifications (electrical safety, EMC, IR radiation safety) and a comparative table of technical specifications with the predicate device (Humphrey Field Analyser - HFA-II).
The core performance claim relies on "substantial equivalence" to the predicate device, implying that its performance is implicitly accepted as equivalent to a device already deemed safe and effective.
Here's a table summarizing the technical specifications that are presented as "performance data" in comparison to the predicate, which indirectly serve as a basis for proving its functionality and equivalence:
| Feature/Criteria | TrueField Analyzer Performance | HFA-II (Predicate) Performance (for comparison) |
|---|---|---|
| General | | |
| Intended Clinical Purpose | Visual field examination / to measure visual field defects | Visual field examination / to measure visual field defects |
| Product Code | HPT | HPT |
| Regulation | 886.1605 | 886.1605 |
| Device Class | I | I |
| Technical/Operational | | |
| Visual System Stimulus | Sparse-stimulus multifocal stimulus | Single spot of variable luminance and size |
| Measurement Technology | Video camera based pupil measurement | User feedback (button press) |
| Visual Function Assessment | Regression based multifocal analysis | Threshold or suprathreshold sensitivity to spots |
| Visual Field Defect Assessment | Population sample normal database comparison | Population sample Normal database comparison |
| Stimulus Luminance | 290 cd/m² | 0.025 - 3,183 cd/m² (or 0.08 – 10,000 apostilbs) |
| Background Luminance | 10 cd/m² | 10 cd/m² (31.5 apostilbs) |
| Number of Stimuli Locations | 24 T30-24, 40 T30-40, 60 T30-60, 24 T10-24, 44 O30-44 | 54 Central 24-2, 76 Central 30-2, 68 Central 10-2, 68 Peripheral 30/60-2 |
| Eccentricity Limits of Std Test Area | ±30 degrees | ±24 degrees (for Central 24-2, a common test) |
| Stimulus Spot Size | 4, 11 or 14 degrees arc angle (segments) | 0.43 degrees (Goldmann standard size III spot) |
| Stimulus Spot Spacing | ~7.5 to 12.5 degrees (cortically scaled) | Uniform 6° grid spacing (standard patterns) |
| Proportion of Visual Field Test Area Sampled | 88% | < 0.5% (for standard 24-2 pattern) |
| Total Test Time | 4 to 5 minutes for both eyes | 5 to 15+ minutes per eye (dependent on test) |
It's crucial to note that the document does not present quantitative results from a specific clinical study aimed at proving diagnostic accuracy (e.g., sensitivity, specificity, agreement with clinical diagnosis) against predefined acceptance criteria for visual field assessment. The "Performance Data" section primarily addresses safety and technical characteristics, and then performs a feature-by-feature comparison to the predicate device to establish substantial equivalence.
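The "< 0.5%" sampled-area figure for the predicate can be sanity-checked from the other table entries: 54 Goldmann size III spots of 0.43° diameter spread over a field of roughly 24° eccentricity. The short calculation below treats the field as a flat disc, which is a simplification, but it reproduces the order of magnitude.

```python
import math

# Rough check of the "< 0.5%" sampled-area figure for the predicate's
# Central 24-2 pattern: 54 spots of 0.43 degrees diameter over a
# ~24-degree-radius field, approximated as a flat disc.

n_spots = 54
spot_diameter_deg = 0.43
field_radius_deg = 24.0

spot_area = n_spots * math.pi * (spot_diameter_deg / 2) ** 2   # total stimulated area
field_area = math.pi * field_radius_deg ** 2                   # total test field area

coverage_pct = 100 * spot_area / field_area
print(f"{coverage_pct:.2f}%")  # ≈ 0.43%, consistent with the "< 0.5%" figure
```

The TrueField's 88% figure follows the same logic with far larger stimuli (4° to 14° segments), though the document does not give enough geometric detail to reproduce that number exactly.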
2. Sample size used for the test set and the data provenance
The document does not describe a specific clinical test set for diagnostic performance or accuracy. The performance data primarily refers to conformance with product specifications (electrical safety, EMC, IR radiation safety) and a feature comparison. There is no mention of a patient cohort or a dataset used to "test" the device's ability to measure visual field abnormalities from a diagnostic perspective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)
Since no specific clinical test set for diagnostic accuracy is described, there is no mention of experts establishing ground truth. The device's "Visual Field Defect Assessment" is stated to use "Population sample normal database comparison," which likely refers to an internal reference database rather than expert-adjudicated ground truth for a test set.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
Not applicable, as no dedicated clinical test set with human ground truth assessment is described.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of how much human readers improve with AI versus without AI assistance?
There is no mention of an MRMC comparative effectiveness study involving human readers with or without AI assistance. The TrueField Analyzer is described as an "objective device that monitors involuntary responses in the patient's pupils," implying a direct measurement by the device itself, rather than an AI assisting human interpretation of images. The device's technology is a departure from the predicate device which relies on "User feedback (button press)."
6. Whether a standalone study (i.e., algorithm-only performance, without a human in the loop) was done
The TrueField Analyzer is described as an "automated perimeter" and an "objective device" that "automatically manages the stimulus presentation and video data acquisition; data analysis, storage and presentation of results for review." This implies a standalone performance mode for the measurement and initial analysis of visual fields. However, the document does not report specific performance metrics (e.g., sensitivity, specificity, accuracy) for this standalone performance in the context of diagnosing visual field defects. The performance data focuses on technical and safety specifications, and comparison of features to a predicate.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The document mentions "Population sample normal database comparison" for visual field defect assessment. This suggests the ground truth for identifying abnormalities is based on statistical deviation from a normative database of healthy individuals, rather than expert consensus on specific cases, pathology, or long-term outcomes for a test cohort.
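The document does not describe how the normative comparison is implemented, but the general technique it names can be sketched. In this hypothetical example (all numbers and thresholds are invented), a test location is flagged as abnormal when the patient's measured response falls below a low percentile of a database of healthy responses at that location.

```python
import numpy as np

# Hypothetical sketch of a "population sample normal database comparison"
# (not the actual TrueField method): flag a location as abnormal when the
# patient's response falls below the 5th percentile of healthy responses.

rng = np.random.default_rng(1)
normals = rng.normal(loc=30.0, scale=2.0, size=(200, 44))  # 200 healthy subjects, 44 locations

lower_limit = np.percentile(normals, 5, axis=0)            # per-location lower normal limit

patient = rng.normal(30.0, 2.0, 44)                        # simulated patient field
patient[7] = 18.0                                          # simulate a deep local defect

abnormal = np.flatnonzero(patient < lower_limit)           # locations outside normal limits
```

Note that with a 5th-percentile cutoff, roughly 5% of truly normal locations would also be flagged by chance, which is why perimetry reports typically combine per-location flags with global indices before calling a field abnormal.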
8. The sample size for the training set
The document does not specify a "training set" sample size. While the device uses a "Population sample normal database" for comparison, the size and nature of this database are not disclosed, nor is it referred to as a "training set" in the context of an AI/machine learning model. The technology, while automated, is described in terms of "regression based multifocal analysis" rather than deep learning that would typically involve a distinct training set.
9. How the ground truth for the training set was established
Not applicable, as a distinct "training set" with established ground truth as commonly understood in modern AI/ML device submissions is not described. The "Population sample normal database" would have been established by performing tests on a group of healthy individuals to define the normal range of visual field responses, but the methodology for this is not detailed in the provided text.