Search Results
Found 8 results
510(k) Data Aggregation
(310 days)
HPT
The Macular Integrity Assessment (MAIA™) is indicated for measuring macular sensitivity, fixation stability and the locus of fixation, as well as providing infrared retinal imaging. It contains a reference database that is a quantitative tool for the comparison of macular sensitivity to a database of known normal subjects.
The MAIA™ is a confocal, line-scanning, infrared ophthalmoscope combined with a system for visible-light projection to obtain perimetric measurements using "fundus perimetry" (also "microperimetry"). MAIA™ integrates an automated perimeter and an ophthalmoscope in one device, providing:
- images of the central retina over a field of view of 36° x 36°, acquired under infrared illumination;
- recordings of eye movements obtained by "tracking" retinal details in the live retinal images, with a quantitative analysis of fixation characteristics;
- measurements of differential light sensitivity (or threshold sensitivity) at multiple locations in the macula, obtained by recording a patient's subjective response (see / don't see) to a light stimulus projected at a given retinal location;
- comparison of measured threshold sensitivity with a reference database obtained from normal subjects, indicating whether measured thresholds are above or below certain percentiles.

MAIA™ works with no pupil dilation (non-mydriatic). It integrates a computer for control and data processing and a touch-screen display, and it is provided with a power cord and a push-button. MAIA™ runs a dedicated software application on a custom Linux OS.
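The normative comparison described above (flagging measured thresholds that fall below a chosen percentile of a reference database of normal subjects) can be illustrated with a short, hedged sketch. This is not the MAIA™ algorithm; the grid size, percentile cut-off, and all names and values below are assumptions for demonstration only.

```python
# Illustrative sketch only: comparing a measured threshold map against
# percentiles of a normative reference database. Grid layout, percentile
# cut-off, and variable names are assumptions, not MAIA internals.
import numpy as np

def flag_against_normals(measured: np.ndarray, normals: np.ndarray,
                         lower_pct: float = 5.0) -> np.ndarray:
    """measured: (n_points,) thresholds in dB for one eye.
    normals: (n_normals, n_points) thresholds from normal subjects.
    Returns a boolean mask of points falling below the chosen percentile."""
    cutoffs = np.percentile(normals, lower_pct, axis=0)  # per-point cut-off
    return measured < cutoffs

# Example with synthetic data (37 stimulus locations, 270 normal subjects).
rng = np.random.default_rng(0)
normals = rng.normal(29.7, 1.5, size=(270, 37))   # synthetic normative data
patient = rng.normal(27.0, 2.0, size=37)          # synthetic patient exam
print(flag_against_normals(patient, normals).sum(), "points below the 5th percentile")
```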
1. Acceptance Criteria and Reported Device Performance
The acceptance criteria for the MAIA™ device are not explicitly stated in terms of specific performance thresholds (e.g., "accuracy must be >X%"). Instead, the study focuses on demonstrating the precision (repeatability and reproducibility) of the device's measurements for macular sensitivity. The reported device performance is presented as standard deviations (SD) for both overall mean thresholds and individual grid point thresholds.
Metric | Acceptance Criteria (Implicit: demonstrate acceptable precision) | Reported Device Performance (Normal Eyes) | Reported Device Performance (Pathology Eyes) |
---|---|---|---|
Overall Mean Threshold | - | 29.7 dB (Mean) | 23.5 dB (Mean) |
Overall Standard Deviation | - | 1.14 dB | 4.23 dB |
Repeatability SD* | - | 0.42 dB | 0.75 dB |
Reproducibility SD** | - | 0.96 dB | 0.75 dB |
Individual Grid Point Results (dB) | - | | |
Repeatability SD (Minimum) | - | 0.94 | 1.33 |
Repeatability SD (Median) | - | 1.40 | 2.36 |
Repeatability SD (Maximum) | - | 2.43 | 3.16 |
Reproducibility SD (Minimum) | - | 1.06 | 1.33 |
Reproducibility SD (Median) | - | 1.80 | 2.43 |
Reproducibility SD (Maximum) | - | 2.70 | 3.24 |
* Estimate of the standard deviation among measurements taken on the same subject by the same operator and device in the same testing session, with repositioning between tests.
** Estimate of the standard deviation among measurements taken on the same subject using different operators and devices, including repeatability.
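As background for the repeatability and reproducibility figures above, the following is a minimal sketch of how such variance-component estimates are commonly obtained (a basic gauge R&R-style decomposition). It is not the sponsor's actual analysis; the array shape, site/operator structure, and numbers are assumptions.

```python
# Minimal sketch of a gauge R&R-style variance decomposition, not the
# sponsor's analysis. Shapes, names, and values are assumptions.
import numpy as np

def precision_sds(x: np.ndarray) -> tuple[float, float]:
    """x has shape (n_subjects, n_operator_device_combos, n_repeats), in dB."""
    n_rep = x.shape[2]
    # Repeatability: variation among repeated tests within the same
    # subject/operator/device combination.
    var_repeat = x.var(axis=2, ddof=1).mean()
    # Between operator/device component for the same subject (ANOVA-style,
    # subtracting the repeatability contribution to the combo means).
    combo_means = x.mean(axis=2)
    var_between = max(combo_means.var(axis=1, ddof=1).mean() - var_repeat / n_rep, 0.0)
    # Reproducibility as defined in the footnote above includes repeatability.
    var_reproduce = var_between + var_repeat
    return float(np.sqrt(var_repeat)), float(np.sqrt(var_reproduce))

rng = np.random.default_rng(1)
data = 29.7 + rng.normal(0, 1.0, size=(12, 2, 3))  # 12 subjects, 2 sites, 3 repeats
rep_sd, repro_sd = precision_sds(data)
print(f"repeatability SD = {rep_sd:.2f} dB, reproducibility SD = {repro_sd:.2f} dB")
```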
The study's conclusion states that "all testing deemed necessary was conducted on the MAIA™ to ensure that the device is safe and effective for its intended use," implying that these precision results met the internal acceptance benchmarks for demonstrating substantial equivalence.
2. Sample Size Used for the Test Set and Data Provenance
The "test set" in this context refers to the subjects used in the precision study.
- Sample Size:
- Normal Subjects: 12 subjects (each tested on one eye only).
- Pathology Subjects: 12 subjects (each tested on one eye only).
- Each subject/eye was tested 3 times within a session (3 repeated measures).
- Data Provenance: The subjects were enrolled at two different clinical sites. The document does not specify the country of origin of these clinical sites, but the company is based in Italy. The study appears to be prospective, as subjects were enrolled for the purpose of this precision study.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
The document does not describe the establishment of ground truth for the test set (the precision study participants) in terms of expert consensus for specific macular sensitivity measurements. Instead, for the pathology group, the "diagnosis of retinal pathology was made by a complete eye examination by an ophthalmologist, including dilated funduscopic examination and pertinent history." The number and specific qualifications (e.g., years of experience) of these ophthalmologists are not specified.
4. Adjudication Method for the Test Set
No adjudication method is described for the test set. The study focuses on the device's precision in measuring macular sensitivity, rather than on a diagnostic performance where multiple expert opinions would need to be adjudicated.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No multi-reader multi-case (MRMC) comparative effectiveness study was done. The document does not mention human readers or AI assistance in the context of the MAIA™ device's operation or evaluation in this submission. The device itself is an automated perimeter and ophthalmoscope.
6. Standalone (Algorithm Only) Performance
The device itself is a standalone algorithm-based system for measuring macular sensitivity and fixation. The precision study evaluates the performance of this system independently. There is no "human-in-the-loop" component described that would alter or assist the device's primary measurements of macular sensitivity and fixation.
7. Type of Ground Truth Used
- For the Precision Study (Test Set):
- For normal subjects, the implication is that they had no known retinal pathology.
- For pathology subjects, the ground truth for their pathological status was established by "a complete eye examination by an ophthalmologist, including dilated funduscopic examination and pertinent history." The specific values of macular sensitivity measured by the MAIA™ are the output being evaluated for precision, not compared against an external "ground truth" measurement for sensitivity.
- For the Reference Database (Training Set - described in section 9): The ground truth for the reference database was established by measuring threshold sensitivity in subjects deemed "normal subjects" (see point 9).
8. Sample Size for the Training Set
The document mentions a "reference database" that serves as the equivalent of a training or reference set for the device's normative comparison.
- Sample Size: 494 eyes of 270 normal subjects.
9. How the Ground Truth for the Training Set Was Established
The "ground truth" for the reference database (training set) was established by measuring threshold sensitivity data from:
- "Normal subjects": These subjects were enrolled at 4 different clinical sites.
- Age Range: 21-86 years (mean 43, std. dev. 15).
- Recruitment: Among the clinics' personnel and relatives of the clinics' regular patients.
The implication is that these subjects were screened and determined to be without ocular pathology affecting macular sensitivity, thus providing a "normal" baseline for comparison. The specific criteria for deeming a subject "normal" (e.g., visual acuity, fundus examination results) are not detailed beyond "normal subjects."
(150 days)
HPT
The Guided Progression Analysis for the Humphrey® Field Analyzer II (HFA II) and Humphrey® Field Analyzer II- i series is a software analysis module that is intended for use as a diagnostic device to aid in the detection and management of ocular diseases including, but not limited to, glaucoma. It is also intended to compare change over time and determine if statistically significant change has occurred.
The Carl Zeiss Meditec, Inc. Guided Progression Analysis is a software analysis module for the Humphrey® Field Analyzer II (HFA II) and Humphrey® Field Analyzer II - i series (HFA II - i) that assists practitioners with the detection, measurement, and management of progression of visual field loss. It aids in assessing change over time, including change from baseline and rate of change. It is intended for use as a diagnostic device to aid in the detection and management of ocular diseases including, but not limited to, glaucoma.
The Carl Zeiss Meditec, Inc. Guided Progression Analysis (GPA) is a software package for the Humphrey Field Analyzer II and II - i series that is designed to help practitioners identify progressive visual field loss in glaucoma patients. GPA compares the visual field test results of up to 14 follow-up tests to an established baseline over time and determines if there is statistically significant change. The GPA printout highlights any changes from baseline that represent larger than expected clinical variability, and it provides simple plain-language messages such as "Possible Progression" or "Likely Progression" whenever changes show consistent and statistically significant loss. The GPA printout also presents the Visual Field Index (VFI), a global index which reports a measure of the patient's remaining useful vision in the form of a percentage, as well as the VFI Rate of Progression plot which provides a trend analysis of the patient's overall visual field history and indicates a 3-5 year projection of the VFI regression line if the current trend continued.
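As a rough illustration of the trend analysis described above (fitting a regression line to VFI over time and projecting it a few years ahead), here is a hedged sketch. It is not Carl Zeiss Meditec's implementation; the exam times and VFI values are invented.

```python
# Hedged sketch of a VFI rate-of-progression trend: fit a line to VFI (%)
# versus exam time and extrapolate a few years ahead. Values are illustrative.
import numpy as np

years = np.array([0.0, 0.5, 1.1, 1.6, 2.2, 2.8])       # exam times (years from baseline)
vfi = np.array([92.0, 91.0, 90.5, 89.0, 88.5, 87.0])   # hypothetical VFI values (%)

slope, intercept = np.polyfit(years, vfi, 1)            # least-squares trend line
projection_horizon = years[-1] + 4.0                    # ~4 years ahead (within the 3-5 year range)
projected_vfi = slope * projection_horizon + intercept

print(f"rate of progression = {slope:.2f} %/year")
print(f"projected VFI at {projection_horizon:.1f} years = {projected_vfi:.1f} %")
```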
The provided FDA 510(k) summary for the Guided Progression Analysis (GPA) for the Humphrey® Field Analyzer II and II - i series does not detail specific acceptance criteria in a quantitative table format or a standalone study with a predefined set of performance metrics that the device had to meet. Instead, the submission relies on:
- Substantial Equivalence: Demonstrating that the GPA software is functionally equivalent to predicate devices and does not raise new questions regarding safety and effectiveness.
- Clinical Literature Review: Citing published research that discusses GPA's development and its successful use in identifying statistically significant visual field progression, particularly referencing its incorporation of metrics from the Early Manifest Glaucoma Trial (EMGT).
- Sponsored Study on Test-Retest Variability: A study to quantify perimetric test-retest variability in glaucoma subjects, which was used to establish limits for change at different significance levels based on test-retest variability in glaucomatous visual fields. This allows GPA to indicate when change exceeds normal test-retest variability.
Therefore, a table of explicit acceptance criteria and corresponding reported device performance, in the traditional sense of a validation study with pre-defined thresholds, cannot be directly extracted or constructed from the provided text. The "performance" is described in terms of its ability to identify statistically significant change based on established variability data, rather than specific sensitivity/specificity figures against an external gold standard.
However, I can provide a summary of the information available in the document regarding the study that underpins the device's claims.
Acceptance Criteria and Reported Device Performance
As stated, explicit acceptance criteria (e.g., minimum sensitivity, specificity, or accuracy targets) are not provided in this 510(k) summary. The "performance" is intrinsically linked to its ability to identify changes beyond normal test-retest variability.
Criterion Type | Acceptance Criteria (Not explicitly stated in the document) | Reported Device Performance (as described in the document) |
---|---|---|
Efficacy in detecting progression | Implicit: Ability to identify statistically significant visual field progression | GPA incorporates visual field progression metrics successfully used in EMGT. It determines if statistically significant change has occurred by comparing follow-up tests to a baseline and highlighting changes that represent larger than expected clinical variability. Provides plain-language messages like "Possible Progression" or "Likely Progression". |
Statistical Robustness | Implicit: Ability to distinguish true change from test-retest variability | Results from a sponsored study established limits for change at different significance levels based on test-retest variability in glaucomatous visual fields. GPA indicates when change at a given test location exceeds this test-retest variability. |
Aid in management | Implicit: Provides actionable information for clinicians | Presents Visual Field Index (VFI), VFI Rate of Progression plot, and trend analysis with 3-5 year projection to aid in estimating future visual status. |
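The summary states that GPA flags change exceeding test-retest variability limits and issues plain-language messages, but it does not disclose the exact decision rules. The sketch below only illustrates the general flavor of point-wise thresholding against variability limits; the limits, point counts, and consecutive-exam criteria are assumptions, not the GPA algorithm.

```python
# Illustrative sketch only: flagging per-point change against empirically
# derived test-retest variability limits. Thresholds, counts, and names are
# assumptions for demonstration, not Carl Zeiss Meditec's rules.
import numpy as np

def progression_message(baseline: np.ndarray, followups: list[np.ndarray],
                        limit_db: np.ndarray, min_points: int = 3) -> str:
    """baseline, each follow-up, and limit_db are per-location arrays in dB.
    A location is 'flagged' in an exam if it has worsened by more than its limit."""
    flagged = [(baseline - f) > limit_db for f in followups]   # True = worsened beyond limit
    if len(flagged) >= 3 and np.sum(flagged[-1] & flagged[-2] & flagged[-3]) >= min_points:
        return "Likely Progression"
    if len(flagged) >= 2 and np.sum(flagged[-1] & flagged[-2]) >= min_points:
        return "Possible Progression"
    return "No significant change detected"

rng = np.random.default_rng(2)
base = rng.normal(28, 2, 52)                    # hypothetical baseline thresholds
limits = np.full(52, 4.0)                       # hypothetical per-point variability limits
exams = [base - rng.normal(d, 1.5, 52) for d in (0.5, 3.0, 4.0, 5.0)]
print(progression_message(base, exams, limits))
```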
2. Sample size used for the test set and the data provenance:
- Sample Size: 363 qualified glaucoma subjects.
- Data Provenance: Data was collected across a worldwide nine-site study. It is not specified if it was retrospective or prospective, but the description ("Each subject was tested four times within one month") suggests a prospective data collection for the purpose of establishing test-retest variability.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- The document does not mention the use of experts to establish a "ground truth" for the test set in the traditional sense of defining disease progression. The study focused on quantifying perimetric test-retest variability in glaucoma subjects. The "ground truth" or reference for this study was the inherent variability of visual field measurements themselves, not an expert-determined clinical diagnosis of progression.
4. Adjudication method for the test set:
- Adjudication methods (like 2+1 or 3+1) are typically used when experts are determining a ground truth for a diagnostic outcome. Since the study focused on quantifying test-retest variability rather than an expert-adjudicated ground truth for progression, no adjudication method is described or implied in the provided text.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- No MRMC comparative effectiveness study is described in this 510(k) summary. The device, Guided Progression Analysis (GPA), is a software analysis module designed to assist practitioners, but its performance is described in terms of its algorithmic output based on statistical analysis of visual field data, not human reader performance with or without AI (in this case, "AI" refers to the GPA algorithm). The summary indicates that "the results allow HFA GPA to indicate when the change... exceeds the test-retest variability," which implies the software's direct output.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- Yes, the core study mentioned (quantifying perimetric test-retest variability) and the subsequent development of GPA to use this data to identify statistically significant changes represent a standalone algorithmic function. The GPA software, without human intervention in its analysis, compares visual field test results to a baseline and determines statistical significance of changes, providing messages like "Possible Progression" or "Likely Progression." This is an algorithm-only function based on statistical rules derived from the variability study.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- The "ground truth" used for the development and validation of the statistical thresholds within GPA was data on perimetric test-retest variability. This means the system's ability to declare "progression" is benchmarked against the statistically expected noise and fluctuations in visual field measurements in glaucoma patients, as determined by the sponsored study. It is not an expert consensus on true progression, pathology, or long-term outcomes data, although the underlying clinical effectiveness of detecting visual field progression is supported by reference to studies like EMGT.
8. The sample size for the training set:
- The document references the Early Manifest Glaucoma Trial (EMGT) literature (citations 3 and 4) as providing the "visual field progression metrics successfully used" in GPA. The EMGT study design document (cited as "Ophthalmology 1999; 106:2144-2153") would contain details about its sample size, which served as a foundational dataset for the conceptual framework of progression analysis incorporated into GPA. However, a specific "training set" sample size for the development of this particular software version (GPA for HFA II) is not explicitly stated in the document beyond the reference to EMGT and the "363 qualified glaucoma subjects" used for the test-retest variability study. The 363 subjects constituted a dataset for establishing variability limits, which are essentially statistical parameters used by the algorithm. It is unclear if these 363 subjects' data was used for training a machine learning model vs. establishing statistical thresholds.
9. How the ground truth for the training set was established:
- Given the reliance on EMGT, the "ground truth" for the principles of progression analysis would have been established within the EMGT, likely through clinical outcomes and expert evaluation of visual field series over time in relation to glaucoma diagnosis and treatment. For the test-retest variability study (the 363 subjects), the "ground truth" was derived from repeated measurements from the same subjects to quantify the inherent variability, rather than a clinical ground truth of progression. The document does not describe a machine learning-style training set with an explicitly adjudicated ground truth for progression; instead, it highlights the incorporation of established clinical knowledge and statistical principles from previous large-scale studies.
(204 days)
HPT
The Foresee Home is intended for use in the detection and characterization of central and paracentral metamorphopsia (visual distortion) in patients with age-related macular degeneration, as an aid in monitoring progression of disease factors causing metamorphopsia including but not limited to choroidal neovascularization (CNV). It is intended to be used at home for patients with stable fixation.
The Foresee Home system is an interactive software driven device that provides a series of linear images to the macular and peri-macular region of the eye. The changes in macular and near macular function can be quantified by the device thus enabling the reader to detect early changes in macular degeneration and associated diseases to allow earlier intervention.
The Foresee Home applies the concept of the static and automated perimeter in the detection of visual field defects. The technology, based on hyperacuity, is used for highly specific quantification of central and paracentral visual fields defects. Hyperacuity (also termed "Vernier acuity") is defined as the ability to perceive a difference in the relative spatial localization of two or more visual stimuli. Hyperacuity threshold may be as low as 3-6 sec of arc and the hyperacuity stimuli are highly resistant to retinal image degradation and thus suitable for assessing retinal function in patients with opaque media as well. Retinal pigment epithelium (RPE) elevation, such as that which occurs in AMD, causes a shift in the regular position of photoreceptors. It is hypothesized that such a shift causes an object to be perceived at a different location from its true location in space.
The analysis engine of the device tries to define areas in the visual field that are suspected of being related to CNV. Such areas are called CNV-related zones. Although these zones are called "CNV-related," they simply indicate areas of greater metamorphopsia and can often occur in non-CNV lesions.
Note that the response on this indicator is only indicative of the presence or absence of significant metamorphopsia that may exist in conditions NOT associated with CNV (such as geographic atrophy or drusen).
The Foresee Home is intended to be used in a home environment following training given by a qualified healthcare professional. The user interface and interaction with the device are similar to the in-office Preview PHP. The results of each testing session, as test reports similar to those generated by the Preview PHP system, will be transmitted electronically directly to the healthcare professional. Test reports will not be displayed on the monitor in the patient's home, but rather will be used by the healthcare professional in the same fashion as they are currently employed with the in-office Preview PHP. Thus, the only difference between the Preview PHP system and the Foresee Home is that the Foresee Home unit is placed in the patient's home environment to facilitate testing, and the test report is then transmitted to the healthcare professional.
It should be noted that the Foresee Home is not intended to provide automated interpretation, evaluation, treatment decisions, or to be used as a substitute for professional healthcare judgment.
Here's a breakdown of the acceptance criteria and study information for the Foresee Home™ device, based on the provided text:
Acceptance Criteria and Device Performance
Acceptance Criteria | Reported Device Performance |
---|---|
Minimal percent positive agreement > 80% with gold standard | 81.5% |
Minimal percent negative agreement > 80% with gold standard | 87.7% |
Capability of users to perform the test by themselves after clinic training | 98.5% |
Correlation between unsupervised home simulated environment tests and supervised clinic tests | 93.85% |
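For context on how percent positive and negative agreement with a gold standard are computed, a minimal sketch follows. The 2x2 counts are hypothetical (the submission does not report them) and were chosen only so the formulas roughly reproduce the reported percentages.

```python
# Minimal sketch of percent positive/negative agreement from a 2x2 comparison
# against a gold standard. The counts below are hypothetical, not study data.
def agreement(tp: int, fp: int, fn: int, tn: int) -> tuple[float, float]:
    ppa = tp / (tp + fn)   # positive agreement: device positive among gold-standard positives
    npa = tn / (tn + fp)   # negative agreement: device negative among gold-standard negatives
    return ppa, npa

ppa, npa = agreement(tp=53, fp=9, fn=12, tn=64)    # hypothetical counts
print(f"percent positive agreement = {ppa:.1%}, percent negative agreement = {npa:.1%}")
```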
Study Details
2. Sample Sizes and Data Provenance
The provided document refers to a clinical study submitted in K050350 for the Preview PHP (the predicate device) to validate the positive and negative agreement. However, the specific sample size for this test set is not explicitly stated in the provided text.
Regarding data provenance, the document does not specify the country of origin, nor does it state whether the Preview PHP study was retrospective or prospective. The usability study for Foresee Home™ and the comparison between unsupervised and supervised tests were likely prospective, given their nature of evaluating user capability and test correlation, but this isn't explicitly stated.
3. Number of Experts and Qualifications for Ground Truth (Test Set)
The document states that the Preview PHP results were compared to "gold standards, i.e., color fundus photographs and fluorescein angiographics." It does not specify the number of experts used to establish the ground truth from these images, nor their specific qualifications (e.g., "radiologist with 10 years of experience").
4. Adjudication Method (Test Set)
The adjudication method used to establish the ground truth for the test set is not described in the provided text.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
There is no mention of a Multi-Reader Multi-Case (MRMC) comparative effectiveness study being done. The document does not discuss the effect size of how much human readers improve with AI vs. without AI assistance, as the device is primarily for home monitoring and subsequent review by a healthcare professional, not for direct AI-assisted reading in real-time by a human.
6. Standalone Performance Study
A standalone performance study (algorithm only without human-in-the-loop performance) was effectively done as part of the predicate device's (Preview PHP) validation. The document states: "The clinical study submitted in K050350 for the Preview PHP was designed to validate that the minimal percent positive agreement and minimal percent negative agreement are greater than 80%." This implies an evaluation of the device's ability to detect and characterize metamorphopsia (as reflected in positive and negative agreement) independent of human interpretation during the actual test, with human interpretation occurring after the test data is generated.
7. Type of Ground Truth Used (Test Set)
The type of ground truth used for the clinical study of the predicate device (Preview PHP) was expert consensus based on clinical imaging: "gold standards, i.e., color fundus photographs and fluorescein angiographics."
8. Sample Size for the Training Set
The document does not provide any information regarding the sample size for a training set. The descriptions focus on the validation of the device's performance against clinical gold standards, without detailing any machine learning training processes or associated datasets.
9. How Ground Truth for the Training Set Was Established
As no training set is discussed or implied within the context of machine learning, there is no information on how ground truth for a training set was established. The device is described as applying "the concept of the static and automated perimeter in the detection of visual field defects" and using "hyperacuity" for quantification, suggesting a more rule-based or signal processing approach rather than a deep learning model requiring a large labeled training set.
(273 days)
HPT
The TrueField Analyzer is an automated perimeter used to aid in measurement of visual field abnormalities.
For the assessment of visual field abnormalities.
The TrueField Analyzer is an automated perimeter that is used to aid in measurement of visual field abnormalities. It is an objective device that monitors involuntary responses in the patient's pupils to a series of multi-focal visual stimuli presented to the eyes. The system presents stimuli and monitors the pupil responses in both eyes independently and concurrently.
The device includes:
- A bilateral image display system for providing an individual visual stimulus to each of the patient's eyes (both eyes are concurrently and independently stimulated)
- A pair of video cameras for monitoring the patient's pupils, again concurrently and independently
- A personal computer equipped to run the Windows XP Professional Service Pack 2 operating system
- The TrueField Software system, which automatically manages the stimulus presentation and video data acquisition (ensuring synchronization between the display and video image acquisition), as well as data analysis, storage, and presentation of results for review
The TrueField Analyzer uses a different fundamental technology to the predicate device. It combines standard multi-focal stimulus and analysis technology (as used in other perimetry devices, for example K003442, K983983) with computerized pupil monitoring (for example K920937) allowing the device to objectively measure the visual field map of a patient. In doing so it is substantially equivalent to the predicate device (K954167).
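The combination of multifocal stimulation with pupil monitoring described above lends itself to a regression-style analysis, in which each field region's contribution to the recorded pupil signal is estimated from its stimulus sequence. The sketch below shows that general idea only; it is not the TrueField algorithm, and the region count, sampling, and noise model are assumptions.

```python
# Hedged sketch of regression-based multifocal analysis: independent stimulus
# sequences for each field region are regressed against the recorded pupil
# response, and each region's coefficient serves as a response estimate.
# Everything here is illustrative, not the TrueField implementation.
import numpy as np

rng = np.random.default_rng(3)
n_samples, n_regions = 2000, 24
stimuli = rng.integers(0, 2, size=(n_samples, n_regions)).astype(float)  # on/off per region
true_gain = rng.uniform(0.2, 1.0, n_regions)        # per-region responsiveness (unknown in practice)
pupil = stimuli @ true_gain + rng.normal(0, 0.5, n_samples)  # simulated pupil signal

# Least-squares estimate of each region's contribution to the pupil response.
est_gain, *_ = np.linalg.lstsq(stimuli, pupil, rcond=None)
print("estimated regional responses:", np.round(est_gain, 2))
```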
Based on the provided 510(k) summary for the TrueField Analyzer, here's a detailed breakdown regarding acceptance criteria and the study (or lack thereof) that supports its performance:
1. Table of Acceptance Criteria and Reported Device Performance
The provided text does not explicitly state specific quantitative acceptance criteria (e.g., sensitivity, specificity, accuracy targets) for the TrueField Analyzer's performance in diagnosing visual field abnormalities. Instead, the performance data section focuses on demonstrating the device's conformance to product specifications (electrical safety, EMC, IR radiation safety) and a comparative table of technical specifications with the predicate device (Humphrey Field Analyser - HFA-II).
The core performance claim relies on "substantial equivalence" to the predicate device, implying that its performance is implicitly accepted as equivalent to a device already deemed safe and effective.
Here's a table summarizing the technical specifications that are presented as "performance data" in comparison to the predicate, which indirectly serve as a basis for proving its functionality and equivalence:
Feature/Criteria | TrueField Analyzer Performance | HFA-II (Predicate) Performance (for comparison) |
---|---|---|
General | | |
Intended Clinical Purpose | Visual field examination / to measure visual field defects | Visual field examination / to measure visual field defects |
Product Code | HPT | HPT |
Regulation | 886.1605 | 886.1605 |
Device Class | I | I |
Technical/Operational | | |
Visual System Stimulus | Sparse-stimulus multifocal stimulus | Single spot of variable luminance and size |
Measurement Technology | Video camera based pupil measurement | User feedback (button press) |
Visual Function Assessment | Regression based multifocal analysis | Threshold or suprathreshold sensitivity to spots |
Visual Field Defect Assessment | Population sample normal database comparison | Population sample Normal database comparison |
Stimulus Luminance | 290 cd/m² | 0.025 - 3,183 cd/m² (or 0.08 – 10,000 apostilbs) |
Background Luminance | 10 cd/m² | 10 cd/m² (31.5 apostilbs) |
Number of Stimuli Locations | 24 T30-24, 40 T30-40, 60 T30-60, 24 T10-24, 44 O30-44 | 54 Central 24-2, 76 Central 30-2, 68 Central 10-2, 68 Peripheral 30/60-2 |
Eccentricity Limits of Std Test Area | ± 30 degrees | ± 24 degrees (for central 24-2, common test) |
Stimulus Spot Size | 4, 11 or 14 degrees arc angle (segments) | 0.43 degrees (Goldmann standard size III spot) |
Stimulus Spot Spacing | ~7.5 to 12.5 degrees (cortically scaled) | Uniform 6° grid spacing (standard patterns) |
Proportion of Visual Field Test Area Sampled | 88% | |
(77 days)
HPT
The PreView PHP™ is intended for use in the detection and monitoring of the progression of Age-related Macular Degeneration (AMD) including, but not limited to, the detection of choroidal neovascularization (CNV).
The PreView Preferential Hyperacuity Perimeter (PreView PHP™) is intended for use in the detection and characterization of central and paracentral metamorphopsia (visual distortion) in patients with age-related macular degeneration, as an aid in monitoring progression of disease factors causing metamorphopsia including but not limited to choroidal neovascularization (CNV). It is intended to be used in the office of a licensed eye care practitioner in patients with stable fixation.
The PreView PHP™ system is an interactive, software-driven device that provides a series of horizontal and vertical linear images to the macular region of the eye to detect abnormalities of the central and paracentral visual field, in order to detect and monitor the progression of age-related macular degeneration, including detection of choroidal neovascularization. The changes in macular and near-macular function are identified by the device, enabling the reader to detect intermediate and advancing changes in macular degeneration and associated diseases and providing the capability for earlier intervention.
The PreView PHP™ is a specialized perimeter that applies the concept of the static, automated perimeter to the detection of visual field defects. The device incorporates the theory of hyperacuity to address the central and paracentral visual field with greater specificity. Hyperacuity is defined as the ability to perceive a difference in the relative spatial localization of points in the central field; because hyperacuity permits perception of finer relative spatial localization, specific distortions or misalignments within the central and paracentral field can be mapped with greater accuracy. The device monitors and manages the progressive changes associated with advancing macular degeneration and differentiates the different stages of AMD, including but not limited to choroidal neovascularization.
The PreView PHP™ system is designed for use with standard off-the-shelf PC units in the office of the practitioner. It aims to detect advancing changes of AMD-related lesions including but not limited to choroidal neovascularization.
Here's a breakdown of the acceptance criteria and study details for the PreView PHP™ device, based on the provided text:
Acceptance Criteria and Device Performance
Acceptance Criteria | Reported Device Performance |
---|---|
Sensitivity to detect recent onset of CNV ≥ 80% | 82% (95% CI: 72% to 92%) |
Specificity to differentiate recent onset of CNV from intermediate stage AMD ≥ 80% | 88% (95% CI: 81% to 96%) |
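For context on the sensitivity and specificity figures above, here is a minimal sketch of computing each proportion with a normal-approximation (Wald) 95% confidence interval. The counts are hypothetical (the submission does not report them, and the sponsor's CI method is not stated); they were chosen only to roughly reproduce the reported values.

```python
# Hedged sketch: sensitivity/specificity with a Wald 95% CI. The counts are
# hypothetical and the CI method is an assumption, not the sponsor's analysis.
import math

def prop_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float, float]:
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(p - half, 0.0), min(p + half, 1.0)

sens, sl, sh = prop_ci(50, 61)    # hypothetical: CNV eyes correctly detected
spec, pl, ph = prop_ci(58, 66)    # hypothetical: intermediate-AMD eyes correctly classified
print(f"sensitivity {sens:.0%} (95% CI {sl:.0%}-{sh:.0%})")
print(f"specificity {spec:.0%} (95% CI {pl:.0%}-{ph:.0%})")
```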
Study Details:
2. Sample size used for the test set and data provenance:
- The document does not explicitly state the sample size of the test set patients. However, it does mention that the study was a "prospective, comparative, concurrent, non-randomized multicenter study."
- Data Provenance: The document states the manufacturer is Notal Vision, Inc. from Tel Aviv, Israel, and the clinical investigation was also conducted in Israel. Based on the mention of "multicenter study," it suggests data was collected from multiple sites.
3. Number of experts used to establish the ground truth for the test set and qualifications of those experts:
- The document does not provide information on the specific number of experts or their qualifications that established the ground truth for the test set.
4. Adjudication method for the test set:
- The document does not specify an adjudication method.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- The provided text does not describe an MRMC comparative effectiveness study where human readers' performance with and without AI assistance was evaluated. The study focused on the standalone performance of the PreView PHP™ device.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- Yes, a standalone study was done. The clinical investigation assessed the ability of the Preferential Hyperacuity Perimeter (PreView PHP™) itself to detect recent onset of choroidal neovascularization (CNV) and differentiate it from an intermediate stage of AMD. The reported sensitivity and specificity are for the device's performance directly.
7. The type of ground truth used:
- The ground truth for the test set was based on untreated CNV from AMD diagnosed within the last 60 days or an intermediate stage of AMD (at least 1 large druse or at least 20 medium-size drusen) with no evidence of geographic atrophy or other macular diseases. This implies a clinical diagnosis by ophthalmologists, likely supported by other diagnostic methods (which are not explicitly detailed in the provided text for ground truth establishment, though the referenced predicate devices include technologies like the Heidelberg Retina Angiograph FA/ICGA).
8. The sample size for the training set:
- The document does not specify a separate training set or its sample size. The clinical investigation appears to be a validation study.
9. How the ground truth for the training set was established:
- As no separate training set is explicitly mentioned, how its ground truth was established is not provided. The device likely predates or was developed concurrently with modern machine learning paradigms requiring distinct training and test sets in the same way.
(175 days)
HPT
NovaVision™ is intended for the diagnosis and improvement of visual functions in patients with impaired vision that may result from trauma, stroke, inflammation, surgical removal of brain tumors or brain surgery, and may also be used to improve visual function in patients with amblyopia.
NovaVision™ consists of two computer software programs: (1) one intended for health care professionals - for the precise diagnosis of patients' visual deficiencies, the development of patient-specific therapy programs, and the analysis of results of patient therapy (NovaVision™ Diagnosis Software and Training Program); and (2) one intended for patients - therapeutic software for use by patients in their homes to train and improve impaired visual functions (NovaVision™-Therapy).
The NovaVision Model 2.0 (K023623) is a diagnostic and therapeutic device intended for the diagnosis and improvement of visual functions. This response summarizes the available information regarding its acceptance criteria and supporting studies.
1. Table of Acceptance Criteria and Reported Device Performance
The provided 510(k) summary does not explicitly state quantitative acceptance criteria in terms of specific performance metrics (e.g., sensitivity, specificity, accuracy, or a specific threshold for improvement). Instead, it relies on demonstrating substantial equivalence to predicate devices (DynaVision 2000 and AA-1 System for the Treatment of Amblyopia) based on similar intended use, target population, functionality, and demonstrated safety and effectiveness in clinical studies.
Acceptance Criteria (Inferred from Substantial Equivalence Basis) | Reported Device Performance |
---|---|
Diagnosis of patients' visual deficiencies: Equivalent ability to collect and interpret diagnostic information concerning visual deficits. | "Two clinical studies have confirmed the effectiveness and reliability of NovaVision™ Diagnosis Software and Training Program in diagnosing patients' visual deficiencies..." |
Improvement of visual functions: Equivalent ability to improve impaired visual functions. | "...and five clinical studies have confirmed the safety and effectiveness of patient use of NovaVision™-Therapy to improve visual functions." |
Safety and Effectiveness: No new questions of safety and effectiveness compared to predicate devices. | "NovaVision™ has been demonstrated to be as safe and effective as the Dynavision 2000." |
Amblyopia treatment: Correlates with improvement of vision in patients with amblyopia. | "Moreover, reference to a third-party clinical study strongly correlates with the capabilities of NovaVision™-Therapy to improve the vision of patients with amblyopia." |
2. Sample Sizes and Data Provenance
The provided document does not specify the exact sample sizes used for the test sets in the various clinical studies. It also does not explicitly state the country of origin or whether the studies were retrospective or prospective.
3. Number and Qualifications of Experts for Ground Truth
The document does not provide details on the number or qualifications of experts used to establish ground truth for the test sets.
4. Adjudication Method
The document does not describe any adjudication method (e.g., 2+1, 3+1, none) used for establishing ground truth in the test sets.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
The provided information does not mention a multi-reader multi-case (MRMC) comparative effectiveness study comparing human readers with and without AI assistance, nor does it specify any effect size for such a study.
6. Standalone (Algorithm Only) Performance Study
The document indicates that NovaVision™ consists of "two computer software programs," implying a software-only component for diagnosis and therapy. The statement "Two clinical studies have confirmed the effectiveness and reliability of NovaVision™ Diagnosis Software and Training Program in diagnosing patients' visual deficiencies" and "five clinical studies have confirmed the safety and effectiveness of patient use of NovaVision™-Therapy to improve visual functions" suggests that studies were conducted to assess the performance of the software. However, it does not explicitly differentiate between standalone (algorithm only) performance studies and human-in-the-loop studies. Given the nature of the device (presenting visual stimuli and interpreting patient responses), it is likely that the "device performance" inherently involves the algorithm's capability.
7. Type of Ground Truth Used
The document does not explicitly state the type of ground truth used (e.g., expert consensus, pathology, outcomes data). However, for a device intended for "diagnosis and improvement of visual functions," ground truth would likely involve:
- For diagnosis: Clinical assessments of visual deficiencies by healthcare professionals (e.g., ophthalmologists or neurologists), potentially using established diagnostic criteria or other validated tests.
- For improvement: Objective measures of visual function (e.g., visual field tests, visual acuity) before and after therapy, or patient-reported outcomes, as assessed by healthcare professionals.
8. Sample Size for the Training Set
The document does not provide information regarding the sample size used for the training set. The 510(k) summary focuses on clinical performance data rather than detailing the machine learning model development process.
9. How the Ground Truth for the Training Set Was Established
The document does not provide information on how the ground truth for the training set was established.
(61 days)
HPT
The Kasha Visual Field System should be used to test the visual field of the human eye.
Not Found
I am sorry, but the provided text does not contain the detailed information necessary to answer your request about acceptance criteria and the study that proves the device meets them. The document is a 510(k) clearance letter for the "Kasha Visual Field System" from the FDA, dated May 28, 1997.
This letter primarily states that the device has been found substantially equivalent to a predicate device and can, therefore, be marketed. It does not include:
- A table of acceptance criteria or reported device performance.
- Details about sample sizes, data provenance, or study design.
- Information on expert ground truth establishment, adjudication methods, or MRMC studies.
- Details about standalone algorithm performance.
- The type of ground truth used.
- Sample size or ground truth establishment for the training set.
The document focuses on the regulatory clearance based on substantial equivalence, rather than providing the performance study details for the device.
(90 days)
HPT