Search Results
Found 16 results
510(k) Data Aggregation
(155 days)
The CIRRUS™ HD-OCT is a non-contact, high resolution tomographic and biomicroscopic imaging device. It is indicated for in-vivo viewing, axial cross-sectional, and three-dimensional imaging and measurement of anterior and posterior ocular structures, including cornea, retinal nerve fiber layer, ganglion cell plus inner plexiform layer, macula, and optic nerve head.
The CIRRUS™ HD-OCT Reference Database is a quantitative tool used for the comparison of retinal nerve fiber layer thickness, macular thickness, ganglion cell plus inner plexiform layer thickness, and optic nerve head measurements to a database of healthy subjects.
CIRRUS™ HD-OCT AngioPlex angiography is indicated as an aid in the visualization of vascular structures of the retina and choroid.
The CIRRUS™ HD-OCT is indicated for use as a diagnostic device to aid in the detection and management of ocular diseases including, but not limited to, macular holes, cystoid macular edema, diabetic retinopathy, age-related macular degeneration, and glaucoma.
The subject device is a computerized instrument that acquires and analyzes cross-sectional tomograms of anterior and posterior ocular structures (including cornea, retinal nerve fiber layer, macula, and optic disc). It employs non-invasive, non-contact, low-coherence interferometry to obtain these high-resolution images. CIRRUS 6000 has a 100 kHz scan rate for all structural and angiography scans.
The subject device uses the same optical system and principle of operation as the previously cleared CIRRUS 6000 (K222200), except for the reference database functionality.
The subject device contains a newly acquired reference database, which was collected on the previously cleared CIRRUS 6000 (K222200). The study data compare macular thickness, ganglion cell thickness, optic disc, and RNFL measurements to a reference range of healthy eyes, stratified by patient age and/or optic disc size. Reference database outputs are available on the Macular Cube 200x200 and Optic Disc Cube 200x200 scan patterns. All other technical specifications have remained the same as the predicate K222200.
Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:
Acceptance Criteria and Reported Device Performance
The acceptance criteria are implicitly met by the successful development of the CIRRUS™ HD-OCT Reference Database (RDB) and its ability to provide normative data for comparison. The study aims to establish these reference limits.
| Acceptance Criteria Category | Specific Criteria (Inferred from study purpose) | Reported Device Performance (Summary of RDB Establishment) |
|---|---|---|
| Reference Database Functionality | Device can generate a normative reference database for key ocular parameters (Macular Thickness, Ganglion Cell Thickness, ONH parameters, RNFL thickness). | CIRRUS™ 6000 RDB for macular thickness and optic nerve head scan values was developed. Reference limits were established for Macular Thickness, Ganglion Cell Thickness, Optic Nerve Head parameters, and Retinal Nerve Fiber Layer thickness values. |
| Statistical Validity of RDB | Reference limits are calculated using appropriate statistical methods (regression analysis) and incorporate relevant covariates (age, optic disc size). | Reference range limits were calculated by regression analysis for the 1st, 95th, and 99th percentiles. Age was used as a covariate for Macular Thickness and Ganglion Cell Thickness. Age and Optic Disc Size were used as covariates for ONH parameters and RNFL thickness. |
| Clinical Applicability of RDB | The RDB allows for effective comparison of a patient's measurements to that of healthy subjects, aiding in the assessment and management of ocular diseases. | The RDB was created to help clinicians assess and effectively compare a patient's measurements to that of healthy subjects, representative of the general population. The device provides color-coded indicators based on RDB limits. |
| Image Quality / Scan Acceptability | Only high-quality scans are included in the reference database. | Only the scans that met the pre-determined image quality criteria were included in analysis. |
| Safety | No adverse events or device effects during RDB development. | There were no adverse events or adverse device effects recorded during the study. |
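As a rough illustration of how percentile reference limits with an age covariate can be derived, the sketch below fits a linear age trend and places percentile bands from the residual distribution. This is a simplified stand-in, not the study's actual regression methodology, and the percentile cut-points shown here are illustrative:

```python
import numpy as np

def reference_limits(age, thickness, query_age, percentiles=(1, 5, 95, 99)):
    """Illustrative normative-limit calculation: fit a linear trend of
    thickness vs. age, then place percentile bands using the residual
    distribution. The actual RDB used regression analysis with age
    and/or optic disc size as covariates; details are not public."""
    slope, intercept = np.polyfit(age, thickness, 1)
    residuals = thickness - (slope * age + intercept)
    center = slope * query_age + intercept
    return {p: center + np.percentile(residuals, p) for p in percentiles}

# Simulated "healthy eye" data: thickness declines slightly with age.
rng = np.random.default_rng(0)
ages = rng.uniform(18, 85, 870)          # 870 qualified subjects, as in the study
thick = 280 - 0.3 * ages + rng.normal(0, 10, 870)

limits = reference_limits(ages, thick, query_age=60)
# A patient's measurement falling below the 1st-percentile limit for
# their age would be flagged (e.g., color-coded) against the band.
```

In practice, the device's color-coded indicators correspond to where a patient's measurement falls relative to such age-conditional bands.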
Study Details
- Sample Size and Data Provenance:
- Test Set (for RDB establishment): 870 subjects had one eye included in the analysis from an initial enrollment of 1000 subjects.
- Data Provenance: Prospective, multi-site study conducted at eight (8) clinical sites across the USA.
- Number of Experts and Qualifications for Ground Truth:
- The document does not specify the number or qualifications of experts used to establish the ground truth for the test set regarding the "healthiness" of the subjects. The eligibility and exclusion criteria (e.g., "presence of any clinically significant vitreal, retinal, optic nerve, or choroidal disease in the study eye, including glaucoma or suspected glaucoma. This was assessed based on clinical examination and fundus photography.") imply that ophthalmologists or optometrists would have made these clinical judgments, but the specific number or their experience level is not detailed.
- Adjudication Method for the Test Set:
- The document does not explicitly describe an adjudication method for determining the "healthy" status of the subjects. It states that inclusion/exclusion was "assessed based on clinical examination and fundus photography" by unnamed personnel at the clinical sites. There is no mention of a consensus process, independent review, or other adjudication for the ground truth.
- Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:
- No MRMC comparative effectiveness study was done to assess how human readers improve with AI vs. without AI assistance. The study focuses solely on establishing the normative reference database for the device's measurements. The RDB itself is a tool to be used by clinicians, but its impact on clinical decision-making or reader performance was not evaluated in this submission.
- Standalone Performance:
- This is a standalone performance study in the sense that the device, equipped with the new reference database, generates the normative values and compares patient data to them. It's the performance of the device's RDB calculation and display, not an AI algorithm performing diagnostic tasks without human input.
- Type of Ground Truth Used:
- Clinical Ground Truth: The ground truth for defining "healthy subjects" was based on extensive clinical examination, fundus photography, and adherence to strict inclusion/exclusion criteria (e.g., no known ocular disease, specific visual acuity, IOP, refraction limits). This represents a clinically defined healthy population.
- Sample Size for the Training Set:
- The term "training set" is not explicitly used in the context of a machine learning model, as the primary objective was to establish a statistical reference database. The entire dataset of 870 subjects (with qualified scans) was used to develop the reference database. So, the sample size for developing the reference database was 870 subjects.
- How Ground Truth for the Training Set Was Established:
- The "ground truth" for the subjects included in the reference database was established by defining them as "healthy subjects" through rigorous inclusion and exclusion criteria applied at 8 clinical sites across the USA. These criteria included:
- Age 18 years and older
- Best corrected visual acuity (BCVA) of 20/40 or better in either eye
- IOP < 21 mmHg in either eye
- Manifest refraction spherical equivalent (MRSE) within -8 to +3D range and astigmatism correction < -2D
- Exclusion criteria: Presence of any clinically significant vitreal, retinal, optic nerve, or choroidal disease (including glaucoma or suspected glaucoma, assessed by clinical examination and fundus photography), history of ocular surgery (except uncomplicated cataract/refractive surgery), unreliable or abnormal visual fields, dense media opacities, active infection, current ocular medication, hydroxychloroquine/chloroquine use, diabetes, leukemia, multiple sclerosis, or debilitating disease.
- Only scans meeting pre-determined image quality criteria were used from these healthy subjects.
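The numeric inclusion thresholds above are concrete enough to express as a screening checklist. The sketch below is illustrative only — the field names and data layout are hypothetical, and the actual study applied these criteria through clinical examination and fundus photography, not a lookup:

```python
def eligible(subject):
    """Hedged sketch of the published inclusion/exclusion thresholds as a
    checklist. Field names are hypothetical; the thresholds come from the
    study summary: age >= 18, BCVA 20/40 or better, IOP < 21 mmHg,
    MRSE within -8 D to +3 D, astigmatism correction under 2 D, and no
    disqualifying ocular or systemic disease."""
    return (
        subject["age"] >= 18
        and subject["bcva_denominator"] <= 40       # 20/40 or better
        and subject["iop_mmhg"] < 21
        and -8.0 <= subject["mrse_diopters"] <= 3.0
        and abs(subject["cylinder_diopters"]) < 2.0
        and not subject["ocular_disease"]           # incl. glaucoma/suspect
        and not subject["diabetes"]
    )

print(eligible({"age": 54, "bcva_denominator": 25, "iop_mmhg": 16,
                "mrse_diopters": -2.5, "cylinder_diopters": -0.75,
                "ocular_disease": False, "diabetes": False}))  # prints True
```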
(225 days)
VISULAS yag is intended for use in photodisrupting ocular tissue in the treatment of diseases of the eye, including Posterior capsulotomy, Iridotomy and Posterior Membranectomy.
This device is for Prescription Use (Rx) only.
VISULAS yag uses a Q-switched, flashlamp-pumped solid-state laser for photodisruption treatments of diseases of the eye, including posterior capsulotomy, iridotomy and membranectomy. Laser radiation is generated by means of a neodymium-doped yttrium aluminum garnet (Nd:YAG) gain medium inside the laser source. The emitted laser radiation with a near-infrared wavelength of λ = 1064 nm has a pulse duration of < 4 ns (full-width half-maximum; FWHM) and a focal diameter of 6.5 µm ± 20%. The maximum energy output per pulse is 9 to 13 mJ in single-burst mode.
I am sorry, but the provided text is a 510(k) summary for a medical device (VISULAS yag), which focuses on demonstrating substantial equivalence to a predicate device rather than presenting a study design with acceptance criteria and device performance results as requested.
The document does not contain:
- A table of acceptance criteria and reported device performance for a specific study.
- Information on sample size, data provenance, number of experts, adjudication methods, MRMC studies, standalone performance, or ground truth for a clinical study.
- Details regarding training set size or how ground truth was established for a training set.
Instead, the document primarily compares the subject device's indications for use and technical characteristics to two predicate devices (Ellex YAG Laser K212630 and VISULAS YAG III K042139) to establish substantial equivalence. It mentions "Non-Clinical Performance Testing" and "functional and system level testing showed that the system met the defined specifications," and "Software verification and validation testing were conducted... All testing passed," but these are general statements and do not provide the detailed study information you've asked for.
Therefore, I cannot fulfill your request based on the provided text.
(67 days)
· INFRARED 800 with FLOW 800 Option is a surgical microscope accessory intended to be used with a compatible surgical microscope in viewing and visual assessment of intraoperative blood flow in cerebral vascular area including, but not limited to, assessing cerebral aneurysm and vessel branch occlusion, as well as patency of very small perforating vessels. It also aids in the real-time visualization of blood flow and visual assessment of vessel types before and after Arteriovenous Malformation (AVM) surgery. Likewise, INFRARED 800 with FLOW 800 Option used during fluorescence guided surgery aids in the visual assessment of intra-operative blood flow as well as vessel patency in bypass surgical procedures in neurosurgery, plastics and reconstructive procedures and coronary artery bypass graft surgery.
· YELLOW 560 is a surgical microscope accessory intended to be used with a compatible surgical microscope in viewing and visual assessment of intraoperative blood flow in cerebral vascular area including, but not limited to, assessing cerebral aneurysm and vessel branch occlusion, as well as patency of very small perforating vessels. It also aids in the real-time visualization of blood flow and visual assessment of vessel types before and after Arteriovenous Malformation (AVM) surgery.
Fluorescence accessories (YELLOW 560 and INFRARED 800 with FLOW 800 option) are an accessory to a surgical microscope and are intended for viewing and visual assessment of intra-operative blood flow, as well as aiding in the real-time visualization of blood flow and visual assessment of vessel types before and after Arteriovenous Malformation (AVM) surgery. The functionality of these filters is derived from their ability to highlight fluorescence emitted from tissue that has been treated with a fluorescence agent, by applying appropriate wavelengths of light and utilizing selected filters. This helps a surgeon to visualize different structural body elements (such as vessels, tissue, blood flow, occlusions, aneurysms, etc.) during various intraoperative procedures. The fluorescence accessory can be activated by the user via the Graphical User Interface (GUI), foot control panel or the handgrips, for example.
For these accessories to be used with a qualified surgical microscope, the critical components of the surgical microscope need to fulfill the clinically relevant parameters for the Indications for Use of YELLOW 560 and INFRARED 800 with FLOW 800 Option.
The fluorescence accessories are embedded into the surgical microscope. The emission filter wheels are present within the head of the microscope; for filter installation, two emission filters (one for each eyepiece) are placed into these filter wheels. A separate filter wheel, present in front of the light source, holds the excitation filter.
The provided text is a 510(k) summary for the Carl Zeiss Meditec Inc. "Fluorescence Accessories (YELLOW 560 and INFRARED 800 with FLOW 800 Option)". This document focuses on demonstrating substantial equivalence to predicate devices rather than providing detailed acceptance criteria and a study proving the device meets those criteria.
The 510(k) summary primarily addresses:
- Indications for Use: The device is a surgical microscope accessory for viewing and visual assessment of intraoperative blood flow in the cerebral vascular area (e.g., assessing cerebral aneurysm, vessel branch occlusion, patency of small perforating vessels, and vessel types before/after Arteriovenous Malformation (AVM) surgery). It also aids in real-time visualization of blood flow and vessel patency in bypass surgical procedures in neurosurgery, plastics, reconstructive procedures, and coronary artery bypass graft surgery.
- Technological Characteristics: Comparison to predicate devices (YELLOW 560 (K162991) and INFRARED 800 with FLOW 800 Option (K100468)) is presented, showing substantial equivalence in application, patient population, device description, fluorescent agents used, visualization of real-time images, display, physical method, fluorescence excitation/detection, white light application, camera adaption, zoom, autofocus, autogain, control system, storage, and upgrade options. Minor differences are noted and deemed not to affect substantial equivalence.
- Non-Clinical Testing: A list of performance testing parameters for the system is provided, confirming that the "functional and system level testing showed that the system met the defined specifications."
Therefore, based on the provided text, a detailed table of acceptance criteria and a study proving the device meets those criteria (with specific performance metrics) cannot be fully constructed as requested. The document attests that the device met internal specifications through software verification and non-clinical system testing, but does not provide the specific numerical acceptance criteria or the study results themselves.
Here's a breakdown of what can be extracted and what is missing:
1. Table of Acceptance Criteria and Reported Device Performance
Cannot be fully provided as specific numerical acceptance criteria and reported device performance are not detailed in the provided document. The document states that "functional and system level testing showed that the system met the defined specifications" and lists the parameters tested. However, the values for these specifications and the results of the testing are not included.
| Acceptance Criteria (Implied / Stated) | Reported Device Performance (Not detailed in document) |
|---|---|
| Brightness of fluorescence ocular image | Met defined specifications |
| Excitation wavelength | Met defined specifications |
| Excitation filter | Met defined specifications |
| Emission wavelength | Met defined specifications |
| Emission filter | Met defined specifications |
| Color reproduction of fluorescence ocular images | Met defined specifications |
| Spatial resolution of the ocular image | Met defined specifications |
| Color reproduction of fluorescence video images | Met defined specifications |
| Non-mirrored video image | Met defined specifications |
| Non-rotated video image | Met defined specifications |
| Non-deformed video image | Met defined specifications |
| Centered video image | Met defined specifications |
| Photometric resolution of video image | Met defined specifications |
| Signal-to-noise ratio of the video image (sensitivity) | Met defined specifications |
| Latency of the video image (external monitor) | Met defined specifications |
| Spatial resolution of the video image | Met defined specifications |
| Irradiance (minimum irradiance at maximum illumination) | Met defined specifications |
| Color reproduction of non-fluorescence ocular images | Met defined specifications |
| Color reproduction of non-fluorescence video images | Met defined specifications |
| Software performing as intended | Performed as intended |
2. Sample size used for the test set and the data provenance
- Sample Size: Not specified. The document mentions "non-clinical system testing" and "software verification testing" but does not provide details on the number of samples, test cases, or images used.
- Data Provenance: Not specified. This appears to be internal company testing (bench testing) rather than a study involving patient data.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Not applicable/Not specified. This was a non-clinical bench and software performance testing; it does not involve expert ground truth for clinical assessment.
4. Adjudication method for the test set
- Not applicable/Not specified. As noted above, this was non-clinical bench and software performance testing.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
- No MRMC study was mentioned. The device is an accessory to a surgical microscope providing visualization, not an AI diagnostic tool that assists human readers in interpreting images.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
- Not applicable. The device provides "real-time visualization" and "visual assessment," which implies human interpretation of the images/data it presents. It's an accessory, not a standalone automated diagnostic algorithm. The testing described is for the functional and system performance of the accessory.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Not applicable for the non-clinical and software testing described. The "ground truth" for the performance testing would be the predefined specifications that the system components were designed to meet.
8. The sample size for the training set
- Not applicable. The description does not suggest this device uses machine learning or AI that would require a "training set" in the conventional sense for image analysis. It's a fluorescence visualization system.
9. How the ground truth for the training set was established
- Not applicable, as there is no mention of a training set.
(262 days)
CIRRUS™ HD-OCT is a non-contact, high resolution tomographic and biomicroscopic imaging device intended for in vivo viewing, axial cross-sectional, and three-dimensional imaging of ocular structures. The device is indicated for visualizing and measuring anterior and posterior ocular structures, including corneal epithelium, retinal nerve fiber layer, ganglion cell plus inner plexiform layer, macula, and optic nerve head.
CIRRUS™ AngioPlex OCT Angiography is indicated as an aid in the visualization of vascular structures of the retina and choroid.
CIRRUS HD-OCT is indicated as a diagnostic device to aid in the detection and management of ocular diseases including, but not limited to, macular holes, cystoid macular edema, diabetic retinopathy, age-related macular degeneration, and glaucoma.
This device is Prescription Use (Rx) only.
The CIRRUS™ HD-OCT Model 6000 is indicated for in-vivo viewing, axial cross-sectional, and three-dimensional imaging and measurement of anterior and posterior ocular structures. The clinical purpose of this device has not been modified as compared to the predicate.
CIRRUS 6000 uses the same optical system, architecture, and principle of operation as the previously cleared CIRRUS 5000 (K181534). CIRRUS 6000 has a 100 kHz scan rate for all structural and angiography scans. The primary impact of the higher acquisition speed is on the signal-to-noise ratio; the signal-to-noise ratio in the subject device is calibrated to match the specifications of the predicate. CIRRUS 6000 uses the same segmentation algorithms as the predicate device, and therefore the segmentation results will be equivalent.
In addition to the acquisition speed change, CIRRUS 6000 also has a wider field of view (FOV) and has increased the number of fixation points to 21.
Here's an analysis of the acceptance criteria and the studies that prove the device meets them, based on the provided text.
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are implicitly defined by the reported precision summaries (Repeatability %CV, Reproducibility %CV, and limits of agreement) for various measurements on the CIRRUS HD-OCT 6000 (C6000), together with the qualitative image quality results. The studies aim to demonstrate that the C6000 performs comparably to the predicate device, the CIRRUS HD-OCT 5000 (C5000), and that its image quality is clinically acceptable.
Since explicit numerical acceptance criteria (e.g., "Repeatability %CV must be < X%") are not directly stated in the text, I will present the reported performance as the evidence that these implicit criteria have been met. The table below summarizes key metrics where direct quantitative comparison is available.
| Metric (Measurement and Population) | Acceptance Criterion (Implicitly demonstrated by achieving results comparable to predicate and within clinically acceptable ranges) | Reported Device Performance (CIRRUS HD-OCT 6000) |
|---|---|---|
| Ganglion Cell Thickness (Normal Subjects) | | |
| Average Thickness (µm) Repeatability %CV | Should be low, indicating consistent measurements | 0.5% |
| Average Thickness (µm) Reproducibility %CV | Should be low, indicating consistent measurements | 0.6% |
| Macular Thickness (Normal Subjects) | | |
| Central Subfield (µm) Repeatability %CV | Should be low, indicating consistent measurements | 0.4% |
| Central Subfield (µm) Reproducibility %CV | Should be low, indicating consistent measurements | 0.7% |
| ONH (Rim Area, Normal Subjects) | | |
| Rim Area (mm2) Repeatability %CV | Should be low, indicating consistent measurements | 4.9% |
| Rim Area (mm2) Reproducibility %CV | Should be low, indicating consistent measurements | 6.6% |
| RNFL Thickness (Average, Normal Subjects) | | |
| Average RNFL Thickness (µm) Repeatability %CV | Should be low, indicating consistent measurements | 1.2% |
| Average RNFL Thickness (µm) Reproducibility %CV | Should be low, indicating consistent measurements | 1.8% |
| Epithelial Thickness (Central, Normal Subjects) | | |
| Central (µm) Repeatability %CV | Should be low, indicating consistent measurements | 1.8% |
| Central (µm) Reproducibility %CV | Should be low, indicating consistent measurements | 3.8% |
| Pachymetry Thickness (Central, Normal Subjects) | | |
| Central (µm) Repeatability %CV | Should be low, indicating consistent measurements | 0.2% |
| Central (µm) Reproducibility %CV | Should be low, indicating consistent measurements | 0.4% |
| Angiography Image Quality (Overall) | Proportion of clinically acceptable overall images should be high | 0.98 or above (C6000) vs. 0.94 or above (C5000) |
| Raster Image Quality (Overall) | Proportion of clinically acceptable overall images should be high | 1.00 (C6000) and 1.00 (C5000) |
Studies Proving Acceptance Criteria were Met:
The document describes two main clinical performance studies:
- CIRRUS 6000 Repeatability and Reproducibility (R&R) Study: This study quantified the precision of measurements from the C6000 for various ocular structures (Ganglion Cell Thickness, Macular Thickness, ONH, RNFL Thickness, Epithelial Thickness, Pachymetry Thickness) across different patient populations (Normal, Glaucoma, Retinal pathology, Cornea pathology). The results are presented in Tables 3 through 12, showing generally low %CV values for both repeatability and reproducibility, indicating good precision. The agreement between C5000 and C6000 was evaluated using Bland-Altman Limits of Agreement and Deming Regression analysis, suggesting that the measurements are comparable to the predicate device.
- CIRRUS 6000 Angiography Image Quality Study: This study assessed the image quality of OCTA scans from the C6000. It reported a high proportion of clinically acceptable overall images (0.98 or above for C6000), which was comparable to or better than the predicate C5000 (0.94 or above).
- CIRRUS 6000 Raster Image Quality Study: This study evaluated the image quality of raster B-scans from the C6000, reporting that the proportion of clinically acceptable overall images was 1.00 for all scan types, identical to the predicate C5000.
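To make the precision metrics concrete, here is a minimal sketch of how a repeatability %CV and Bland-Altman limits of agreement can be computed from repeated scans. The data layout and variance model are simplified assumptions for illustration, not the study's actual statistical analysis plan:

```python
import numpy as np

def repeatability_cv(measurements):
    """Repeatability %CV from an array of shape (subjects, repeated scans):
    pooled within-subject SD divided by the grand mean. A simplified
    stand-in for the study's variance-components model."""
    within_var = measurements.var(axis=1, ddof=1).mean()
    return 100.0 * np.sqrt(within_var) / measurements.mean()

def bland_altman_limits(a, b):
    """95% limits of agreement between paired device measurements
    (e.g., C5000 vs. C6000 on the same eyes)."""
    diff = a - b
    spread = 1.96 * diff.std(ddof=1)
    return diff.mean() - spread, diff.mean() + spread

# Simulated example: 27 normal subjects (as in the R&R normal subgroup),
# 3 repeated scans each, ~1 µm scan-to-scan noise on average RNFL thickness.
rng = np.random.default_rng(1)
true_thickness = rng.normal(95, 8, 27)
scans = true_thickness[:, None] + rng.normal(0, 1.0, (27, 3))
cv = repeatability_cv(scans)                 # roughly 1 %CV for this noise level

c5000 = true_thickness + rng.normal(0, 1.0, 27)
c6000 = true_thickness + rng.normal(0, 1.0, 27)
lo, hi = bland_altman_limits(c5000, c6000)   # narrow limits => good agreement
```

Reproducibility %CV would additionally fold in between-operator and between-instrument variance components, which is why the reported reproducibility values are consistently slightly larger than the repeatability values.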
Detailed Information on the Studies:
2. Sample Sizes and Data Provenance
- CIRRUS 6000 Repeatability and Reproducibility (R&R) Study:
- Total Subjects Enrolled: 117
- Subgroups: 27 normal, 37 with glaucoma, 30 with retinal pathology, 23 with status post refractive surgery or with corneal pathology.
- Disqualifications: 9 subjects (did not meet inclusion criteria or showed macular changes due to drusen).
- Valid Scans Analyzed: 96.0% of C6000 and 94.9% of C5000 scans were valid.
- Data Provenance: Prospective, multi-site study. Country of origin not specified, but typically multi-site studies for FDA submissions include US sites.
- CIRRUS 6000 Angiography Image Quality Study:
- Total Subjects Enrolled: 110
- Subgroups: 103 retinal diseased subjects, 7 normal subjects.
- Disqualifications/Discontinuations: 7 normal subjects (did not meet inclusion/exclusion criteria), 11 (operator unable to acquire quality scans), 1 (unable to continue study visit), 2 (no C6000 scans acquired), 2 (did not return for second visit).
- Valid Subjects with at least one OCT scan/FA/ICGA: 93. Only 92 subjects had at least one valid OCT scan.
- Valid Scans Analyzed: 78.3% of C6000 and 77.2% of C5000 scans.
- Data Provenance: Prospective, multi-site study. Country of origin not specified, but typically multi-site studies for FDA submissions include US sites.
- CIRRUS 6000 Raster Image Quality Study:
- Total Subjects Enrolled: 68
- Subgroups: 20 normal eyes, 48 with retinal disease.
- Valid Scans Analyzed: 92.3% of C6000 and 91.3% of C5000 scans.
- Data Provenance: Prospective, multi-site study. Country of origin not specified, but typically multi-site studies for FDA submissions include US sites.
3. Number of Experts and Qualifications for Ground Truth
- R&R Study: No external experts were used for establishing ground truth in the R&R study itself, as it focused on the device's internal precision metrics.
- Angiography Image Quality Study: Three independent reviewers from a reading center were used to grade OCTA scans on image quality and clinically relevant information. Their specific qualifications (e.g., years of experience, specific certifications) are not detailed in the provided text.
- Raster Image Quality Study: Three independent graders from a reading center were used to grade the raster B-scans on image quality and clinically relevant information. Their specific qualifications are not detailed in the provided text.
4. Adjudication Method for the Test Set
- R&R Study: Not applicable, as this study focused on precision of measurements rather than subjective grading requiring adjudication.
- Angiography Image Quality Study: The text mentions "three independent reviewers...graded...according to pre-determined grading criteria." It does not specify an adjudication method (e.g., 2+1, 3+1). It implies independent assessments were done, and the reported "Proportion of Clinically Acceptable Overall Images" likely reflects a consensus or majority opinion based on these independent grades, or an aggregate result.
- Raster Image Quality Study: Similar to the angiography study, "three independent graders...graded...according to pre-determined grading criteria." No explicit adjudication method is described.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- The document describes "qualitative image grading and agreement" studies (Angiography and Raster Image Quality Studies) that involved multiple readers. However, these appear to be primarily for assessing image quality and clinical relevance of the device outputs, rather than a direct MRMC comparative effectiveness study where human readers' diagnostic performance with and without AI assistance is measured.
- The studies compare the C6000's image quality to the C5000, and human readers grade these images. There is no mention of an effect size for how much human readers improve with AI vs without AI assistance. The device itself, the Cirrus HD-OCT, is an imaging device, not an AI-based diagnostic tool providing interpretations to assist human readers in the traditional sense of AI studies.
6. Standalone (Algorithm Only) Performance Study
- The document implies that the device's segmentation algorithms are the same as the predicate device and that "the segmentation results will be equivalent." It also states "ZEISS demonstrated non-clinical equivalency between the CIRRUS 6000 and CIRRUS 5000 scan data using a phantom retina, which shows the equivalency of the segmentation results as the segmentation algorithms are the same in both instruments."
- This suggests that an assessment of the algorithm's performance (particularly segmentation) was conducted indirectly by verifying equivalency to the predicate device's algorithms and demonstrating that the acquired data from the C6000 results in equivalent segmentation on a phantom. However, a specific "standalone" study rigorously evaluating the algorithm's performance on clinical data separate from the device's overall output is not explicitly detailed. The R&R study focuses on the precision of measurements derived from segmentations, which implies good algorithm performance.
7. Type of Ground Truth Used
- R&R Study: The "ground truth" here is the assumed true anatomical measurement for the precision calculation. The study methodology evaluates the variability within the device's measurements, not against an independent gold standard for the measurements themselves. The "truth" for this study is essentially the consistency and reproducibility of the device's own measurements.
- Angiography and Raster Image Quality Studies: The ground truth for these studies was established by "three independent reviewers from a reading center" who "graded the OCTA scans on image quality and clinically relevant information according to pre-determined grading criteria." This is expert consensus/grading on image quality and clinical relevance.
8. Sample Size for the Training Set
- The document does not provide information about the training set size for the segmentation algorithms. It explicitly states that the C6000 uses the same segmentation algorithms as the predicate device (C5000). This implies that the algorithms were trained previously for the C5000, and no re-training or new training set was required for the C6000 because the algorithms themselves have not changed.
9. How Ground Truth for Training Set was Established
- As mentioned above, the document states that the C6000 uses the same segmentation algorithms as the predicate device, the CIRRUS HD-OCT 5000 (K181534). Therefore, the training for these algorithms would have been established during the development and clearance of the predicate device. The current submission does not describe the ground truth establishment for the original training of these algorithms.
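The phantom-based equivalency argument above reduces to comparing the layer boundaries that the shared segmentation algorithm produces on the same physical target when scanned on each instrument. A minimal sketch of such a comparison, with hypothetical boundary arrays and an illustrative tolerance (neither is taken from the submission):

```python
import numpy as np

def boundary_equivalence(boundary_a, boundary_b, tol_um=5.0):
    """Compare two segmented layer boundaries (depth in micrometers per
    A-scan position) and report mean/max absolute difference.

    Equivalence is declared here when the maximum absolute difference
    stays within `tol_um`; the tolerance is illustrative only.
    """
    a = np.asarray(boundary_a, dtype=float)
    b = np.asarray(boundary_b, dtype=float)
    diff = np.abs(a - b)
    return {
        "mean_abs_um": float(diff.mean()),
        "max_abs_um": float(diff.max()),
        "equivalent": bool(diff.max() <= tol_um),
    }

# Hypothetical ILM boundary from the same phantom scanned on two devices
c5000 = [120.0, 121.5, 119.8, 122.0]
c6000 = [120.4, 121.1, 120.2, 121.6]
print(boundary_equivalence(c5000, c6000))
```

A real phantom study would sweep many B-scans and all segmented layers; the comparison logic per boundary is the same.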
(141 days)
The CIRRUS™ HD-OCT is a non-contact, high resolution tomographic and biomicroscopic imaging device intended for in-vivo viewing, axial cross-sectional, and three-dimensional imaging of anterior ocular structures. The device is indicated for visualizing anterior and posterior ocular structures, including cornea, retina, retinal nerve fiber layer, ganglion cell plus inner plexiform layer, macula, and optic nerve head. The CIRRUS normative databases are quantitative tools indicated for the comparison of retinal nerve fiber layer thickness, ganglion cell plus inner plexiform layer thickness, and optic nerve head measurements to a database of normal subjects. The CIRRUS OCT angiography is indicated as an aid in the visualization of vascular structures of the retina and choroid. The CIRRUS HD-OCT is indicated as a diagnostic device to aid in the detection and management of ocular diseases including, but not limited to, macular holes, cystoid macular edema, diabetic retinopathy, age-related macular degeneration, and glaucoma.
The CIRRUS™ HD-OCT is a computerized instrument that acquires and analyzes cross-sectional tomograms of anterior and posterior ocular structures (including cornea, retina, retinal nerve fiber layer, macula, and optic disc). It employs non-invasive, non-contact, low-coherence interferometry to obtain these high-resolution images. Using this non-invasive optical technique, CIRRUS HD-OCT produces high-resolution cross-sectional tomograms of the eye without contacting the eye. It also produces images of the retina and layers of the retina from an en face perspective (i.e., as if looking directly into the eye).
The CIRRUS HD-OCT is offered in four models, Model 4000, 400, 5000 and 500. In the CIRRUS HD-OCT Models 4000 and 5000, the fundus camera is a line scanning ophthalmoscope. The CIRRUS HD-OCT Models 400 and 500 are similar to the Models 4000 and 5000 except that they provide the fundus image using the OCT scanner only.
The acquired imaging data can be analyzed to provide thickness and area measurements of regions of interest to the clinician. The system uses acquired data to determine the fovea location or the optic disc location. Measurements can then be oriented using the fovea and/or optic disc locations. The patient's results can be compared to subjects without disease for measurements of RNFL thickness, neuro-retinal rim area, average and vertical cup-to-disc area ratio, cup volume, macular thickness and ganglion cell plus inner plexiform layer thickness.
In addition to macular and optic disc cube scans, the CIRRUS HD-OCT also offers scans for OCT angiography imaging, a non-invasive approach with depth sectioning capability to visualize microvascular structures of the eye.
Anterior segment scans enable analysis of the anterior segment, including Anterior Chamber Depth, Angle-to-Angle, and automated measurement of the thickness of the cornea with the Pachymetry scan.
Here's an analysis of the provided text, outlining the acceptance criteria and the study details for the Carl Zeiss Meditec Inc. Cirrus HD-OCT with Software Version 8.
Acceptance Criteria and Device Performance:
The document primarily focuses on demonstrating repeatability and reproducibility of measurements, and comparability between the new device (CIRRUS HD-OCT with Software Version 8, specifically Models 4000 and 5000) and the predicate device (Visante OCT). Specific numerical acceptance criteria are not explicitly stated for all parameters. Instead, the study aims to show that the new device's measurements are consistent and comparable to those of the predicate device. The tables report the demonstrated performance rather than predefined acceptance criteria.
Table of Acceptance Criteria and Reported Device Performance (Inferred from the study's objective to demonstrate comparability, repeatability, and reproducibility):
| Measurement Parameter | Acceptance Criteria (Inferred) | CIRRUS 4000 Reported Performance (Repeatability SD/CV% / Reproducibility SD/CV%) | CIRRUS 5000 Reported Performance (Repeatability SD/CV% / Reproducibility SD/CV%) | Comparison to Visante OCT (Mean Difference / 95% LOA) |
|---|---|---|---|---|
| Anterior Chamber Scans (Normal Cornea Group) | Demonstrates repeatability, reproducibility, and comparability to predicate. | See Table 1 | See Table 2 | See Table 7 |
| CCT | Low SD, CV% for repeatability/reproducibility; Small mean difference and narrow LOA compared to Visante. | Repeatability SD: 8.806 / 1.619%; Reproducibility SD: 9.514 / 1.750% | Repeatability SD: 9.749 / 1.774%; Reproducibility SD: 11.897 / 2.165% | C4000: 9.4 (16.4); 95% LOA: -23.3, 42.1 |
| Angle to Angle | Low SD, CV% for repeatability/reproducibility; Small mean difference and narrow LOA compared to Visante. | Repeatability SD: 0.187 / 1.517%; Reproducibility SD: 0.265 / 2.150% | Repeatability SD: 0.171 / 1.423%; Reproducibility SD: 0.300 / 2.494% | C4000: 0.665 (0.395); 95% LOA: -0.126, 1.456 |
| ACD | Low SD, CV% for repeatability/reproducibility; Small mean difference and narrow LOA compared to Visante. | Repeatability SD: 0.066 / 2.291%; Reproducibility SD: 0.068 / 2.366% | Repeatability SD: 0.034 / 1.199%; Reproducibility SD: 0.046 / 1.601% | C4000: -0.067 (0.077); 95% LOA: -0.220, 0.087 |
| Pachymetry Scans (Normal Cornea Group) | Demonstrates repeatability, reproducibility, and comparability to predicate. | See Table 1 | See Table 2 | See Table 7 |
| Center Pachymetry | Low SD, CV% for repeatability/reproducibility; Small mean difference and narrow LOA compared to Visante. | Repeatability SD: 3.359 / 0.635%; Reproducibility SD: 3.719 / 0.703% | Repeatability SD: 1.197 / 0.226%; Reproducibility SD: 1.628 / 0.308% | C4000: 1.4 (4.1); 95% LOA: -6.8, 9.7 |
| ... (Other Pachymetry Zones) | Similar criteria as Center Pachymetry. | See Table 1 (various values) | See Table 2 (various values) | See Table 7 (various values) |
| Anterior Chamber Scans (Corneal Pathology Group) | Demonstrates repeatability, reproducibility, and comparability to predicate in pathological cases. | See Table 3 | See Table 4 | See Table 8 |
| CCT | Similar criteria as Normal Cornea. | Repeatability SD: 10.023 / 1.923%; Reproducibility SD: 14.069 / 2.699% | Repeatability SD: 12.061 / 2.267%; Reproducibility SD: 18.951 / 3.561% | C4000: 8.2 (20.0); 95% LOA: -31.7, 48.2 |
| ... (ATA, ACD, Pachymetry Zones) | Similar criteria. | See Table 3 | See Table 4 | See Table 8 |
| Pachymetry Scans (Post-LASIK Group) | Demonstrates repeatability, reproducibility, and comparability to predicate in post-LASIK cases. | See Table 5 | See Table 6 | See Table 9 |
| Center Pachymetry | Similar criteria. | Repeatability SD: 1.793 / 0.385%; Reproducibility SD: 2.000 / 0.430% | Repeatability SD: 1.784 / 0.383%; Reproducibility SD: 2.068 / 0.445% | C4000: 2.2 (5.9); 95% LOA: -9.6, 14.0 |
| Angle Study (Glaucoma Suspects/Patients) | Demonstrates repeatability, reproducibility, and comparability to predicate for angle measurements. | See Table 10 | See Table 11 | See Table 12 |
| TISA 500 Nasal (Wide Angle to Angle Scan) | Low SD, CV% for repeatability/reproducibility; Small mean difference and narrow LOA compared to Visante. | Repeatability SD: 0.020 / 13.590%; Reproducibility SD: 0.032 / 21.484% | Repeatability SD: 0.025 / 16.801%; Reproducibility SD: 0.030 / 19.614% | C4000: Not directly provided for TISA (Only for AC Angle) |
| AC Angle Nasal (Wide Angle to Angle Scan) | Low SD, CV% for repeatability/reproducibility; Small mean difference and narrow LOA compared to Visante. | Repeatability SD: 4.128 / 11.479%; Reproducibility SD: 4.626 / 12.862% | Repeatability SD: 3.427 / 9.475%; Reproducibility SD: 4.861 / 13.442% | C4000: -1.887 (7.155); 95% LOA: -16.196, 12.422 |
| ... (Other angle measurements) | Similar criteria. | See Table 10 | See Table 11 | See Table 12 |
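The comparison columns above report Bland-Altman statistics: the mean of the paired differences between devices and 95% limits of agreement, conventionally computed as mean ± 1.96 × SD of those differences. A short illustration of that computation (the paired CCT values below are invented, not study data):

```python
import numpy as np

def bland_altman(device_a, device_b):
    """Mean difference and 95% limits of agreement between paired
    measurements from two devices (Bland-Altman analysis)."""
    d = np.asarray(device_a, dtype=float) - np.asarray(device_b, dtype=float)
    mean_diff = float(d.mean())
    sd_diff = float(d.std(ddof=1))  # sample SD of the paired differences
    loa = (mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff)
    return mean_diff, sd_diff, loa

# Hypothetical central corneal thickness (µm) pairs: CIRRUS vs. Visante
cirrus = [540, 552, 561, 548, 533]
visante = [531, 540, 554, 537, 529]
mean_diff, sd_diff, (lo, hi) = bland_altman(cirrus, visante)
print(f"mean diff {mean_diff:.1f} µm, 95% LOA ({lo:.1f}, {hi:.1f})")
```

Narrow limits of agreement relative to the clinical decision range are what supports a comparability claim; the mean difference alone can hide large paired disagreement.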
Study Details:
-
Sample sizes used for the test set and data provenance:
- Anterior Chamber and Pachymetry Scans:
- Normal Cornea group: 46 subjects (Group 1)
- Post-LASIK group: 40 subjects (Group 2)
- Corneal Pathology group: 45 subjects (Group 3)
- Age Range (all groups): 25 to 69 years.
- Data Provenance: Not explicitly stated (e.g., country of origin). The study is described as a "non-significant risk clinical study," implying prospective data collection in a clinical setting.
- Angle Study:
- Angle Study: 27 subjects, ranging from 43 to 77 years (mean 62 years).
- Specific eye counts per measurement type:
- 26 eyes for Wide Angle-to-Angle scan
- 27 eyes for HD Angle scan
- Data Provenance: Not explicitly stated (e.g., country of origin). Described as a "non-significant risk clinical study," implying prospective data collection.
- OCT Angiography: Series of case studies. No specific sample size is provided beyond "case studies."
-
Number of experts used to establish the ground truth for the test set and their qualifications:
- Anterior Chamber and Pachymetry Scans: No external experts were used for "ground truth" in the sense of an adjudicated diagnosis. The study focused on comparing measurements between devices and assessing repeatability/reproducibility. The predicate device (Visante OCT) served as the reference for comparability. One operator acquired data on the Visante OCT. Three operators acquired data on the CIRRUS HD-OCT devices.
- Angle Study: The study focused on device measurement comparison, repeatability, and reproducibility. One operator acquired data on the Visante OCT. Three operators acquired data on the CIRRUS HD-OCT devices. The study population had a variety of angle configurations (Grade II to Grade IV) as assessed by gonioscopy using the Shaffer method, but the gonioscopy results themselves were not used as a direct "ground truth" for individual measurement values from the OCT devices.
- OCT Angiography: The "findings demonstrate that the CIRRUS OCT Angiography... can give non-invasive three-dimensional information regarding retinal microvasculature." This was compared with "fluorescein angiography images." The number and qualifications of experts interpreting either the OCTA or fluorescein angiography images for these case studies are not specified in the provided text.
-
Adjudication method for the test set:
- Anterior Chamber and Pachymetry Scans: Not applicable for establishing ground truth, as the study focused on device measurement comparison and repeatability/reproducibility across devices and operators. Measurements were generated by "manual placement of software tools."
- Angle Study: Similar to the above, not applicable for establishing "ground truth" through adjudication. Measurements were generated by "manual placement of software tools (Angle tool; TISA tool)" by the operators who acquired the data.
- OCT Angiography: Not specified.
-
If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- No, a multi-reader multi-case (MRMC) comparative effectiveness study with human readers and AI assistance was not explicitly reported or performed in the provided text. The study primarily focused on the device's technical performance (repeatability, reproducibility, and agreement with a predicate device) rather than its impact on human reader diagnostic accuracy with or without AI. The device itself (CIRRUS HD-OCT with Software Version 8) is an imaging and measurement device, not explicitly an "AI" diagnostic tool in the context of human-in-the-loop performance measurement.
-
If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
- Yes, in a sense, the primary studies for Anterior Chamber, Pachymetry, and Angle measurements assessed the standalone performance of the device's measurement algorithms by evaluating their repeatability, reproducibility, and agreement with predicate device measurements. These measurements are generated by the device's software (e.g., "manual placement of software tools" by an operator, but the calculation is still algorithmic). The OCT Angiography section also mentions the device's ability to provide information on microvasculature without human "in-the-loop" interpretation for the basic image generation. However, it's not described as an "AI algorithm" in the common sense of diagnosing from images.
-
The type of ground truth used:
- Anterior Chamber, Pachymetry, and Angle Studies: The "ground truth" was effectively the measurements obtained from the predicate device (Visante OCT). The purpose was to show substantial equivalence and comparability, not to determine diagnostic accuracy against an independent, gold-standard clinical pathology or outcome.
- OCT Angiography: Comparisons were made with fluorescein angiography images. Fluorescein angiography is a clinical gold standard for visualizing retinal and choroidal vasculature.
-
The sample size for the training set:
- The document does not provide information on the training set size. The studies described are validation studies for the device's technical performance and comparability to a predicate. The device incorporates "proprietary algorithms" and "normative databases," which would have been developed using some form of training data, but details about this training data are not included in the provided 510(k) summary.
-
How the ground truth for the training set was established:
- Not described in the provided text. As mentioned above, the 510(k) summary focuses on the validation studies, not the development or training of any underlying algorithms or normative databases. The "normative databases" would inherently rely on data from "normal subjects," but the specifics of their ground truth establishment are not given.
(114 days)
The INTRABEAM® System is a system for radiotherapy treatment.
Indications for Use: INTRABEAM Flat Applicator
The INTRABEAM® Flat Applicator is intended to supply a specified radiation dose during applications exclusively in combination with the INTRABEAM System.
- During intraoperative radiotherapy, on a surgically exposed surface or in a tumor bed.
- During treatment of tumors on the body surface.
The INTRABEAM® Flat Applicator is designed to deliver a flat radiation field at a distance of 5 mm from its circular application surface in water.
Indications for Use: INTRABEAM Surface Applicator
The INTRABEAM® Surface Applicator is intended to supply a specified radiation dose during applications exclusively in combination with the INTRABEAM System.
- During intraoperative radiotherapy, on a surgically exposed surface or in a tumor bed.
- During treatment of tumors on the body surface.
The INTRABEAM® Surface Applicator is designed to deliver a flat radiation field directly at the applicator's surface.
The INTRABEAM System is a miniature, high-dose rate, low-energy X-ray source that emits X-ray radiation intraoperatively for the treatment of cancer at the tumor cavity. The INTRABEAM Flat Applicator and INTRABEAM Surface Applicator are accessories to the INTRABEAM System that have been developed to provide radiotherapy to cancer lesions at or near the tissue surface. The INTRABEAM Flat Applicators come in a set of six sizes: 1.0 cm, 2.0 cm, 3.0 cm, 4.0 cm, 5.0 cm and 6.0 cm in diameter. The INTRABEAM Surface Applicators are available in four sizes: 1.0 cm, 2.0 cm, 3.0 cm and 4.0 cm in diameter. The four smaller sizes of INTRABEAM Flat Applicators (1.0 to 4.0 cm) have the same dimensions and appearance as the INTRABEAM Surface Applicators (1.0 to 4.0 cm).
The INTRABEAM System has a maximum voltage of 50 kV and a maximum current of 40 µA. The INTRABEAM Flat Applicator and the INTRABEAM Surface Applicator provide a uniform dose of radiotherapy distributed across a flat surface. The applicators are made from the same materials. From a technical point of view, the INTRABEAM Flat and Surface Applicators are basically the same, but are optimized for treating different tissue depths.
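For scale, the maximum electrical beam power of the source follows directly from the voltage and current limits quoted above (P = V × I). Note that this is input power to the tube, not delivered dose, which depends on applicator geometry and treatment time:

```python
# Maximum electrical beam power of the x-ray source, from the
# limits quoted above: P = V * I.
max_voltage_v = 50e3   # 50 kV
max_current_a = 40e-6  # 40 µA

max_power_w = max_voltage_v * max_current_a
print(f"{max_power_w:.1f} W")  # 2.0 W
```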
The provided text describes a 510(k) premarket notification for the INTRABEAM Flat Applicator and INTRABEAM Surface Applicator. This documentation aims to demonstrate substantial equivalence to a predicate device, rather than proving the device meets specific acceptance criteria through a clinical study with detailed performance metrics.
The document does not contain:
- A table of acceptance criteria and reported device performance.
- Details about sample sizes, data provenance, or ground truth establishment for a test set.
- Information on experts, adjudication methods, or MRMC studies.
- Data on standalone algorithm performance.
- Training set sample size or how its ground truth was established.
Instead, the document focuses on regulatory approval based on demonstrating substantial equivalence to an already legally marketed device.
Here's the relevant information that can be extracted:
1. Acceptance Criteria and Device Performance:
The document does not explicitly state acceptance criteria in terms of performance metrics for the device itself or a dedicated study's results table. The acceptance is based on demonstrating "substantial equivalence" to a predicate device.
Table: Acceptance (Substantial Equivalence) and Device Description
| Acceptance Criterion (Implicit) | Reported Device Description / Claim |
|---|---|
| Substantial Equivalence to Predicate Device (K083734) | The INTRABEAM Flat Applicator and INTRABEAM Surface Applicator are similar to the Axxent® Surface Applicator (K083734) in: - Target site: During intraoperative radiotherapy, on a surgically exposed surface or in a tumor bed; or during treatment of tumors on the body surface. - Clinical use: Radiotherapy treatment, delivering a specified radiation dose. - Principles of operation: Miniature, high-dose rate, low energy X-ray source that emits X-ray radiation. Delivers 50 kVp x-ray radiation to shallow tissue depths over small targeted areas. Provides a uniform dose of radiotherapy distributed across a flat surface. - Indications for use: Intraoperative radiotherapy, on a surgically exposed surface or in a tumor bed; treatment of tumors on the body surface. - Design: Applicators for delivering X-ray radiation. - Application: Used with the INTRABEAM System. - Maximum voltage of 50 kV and a maximum current of 40 µA. - INTRABEAM Flat Applicator designed to deliver a flat radiation field at a distance of 5 mm from its circular application surface in water. - INTRABEAM Surface Applicator designed to deliver a flat radiation field directly at the applicator's surface. |
| Safety and Effectiveness for Intended Use | "Based on the information provided in the 510(k) and the comparison to the currently marketed predicate, the INTRABEAM Flat Applicator and INTRABEAM Surface Applicator are safe and effective with regards to the intended use." |
2. Sample size used for the test set and the data provenance:
This document describes a 510(k) submission, which relies on demonstrating substantial equivalence to a predicate device primarily through comparative analysis of technical specifications, intended use, and operational principles, rather than a clinical "test set" in the context of device performance metrics. Therefore, there is no information provided regarding a specific sample size for a test set or data provenance from human subjects in a clinical study.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
Not applicable as this document does not describe a clinical study for establishing ground truth.
4. Adjudication method for the test set:
Not applicable as this document does not describe a clinical study.
5. If a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was done:
No, an MRMC comparative effectiveness study is not mentioned in this document. The submission focuses on device equivalence, not human reader performance with or without AI assistance.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
Not applicable. The device is a physical medical device (X-ray applicators) and not an algorithm.
7. The type of ground truth used:
Not applicable. The submission relies on technical verification and comparison to a predicate device's established safety and effectiveness.
8. The sample size for the training set:
Not applicable as this document does not describe an AI/algorithm-based device or a clinical training set.
9. How the ground truth for the training set was established:
Not applicable as this document does not describe an AI/algorithm-based device or a clinical training set.
(326 days)
The CIRRUS photo (Models 600 and 800) is a non-contact, high resolution tomographic and biomicroscopic imaging device that incorporates a digital camera which is suitable for photographing, displaying and storing the data of the retina and surrounding parts of the eye to be examined under mydriatic and non-mydriatic conditions.
These photographs support the diagnosis and subsequent observation of eye diseases which can be visually monitored and photographically documented. The CIRRUS photo is indicated for in vivo viewing, axial cross sectional, and three-dimensional imaging and measurement of posterior ocular structures, including retina, retinal nerve fiber layer, macula, and optic disc as well as imaging of anterior ocular structures, including the cornea.
It also includes a Retinal Nerve Fiber Layer (RNFL), Optic Nerve Head (ONH), and Macular Normative Database which is a quantitative tool for the comparison of retinal nerve fiber layer, optic nerve head, and the macula in the human retina to a database of known normal subjects. It is intended for use as a diagnostic device to aid in the detection and management of ocular diseases including, but not limited to, macular holes, cystoid macular edema, diabetic retinopathy, age-related macular degeneration, and glaucoma.
The CIRRUS photo is a non-contact, high resolution digital, tomographic and biomicroscopic imaging device that merges fundus imaging and optical coherence tomography into a single device. To optimize the workflow, the system applies the same beam delivery system for imaging and scanning.
Here's a detailed breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided text:
Device: CIRRUS photo Models 600 and 800 (Fundus Camera and Optical Coherence Tomography)
Acceptance Criteria and Reported Device Performance
The acceptance criteria for the CIRRUS photo are not explicitly listed as specific thresholds in the document (e.g., "accuracy > 90%"). Instead, the study aimed to demonstrate equivalence and repeatability/reproducibility of measurements between the CIRRUS photo and the predicate device (Carl Zeiss Meditec Cirrus HD-OCT Model 4000).
Therefore, the "acceptance criteria" are implicitly that the differences in measurements between the two devices should be small and within acceptable limits (as evidenced by confidence intervals and limits of agreement), and that the CIRRUS photo should demonstrate good repeatability and reproducibility.
The reported device performance section details the findings from these equivalence and repeatability studies.
| Acceptance Criteria Category | Specific Metric (Implicit) | CIRRUS Photo Performance |
|---|---|---|
| Equivalence (Inter-device) | Mean Difference in RNFL, ONH, and Macular Thickness measurements between CIRRUS photo and Cirrus HD-OCT Model 4000. | Normal Eyes (N=33): The mean difference for 31 measurement parameters (17 RNFL, 5 ONH, 9 macular) between the two devices was generally small, with 95% Confidence Intervals for most parameters including or very close to zero. The 95% Limits of Agreement show a range within which differences are expected to fall. For example, Average RNFL Thickness showed a mean difference of 0.6 (1.2) µm with a CI of (0.2, 1.0) and LOA of (-1.7, 2.9). Diseased Eyes (Glaucoma N=17, Macular Disease N=19): Similar to normal eyes, mean differences were generally small, with 95% Confidence Intervals for many parameters including or near zero. For example, Average RNFL Thickness (glaucoma) showed a mean difference of 0.8 (1.3) µm with a CI of (0.1, 1.4) and LOA of (-1.7, 3.2). |
| Repeatability | Repeatability Standard Deviation (SD) and Limit | Normal Eyes: Repeatability SDs for RNFL (e.g., Average RNFL Thickness: 1.4634 µm), ONH (e.g., Cup Disc Ratio: 0.0236), and Macular Thickness (e.g., Central Subfield: 1.6398 µm) demonstrate the device's consistency when measurements are repeated under the same conditions. Diseased Eyes: Repeatability SDs for RNFL (e.g., Average RNFL Thickness: 1.4634 µm), ONH (e.g., Cup Disc Ratio: 0.0276), and Macular Thickness (e.g., Central Subfield: 5.6224 µm) were also reported. |
| Reproducibility | Reproducibility Standard Deviation (SD) and Limit | Normal Eyes: Reproducibility SDs for RNFL (e.g., Average RNFL Thickness: 2.1899 µm), ONH (e.g., Cup Disc Ratio: 0.0245), and Macular Thickness (e.g., Central Subfield: 2.7756 µm) demonstrate consistency when measurements are repeated under varying conditions (e.g., different operators). Diseased Eyes: Reproducibility SDs for RNFL (e.g., Average RNFL Thickness: 1.8796 µm), ONH (e.g., Cup Disc Ratio: 0.0278), and Macular Thickness (e.g., Central Subfield: 7.6068 µm) were also reported. |
| Qualitative Assessment | Overall conclusion regarding performance | "The mean values of the 31 thickness parameters were very similar for the two devices.""Cirrus photo showed good repeatability and reproducibility for both normal and diseased eyes." |
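The repeatability figures above are, in essence, pooled within-eye standard deviations of repeated scans, with CV% expressing that SD as a percentage of the grand mean (reproducibility additionally folds in operator and visit variation). A minimal sketch of the repeatability calculation, using invented RNFL readings rather than study data:

```python
import numpy as np

def repeatability_sd_cv(measurements):
    """Pooled within-eye repeatability SD and CV% from repeated
    measurements taken under identical conditions.

    `measurements` is a list of per-eye replicate lists (equal
    replicate counts assumed for the simple pooling used here).
    """
    per_eye_var = [np.var(m, ddof=1) for m in measurements]
    pooled_sd = float(np.sqrt(np.mean(per_eye_var)))
    grand_mean = float(np.mean([np.mean(m) for m in measurements]))
    cv_pct = 100.0 * pooled_sd / grand_mean
    return pooled_sd, cv_pct

# Hypothetical average RNFL thickness (µm), three scans per eye
scans = [
    [94.0, 95.2, 94.6],
    [88.1, 87.5, 88.9],
    [101.3, 100.2, 101.0],
]
sd, cv = repeatability_sd_cv(scans)
print(f"repeatability SD {sd:.2f} µm, CV {cv:.2f}%")
```

A full R&R analysis would typically fit variance components (eye, operator, device, residual) rather than this simple pooling, but the reported SD and CV% have the same interpretation.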
Study Details
-
Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Normal Eyes Study:
- Phase 1 (Inter-operator variability): 30 subjects
- Phase 2 (Inter-device variability): 33 subjects
- Diseased Eyes Study:
- Retinal disease (inter-device): 19 subjects
- Retinal disease (inter-operator): 19 subjects
- Glaucoma (inter-device): 17 subjects
- Glaucoma (inter-operator): 18 subjects
- Total Test Set: 63 normal subjects, 73 diseased subjects (total of 136 subjects, though there's some overlap in eyes examined across phases/devices, but subjects did not participate in both phases of a study)
- Data Provenance: The document does not specify the country of origin of the data. The studies were prospective.
-
Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- The study design focused on comparing measurements from the new device (CIRRUS photo) to an existing, cleared device (Cirrus HD-OCT Model 4000), which itself presumably has established accuracy.
- The "ground truth" here is the measurement obtained from the predicate device. Therefore, no external experts were used to establish a separate "ground truth" beyond the measurements themselves.
- For the inter-operator variability phases, four operators were involved, implying expertise in operating OCT devices. However, their specific qualifications (e.g., years of experience, medical degree) are not detailed.
-
Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Not applicable. This was a quantitative measurement comparison study, not a diagnostic classification study requiring adjudication of expert opinions. The method involved calculating mean differences, confidence intervals, and limits of agreement between device measurements.
-
If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- No, a MRMC comparative effectiveness study was not done. This study focuses on the technical equivalence and repeatability/reproducibility of a diagnostic imaging device (OCT) rather than evaluating human reader performance with or without AI assistance.
- The term "AI" is not mentioned in this document.
-
If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Yes, the primary studies described are standalone performance evaluations of the device, focusing on the measurements generated by the CIRRUS photo itself compared to a predicate device. While human operators are involved in acquiring the images, the analysis of the measurement data (31 parameters of RNFL, ONH, and macular thickness) is performed by the device's algorithms. The document does not describe a human-in-the-loop study to modify or interpret the device's measurements.
-
The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- The "ground truth" in this context is the measurement data obtained from the predicate device (Cirrus HD-OCT Model 4000). The study aimed to show that the CIRRUS photo provides comparable measurements. It's a device-to-device comparison rather than an evaluation against a clinical ground truth like pathology or patient outcomes.
-
The sample size for the training set
- The document implies that the CIRRUS photo uses a "normative database" (RNFL, ONH, and Macular Normative Database) which comes from the Cirrus HD-OCT Model 4000 and is adjusted based on regression analysis. However, it does not specify the sample size of this underlying normative database or detail a separate "training set" for the CIRRUS photo's algorithms in this context. The study described focuses on testing a new device's performance against an established device, not on training a new algorithm.
-
How the ground truth for the training set was established
- As noted above, a distinct "training set" for the CIRRUS photo's algorithms (other than the inherited normative database from the predicate device, K083291; K111157) is not explicitly described. For the normal subjects in the "Normative Database," the implicit "ground truth" would be their classification as "normal" based on clinical criteria and measurements from the original predicate device (Cirrus HD-OCT 4000). The method for establishing this original normative database's "ground truth" is not explained in this document.
(269 days)
The Cirrus™ HD-OCT with Retinal Nerve Fiber Layer (RNFL), Macular, Optic Nerve Head and Ganglion Cell Normative Databases is indicated for in-vivo viewing, axial cross-sectional, and three-dimensional imaging and measurement of anterior and posterior ocular structures.
The Cirrus™ HD-OCT is a non-contact, high resolution tomographic and biomicroscopic imaging device. It is indicated for in-vivo viewing, axial cross-sectional, and threedimensional imaging and measurement of anterior and posterior ocular structures, including cornea, retinal nerve fiber layer, ganglion cell plus inner plexiform layer, macula, and optic nerve head. The Cirrus normative databases are quantitative tools for the comparison of retinal nerve fiber layer thickness, macular thickness, ganglion cell plus inner plexiform layer thickness, and optic nerve head measurements to a database of normal subjects. The Cirrus HD-OCT is intended for use as a diagnostic device to aid in the detection and management of ocular diseases including, but not limited to, macular holes, cystoid macular edema, diabetic retinopathy, age-related macular degeneration, and glaucoma.
The Cirrus™ HD-OCT is a computerized instrument that acquires and analyzes cross-sectional tomograms of anterior and posterior ocular structures (including cornea, retina, retinal nerve fiber layer, macula, and optic disc). It employs non-invasive, non-contact, low-coherence interferometry to obtain these high-resolution images. Using this non-invasive optical technique, Cirrus HD-OCT produces high-resolution cross-sectional tomograms of the eye without contacting the eye. It also produces images of the retina and layers of the retina from an en face perspective (i.e., as if looking directly into the eye).
The Cirrus HD-OCT is offered in two models, Model 4000 and Model 400. In the Cirrus HD-OCT Model 4000 instrument, the fundus camera is a line scanning ophthalmoscope. The Cirrus HD-OCT Model 400 is similar to the Model 4000 except that it provides the fundus image using the OCT scanner only.
The acquired imaging data can be analyzed to provide thickness and area measurements of regions of interest to the clinician. The system uses acquired data to determine the fovea location or the optic disc location. Measurements can then be oriented using the fovea and/or optic disc locations. The patient's results can be compared to subjects without disease for measurements of RNFL thickness, neuroretinal rim area, average and vertical cup-to-disc area ratio, cup volume, macular thickness and ganglion cell plus inner plexiform layer thickness.
Visit-to-visit comparison of images and measurements is available for the macula: specifically, change in macular thickness, area and volume of Retinal Pigment Epithelium (RPE) elevations, area of sub-RPE illumination, and distance of sub-RPE illumination to the fovea. Change analysis of multiple visits, up to eight, can be performed for RNFL thickness, neuroretinal rim area, average and vertical cup-to-disc area ratio, cup volume, and macular thickness.
The provided document describes the predicate device and the clinical studies performed to support the substantial equivalence of the "Cirrus HD-OCT with Retinal Nerve Fiber Layer (RNFL), Macular, Optic Nerve Head and Ganglion Cell Normative Databases" (Cirrus HD-OCT). The clinical evaluation focuses on demonstrating the device's measurement capabilities and the establishment of normative databases.
Here's an analysis of the acceptance criteria and study information:
1. Table of Acceptance Criteria and Reported Device Performance:
The document doesn't explicitly state "acceptance criteria" in a pass/fail quantifiable manner for the overall device's performance against a gold standard for disease detection or management. Instead, the studies presented focus on the repeatability and reproducibility of various measurements performed by the device and the comparability of certain automated measurements to expert manual measurements from different imaging modalities.
The tables below summarize the specified repeatability and reproducibility limits (acceptance criteria as per ISO 5725-1 and ISO 5725-6, defined as the upper 95% limit for the difference between repeated results) and the reported performance (SD and limits).
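For context on how limits of this kind are derived: under the ISO 5725 convention, the repeatability limit is taken as 2.8 times the pooled within-subject standard deviation (the upper 95% limit for the absolute difference between two results obtained under repeatability conditions), and CV% expresses that SD relative to the grand mean. A minimal sketch of this arithmetic, assuming equal replicate counts per subject (the function names are illustrative, not from the submission):

```python
import math
import statistics

def repeatability_limit(within_subject_sds):
    """ISO 5725-style repeatability limit r = 2.8 * s_r, where s_r is the
    repeatability SD pooled across subjects (equal replicates assumed)."""
    # Pool variances, then take the square root to get the pooled SD.
    s_r = math.sqrt(statistics.fmean(sd ** 2 for sd in within_subject_sds))
    return 2.8 * s_r

def coefficient_of_variation(s_r, grand_mean):
    """CV% = pooled SD expressed as a percentage of the grand mean."""
    return 100.0 * s_r / grand_mean
```

The reproducibility limit is computed the same way, but with the SD that additionally pools between-operator/between-instrument variance.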
Table 1. Repeatability and Reproducibility of Area of Sub-RPE Illumination (Automated Algorithm)
| Scan | Acceptance Criteria (Repeatability Limit, mm²) | Reported Performance (Repeatability Limit, mm²) | Acceptance Criteria (Reproducibility Limit, mm²) | Reported Performance (Reproducibility Limit, mm²) | Reported CVc |
|---|---|---|---|---|---|
| 200x200 Scan | (Not explicitly stated, but inferred to be the achieved limit based on SD) | 2.4885 | (Not explicitly stated, but inferred to be the achieved limit based on SD) | 2.6460 | 12.5% |
| 512x128 Scan | (Not explicitly stated, but inferred to be the achieved limit based on SD) | 2.4313 | (Not explicitly stated, but inferred to be the achieved limit based on SD) | 2.8889 | 15.8% |
Table 2. Repeatability and Reproducibility of Closest Distance to Fovea (Automated Algorithm)
| Scan | Acceptance Criteria (Repeatability Limit, mm) | Reported Performance (Repeatability Limit, mm) | Acceptance Criteria (Reproducibility Limit, mm) | Reported Performance (Reproducibility Limit, mm) |
|---|---|---|---|---|
| 200x200 Scan | (Not explicitly stated, but inferred to be the achieved limit based on SD) | 0.2070 | (Not explicitly stated, but inferred to be the achieved limit based on SD) | 0.2133 |
| 512x128 Scan | (Not explicitly stated, but inferred to be the achieved limit based on SD) | 0.3492 | (Not explicitly stated, but inferred to be the achieved limit based on SD) | 0.3520 |
Table 3. Repeatability and Reproducibility of Area of Sub-RPE Illumination (Manually Edited)
| Scan | Acceptance Criteria (Repeatability Limit, mm²) | Reported Performance (Repeatability Limit, mm²) | Acceptance Criteria (Reproducibility Limit, mm²) | Reported Performance (Reproducibility Limit, mm²) | Reported CVc |
|---|---|---|---|---|---|
| 200x200 Scan | (Not explicitly stated, but inferred to be the achieved limit based on SD) | 0.6365 | (Not explicitly stated, but inferred to be the achieved limit based on SD) | 1.0705 | 4.3% |
Table 4. Repeatability and Reproducibility of Closest Distance to Fovea (Manually Edited)
| Scan | Acceptance Criteria (Repeatability Limit, mm) | Reported Performance (Repeatability Limit, mm) | Acceptance Criteria (Reproducibility Limit, mm) | Reported Performance (Reproducibility Limit, mm) |
|---|---|---|---|---|
| 200x200 Scan | (Not explicitly stated, but inferred to be the achieved limit based on SD) | 0.0990 | (Not explicitly stated, but inferred to be the achieved limit based on SD) | 0.1229 |
Table 5. Repeatability and Reproducibility of Area of RPE Elevations
| Circle | Scan | Acceptance Criteria (Repeatability Limit, mm²) | Reported Performance (Repeatability Limit, mm²) | Acceptance Criteria (Reproducibility Limit, mm²) | Reported Performance (Reproducibility Limit, mm²) | Reported CVc |
|---|---|---|---|---|---|---|
| 3 mm Circle | 200x200 Scan | (Not explicitly stated, but inferred) | 0.3626 | (Not explicitly stated, but inferred) | 0.4389 | 10.1% |
| 5 mm Circle | 200x200 Scan | (Not explicitly stated, but inferred) | 0.2834 | (Not explicitly stated, but inferred) | 0.4073 | 4.9% |
| 3 mm Circle | 512x128 Scan | (Not explicitly stated, but inferred) | 0.2343 | (Not explicitly stated, but inferred) | 0.2794 | 7.5% |
| 5 mm Circle | 512x128 Scan | (Not explicitly stated, but inferred) | 0.4304 | (Not explicitly stated, but inferred) | 0.5422 | 9.6% |
Table 6. Repeatability and Reproducibility of Volume of RPE Elevations
| Circle | Scan | Acceptance Criteria (Repeatability Limit, mm³) | Reported Performance (Repeatability Limit, mm³) | Acceptance Criteria (Reproducibility Limit, mm³) | Reported Performance (Reproducibility Limit, mm³) | Reported CVc |
|---|---|---|---|---|---|---|
| 3 mm Circle | 200x200 Scan | (Not explicitly stated, but inferred) | 0.0327 | (Not explicitly stated, but inferred) | 0.0341 | 15.2% |
| 5 mm Circle | 200x200 Scan | (Not explicitly stated, but inferred) | 0.0275 | (Not explicitly stated, but inferred) | 0.0298 | 8.3% |
| 3 mm Circle | 512x128 Scan | (Not explicitly stated, but inferred) | 0.0206 | (Not explicitly stated, but inferred) | 0.0235 | 12.0% |
| 5 mm Circle | 512x128 Scan | (Not explicitly stated, but inferred) | 0.0245 | (Not explicitly stated, but inferred) | 0.0288 | 11.4% |
Table 7. Cirrus Repeatability and Reproducibility of GCA and ONH Parameters - Normal Subjects
| Parameter | Acceptance Criteria (Repeatability Limit) | Reported Performance (Repeatability Limit) | Acceptance Criteria (Reproducibility Limit) | Reported Performance (Reproducibility Limit) | Reported CVc |
|---|---|---|---|---|---|
| GCA Parameters | |||||
| Average Thickness (µm) | (Not explicitly stated, but inferred) | 1.6348 | (Not explicitly stated, but inferred) | 2.0942 | 0.7% |
| Minimum Thickness (µm) | (Not explicitly stated, but inferred) | 8.0165 | (Not explicitly stated, but inferred) | 8.1018 | 2.5% |
| Temporal-Superior Thickness (µm) | (Not explicitly stated, but inferred) | 2.3502 | (Not explicitly stated, but inferred) | 2.6590 | 1.0% |
| Superior Thickness (µm) | (Not explicitly stated, but inferred) | 2.5522 | (Not explicitly stated, but inferred) | 3.0024 | 1.1% |
| Nasal-Superior Thickness (µm) | (Not explicitly stated, but inferred) | 2.5753 | (Not explicitly stated, but inferred) | 2.9154 | 1.0% |
| Nasal-Inferior Thickness (µm) | (Not explicitly stated, but inferred) | 4.6857 | (Not explicitly stated, but inferred) | 4.8525 | 1.5% |
| Inferior Thickness (µm) | (Not explicitly stated, but inferred) | 2.7894 | (Not explicitly stated, but inferred) | 3.3339 | 1.2% |
| Temporal-Inferior Thickness (µm) | (Not explicitly stated, but inferred) | 2.2948 | (Not explicitly stated, but inferred) | 2.5696 | 1.0% |
| ONH Parameters | |||||
| Cup Disc Ratio | (Not explicitly stated, but inferred) | 0.0380 | (Not explicitly stated, but inferred) | 0.0679 | 5.4% |
| Vertical CD Ratio | (Not explicitly stated, but inferred) | 0.0681 | (Not explicitly stated, but inferred) | 0.0846 | 7.1% |
| Disc Area (mm²) | (Not explicitly stated, but inferred) | 0.1506 | (Not explicitly stated, but inferred) | 0.2637 | 5.4% |
| Rim Area (mm²) | (Not explicitly stated, but inferred) | 0.1177 | (Not explicitly stated, but inferred) | 0.1733 | 4.7% |
| Cup Volume (mm³) | (Not explicitly stated, but inferred) | 0.0181 | (Not explicitly stated, but inferred) | 0.0287 | 7.8% |
Table 8. Repeatability and Visit-to-Visit Variability of ONH Parameters - Glaucomatous Subjects
| Parameter | Acceptance Criteria (Repeatability Limit) | Reported Performance (Repeatability Limit) | Acceptance Criteria (Visit-to-Visit Limit) | Reported Performance (Visit-to-Visit Limit) | Reported CV% |
|---|---|---|---|---|---|
| Disc Area (mm²) | (Not explicitly stated, but inferred) | 0.233 mm² | (Not explicitly stated, but inferred) | 0.233 mm² | 4.4% |
| Rim Area (mm²) | (Not explicitly stated, but inferred) | 0.125 mm² | (Not explicitly stated, but inferred) | 0.125 mm² | 6.6% |
| Average Cup-to-Disc Ratio | (Not explicitly stated, but inferred) | 0.025 | (Not explicitly stated, but inferred) | 0.025 | 1.2% |
| Vertical Cup-to-Disc Ratio | (Not explicitly stated, but inferred) | 0.039 | (Not explicitly stated, but inferred) | 0.042 | 1.9% |
| Cup Volume (mm³) | (Not explicitly stated, but inferred) | 0.089 mm³ | (Not explicitly stated, but inferred) | 0.175 mm³ | 11.7% |
Table 9. Repeatability of GCA Parameters - Glaucomatous Subjects
| GCA Parameters (μm) | Acceptance Criteria (Repeatability Limit) | Reported Performance (Repeatability Limit) | Reported CV% |
|---|---|---|---|
| Overall | |||
| Average GCL + IPL Thickness | (Not explicitly stated, but inferred) | 1.7567 | 1.0% |
| Minimum GCL + IPL Thickness | (Not explicitly stated, but inferred) | 4.2689 | 2.6% |
| Temporal-Superior GCL + IPL Thickness | (Not explicitly stated, but inferred) | 3.4171 | 1.8% |
| Superior GCL + IPL Thickness | (Not explicitly stated, but inferred) | 3.5429 | 1.8% |
| Nasal-Superior GCL + IPL Thickness | (Not explicitly stated, but inferred) | 2.3013 | 1.2% |
| Nasal-Inferior GCL + IPL Thickness | (Not explicitly stated, but inferred) | 3.1371 | 1.7% |
| Inferior GCL + IPL Thickness | (Not explicitly stated, but inferred) | 2.9593 | 1.7% |
| Temporal-Inferior GCL + IPL Thickness | (Not explicitly stated, but inferred) | 3.4049 | 2.0% |
| Mild Glaucoma | |||
| Average GCL + IPL Thickness | (Not explicitly stated, but inferred) | 1.4277 | 0.7% |
| ... (specific metrics provided for Mild, Moderate, and Severe Glaucoma in the document) | ... | ... | ... |
Summary of Studies Demonstrating Acceptance:
The studies listed above (Advanced RPE Analysis Study, measurements of Elevated RPE, and the Repeatability and Reproducibility studies), together with the establishment of the normative databases, serve as evidence that the device meets its intended use and demonstrates substantial equivalence. The "acceptance criteria" appear to be the demonstrated repeatability and reproducibility limits themselves, calculated according to ISO 5725-1 and ISO 5725-6, or, in the case of the RPE illumination comparison, statistical comparability (e.g., a paired t-test showing no significant difference, together with good correlation (R²)).
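Where the summary cites a paired t-test and correlation for the device-versus-expert comparison, the underlying statistics are standard. A self-contained sketch (hypothetical function names; this is not the submission's analysis code):

```python
import math
import statistics

def paired_t_statistic(device_vals, reference_vals):
    """t statistic for paired differences (device vs. expert reference).
    A small |t| supports 'no significant mean difference'."""
    diffs = [d - r for d, r in zip(device_vals, reference_vals)]
    n = len(diffs)
    return statistics.fmean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))

def pearson_r(xs, ys):
    """Pearson correlation between device and reference measurements."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```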
2. Sample Sizes used for the Test Set and Data Provenance:
- Advanced RPE Analysis Study (Areas of Increased Illumination):
- Test Set Sample Size: 52 eyes from 52 subjects.
- Data Provenance: Not explicitly stated (e.g., country of origin), but it was a "non-significant risk clinical study" with "Four sites participated in the clinical data collection." The study was conducted on subjects evaluated for dry AMD with geographic atrophy. The study compares Cirrus HD-OCT automated measurements to FAF images (an existing, accepted modality). This implies a prospective data collection for this comparison.
- Advanced RPE Analysis Study (Measurements of Elevated RPE):
- Test Set Sample Size: 70 eyes from 70 subjects were considered; the number included in the final analysis is not explicitly stated, but a similar number after qualification is implied.
- Data Provenance: Not explicitly stated (e.g., country of origin), but "Three sites participated in the clinical data collection." Subjects were 50 years or older with dry age-related macular degeneration (AMD) with macular drusen. This implies a prospective data collection for this comparison.
- Measurements of Area of Increased Illumination Under the RPE Repeatability and Reproducibility:
- Test Set Sample Size: Phase 1: 49 eyes of 37 subjects. Phase 2: 53 eyes of 39 subjects.
- Data Provenance: "Single-site clinical study." The subjects had dry AMD with geographic atrophy. This suggests prospective data collection.
- Measurements of Elevated RPE Repeatability and Reproducibility:
- Test Set Sample Size: Phase 1: 26 eyes of 23 subjects. Phase 2: 24 eyes of 21 subjects.
- Data Provenance: "Single-site clinical study." The subjects had dry AMD with macular drusen. This suggests prospective data collection.
- Optic Nerve Head and Ganglion Cell Analysis Repeatability and Reproducibility (Normal Subjects):
- Test Set Sample Size: 63 normal subjects.
- Data Provenance: Not explicitly stated (e.g., multi-site, country). This suggests prospective data collection.
- Optic Nerve Head Parameters Repeatability and Visit-to-Visit Variability (Glaucomatous Subjects):
- Test Set Sample Size: 55 glaucomatous subjects.
- Data Provenance: Not explicitly stated (e.g., multi-site, country). This suggests prospective data collection.
- GCA Parameters Repeatability (Glaucomatous Subjects):
- Test Set Sample Size: 119 subjects with glaucoma enrolled; 94 subjects with two qualified scans each were included in the analysis.
- Data Provenance: "clinical study conducted at four sites." Not explicitly stated (e.g., country). This suggests prospective data collection.
3. Number of Experts used to Establish the Ground Truth for the Test Set and Qualifications:
- Advanced RPE Analysis Study (Areas of Increased Illumination): Ground truth was established by "expert manual measurements of areas of hypofluorescence typical of geographic atrophy in fundus autofluorescence (FAF) images." The number and specific qualifications of these experts are not specified in the document.
- Advanced RPE Analysis Study (Measurements of Elevated RPE): Ground truth was established by "manually drawn by experts designated as drusen on color fundus photographs (CFPs)." The number and specific qualifications of these experts are not specified in the document.
- For repeatability and reproducibility studies, the ground truth is implicitly the repeated measurements themselves by the device, as the studies aim to quantify the variation within the device's measurements, not compare them against an external gold standard by human experts for classifications.
4. Adjudication Method:
- For the studies comparing automated measurements to expert manual measurements (e.g., Advanced RPE Analysis), the adjudication method is not explicitly stated. It mentions "expert manual measurements" and "manually drawn by experts," implying these were used as reference, but how inconsistencies between experts (if there were multiple) were resolved is not detailed.
- For the repeatability and reproducibility studies, adjudication doesn't apply in the traditional sense, as these studies focus on the intrinsic variation of the device's measurements.
5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study was done, and if so, what was the effect size of how much human readers improve with AI vs without AI assistance:
- No MRMC comparative effectiveness study was described where human readers' performance with and without AI assistance was evaluated. The studies primarily focus on the device's measurement accuracy, repeatability, and reproducibility, and the establishment of normative databases. The comparison for RPE illumination detection was between the device's automated measurements and expert manual measurements from a different imaging modality, not an AI-assisted human vs. human-alone scenario.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- Yes, the comparability studies for Advanced RPE Analysis (Areas of Increased Illumination and Elevated RPE) assess the device's "automated measurements" (automated algorithm) against expert manual measurements from other imaging modalities. These can be considered standalone performance assessments of specific algorithms within the device.
- Similarly, all the repeatability and reproducibility studies for various parameters (GCA, ONH, RPE illumination, RPE elevations) assess the intrinsic performance of the device's measurement algorithms in a standalone manner, quantifying their consistency.
7. The Type of Ground Truth Used:
- Advanced RPE Analysis Study (Areas of Increased Illumination): Ground truth was "expert manual measurements of areas of hypofluorescence typical of geographic atrophy in fundus autofluorescence (FAF) images." This is a form of expert consensus/manual delineation using another imaging modality.
- Advanced RPE Analysis Study (Measurements of Elevated RPE): Ground truth was "manually drawn by experts designated as drusen on color fundus photographs (CFPs)." This is also a form of expert consensus/manual delineation using another imaging modality.
- Normative Databases (RNFL, Macular, Optic Nerve Head, Ganglion Cell): The ground truth for these databases is implicitly the classification of subjects as "normal" based on clinical criteria (not specified in detail, but implied by the selection of normal subjects). The measurements are then statistically analyzed to establish reference values.
- For repeatability and reproducibility studies, the ground truth is the repeated measurements, not an external gold standard.
8. The Sample Size for the Training Set:
The document doesn't explicitly mention "training sets" as it would for a typical machine learning algorithm development (e.g., deep learning). The device is likely based on established image processing algorithms.
- The normative databases serve a similar function to a reference set, providing "normal" ranges for comparison in diagnosis.
- Optic Nerve Head Normative Database: Derived from a "post-hoc analysis" of 282 eyes from 284 subjects (aged 19-84 years) included in a previous RNFL normative database (K083291).
- Ganglion Cell Normative Database: Utilized the "same 282 subjects, aged 19-84 years that were deemed representative of a normal population" as the original macula normative database (K083291).
9. How the Ground Truth for the Training Set was Established:
- For the normative databases, the "ground truth" for inclusion was that the subjects were considered "normal." The document states the subjects (n=282) were "deemed representative of a normal population." However, the specific criteria or methods (e.g., expert clinical review, exclusion criteria) used to define and confirm these subjects as "normal" are not detailed in this summary. The data were collected from "seven sites."
(150 days)
The Guided Progression Analysis for the Humphrey® Field Analyzer II (HFA II) and Humphrey® Field Analyzer II- i series is a software analysis module that is intended for use as a diagnostic device to aid in the detection and management of ocular diseases including, but not limited to, glaucoma. It is also intended to compare change over time and determine if statistically significant change has occurred.
The Carl Zeiss Meditec, Inc. Guided Progression Analysis is a software analysis module for the Humphrey® Field Analyzer II (HFA II) and Humphrey® Field Analyzer II - i series (HFA II - i) that assists practitioners with the detection, measurement, and management of progression of visual field loss. It aids in assessing change over time, including change from baseline and rate of change. It is intended for use as a diagnostic device to aid in the detection and management of ocular diseases including, but not limited to, glaucoma.
The Carl Zeiss Meditec, Inc. Guided Progression Analysis (GPA) is a software package for the Humphrey Field Analyzer II and II - i series that is designed to help practitioners identify progressive visual field loss in glaucoma patients. GPA compares the visual field test results of up to 14 follow-up tests to an established baseline over time and determines if there is statistically significant change. The GPA printout highlights any changes from baseline that represent larger than expected clinical variability, and it provides simple plain-language messages such as "Possible Progression" or "Likely Progression" whenever changes show consistent and statistically significant loss. The GPA printout also presents the Visual Field Index (VFI), a global index which reports a measure of the patient's remaining useful vision in the form of a percentage, as well as the VFI Rate of Progression plot which provides a trend analysis of the patient's overall visual field history and indicates a 3-5 year projection of the VFI regression line if the current trend continued.
The provided FDA 510(k) summary for the Guided Progression Analysis (GPA) for the Humphrey® Field Analyzer II and II - i series does not detail specific acceptance criteria in a quantitative table format or a standalone study with a predefined set of performance metrics that the device had to meet. Instead, the submission relies on:
- Substantial Equivalence: Demonstrating that the GPA software is functionally equivalent to predicate devices and does not raise new questions regarding safety and effectiveness.
- Clinical Literature Review: Citing published research that discusses GPA's development and its successful use in identifying statistically significant visual field progression, particularly referencing its incorporation of metrics from the Early Manifest Glaucoma Trial (EMGT).
- Sponsored Study on Test-Retest Variability: A study to quantify perimetric test-retest variability in glaucoma subjects, which was used to establish limits for change at different significance levels based on test-retest variability in glaucomatous visual fields. This allows GPA to indicate when change exceeds normal test-retest variability.
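The test-retest approach above amounts to flagging change at a test location only when it exceeds an empirical variability limit. A minimal illustration of deriving such a limit from repeated measurements (hypothetical helper; the actual GPA significance limits came from the sponsored 363-subject study, not this sketch):

```python
import statistics

def change_limit(test_retest_diffs, significance=0.05):
    """Empirical symmetric limit: change is flagged when it exceeds the
    (1 - significance) quantile of absolute test-retest differences
    observed in stable glaucomatous eyes."""
    abs_diffs = sorted(abs(d) for d in test_retest_diffs)
    # quantiles() with n=100 returns the 99 percentile cut points;
    # index 94 is the 95th percentile when significance is 0.05.
    cuts = statistics.quantiles(abs_diffs, n=100)
    return cuts[int((1 - significance) * 100) - 1]
```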
Therefore, a table of explicit acceptance criteria and corresponding reported device performance, in the traditional sense of a validation study with pre-defined thresholds, cannot be directly extracted or constructed from the provided text. The "performance" is described in terms of its ability to identify statistically significant change based on established variability data, rather than specific sensitivity/specificity figures against an external gold standard.
However, I can provide a summary of the information available in the document regarding the study that underpins the device's claims.
Acceptance Criteria and Reported Device Performance
As stated, explicit acceptance criteria (e.g., minimum sensitivity, specificity, or accuracy targets) are not provided in this 510(k) summary. The "performance" is intrinsically linked to its ability to identify changes beyond normal test-retest variability.
| Criterion Type | Acceptance Criteria (Not explicitly stated in the document) | Reported Device Performance (as described in the document) |
|---|---|---|
| Efficacy in detecting progression | Implicit: Ability to identify statistically significant visual field progression | GPA incorporates visual field progression metrics successfully used in EMGT. It determines if statistically significant change has occurred by comparing follow-up tests to a baseline and highlighting changes that represent larger than expected clinical variability. Provides plain-language messages like "Possible Progression" or "Likely Progression". |
| Statistical Robustness | Implicit: Ability to distinguish true change from test-retest variability | Results from a sponsored study established limits for change at different significance levels based on test-retest variability in glaucomatous visual fields. GPA indicates when change at a given test location exceeds this test-retest variability. |
| Aid in management | Implicit: Provides actionable information for clinicians | Presents Visual Field Index (VFI), VFI Rate of Progression plot, and trend analysis with 3-5 year projection to aid in estimating future visual status. |
2. Sample size used for the test set and the data provenance:
- Sample Size: 363 qualified glaucoma subjects.
- Data Provenance: Data was collected across a worldwide nine-site study. It is not specified if it was retrospective or prospective, but the description ("Each subject was tested four times within one month") suggests a prospective data collection for the purpose of establishing test-retest variability.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- The document does not mention the use of experts to establish a "ground truth" for the test set in the traditional sense of defining disease progression. The study focused on quantifying perimetric test-retest variability in glaucoma subjects. The "ground truth" or reference for this study was the inherent variability of visual field measurements themselves, not an expert-determined clinical diagnosis of progression.
4. Adjudication method for the test set:
- Adjudication methods (like 2+1 or 3+1) are typically used when experts are determining a ground truth for a diagnostic outcome. Since the study focused on quantifying test-retest variability rather than an expert-adjudicated ground truth for progression, no adjudication method is described or implied in the provided text.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs without AI assistance:
- No MRMC comparative effectiveness study is described in this 510(k) summary. The device, Guided Progression Analysis (GPA), is a software analysis module designed to assist practitioners, but its performance is described in terms of its algorithmic output based on statistical analysis of visual field data, not human reader performance with or without AI (in this case, "AI" refers to the GPA algorithm). The summary indicates that "the results allow HFA GPA to indicate when the change... exceeds the test-retest variability," which implies the software's direct output.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- Yes, the core study mentioned (quantifying perimetric test-retest variability) and the subsequent development of GPA to use this data to identify statistically significant changes represent a standalone algorithmic function. The GPA software, without human intervention in its analysis, compares visual field test results to a baseline and determines statistical significance of changes, providing messages like "Possible Progression" or "Likely Progression." This is an algorithm-only function based on statistical rules derived from the variability study.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- The "ground truth" used for the development and validation of the statistical thresholds within GPA was data on perimetric test-retest variability. This means the system's ability to declare "progression" is benchmarked against the statistically expected noise and fluctuations in visual field measurements in glaucoma patients, as determined by the sponsored study. It is not an expert consensus on true progression, pathology, or long-term outcomes data, although the underlying clinical effectiveness of detecting visual field progression is supported by reference to studies like EMGT.
8. The sample size for the training set:
- The document references the Early Manifest Glaucoma Trial (EMGT) literature (citations 3 and 4) as providing the "visual field progression metrics successfully used" in GPA. The EMGT study design document (cited as "Ophthalmology 1999; 106:2144-2153") would contain details about its sample size, which served as a foundational dataset for the conceptual framework of progression analysis incorporated into GPA. However, a specific "training set" sample size for the development of this particular software version (GPA for HFA II) is not explicitly stated in the document beyond the reference to EMGT and the "363 qualified glaucoma subjects" used for the test-retest variability study. The 363 subjects constituted a dataset for establishing variability limits, which are essentially statistical parameters used by the algorithm. It is unclear if these 363 subjects' data was used for training a machine learning model vs. establishing statistical thresholds.
9. How the ground truth for the training set was established:
- Given the reliance on EMGT, the "ground truth" for the principles of progression analysis would have been established within the EMGT, likely through clinical outcomes and expert evaluation of visual field series over time in relation to glaucoma diagnosis and treatment. For the test-retest variability study (the 363 subjects), the "ground truth" was derived from repeated measurements from the same subjects to quantify the inherent variability, rather than a clinical ground truth of progression. The document does not describe a machine learning-style training set with an explicitly adjudicated ground truth for progression; instead, it highlights the incorporation of established clinical knowledge and statistical principles from previous large-scale studies.
(12 days)
The GDx is a confocal polarimetric scanning laser ophthalmoscope that is intended for imaging and three-dimensional analysis of the fundus and retinal nerve fiber layer (RNFL) in vivo. The GDx and its GDx Variable Corneal Compensation (VCC) and GDx Enhanced Corneal Compensation (ECC) RNFL Normative Databases aid in the diagnosis and monitoring of diseases and disorders of the eye that may cause changes in the polarimetric retinal nerve fiber layer thickness. The GDx is to be used in patients who may have an optic neuropathy.
The GDxPRO is a confocal scanning laser ophthalmoscope comprising an optomechanical scanning laser head unit and a computer. The device employs Scanning Laser Polarimetry (SLP) to measure the Retinal Nerve Fiber Layer (RNFL) thickness using polarized light.
Here's a breakdown of the acceptance criteria and study information for the GDxPRO™ device based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The provided text for the GDxPRO™ does not explicitly state specific quantitative acceptance criteria or detailed device performance metrics that would typically be seen in a clinical study. Instead, the submission focuses on demonstrating substantial equivalence to a predicate device (GDx with ECC Retinal Nerve Fiber Layer Normative Database, K082016).
The key "performance" claimed is that the GDxPRO™ retains the functionality and safety/effectiveness of the predicate device.
| Acceptance Criterion (Implicit) | Reported Device Performance (Summary) |
|---|---|
| Substantial Equivalence to predicate device (GDx with ECC) | Evaluations demonstrate the device is substantially equivalent to the predicate device in terms of safety and effectiveness, and does not raise new questions. |
| Supports Intended Use | All necessary testing was conducted to ensure the device is safe and effective for its intended use. |
2. Sample Size Used for the Test Set and Data Provenance
The provided text does not explicitly mention a separate "test set" sample size or data provenance (e.g., country of origin, retrospective or prospective collection). Because the submission rests on substantial equivalence of modifications to a predicate, dedicated clinical efficacy testing on a distinct patient test set, as might be expected for a novel device, does not appear to have been the primary focus. The "evaluation performed on the GDxPRO" likely consisted of engineering and bench performance testing rather than a large-scale clinical study with a distinct patient test set.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
The provided text does not specify the number of experts or their qualifications for establishing ground truth for a test set. Given the focus on substantial equivalence based on modifications to an existing device, it's unlikely that a new, independent "ground truth" establishment process by external experts for a clinical dataset was undertaken as part of this 510(k) summary.
4. Adjudication Method for the Test Set
The provided text does not mention any adjudication method (e.g., 2+1, 3+1, none) for a test set. This further supports the interpretation that a traditional clinical study with independent review of cases by multiple experts was not the primary basis of this submission.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
The provided text does not indicate that a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was performed. Therefore, there is no information on the effect size of how much human readers improve with AI vs. without AI assistance. The GDxPRO™ as described here is an imaging device, not an AI diagnostic assistant, so an MRMC study in that context would not be relevant.
6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study
The provided text does not describe a standalone (algorithm only) performance study. The GDxPRO™ is a medical imaging device used by clinicians. Its "performance" is tied to its ability to accurately measure RNFL thickness, which then aids in diagnosis and monitoring by a human ophthalmologist.
7. Type of Ground Truth Used
The provided text does not explicitly state the type of ground truth used for clinical validation (e.g., expert consensus, pathology, outcomes data). Because the device measures Retinal Nerve Fiber Layer (RNFL) thickness to aid in the diagnosis of optic neuropathy, the implicit "ground truth" underlying the predicate device, and hence the GDxPRO™'s equivalence claim, would ultimately be the clinical diagnosis of optic neuropathy by healthcare professionals, based on a combination of clinical findings, imaging, and, for the normative databases, long-term outcomes.
8. Sample Size for the Training Set
The provided text does not specify a sample size for a "training set." The GDxPRO™ applies a "Normative Database" (the GDx Variable Corneal Compensation (VCC) and GDx Enhanced Corneal Compensation (ECC) RNFL Normative Databases). These databases would have been built from a large population of healthy and diseased eyes. While the exact sample size for the original database development is not given in this document, it would need to be substantial to establish reliable normative ranges.
9. How the Ground Truth for the Training Set Was Established
The text mentions "GDx Variable Corneal Compensation (VCC) and GDx Enhanced Corneal Compensation (ECC) RNFL Normative Databases." These databases are fundamental to the device's function, as they provide a reference against which patient RNFL measurements are compared.
The ground truth for these normative databases would have been established by:
- Clinical examinations: Identifying healthy individuals to form the "normal" range.
- Clinical diagnosis: Identifying individuals with various "diseases and disorders of the eye that may cause changes in the polarimetric retinal nerve fiber layer thickness" (e.g., optic neuropathy, glaucoma) to understand abnormal ranges.
- Longitudinal follow-up: In some cases, tracking disease progression or stability to correlate RNFL changes with clinical outcomes.
This process would involve expert ophthalmologists classifying patients and their conditions, likely through a combination of clinical assessment tools, other imaging modalities, and potentially histopathology in cases where it's relevant and obtainable (though less common for RNFL thickness in living patients). The "ground truth" for a normative database essentially means accurate classification of individuals as healthy or having a specific condition based on established clinical criteria at the time the database was built.
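The normative-database comparison described above reduces, at its core, to placing a patient's measurement within the percentile distribution of age-matched healthy eyes. The following is a minimal sketch of that idea only; the function name, percentile cutoffs, and values are illustrative and are not the actual GDx or CIRRUS algorithm:

```python
def classify_against_normative(value, normal_values):
    """Classify a measurement against a normative sample using empirical
    percentiles, mirroring the <1% / <5% flagging commonly used in
    normative-database reports.
    """
    n = len(normal_values)
    rank = sum(1 for v in normal_values if v < value)
    pct = 100.0 * rank / n  # empirical percentile of the patient's value
    if pct < 1.0:
        return "outside normal limits"   # below 1st percentile
    if pct < 5.0:
        return "borderline"              # 1st to 5th percentile
    return "within normal limits"

# hypothetical age-matched normative RNFL thicknesses (µm)
normals = list(range(80, 120))           # 40 healthy-eye values
classify_against_normative(70, normals)  # well below the normative range
```

In practice the normative range is stratified by covariates such as age (and, for some CIRRUS outputs, optic disc size), so the comparison sample would first be restricted or modeled accordingly.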