Topcon 3D OCT-1 Maestro (57-day review)
The Topcon 3D OCT-1 Maestro is a non-contact, high-resolution tomographic and biomicroscopic imaging device that incorporates a digital camera for photographing, displaying and storing data of the retina and surrounding parts of the eye under mydriatic and non-mydriatic conditions.
The 3D OCT-1 Maestro is indicated for in vivo viewing, axial cross sectional, and three-dimensional imaging and measurement of posterior ocular structures, including retinal nerve fiber layer, macula and optic disc as well as imaging of anterior ocular structures.
It also includes a Reference Database for posterior ocular measurements, which provides for quantitative comparison of the retinal nerve fiber layer, optic nerve head, and retina against a database of known normal subjects. The 3D OCT-1 Maestro is indicated for use as a diagnostic device to aid in the diagnosis, documentation and management of ocular health and diseases in the adult population.
The Maestro is a non-contact, high-resolution, tomographic and biomicroscopic imaging system that merges OCT and fundus cameras into a single device. The technological characteristics of the OCT employed are similar to those of already 510(k)-cleared OCT products, such as Topcon's 3D OCT-2000 (K092470), in that it employs conventional spectral-domain OCT with a widely used 840 nm light source. The technological characteristics of the fundus camera employed are also similar to those of already-cleared fundus cameras, such as Topcon's TRC NW300 (K123460), in terms of field of view (FOV) and camera sensor resolution.
The Maestro captures an OCT image and a color fundus image sequentially. It can take anterior-segment OCT images in addition to fundus OCT images, and it includes a reference database for fundus OCT. Captured images are transferred from the device over a LAN cable to an off-the-shelf personal computer (PC) on which the dedicated software for the device is installed. The transferred data are then automatically processed with analysis functions such as automatic retinal layer segmentation, automatic thickness calculation over several grids, optic disc analysis, and comparison with a reference database of eyes free of ocular pathology, and are finally saved automatically to the PC.
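The 510(k) does not describe the internals of these analysis functions, but the thickness-calculation step can be illustrated. Below is a minimal sketch in Python, using invented array names and random stand-in data (not Topcon's proprietary segmentation), of how a retinal thickness map is typically derived from two segmented layer boundaries and then averaged over a grid cell:

```python
import numpy as np

# Hypothetical inputs: per-A-scan axial positions (micrometers) of two
# segmented boundaries over a 512 x 512 en-face grid. The actual
# segmentation algorithm is proprietary and not described in the 510(k).
rng = np.random.default_rng(0)
ilm_um = rng.uniform(100, 120, size=(512, 512))            # inner limiting membrane
rpe_um = ilm_um + rng.uniform(250, 300, size=(512, 512))   # retinal pigment epithelium

# Full retinal thickness is the axial distance between the two boundaries.
thickness_um = rpe_um - ilm_um

# Average thickness within one cell of an analysis grid, modeled here as a
# circular central zone (the real device reports several grid layouts).
yy, xx = np.mgrid[:512, :512]
central_mask = (xx - 256) ** 2 + (yy - 256) ** 2 <= 64 ** 2
print(f"Central zone mean thickness: {thickness_um[central_mask].mean():.1f} um")
```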
Two software programs for installation on an off-the-shelf PC are provided with the device. The first PC software program, called "FastMap", captures the images from the device, analyzes them and enables viewing of the data. The second PC software program, called "OCT Viewer", can only analyze and view the data.
Accessories include the following: power cord; chin-rest paper sheet; LAN cable; chin-rest paper pins; external fixation target; dust cover; spare parts case; and stylus pen. An optional Anterior Segment Kit allows the user to activate the anterior segment imaging functionality of the Maestro device.
The Topcon 3D OCT-1 Maestro is a non-contact, high-resolution tomographic and biomicroscopic imaging device. The provided text outlines its performance data, primarily focusing on repeatability and reproducibility measurements for various ocular structures in different patient populations.
Here's a breakdown of the requested information:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state pre-defined acceptance criteria for the repeatability and reproducibility of the measurements. Instead, it presents the calculated repeatability and reproducibility measurements (SD, Limit, CV%) for the Maestro device across different parameters and patient groups. The "acceptance criteria" appear to be implied by the presentation of these results, suggesting that the device's performance, as measured, is considered acceptable for demonstrating substantial equivalence to predicate devices.
However, based on the provided tables, here's a summary of the reported device performance:
| Measurement Type | Population | Scan Pattern | Typical Repeatability CV% (Range) | Typical Reproducibility CV% (Range) |
|---|---|---|---|---|
| Full Retinal Thickness | Normal Eyes | 12x9 3D Wide | 0.286% - 1.115% | 0.526% - 1.461% |
| | | 6x6 3D Macula | 0.305% - 0.684% | 0.498% - 1.025% |
| | Retinal Disease Eyes | 12x9 3D Wide | 0.378% - 1.478% | 0.595% - 1.897% |
| | | 6x6 3D Macula | 0.376% - 1.090% | 0.660% - 1.336% |
| | Glaucoma Eyes | 12x9 3D Wide | 0.493% - 1.199% | 0.661% - 1.639% |
| | | 6x6 3D Macula | 0.332% - 1.288% | 0.719% - 1.239% |
| Ganglion Cell + IPL | Normal Eyes | 12x9 3D Wide | 0.404% - 0.950% | 0.508% - 1.162% |
| | | 6x6 3D Macula | 0.364% - 1.044% | 0.557% - 1.148% |
| | Retinal Disease Eyes | 12x9 3D Wide | 1.041% - 2.673% | 1.101% - 3.604% |
| | | 6x6 3D Macula | 0.690% - 1.452% | 0.984% - 1.824% |
| | Glaucoma Eyes | 12x9 3D Wide | 0.628% - 1.563% | 0.716% - 1.784% |
| | | 6x6 3D Macula | 0.593% - 1.288% | 0.736% - 1.451% |
| Ganglion Cell Complex Thickness | Normal Eyes | 12x9 3D Wide | 0.470% - 0.821% | 0.645% - 1.056% |
| | | 6x6 3D Macula | 0.498% - 1.400% | 0.729% - 1.607% |
| | Retinal Disease Eyes | 12x9 3D Wide | 1.112% - 3.213% | 1.112% - 3.232% |
| | | 6x6 3D Macula | 0.485% - 1.093% | 0.601% - 1.093% |
| | Glaucoma Eyes | 12x9 3D Wide | 0.638% - 1.189% | 0.687% - 1.240% |
| | | 6x6 3D Macula | 0.508% - 1.131% | 0.678% - 1.265% |
| Retinal Nerve Fiber Layer (RNFL) - Average | Normal Eyes | 12x9 3D Wide | 1.318% | 1.517% |
| | | 6x6 3D Disc | 0.933% | 1.099% |
| Retinal Nerve Fiber Layer (RNFL) - Sectoral | Normal Eyes | 12x9 3D Wide | 2.461% - 16.711% | 3.040% - 18.538% |
| | | 6x6 3D Disc | 3.738% - 13.898% | 4.405% - 14.407% |
| | Retinal Disease Eyes | 12x9 3D Wide | 1.594% - 8.143% | 1.888% - 8.675% |
| | | 6x6 3D Disc | 1.084% - 5.725% | 1.480% - 7.387% |
| | Glaucoma Eyes | 12x9 3D Wide | 1.970% - 8.261% | 2.097% - 8.299% |
| | | 6x6 3D Disc | 1.929% - 6.480% | 1.933% - 7.074% |
| Optic Disc | Normal Eyes | 12x9 3D Wide | 3.520% - 6.600% | 4.233% - 7.967% |
| | | 6x6 3D Disc | 3.313% - 6.359% | 4.074% - 8.139% |
| | Retinal Disease Eyes | 12x9 3D Wide | 3.858% - 8.404% | 4.981% - 20.586% |
| | | 6x6 3D Disc | 2.855% - 5.627% | 3.438% - 11.024% |
| | Glaucoma Eyes | 12x9 3D Wide | 3.179% - 14.274% | 3.811% - 17.103% |
| | | 6x6 3D Disc | 1.852% - 5.813% | 1.959% - 7.201% |
The "Limit" values in the tables are calculated as 2.8 x SD, representing a range within which 95% of repeated measurements are expected to fall. The "CV%" is the Coefficient of Variation, indicating precision relative to the mean.
2. Sample Sizes Used for the Test Set and Data Provenance
- Test Set (Clinical Studies for Repeatability and Reproducibility):
- Normal Subjects: 25 subjects for macula and optic disc measurements (full retinal thickness, ganglion cell + IPL, ganglion cell complex thickness, retinal nerve fiber layer, optic disc parameters). Explicitly stated in the tables (N=25).
- Subjects with Retinal Disease: 26 subjects for macula and optic disc measurements (full retinal thickness, ganglion cell + IPL, ganglion cell complex thickness, retinal nerve fiber layer, optic disc parameters). Explicitly stated in the tables (N=26).
- Subjects with Glaucoma: 25 subjects for macula and optic disc measurements (full retinal thickness, ganglion cell + IPL, ganglion cell complex thickness, retinal nerve fiber layer, optic disc parameters). Explicitly stated in the tables (N=25).
- Data Provenance: The document does not state the country of origin, and while the studies are referred to as "clinical studies," it is not specified whether they were prospective or retrospective. The manufacturer, Topcon Corporation, is based in Japan, while the 510(k) contact is in the US, so the studies could have been conducted in either or both regions.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
The document does not specify the number or qualifications of experts used to establish a "ground truth" for the test set in the context of the repeatability and reproducibility studies. The clinical studies were conducted to determine the agreement, repeatability, and reproducibility of measurement data, not for diagnostic accuracy against a ground truth.
However, it mentions: "Consistent with the labeling for the test and control devices, the clinical site was permitted to make manual adjustments to automated segmentation based on the clinician's judgment." This indicates that clinicians (likely ophthalmologists or optometrists) reviewed and, where needed, adjusted the automated segmentation, but they were not acting as "ground truth" experts defining disease states or target measurement values against which the algorithm's accuracy was validated.
4. Adjudication Method for the Test Set
The document does not describe an adjudication method in the context of establishing ground truth for the test set. The clinical studies focused on repeatability and reproducibility of quantitative measurements, rather than classification or diagnosis that would typically require an adjudication process.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance
No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study evaluating human reader improvement with or without AI assistance was not reported in this document. The clinical studies conducted were focused on the device's measurement precision (repeatability and reproducibility) and agreement with predicate devices rather than human-AI collaboration for diagnostic accuracy.
6. If a Standalone Performance Evaluation (i.e., Algorithm Only, Without a Human in the Loop) Was Done
The clinical studies described primarily assessed the precision of the device's measurements (which involve algorithms for segmentation and thickness calculation) rather than the standalone diagnostic performance of an AI algorithm. The device performs automatic retinal layer segmentation, automatic thickness calculation, and optic disc analysis. The phrase "the clinical site was permitted to make manual adjustments to automated segmentation based on the clinician's judgment" (page 6) suggests that the device's algorithms operate with potential human oversight, implying it's not strictly a standalone AI performance evaluation for diagnostic purposes. The data presented are for the device's ability to consistently provide these measurements.
7. The Type of Ground Truth Used
For the repeatability and reproducibility studies, the "ground truth" is not a diagnostic label (e.g., pathology, outcomes data). Instead, the studies assess the consistency of the device's quantitative measurements of ocular structures (e.g., retinal thickness, RNFL thickness) by comparing multiple measurements taken under similar or varied conditions. The reference database uses "known normal subjects" but this is for comparative analysis against a normal population rather than for establishing a "ground truth" for disease diagnosis in the test set.
8. The Sample Size for the Training Set
The document specifies a "Reference Database" was compiled using "399 subject eyes from normal study subjects." This database is for "quantitative comparison... to a database of known normal subjects," which functions as a normative reference rather than a training set for an AI/algorithm in the conventional sense (e.g., for classification tasks).
If parts of the device's functionality (like automatic segmentation) involve machine learning, the training set size for those specific algorithms is not provided in this document. The 399 normal subjects form a reference database, not explicitly an algorithm training set.
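Although the 510(k) does not explain how the reference database is applied at analysis time, normative OCT databases are commonly used by locating a patient's measurement within the distribution of values from normal eyes. A hedged sketch in Python follows; the percentile cut-offs and all values below are illustrative assumptions, not Topcon's disclosed method:

```python
import numpy as np

# Invented normative sample standing in for the 399 normal study eyes:
# average RNFL thickness (um). The real database values are not disclosed.
rng = np.random.default_rng(0)
normative_rnfl_um = rng.normal(loc=100.0, scale=10.0, size=399)

def classify_against_normals(value_um: float, normals: np.ndarray) -> str:
    """Flag a measurement by its empirical percentile among normal eyes,
    mirroring the common green/yellow/red normative-display convention."""
    percentile = (normals < value_um).mean() * 100
    if percentile < 1:
        return "outside normal limits (<1st percentile)"
    if percentile < 5:
        return "borderline (1st-5th percentile)"
    return "within normal limits"

print(classify_against_normals(78.0, normative_rnfl_um))
```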
9. How the Ground Truth for the Training Set Was Established
For the "Reference Database" of 399 normal subjects:
- How it was established: The study collected measurements of various ocular structures from these normal eyes. The "normal" status of these subjects would have been established through clinical evaluation to ensure they were free of ocular pathology.
- Type of Ground Truth: The ground truth for this reference database is the consensus clinical determination that the subjects have "normal eyes" and the quantitative measurements derived from these normal eyes form the expected range for a healthy population. The document states it provides "quantitative comparison... to a database of known normal subjects." It also mentions "a reference database of eyes free of ocular pathology."
Topcon TRC-NW400 (105-day review)
The TRC-NW400 is intended for use in capturing images of the retina and the anterior segment of the eye and presenting the data to the eye care professional, without the use of a mydriatic.
The Topcon TRC-NW400 is a fundus camera designed to observe, photograph and record the fundus oculi of a patient's eye with or without the use of a mydriatic. The TRC-NW400 does not come into contact with the patient's eye and provides the fundus oculi image information as an electronic image for later analysis.
The TRC-NW400 houses a color LCD monitor used for observation and display of a photographed image and a digital photography unit used for recording images. A photographed image may be recorded on a personal computer (hereinafter referred to as a PC) or on a commercially available storage device (such as flash memory, a hard disk, or a card reader/writer) connected to the TRC-NW400. A photographed image may also be printed on a commercially available digital printer connected to the TRC-NW400 or the PC.
Patient information may be input on the control panel of the main unit or by using a commercially available data input device (for example: a bar code reader or a magnetic card reader) or PC.
The provided text describes a 510(k) premarket notification for the Topcon TRC-NW400 Non-Mydriatic Retinal Camera. This document is focused on demonstrating substantial equivalence to a predicate device, the TRC-NW300, rather than providing detailed acceptance criteria and a comprehensive study report for standalone device performance.
Therefore, much of the requested information regarding specific acceptance criteria, sample sizes for test/training sets, expert qualifications, and adjudication methods is not explicitly detailed in this 510(k) summary. The document focuses on regulatory compliance and a comparative analysis.
Here's an attempt to extract and infer information based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria (Inferred from regulatory standards and comparative study) | Reported Device Performance (TRC-NW400) |
|---|---|
| Compliance with IEC60601-1:2005 (Basic Safety & Essential Performance) | Compliant |
| Compliance with IEC 60601-1-2:2007 (Electromagnetic Compatibility) | Compliant |
| Compliance with ISO 15004-1:2006 (General requirements for ophthalmic instruments) | Compliant |
| Compliance with ISO 15004-2:2007 (Light hazard protection) | Compliant |
| Compliance with ISO 10940:2009 (Fundus cameras) | Compliant |
| Image quality (sharpness, focus, resolution power) for model eye images | Equivalent to predicate device (TRC-NW300) |
| Image quality for clinical images | Equivalent to or better than predicate device (TRC-NW300) |
| Color imaging quality | Equivalent to predicate device (TRC-NW300) |
| Intended Use | Same as predicate device |
| Technological Characteristics | Similar to predicate device, minor differences do not raise new safety/effectiveness questions |
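The document does not disclose the metrics behind the image-quality rows above. As an illustration only, one objective focus/sharpness score that can be computed on model-eye captures is the variance of the Laplacian; the function and data below are assumptions for demonstration, not the grading method actually used in the submission:

```python
import numpy as np
from scipy import ndimage

def sharpness_score(image: np.ndarray) -> float:
    """Variance of the Laplacian: a common objective focus/sharpness score
    (higher means sharper edges). Not necessarily the metric Topcon used."""
    return float(ndimage.laplace(image.astype(np.float64)).var())

# Random arrays stand in for model-eye captures from the two cameras.
rng = np.random.default_rng(0)
img_subject, img_predicate = rng.random((256, 256)), rng.random((256, 256))
print(sharpness_score(img_subject), sharpness_score(img_predicate))
```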
2. Sample size used for the test set and the data provenance
- Test set sample size: Not explicitly stated. The document mentions "an analysis was performed of images captured with the TRC-NW400 and the predicate device which were formally evaluated" and "the grading of clinical images from the TRC-NW400." This implies a test set of images, but the number is not provided.
- Data provenance: Not explicitly stated. It refers to "model eye images" and "clinical images," but the country of origin or whether the study was retrospective or prospective is not mentioned.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- This information is not provided in the document. It only states that images were "formally evaluated" and "graded."
4. Adjudication method for the test set
- This information is not provided in the document.
5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance
- No, an MRMC comparative effectiveness study involving human readers and AI assistance is not mentioned. The study described compares the TRC-NW400's image quality to that of a predicate device (TRC-NW300). The device itself (TRC-NW400) is a retinal camera, not an AI-powered diagnostic tool.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
- This question is not applicable in the context of this device. The TRC-NW400 is a retinal camera that captures images for an eye care professional to interpret; it is not an algorithm that performs standalone diagnoses. The "performance data" presented relates to the technical image capture capabilities of the camera.
7. The type of ground truth used
- For "model eye images," the ground truth implicitly would be the known ideal conditions and characteristics of the model eye.
- For "clinical images," the "grading" suggests that expert assessment was used to determine image quality, though the specific 'ground truth' criteria (e.g., presence/absence of disease, image clarity ratings) and how it was established are not detailed.
8. The sample size for the training set
- This information is not provided. The document describes a comparison study, not the development of an algorithm that would typically require a training set.
9. How the ground truth for the training set was established
- This information is not applicable as no training set for an algorithm is mentioned.