Search Results
Found 12 results
510(k) Data Aggregation
(89 days)
Topcon Corporation
· 3D OPTICAL COHERENCE TOMOGRAPHY 3D OCT-1 (type: Maestro2)
The Topcon 3D Optical Coherence Tomography 3D OCT-1 (Type:Maestro2) is a non-contact, high resolution tomographic and biomicroscopic imaging device that incorporates a digital camera for photographing, displaying and storing the data of the retina and surrounding parts of the eye to be examined under mydriatic conditions.
It is indicated for in vivo viewing, axial cross sectional, and three-dimensional imaging and measurement of posterior ocular structures, including retinal nerve fiber layer, macula and optic disc as well as imaging of anterior ocular structures.
It also includes a reference database for posterior ocular measurements which provides for the quantitative comparison of the retinal nerve fiber layer, optic nerve head, and macula in the human retina to a database of known normal subjects.
It is indicated for use as a diagnostic device to aid in the diagnosis, documentation and management of ocular health and diseases in the adult population.
All the above functionalities and indications are available in combination with IMAGEnet 6.
· Indications for Use of the combination of the Maestro2 in conjunction with IMAGEnet6
Maestro2 in combination with IMAGEnet 6 is indicated as an aid in the visualization of vascular structures of the posterior segment of the eye including the retina, optic disc and choroid.
· IMAGEnet6 Ophthalmic Data System
The IMAGEnet6 Ophthalmic Data System is a software program that is intended for use in the collection, storage and management of digital images, patient data, diagnostic data and clinical information from Topcon devices.
It is intended for processing and displaying ophthalmic images and optical coherence tomography data.
The IMAGEnet6 Ophthalmic Data System uses the same algorithms and reference databases from the original data capture device as a quantitative tool for the comparison of posterior ocular measurements to a database of known normal subjects.
· Indications for Use of the combination of the Maestro2 in conjunction with IMAGEnet6
Maestro2 in combination with IMAGEnet 6 is indicated as an aid in the visualization of vascular structures of the posterior segment of the eye including the retina, optic disc and choroid.
3D OPTICAL COHERENCE TOMOGRAPHY 3D OCT-1 (type: Maestro2) with system linkage software (herein referred to as "Maestro2") is a non-contact ophthalmic device combining spectral-domain optical coherence tomography (SD-OCT) with digital color fundus photography. Maestro2 includes an optical system of OCT, a fundus camera (color, IR and Red-free image), and an anterior observation camera. Maestro2 is used together with IMAGEnet6 via the System linkage software, a PC application installed on an off-the-shelf PC connected to Maestro2; the combined system is capable of OCT imaging and color fundus photography. For this 510(k) notification, Maestro2 has been modified to allow for OCT "angiographic" imaging (only in conjunction with IMAGEnet6).
IMAGEnet6 is a software program installed on a server computer and operated via a web browser on a client computer. It is used to acquire, store, manage, process, measure, and display patient information, examination information and image information delivered from TOPCON devices.
When combined with Maestro2, IMAGEnet6 plays an essential role as the user interface of the external PC by working together with the linkage software of Maestro2. In this configuration, IMAGEnet6 performs general GUI functions, such as providing the log-in screen, displaying menu icons, and offering display, measurement, analysis and image-editing functions; it also stores and manages the data of captured OCT scans and provides the reference database for quantitative comparison. For this 510(k) notification, IMAGEnet6 has been modified to allow for OCT "angiographic" imaging on the Maestro2.
Here's a summary of the acceptance criteria and study details for the Topcon Corporation's 3D OCT-1 (Maestro2) and IMAGEnet6 Ophthalmic Data System, primarily focusing on the new OCT Angiography functionality:
Acceptance Criteria and Device Performance
The study primarily focused on comparing the performance of the Maestro2 (with IMAGEnet6) against the predicate CIRRUS HD-OCT, particularly for its new OCT Angiography (OCTA) imaging capabilities. The acceptance criteria were implicitly defined through the evaluation of "response rates" and agreement metrics.
Table of Acceptance Criteria and Reported Device Performance
Performance Metric | Acceptance Criteria (Implicit) | Reported Device Performance (Maestro2 vs. CIRRUS HD-OCT)
---|---|---
OCTA Image Quality Response Rate (Maestro2 scans same or better grade than CIRRUS HD-OCT) | High percentage indicating comparable or superior image quality. | Entire cohort: 3x3-mm macular scan 75.0%; 6x6-mm macular scan 71.0%; 4.5x4.5-mm disc scan 71.0%. Pathology group: 3x3-mm macular scan 75.6%; 6x6-mm macular scan 77.9%; 4.5x4.5-mm disc scan 74.4%
Visibility of Key Anatomical Vascular Features Response Rate (FAZ, large, medium, small vessels/capillaries; Maestro2 scans same or better grade than CIRRUS HD-OCT) | High percentage indicating comparable or superior visibility of features. | Entire cohort: 3x3-mm macular scan: FAZ visibility 87.1%, medium vessels 87.9%, small vessels/capillaries 82.3%. 6x6-mm macular scan: FAZ visibility 81.5%, large vessels 87.1%, medium vessels 77.4%, small vessels/capillaries 79.8%. 4.5x4.5-mm disc scan: large vessels 83.9%, medium vessels 80.6%, small vessels/capillaries 80.6%
Positive Percent Agreement (PPA) for Pathological Vascular Features (Maestro2 vs. FA/ICGA) | High PPA indicating good sensitivity in identifying pathologies. | Microaneurysms (MAs): 3x3-mm macular scans 96.6%; 6x6-mm macular scans 96.6%; 4.5x4.5-mm disc scans 73.9%. Retinal ischemia/capillary dropout (RI/CD): 3x3-mm scans 93.1%; 6x6-mm scans 100%; 4.5x4.5-mm disc scans 75.0%. Choroidal neovascularization (CNV): 3x3-mm macular scans 88.9%; 6x6-mm macular scans 84.2%; 4.5x4.5-mm disc scans 66.7%
Negative Percent Agreement (NPA) for Pathological Vascular Features (Maestro2 vs. FA/ICGA) | High NPA indicating good specificity in ruling out pathologies. | MAs: 3x3-mm macular scans 92.7%; 6x6-mm macular scans 92.7%; 4.5x4.5-mm disc scans 100%. RI/CD: 3x3-mm scans 85.4%; 6x6-mm scans 87.8%; 4.5x4.5-mm disc scans 85.7%. CNV: 3x3-mm macular scans 82.7%; 6x6-mm macular scans 84.3%; 4.5x4.5-mm disc scans 98.4%
Response Rate for Identification of Pathologies (Maestro2 scans same or better outcome than CIRRUS HD-OCT) | High percentage indicating comparable or superior ability to identify specific pathologies. | MAs: 3x3-mm macular scans 81.0%; 6x6-mm macular scans 82.1%; 4.5x4.5-mm disc scans 75.6%. RI/CD: 3x3-mm macular scans 76.2%; 6x6-mm macular scans 79.8%; 4.5x4.5-mm disc scans 71.8%. CNV: 3x3-mm macular scans 79.8%; 6x6-mm macular scans 81.0%; 4.5x4.5-mm disc scans 75.3%
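The PPA and NPA figures above follow the standard 2x2 agreement definitions against a non-reference standard (here, dye-based FA/ICGA). As a minimal sketch of how these percentages are computed — the counts below are illustrative only and are not taken from the submission:

```python
def percent_agreement(tp: int, fp: int, fn: int, tn: int) -> tuple[float, float]:
    """Return (PPA, NPA) in percent from a 2x2 agreement table.

    PPA = TP / (TP + FN): fraction of reference-positive cases
    (positive on FA/ICGA) that the device also calls positive.
    NPA = TN / (TN + FP): fraction of reference-negative cases
    that the device also calls negative.
    """
    ppa = 100.0 * tp / (tp + fn)
    npa = 100.0 * tn / (tn + fp)
    return ppa, npa

# Hypothetical counts for one feature/scan pattern:
ppa, npa = percent_agreement(tp=45, fp=5, fn=5, tn=45)
print(f"PPA = {ppa:.1f}%, NPA = {npa:.1f}%")  # PPA = 90.0%, NPA = 90.0%
```

Because FA/ICGA is itself an imperfect comparator rather than true disease status, these are reported as agreement measures (PPA/NPA) rather than sensitivity and specificity.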
Study Details
- Sample size used for the test set and the data provenance:
- Sample Size: 124 eligible eyes from 122 subjects. This included 38 "normal" eyes and 86 "pathology" eyes.
- Data Provenance: Prospective, multi-center, observational study. The country of origin of the data is not explicitly stated in the provided text, but "multi-center" suggests data from various clinical sites.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Number of Experts: Not explicitly stated. The text mentions that images were sent to an "independent reading center (RC) for image grading," implying multiple experts, but the exact number is not provided.
- Qualifications of Experts: Not explicitly stated. The nature of the study (comparing OCTA images and identifying vascular pathologies) suggests that the graders at the independent reading center would be ophthalmologists or trained image graders with expertise in retinal imaging and pathology.
- Adjudication method for the test set:
- The document implies that an "independent reading center (RC)" performed "image grading." The specific adjudication method (e.g., 2+1, 3+1, none) is not explicitly stated.
- Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance:
- This was a comparative effectiveness study, but it primarily compared device performance (Maestro2 vs. CIRRUS HD-OCT) and visualization capabilities against a reference standard (dye-based angiography), rather than assessing improvement in human readers with AI assistance versus without. The OCTA function itself is part of the imaging device, providing images for human interpretation, not an AI assisting human reads. Therefore, an effect size of human readers improving with/without AI assistance is not applicable to this study design as described. The study aims to demonstrate that the new device's OCTA images are comparable or superior to the predicate device and aid in visualizing vascular structures.
- Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done:
- The document describes the device as providing visualization capabilities ("indicated as an aid in the visualization"). The PPA/NPA results compare the device's diagnostic capability for certain pathologies against dye-based angiography, suggesting a standalone assessment of the image data's ability to reveal these pathologies. However, the exact methodology for pathology identification (e.g., whether it relied purely on automated detection within the device or expert interpretation of the OCTA images generated by the device) is not fully detailed. Given the context of "image grading" by a reading center, it strongly suggests expert interpretation of the images produced by the Maestro2 (device-only output).
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- The ground truth for identifying key pathological vascular features (Microaneurysms, Retinal Ischemia/Capillary Dropout, and Choroidal Neovascularization) was established by dye-based angiography (e.g., fluorescein angiography [FA] and indocyanine green angiography [ICGA]). This is referred to as the reference standard against which the OCTA images from Maestro2 and CIRRUS HD-OCT were compared. Additionally, "clinically significant pathology" as determined by clinical assessment was used to categorize the "Pathology Population."
- The sample size for the training set:
- The document only describes a clinical performance test study. Information regarding a specific training set size is not provided within this document. The device uses algorithms and reference databases, suggesting prior training, but details on that process or data size are absent.
- How the ground truth for the training set was established:
- As no information on a specific training set is provided, how its ground truth was established is also not detailed in this document. The document mentions "reference databases for posterior ocular measurements" (e.g., for normal subjects), implying ground truth for these databases would have been established through prior clinical studies or expert consensus on normal ocular anatomy.
(156 days)
Topcon Corporation
The TOPCON 3D Optical Coherence Tomography 3D OCT-1 (Type:Maestro2) is a noncontact, high resolution tomographic and biomicroscopic imaging device that incorporates a digital camera for photographing, displaying and storing the data of the retina and surrounding parts of the eye to be examined under Mydriatic and non-Mydriatic conditions.
The TOPCON 3D Optical Coherence Tomography 3D OCT-1 (Type:Maestro2) is indicated for in vivo viewing, axial cross sectional, and three-dimensional imaging and measurement of posterior ocular structures, including retina, retinal nerve fiber layer, macula and optic disc as well as imaging of anterior ocular structures.
It also includes a Reference Database for posterior ocular measurements which provide for the quantitative comparison of retinal nerve fiber layer, optic nerve head, and the macula in the human retina to a database of known normal subjects.
The TOPCON 3D Optical Coherence Tomography 3D OCT-1 (Type:Maestro2) is indicated for use as a diagnostic device to aid in the diagnosis, documentation and management of ocular health and diseases in the adult population.
3D OPTICAL COHERENCE TOMOGRAPHY 3D OCT-1 (type: Maestro2) with System linkage software (herein referred to as "Maestro2") is a non-contact ophthalmic device combining spectral-domain optical coherence tomography (SD-OCT) with digital color fundus photography. Maestro2 includes an optical system of OCT, a fundus camera (color, IR and Red-free image), and an anterior observation camera. The color fundus camera acquires color images of the posterior segment of the eye under mydriatic or non-mydriatic conditions. Maestro2 is used together with IMAGEnet6 via the System linkage software, a PC application installed on an off-the-shelf PC connected to Maestro2. The remote operation function is not intended to be used from any further distance (e.g., operation from different rooms or different buildings) beyond social distancing recommendations.
This document is a 510(k) summary for the Topcon 3D Optical Coherence Tomography 3D OCT-1 (type: Maestro2) with System linkage software. It focuses on demonstrating substantial equivalence to a previously cleared predicate device, rather than presenting a performance study with detailed acceptance criteria for an AI/algorithm-driven diagnostic device.
Therefore, the document does not contain the information requested regarding acceptance criteria and a study proving the device meets them, especially in the context of AI/algorithm performance. It primarily addresses the device's technical specifications, intended use, and conformance to general medical device standards.
Specifically, it states:
- "This section is not applicable because clinical data was not required for this 510(k) submission." (Page 6)
- The substantial equivalence discussion highlights that the main difference in the subject device is the "system linkage software" and an "optional remote operation function." For the remote operation function, it states that "comparison testing confirmed that image quality and diagnosability is the same with or without the remote operation function." This is a functional comparison, not a clinical performance study with defined acceptance criteria for diagnostic output.
In summary, the provided text does not offer the details required to answer your prompt.
(170 days)
Topcon Corporation
The IMAGEnet6 Ophthalmic Data System is a software program that is intended for use in the collection, storage and management of digital images, patient data, diagnostic data and clinical information from Topcon devices.
It is intended for processing and displaying ophthalmic images and optical coherence tomography data.
The IMAGEnet6 Ophthalmic Data System uses the same algorithms and reference databases from the original data capture device as a quantitative tool for the comparison of posterior ocular measurements to a database of known normal subjects.
IMAGEnet6 Ophthalmic Data System is a Web application that allows management of patient information, exam information and image information. It is installed on a server PC and operated via a web browser of a client PC.
When combined with 3D OCT-1 (type: Maestro2), IMAGEnet6 provides the GUI for the remote operation function. This optional function enables users to access some of the image capture functions by operating a PC or tablet PC connected to the external PC of the 3D OCT-1 (Type:Maestro2) device via an ethernet cable. The remote operation function is not intended to be used from any further distance (e.g., operation from different rooms or different buildings) beyond social distancing recommendations.
The provided document is a 510(k) Premarket Notification from the FDA for the IMAGEnet6 Ophthalmic Data System. This device is a Medical Image Management and Processing System, classified as Class II, with product code NFJ.
Based on the document, the IMAGEnet6 Ophthalmic Data System, subject device (version 2.52.1), is considered substantially equivalent to the predicate device (IMAGEnet6, version 1.52, K171370). The submission is primarily for a software update with changes including a modified remote operation function and expanded compatibility with additional Topcon devices.
Here's an analysis of the acceptance criteria and study information provided:
1. Table of Acceptance Criteria and Reported Device Performance:
The document does not specify quantitative acceptance criteria in a typical clinical study format (e.g., target sensitivity, specificity). Instead, the acceptance criterion for the software modification (remote operation function) appears to be that its performance (image quality and diagnosability) is equivalent to or the same as the device without the remote operation function.
Acceptance Criterion (Implicit) | Reported Device Performance
---|---
Image quality with the remote operation function is the same as without it. | Confirmed that image quality is the same with or without the remote operation function.
Diagnosability with the remote operation function is the same as without it. | Confirmed that diagnosability is the same with or without the remote operation function.
2. Sample size used for the test set and data provenance:
- Sample Size for Test Set: Not explicitly stated. The document mentions "comparison testing" was performed for the modified remote operation function, but the size of the test set (number of images or cases) is not provided.
- Data Provenance: Not specified. The document does not indicate the country of origin of the data or whether it was retrospective or prospective.
3. Number of experts used to establish the ground truth for the test set and qualifications of those experts:
This information is not provided in the document. As "clinical performance data was not required for this 510(k) submission," there is no mention of expert-established ground truth for a clinical test set. The assessment of "image quality and diagnosability" was likely an internal validation, possibly by qualified personnel, but the specifics are not disclosed.
4. Adjudication method for the test set:
This information is not provided. Given that clinical performance data was not required, a formal adjudication process akin to clinical trials is unlikely to have been detailed.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs without AI assistance:
- MRMC Study: No, an MRMC comparative effectiveness study was not done. The device is an image management and processing system, not an AI-powered diagnostic tool intended to assist human readers in interpretation.
- Effect Size: Not applicable, as no such study was performed or required.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:
- The document implies that "comparison testing" was conducted to confirm image quality and diagnosability with and without the remote operation function. This would be a form of standalone performance assessment of the system's ability to maintain image integrity and diagnostic utility, but it does not involve the standalone diagnostic performance of an algorithm without human input for disease detection.
7. The type of ground truth used:
- The document states that "clinical performance data was not required." Therefore, there is no mention of a ground truth established by expert consensus, pathology, or outcomes data for diagnostic accuracy. The ground truth for the "comparison testing" of the remote operation function would likely be the inherent quality and diagnostic features of the images generated by the original (non-remote) system. The testing aimed to confirm that the remote function did not degrade this baseline.
8. The sample size for the training set:
IMAGEnet6 is described as a "software program that is intended for use in the collection, storage and management of digital images, patient data, diagnostic data and clinical information from Topcon devices." It uses "the same algorithms and reference databases from the original data capture device as a quantitative tool for the comparison of posterior ocular measurements to a database of known normal subjects."
This suggests the device itself is not a deep learning AI model that requires a "training set" in the conventional sense of machine learning for diagnostic tasks. Rather, it integrates existing algorithms and reference databases. Therefore, the concept of a "training set" for the IMAGEnet6 software as a whole is not applicable in the context of this 510(k) submission.
9. How the ground truth for the training set was established:
As the concept of a training set for a machine learning model is not applicable to the functionality described for IMAGEnet6 in this submission, the method for establishing ground truth for a training set is not relevant or discussed. The reference databases it utilizes would have had their own data collection and establishment methods, but those pertain to the underlying instruments, not the IMAGEnet6 system itself regarding this submission.
(185 days)
Topcon Corporation
The TOPCON 3D Optical Coherence Tomography 3D OCT-1 (Type:Maestro2) is a noncontact, high resolution tomographic and biomicroscopic imaging device that incorporates a digital camera for photographing, displaying and storing the data of the retina and surrounding parts of the eye to be examined under Mydriatic and non-Mydriatic conditions.
The TOPCON 3D Optical Coherence Tomography 3D OCT-1 (Type:Maestro2) is indicated for in vivo viewing, axial cross sectional, and three-dimensional imaging and measurement of posterior ocular structures, including retina, retinal nerve fiber layer, macula and optic disc as well as imaging of anterior ocular structures.
It also includes a Reference Database for posterior ocular measurements which provide for the quantitative comparison of retinal nerve fiber layer, optic nerve head, and the macula in the human retina to a database of known normal subjects.
The TOPCON 3D Optical Coherence Tomography 3D OCT-1 (Type:Maestro2) is indicated for use as a diagnostic device to aid in the diagnosis, documentation and management of ocular health and diseases in the adult population.
3D OPTICAL COHERENCE TOMOGRAPHY 3D OCT-1 (type: Maestro2) is a non-contact ophthalmic device combining spectral-domain optical coherence tomography (SD-OCT) with digital color fundus photography. Maestro2 includes an optical system of OCT, a fundus camera (color, IR and Red-free image), and an anterior observation camera. The color fundus camera acquires color images of the posterior segment of the eye under mydriatic or non-mydriatic conditions.
This document is a 510(k) premarket notification for the TOPCON 3D Optical Coherence Tomography 3D OCT-1 (Type: Maestro2). It aims to demonstrate substantial equivalence to a legally marketed predicate device (TOPCON 3D OCT-1, K170164). The document indicates that clinical performance data was NOT required for this 510(k). Therefore, the device's acceptance criteria and the study proving it meets them do not involve clinical trials or human-in-the-loop performance studies as typically seen with AI-powered diagnostics.
Instead, the performance data provided focuses on verifying the device functions as intended through engineering tests against a set of FDA-recognized, voluntary consensus standards and in-house specifications. The substantial equivalence argument relies on the similarity of intended use, indications for use, operating principle, and technological characteristics to the predicate device.
Therefore, based on the provided document, the following points regarding acceptance criteria and studies are applicable primarily to the technical performance and substantial equivalence to a predicate device, rather than a clinical performance study of a novel AI algorithm.
1. A table of acceptance criteria and the reported device performance:
The document doesn't present a specific "acceptance criteria" table with precise numerical values for clinical performance metrics. Instead, it demonstrates compliance with recognized technical standards for ophthalmic devices and general medical electrical equipment. The "reported device performance" is implicitly that the device meets these standards and performs equivalently to the predicate.
Here's an interpretation based on the provided information, focusing on functional specifications, as no clinical performance data was provided or required:
Acceptance Criteria Category | Specific Criteria (from document) | Reported Device Performance (Implied from "Substantial Equivalence" discussion)
---|---|---
General Safety & Performance | IEC 60601-1-2:2014+AMD1:2020 (Medical electrical equipment - Electromagnetic disturbances); ANSI AAMI ES60601-1:2005/(R)2012 (Medical electrical equipment - Basic safety and essential performance); ISO 15004-1:2020 (Ophthalmic instruments - General requirements) | Device performs as intended and complies with these standards, demonstrating substantial equivalence.
Ophthalmic Instrument Specifics | ISO 10940:2009 (Ophthalmic instruments - Fundus cameras); ANSI Z80.36-2021 (Light Hazard Protection for Ophthalmic Instruments) | Device performs as intended and complies with these standards, demonstrating substantial equivalence. Specifically, "Device testing confirmed Maestro2 fulfills the standard for fundus camera."
Usability | IEC 60601-1-6:2013 (Usability); IEC 62366-1:2015+AMD1:2020 (Application of usability engineering) | Device performs as intended and complies with these standards.
Laser Safety | IEC 60825-1:2007 (Safety of laser products) | Device performs as intended and complies with this standard.
Biocompatibility (if applicable) | ISO 10993-1:2018 (Biological evaluation - General); ISO 10993-5:2009 (Biological evaluation - Cytotoxicity); ISO 10993-10:2010 (Biological evaluation - Irritation and skin sensitization) | Device performs as intended and complies with these standards.
Software Performance | IEC 62304:2015 (Medical device software - Software life cycle processes) | Software verification and validation testing were performed and documentation provided. "Software for Maestro2 was concluded to be a Moderate level of concern." "Software testing confirmed Maestro2 functions as intended with the updated windows OS version."
Interoperability | NEMA PS 3.1 - 3.20 2021e (Digital Imaging and Communications in Medicine (DICOM) Set) | Device performs as intended and complies with this standard.
Functional Equivalence | Match or demonstrate equivalent performance for: type of photography (Color, Red-free, IR); picture angle; operating distance; observable/photographable pupil diameter; scan range and pattern; scan speed; lateral and in-depth resolution; fixation target; absence/presence of movable IR filter (with justification for difference); focal length of relay lens (with justification for difference); camera specifications (resolution, sensor type, IF, pixel size) | "The results of the testing support substantial equivalence by demonstrating that the device performs as intended and complies with the same standards as the predicate device." Differences in camera, IR filter, focal length, and OS are addressed with verification testing deeming them equivalent or not affecting substantial equivalence.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective):
Since no clinical data was required or submitted, there is no "test set" of patient data in the clinical sense. The testing performed was primarily engineering and validation testing of the device's technical specifications and compliance with standards. The document does not specify sample sizes for these technical tests (e.g., how many units were tested for electrical safety or image resolution), nor does it describe data provenance in terms of patient demographics or study design (retrospective/prospective, country of origin).
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
This information is not applicable as the submission did not rely on a clinical test set with ground truth established by medical experts. The ground truth for technical tests would be established by engineering specifications and reference standards.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:
Not applicable, as no clinical test set requiring medical expert adjudication was used.
5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance:
Not applicable. The device is an imaging device (OCT) and does not appear to incorporate AI for diagnostic assistance according to the provided summary, nor was a comparative effectiveness study involving human readers required or conducted.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
Not applicable. This device is an imaging system, not presented as an "algorithm only" diagnostic tool. Its performance characterization is based on technical specifications and adherence to engineering standards.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc):
The "ground truth" for this submission refers to the technical and performance specifications outlined in the relevant FDA-recognized consensus standards (e.g., IEC, ISO, ANSI) and the manufacturer's in-house specifications. For example, for image resolution, the ground truth is a specific lines/mm measurement that the device must achieve.
8. The sample size for the training set:
Not applicable. This document is for an imaging device, not an AI model that requires a "training set" of data for learning.
9. How the ground truth for the training set was established:
Not applicable, as there is no "training set" for this device.
(136 days)
Topcon Corporation
The Non-Mydriatic Retinal Camera NW500 is intended for use in capturing images of the retina and presenting the data to the eye care professional, without the use of a mydriatic.
The Non-Mydriatic Retinal Camera NW500 is a non-mydriatic, slit-scanning ophthalmic camera intended to capture, display and store images of the retina and the surrounding adnexa (the fundus oculi) to aid in diagnosis. It has automatic functions such as auto-alignment, auto-focus, auto-shoot and auto-small-pupil functions, which can be switched ON/OFF or between automatic and manual operation. Eyes with pupil diameters of 2.0 mm or more are photographable with NW500. The digital cameras incorporated in the main unit capture images of the retina and the surrounding adnexa (the fundus oculi), and the control panel (LCD touch panel) displays their associated information (such as patient/test/photography information). The captured images (static images) can also be displayed on a commercially available monitor of a personal computer (hereafter called "PC") by using the capturing software Ez Capture for NW500, one of the accessories of NW500. The captured images (static images) and their associated information (such as patient/test/photography information) can be exported to and stored in commercially available USB flash drives, PCs, servers (such as a DICOM server) and shared network folders as electronic data, and they can be printed from commercially available printers.
The provided text describes the NON-MYDRIATIC RETINAL CAMERA NW500 (K221111), but it does not contain any information about specific acceptance criteria or a study proving the device meets those criteria, especially not regarding AI/algorithm performance.
The document details the device's substantial equivalence to a predicate device (TRC-NW400 Non-Mydriatic Retinal Camera, K141481) based on its intended use, operation principle, and technological characteristics. It mentions that "Bench Testing" and "In-house test specification" were used to verify that the NW500 functions as intended and complies with consensus standards, but no specific performance metrics or acceptance criteria are listed for these tests.
Here's a breakdown of the information that is available and what is missing:
1. Table of Acceptance Criteria and Reported Device Performance:
Criteria/Metric | Acceptance Criteria | Reported Device Performance | Notes |
---|---|---|---|
Resolving Power on Fundus - Color Image-Capturing | Same as predicate, presumed to meet predicate performance. | ||
- Center | 60 lp/mm or more (Predicate) | 60 lp/mm or more | Direct comparison to predicate, indicating it meets or exceeds. |
- Middle (r/2) | 40 lp/mm or more (Predicate) | 40 lp/mm or more | Direct comparison to predicate, indicating it meets or exceeds. |
- Periphery (r) | 25 lp/mm or more (Predicate) | 25 lp/mm or more | Direct comparison to predicate, indicating it meets or exceeds. |
Angular Field of View | Subject device has a slightly larger FOV. | ||
- NW500 (Subject) | N/A | 50° | The subject device extends beyond the predicate's capability. |
- TRC-NW400 (Predicate) | N/A | 45°/30° | |
Measuring Range for Dioptric Power | -33 D to +40 D | -33 D to +40 D | Same as predicate. |
Operating Distance | Slightly different from the predicate; no specific acceptance criterion is stated. ||
- NW500 (Subject) | N/A | 35.5mm | |
- TRC-NW400 (Predicate) | N/A | 34.8mm | |
Photographable Diameter of Pupil | Subject device can photograph smaller pupils. | ||
- Normal (NW500) | N/A | φ2.5mm or more | Improved over predicate. |
- Small Pupil (NW500) | N/A | φ2.0mm or more | Improved over predicate. |
- Normal (TRC-NW400) | N/A | φ4.0mm or more | |
- Small Pupil (TRC-NW400) | N/A | φ3.3mm or more | |
Software Level of Concern | N/A | Moderate | Verified and validated as per FDA guidance. |
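Acceptance criteria of this kind reduce to simple threshold checks against bench-test measurements. A minimal sketch, assuming hypothetical zone names and measured values (none of these numbers are from the submission beyond the thresholds listed above):

```python
# Hypothetical bench-test check against the resolving-power acceptance
# thresholds listed above (lp/mm on the fundus; higher is better).
THRESHOLDS = {"center": 60, "middle": 40, "periphery": 25}

def failed_zones(measured):
    """Return the zones whose measured resolution falls below the spec."""
    return [zone for zone, spec in THRESHOLDS.items()
            if measured.get(zone, 0) < spec]

result = failed_zones({"center": 62, "middle": 41, "periphery": 26})
print("pass" if not result else f"fail: {result}")  # prints "pass"
```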
Missing Information:
- No specific acceptance criteria for image quality beyond resolving power are detailed. While standards like ISO 15004-1:2006 are listed, the specific metrics and thresholds used for acceptance are not provided.
- The document explicitly states: "This section is not applicable because clinical data was not provided for this 510(k) submission." This means there was no clinical study to evaluate the device's image capture performance against specific diagnostic outcomes or expert interpretations. The approval is based on substantial equivalence to a predicate device, assuming similar performance characteristics due to similar technological aspects and bench testing.
Therefore, the following information cannot be provided from the given document:
2. Sample size used for the test set and the data provenance: Not applicable, as no clinical test set/study is described.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not applicable.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set: Not applicable.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what the effect size was of human reader improvement with AI vs. without AI assistance: Not applicable. The device is a camera, not an AI-assisted diagnostic tool.
6. If a standalone (i.e. algorithm only, without human-in-the-loop performance) was done: Not applicable. The device is a camera. Software verification and validation were performed for the "Software of NW500," which was concluded to be a "Moderate Level of Concern," but this refers to the operational software of the camera, not an AI algorithm for diagnostic interpretation.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.): Not applicable, as no clinical study or ground truth establishment process is described.
8. The sample size for the training set: Not applicable, as no AI/algorithm training is described.
9. How the ground truth for the training set was established: Not applicable.
(112 days)
Topcon Corporation
The Topcon DRI OCT Triton is a non-contact, high resolution tomographic and biomicroscopic imaging device that incorporates a digital camera for photographing, displaying and storing the retina and surrounding parts of the eye to be examined under Mydriatic and non-Mydriatic conditions.
The DRI OCT Triton is indicated for in vivo viewing, axial cross sectional, and three-dimensional imaging and measurement of posterior ocular structures, including retinal nerve fiber layer, macula and optic disc as well as imaging of anterior ocular structures.
It also includes a Reference Database for posterior ocular measurements which provide for the quantitative comparison of retinal nerve fiber layer, optic nerve head, and the human retina to a database of known normal subjects. The DRI OCT Triton is indicated for use as a diagnostic device to aid in the diagnosis, documentation and management of ocular health and diseases in the adult population.
The DRI OCT Triton ("Triton") and the DRI OCT Triton (plus) ("Triton (plus)") are non-contact, high-resolution, tomographic and bio-microscopic imaging systems that merge optical coherence tomography (OCT) and a fundus camera into a single device. Triton and Triton (plus) employ swept-source OCT (SS-OCT) technology. Both can take anterior OCT images in addition to fundus OCT images. The fundus camera, in both Triton and Triton (plus), includes color imaging, red-free imaging, and infrared light imaging (hereinafter, IR imaging) capabilities for fundus observation. The Triton (plus) has fluorescein angiography (FA) and fundus autofluorescence (FAF) imaging functions in addition to all fundus functions of the Triton.
The fundus photographs and OCT images are captured by different system components of this device, which enables Triton to capture an OCT image and a fundus image sequentially. It allows in vivo viewing, axial cross sectional, and three dimensional imaging and measurement of posterior ocular structures, including retinal nerve fiber layer, macula and optic disc as well as imaging of anterior ocular structures. It also has a reference database for posterior ocular measurements of normal subjects, which provide for the quantitative comparison of retinal nerve fiber layer, optic nerve head and the macula.
Captured images are transferred from the device to an off-the-shelf personal computer (PC) via LAN cable, where the dedicated software for this device is installed. The transferred data is then automatically processed with analysis functions such as the automatic retinal layers segmentation, the automatic thickness calculation with several grids, the optic disc analysis and comparison with a reference database of eyes free of ocular pathology, and is finally automatically saved to the PC. It allows the user to manually adjust the automated retinal layer segmentation results and optic disc analysis results.
Accessories include the power cord, chin-rest paper sheet, monitor cleaner, LAN cable; chin-rest paper pins, external fixation target, dust cover accessory case, user manual, unpacking and analysis software DVD.
The Topcon DRI OCT Triton is a non-contact, high-resolution tomographic and biomicroscopic imaging device that incorporates a digital camera for photographing, displaying, and storing data of the retina and surrounding parts of the eye. It is indicated for in vivo viewing, axial cross-sectional, and three-dimensional imaging and measurement of posterior ocular structures (retinal nerve fiber layer, macula, and optic disc) and anterior ocular structures. It includes a Reference Database for posterior ocular measurements to quantitatively compare these structures to a database of known normal subjects. The device is intended as a diagnostic aid in the diagnosis, documentation, and management of adult ocular health and diseases.
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria for the Topcon DRI OCT Triton were based on demonstrating substantial equivalence to its predicate devices, the Topcon 3D OCT-1 Maestro and the Topcon TRC-50DX Retinal Camera. This was evaluated through agreement and precision studies, as well as image quality evaluations. The specific acceptance criteria are implicit in the reported performance metrics shown below.
Measurement Type | Acceptance Criteria (Implicit) | Reported Device Performance (Triton vs. Maestro) |
---|---|---|
Agreement Metrics (Triton vs. Maestro) | ||
Full Retinal Thickness | Measurements obtained with Triton should be mathematically similar, statistically consistent with, and clinically useful as compared to Maestro, across normal, retinal, and glaucoma eyes and various scan areas (7x7 Macula vs. 6x6 Macula, and 12x9 Wide vs. 12x9 Wide). The 95% Limits of Agreement (LOA) should demonstrate clinical equivalence. | For Normal Eyes (N=25): Central Fovea difference (Mean (SD)) 0.744 (6.219), 95% LOA (-11.695, 13.182). Inner Superior difference -2.653 (4.541), 95% LOA (-11.734, 6.428). Other regions showed similar narrow LOA. For Retinal Eyes (N=26): Central Fovea difference -2.503 (5.865), 95% LOA (-14.233, 9.227). Inner Superior difference -4.555 (4.739), 95% LOA (-14.034, 4.924). Other regions showed similar narrow LOA. For Glaucoma Eyes (N=25): Central Fovea difference -1.795 (4.937), 95% LOA (-11.670, 8.079). Inner Superior difference -3.864 (3.917), 95% LOA (-11.698, 3.971). Other regions showed similar narrow LOA. General Conclusion: "The measurements obtained with the Triton device as compared to the Maestro device were mathematically similar, statistically consistent with, and clinically useful in the assessment of normal and diseased eyes." (Page 7) |
Retinal Nerve Fiber Layer (RNFL) Thickness | Similar to Full Retinal Thickness, demonstrated by 95% LOA and statistical consistency. | For Normal Eyes (N=25): Average RNFL difference -1.996 (0.782), 95% LOA (-3.561, -0.431). Other regions (Superior, Nasal, Inferior Quadrants, and 12-Sectors) showed similar narrow LOA. For Retinal Eyes (N=26): Average RNFL difference -1.677 (1.185), 95% LOA (-4.047, 0.693). Other regions showed similar narrow LOA. For Glaucoma Eyes (N=25): Average RNFL difference -1.156 (1.045), 95% LOA (-3.246, 0.934). Other regions showed similar narrow LOA. General Conclusion: "The measurements obtained with the Triton device as compared to the Maestro device were mathematically similar, statistically consistent with, and clinically useful in the assessment of normal and diseased eyes." (Page 7) |
Ganglion Cell + IPL Thickness | Similar to Full Retinal Thickness, demonstrated by 95% LOA and statistical consistency. | For Normal Eyes (N=25): Average GCL+IPL difference -1.756 (0.593), 95% LOA (-2.942, -0.570). Other regions showed similar narrow LOA. For Retinal Eyes (N=26): Average GCL+IPL difference -1.525 (0.999), 95% LOA (-3.523, 0.473). Other regions showed similar narrow LOA. For Glaucoma Eyes (N=25): Average GCL+IPL difference -1.008 (0.752), 95% LOA (-2.513, 0.496). Other regions showed similar narrow LOA. General Conclusion: "The measurements obtained with the Triton device as compared to the Maestro device were mathematically similar, statistically consistent with, and clinically useful in the assessment of normal and diseased eyes." (Page 7) |
Ganglion Cell Complex (GCC) Thickness | Similar to Full Retinal Thickness, demonstrated by 95% LOA and statistical consistency. | For Normal Eyes (N=25): Average GCC difference -0.044 (1.158), 95% LOA (-2.361, 2.273). Other regions showed similar narrow LOA. For Retinal Eyes (N=26): Average GCC difference 0.475 (1.732), 95% LOA (-2.988, 3.938). Other regions showed similar narrow LOA. For Glaucoma Eyes (N=25): Average GCC difference 0.537 (0.791), 95% LOA (-1.044, 2.119). Other regions showed similar narrow LOA. General Conclusion: "The measurements obtained with the Triton device as compared to the Maestro device were mathematically similar, statistically consistent with, and clinically useful in the assessment of normal and diseased eyes." (Page 7) |
Optic Disc Measurements | Similar to Full Retinal Thickness, demonstrated by 95% LOA and statistical consistency for various optic disc parameters (e.g., C/D Vertical, C/D Area, Disc Area, Cup Area, Rim Area, Cup Volume, Rim Volume, Linear C/D Ratio). | For Normal Eyes (N=25): C/D Vertical difference 0.004 (0.110), 95% LOA (-0.216, 0.224). Disc Area difference -0.285 (0.145), 95% LOA (-0.575, 0.006). Other parameters showed similarly narrow LOA. For Retinal Eyes (N=26): C/D Vertical difference 0.029 (0.036), 95% LOA (-0.044, 0.102). Disc Area difference -0.240 (0.214), 95% LOA (-0.668, 0.188). Other parameters showed similarly narrow LOA. For Glaucoma Eyes (N=25): C/D Vertical difference 0.038 (0.052), 95% LOA (-0.066, 0.142). Disc Area difference -0.247 (0.165), 95% LOA (-0.576, 0.082). Other parameters showed similarly narrow LOA. Conclusion: "The measurements obtained with the Triton device as compared to the Maestro device were mathematically similar, statistically consistent with, and clinically useful in the assessment of normal and diseased eyes." (Page 7) |
Image Quality Evaluation | ||
Fundus Photograph Evaluation | Majority of photographs should be clinically useful (grade 3 or above). Response rates (Triton grades equal to or better than Maestro) should be high. Inter-grader agreement should demonstrate consistency. | Majority of photographs graded as good or excellent by both graders. Response rates ranged between 65.4% and 96%. Over 95% of photographs were considered clinically useful (grade 3 or higher). Total inter-grader agreement between 28% and 68% for Triton, and 28% and 64% for Maestro. (Page 20-21) Differences of 1 grade were not considered significant. Overall, the graders generally agreed on the clinical utility of the images. |
Anterior B Scan Image Quality | Nearly all images graded as fair or good. High response rates for Triton vs. Maestro. Inter-grader agreement should be high. | Nearly all images (all Triton and 74/76 Maestro) graded as fair or good. Response rates (Triton grades equal to or better than Maestro) ranged between 92% and 100%. Total inter-grader agreement was generally higher for Triton (72%-100%) compared to Maestro (48%-80%). (Page 21) |
Posterior B Scan Image Quality | All images graded as good or fair. High response rates for Triton vs. Maestro. Inter-grader agreement should be consistent. | All images graded as good or fair by both graders. Response rates (Triton grades equal to or better than Maestro) ranged between 84.6% and 100%. Inter-grader agreement ranged between 64% and 96% for Triton and 68% and 96% for Maestro. 34% of images differed by one grade, but all these were clinically useful (grade 2 or higher). (Page 21) |
Fundus Autofluorescence (FAF) and Fluorescein Angiography (FA) Image Quality | Majority of images graded as good or excellent. High response rates for Triton (plus) vs. TRC-50DX. | Majority of FAF and FA images graded as good or excellent by both graders. Response rates (Triton (plus) grades equal to or better than TRC-50DX) ranged between 85.2% and 94.9%. Triton had higher rates of agreement (68.3%-73.2%) compared to TRC-50DX (56%-61.7%). (Page 56) |
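The 95% limits of agreement quoted in the table follow the standard Bland-Altman construction: mean paired difference ± 1.96 × the standard deviation of the differences. A minimal Python sketch of that calculation (the paired thickness values below are invented for illustration, not data from the submission):

```python
from statistics import mean, stdev

def limits_of_agreement(a, b):
    """Bland-Altman 95% limits of agreement for paired measurements.

    Returns (mean difference, lower limit, upper limit), where the
    limits are mean_diff +/- 1.96 * SD of the per-subject differences.
    """
    diffs = [x - y for x, y in zip(a, b)]
    d_mean = mean(diffs)
    d_sd = stdev(diffs)  # sample SD (n - 1 denominator)
    return d_mean, d_mean - 1.96 * d_sd, d_mean + 1.96 * d_sd

# Hypothetical central-fovea thicknesses (um) from two devices.
triton = [251.0, 263.5, 248.2, 270.1, 255.9]
maestro = [250.1, 262.0, 249.5, 268.8, 256.4]
print(limits_of_agreement(triton, maestro))
```

Narrow limits, as reported in the table, indicate that the two devices can be used interchangeably for clinical purposes.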
2. Sample Size Used for the Test Set and Data Provenance
For the Agreement and Precision Study (comparing Triton to Maestro):
- Sample Size: 76 participants, including:
- 25 Normal eyes
- 26 Retinal eyes
- 25 Glaucoma eyes
- Data Provenance: Prospective comparative clinical study conducted at a single U.S. clinical site. (Page 6)
For the Fundus Autofluorescence and Fluorescein Angiography Image Quality Evaluation Study (comparing Triton (plus) to TRC-50DX):
- Specific sample size (number of participants/eyes) is not explicitly stated, but the submission mentions "Majority of the FAF and FA images were graded by both graders" and "response rates (i.e., percentage of subjects whose Triton grades were equal to or better than the corresponding TRC-50DX grades)".
- Data Provenance: Prospective clinical study conducted at one clinical site, located in the United States. (Page 55-56)
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Agreement and Precision Study (Fundus Photographs, Anterior B Scan, Posterior B Scan evaluations): The image quality of the fundus photographs, and the image quality of anterior and posterior OCT B scans were graded by two masked independent experts. (Page 7). Their specific qualifications (e.g., years of experience, specialty) were not detailed in the provided text.
- Fundus Autofluorescence and Fluorescein Angiography Image Quality Evaluation Study: The FAF and FA images were graded by two masked independent graders in a blinded and randomized fashion. Their specific qualifications were not detailed. (Page 56).
4. Adjudication Method for the Test Set
- Agreement and Precision Study: The text describes that two masked independent experts graded the images. For fundus photographs, the company performed a further analysis on image quality grades that differed by 1 point, stating this difference is not considered significantly different when both graders' scores are certain values. For posterior B scans, it states that 34% of images differed by one grade, but all these images were considered clinically useful by both graders. This suggests a form of implicit agreement or tolerance for minor discrepancies, rather than a formal adjudication process like 2+1 or 3+1 where a third expert decides.
- Fundus Autofluorescence and Fluorescein Angiography Image Quality Evaluation Study: The text indicates images were graded by "two masked independent graders" who performed the grading in a "blinded and randomized fashion." No explicit adjudication method (e.g., tie-breaking by a third reader) is mentioned for instances of disagreement.
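The response rates and inter-grader agreement percentages described above are simple proportions over paired grades, with a one-grade difference optionally tolerated. A hedged sketch, assuming 1-4 ordinal image-quality grades (all grade values below are invented):

```python
def response_rate(subject_grades, predicate_grades):
    """Fraction of cases where the subject device's grade is
    equal to or better than the predicate's (higher = better)."""
    pairs = list(zip(subject_grades, predicate_grades))
    return sum(s >= p for s, p in pairs) / len(pairs)

def percent_agreement(grader1, grader2, tolerance=0):
    """Fraction of cases where two graders agree within `tolerance`
    grade points (tolerance=1 treats a one-grade gap as agreement)."""
    pairs = list(zip(grader1, grader2))
    return sum(abs(g1 - g2) <= tolerance for g1, g2 in pairs) / len(pairs)

# Hypothetical 1-4 image-quality grades for five eyes.
triton_grades = [4, 3, 4, 2, 3]
maestro_grades = [3, 3, 4, 3, 2]
print(response_rate(triton_grades, maestro_grades))  # 0.8
grader_a = [4, 3, 4, 2, 3]
grader_b = [3, 3, 4, 3, 3]
print(percent_agreement(grader_a, grader_b))     # exact agreement
print(percent_agreement(grader_a, grader_b, 1))  # within one grade
```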
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done
- A MRMC comparative effectiveness study was not explicitly described for evaluating human reader improvement with AI assistance. The clinical studies focused on comparing the performance and agreement of the Triton device (algorithm included) with predicate devices directly, and on establishing a reference database. Human readers were involved in grading image quality for agreement studies, but the studies were not designed to measure the effect of AI assistance on human reader performance.
6. If a Standalone (i.e. algorithm only, without human-in-the-loop performance) was done
- The study primarily focuses on the standalone performance of the DRI OCT Triton device compared to predicate devices for quantitative measurements and image quality. The device itself performs the image acquisition, segmentation, and thickness calculations. The evaluation of image quality by experts is a pragmatic assessment of the output generated by the device, essentially evaluating its standalone output for clinical utility. The quantitative metrics (e.g., retinal thickness measurements, optic disc parameters) are direct outputs of the device's algorithms.
7. The Type of Ground Truth Used
- For the Agreement and Precision Study: The ground truth for quantitative measurements was established by comparison against the measurements obtained from the predicate device (Maestro). For image quality evaluations, the ground truth was established by expert grading/consensus from two masked independent experts.
- For the Fundus Autofluorescence and Fluorescein Angiography Image Quality Evaluation Study: The ground truth for image quality was established by expert grading/consensus from two masked independent graders comparing Triton (plus) images to TRC-50DX images.
8. The Sample Size for the Training Set
The provided text describes clinical studies for performance evaluation and for establishing a reference database. It does not explicitly mention a training set sample size for the device's algorithms. The clinical studies appear to be validation studies rather than studies for training the underlying algorithms. The "Reference Database" was established with 410 evaluable eyes. While this database is used for quantitative comparison within the device, it's not explicitly stated to be the training set for the segmentation or measurement algorithms.
9. How the Ground Truth for the Training Set Was Established
Since a "training set" is not explicitly detailed or a method for establishing its ground truth described, this information cannot be provided from the given text. The reference database for quantitative comparisons was established from measurements of 410 normal eyes (age ≥18, no glaucomatous optic nerve damage) collected across six U.S. clinical sites. For these eyes, various scan parameters (full retinal thickness, RNFL thickness, GCL+IPL thickness, GCC thickness, optic disc measurements, TSNIT circle profile measurements) were collected, and percentiles were estimated using quantile regression with age and/or disc area as covariates. This database serves as a "ground truth" for comparison for normal subjects within the device's functionality, but not explicitly as a ground truth for training the segmentation or measurement algorithms themselves.
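The submission describes percentile estimation via quantile regression with age and/or disc area as covariates; as a simplified stand-in for that model, the sketch below computes the empirical percentile rank of a patient measurement within a normative sample (all values are hypothetical illustrations, not data from the 410-eye database):

```python
from bisect import bisect_left

def empirical_percentile(value, normative_values):
    """Percentile rank of `value` within a normative sample: the
    percentage of normal eyes whose measurement falls below `value`."""
    ranked = sorted(normative_values)
    return 100.0 * bisect_left(ranked, value) / len(ranked)

# Hypothetical normative average RNFL thicknesses (um) and a patient value.
normals = [88, 91, 94, 95, 97, 99, 101, 103, 105, 110]
patient = 92
pct = empirical_percentile(patient, normals)
print(pct)  # 20.0
flagged = pct < 5.0  # e.g., flag values below the 5th percentile
```

A quantile-regression model refines this by letting each percentile curve vary smoothly with covariates such as age, instead of pooling all normal eyes together.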
(175 days)
Topcon Corporation
The IMAGEnet 6 Ophthalmic Data System is a software program that is intended for use in the collection, storage and management of digital images, patient data, diagnostic data and clinical information from Topcon devices, directly or through computerized networks, without controlling or altering the functions and parameters of any medical devices. It is intended for processing and displaying ophthalmic images and optical coherence tomography data.
The IMAGEnet 6 Ophthalmic Data System uses the same algorithms and reference databases from the original data capture device as a quantitative tool for the comparison of posterior ocular measurements to a database of known normal subjects.
IMAGEnet 6 Ophthalmic Data System is a Web application that allows management of patient information, exam information and image information. It is installed on a server PC and operated via a web browser of a client PC.
IMAGEnet 6 Ophthalmic Data System receives information from Topcon ophthalmological medical devices and saves the information including the patient information. The saved data can be displayed for diagnosis. In addition, it can save patient information, exam information, and image information as digital data to a database. These data can also be exported as digital data.
IMAGEnet 6 Ophthalmic Data System does not control or alter the functions or parameters of any medical device. IMAGEnet 6 Ophthalmic Data System is used in cooperation with the capture software designated for each capture device to retrieve image data such as an OCT image or a fundus image. IMAGEnet 6 Ophthalmic Data System receives, displays, and saves the image data captured with the capture software.
It also allows sending and receiving patient information, image information, etc. to/from an external system via communication conforming to the DICOM standard.
This document, a 510(k) Summary for the IMAGEnet 6 Ophthalmic Data System, outlines the device's substantial equivalence to predicate devices, focusing on its function as a picture archiving and communication system (PACS) for ophthalmic data.
Here's an analysis of the requested information, based only on the provided text:
Acceptance Criteria and Reported Device Performance
The document states that "Software verification and validation testing was conducted" and that "the IMAGEnet 6 Ophthalmic Data System was tested to demonstrate that the measurement and analysis functions are equivalent to the predicate devices and been found equivalent to the predicate devices."
However, the document does not explicitly define specific numerical acceptance criteria (e.g., sensitivity, specificity, accuracy thresholds) or provide a table listing these criteria alongside reported device performance metrics. Instead, it broadly claims equivalency to predicate devices.
Acceptance Criteria (Explicitly Stated in Document) | Reported Device Performance (Explicitly Stated in Document) |
---|---|
Equivalence of measurement and analysis functions to predicate devices. | Measurement and analysis functions found equivalent to predicate devices. |
Missing Information: Specific quantitative metrics for acceptance criteria and device performance are not provided. The document relies on a qualitative statement of equivalence.
Study Details
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Sample size for test set: Not specified.
- Data provenance: Not specified (country of origin, retrospective/prospective).
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Not specified. The document focuses on software verification and validation and equivalence to predicate devices, not on diagnostic performance against a ground truth established by experts.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
- Not specified. This is typically relevant for studies involving human readers or expert consensus, which is not detailed here.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what the effect size was of human reader improvement with AI vs. without AI assistance
- Not explicitly stated that an MRMC comparative effectiveness study was done. The focus of the performance data section is on validating the software and its equivalence to predicate devices, not on human-AI collaboration or improvement with AI assistance. The device is described as a data system, not an AI diagnostic tool.
- Effect size of human reader improvement: Not mentioned, as an MRMC study is not detailed.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- The document implies that "measurement and analysis functions" were tested for equivalence to predicate devices in a standalone manner (i.e., the software's performance itself), but it does not provide standalone performance metrics beyond a claim of equivalence. The device is a data management system, not a diagnostic algorithm in the sense of AI.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- The document states that the device "uses the same algorithms and reference databases from the original data capture device as a quantitative tool for the comparison of posterior ocular measurements to a database of known normal subjects." This suggests the ground truth for these comparisons is derived from these "reference databases of known normal subjects." However, for general software functionality, ground truth would typically be defined by engineering specifications and expected output.
8. The sample size for the training set
- Not applicable/Not specified. This device is described as a data system that uses algorithms and reference databases from predicate devices. It is not presented as an AI/ML model that would have its own training set in a traditional sense. The algorithms and databases are inherited from already cleared devices.
9. How the ground truth for the training set was established
- Not applicable/Not specified. As above, this information would be relevant if the IMAGEnet 6 itself encompassed novel AI/ML algorithms requiring a training set. The document indicates it reuses established algorithms and reference databases.
(44 days)
Topcon Corporation
The Topcon 3D OCT-1 Maestro is a non-contact, high resolution tomographic and biomicroscopic imaging device that incorporates a digital camera for photographing, displaying and storing the data of the retina and surrounding parts of the eye to be examined under Mydriatic and non-Mydriatic conditions.
The 3D OCT-1 Maestro is indicated for in vivo viewing, axial cross sectional, and three-dimensional imaging and measurement of posterior ocular structures, including retina, retinal nerve fiber layer, macula and optic disc as well as imaging of anterior ocular structures.
It also includes a Reference Database for posterior ocular measurements which provide for the quantitative comparison of retinal nerve fiber layer, optic nerve head, and the macula in the human retina to a database of known normal subjects. The 3D OCT-1 Maestro is indicated for use as a diagnostic device to aid in the diagnosis, documentation and management of ocular health and diseases in the adult population.
The 3D OCT-1 Maestro with new line CCD is a non-contact, high-resolution, tomographic and biomicroscopic imaging system that combines optical coherence tomography (OCT) and fundus camera technology, along with various quantitative measurement and other data analysis functionalities. The device consists of the instrument body (main unit, chin-rest unit, and power supply base), software (to operate the instrument and to process the analysis functions), and various accessories. The software incorporates a number of safety features to detect errors during use and interrupt device functions as needed when an error is identified.
The only patient-contacting materials in the device - silicone rubber, Acrylonitrile-butadiene styrene resin (ABS), and polyamide resin (PA) – are classified per FDA's guidance on ISO 10993-1 as limited-duration contact with the patient or operator's intact skin. These are the same materials as were incorporated in the patient-contacting pieces of the predicate device.
The device is re-usable and is not supplied sterile; cleaning instructions are provided in the labeling and are essentially the same as those for the predicate device. The device is AC-powered.
Here's an analysis of the provided text regarding the acceptance criteria and study for the Topcon 3D OCT-1 Maestro:
It's important to note that this document is a 510(k) summary for a submission seeking clearance for a modified version of an existing device (Topcon 3D OCT-1 Maestro, K170164) by establishing substantial equivalence to its predicate device (Topcon 3D OCT-1 Maestro, K161509). Therefore, the "study" described is primarily focused on demonstrating that the modifications did not alter the safety or effectiveness of the device compared to the predicate, rather than a de novo clinical trial to prove a new performance claim.
Acceptance Criteria and Reported Device Performance
The document does not explicitly state numerical acceptance criteria in the traditional sense (e.g., "sensitivity must be >X%, specificity >Y%"). Instead, the acceptance criterion for this 510(k) submission is "functioning equivalently to the predicate 3D OCT-1 Maestro" and demonstrating that "the safety and effectiveness profile of the modified device is the same as that of its predicate."
The reported device performance, in this context, is that:
- "In all instances, the 3D OCT-1 Maestro with new line CCD functioned as intended and produced the expected results."
- "Performance data demonstrate that the modified device is as safe and effective as the predicate Maestro device."
- The modified device is "substantially equivalent."
While a table of acceptance criteria and reported "performance" in numerical terms (like sensitivity/specificity) is not provided in the document for the reasons explained above, we can frame it as:
| Acceptance Criteria (Implicit for Substantial Equivalence) | Reported Device Performance |
|---|---|
| Device functions equivalently to predicate. | Functioned as intended. |
| Device produces expected results. | Produced expected results. |
| Safety profile is the same as predicate. | As safe as the predicate. |
| Effectiveness profile is the same as predicate. | As effective as the predicate. |
| No new issues of safety or effectiveness. | No new issues. |
Study Details
The document describes "bench testing" as the primary study performed to demonstrate substantial equivalence for the modifications.
- Sample size used for the test set and the data provenance:
  - Test Set Sample Size: Not applicable in the context of clinical image data. The "test set" here refers to the physical device components and software. No patient-specific test set data is described.
  - Data Provenance: Not applicable for a clinical test set. The testing was bench-based, involving the modified device hardware and software.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not applicable. The "ground truth" for this bench testing was the expected functionality and output of the device components based on engineering specifications and comparison to the predicate device. It did not involve expert-labeled clinical data.
- Adjudication method for the test set: Not applicable. No clinical test set requiring adjudication was used.
- Multi-reader multi-case (MRMC) comparative effectiveness study, and the effect size of how much human readers improve with AI vs. without AI assistance: No, an MRMC comparative effectiveness study was not done. This device is an imaging system (OCT and fundus camera), not an AI-driven diagnostic system providing interpretations or assisting human readers in a way that would lend itself to an MRMC study with AI assistance. The modifications were related to hardware components and software functionality.
- Standalone (i.e., algorithm-only, without human-in-the-loop) performance: No. This is an imaging device, not a standalone diagnostic algorithm in the AI sense. Performance assessment focused on the device's ability to capture images and perform its intended measurements as reliably as the predicate.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.): The "ground truth" for the bench testing was based on the device's engineering specifications, expected results from the predicate device, and compliance with recognized consensus standards (e.g., for safety, electromagnetic compatibility, software life cycle). It was not clinical ground truth like pathology or expert consensus on patient cases.
- The sample size for the training set: Not applicable. This device is a medical imaging instrument; the modifications did not involve training an AI algorithm on a dataset.
- How the ground truth for the training set was established: Not applicable, as there was no AI training set.
Summary of the Study's Nature:
The "study" described in this 510(k) summary is primarily a bench testing and verification/validation effort. The purpose was to demonstrate that modifications to the Topcon 3D OCT-1 Maestro (replacing a line CCD component and other minor updates) did not negatively impact its safety or effectiveness compared to the previously cleared predicate device. This is a common approach for 510(k) submissions where the changes are considered minor and do not alter the fundamental scientific technology or intended use. It is not a clinical study to establish new performance metrics or compare diagnostic accuracy against a clinical gold standard.
(144 days)
TOPCON CORPORATION
The Slit Lamp SL-D301 is an AC-powered slitlamp biomicroscope intended for use in eye examination of the anterior eye segment, from the cornea epithelium to the posterior capsule. It is used to aid in the diagnosis of diseases or trauma which affect the structural properties of the anterior eye segment.
This instrument is a slit lamp to observe, examine and photograph the eyeball and appendage of the eye. The slit lamp has the illumination unit for illumination and the binocular stereo microscope, and allows for stereoscopic observation. This instrument allows users to photograph and save the observed images by combining with an accessory, the digital camera unit DC-4. This instrument consists of the main body and accessories.
The document is a 510(k) summary for the Topcon Slit Lamp SL-D301, which is an AC-powered slitlamp biomicroscope.
Here's an analysis of the provided text in relation to your request:
1. A table of acceptance criteria and the reported device performance
The document does not provide a table with specific acceptance criteria (e.g., numerical thresholds for performance metrics) or directly compare the device performance against such criteria in a quantitative manner. Instead, it states that the device was found to be "substantially equivalent" to a predicate device based on its intended use, indications for use, and similar technological characteristics, and compliance with recognized consensus standards.
The closest to "acceptance criteria" and "reported device performance" are statements of compliance with standards:
| Acceptance Criteria Category | Reported Device Performance (Compliance) |
|---|---|
| Electrical Safety | Compliant with AAMI ANSI/ES60601-1:2005/(R)2012 and IEC 60601-1-2:2007 |
| Optical Safety (Light Hazard Protection) | Compliant with ISO 15004-2:2007 |
| General Ophthalmic Instrument Requirements | Compliant with ISO 15004-1:2006 |
| Slit-lamp Microscope Standards | Compliant with ISO 10939:2007 |
| Overall Performance | "The performance testing demonstrated that the Slit Lamp SL-D301 is as safe and effective as the predicate device, and performs as well or better than the predicate. The test results showed that the Slit Lamp SL-D301 met the same electrical safety, optical safety and slit lamp standards as the predicate device." |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document explicitly states "The following bench testing was conducted...". This indicates that the study primarily involved laboratory-based evaluations against engineering and safety standards, rather than clinical studies using patient data. Therefore, concepts like "sample size for the test set" or "data provenance (country of origin, retrospective/prospective)" in the context of patient data do not apply here.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
Given that the study was bench testing for compliance with technical standards, there were no "experts" in the sense of clinical specialists establishing ground truth on patient data. The "ground truth" was defined by the requirements of the recognized consensus standards themselves.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
Not applicable, as this was bench testing against technical standards, not a clinical study requiring adjudication of expert interpretations.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
No such study was performed or mentioned. This device is a Slit Lamp, a diagnostic instrument for direct observation by clinicians, and does not involve AI or human-in-the-loop assistance in the diagnostic aid sense that would warrant an MRMC study.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
Not applicable. The device is a physical instrument for observation. It does not perform an "algorithm only" task in the absence of a human operator.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The "ground truth" in this context refers to the requirements and specifications set forth in the recognized consensus standards (e.g., IEC 60601-1, ISO 10939, etc.). The device was tested to ensure it met these predetermined engineering and safety specifications.
8. The sample size for the training set
Not applicable. This device does not use machine learning or AI that would require a training set.
9. How the ground truth for the training set was established
Not applicable, as there is no training set for this type of device.
(57 days)
Topcon Corporation
The Topcon 3D OCT-1 Maestro is a non-contact, high resolution tomographic and biomicroscopic imaging device that incorporates a digital camera for photographing, displaying and storing the data of the retina and surrounding parts of the eye to be examined under Mydriatic and non-Mydriatic conditions.
The 3D OCT-1 Maestro is indicated for in vivo viewing, axial cross sectional, and three-dimensional imaging and measurement of posterior ocular structures, including retinal nerve fiber layer, macula and optic disc as well as imaging of anterior ocular structures.
It also includes a Reference Database for posterior ocular measurements which provide for the quantitative comparison of retinal nerve fiber layer, optic nerve head, and the human retina to a database of known normal subjects. The 3D OCT-1 Maestro is indicated for use as a diagnostic device to aid in the diagnosis, documentation and management of ocular health and diseases in the adult population.
The Maestro is a non-contact, high-resolution, tomographic and bio-microscopic imaging system that merges OCT and fundus cameras into a single device. The technological characteristics of the OCT employed are similar to those of already 510(k)-cleared OCT products, such as Topcon's 3D OCT-2000 (K092470), in that it employs conventional spectral domain OCT with widely-used 840 nm light source. The technological characteristics of the fundus camera employed are also similar to those of already cleared fundus cameras, such as Topcon's TRC NW300 (K123460), in terms of field of view (FOV) and camera sensor resolution.
The Maestro captures an OCT image and a color fundus image sequentially. It can take anterior OCT images in addition to fundus OCT images. It also includes a reference database for fundus OCT. Captured images are transferred from the device to an off-the-shelf personal computer (PC) via LAN cable, where the dedicated software for this device is installed. The transferred data is then automatically processed with analysis functions such as the automatic retinal layers segmentation, the automatic thickness calculation with several grids, the optic disc analysis and comparison with a reference database of eyes free of ocular pathology, and is finally automatically saved to the PC.
Two software programs for installation on an off-the-shelf PC are provided with the device. The first PC software program, called "FastMap", captures the images from the device, analyzes them and enables viewing of the data. The second PC software program, called "OCT Viewer", can only analyze and view the data.
Accessories include the following: power cord; chin-rest paper sheet; LAN cable; chin-rest paper pins; external fixation target; dust cover; spare parts case; and stylus pen. An optional Anterior Segment Kit allows the user to activate the anterior segment imaging functionality of the Maestro device.
The Topcon 3D OCT-1 Maestro is a non-contact, high-resolution tomographic and biomicroscopic imaging device. The provided text outlines its performance data, primarily focusing on repeatability and reproducibility measurements for various ocular structures in different patient populations.
Here's a breakdown of the requested information:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state pre-defined acceptance criteria for the repeatability and reproducibility of the measurements. Instead, it presents the calculated repeatability and reproducibility measurements (SD, Limit, CV%) for the Maestro device across different parameters and patient groups. The "acceptance criteria" appear to be implied by the presentation of these results, suggesting that the device's performance, as measured, is considered acceptable for demonstrating substantial equivalence to predicate devices.
However, based on the provided tables, here's a summary of the reported device performance:
| Measurement Type | Population | Scan Pattern | Typical Repeatability CV% (Range) | Typical Reproducibility CV% (Range) |
|---|---|---|---|---|
| Full Retinal Thickness | Normal Eyes | 12x9 3D Wide | 0.286% - 1.115% | 0.526% - 1.461% |
| | | 6x6 3D Macula | 0.305% - 0.684% | 0.498% - 1.025% |
| | Retinal Disease Eyes | 12x9 3D Wide | 0.378% - 1.478% | 0.595% - 1.897% |
| | | 6x6 3D Macula | 0.376% - 1.090% | 0.660% - 1.336% |
| | Glaucoma Eyes | 12x9 3D Wide | 0.493% - 1.199% | 0.661% - 1.639% |
| | | 6x6 3D Macula | 0.332% - 1.288% | 0.719% - 1.239% |
| Ganglion Cell + IPL | Normal Eyes | 12x9 3D Wide | 0.404% - 0.950% | 0.508% - 1.162% |
| | | 6x6 3D Macula | 0.364% - 1.044% | 0.557% - 1.148% |
| | Retinal Disease Eyes | 12x9 3D Wide | 1.041% - 2.673% | 1.101% - 3.604% |
| | | 6x6 3D Macula | 0.690% - 1.452% | 0.984% - 1.824% |
| | Glaucoma Eyes | 12x9 3D Wide | 0.628% - 1.563% | 0.716% - 1.784% |
| | | 6x6 3D Macula | 0.593% - 1.288% | 0.736% - 1.451% |
| Ganglion Cell Complex Thickness | Normal Eyes | 12x9 3D Wide | 0.470% - 0.821% | 0.645% - 1.056% |
| | | 6x6 3D Macula | 0.498% - 1.400% | 0.729% - 1.607% |
| | Retinal Disease Eyes | 12x9 3D Wide | 1.112% - 3.213% | 1.112% - 3.232% |
| | | 6x6 3D Macula | 0.485% - 1.093% | 0.601% - 1.093% |
| | Glaucoma Eyes | 12x9 3D Wide | 0.638% - 1.189% | 0.687% - 1.240% |
| | | 6x6 3D Macula | 0.508% - 1.131% | 0.678% - 1.265% |
| Retinal Nerve Fiber Layer (RNFL) - Average | Normal Eyes | 12x9 3D Wide | 1.318% | 1.517% |
| | | 6x6 3D Disc | 0.933% | 1.099% |
| Retinal Nerve Fiber Layer (RNFL) - Sectoral | Normal Eyes | 12x9 3D Wide | 2.461% - 16.711% | 3.040% - 18.538% |
| | | 6x6 3D Disc | 3.738% - 13.898% | 4.405% - 14.407% |
| | Retinal Disease Eyes | 12x9 3D Wide | 1.594% - 8.143% | 1.888% - 8.675% |
| | | 6x6 3D Disc | 1.084% - 5.725% | 1.480% - 7.387% |
| | Glaucoma Eyes | 12x9 3D Wide | 1.970% - 8.261% | 2.097% - 8.299% |
| | | 6x6 3D Disc | 1.929% - 6.480% | 1.933% - 7.074% |
| Optic Disc | Normal Eyes | 12x9 3D Wide | 3.520% - 6.600% | 4.233% - 7.967% |
| | | 6x6 3D Disc | 3.313% - 6.359% | 4.074% - 8.139% |
| | Retinal Disease Eyes | 12x9 3D Wide | 3.858% - 8.404% | 4.981% - 20.586% |
| | | 6x6 3D Disc | 2.855% - 5.627% | 3.438% - 11.024% |
| | Glaucoma Eyes | 12x9 3D Wide | 3.179% - 14.274% | 3.811% - 17.103% |
| | | 6x6 3D Disc | 1.852% - 5.813% | 1.959% - 7.201% |
The "Limit" values in the tables are calculated as 2.8 x SD (2.8 ≈ 1.96 x √2), the conventional repeatability/reproducibility limit: the absolute difference between two measurements taken under the same conditions is expected to fall within this limit 95% of the time. The "CV%" is the coefficient of variation (SD divided by the mean, expressed as a percentage), indicating precision relative to the mean.
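Because these precision metrics follow standard definitions, they can be sketched in a few lines of Python. This is an illustrative computation only; the function name and the sample readings are hypothetical, not taken from the submission:

```python
import statistics

def precision_stats(measurements):
    """Compute the precision statistics reported in the tables:
    SD, limit (2.8 x SD), and CV% (SD / mean x 100)."""
    mean = statistics.mean(measurements)
    sd = statistics.stdev(measurements)  # sample standard deviation
    return {
        "mean": mean,
        "sd": sd,
        "limit": 2.8 * sd,        # 2.8 ~ 1.96 * sqrt(2)
        "cv_pct": sd / mean * 100,
    }

# Hypothetical repeated full-retinal-thickness readings (micrometers) for
# one eye; the actual study repeated scans across 25-26 subjects per group.
readings = [271.2, 272.5, 270.8, 272.0, 271.6]
s = precision_stats(readings)
print(f"SD={s['sd']:.2f} um, limit={s['limit']:.2f} um, CV={s['cv_pct']:.3f}%")
```

A CV% well below 1%, as in the full-retinal-thickness rows above, indicates that scan-to-scan variation is small relative to the measured thickness.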
2. Sample Sizes Used for the Test Set and Data Provenance
- Test Set (Clinical Studies for Repeatability and Reproducibility):
- Normal Subjects: 25 subjects for macula and optic disc measurements (full retinal thickness, ganglion cell + IPL, ganglion cell complex thickness, retinal nerve fiber layer, optic disc parameters). Explicitly stated in the tables (N=25).
- Subjects with Retinal Disease: 26 subjects for macula and optic disc measurements (full retinal thickness, ganglion cell + IPL, ganglion cell complex thickness, retinal nerve fiber layer, optic disc parameters). Explicitly stated in the tables (N=26).
- Subjects with Glaucoma: 25 subjects for macula and optic disc measurements (full retinal thickness, ganglion cell + IPL, ganglion cell complex thickness, retinal nerve fiber layer, optic disc parameters). Explicitly stated in the tables (N=25).
- Data Provenance: The document does not explicitly state the country of origin. The study is referred to as "clinical studies," but it is not specified whether they were prospective or retrospective. The manufacturer is Topcon Corporation of Japan, while the submission contact is in the US, so the studies could have been conducted in either or both regions.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
The document does not specify the number or qualifications of experts used to establish a "ground truth" for the test set in the context of the repeatability and reproducibility studies. The clinical studies were conducted to determine the agreement, repeatability, and reproducibility of measurement data, not for diagnostic accuracy against a ground truth.
However, it mentions: "Consistent with the labeling for the test and control devices, the clinical site was permitted to make manual adjustments to automated segmentation based on the clinician's judgment." This indicates that clinicians (likely ophthalmologists or optometrists) reviewed and could adjust the automated segmentation, but they are not described as "ground truth" experts labeling disease states or reference measurements against which the algorithms' accuracy was validated.
4. Adjudication Method for the Test Set
The document does not describe an adjudication method in the context of establishing ground truth for the test set. The clinical studies focused on repeatability and reproducibility of quantitative measurements, rather than classification or diagnosis that would typically require an adjudication process.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance
No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study evaluating human reader improvement with or without AI assistance was not reported in this document. The clinical studies conducted were focused on the device's measurement precision (repeatability and reproducibility) and agreement with predicate devices rather than human-AI collaboration for diagnostic accuracy.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
The clinical studies described primarily assessed the precision of the device's measurements (which involve algorithms for segmentation and thickness calculation) rather than the standalone diagnostic performance of an AI algorithm. The device performs automatic retinal layer segmentation, automatic thickness calculation, and optic disc analysis. The phrase "the clinical site was permitted to make manual adjustments to automated segmentation based on the clinician's judgment" (page 6) suggests that the device's algorithms operate with potential human oversight, implying it's not strictly a standalone AI performance evaluation for diagnostic purposes. The data presented are for the device's ability to consistently provide these measurements.
7. The Type of Ground Truth Used
For the repeatability and reproducibility studies, the "ground truth" is not a diagnostic label (e.g., pathology, outcomes data). Instead, the studies assess the consistency of the device's quantitative measurements of ocular structures (e.g., retinal thickness, RNFL thickness) by comparing multiple measurements taken under similar or varied conditions. The reference database uses "known normal subjects" but this is for comparative analysis against a normal population rather than for establishing a "ground truth" for disease diagnosis in the test set.
8. The Sample Size for the Training Set
The document specifies a "Reference Database" was compiled using "399 subject eyes from normal study subjects." This database is for "quantitative comparison... to a database of known normal subjects," which functions as a normative reference rather than a training set for an AI/algorithm in the conventional sense (e.g., for classification tasks).
If parts of the device's functionality (like automatic segmentation) involve machine learning, the training set size for those specific algorithms is not provided in this document. The 399 normal subjects form a reference database, not explicitly an algorithm training set.
9. How the Ground Truth for the Training Set Was Established
For the "Reference Database" of 399 normal subjects:
- How it was established: The study collected measurements of various ocular structures from these normal eyes. The "normal" status of these subjects would have been established through clinical evaluation to ensure they were free of ocular pathology.
- Type of Ground Truth: The ground truth for this reference database is the consensus clinical determination that the subjects have "normal eyes" and the quantitative measurements derived from these normal eyes form the expected range for a healthy population. The document states it provides "quantitative comparison... to a database of known normal subjects." It also mentions "a reference database of eyes free of ocular pathology."
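The submission describes the reference database only as a normative comparison tool and does not disclose how the comparison is implemented. As a purely illustrative sketch, a normative comparison can be framed as a percentile lookup; the thresholds and data below are hypothetical (the 1%/5% bands mirror common OCT normative displays, not Topcon's documented method):

```python
from bisect import bisect_left

def percentile_rank(normal_values, patient_value):
    """Percentile of patient_value within a normative sample
    (fraction of normal values strictly below it, as a percentage)."""
    sorted_vals = sorted(normal_values)
    pos = bisect_left(sorted_vals, patient_value)
    return 100.0 * pos / len(sorted_vals)

def classify(pct):
    """Hypothetical banding: below the 1st percentile is flagged
    'outside normal limits', below the 5th 'borderline'."""
    if pct < 1:
        return "outside normal limits"
    if pct < 5:
        return "borderline"
    return "within normal limits"

# Hypothetical normative average RNFL thicknesses (um) from normal eyes;
# the actual database comprised 399 normal subject eyes.
normal_rnfl = [88, 92, 95, 97, 99, 100, 101, 103, 105, 108]
pct = percentile_rank(normal_rnfl, 90)
print(pct, classify(pct))
```

In practice a normative database of this kind is typically stratified (e.g., by age) before the percentile comparison, but the document does not detail any such stratification.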