The RetinAl Discovery is a standalone, browser-based software application intended for use by healthcare professionals to import, store, manage, display, analyze and measure data from ophthalmic diagnostic instruments, including: patient data, clinical images and information, reports and measurements of DICOM-compliant images. The device is also indicated for manual labeling and annotation of retinal OCT scans.
The RetinAI Discovery consists of a platform which displays and analyzes images of the eye (e.g. OCT scans and fundus images) along with associated measurements (e.g. layer thickness) generated by the user through Discovery. The platform allows the user to manually segment layers and volumes in the images; it calculates layer thickness and volume from the annotated images and presents the progression of those measurements in graphs. Discovery also provides a tool for measuring ocular anatomy and ocular lesion distances. The multiple views and measurements in Discovery allow the user to assess the eye anatomy and, ultimately, assist the user in making decisions on diagnosis and monitoring of disease progression.
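To make the thickness calculation concrete, here is a minimal sketch of how layer thickness can be derived from two manually segmented boundary contours on an OCT B-scan. This is an illustration only, not RetinAI's implementation; the boundary names, row indices, and the 3.9 µm axial pixel spacing are all hypothetical assumptions.

```python
# Illustrative sketch: per-A-scan layer thickness between two segmented
# boundaries of an OCT B-scan. Not RetinAI's actual algorithm.

def layer_thickness_um(upper, lower, axial_um_per_px):
    """Thickness (µm) per A-scan between two boundary contours.

    upper, lower: pixel row indices per A-scan, with lower deeper than upper.
    axial_um_per_px: axial resolution in micrometres per pixel (hypothetical).
    """
    if len(upper) != len(lower):
        raise ValueError("boundaries must cover the same A-scans")
    return [(lo - up) * axial_um_per_px for up, lo in zip(upper, lower)]

def mean_thickness_um(upper, lower, axial_um_per_px):
    """Average thickness across the B-scan."""
    t = layer_thickness_um(upper, lower, axial_um_per_px)
    return sum(t) / len(t)

# Hypothetical 5-A-scan B-scan with 3.9 µm axial pixels.
ilm = [10, 11, 11, 12, 12]   # inner boundary (e.g. ILM), row indices
rpe = [60, 60, 61, 61, 62]   # outer boundary (e.g. RPE), row indices
print(mean_thickness_um(ilm, rpe, 3.9))  # mean thickness in µm
```

Volume would follow the same idea, summing thickness over the en-face area covered by the scan, scaled by the lateral pixel spacing.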
Here's a breakdown of the acceptance criteria and study information for RetinAI Discovery, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state quantitative acceptance criteria in terms of specific metrics (e.g., accuracy percentages, Dice scores). Instead, the performance is described through comparison testing demonstrating equivalence with predicate/reference devices for manual segmentation and image measurement of retinal OCT scans.
| Acceptance Criteria (Inferred) | Reported Device Performance |
|---|---|
| Equivalence in manual segmentation of retinal layers | Comparison testing showed "the computed values from the Discovery platform are substantially equivalent to the computed values from the Reference Devices (Heidelberg Engineering Spectralis HRA+OCT and Topcon DRI OCT Triton), for both Optimized and Device display modes." This implies the results of manual segmentation in Discovery do not significantly differ from those obtained from the established reference devices. |
| Equivalence in image measurement of retinal OCT scans | Comparison testing showed "the computed values from the Discovery platform are substantially equivalent to the computed values from the Reference Devices (Heidelberg Engineering Spectralis HRA+OCT and Topcon DRI OCT Triton), for both Optimized and Device display modes." This indicates that measurements performed within Discovery are consistent with measurements from the reference devices. |
| Functioned as intended | "In all instances, Discovery functioned as intended and expected performance was reached." This suggests the software operated without critical errors or deviations from its design specifications during testing. |
| IEC 62304 and IEC 82304 compliance (software development) | The device was "designed, developed and tested according to the software development lifecycle process implemented at RetinAI Medical AG, based on the IEC 62304 and IEC 82304 standards, and the FDA Guidance for the 'General Principles of Software Validation'." This indicates adherence to accepted software development and validation practices for medical devices, which serve as acceptance criteria for the development process. Testing included "verification and validation activities (static code analysis, unit and integration testing, system and functional testing)." |
| No new questions of safety or effectiveness from technological differences | "The minor technological differences between the RetinAI Discovery and its predicate device do not raise different questions of safety or effectiveness." This is a key regulatory acceptance criterion for substantial equivalence. |
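The document states only that computed values were "substantially equivalent" and does not name the statistic used. As a hedged illustration of one common way such paired device comparisons are summarised, the sketch below computes the mean paired difference (bias) and Bland-Altman 95% limits of agreement; the numeric values are invented.

```python
# Hypothetical sketch of one way a device-comparison study could be
# summarised: Bland-Altman bias and 95% limits of agreement. The 510(k)
# summary does not state which statistics were actually used.
import statistics

def limits_of_agreement(device_a, device_b):
    """Mean paired difference and 95% limits of agreement."""
    diffs = [a - b for a, b in zip(device_a, device_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Invented paired layer-thickness values (µm) from the two platforms.
discovery = [250.1, 248.7, 252.3, 249.9, 251.0]
reference = [249.8, 249.0, 252.0, 250.2, 250.6]
bias, (lo, hi) = limits_of_agreement(discovery, reference)
print(f"bias={bias:.2f} um, LoA=({lo:.2f}, {hi:.2f}) um")
```

A small bias with narrow limits of agreement relative to clinical tolerances would support an equivalence claim of this kind.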
2. Sample Size Used for the Test Set and Data Provenance
The document does not explicitly state the numerical sample size for the test set (number of OCT scans or patients).
- Test Set: Implied to be the same images used for comparison testing with the reference devices.
- Data Provenance: Not specified. The document states "comparison testing was performed... with the same images segmented in cleared devices," but does not explicitly mention country of origin or whether the data was retrospective or prospective.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of those Experts
The document does not explicitly state the number of experts or their qualifications for establishing ground truth. The comparison testing relies on the "computed values from the Reference Devices" as the standard, implying that the ground truth is derived from the established and cleared functionalities of those devices when experts perform manual segmentation or measurements within them.
4. Adjudication Method for the Test Set
The document does not describe a formal adjudication method for a test set in the traditional sense of multiple human readers independently assessing and then reaching a consensus. Instead, the "ground truth" for the comparison study appears to be the output of the cleared reference devices when manual segmentation/measurements are performed by users (presumably clinicians or operators) within those systems.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, If So, What Was the Effect Size of How Much Human Readers Improve with AI vs without AI Assistance
No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done. The study described is a standalone performance validation comparing RetinAI Discovery's manual segmentation and measurement capabilities with those of existing cleared devices. There is no mention of human readers improving with or without AI assistance, as the device's main specified function (based on the provided text) is for displaying, analyzing, and manual labeling/annotation, not AI-powered automated analysis or decision support for human readers.
6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) Was Done
Yes, a standalone performance validation was done for the manual segmentation and image measurement functionalities of the RetinAI Discovery. The device itself is described as a "standalone, browser-based software application." The comparison testing verified the performance of the Discovery platform's manual segmentation and measurement tools against the established reference devices, essentially testing the accuracy of the tools themselves "without human-in-the-loop performance" improvement claims. The document focuses on the platform's ability to facilitate manual activities.
7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.)
The ground truth for the comparison testing was effectively the "computed values from the Reference Devices" (Heidelberg Engineering Spectralis HRA+OCT and Topcon DRI OCT Triton). This implies that the accepted and pre-cleared outputs of these established ophthalmic imaging and analysis devices, whether derived from their automatic or manual segmentation/measurement tools, served as the benchmark for evaluating RetinAI Discovery.
8. The Sample Size for the Training Set
The document does not provide any information about a training set size. This is consistent with the nature of the device as described, which is specified for manual labeling and annotation and general image management/display, not for an AI model that requires a large training dataset.
9. How the Ground Truth for the Training Set Was Established
As no training set is mentioned for an AI model, the method for establishing ground truth for a training set is not applicable here. The described studies focus on the validation of the manual tools and general software functions.
The Glaucoma Module is a software application intended for the management, display and analysis of visual field and optical coherence tomography data. It is intended as an aid to the detection and management of visual field defects and progression of visual field loss.
The Glaucoma Module works as an optional module, integrated into the Harmony user interface, and interfacing to Harmony to access the relevant data and information. Harmony is a comprehensive software platform intended for use in importing, processing, measurement, analysis and storage of clinical images and videos of the eye, as well as for management of patient data, diagnostic data, clinical information, reports from ophthalmic diagnostic instruments through either a direct connection with the instruments or through computerized networks. Harmony was most recently cleared by FDA in K182376.
The Glaucoma Module is a fully interactive multi-modality software for clinicians to assess, diagnose and manage patients who are glaucoma suspects or have been diagnosed with glaucoma. The Glaucoma Module is an aid to detection and management of visual field and OCT data.
The Glaucoma Module displays key information for diagnosis and management using a well-organized interface.
Glaucoma Module is integrated into the Harmony user interface that utilizes both OCT exam and Visual Field data in an interactive manner. It employs two main sections, the Hood Dashboard screen used to determine glaucoma suspects and the Glaucoma Trend screen which can be used to observe patient data over a larger period of time.
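To illustrate the kind of longitudinal view a screen like Glaucoma Trend can support, here is a hedged sketch of a least-squares trend of visual field mean deviation (MD) over time, a standard way of expressing visual field progression. This is an assumption for illustration, not Topcon's implementation, and the exam series is invented.

```python
# Illustrative sketch (not Topcon's algorithm): least-squares slope of
# visual field mean deviation (MD, dB) against exam time, the usual way
# a progression rate is expressed on trend displays.

def md_slope_db_per_year(years, md_values):
    """Ordinary least-squares slope of MD (dB) versus exam time (years)."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(md_values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, md_values))
    den = sum((x - mean_x) ** 2 for x in years)
    return num / den

# Invented exam series: MD worsening over four years of follow-up.
years = [0.0, 1.0, 2.0, 3.0, 4.0]
md = [-2.0, -2.6, -3.0, -3.4, -4.1]
print(round(md_slope_db_per_year(years, md), 2))  # -0.5 dB/year
```

Note that, consistent with the description above, such a fitted slope summarises observed data; it is not a predictive interpretation.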
The Glaucoma Module does not include predictive interpretations of the correlation of structural and functional measures, two measures that are understood to be independent of each other.
The Glaucoma Module will work with the following medical devices:
- Topcon's Maestro, Maestro 2, and Triton Optical Coherence Tomography devices
- Zeiss' Visual Field instruments HFA3 and HFA II-i
- Visual Field data from other manufacturers (e.g. Oculus EasyField) through the DICOM OPV data format
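DICOM OPV objects carry per-point threshold sensitivities from static perimetry. As a hedged sketch of one derived quantity such software typically displays, the example below computes point-wise total deviation (measured threshold minus an age-corrected normative threshold). The normative numbers are invented for illustration; real devices ship validated normative databases.

```python
# Hedged sketch: point-wise total deviation on a visual field printout,
# i.e. measured threshold minus a normative threshold at each test point.
# All dB values below are invented for illustration.

def total_deviation(thresholds_db, normals_db):
    """Point-wise total deviation (dB): measured minus normative threshold."""
    if len(thresholds_db) != len(normals_db):
        raise ValueError("threshold and normative grids must match")
    return [t - n for t, n in zip(thresholds_db, normals_db)]

measured = [28, 25, 30, 12, 27]   # dB, hypothetical test points
normal   = [30, 29, 31, 30, 29]   # dB, hypothetical age-matched normals
print(total_deviation(measured, normal))  # [-2, -4, -1, -18, -2]
```

A markedly negative point (here -18 dB) is the kind of localized defect such displays are meant to make visible.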
Here's a breakdown of the requested information based on the provided text:
Key Takeaway: The provided 510(k) summary for the Topcon Healthcare Solutions Glaucoma Module states that no performance data was required or provided for its clearance. This means there is no study described in this document that proves the device meets specific acceptance criteria related to its clinical performance. Instead, the clearance primarily relies on demonstrating substantial equivalence to a predicate device through similar intended use and technological characteristics, as well as software validation and verification.
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria (Functional/Technical Only, No Clinical Performance) | Reported Device Performance (Software Validation & Verification) |
|---|---|
| Device performs as intended | Confirmed through software validation and verification |
| Device meets its specifications | Confirmed through software validation and verification |
| Manages, displays, and analyzes visual field and OCT data | Confirmed through substantial equivalence comparison |
| Integrates into Harmony user interface | Confirmed by device description |
| Accesses relevant data and information from Harmony | Confirmed by device description |
| Displays key information for diagnosis and management | Confirmed by device description |
| Employs Hood Dashboard and Glaucoma Trend screen | Confirmed by device description |
| Does not include predictive interpretations | Confirmed by device description |
| Works with specified medical devices (e.g., Topcon OCTs, Zeiss HFA) | Confirmed by device description |
| Performs data retrieval from allowed devices | Confirmed by substantial equivalence comparison |
| Displays visual field reports and combined reports | Confirmed by substantial equivalence comparison |
| Displays visual field information of a single exam | Confirmed by substantial equivalence comparison |
| Provides data plots (threshold, graytone, total/pattern deviation) | Confirmed by substantial equivalence comparison |
| Provides global and reliability indices | Confirmed by substantial equivalence comparison |
| Allows user comments | Confirmed by substantial equivalence comparison |
Note: The document explicitly states, "No performance data was required or provided. Software validation and verification demonstrate that the Glaucoma Module performs as intended and meets its specifications." Therefore, the "acceptance criteria" here are primarily functional and technical requirements met through software testing and comparison to a predicate, not clinical performance metrics like sensitivity, specificity, or accuracy.
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: Not applicable. No clinical performance testing against a specific test set is mentioned.
- Data Provenance: Not applicable. No clinical performance testing data is provided.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Not applicable. No clinical performance testing against a ground truth is mentioned.
4. Adjudication Method for the Test Set
- Not applicable. No clinical performance testing with adjudication is mentioned.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
- No. An MRMC comparative effectiveness study was not done or reported.
6. If a Standalone (i.e. algorithm only, without human-in-the-loop performance) was done
- No. A standalone performance study was not done or reported. The device is described as a software application for clinicians to aid in assessment, diagnosis, and management, implying a human-in-the-loop context. However, no performance data (standalone or otherwise) is presented.
7. The Type of Ground Truth Used
- Not applicable. No ground truth for clinical performance evaluation is mentioned.
8. The Sample Size for the Training Set
- Not applicable. The document does not describe any machine learning or AI algorithm development that would involve a training set. The device is a "software application intended for the management, display and analysis..." and not an AI/ML diagnostic tool requiring such a set.
9. How the Ground Truth for the Training Set was Established
- Not applicable. As no training set is mentioned, no ground truth for it was established.