K Number: K211715
Manufacturer
Date Cleared: 2022-04-28 (329 days)
Product Code
Regulation Number: 892.2050
Panel: OP
Reference & Predicate Devices
Intended Use

The RetinAI Discovery is a standalone, browser-based software application intended for use by healthcare professionals to import, store, manage, display, analyze, and measure data from ophthalmic diagnostic instruments, including: patient data, clinical images and information, reports, and measurements of DICOM-compliant images. The device is also indicated for manual labeling and annotation of retinal OCT scans.

Device Description

The RetinAI Discovery consists of a platform that displays and analyzes images of the eye (e.g., OCT scans and fundus images) along with associated measurements (e.g., layer thickness) generated by the user through Discovery. The platform allows the user to manually segment layers and volumes in the images; it calculates layer thickness and volume from the annotated images and presents the progression of these measurements in graphs. Discovery also provides a tool for measuring ocular anatomy and ocular lesion distances. The multiple views and measurements in Discovery allow the user to assess eye anatomy and, ultimately, assist in decisions on diagnosis and monitoring of disease progression.
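The thickness and volume computations described above can be sketched as follows. This is an illustrative reconstruction under assumed conventions (per-A-scan boundary depths, hypothetical pixel-spacing parameters), not RetinAI's actual implementation:

```python
def layer_thickness_map(upper, lower, axial_um_per_px):
    """Per-A-scan thickness (in µm) between two manually segmented boundaries.

    upper, lower: 2D lists indexed [b_scan][a_scan] of boundary depths in
    pixels, with depth increasing downward (so lower >= upper).
    axial_um_per_px: assumed axial pixel spacing of the OCT scan.
    """
    return [
        [(lo - up) * axial_um_per_px for up, lo in zip(row_u, row_l)]
        for row_u, row_l in zip(upper, lower)
    ]


def layer_volume_mm3(thickness_um, ascan_um, bscan_um):
    """Integrate a thickness map (µm) over the en-face sampling grid.

    ascan_um / bscan_um: assumed lateral spacing between adjacent
    A-scans / B-scans, in µm.
    """
    total_um3 = sum(t * ascan_um * bscan_um for row in thickness_um for t in row)
    return total_um3 / 1e9  # convert µm³ to mm³
```

For example, two flat boundaries 50 pixels apart at 2 µm/pixel yield a uniform 100 µm thickness map, which the second function then integrates over the scan grid.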

AI/ML Overview

Here's a breakdown of the acceptance criteria and study information for RetinAI Discovery, based on the provided text:

1. Table of Acceptance Criteria and Reported Device Performance

The document does not explicitly state quantitative acceptance criteria in terms of specific metrics (e.g., accuracy percentages, Dice scores). Instead, the performance is described through comparison testing demonstrating equivalence with predicate/reference devices for manual segmentation and image measurement of retinal OCT scans.

Acceptance Criteria (Inferred) and Reported Device Performance:

  • Equivalence in manual segmentation of retinal layers: Comparison testing showed "the computed values from the Discovery platform are substantially equivalent to the computed values from the Reference Devices (Heidelberg Engineering Spectralis HRA+OCT and Topcon DRI OCT Triton), for both Optimized and Device display modes." This implies that manual segmentation results in Discovery do not significantly differ from those obtained with the established reference devices.
  • Equivalence in image measurement of retinal OCT scans: The same comparison testing showed the computed measurement values from the Discovery platform to be substantially equivalent to those from the Reference Devices, for both Optimized and Device display modes, indicating that measurements performed within Discovery are consistent with measurements from the reference devices.
  • Functioned as intended: "In all instances, Discovery functioned as intended and expected performance was reached." This suggests the software operated without critical errors or deviations from its design specifications during testing.
  • IEC 62304 and IEC 82304 compliance (software development): The device was "designed, developed and tested according to the software development lifecycle process implemented at RetinAI Medical AG, based on the IEC 62304 and IEC 82304 standards, and the FDA Guidance for the 'General Principles of Software Validation'." Testing included "verification and validation activities (static code analysis, unit and integration testing, system and functional testing)." Adherence to these practices serves as an acceptance criterion for the development process.
  • No new questions of safety or effectiveness from technological differences: "The minor technological differences between the RetinAI Discovery and its predicate device do not raise different questions of safety or effectiveness." This is a key regulatory acceptance criterion for substantial equivalence.
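The "substantially equivalent" findings above amount to paired measurements of the same scans on two platforms. A minimal sketch of how such a paired equivalence check could be framed; the equivalence margin is hypothetical, since the document states no numeric acceptance threshold:

```python
from statistics import mean, stdev


def paired_equivalence(a, b, margin):
    """Simplified equivalence check on paired measurements.

    a, b: paired measurements of the same scans (e.g. layer thickness in µm)
    from two platforms; margin: hypothetical equivalence bound. Equivalence
    holds when an approximate 95% CI for the mean paired difference lies
    entirely within ±margin (a TOST-style criterion, normal approximation).
    """
    diffs = [x - y for x, y in zip(a, b)]
    m = mean(diffs)
    se = stdev(diffs) / len(diffs) ** 0.5
    lo, hi = m - 1.96 * se, m + 1.96 * se
    return {"mean_diff": m, "ci": (lo, hi),
            "equivalent": -margin < lo and hi < margin}
```

A real submission would use validated statistics (exact t-quantiles, pre-specified margins); this only illustrates the shape of the comparison.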

2. Sample Size Used for the Test Set and Data Provenance

The document does not explicitly state the numerical sample size for the test set (number of OCT scans or patients).

  • Test Set: Implied to be the same images used for comparison testing with the reference devices.
  • Data Provenance: Not specified. The document states "comparison testing was performed... with the same images segmented in cleared devices," but does not explicitly mention country of origin or whether the data was retrospective or prospective.

3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of those Experts

The document does not explicitly state the number of experts or their qualifications for establishing ground truth. The comparison testing relies on the "computed values from the Reference Devices" as the standard, implying that the ground truth is derived from the established and cleared functionalities of those devices when experts perform manual segmentation or measurements within them.

4. Adjudication Method for the Test Set

The document does not describe a formal adjudication method for a test set in the traditional sense of multiple human readers independently assessing and then reaching a consensus. Instead, the "ground truth" for the comparison study appears to be the output of the cleared reference devices when manual segmentation/measurements are performed by users (presumably clinicians or operators) within those systems.

5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, What Was the Effect Size of Human Reader Improvement with vs. without AI Assistance?

No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done. The study described is a standalone performance validation comparing RetinAI Discovery's manual segmentation and measurement capabilities with those of existing cleared devices. There is no mention of human readers improving with or without AI assistance, as the device's main specified function (based on the provided text) is for displaying, analyzing, and manual labeling/annotation, not AI-powered automated analysis or decision support for human readers.

6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) Was Done

Yes, a standalone performance validation was done for the manual segmentation and image measurement functionalities of RetinAI Discovery. The device itself is described as a "standalone, browser-based software application." The comparison testing verified the performance of the Discovery platform's manual segmentation and measurement tools against the established reference devices, testing the accuracy of the tools themselves rather than making any human-in-the-loop performance-improvement claim. The document focuses on the platform's ability to facilitate manual activities.

7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.)

The ground truth for the comparison testing was effectively the "computed values from the Reference Devices" (Heidelberg Engineering Spectralis HRA+OCT and Topcon DRI OCT Triton). This implies that the accepted and pre-cleared outputs of these established ophthalmic imaging and analysis devices, whether derived from their automatic or manual segmentation/measurement tools, served as the benchmark for evaluating RetinAI Discovery.
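Agreement against a benchmark device of this kind is commonly summarized with Bland-Altman bias and limits of agreement. A sketch under the assumption of paired per-scan measurements (function name and data are illustrative, not from the document):

```python
from statistics import mean, stdev


def bland_altman(reference, candidate):
    """Bland-Altman summary of agreement between paired measurements.

    reference: values from the benchmark (cleared reference device);
    candidate: values from the device under evaluation. Returns the mean
    difference (bias) and approximate 95% limits of agreement.
    """
    diffs = [c - r for r, c in zip(reference, candidate)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return {"bias": bias, "loa": (bias - 1.96 * sd, bias + 1.96 * sd)}
```

Narrow limits of agreement relative to clinical tolerance would support the substantial-equivalence argument described above.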

8. The Sample Size for the Training Set

The document does not provide any information about a training set size. This is consistent with the nature of the device as described, which is specified for manual labeling and annotation and general image management/display, not for an AI model that requires a large training dataset.

9. How the Ground Truth for the Training Set Was Established

As no training set is mentioned for an AI model, the method for establishing ground truth for a training set is not applicable here. The described studies focus on the validation of the manual tools and general software functions.

§ 892.2050 Medical image management and processing system.

(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).