K Number: K232828
Manufacturer: Topcon
Date Cleared: 2024-03-01 (170 days)
Product Code: NFJ
Regulation Number: 892.2050
Panel: OP
Reference & Predicate Devices: K171370 (IMAGEnet6, version 1.52)
Intended Use

The IMAGEnet6 Ophthalmic Data System is a software program that is intended for use in the collection, storage and management of digital images, patient data, diagnostic data and clinical information from Topcon devices.

It is intended for processing and displaying ophthalmic images and optical coherence tomography data.

The IMAGEnet6 Ophthalmic Data System uses the same algorithms and reference databases from the original data capture device as a quantitative tool for the comparison of posterior ocular measurements to a database of known normal subjects.
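A normative-database comparison of this kind can be sketched in a few lines. The measurement values, percentile cutoffs, and Gaussian model below are illustrative assumptions only, not Topcon's actual reference database or classification method:

```python
from statistics import NormalDist

# Hypothetical normative database of a posterior ocular measurement
# (e.g., mean RNFL thickness in micrometers) from known normal subjects.
# Values and cutoffs are illustrative only.
NORMAL_SUBJECTS_UM = [92, 95, 88, 101, 97, 90, 94, 99, 93, 96,
                      89, 98, 91, 100, 95, 93, 97, 92, 96, 94]

def classify_against_normals(measurement: float, normals: list) -> str:
    """Compare a measurement to a normal database using a Gaussian fit,
    mirroring the common green/yellow/red normative classification."""
    dist = NormalDist.from_samples(normals)
    percentile = dist.cdf(measurement) * 100
    if percentile < 1:
        return "outside normal limits"   # red: below 1st percentile
    if percentile < 5:
        return "borderline"              # yellow: 1st to 5th percentile
    return "within normal limits"        # green: at or above 5th percentile

print(classify_against_normals(95.0, NORMAL_SUBJECTS_UM))
print(classify_against_normals(80.0, NORMAL_SUBJECTS_UM))
```

The one-sided low-percentile cutoffs reflect the usual convention for thickness measurements, where only abnormally low values flag possible pathology.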

Device Description

IMAGEnet6 Ophthalmic Data System is a Web application that allows management of patient information, exam information and image information. It is installed on a server PC and operated via a web browser of a client PC.

When combined with the 3D OCT-1 (type: Maestro2), IMAGEnet6 provides a GUI for the remote operation function. This optional function lets users access some of the image capture functions by operating a PC or tablet PC connected via Ethernet cable to the external PC of the 3D OCT-1 (type: Maestro2) device. The remote operation function is intended only for the short distances associated with social distancing recommendations, not for operation from a different room or building.
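As a rough illustration of the kind of command/response exchange such a remote operation link might use, the sketch below runs a stand-in device and client over a loopback socket. The CAPTURE command, wire format, and server code are entirely hypothetical; the submission does not describe the actual IMAGEnet6 protocol:

```python
import socket
import threading

def capture_device(conn: socket.socket) -> None:
    """Stand-in for the capture device: answer one command and close."""
    with conn:
        command = conn.recv(1024).decode().strip()
        if command == "CAPTURE":
            conn.sendall(b"OK image_id=0001\n")
        else:
            conn.sendall(b"ERR unknown command\n")

def send_command(host: str, port: int, command: str) -> str:
    """Stand-in for the operator's client PC sending one command."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(command.encode() + b"\n")
        return sock.recv(1024).decode().strip()

# Loopback demonstration: "device" and "client" on the same machine.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # OS assigns a free port
server.listen(1)
port = server.getsockname()[1]

def serve_once() -> None:
    conn, _ = server.accept()
    capture_device(conn)

thread = threading.Thread(target=serve_once)
thread.start()
reply = send_command("127.0.0.1", port, "CAPTURE")
thread.join()
server.close()
print(reply)  # OK image_id=0001
```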

AI/ML Overview

The provided document is a 510(k) Premarket Notification from the FDA for the IMAGEnet6 Ophthalmic Data System. This device is a Medical Image Management and Processing System, classified as Class II, with product code NFJ.

Based on the document, the IMAGEnet6 Ophthalmic Data System, subject device (version 2.52.1), is considered substantially equivalent to the predicate device (IMAGEnet6, version 1.52, K171370). The submission is primarily for a software update with changes including a modified remote operation function and expanded compatibility with additional Topcon devices.

Here's an analysis of the acceptance criteria and study information provided:

1. Table of Acceptance Criteria and Reported Device Performance:

The document does not specify quantitative acceptance criteria in a typical clinical study format (e.g., target sensitivity, specificity). Instead, the acceptance criterion for the software modification (remote operation function) appears to be that its performance (image quality and diagnosability) is equivalent to or the same as the device without the remote operation function.

  • Acceptance criterion (implicit): Image quality with the remote operation function is the same as without it.
    Reported performance: Confirmed that image quality is the same with or without the remote operation function.
  • Acceptance criterion (implicit): Diagnosability with the remote operation function is the same as without it.
    Reported performance: Confirmed that diagnosability is the same with or without the remote operation function.
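Equivalence testing of this kind can be sketched as a pixel-level comparison between a locally captured and a remotely captured image. The tiny images and the choice of metrics (maximum absolute difference, PSNR) below are illustrative assumptions, not the method actually used in the submission:

```python
import math

def max_abs_diff(img_a, img_b) -> int:
    """Largest per-pixel difference between two same-sized images."""
    return max(abs(a - b) for row_a, row_b in zip(img_a, img_b)
               for a, b in zip(row_a, row_b))

def psnr(img_a, img_b, peak: int = 255) -> float:
    """Peak signal-to-noise ratio; infinite when images are identical."""
    diffs = [(a - b) ** 2 for row_a, row_b in zip(img_a, img_b)
             for a, b in zip(row_a, row_b)]
    mse = sum(diffs) / len(diffs)
    return math.inf if mse == 0 else 10 * math.log10(peak ** 2 / mse)

# Hypothetical 8-bit grayscale captures with and without remote operation.
local_capture  = [[12, 40, 200], [7, 99, 143]]
remote_capture = [[12, 40, 200], [7, 99, 143]]

print(max_abs_diff(local_capture, remote_capture))  # 0 -> identical pixels
print(psnr(local_capture, remote_capture))          # inf -> no degradation
```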

2. Sample size used for the test set and data provenance:

  • Sample Size for Test Set: Not explicitly stated. The document mentions "comparison testing" was performed for the modified remote operation function, but the size of the test set (number of images or cases) is not provided.
  • Data Provenance: Not specified. The document does not indicate the country of origin of the data or whether it was retrospective or prospective.

3. Number of experts used to establish the ground truth for the test set and qualifications of those experts:

This information is not provided in the document. As "clinical performance data was not required for this 510(k) submission," there is no mention of expert-established ground truth for a clinical test set. The assessment of "image quality and diagnosability" was likely an internal validation, possibly by qualified personnel, but the specifics are not disclosed.

4. Adjudication method for the test set:

This information is not provided. Given that clinical performance data was not required, a formal adjudication process akin to clinical trials is unlikely to have been detailed.

5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs without AI assistance:

  • MRMC Study: No, an MRMC comparative effectiveness study was not done. The device is an image management and processing system, not an AI-powered diagnostic tool intended to assist human readers in interpretation.
  • Effect Size: Not applicable, as no such study was performed or required.

6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:

  • The document implies that "comparison testing" was conducted to confirm image quality and diagnosability with and without the remote operation function. This would be a form of standalone performance assessment of the system's ability to maintain image integrity and diagnostic utility, but it does not involve the standalone diagnostic performance of an algorithm without human input for disease detection.

7. The type of ground truth used:

  • The document states that "clinical performance data was not required." Therefore, there is no mention of a ground truth established by expert consensus, pathology, or outcomes data for diagnostic accuracy. The ground truth for the "comparison testing" of the remote operation function would likely be the inherent quality and diagnostic features of the images generated by the original (non-remote) system. The testing aimed to confirm that the remote function did not degrade this baseline.

8. The sample size for the training set:

IMAGEnet6 is described as a "software program that is intended for use in the collection, storage and management of digital images, patient data, diagnostic data and clinical information from Topcon devices." It uses "the same algorithms and reference databases from the original data capture device as a quantitative tool for the comparison of posterior ocular measurements to a database of known normal subjects."

This suggests the device itself is not a deep learning AI model that requires a "training set" in the conventional sense of machine learning for diagnostic tasks. Rather, it integrates existing algorithms and reference databases. Therefore, the concept of a "training set" for the IMAGEnet6 software as a whole is not applicable in the context of this 510(k) submission.

9. How the ground truth for the training set was established:

As the concept of a training set for a machine learning model is not applicable to the functionality described for IMAGEnet6 in this submission, the method for establishing ground truth for a training set is not relevant or discussed. The reference databases it utilizes would have had their own data collection and establishment methods, but those pertain to the underlying instruments, not the IMAGEnet6 system itself regarding this submission.

§ 892.2050 Medical image management and processing system.

(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).
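As a minimal illustration of a "time-series measurement" in the regulation's sense, the sketch below fits an ordinary least-squares trend to a quantitative measurement tracked across follow-up visits. The visit schedule and thickness values are invented for illustration:

```python
def trend_slope(times, values) -> float:
    """Ordinary least-squares slope (change in value per unit time)."""
    n = len(times)
    mean_t = sum(times) / n
    mean_v = sum(values) / n
    num = sum((t - mean_t) * (v - mean_v) for t, v in zip(times, values))
    den = sum((t - mean_t) ** 2 for t in times)
    return num / den

# Hypothetical measurement series: months since baseline vs. a
# posterior ocular thickness measurement in micrometers.
visit_months = [0, 6, 12, 18, 24]
thickness_um = [96.0, 95.0, 94.0, 93.0, 92.0]

print(trend_slope(visit_months, thickness_um))  # about -0.167 um per month
```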