510(k) Data Aggregation (136 days)
ImageSPECTRUM
The imageSPECTRUM V6 is an ophthalmic software system indicated for acquiring, storing, managing, and displaying patient, diagnostic, and image data from Canon digital retinal cameras. It is also indicated for review of patient, diagnostic, and image data and measurement by trained healthcare professionals.
The imageSPECTRUM V6 is ophthalmic imaging software for acquiring, storing, managing, processing, and displaying patient, diagnostic, and image data from Canon digital retinal cameras. The imageSPECTRUM V6 consists of three software components: iS Capture, iS Review, and iS Server. The "iS Capture" component communicates with Canon's retinal cameras to take retinal images. The "iS Review" component provides functions for displaying, processing, and transferring retinal images. The "iS Server" component provides functions for storing and archiving retinal images.
The imageSPECTRUM V6 is an ophthalmic software system indicated for acquiring, storing, managing, processing, and displaying patient, diagnostic, and image data from Canon digital retinal cameras. It is also indicated for review of patient, diagnostic, and image data and measurement by trained healthcare professionals.
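The device description above outlines a three-part architecture: a capture component that talks to the camera, a review component for display and processing, and a server component for storage and archiving. The sketch below is purely illustrative of that kind of capture/review/archive split; the class names, fields, and data flow are assumptions made for explanation, not Canon's actual design.

```python
from dataclasses import dataclass, field


@dataclass
class RetinalImage:
    """A captured retinal image plus the patient/diagnostic data attached to it (hypothetical model)."""
    patient_id: str
    eye: str        # "OD" (right) or "OS" (left)
    modality: str   # e.g. "Color", "FA", "FAF"
    pixels: bytes


@dataclass
class ImageServer:
    """Stands in for the archive role (iS Server in the document): stores and retrieves images."""
    archive: list[RetinalImage] = field(default_factory=list)

    def store(self, image: RetinalImage) -> None:
        self.archive.append(image)

    def query(self, patient_id: str) -> list[RetinalImage]:
        return [img for img in self.archive if img.patient_id == patient_id]


class CaptureStation:
    """Stands in for the acquisition role (iS Capture): acquires a frame, then hands it to the archive."""
    def __init__(self, server: ImageServer) -> None:
        self.server = server

    def acquire(self, patient_id: str, eye: str, modality: str) -> RetinalImage:
        pixels = b"\x00" * 1024               # placeholder for camera frame data
        image = RetinalImage(patient_id, eye, modality, pixels)
        self.server.store(image)              # push to the archive component
        return image


class ReviewStation:
    """Stands in for the review role (iS Review): retrieves and lists stored images for display."""
    def __init__(self, server: ImageServer) -> None:
        self.server = server

    def list_studies(self, patient_id: str) -> list[str]:
        return [f"{img.modality} {img.eye}" for img in self.server.query(patient_id)]


if __name__ == "__main__":
    server = ImageServer()
    CaptureStation(server).acquire("P-0001", "OD", "Color")
    print(ReviewStation(server).list_studies("P-0001"))   # ['Color OD']
```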
This document is a 510(k) Premarket Notification from the FDA for the ImageSPECTRUM V6 ophthalmic software system. It determines the device is substantially equivalent to a previously cleared predicate device.
Here's an analysis of the acceptance criteria and study information, based on the provided text:
Important Note: The document focuses on demonstrating substantial equivalence to a predicate device, not on proving de novo clinical effectiveness or establishing performance thresholds against specific diagnostic accuracy metrics typically associated with AI/ML devices. Therefore, a table of "acceptance criteria" in the traditional sense of diagnostic performance (e.g., sensitivity, specificity) and reported performance metrics is not present in this type of 510(k) submission for a Picture Archiving and Communication System (PACS) software. The acceptance criteria here would primarily relate to functional requirements, safety, and equivalence to predecessor devices.
Acceptance Criteria and Reported Device Performance
Since this is a 510(k) for a PACS software system, the "acceptance criteria" are more about functional equivalence and safety rather than diagnostic accuracy. The document states:
"Canon concluded that the Canon imageSPECTRUM V6 is substantially equivalent to the predicate devices based on identical intended use and substantially equivalent technological characteristics and the similarities in functional design."
The "reported device performance" is implicitly that the device performs its intended functions (acquiring, storing, managing, processing, and displaying ophthalmic image data) safely and effectively, similar to the predicate device. The performance data section vaguely mentions "Software verification and validation was performed to ensure that the software device performed as intended."
Table of Acceptance Criteria and Reported Device Performance (as inferred from the document's purpose):
| Acceptance Criteria (Inferred from Substantial Equivalence Claim) | Reported Device Performance (Summary from Document) |
|---|---|
| Functional Equivalence | Acquiring, storing, managing, processing, and displaying patient, diagnostic, and image data from Canon digital retinal cameras; review of patient, diagnostic, and image data and measurement by trained healthcare professionals. |
| Specific Functional Similarities (from Table 1) | User Management: Supported. Patient Management (Backup/Restoration): Supported. Compatible Devices: Canon's retinal cameras (CR-2 Plus AF). Image Types: Color, FA, FAF. Capture Function (Communication, Sequence, Auto Exposure, Auto Focus, Auto Shot Control): Supported. View Image (Single, Comparison, Both Eyes): Supported. Drawing Function: Supported. Image Processing (Brightness, Contrast, Zoom, RGB Filters, Redfree, Emboss, Mosaic, Overlay): Supported; Emboss, Mosaic, and Overlay are new/different but deemed to raise no safety or efficacy questions. Annotation (Drawing Function, Add Text): Supported; new annotation tools (Macular Grid, PDT Marker, PDT Counter, AVR, Protractor) were added and deemed to raise no safety or efficacy questions. Standalone Configuration: Supported. Server-Client Configuration: Supported. DICOM Communication (Worklist, Storage, Commitment): Supported (see the sketch following this table). STEREO Viewing: Supported. Printing Image: Supported. Viewing Reports by Multiple Users: Supported. |
| Safety and Efficacy Equivalence | The device is deemed safe and effective for its stated indications for use, equivalent to the predicate device; no new safety or efficacy questions are raised by the differences in added annotation or image processing tools. Software verification and validation was performed to ensure that the software device performed as intended. |
| Hardware/Software Compatibility | Software requirements: Microsoft Windows 10 Pro (64-bit). Hardware requirements: CPU Core-i7 2 GHz or greater, RAM 4 GB or more, display screen resolution 1920x1080. (These are higher than the predicate's requirements but indicate functional capability rather than a performance claim for the software itself.) |
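Among the features listed in the table, DICOM Worklist, Storage, and Storage Commitment support is the most interoperability-relevant. The submission does not describe how this is implemented; as a general illustration of what a DICOM Storage (C-STORE) push from a capture station to an archive looks like, here is a minimal sketch using the open-source pydicom and pynetdicom libraries. The AE titles, host, port, and file path are placeholders, not values from the document.

```python
from pydicom import dcmread
from pynetdicom import AE, StoragePresentationContexts

# Hypothetical connection details -- not taken from the 510(k) document.
SERVER_HOST = "127.0.0.1"
SERVER_PORT = 11112

# Application Entity acting as a Storage SCU (the sending side).
ae = AE(ae_title="CAPTURE_SCU")
ae.requested_contexts = StoragePresentationContexts

# Read a DICOM file from disk (placeholder path, not a real study).
ds = dcmread("retinal_image.dcm")

assoc = ae.associate(SERVER_HOST, SERVER_PORT, ae_title="ARCHIVE_SCP")
if assoc.is_established:
    status = assoc.send_c_store(ds)  # issue the C-STORE request
    if status:
        # Status 0x0000 indicates the archive accepted the image.
        print(f"C-STORE request status: 0x{status.Status:04X}")
    else:
        print("Connection timed out, was aborted, or received an invalid response")
    assoc.release()
else:
    print("Association with the archive server was rejected or failed")
```

In a server-client deployment of the kind described, the archive side would act as the Storage SCP accepting such requests; that side is omitted from the sketch.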
Beyond this, the document provides limited details on the "study" as it would be understood for an AI/ML diagnostic device with performance metrics. This 510(k) is for a PACS system, which is typically a Class II device where the focus is on functional equivalence and risk management, rather than complex performance studies of an AI algorithm making diagnostic interpretations.
Here are the details from the document regarding the "study" closest to what was requested:
1. Sample size used for the test set and the data provenance:
- The document mentions "Software verification and validation was performed to ensure that the software device performed as intended." However, it does not specify any sample size for a test set of medical images or patient data.
- Data Provenance: Not specified. It's likely that internal corporate data (from Canon, Japan) was used for software validation, but no specifics are provided regarding country of origin or whether it was retrospective or prospective.
2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Not applicable / Not specified. This type of information is generally required for AI/ML diagnostic devices where expert consensus is needed to establish ground truth for a labeled dataset. For a PACS system, the "ground truth" is that the software correctly acquires, stores, manages, and displays the image data as designed, and that its tools function accurately (e.g., measurement tools calculate correctly). The document implies internal software testing and validation.
3. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- Not applicable / Not specified. Adjudication methods are typically used in clinical studies involving human readers and ground truth establishment for diagnostic tasks, which is not the primary focus of this PACS software.
4. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- No, an MRMC study was NOT done. This device is a PACS system, not an AI-based diagnostic tool designed to assist human readers. Its function is to manage and display images.
5. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done:
- Not applicable / Not explicitly stated for diagnostic performance. The "algorithm" here is the PACS software itself; its performance is its ability to handle and display images and the correct operation of its various tools. There is no diagnostic algorithm that would perform in a standalone mode. The functions listed (image processing, annotation, etc.) are features of the software for use by a "trained healthcare professional."
6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Not explicitly stated in terms of medical ground truth. As a PACS system, the "ground truth" for its validation would be the verification that the software correctly implements its specified functions (e.g., images are stored correctly, retrieved accurately, measurements are mathematically precise, display quality is as expected). This would typically involve internal software testing against functional specifications (see the sketch after this list for what one such functional check could look like).
7. The sample size for the training set:
- N/A. This device is PACS software, not an AI/ML model that is "trained" on a dataset in the conventional sense. The "training" for this software would be its development and programming based on established software engineering principles.
8. How the ground truth for the training set was established:
- N/A. See point 7.
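As noted under point 6, validation of a PACS system of this kind reduces to verifying functional specifications, for example that measurement tools compute values with mathematical precision. A minimal sketch of what one such functional verification test could look like is shown below; the distance_mm function, calibration factor, and expected values are hypothetical and are not taken from the submission.

```python
import math
import unittest


def distance_mm(p1, p2, mm_per_pixel):
    """Euclidean distance between two image points, converted to millimetres.

    Hypothetical stand-in for a PACS measurement tool; not Canon's implementation.
    """
    dx = (p2[0] - p1[0]) * mm_per_pixel
    dy = (p2[1] - p1[1]) * mm_per_pixel
    return math.hypot(dx, dy)


class MeasurementToolVerification(unittest.TestCase):
    """Functional verification in the spirit of 'the software performs as intended'."""

    def test_known_distance(self):
        # A 300x400-pixel right triangle at 0.01 mm/pixel should measure exactly 5.0 mm.
        self.assertAlmostEqual(distance_mm((0, 0), (300, 400), 0.01), 5.0)

    def test_zero_distance(self):
        # Identical points must measure zero regardless of calibration.
        self.assertEqual(distance_mm((120, 80), (120, 80), 0.0065), 0.0)


if __name__ == "__main__":
    unittest.main()
```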