OptosAdvance 4.0 Software
OptosAdvance 4.0 is a standalone, browser-based software application intended for use by healthcare professionals to import, store, manage, display, analyze and measure data from ophthalmic diagnostic instruments, including: patient data, clinical images and information, reports, videos, and measurement of DICOM-compliant images.
The OptosAdvance 4.0 (OA4) software application provides multi-dimensional visualization of digital images to aid clinicians in their analysis of anatomy and pathology. The OptosAdvance user interface follows typical clinical workflow patterns to process, review, and analyze digital images.
The key features of OptosAdvance 4.0 include the ability to:
- Acquire, store, retrieve and display DICOM image data;
- Access patient data securely;
- Search patient studies and select images for closer examination;
- Interactively manipulate an image to visualize anatomy and pathology;
- Select multiple images for comparison;
- Annotate, tag and record selected views;
- Measure distance (linear) and area of DICOM images (see the sketch following this list);
- Manage, backup and archive data;
- Import and export data to network storage devices;
- Securely access and transfer data; and
- Output selected views to printers.
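To illustrate the distance and area measurement feature listed above, the sketch below derives a linear measurement and an area measurement from the DICOM PixelSpacing attribute. This is a minimal, hypothetical example built on the open-source pydicom library; the file name, coordinates, and region size are illustrative assumptions and do not represent Optos's actual implementation.

```python
import pydicom

# Read a DICOM-compliant ophthalmic image (file name is illustrative).
ds = pydicom.dcmread("fundus_example.dcm")

# PixelSpacing holds the physical pixel size as [row_spacing_mm, col_spacing_mm].
row_mm, col_mm = (float(v) for v in ds.PixelSpacing)

# Linear (distance) measurement between two user-selected points in (row, col) pixels.
p1, p2 = (120, 85), (340, 410)
distance_mm = (((p2[0] - p1[0]) * row_mm) ** 2 + ((p2[1] - p1[1]) * col_mm) ** 2) ** 0.5

# Area measurement from the number of pixels inside an annotated region.
region_pixel_count = 1540
area_mm2 = region_pixel_count * row_mm * col_mm

print(f"distance: {distance_mm:.2f} mm, area: {area_mm2:.3f} mm^2")
```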
The software relies on images being provided to a specified network path on the OptosAdvance Server by the connected ophthalmic device (Scanning Laser Ophthalmoscope, Fundus Camera, Optical Coherence Tomography unit, etc.) in a DICOM-compliant format. The software then places the image and associated data on the network storage unit in a format that makes the image available via a securely connected web browser. Locally archived studies are securely pushed to the remote archive server for storage. The archive on the remote secure server serves as disaster-recovery storage and provides access to patient history.
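As a rough illustration of this ingest-and-archive flow, the sketch below polls a watched folder, validates files as DICOM, and files them into a local archive by study. It is a simplified, assumed workflow using pydicom; the directory paths, file extension, and polling interval are placeholders, and the real product's server behaviour (including the secure push to the remote archive) is not documented at this level of detail.

```python
import shutil
import time
from pathlib import Path

import pydicom
from pydicom.errors import InvalidDicomError

INBOX = Path("/mnt/optos_inbox")      # placeholder: network path written to by instruments
ARCHIVE = Path("/mnt/local_archive")  # placeholder: local storage served to web clients

def ingest_once() -> None:
    """Validate any new files as DICOM and file them by study instance UID."""
    for path in INBOX.glob("*.dcm"):
        try:
            ds = pydicom.dcmread(path, stop_before_pixels=True)
        except InvalidDicomError:
            print(f"skipping non-DICOM file: {path.name}")
            continue
        dest = ARCHIVE / str(ds.StudyInstanceUID)
        dest.mkdir(parents=True, exist_ok=True)
        shutil.move(str(path), str(dest / path.name))
        print(f"archived {path.name} for patient {ds.get('PatientID', 'UNKNOWN')}")

if __name__ == "__main__":
    while True:          # simple polling loop; a real service would also push to the remote archive
        ingest_once()
        time.sleep(30)
```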
The provided text describes OptosAdvance 4.0 software, a Picture Archiving and Communication System (PACS), and its substantial equivalence to predicate devices. The clearance is based on non-clinical performance testing.
Here's a breakdown of the requested information:
1. A table of acceptance criteria and the reported device performance
| Acceptance Criteria | Reported Device Performance |
| --- | --- |
| Operates according to requirements | Software testing ensured that new features operate according to requirements and without impact to existing functionality. |
| Maintains existing functionality | Software testing confirmed no impact to existing functionality. |
| Provides equivalent measurements (linear and area) | Equivalence tests were performed by loading DICOM objects with known dimensions, having users measure those features, and comparing their measurements with the known dimensions (see the sketch following this table). The text implies successful equivalence, stating that the "OptosAdvance 4.0 software application provides equivalent measurements." |
| Risk management | Each risk pertaining to OptosAdvance 4.0 was individually assessed, reduced to "as low as possible," and evaluated to have a probability of occurrence of harm of no more than "Remote." All risks were collectively reviewed, and the benefits were determined to outweigh the risks. |
| Cybersecurity risks addressed | The device was designed and tested with potential cybersecurity risks in mind to ensure confidentiality, integrity, availability, and accountability. |
| Images display at the same resolution and clarity | Verification and validation testing for the subject software covered the comparison of images on the custom software and the web client, which were found to display at the same resolution and clarity. (This implicitly refers to comparison with the predicate device's display capabilities or to internal consistency.) |
| Functionality equivalent to predicates | The device's acquisition, importing, viewing, measurement and analysis, network and security, print, archive, and backup functionalities are similar to those of the predicate devices. Minor technological differences were determined not to raise new issues of safety or effectiveness. (This is a more qualitative criterion based on overall functionality, supported by detailed comparisons.) |
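The measurement equivalence test summarised in the table (users measure features of known size and their results are compared against those known dimensions) can be expressed compactly in code. The sketch below is a hypothetical analysis script: the feature names, measured values, and 2% tolerance are invented for illustration and are not taken from the submission.

```python
# Hypothetical summary of a measurement equivalence test: each tuple is
# (feature_id, known_dimension_mm, user_measured_mm). Values are invented.
results = [
    ("circle_A_diameter", 3.00, 3.02),
    ("square_B_edge", 2.00, 1.97),
    ("line_C_length", 5.00, 5.04),
]

TOLERANCE = 0.02  # accept measurements within 2% of the known dimension (illustrative)

failures = []
for feature_id, known, measured in results:
    relative_error = abs(measured - known) / known
    status = "PASS" if relative_error <= TOLERANCE else "FAIL"
    if status == "FAIL":
        failures.append(feature_id)
    print(f"{feature_id}: known={known:.2f} mm, measured={measured:.2f} mm, "
          f"error={relative_error:.1%} -> {status}")

print("equivalence demonstrated" if not failures else f"investigate: {failures}")
```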
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document does not explicitly state the sample size for the test set or the data provenance for the performance testing. It mentions "previously acquired medical images" for verification, validation, and evaluation, and "DICOM objects that contain features with known dimensions" for equivalence tests. This suggests the use of retrospective or simulated data, but specifics are not provided. Given the nature of a PACS system, the "data" would consist of DICOM images.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
The document does not specify the number of experts or their qualifications used to establish ground truth. For the measurement equivalence tests, "known dimensions" implies a pre-established ground truth, likely from the creation of the DICOM objects themselves or from other validated measurement tools, rather than expert annotation.
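To make the "ground truth by construction" idea concrete, the sketch below builds an in-memory DICOM-style test object whose feature dimensions are known exactly from how it is generated, so no expert annotation is needed. The pixel spacing, image size, and rectangle geometry are illustrative assumptions, not details from the submission.

```python
import numpy as np
from pydicom.dataset import Dataset

# Synthetic test object: the ground truth comes from how the object is built.
ds = Dataset()
ds.Rows = ds.Columns = 512
ds.PixelSpacing = [0.01, 0.01]  # 0.01 mm per pixel in both directions (illustrative)
ds.SamplesPerPixel = 1
ds.PhotometricInterpretation = "MONOCHROME2"
ds.BitsAllocated = 8

pixels = np.zeros((512, 512), dtype=np.uint8)
pixels[106:306, 106:406] = 255  # bright rectangle: 200 rows x 300 columns
ds.PixelData = pixels.tobytes()

# Known dimensions follow directly from the geometry and the pixel spacing.
known_height_mm = 200 * ds.PixelSpacing[0]  # 2.0 mm
known_width_mm = 300 * ds.PixelSpacing[1]   # 3.0 mm
known_area_mm2 = known_height_mm * known_width_mm
print(known_height_mm, known_width_mm, known_area_mm2)
```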
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
The document does not describe any adjudication method. The performance testing appears to be based on objective comparisons (e.g., measured values vs. known dimensions, checking if features operate as required) rather than subjective expert review requiring adjudication.
5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI assistance versus without it
No MRMC comparative effectiveness study was done. OptosAdvance 4.0 is a PACS system designed for managing, displaying, and measuring ophthalmic data, not for providing AI assistance to human readers for diagnostic interpretation. Therefore, there is no mention of human reader improvement with or without AI assistance. The submission specifically states: "Thus, clinical studies are not required to support the subject device's safety and effectiveness; the non-clinical objective test methods used for evaluation demonstrate that the software's performance is equivalent to that of the legally marketed predicates."
6. If standalone (i.e., algorithm-only, without human-in-the-loop) performance testing was done
Yes, the performance testing described is effectively standalone testing of the software's functionalities. The "equivalence tests" for measurements involve the software's ability to provide measurements accurately compared to known dimensions. The overall software verification and validation are for the algorithm's performance in managing and displaying data, not for human-in-the-loop diagnostic accuracy. The device itself is described as a "standalone, browser-based software application."
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
For the measurement tests, the ground truth was "known dimensions" embedded within DICOM objects. For general software functionality, the ground truth would be the defined requirements and specifications of the software.
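Where requirements and specifications serve as the ground truth, verification is commonly automated as requirement-traced test cases. The following is a minimal, hypothetical pytest-style sketch: the requirement ID, 1% tolerance, and the measure_distance_mm helper are invented for illustration and do not come from the submission.

```python
import math

import pytest

# Hypothetical helper under test: distance in mm between two pixel coordinates,
# given the DICOM PixelSpacing [row_spacing_mm, col_spacing_mm].
def measure_distance_mm(p1, p2, pixel_spacing):
    d_row = (p2[0] - p1[0]) * pixel_spacing[0]
    d_col = (p2[1] - p1[1]) * pixel_spacing[1]
    return math.hypot(d_row, d_col)

# REQ-MEAS-001 (illustrative requirement ID): linear measurements shall agree
# with the known dimension of a calibrated test object to within 1%.
@pytest.mark.parametrize(
    "p1,p2,spacing,expected_mm",
    [
        ((0, 0), (0, 300), [0.01, 0.01], 3.00),  # horizontal edge of the test object
        ((0, 0), (200, 0), [0.01, 0.01], 2.00),  # vertical edge of the test object
    ],
)
def test_linear_measurement_matches_known_dimension(p1, p2, spacing, expected_mm):
    assert measure_distance_mm(p1, p2, spacing) == pytest.approx(expected_mm, rel=0.01)
```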
8. The sample size for the training set
The document does not describe a training set. This is not an AI/machine learning device that requires a training set. It is a PACS system.
9. How the ground truth for the training set was established
Not applicable, as there is no training set for this type of device.