Search Results
Found 51 results
510(k) Data Aggregation
(245 days)
NFJ
iCare ALTIUS CW is a Medical Device Software indicated for the review, processing and analysis of ophthalmic medical images, for the review of video, clinical and diagnostic data, measurements and reports, generated by ophthalmic medical devices or documentation systems through computerized networks, to support trained healthcare professionals in the diagnosis and monitoring of several eye pathologies.
iCare ALTIUS CW is a cloud-based software application with a web-based interface able to:
- review medical ophthalmic images, including videos,
- digitally process images,
- review diagnostic data, clinical information and reports,
from ophthalmic diagnostic instruments. CW does not perform automated image analysis but provides advanced imaging manipulation tools.
CW allows the user to review and process diagnostic data and multiple images in different formats (e.g., PDF, JPEG, ...) and provides the following features (a minimal sketch of the basic image filters follows the list below):
- image manipulation filters such as zooming, brightness, contrast and gamma adjustment, and RGB filtering,
- side-by-side image comparison (detached or synchronized mode) with different layouts,
- advanced imaging tools, such as flicker between different pictures and mosaics of several images,
- review and print reports generated by ophthalmic devices.
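The brightness, contrast, gamma, and RGB filtering operations listed above are standard point-wise image adjustments. The snippet below is a minimal, hypothetical sketch of such filters using NumPy; the function name, parameters, and values are illustrative assumptions and do not reflect CW's actual implementation.

```python
import numpy as np

def adjust_image(img, brightness=0, contrast=1.0, gamma=1.0, rgb_gains=(1.0, 1.0, 1.0)):
    """Apply simple brightness/contrast/gamma/RGB-gain adjustments to an 8-bit RGB image.

    img: uint8 array of shape (H, W, 3). All parameter defaults are illustrative only.
    """
    out = img.astype(np.float32)
    out = (out - 127.5) * contrast + 127.5 + brightness   # contrast around mid-gray, then brightness offset
    out = np.clip(out, 0, 255)
    out = 255.0 * (out / 255.0) ** (1.0 / gamma)           # gamma correction
    out = out * np.asarray(rgb_gains, dtype=np.float32)    # per-channel (R, G, B) gain filtering
    return np.clip(out, 0, 255).astype(np.uint8)

# Example: brighten slightly, boost contrast, and attenuate the blue channel.
frame = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)  # stand-in for a fundus image
enhanced = adjust_image(frame, brightness=10, contrast=1.2, gamma=1.1, rgb_gains=(1.0, 1.0, 0.7))
```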
CW integrates with PACS software systems, which provide the medical images and reports to be analyzed by CW. Patient data and medical images are exchanged between CW and PACS through computerized networks using secured network communication.
The web-based interface of CW is designed to be used through a desktop PC or a laptop using keyboard and mouse (further details in the technical requirements section).
The User Interface is available in the languages required by the applicable regulatory requirement of the country where the device is placed on the market.
The iCare ALTIUS CW device is a Medical Device Software indicated for the review, processing, and analysis of ophthalmic medical images, video, clinical and diagnostic data, measurements, and reports generated by ophthalmic medical devices or documentation systems. It aims to support trained healthcare professionals in the diagnosis and monitoring of various eye pathologies.
The provided text does not contain detailed acceptance criteria or a comprehensive study report with specific performance metrics and statistical results. It describes the device, its intended use, and states that "Software Verification and Validation Testing" was conducted, and "documentation was provided as recommended by FDA's Guidance for Industry and FDA Staff, 'Content of Premarket Submissions for Device Software Functions.'" However, it does not specify what those acceptance criteria were, what the reported device performance against those criteria was, or provide the specifics of the study methodology (e.g., sample sizes, ground truth establishment, expert qualifications, etc.).
Therefore, I cannot fully answer your request based on the provided input.
However, I can extract the available information and highlight what is missing:
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria | Reported Device Performance |
---|---|
Not specified in the provided text. The document states that "Software Verification and Validation Testing were conducted" and implies compliance with FDA guidance and IEC 62304 standard, but does not list specific quantitative or qualitative acceptance criteria for clinical or technical performance. | Not specified in the provided text. The document does not provide specific performance metrics (e.g., accuracy, sensitivity, specificity, resolution, speed, etc.) that were observed or measured for the device in relation to defined acceptance criteria. |
2. Sample size used for the test set and the data provenance
- Sample size for test set: Not specified.
- Data provenance: Not specified.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Number of experts: Not specified.
- Qualifications of experts: Not specified.
4. Adjudication method for the test set
- Adjudication method: Not specified.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
- MRMC study: Not specified. The device description states, "CW does not perform automated image analysis but provides advanced imaging manipulation tools," suggesting it's primarily a viewing and processing platform rather than an AI-driven diagnostic tool in the typical sense that would necessitate an MRMC study comparing AI-assisted vs. unassisted human performance in diagnosis or detection. The capabilities listed (zooming, brightness, contrast, comparison, flickering, mosaic, cup-to-disc ratio annotation) are image manipulation and viewing tools.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Standalone study: Not specified. As noted above, the device is described as a tool to "support trained healthcare professionals" and "does not perform automated image analysis." Thus, a standalone algorithm performance evaluation would not be applicable in the same way it would for an autonomous AI diagnostic system.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Type of ground truth: Not specified.
8. The sample size for the training set
- Sample size for training set: Not specified. Given that the device "does not perform automated image analysis," it's unlikely to have a "training set" in the context of a machine learning algorithm.
9. How the ground truth for the training set was established
- Method for establishing ground truth: Not applicable based on the device description.
Summary of what is known:
- Device Name: iCare ALTIUS CW
- Regulatory Status: K234076, Class II, Product Code NFJ
- Indications for Use: Review, process, and analyze ophthalmic medical images, video, clinical and diagnostic data, measurements, and reports to support healthcare professionals in diagnosis and monitoring of eye pathologies.
- Key Features: Image manipulation (zoom, pan, brightness, contrast, gamma, RGB filtering), side-by-side comparison, advanced imaging tools (flicker, mosaic), review/print reports, cup-to-disc ratio annotation.
- Core Functionality: Cloud-based software providing advanced imaging manipulation tools; it does not perform automated image analysis.
- Performance Data Provided: "Software Verification and Validation Testing" was conducted, and documentation complied with FDA guidance and IEC 62304.
- Conclusion: The device is substantially equivalent to the predicate (FORUM, K213527), and differences (absence of purely database features, measurements only in dimensionless units for cup-to-disc, mosaic and flickering features, different system architecture) have no effect on safety and effectiveness.
What is explicitly missing from the provided text to fully answer the request:
- Specific quantitative or qualitative acceptance criteria.
- Detailed results of the verification and validation testing against those criteria.
- Any information regarding clinical studies, test set sizes, ground truth establishment, expert qualifications, or adjudication methods.
- Information about training sets or AI performance metrics, as the device explicitly states it does not perform automated image analysis.
(170 days)
NFJ
The IMAGEnet6 Ophthalmic Data System is a software program that is intended for use in the collection, storage and management of digital images, patient data, diagnostic data and clinical information from Topcon devices.
It is intended for processing and displaying ophthalmic images and optical coherence tomography data.
The IMAGEnet6 Ophthalmic Data System uses the same algorithms and reference databases from the original data capture device as a quantitative tool for the comparison of posterior ocular measurements to a database of known normal subjects.
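Comparing a posterior ocular measurement to a database of known normal subjects is typically expressed as a percentile within an age-matched normative distribution. The sketch below is a hypothetical illustration of that general idea; the Gaussian model, normative mean and standard deviation, and classification cut-offs are invented placeholders, not the reference database used by Topcon devices.

```python
from scipy.stats import norm

def normative_percentile(measurement, normal_mean, normal_sd):
    """Percentile of a measurement within a normal-subject distribution,
    assuming the normative data is summarized by a Gaussian mean and SD."""
    z = (measurement - normal_mean) / normal_sd
    return 100.0 * norm.cdf(z)

def classify(percentile):
    """Illustrative color coding similar to typical OCT normative displays (placeholder cut-offs)."""
    if percentile < 1.0:
        return "outside normal limits (red)"
    if percentile < 5.0:
        return "borderline (yellow)"
    return "within normal limits (green)"

# Example: RNFL thickness of 78 um against a hypothetical age-matched norm of 95 +/- 10 um.
p = normative_percentile(78.0, normal_mean=95.0, normal_sd=10.0)
print(f"{p:.1f}th percentile -> {classify(p)}")
```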
IMAGEnet6 Ophthalmic Data System is a Web application that allows management of patient information, exam information and image information. It is installed on a server PC and operated via a web browser of a client PC.
When combined with the 3D OCT-1 (type: Maestro2), IMAGEnet6 provides a GUI for the remote operation function. This optional function enables users to use some of the image capture functions by operating a PC or tablet PC connected to the external PC of the 3D OCT-1 (type: Maestro2) device via an Ethernet cable. The remote operation function is not intended to be used from any distance beyond social distancing recommendations (e.g., operation from different rooms or different buildings).
The provided document is a 510(k) Premarket Notification from the FDA for the IMAGEnet6 Ophthalmic Data System. This device is a Medical Image Management and Processing System, classified as Class II, with product code NFJ.
Based on the document, the IMAGEnet6 Ophthalmic Data System, subject device (version 2.52.1), is considered substantially equivalent to the predicate device (IMAGEnet6, version 1.52, K171370). The submission is primarily for a software update with changes including a modified remote operation function and expanded compatibility with additional Topcon devices.
Here's an analysis of the acceptance criteria and study information provided:
1. Table of Acceptance Criteria and Reported Device Performance:
The document does not specify quantitative acceptance criteria in a typical clinical study format (e.g., target sensitivity, specificity). Instead, the acceptance criterion for the software modification (remote operation function) appears to be that its performance (image quality and diagnosability) is equivalent to or the same as the device without the remote operation function.
Acceptance Criterion (Implicit) | Reported Device Performance |
---|---|
Image quality with remote operation function is the same as without. | Confirmed that image quality is the same with or without the remote operation function. |
Diagnosability with remote operation function is the same as without. | Confirmed that diagnosability is the same with or without the remote operation function. |
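The submission does not describe how "image quality is the same" was measured. One common, purely illustrative way to quantify equivalence between an image acquired via remote operation and one acquired locally is to compute full-reference metrics such as PSNR and SSIM; the sketch below uses scikit-image, and the thresholds are arbitrary assumptions, not the acceptance criteria used in this 510(k).

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def images_equivalent(local_img, remote_img, min_psnr=40.0, min_ssim=0.98):
    """Compare two 8-bit grayscale images and return metrics plus a pass/fail flag.

    min_psnr / min_ssim are placeholder thresholds for illustration only.
    """
    psnr = peak_signal_noise_ratio(local_img, remote_img, data_range=255)
    ssim = structural_similarity(local_img, remote_img, data_range=255)
    return {"psnr_db": psnr, "ssim": ssim, "equivalent": psnr >= min_psnr and ssim >= min_ssim}

# Example with synthetic stand-in images; the "remote" copy has slight pixel noise added.
local = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
remote = np.clip(local.astype(np.int16) + np.random.randint(-2, 3, local.shape), 0, 255).astype(np.uint8)
print(images_equivalent(local, remote))
```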
2. Sample size used for the test set and data provenance:
- Sample Size for Test Set: Not explicitly stated. The document mentions "comparison testing" was performed for the modified remote operation function, but the size of the test set (number of images or cases) is not provided.
- Data Provenance: Not specified. The document does not indicate the country of origin of the data or whether it was retrospective or prospective.
3. Number of experts used to establish the ground truth for the test set and qualifications of those experts:
This information is not provided in the document. As "clinical performance data was not required for this 510(k) submission," there is no mention of expert-established ground truth for a clinical test set. The assessment of "image quality and diagnosability" was likely an internal validation, possibly by qualified personnel, but the specifics are not disclosed.
4. Adjudication method for the test set:
This information is not provided. Given that clinical performance data was not required, a formal adjudication process akin to clinical trials is unlikely to have been detailed.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs without AI assistance:
- MRMC Study: No, an MRMC comparative effectiveness study was not done. The device is an image management and processing system, not an AI-powered diagnostic tool intended to assist human readers in interpretation.
- Effect Size: Not applicable, as no such study was performed or required.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:
- The document implies that "comparison testing" was conducted to confirm image quality and diagnosability with and without the remote operation function. This would be a form of standalone performance assessment of the system's ability to maintain image integrity and diagnostic utility, but it does not involve the standalone diagnostic performance of an algorithm without human input for disease detection.
7. The type of ground truth used:
- The document states that "clinical performance data was not required." Therefore, there is no mention of a ground truth established by expert consensus, pathology, or outcomes data for diagnostic accuracy. The ground truth for the "comparison testing" of the remote operation function would likely be the inherent quality and diagnostic features of the images generated by the original (non-remote) system. The testing aimed to confirm that the remote function did not degrade this baseline.
8. The sample size for the training set:
IMAGEnet6 is described as a "software program that is intended for use in the collection, storage and management of digital images, patient data, diagnostic data and clinical information from Topcon devices." It uses "the same algorithms and reference databases from the original data capture device as a quantitative tool for the comparison of posterior ocular measurements to a database of known normal subjects."
This suggests the device itself is not a deep learning AI model that requires a "training set" in the conventional sense of machine learning for diagnostic tasks. Rather, it integrates existing algorithms and reference databases. Therefore, the concept of a "training set" for the IMAGEnet6 software as a whole is not applicable in the context of this 510(k) submission.
9. How the ground truth for the training set was established:
As the concept of a training set for a machine learning model is not applicable to the functionality described for IMAGEnet6 in this submission, the method for establishing ground truth for a training set is not relevant or discussed. The reference databases it utilizes would have had their own data collection and establishment methods, but those pertain to the underlying instruments, not the IMAGEnet6 system itself regarding this submission.
(92 days)
NFJ
CALLISTO eye Software is a software device intended for remote control of ophthalmic surgical microscopes of ARTEVO 750/850 and RESCAN 700, and display images of the anterior and posterior segment of the eye.
CALLISTO eye Software is indicated as graphical guidance aid to insert, align, position, and register an intraocular lens (IOL) including toric IOLs, limbal relaxing incisions, and capsulorhexis during anterior segment surgical procedures.
CALLISTO eye software version 5.0 is a new release sporting a new user interface but carries the clinical feature set of software version 3.7.2: it supports the digital visualization technology and connectivity of ARTEVO 750 / ARTEVO 850 and provides connectivity to the QUATERA700. CALLISTO eye enables the video visualization of the anterior segments of the eye and allows the connection and remote control of a surgical microscope with and without OCT Camera. It is designed for high patient throughput and can be used for teaching purposes.
CALLISTO eye is an assistance system that processes real-time video images that can be displayed on the CALLISTO eye Panel PC for viewing by the surgeon and the surgical staff in the operating room. The same video images can be viewed by the surgeon through the eyepiece of the connected surgical microscope. CALLISTO eye provides Assistant Functions displaying treatment templates as screen overlays and Cockpits displaying patient and device information as screen overlays. Both functions assist the surgeon during procedures such as limbal relaxing incisions, capsulorhexis, and alignment of toric intraocular lenses (TIOL). All treatment templates are based on preoperative clinical data of a particular patient and shall be defined by the surgeon prior to surgery. These templates can be displayed on the CALLISTO eye Panel PC, through the eyepiece of the surgical microscope equipped with a data injection system (IDIS; with version 5.0 relabeled as ADVISION) of the ARTEVO 750, or on a 3D monitor connected to the ARTEVO 850. While using the "ASSISTANCE markerless" configuration, CALLISTO eye can utilize the preoperative diagnostic data from the Zeiss IOLMaster and may provide the reference and target axis as required to align a toric intraocular lens without the otherwise required ink marks.
Transmission of the diagnostic data from the IOLMaster to CALLISTO eye takes place via USB stick or via a data network connected to a DICOM compatible MIMPS server such as FORUM. The DICOM functionality allows the indirect communication with other DICOM compatible diagnostic devices and patient information systems to exchange patient data (e.g. medical devices work lists).
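The DICOM-based exchange described above means the preoperative biometry and patient demographics arrive as standard DICOM objects. The snippet below is a generic, hypothetical illustration of reading a few standard attributes with the pydicom library; the file path is a placeholder and the snippet does not describe the IOLMaster's or FORUM's actual object definitions.

```python
import pydicom

# Placeholder path to a DICOM object exported by a diagnostic device or a MIMPS server.
ds = pydicom.dcmread("exported_measurement.dcm")

# Standard patient/study attributes defined by the DICOM standard.
print("Patient:   ", ds.PatientName, ds.PatientID)
print("Study date:", ds.get("StudyDate", "unknown"))
print("Modality:  ", ds.get("Modality", "unknown"))
print("SOP Class: ", ds.SOPClassUID)
```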
Carl Zeiss Meditec AG did not conduct a clinical study for CALLISTO eye Software, version 5.0, to prove that the device met the acceptance criteria and was substantially equivalent to the predicate device, CALLISTO eye Software, version 3.7.2.
The submission states: "Animal and Clinical testing was not conducted."
Instead, the submission relied on non-clinical performance testing and risk management to demonstrate substantial equivalence.
Here's the available information about the acceptance criteria and the evidence relied upon, given that no study was performed in the traditional sense:
1. A table of acceptance criteria and the reported device performance
The submission does not explicitly state "acceptance criteria" for clinical performance as no clinical testing was performed. However, the basis for equivalence is the identical indications for use and equivalent technological characteristics and risk profile compared to the predicate device. The performance is deemed to be equivalent to the predicate.
Acceptance Criteria (Implied by Substantial Equivalence Claim) | Reported Device Performance (Summary of Non-Clinical Testing) |
---|---|
Identical Indications for Use: CALLISTO eye Software 5.0 will perform precisely the same functions as the predicate in aiding ophthalmic surgical procedures for IOLs, limbal relaxing incisions, and capsulorhexis. | The indications for use are identical to the predicate device, K231676. |
Equivalent Technological Characteristics: The device will operate with similar functional performance and safety as the predicate device, despite software version update and some hardware connectivity changes. | Software verification and validation activities were successfully completed. The device complies with specifications and requirements. Risk management (ISO 14971) and cybersecurity assessment were performed. |
Equivalent Risk Profile: The changes to the device will not introduce new safety concerns or modify existing risks such that the device is no longer substantially equivalent. | Risk analysis identified potential hazards and mitigations, controlled by design means, protection measures, and user instructions. Cybersecurity assessment based on VAST Threat Modeling was conducted. |
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
Not applicable, as no clinical test set was used for patient data. The "test set" for non-clinical testing refers to software test cases and system verification, not patient data.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
Not applicable, as no clinical ground truth was established by experts for a test set. Non-clinical software verification relies on defined specifications and requirements as the "ground truth" for expected software behavior.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
Not applicable, as no clinical test set requiring adjudication was used.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
Not applicable. No MRMC comparative effectiveness study was done as no clinical testing was performed. The device is a "graphical guidance aid" and not an AI that independently diagnoses or drives clinical decisions, nor does it quantify human reader improvement.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
Not applicable for clinical performance. The device is intended as an assistance system with human-in-the-loop (the surgeon). The non-clinical testing focused on software functionality and integration, not standalone clinical performance.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
For the non-clinical performance testing (software verification and validation), the "ground truth" was established by the pre-defined specifications, requirements, and design documents of the software. Compliance with these internal standards and relevant international standards (ISO 14971, IEC 62366-1, IEC 62304, NEMA PS 3.1-3.20) was the basis for verifying performance.
8. The sample size for the training set
Not applicable. This device is not an AI/ML model that requires a training set in the conventional sense. It is a software update to an existing medical image management and processing system.
9. How the ground truth for the training set was established
Not applicable. See point 8.
(89 days)
NFJ
Harmony is a comprehensive software platform intended for use in importing, processing, measurement, analysis and storage of clinical images and videos of the eye as well as in management of patient data, clinical information, reports from ophthalmic diagnostic instruments through either a direct connection with the instruments or through computerized networks.
Harmony is a modification to the existing Harmony cleared in K182376. The differences between the new version and the currently cleared version are modifications to the graphical user interface consisting of PixelSmart Technology, Internationalization support, Analytical thickness grids, Hanging protocols, and Automatic image smoothing while zooming in.
Harmony is a comprehensive software platform intended for use in importing, processing, measurement, analysis and storage of clinical images and videos of the eye, as well as for management of patient data, diagnostic data, clinical information, reports from ophthalmic diagnostic instruments through either a direct connection with the instruments or through computerized networks.
Harmony is used together with a number of computerized digital imaging devices, including:
- Optical Coherence Tomography devices
- Mydriatic retinal cameras
- Non-mydriatic retinal cameras
- Biomicroscopes (slit lamps)
In addition, Harmony collects and manages patient demographics, image data, and clinical reports from a range of medical devices, including:
- Scanning Laser Ophthalmoscope images and videos
- Non-radiometric ultrasound devices
- Video image sources
- TWAIN-compliant imaging sources
- Compliant data sources placed in network-accessible folders and directories
- Images of known format from digital cameras and scanners
- Printer files of known format from computerized diagnostic devices
- Electronic information complying with accepted DICOM formats
- Other devices connected in proprietary formats
There are 5 notable device modifications subject of this submission: PixelSmart Technology, International support, Analytical thickness grids, Hanging protocols, and Automatic image smoothing while zooming in, along with some minor modifications.
PixelSmart is an optional post-processing image enhancement algorithm performing a moving average across OCT B-scans, reducing speckle noise and improving contrast by applying smoothing.
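The submission describes PixelSmart only as "a moving average across OCT B-scans." The sketch below illustrates that general idea on a volume of consecutive B-scans; the window size, data layout, and use of SciPy are assumptions for demonstration and not Topcon's actual algorithm.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def average_across_bscans(volume, window=3):
    """Moving average over neighbouring B-scans.

    volume: array of shape (n_bscans, depth, width).
    Averaging along axis 0 (across adjacent B-scans) reduces uncorrelated
    speckle while preserving in-plane detail.
    """
    return uniform_filter1d(volume.astype(np.float32), size=window, axis=0, mode="nearest")

# Example with a synthetic OCT volume: 64 B-scans of 496 x 512 pixels.
oct_volume = np.random.rand(64, 496, 512).astype(np.float32)
smoothed = average_across_bscans(oct_volume, window=3)
```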
International support adds the possibility to use the Harmony user interface and online user manual in Spanish, in addition to the standard English software.
Analytical thickness grids offer the same functionality as the existing, cleared thickness grids in Topcon's IMAGEnet 6, now also in Harmony. The grids show sectorial average thickness values as derived from OCT segmentation data.
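Sectorial average thickness values are obtained by averaging a thickness map within predefined grid sectors. The sketch below shows that calculation for a simple, hypothetical central-disc-plus-four-quadrant grid; the sector geometry, radii, and naming are assumptions, not the grids actually implemented in Harmony or IMAGEnet 6.

```python
import numpy as np

def sector_averages(thickness_map, center, radii_px=(30, 100)):
    """Average a 2D thickness map (micrometers) over a simple grid:
    a central disc plus four quadrants of an outer ring (hypothetical layout)."""
    h, w = thickness_map.shape
    yy, xx = np.mgrid[0:h, 0:w]
    dy, dx = yy - center[0], xx - center[1]
    r = np.hypot(dx, dy)
    angle = np.degrees(np.arctan2(dy, dx)) % 360

    sectors = {"center": r < radii_px[0]}
    ring = (r >= radii_px[0]) & (r < radii_px[1])
    quadrants = {"temporal": (315, 45), "superior": (45, 135),
                 "nasal": (135, 225), "inferior": (225, 315)}
    for name, (lo, hi) in quadrants.items():
        in_wedge = (angle >= lo) | (angle < hi) if lo > hi else (angle >= lo) & (angle < hi)
        sectors[name] = ring & in_wedge
    return {name: float(thickness_map[mask].mean()) for name, mask in sectors.items()}

# Example on a synthetic 300 x 300 thickness map centered on the fovea.
tmap = np.full((300, 300), 280.0) + np.random.normal(0, 5, (300, 300))
print(sector_averages(tmap, center=(150, 150)))
```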
Hanging protocols allows a customizable image display arrangement in the Harmony user interface, resembling the arrangement of physical images on a light box.
Automatic image smoothing while zooming in is an optional display feature that will cause OCT B-scan images on higher zoom levels to look less pixelated.
The provided text describes a 510(k) premarket notification for a device called "Harmony" by Topcon Healthcare Solutions. This submission is for modifications to an existing cleared device (K182376). As such, the focus is on demonstrating that the modifications do not introduce new safety or effectiveness concerns and that the device remains substantially equivalent to its predicate.
Therefore, the document does not contain the kind of detailed clinical study and performance data (e.g., acceptance criteria tables, sample sizes for test/training sets, expert ground truth establishment, MRMC studies) that would typically be required for the initial clearance of a novel AI/ML-driven device with diagnostic claims. Instead, it relies on demonstrating that the "modified Harmony" functions equivalently to the predicate Harmony, primarily through software validation and verification.
Based on the provided text, here's what can and cannot be answered:
1. A table of acceptance criteria and the reported device performance
- Cannot be provided. The document states: "Software validation and verification demonstrate that Harmony performs as intended and meets its specifications, using methods equivalent to the predicate device." However, it does not specify quantitative acceptance criteria for performance metrics (e.g., sensitivity, specificity, accuracy, F1-score) or report specific performance values for the modified features. This is expected given that the modifications are primarily related to UI, image enhancement (PixelSmart), and display features, not fundamental diagnostic algorithms requiring extensive performance studies against clinical ground truth.
2. Sample sizes used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Cannot be provided in detail. The document mentions "software validation and verification activities" and "non-clinical performance testing." These are typically done with internal test cases or simulated data rather than large, independent clinical test sets for a device of this nature (an image management and processing system with UI/display modifications). There is no mention of specific sample sizes of patient images or their provenance.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Not applicable/Cannot be provided. Since no formal clinical test set with a "ground truth" adjudicated by multiple experts is described for the modifications in this 510(k) summary, details about expert involvement are not present. The changes (PixelSmart, Internationalization, Analytical thickness grids, Hanging protocols, Automatic image smoothing) relate to image display, processing, and user interface, rather than directly generating a diagnostic output that would require expert-adjudicated ground truth.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Not applicable/Cannot be provided. As no multi-expert ground truth establishment for a test set is described, there's no mention of an adjudication method.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
- No, an MRMC study was not done. The document describes modifications to an image management and processing system. The "PixelSmart" technology is an optional post-processing image enhancement algorithm (moving average to reduce speckle noise and improve contrast). While this could hypothetically improve reader performance, the submission does not present an MRMC study to quantify such an effect. This type of study is more common for AI algorithms directly assisting in interpretation or detection, which is not the primary claim for these modifications.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Not explicitly described as a formal validation study. The "PixelSmart" feature is an algorithm (moving average). Its performance would be evaluated internally for its intended effect (reducing speckle noise, improving contrast). However, the document does not present a standalone performance study with metrics like sensitivity/specificity for a specific clinical task. The assessment is that it "performs as intended" and "meets its specifications" as an image enhancement tool.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- Not applicable/Cannot be provided. The modifications are not addressing a diagnostic claim that would require ground truth from expert consensus, pathology, or outcomes data. The "ground truth" for verifying these changes would relate to software functionality (e.g., does PixelSmart correctly apply a moving average? Does the Spanish UI display correctly?).
8. The sample size for the training set
- Not applicable/Cannot be provided. The "Harmony" system itself is a software platform. While the PixelSmart feature is an algorithm, the document does not describe it as a machine learning model that undergoes a "training" phase with a large dataset. It's described as a "moving average across OCT B-scans," suggesting a rule-based or conventional image processing algorithm rather than a deep learning model. Therefore, there's no mention of a training set size.
9. How the ground truth for the training set was established
- Not applicable/Cannot be provided. As there's no description of a training set, the method for establishing its ground truth is not provided.
Summary of what is described regarding the study/validation:
- Type of Study: Software validation and verification, and non-clinical performance testing.
- Purpose: To demonstrate that the modified Harmony functions equivalently to the predicate Harmony and that the modifications do not introduce new safety or effectiveness concerns.
- Assessment: Risk assessment was conducted, and "newly identified risks or modified existing risks are mitigated, and no unacceptable risk was identified."
- Standards Followed: IEC 62304 (Medical Device Software Life Cycle Processes), NEMA PS 3.1-3.20 (DICOM), ISO IEC 10918-1 (JPEG), ISO 14971 (Risk Management).
In essence, this 510(k) relies on demonstrating the equivalence of a modified, already cleared, non-diagnostic software platform through robust engineering and software validation principles, rather than extensive clinical performance studies common for novel AI diagnostic devices.
(80 days)
NFJ
CALLISTO eye Software is a software device intended for remote control of ophthalmic surgical microscopes of OPMI Lumera family and RESCAN 700, and display images of the anterior and posterior segment of the eye.
CALLISTO eye Software is indicated as graphical guidance aid to insert, align, position, and register an intraocular lens (IOL) including toric IOLs, limbal relaxing incisions, and capsulorhexis during anterior segment surgical procedures.
CALLISTO eye software operates as an adjunct to ZEISS's family of ophthalmic surgical microscopes to process surgery videos and OCT data (B-scan images). Specifically, the subject device has the functionality to be connected to an OCT camera (such as in RESCAN 700 (K180229)), a phaco machine (such as in QUATERA 700 (K212241)), as well as a MIMPS (such as FORUM (K213527)).
CALLISTO eye Software must be installed on a computer with a touchscreen; this Panel PC (ORPC) is offered as an accessory. The current model of the ORPC is the CALLISTO eye Panel PC Model II. ORPC function and configuration have been modified since the last CALLISTO eye 510(k) by upgrading electronics components to accommodate lifecycle management needs.
CALLISTO eye 3.7.2 has the same functionalities as CALLISTO eye 3.6 (K180858). These functionalities include patient data management and transmission via DICOM protocol, interfaces to ZEISS's ophthalmic microscopes with/without OCT camera (RESCAN 700) and assists with overlay function for markerless marking to support IOL alignment.
Additional functionalities unique to CALLISTO eye 3.7.2 include the changes introduced between software versions 3.6 and 3.7.1, as well as additional language package support, bug fixes, cybersecurity enhancements, and interoperability with a phaco system (QUATERA 700).
The subject device, CALLISTO eye 3.7.2, provides connectivity to the following surgical microscopes from ZEISS:
- OPMI LUMERA 700 with Integrated Data Injection System (IDIS)
- OPMI LUMERA T with External Data Injection System (EDIS)
- OPMI LUMERA I with External Data Injection System (EDIS)
- OPMI LUMERA 700 with OCT camera (RESCAN 700)
- ARTEVO 800 with 3D monitor cart (3DIS)
- ARTEVO 800 with OCT camera (RESCAN 700)
The software can acquire photos and videos from all surgical microscopes listed above and can remotely control these microscopes, apart from the OPMI LUMERA T and I.
All OPMI LUMERA family surgical microscopes were covered by the predicate device CALLISTO eye 3.6 (K180858). With the subject device, the range of supported surgical microscopes was extended to the ARTEVO 800 with and without RESCAN 700, as the principal successor of the OPMI LUMERA 700.
The intended use and indications for use of OPMI LUMERA and ARTEVO 800 are identical and the microscopes can be applied for the same surgical procedure.
CALLISTO eye allows the connection and remote control of a surgical microscope with or without OCT Camera and thus operates as an adjunct to the family of ZEISS surgical microscopes. Functionalities such as light intensity, camera parameters, start/stop recording, zoom, focus, diaphragm, start/stop OCT scanning, etc. of the surgical microscope, including the configuration of the foot control panel and handgrips, can be accessed and managed by the user in CALLISTO eye.
CALLISTO eye Software is an assistance and information system to support ophthalmic surgical procedures. It provides an interface to other devices to facilitate the following:
- Display and recording of video data provided by ZEISS surgical microscopes (OPMI)
- Display of assistance functions (graphical guidance templates) and device information (cockpits) to aid the surgeon in the implantation of intraocular lenses, e.g., for the alignment of toric intraocular lenses
- Display and recording of OCT image data provided by ZEISS RESCAN 700
- Display and exchange of data with the ZEISS QUATERA 700 phacoemulsification and vitrectomy system
- Retrieval and storage of patient data from and to the FORUM MIMPS system
- Configuration of ZEISS surgical microscopes, including the assignment of functions to OPMI handgrips and the foot control panel
The provided text is a 510(k) summary for the Carl Zeiss Meditec AG's CALLISTO eye (Software Version 3.7.2). It primarily focuses on demonstrating substantial equivalence to a predicate device (CALLISTO eye, Software Version 3.6) rather than detailing specific acceptance criteria and a study to prove meeting those criteria in the context of diagnostic performance.
The document discusses functional equivalence and safety, but not performance metrics like sensitivity, specificity, or accuracy for a diagnostic task. The device is described as an "assistance system" providing "non-diagnostic video documentation and image capture" and "graphical guidance aid." Therefore, the typical diagnostic performance acceptance criteria and study design (like MRMC studies) are not applicable here.
However, I can extract information related to the device's functional performance and the verification/validation activities performed, which serve as proof that the device meets its functional specifications.
Here's a breakdown based on the provided text, addressing the points where information is available or noting its absence:
1. Table of Acceptance Criteria and Reported Device Performance
Since this is not a diagnostic device with performance metrics like sensitivity/specificity, the "acceptance criteria" are related to its functional specifications and safety. The "reported device performance" refers to the successful verification and validation of these functions.
Acceptance Criteria (derived) | Reported Device Performance (Summary from submission) |
---|---|
Functional Equivalence to Predicate Device: | |
- Identical Indications for Use | Supported by direct comparison tables showing identical IFUs. |
- Similar Technological Characteristics | Supported by detailed comparison tables showing identical or equivalent technical characteristics (e.g., software only, accessory, operating system, communication protocols, assistance functions). Differences (e.g., supported surgical microscopes, video format) were assessed and deemed equivalent. |
Safety and Effectiveness: | |
- Risk Management compliance | Risk analysis performed to identify potential hazards and mitigations; controls by design, protection measures, and user instructions. Adheres to ISO 14971. |
- Compliance with Software Requirements | Device performance complies with specifications and requirements identified through verification and validation. |
- Meets Customer Requirements | Device meets customer's requirements with respect to performance based on validation plan. |
- Conformance to applicable standards (e.g., IEC, ISO, NEMA) | Conforms to ISO 14971:2019, IEC 62366-1:2015, IEC 62304:2015, NEMA PS 3.1-3.20. |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
Not applicable or not specified in the context of a "test set" for diagnostic performance. The document describes software verification and validation, which typically involves internal testing against specifications and requirements, often using simulated data, test cases, and potentially real (but de-identified) operational data. The document does not specify a "test set" in the sense of clinical study data with provenance.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
Not applicable. As a non-diagnostic assistance system, there is no "ground truth" to establish for diagnostic outcomes in the context of the device's stated functions. The validation focuses on whether the software performs its intended functions correctly (e.g., displays images, provides graphical guidance correctly).
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
Not applicable, as there's no diagnostic ground truth being established via expert adjudication.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
No MRMC comparative effectiveness study was done or mentioned. The device's indications for use emphasize "graphical guidance aid" and "assistance system," not a primary diagnostic tool. The submission states, "Animal and Clinical testing was not conducted."
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
This concept is not directly applicable. The CALLISTO eye software is designed as an "assistance, information system to support ophthalmic surgical procedures" with "graphical guidance aid." Its function is inherently human-in-the-loop, providing information to the surgeon. Standalone performance for a predictive or diagnostic algorithm is not its purpose. The document details "software verification activities" and "validation," which confirm the software's functional correctness.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
For the functional validation of this device, the "ground truth" would be the expected correct behavior of the software according to its design specifications and user requirements. This is established through:
- Design specifications: The software behaving as programmed.
- User requirements: The software meeting the needs of trained clinical personnel for guidance and control.
There is no mention of external clinical ground truth like pathology or outcomes data in this submission for assessing the device's inherent performance.
8. The sample size for the training set
Not applicable. This device is described as software that provides graphical guidance and remote control, not a machine learning or AI algorithm that is "trained" on a dataset for diagnostic or predictive tasks in the conventional sense described by these questions.
9. How the ground truth for the training set was established
Not applicable, as there is no "training set" for an AI model mentioned in the submission. The "ground truth" for the software's functional correctness is simply its design specifications and user requirements, as verified and validated through software testing.
(18 days)
NFJ
The Altris IMS is a standalone, browser-based software application intended for use by healthcare professionals to import, store, manage, display, and measure data from ophthalmic diagnostic instruments, including patient data, diagnostic data, clinical images and information, reports, and measurement of DICOM-compliant images. The device is also indicated for manual labeling and annotation of retinal OCT scans.
Altris IMS is a cloud-based software program to assist healthcare professionals, specifically Eye Care Practitioners (ECPs) with OCT interpretation. Altris IMS utilizes commonly available internet browsers to locally manage and review data which is uploaded to an Amazon AWS cloud-based server. Its intended use is to import, store, manage, display, analyze and measure data from ophthalmic diagnostic instruments, including patient data, diagnostic data, clinical images and information, reports, and measurement of DICOM-compliant images. The platform allows the user to manually annotate areas of interest in the images, calculate the layer thickness and volume from annotated images and present the progression of the measurements. Altris IMS also provides a tool for linear distance measuring of ocular anatomy and ocular lesion distances. The platform supports DICOM format files. Altris IMS is focused on the center sector of the retina. Altris IMS does not perform optic nerve analysis. Altris IMS has tools for manual area of interest image segmentation and labeling/annotation for healthcare professionals to use and review for their own diagnosis. The Subject device neither performs any diagnosis, nor provides treatment recommendations. It is solely intended to be used as a support tool by trained healthcare professionals. The software does not use artificial intelligence or machine learning algorithms. The Subject device is a client-server model. It utilizes a local user/client internet browser-based (frontend) interface used to upload, manage, annotate, and review imaging data. Data is stored and processed on a remote web-based server (backend).
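Layer thickness, volume, and linear distances follow directly from the annotated boundaries and the image's physical pixel spacing. The sketch below is a hypothetical illustration of those calculations; the spacing values, function names, and array layout are assumptions for demonstration, not Altris IMS internals.

```python
import numpy as np

# Assumed physical spacings (placeholders): axial depth per pixel and
# transverse spacing between A-scans / between B-scans, in millimeters.
AXIAL_MM = 0.0035
ASCAN_MM = 0.012
BSCAN_MM = 0.120

def layer_thickness_um(upper_boundary_px, lower_boundary_px):
    """Per-A-scan thickness (micrometers) between two annotated boundaries,
    each given as pixel row indices along one B-scan."""
    return (np.asarray(lower_boundary_px) - np.asarray(upper_boundary_px)) * AXIAL_MM * 1000.0

def layer_volume_mm3(thickness_um):
    """Volume from an (n_bscans, n_ascans) thickness array in micrometers."""
    thickness_mm = np.asarray(thickness_um) / 1000.0
    return float(thickness_mm.sum() * ASCAN_MM * BSCAN_MM)

def linear_distance_mm(p1, p2, spacing=(ASCAN_MM, AXIAL_MM)):
    """Distance between two annotated points (col, row) within one B-scan."""
    dx = (p2[0] - p1[0]) * spacing[0]
    dy = (p2[1] - p1[1]) * spacing[1]
    return float(np.hypot(dx, dy))

# Example: a flat 200-um-thick layer annotated over a 25 x 512 grid of A-scans.
thickness = np.full((25, 512), 200.0)
print(layer_volume_mm3(thickness), "mm^3")
print(linear_distance_mm((100, 40), (180, 60)), "mm")
```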
The provided text does not contain detailed information about the acceptance criteria or a specific study that proves the device meets those criteria for the Altris IMS. The document is a 510(k) summary for a medical device (Altris IMS) seeking FDA clearance, focusing on demonstrating substantial equivalence to a predicate device rather than outright performance claims.
However, based on the information provided, we can infer some aspects and highlight what is missing.
The Altris IMS is a software application for managing and displaying ophthalmic diagnostic data, including manual labeling and annotation of retinal OCT scans. It explicitly states it does not use AI or ML algorithms and does not perform diagnosis or provide treatment recommendations.
Given this, the performance data section is likely to focus on the software's functionality, accuracy of manual measurements, and data handling, rather than diagnostic accuracy or clinical effectiveness in a medical sense.
Here's an attempt to answer your questions based on the provided text, and where information is missing, it will be noted:
1. A table of acceptance criteria and the reported device performance
The document does not provide a formal table of acceptance criteria or specific quantitative performance metrics like sensitivity, specificity, accuracy, or measurement error rates. The "Performance Data" section states: "Due to the difficulty in evaluating this type of software, no direct performance bench testing of software to an established standard was performed."
Instead, performance was demonstrated through:
- Software Verification
- Software Validation
- Comparative Software measurement study with the K170164 Reference device.
Without the actual study report, specific performance numbers are unavailable. The goal was to prove the device "performs as intended similarly to the Predicate device."
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
The document mentions "a Comparative Software measurement study with the K170164 Reference device." However, it does not provide details on:
- The sample size of images/cases used in this comparative study.
- The data provenance (country of origin, whether it was retrospective or prospective data).
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)
Given the device allows for "manual labeling and annotation of retinal OCT scans" by "healthcare professionals," and it does not use AI/ML for diagnosis, the "ground truth" for the comparative measurement study would likely involve comparing the device's manual measurement capabilities against the reference device or perhaps against expert manual measurements performed independently.
The document does not specify the number of experts or their qualifications involved in establishing any form of "ground truth" or reference measurements for the comparative study.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
The document does not describe any adjudication method used for establishing ground truth or conducting the comparative measurement study.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
Given that the device "does not use artificial intelligence or machine learning algorithms" and "neither performs any diagnosis, nor provides treatment recommendations," an MRMC study comparing human readers with AI vs. without AI assistance would be irrelevant and was not performed. The study mentioned is a "Comparative Software measurement study" which likely focuses on the accuracy or consistency of the manual measurement tools provided by the software.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
Since the device "does not use artificial intelligence or machine learning algorithms," and its primary functions are data management, display, and manual annotation/measurement, the concept of an "algorithm only" standalone performance is not applicable in the typical sense of AI diagnostic devices. The software supports human-in-the-loop actions.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
For the "Comparative Software measurement study," the "ground truth" would most likely be a comparison of measurements obtained using the Altris IMS's manual tools against those obtained using the K170164 Reference device, which is an imaging system with storage/management software and supports image annotation and measurement. It could also involve comparing against expert manual measurements using established clinical standards. The document does not explicitly state the type of ground truth used beyond "measurement validation."
8. The sample size for the training set
Since the device "does not use artificial intelligence or machine learning algorithms," there is no concept of a training set in the machine learning sense for this device.
9. How the ground truth for the training set was established
As there is no training set (due to the absence of AI/ML), there is no ground truth established for a training set.
In summary, the provided FDA 510(k) summary focuses on demonstrating substantial equivalence based on indications for use and technological principles, supported by general software verification and validation, and a comparative measurement study. It explicitly states the device does not employ AI/ML, which changes the nature of the performance data required compared to an AI-powered diagnostic device. The document lacks the specific quantitative performance metrics, sample sizes, and expert details typically found in studies validating AI/ML-driven medical devices.
(214 days)
NFJ
Ophthalmology
The EXCELSIOR Software is intended for use in importing, processing, measurement, and analysis of ophthalmic clinical images as well as in management of clinical data, through a computerized network for use in analysis of images and data obtained in clinical trials.
Radiology
Excelsior is a software solution intended to be used for viewing, manipulation, annotation, analysis, and comparison of medical images from multiple imaging modalities and/or multiple time points. The application supports images and functional data, such as PET, as well as anatomical datasets, such as CT or MR. Excelsior is a software-only medical device to be deployed through a cloud-based computerized network via web applications and customized user interfaces for use in the analysis of images and data obtained in clinical trials. Excelsior enables visualization of information that would otherwise have to be visually compared disjointedly. Excelsior provides workflow automation tools to help the user assess and document the extent of a disease and/or the response to therapy in accordance with user-selected standards and assess changes in imaging findings over multiple time points. Excelsior supports the interpretation, evaluation, and follow-up documentation of findings for radiologic oncology imaging and data obtained in clinical trials.
The product is intended to be used as a workflow automation tool by trained medical professionals. It is intended to provide image and related information that is interpreted by a trained professional but does not directly generate any diagnosis or potential findings.
Note: The medical professional retains the ultimate responsibility for making the pertinent observations based on their standard practices and established practices related to clinical trial outcomes. Excelsior is a complement to these standard procedures. Excelsior is not to be used in mammography.
The EXCELSIOR software is a cloud-based software that provides a central reading platform integrating remote data collection, quantitative analysis and measurement, storage, and management of ophthalmic and radiological data from DICOM images for clinical trials. The software does not use artificial intelligence or machine learning algorithms.
The provided text is a 510(k) summary for the EXCELSIOR software. It describes the device's intended use, classification, predicate devices, and the rationale for substantial equivalence. However, it does not include the specific details required to answer your request regarding acceptance criteria and the study that proves the device meets those criteria.
Specifically, the document states:
- "Software validation and verification testing was performed which showed that the software performs as intended supporting substantial equivalence."
- "The methodology used to validate and verify that the software performs as intended was used to confirm performance of the additional tools and features."
These statements confirm that testing was done, but they do not provide the raw data, acceptance criteria, study design details, or expert qualifications that you've asked for.
Therefore, I cannot extract the following information from the provided text:
- A table of acceptance criteria and the reported device performance: Not provided.
- Sample size used for the test set and the data provenance: Not provided.
- Number of experts used to establish the ground truth for the test set and the qualifications: Not provided.
- Adjudication method for the test set: Not provided.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and its effect size: Not provided. The text implies the device is a workflow tool, not an AI for diagnosis, making MRMC less likely to be the primary evaluation method for diagnostic improvement.
- If a standalone (i.e. algorithm only without human-in-the-loop performance) was done: Not explicitly detailed. The text states "The software does not use artificial intelligence or machine learning algorithms," implying it's a tool for human use, not a standalone diagnostic algorithm.
- The type of ground truth used: Not provided.
- The sample size for the training set: Not applicable, as the document explicitly states: "The software does not use artificial intelligence or machine learning algorithms." Therefore, there is no training set as understood in the context of AI/ML.
- How the ground truth for the training set was established: Not applicable for the reason above.
In summary, the document states that validation and verification testing was performed and supports substantial equivalence, but it does not disclose the specifics of these tests, including acceptance criteria, performance metrics, or study design details.
(284 days)
NFJ
FORUM is a software system intended for use in management, processing of patient, diagnostic, video and image data and measurement from computerized diagnostic instruments or documentation systems through networks. It is intended to work with other FORUM applications (including but not limited to Retina Workplace, Glaucoma Workplace).
FORUM is intended for use in review of patient, diagnostic and image data and measurement by trained healthcare professionals.
FORUM and its accessories are a computer software system designed for management, processing, and display of patient diagnostic, video and image data and measurement from computerized diagnostic instruments or documentation systems through networks. It is intended to work with other FORUM applications.
FORUM receives data via DICOM protocol from a variety of ophthalmic diagnostic instruments (such as CIRRUS, CLARUS, and 3rd Party systems), allows central data storage and remote access to patient data. This version of FORUM allows the user to access their data in the cloud via ZEISS developed non-medical device accessories. FORUM is an ophthalmic data management solution. FORUM provides basic viewing functionalities and is able to connect all DICOM compliant instruments.
This version of FORUM provides additional device functions such as review and annotation functionality of fundus images/movies, display of OCT image stacks, bidirectional data exchange between FORUM Workplaces, customization of document viewing abilities, user interface improvements, and user management updates.
This version of FORUM has additional non-medical device functions that are performed by non-medical device accessories, such as documentation storage, export of data in various formats, export to the cloud, improved IT integration capability into the existing IT network, image sorting, EMR log in improvements, numerous backend improvements with the purpose of streamlining clinical workflow.
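Receiving data from DICOM-compliant diagnostic instruments over a network is done through standard DICOM storage services. The snippet below is a generic, hypothetical sketch of a minimal C-STORE receiver using the pynetdicom library; the AE title, port, and on-disk naming are placeholders and do not describe FORUM's actual implementation.

```python
from pynetdicom import AE, evt, AllStoragePresentationContexts

def handle_store(event):
    """Persist each received DICOM object to disk, named by its SOP Instance UID."""
    ds = event.dataset
    ds.file_meta = event.file_meta
    ds.save_as(f"{ds.SOPInstanceUID}.dcm")
    return 0x0000  # DICOM success status

ae = AE(ae_title="ARCHIVE_SCP")  # placeholder application entity title
ae.supported_contexts = AllStoragePresentationContexts
ae.start_server(("0.0.0.0", 11112), block=True,
                evt_handlers=[(evt.EVT_C_STORE, handle_store)])
```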
Here's an analysis of the provided text regarding the acceptance criteria and study for the device:
Important Note: The provided text is a 510(k) summary, which focuses on demonstrating substantial equivalence to a predicate device. It usually doesn't contain a detailed breakdown of a separate clinical study with acceptance criteria, sample sizes, and expert adjudication in the same way an AI/ML device would. Instead, it relies on extensive software verification and validation to demonstrate safety and effectiveness.
Based on the provided text, a direct answer to all your questions in the typical format for a clinical study is not explicitly available for this specific type of device (a medical image management and processing system). However, I can extract the relevant information and infer what's implied.
Acceptance Criteria and Device Performance Study for FORUM (K213527)
This submission for FORUM (K213527) is a 510(k) Pre-market Notification for a medical image management and processing system. The acceptance criteria and "study" are primarily focused on demonstrating substantial equivalence to a predicate device (FORUM Archive and Viewer, K122938) through software verification and validation, rather than a traditional multi-reader multi-case clinical study for a diagnostic AI algorithm.
1. Table of Acceptance Criteria and Reported Device Performance
Since this is a software system intended for managing and processing existing image data, not generating new diagnostic conclusions, the "acceptance criteria" are related to its functional performance, safety, and equivalence to its predicate.
Acceptance Criteria Category/Area | Specific Criteria (Implied/Demonstrated) | Reported Device Performance (Demonstrated by Verification & Validation) |
---|---|---|
Indications for Use | Equivalence to predicate's IFU; no new risks associated with the updated IFU. | The IFU is "equivalent" to the predicate, with a minor textual change ("removal of the word 'storage' and display... due to an updated definition of MIMS") not constituting a substantial change. |
Functionality (Medical Device Features) | Performance of core functions for patient data management, processing, and review as intended. | All new and/or modified medical device functions (e.g., fundus image processing, image annotations, bidirectional data exchange) were demonstrated through risk analysis and testing to not impact the safety, equivalence, risk profile, and technical specifications as compared to the predicate device. |
Safety and Risk Profile | Risks associated with new/modified functions are mitigated and do not introduce new substantial concerns. | Appropriate risk analysis and testing documentation were provided to demonstrate that modifications do not impact substantial equivalence. The device was considered a "Moderate" level of concern, and verification/validation confirmed no indirect minor injury to patient or operator. |
Technical Specifications | Updated platform/OS and other backend improvements maintain or enhance performance without adverse impact. | Backend improvements (e.g., updated Windows Server/Client versions, addition of Apple OS X BigSur support) were deemed equivalent as they do not impact indications for use, device risk profile, or technical specifications, as demonstrated by risk documentation and testing. |
Non-Medical Device Functions | New non-medical accessories and functions (e.g., cloud connection, documentation storage) do not impact the core medical device functionality or safety. | The addition of non-medical accessories (e.g., for cloud connectivity) and non-medical functions does not impact the functionality or safety of the medical device, as demonstrated by appropriate risk assessments and testing information. |
Software Verification & Validation | All requirements for proposed changes must be met, and testing must be performed according to FDA guidance. | "FORUM (version 4.3) has successfully undergone extensive software verification and validation testing to ensure that all requirements for proposed changes have been met." Documentation provided as recommended by FDA's "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices." All testing followed internally approved procedures. |
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: Not explicitly stated as a number of cases or patients. The "test set" here refers to the software verification and validation activities. These typically involve diverse test cases covering various functionalities, edge cases, and potential failure points, rather than a "patient test set" in a clinical study.
- Data Provenance: Not specified. For software verification and validation, the "data" would be test data (simulated or real but de-identified) used to exercise the software's functions.
3. Number of Experts Used to Establish Ground Truth and Qualifications
- Number of Experts: Not applicable or specified. For this type of software, "ground truth" relates to the expected behavior of the software according to its design specifications. It doesn't involve medical experts adjudicating diagnoses in a test set.
- Qualifications of Experts: N/A for establishing "ground truth" in this context. Experts would be software engineers, quality assurance personnel, and potentially clinical subject matter experts for reviewing the functional requirements and outputs.
4. Adjudication Method for the Test Set
- Adjudication Method: Not applicable. The "ground truth" for software verification is the expected output according to the design specification and requirements. Verification and validation are performed against these predetermined requirements.
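To make specification-based verification concrete, here is a purely hypothetical example: a single automated test that checks one imagined functional requirement (a pixel-to-millimetre line measurement) against its documented expected value. The function, tolerance, and values are illustrative and are not drawn from the submission.

```python
import math

def measure_distance_mm(p1, p2, mm_per_pixel):
    """Convert a user-drawn line between two pixel coordinates to millimetres."""
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1]) * mm_per_pixel

def test_distance_measurement_meets_spec():
    # Hypothetical requirement: a 300-pixel horizontal line at 0.01 mm/pixel
    # shall be reported as 3.00 mm, within a 0.01 mm tolerance.
    result = measure_distance_mm((100, 50), (400, 50), mm_per_pixel=0.01)
    assert abs(result - 3.00) <= 0.01
```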
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Was it done? No. This type of study is typically performed for AI-powered diagnostic devices where human readers' performance with and without AI assistance is compared. FORUM is a management and processing system, not an AI diagnostic algorithm that provides assistance to human readers in the diagnostic task itself.
- Effect Size of Human Readers' Improvement: Not applicable.
6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study
- Was it done? No, not in the sense of a standalone diagnostic algorithm's performance. The "standalone" performance for this device would refer to its ability to perform its specified functions (managing, processing, displaying data) correctly and reliably, which was assessed through software verification and validation. It's not a diagnostic algorithm.
7. Type of Ground Truth Used
- Type of Ground Truth: Software functional specifications and requirements documents. The "truth" is whether the software behaves as designed and meets its technical and safety requirements.
8. Sample Size for the Training Set
- Training Set Sample Size: Not applicable. FORUM is a medical image management and processing system, not a machine learning model that requires a "training set."
9. How the Ground Truth for the Training Set Was Established
- How Ground Truth Established: Not applicable, as there is no training set for this type of device.
(329 days)
NFJ
The RetinAI Discovery is a standalone, browser-based software application intended for use by healthcare professionals to import, store, manage, display, analyze and measure data from ophthalmic diagnostic instruments, including: patient data, clinical images and information, reports and measurements of DICOM-compliant images. The device is also indicated for manual labeling and annotation of retinal OCT scans.
The RetinAI Discovery consists of a platform which displays and analyzes images of the eye (e.g. OCT scans and fundus images) along with associated measurements (e.g. layer thickness) generated by the user through Discovery. The platform allows the user to manually segment layers and volumes in the images; it calculates layer thickness and volume from the annotated images and presents the progression of those measurements in graphs. Discovery provides a tool for measuring ocular anatomy and ocular lesion distances. The multiple views in Discovery and the measurements allow the user to assess the eye anatomy and, ultimately, assist the user in making decisions on diagnosis and monitoring of disease progression.
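The document does not show how these thickness values are derived. As a minimal sketch of the kind of computation involved, assuming two manually annotated layer boundaries expressed as per-A-scan pixel rows and a known axial pixel spacing (all names and values below are illustrative, not RetinAI's implementation):

```python
import numpy as np

def layer_thickness_um(upper_boundary_px, lower_boundary_px, axial_um_per_pixel):
    """Per-A-scan thickness between two manually annotated boundaries of a B-scan."""
    upper = np.asarray(upper_boundary_px, dtype=float)
    lower = np.asarray(lower_boundary_px, dtype=float)
    return (lower - upper) * axial_um_per_pixel

# Illustrative values: a 512-A-scan B-scan with ~3.9 um axial pixel spacing.
upper = np.full(512, 120.0)   # rows of the upper boundary drawn by the user
lower = np.full(512, 180.0)   # rows of the lower boundary drawn by the user
thickness = layer_thickness_um(upper, lower, axial_um_per_pixel=3.9)
print(f"mean thickness: {thickness.mean():.1f} um")   # 234.0 um
```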
Here's a breakdown of the acceptance criteria and study information for RetinAI Discovery, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state quantitative acceptance criteria in terms of specific metrics (e.g., accuracy percentages, Dice scores). Instead, the performance is described through comparison testing demonstrating equivalence with predicate/reference devices for manual segmentation and image measurement of retinal OCT scans.
Acceptance Criteria (Inferred) | Reported Device Performance |
---|---|
Equivalence in manual segmentation of retinal layers | Comparison testing showed "the computed values from the Discovery platform are substantially equivalent to the computed values from the Reference Devices (Heidelberg Engineering Spectralis HRA+OCT and Topcon DRI OCT Triton), for both Optimized and Device display modes." This implies the results of manual segmentation in Discovery do not significantly differ from those obtained from the established reference devices. |
Equivalence in image measurement of retinal OCT scans | Comparison testing showed "the computed values from the Discovery platform are substantially equivalent to the computed values from the Reference Devices (Heidelberg Engineering Spectralis HRA+OCT and Topcon DRI OCT Triton), for both Optimized and Device display modes." This indicates that measurements performed within Discovery are consistent with measurements from the reference devices. |
Functioned as intended | "In all instances, Discovery functioned as intended and expected performance was reached." This suggests the software operated without critical errors or deviations from its design specifications during testing. |
IEC 62304 and IEC 82304 compliance (Software Development) | The device was "designed, developed and tested according to the software development lifecycle process implemented at RetinAI Medical AG, based on the IEC 62304 and IEC 82304 standards, and the FDA Guidance for the 'General Principles of Software Validation'." This indicates adherence to accepted software development and validation practices for medical devices, which are a form of acceptance criteria for the development process. Testing included "verification and validation activities (static code analysis, unit and integration testing, system and functional testing)." |
No new questions of safety or effectiveness from technological differences | "The minor technological differences between the RetinAI Discovery and its predicate device do not raise different questions of safety or effectiveness." This is a key regulatory acceptance criterion for substantial equivalence. |
2. Sample Size Used for the Test Set and Data Provenance
The document does not explicitly state the numerical sample size for the test set (number of OCT scans or patients).
- Test Set: Implied to be the same images used for comparison testing with the reference devices.
- Data Provenance: Not specified. The document states "comparison testing was performed... with the same images segmented in cleared devices," but does not explicitly mention country of origin or whether the data was retrospective or prospective.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of those Experts
The document does not explicitly state the number of experts or their qualifications for establishing ground truth. The comparison testing relies on the "computed values from the Reference Devices" as the standard, implying that the ground truth is derived from the established and cleared functionalities of those devices when experts perform manual segmentation or measurements within them.
4. Adjudication Method for the Test Set
The document does not describe a formal adjudication method for a test set in the traditional sense of multiple human readers independently assessing and then reaching a consensus. Instead, the "ground truth" for the comparison study appears to be the output of the cleared reference devices when manual segmentation/measurements are performed by users (presumably clinicians or operators) within those systems.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, If So, What Was the Effect Size of How Much Human Readers Improve with AI vs without AI Assistance
No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done. The study described is a standalone performance validation comparing RetinAI Discovery's manual segmentation and measurement capabilities with those of existing cleared devices. There is no mention of human readers improving with or without AI assistance, as the device's main specified functions (based on the provided text) are displaying, analyzing, and manually labeling/annotating images, not AI-powered automated analysis or decision support for human readers.
6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) Was Done
Yes, a standalone performance validation was done for the manual segmentation and image measurement functionalities of the RetinAI Discovery. The device itself is described as a "standalone, browser-based software application." The comparison testing verified the performance of the Discovery platform's manual segmentation and measurement tools against the established reference devices; it evaluated the accuracy of the tools themselves rather than making any human-in-the-loop performance claims. The document focuses on the platform's ability to facilitate manual activities.
7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.)
The ground truth for the comparison testing was effectively the "computed values from the Reference Devices" (Heidelberg Engineering Spectralis HRA+OCT and Topcon DRI OCT Triton). This implies that the accepted and pre-cleared outputs of these established ophthalmic imaging and analysis devices, whether derived from their automatic or manual segmentation/measurement tools, served as the benchmark for evaluating RetinAI Discovery.
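The statistical approach behind "substantially equivalent computed values" is not disclosed in the document. One common way to examine agreement between paired measurements from two systems is a Bland-Altman style analysis; the sketch below, with entirely made-up numbers, shows the basic calculation only as an illustration of the concept.

```python
import numpy as np

# Hypothetical paired thickness measurements (um) of the same scans,
# one value per scan from each system.
discovery = np.array([231.0, 245.5, 289.0, 198.2, 267.4])
reference = np.array([229.5, 247.0, 290.1, 199.0, 265.8])

diff = discovery - reference
bias = diff.mean()                  # mean difference (systematic offset)
half_loa = 1.96 * diff.std(ddof=1)  # half-width of the 95% limits of agreement

print(f"bias: {bias:+.2f} um, limits of agreement: "
      f"{bias - half_loa:.2f} to {bias + half_loa:.2f} um")
```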
8. The Sample Size for the Training Set
The document does not provide any information about a training set size. This is consistent with the nature of the device as described, which is specified for manual labeling and annotation and general image management/display, not for an AI model that requires a large training dataset.
9. How the Ground Truth for the Training Set Was Established
As no training set is mentioned for an AI model, the method for establishing ground truth for a training set is not applicable here. The described studies focus on the validation of the manual tools and general software functions.
(215 days)
NFJ
Harmony Referral System (Harmony RS) is a comprehensive software platform intended for use in importing, processing, viewing, measurement and storage of clinical images and videos as well as in management and communication of patient data, diagnostic and clinical information and reports from ophthalmic diagnostic instruments through either direct connection with the instruments or through computerized networks. The system neither performs any interpretations nor provides treatment recommendations.
Harmony Referral System is an internet-browser-based software platform that allows users to access examination data of a patient from different sources. Harmony Referral System may be used together with a number of computerized digital imaging devices and third-party software. In addition, Harmony Referral System software collects and manages patient demographics, image data, and clinical reports from a range of approved medical devices. Harmony Referral System enables real-time review of diagnostic patient information at a PC workstation. The software uses SSL encryption in network communication and a secure network infrastructure with firewalls, plus VPN and IP-based access restrictions, to ensure a secure networking environment. The Harmony Referral System does not perform automated image analysis but provides measurements based on pixels of an image marked manually by the user on the screen, including cup-disk ratio and line and area measurements.
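As an illustration of the kind of pixel-based, user-driven measurements described above, the sketch below computes a vertical cup-disk ratio from four user-marked rows and the area of a user-drawn polygon via the shoelace formula. The coordinates and the millimetre-per-pixel scale are invented for the example and are not Harmony RS values.

```python
import numpy as np

def vertical_cup_disk_ratio(cup_top, cup_bottom, disc_top, disc_bottom):
    """Vertical cup-disk ratio from user-marked pixel rows on a fundus image."""
    return abs(cup_bottom - cup_top) / abs(disc_bottom - disc_top)

def polygon_area_mm2(points_px, mm_per_pixel):
    """Area of a user-drawn polygon (shoelace formula), converted to mm^2."""
    pts = np.asarray(points_px, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    area_px = 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
    return area_px * mm_per_pixel ** 2

print(vertical_cup_disk_ratio(210, 310, 160, 400))                        # ~0.42
print(polygon_area_mm2([(0, 0), (100, 0), (100, 100), (0, 100)], 0.01))   # 1.0 mm^2
```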
The provided document, a 510(k) summary for the Topcon Harmony Referral System (Harmony RS), states that no performance data was required or provided for this device. Therefore, it is not possible to describe acceptance criteria or a study proving the device meets those criteria from this document.
The document explicitly states:
Performance Data
"No performance data was required or provided. Software validation and verification demonstrate that Harmony RS performs as intended and meets its' specifications."
And under the "Substantial Equivalence" section:
"The different technological characteristics of the devices do not raise new questions of safety and effectiveness. The differences in hardware requirements and system access are all system features that can be evaluated during software validation and verification and were primarily revised to allow the system to operate with newer hardware, browsers and operating systems."
This indicates that the FDA's clearance was based on demonstrating substantial equivalence to a predicate device (Topcon Harmony, K182376) and on software validation and verification, rather than a clinical performance study with defined acceptance criteria.
While the document details the device's intended use and technical specifications, it does not contain the information requested in the prompt regarding acceptance criteria, study design, sample sizes, expert ground truth, or adjudication methods for performance evaluation.