Search Results

Found 2 results

510(k) Data Aggregation

    K Number
    K232088
    Device Name
    Altris IMS
    Manufacturer
    Date Cleared
    2023-07-31

    (18 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices: K170164

    Intended Use

    The Altris IMS is a standalone, browser-based software application intended for use by healthcare professionals to import, store, manage, display, and measure data from ophthalmic diagnostic instruments, including patient data, diagnostic data, clinical images and information, reports, and measurement of DICOM-compliant images. The device is also indicated for manual labeling and annotation of retinal OCT scans.

    Device Description

    Altris IMS is a cloud-based software program to assist healthcare professionals, specifically Eye Care Practitioners (ECPs), with OCT interpretation. Altris IMS utilizes commonly available internet browsers to locally manage and review data which is uploaded to an Amazon AWS cloud-based server. Its intended use is to import, store, manage, display, analyze and measure data from ophthalmic diagnostic instruments, including patient data, diagnostic data, clinical images and information, reports, and measurement of DICOM-compliant images.

    The platform allows the user to manually annotate areas of interest in the images, calculate the layer thickness and volume from annotated images and present the progression of the measurements. Altris IMS also provides a tool for linear distance measuring of ocular anatomy and ocular lesion distances. The platform supports DICOM format files. Altris IMS is focused on the center sector of the retina. Altris IMS does not perform optic nerve analysis. Altris IMS has tools for manual area of interest image segmentation and labeling/annotation for healthcare professionals to use and review for their own diagnosis.

    The Subject device neither performs any diagnosis, nor provides treatment recommendations. It is solely intended to be used as a support tool by trained healthcare professionals. The software does not use artificial intelligence or machine learning algorithms. The Subject device is a client-server model. It utilizes a local user/client internet browser-based (frontend) interface used to upload, manage, annotate, and review imaging data. Data is stored and processed on a remote web-based server (backend).
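
    The summary does not describe how the thickness and volume calculations are implemented. As a minimal illustrative sketch of the kind of computation involved, assuming manual annotations are exported as two boundary curves per OCT B-scan (per-A-scan pixel row indices) and that the axial, lateral, and inter-B-scan scales are known, the arithmetic could look like the following; the scale values and function names are hypothetical and are not taken from Altris IMS.

```python
import numpy as np

# Illustrative only: assumes an annotated OCT B-scan yields two manually drawn
# boundary curves (e.g., ILM and RPE) as per-column (per-A-scan) pixel row indices.
# Scale factors are hypothetical placeholders, not Altris IMS values.
AXIAL_UM_PER_PX = 3.5      # assumed axial resolution, micrometres per pixel
LATERAL_UM_PER_PX = 11.7   # assumed lateral spacing between A-scans
BSCAN_SPACING_UM = 120.0   # assumed distance between consecutive B-scans

def layer_thickness_um(upper_px: np.ndarray, lower_px: np.ndarray) -> np.ndarray:
    """Per-A-scan thickness of the layer between two annotated boundaries."""
    return (lower_px - upper_px) * AXIAL_UM_PER_PX

def layer_volume_mm3(thickness_maps_um: list) -> float:
    """Approximate layer volume: sum thickness over all A-scans, weighting each
    sample by the lateral and inter-B-scan spacing, then convert µm³ to mm³."""
    sample_area_um2 = LATERAL_UM_PER_PX * BSCAN_SPACING_UM
    total_um3 = sum(float(np.sum(t)) * sample_area_um2 for t in thickness_maps_um)
    return total_um3 / 1e9

# Example: two flat boundaries 40 pixels apart across 512 A-scans in 64 B-scans.
upper = np.full(512, 100.0)
lower = np.full(512, 140.0)
thickness = layer_thickness_um(upper, lower)            # 140 µm everywhere
volume = layer_volume_mm3([thickness] * 64)
print(f"mean thickness {thickness.mean():.1f} µm, volume {volume:.2f} mm³")
```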

    AI/ML Overview

    The provided text does not contain detailed acceptance criteria for the Altris IMS or a specific study demonstrating that the device meets such criteria. The document is a 510(k) summary for a medical device (Altris IMS) seeking FDA clearance, and it focuses on demonstrating substantial equivalence to a predicate device rather than making direct performance claims.

    However, based on the information provided, we can infer some aspects and highlight what is missing.

    The Altris IMS is a software application for managing and displaying ophthalmic diagnostic data, including manual labeling and annotation of retinal OCT scans. It explicitly states it does not use AI or ML algorithms and does not perform diagnosis or provide treatment recommendations.

    Given this, the performance data section is likely to focus on the software's functionality, accuracy of manual measurements, and data handling, rather than diagnostic accuracy or clinical effectiveness in a medical sense.

    The questions are addressed below based on the provided text; where information is missing, this is noted:


    1. A table of acceptance criteria and the reported device performance

    The document does not provide a formal table of acceptance criteria or specific quantitative performance metrics like sensitivity, specificity, accuracy, or measurement error rates. The "Performance Data" section states: "Due to the difficulty in evaluating this type of software, no direct performance bench testing of software to an established standard was performed."

    Instead, performance was demonstrated through:

    • Software Verification
    • Software Validation
    • Comparative Software measurement study with the K170164 Reference device.

    Without the actual study report, specific performance numbers are unavailable. The goal was to prove the device "performs as intended similarly to the Predicate device."
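
    The summary likewise does not state how the comparative measurements were analyzed. One common way to quantify agreement between paired measurements from a subject device and a reference device is a Bland-Altman style bias and limits-of-agreement summary; the sketch below is purely illustrative, uses invented values, and is not drawn from the actual study.

```python
import numpy as np

# Hypothetical paired measurements (e.g., the same retinal distance measured with
# the subject device and the reference device); values are invented for illustration.
subject_um = np.array([212.0, 305.5, 188.2, 401.0, 276.4])
reference_um = np.array([210.5, 307.0, 190.0, 399.5, 275.0])

diff = subject_um - reference_um
bias = diff.mean()                  # mean difference between devices
loa = 1.96 * diff.std(ddof=1)       # half-width of the 95% limits of agreement

print(f"bias {bias:+.2f} µm, limits of agreement "
      f"{bias - loa:.2f} to {bias + loa:.2f} µm")
```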

    2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

    The document mentions "a Comparative Software measurement study with the K170164 Reference device." However, it does not provide details on:

    • The sample size of images/cases used in this comparative study.
    • The data provenance (country of origin, whether it was retrospective or prospective data).

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)

    Given the device allows for "manual labeling and annotation of retinal OCT scans" by "healthcare professionals," and it does not use AI/ML for diagnosis, the "ground truth" for the comparative measurement study would likely involve comparing the device's manual measurement capabilities against the reference device or perhaps against expert manual measurements performed independently.

    The document does not specify the number of experts or their qualifications involved in establishing any form of "ground truth" or reference measurements for the comparative study.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set

    The document does not describe any adjudication method used for establishing ground truth or conducting the comparative measurement study.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

    Given that the device "does not use artificial intelligence or machine learning algorithms" and "neither performs any diagnosis, nor provides treatment recommendations," an MRMC study comparing human readers with AI vs. without AI assistance would be irrelevant and was not performed. The study mentioned is a "Comparative Software measurement study" which likely focuses on the accuracy or consistency of the manual measurement tools provided by the software.

    6. If a standalone study (i.e. algorithm only, without human-in-the-loop performance) was done

    Since the device "does not use artificial intelligence or machine learning algorithms," and its primary functions are data management, display, and manual annotation/measurement, the concept of an "algorithm only" standalone performance is not applicable in the typical sense of AI diagnostic devices. The software supports human-in-the-loop actions.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    For the "Comparative Software measurement study," the "ground truth" would most likely be a comparison of measurements obtained using the Altris IMS's manual tools against those obtained using the K170164 Reference device, which is an imaging system with storage/management software and supports image annotation and measurement. It could also involve comparing against expert manual measurements using established clinical standards. The document does not explicitly state the type of ground truth used beyond "measurement validation."

    8. The sample size for the training set

    Since the device "does not use artificial intelligence or machine learning algorithms," there is no concept of a training set in the machine learning sense for this device.

    9. How the ground truth for the training set was established

    As there is no training set (due to the absence of AI/ML), there is no ground truth established for a training set.


    In summary, the provided FDA 510(k) summary focuses on demonstrating substantial equivalence based on indications for use and technological principles, supported by general software verification and validation, and a comparative measurement study. It explicitly states the device does not employ AI/ML, which changes the nature of the performance data required compared to an AI-powered diagnostic device. The document lacks the specific quantitative performance metrics, sample sizes, and expert details typically found in studies validating AI/ML-driven medical devices.

    K Number
    K200954
    Device Name
    Glaucoma Module
    Date Cleared
    2020-08-03

    (116 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices: K182376, K170164, K173119, K111157, K093213

    Intended Use

    The Glaucoma Module is a software application intended for the management, display and analysis of visual field and optical coherence tomography data. It is intended as an aid to the detection and management of visual field defects and progression of visual field loss.

    Device Description

    The Glaucoma Module works as an optional module, integrated into the Harmony user interface, and interfacing to Harmony to access the relevant data and information. Harmony is a comprehensive software platform intended for use in importing, processing, measurement, analysis and storage of clinical images and videos of the eye, as well as for management of patient data, diagnostic data, clinical information, reports from ophthalmic diagnostic instruments through either a direct connection with the instruments or through computerized networks. Harmony was most recently cleared by FDA in K182376.

    The Glaucoma Module is a fully interactive multi-modality software for clinicians to assess, diagnose and manage patients who are glaucoma suspects or have been diagnosed with glaucoma. The Glaucoma Module is an aid to detection and management of visual field and OCT data.

    The Glaucoma Module displays key information for diagnosis and management using a well-organized interface.

    The Glaucoma Module is integrated into the Harmony user interface and utilizes both OCT exam and Visual Field data in an interactive manner. It employs two main sections: the Hood Dashboard screen, used to determine glaucoma suspects, and the Glaucoma Trend screen, which can be used to observe patient data over a longer period of time.
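
    The summary does not describe the computations behind the Glaucoma Trend screen. As a purely illustrative sketch of the kind of longitudinal view it refers to, one could plot a visual field index over visit dates and estimate its rate of change with a simple linear fit; the index values, dates, and slope calculation below are hypothetical and are not the module's actual method.

```python
import numpy as np
from datetime import date

# Hypothetical series of visual field mean deviation (MD, in dB) over five visits.
visits = [date(2018, 1, 10), date(2018, 7, 3), date(2019, 1, 15),
          date(2019, 8, 2), date(2020, 2, 20)]
md_db = np.array([-2.1, -2.4, -2.9, -3.3, -3.8])

# Express visit dates as years since the first visit and fit a straight line.
years = np.array([(d - visits[0]).days / 365.25 for d in visits])
slope, intercept = np.polyfit(years, md_db, 1)   # dB per year, least squares

print(f"estimated MD change: {slope:.2f} dB/year")
```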

    The Glaucoma Module does not include predictive interpretations of the correlation of structural and functional measures, two measures that are understood to be independent of each other.

    The Glaucoma Module will work with the following medical devices:

    • Topcon's Maestro, Maestro 2, and Triton Optical Coherence Tomography devices
    • Zeiss' Visual Field instruments HFA3 and HFA II-i
    • Visual Field data from other manufacturers (e.g. Oculus EasyField) through the DICOM OPV data format (see the sketch below)
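
    The DICOM OPV object mentioned above is the standard DICOM information object for static perimetry (visual field) measurements. As a minimal sketch of how imported DICOM files might be sorted by modality, one could inspect each file's SOP Class UID with pydicom; the UIDs below are assumed from the DICOM standard, and the routing logic is illustrative rather than a description of how Harmony or the Glaucoma Module actually ingests data.

```python
import pydicom

# SOP Class UIDs assumed from the DICOM standard (not from vendor documentation).
OPV_SOP_CLASS = "1.2.840.10008.5.1.4.1.1.80.1"      # Ophthalmic Visual Field Static Perimetry Measurements
OCT_SOP_CLASS = "1.2.840.10008.5.1.4.1.1.77.1.5.4"  # Ophthalmic Tomography Image Storage

def classify_dicom(path: str) -> str:
    """Illustrative routing of an imported DICOM file by its SOP Class UID."""
    ds = pydicom.dcmread(path, stop_before_pixels=True)
    uid = str(ds.SOPClassUID)
    if uid == OPV_SOP_CLASS:
        return "visual field (OPV)"
    if uid == OCT_SOP_CLASS:
        return "OCT"
    return f"other ({uid})"

# Hypothetical usage:
# print(classify_dicom("exam_001.dcm"))
```
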
    AI/ML Overview

    Here's a breakdown of the requested information based on the provided text:

    Key Takeaway: The provided 510(k) summary for the Topcon Healthcare Solutions Glaucoma Module states that no performance data was required or provided for its clearance. This means there is no study described in this document that proves the device meets specific acceptance criteria related to its clinical performance. Instead, the clearance primarily relies on demonstrating substantial equivalence to a predicate device through similar intended use and technological characteristics, as well as software validation and verification.


    1. Table of Acceptance Criteria and Reported Device Performance

    Acceptance Criteria (Functional/Technical Only - No Clinical Performance) | Reported Device Performance (Software Validation & Verification)
    Device performs as intended | Confirmed through software validation and verification
    Device meets its specifications | Confirmed through software validation and verification
    Manages, displays, and analyzes visual field and OCT data | Confirmed through substantial equivalence comparison
    Integrates into Harmony user interface | Confirmed by device description
    Accesses relevant data and information from Harmony | Confirmed by device description
    Displays key information for diagnosis and management | Confirmed by device description
    Employs Hood Dashboard and Glaucoma Trend screen | Confirmed by device description
    Does not include predictive interpretations | Confirmed by device description
    Works with specified medical devices (e.g., Topcon OCTs, Zeiss HFA) | Confirmed by device description
    Performs data retrieval from allowed devices | Confirmed by substantial equivalence comparison
    Displays visual field reports and combined reports | Confirmed by substantial equivalence comparison
    Displays visual field information of a single exam | Confirmed by substantial equivalence comparison
    Provides data plots (threshold, graytone, total/pattern deviation) | Confirmed by substantial equivalence comparison
    Provides global and reliability indices | Confirmed by substantial equivalence comparison
    Allows user comments | Confirmed by substantial equivalence comparison
    Note: The document explicitly states, "No performance data was required or provided. Software validation and verification demonstrate that the Glaucoma Module performs as intended and meets its' specifications." Therefore, the "acceptance criteria" here are primarily functional and technical requirements met through software testing and comparison to a predicate, not clinical performance metrics like sensitivity, specificity, or accuracy.


    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size: Not applicable. No clinical performance testing against a specific test set is mentioned.
    • Data Provenance: Not applicable. No clinical performance testing data is provided.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Not applicable. No clinical performance testing against a ground truth is mentioned.

    4. Adjudication Method for the Test Set

    • Not applicable. No clinical performance testing with adjudication is mentioned.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

    • No. An MRMC comparative effectiveness study was not done or reported.

    6. If a Standalone Study (i.e. algorithm only, without human-in-the-loop performance) was done

    • No. A standalone performance study was not done or reported. The device is described as a software application for clinicians to aid in assessment, diagnosis, and management, implying a human-in-the-loop context. However, no performance data (standalone or otherwise) is presented.

    7. The Type of Ground Truth Used

    • Not applicable. No ground truth for clinical performance evaluation is mentioned.

    8. The Sample Size for the Training Set

    • Not applicable. The document does not describe any machine learning or AI algorithm development that would involve a training set. The device is a "software application intended for the management, display and analysis..." and not an AI/ML diagnostic tool requiring such a set.

    9. How the Ground Truth for the Training Set was Established

    • Not applicable. As no training set is mentioned, no ground truth for it was established.