510(k) Data Aggregation

Search Results: 5 results found

    K Number
    K213527
    Device Name
    FORUM
    Date Cleared
    2022-08-15

    (284 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Intended Use

    FORUM is a software system intended for use in management, processing of patient, diagnostic, video and image data and measurement from computerized diagnostic instruments or documentation systems through networks. It is intended to work with other FORUM applications (including but not limited to Retina Workplace, Glaucoma Workplace).

    FORUM is intended for use in review of patient, diagnostic and image data and measurement by trained healthcare professionals.

    Device Description

    FORUM and its accessories are a computer software system designed for management, processing, and display of patient, diagnostic, video and image data and measurement from computerized diagnostic instruments or documentation systems through networks. It is intended to work with other FORUM applications.

    FORUM receives data via DICOM protocol from a variety of ophthalmic diagnostic instruments (such as CIRRUS, CLARUS, and 3rd Party systems), allows central data storage and remote access to patient data. This version of FORUM allows the user to access their data in the cloud via ZEISS developed non-medical device accessories. FORUM is an ophthalmic data management solution. FORUM provides basic viewing functionalities and is able to connect all DICOM compliant instruments.
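
    The DICOM transfer described above can be sketched, purely as an illustration, with the open-source pydicom/pynetdicom libraries (not part of FORUM). The archive host, port and AE titles below are hypothetical placeholders; the sketch simply pushes one stored DICOM object to an archive the way a connected instrument might.

```python
# Minimal, illustrative C-STORE push of one DICOM object to an archive.
# Assumes the open-source pydicom/pynetdicom libraries; the host, port
# and AE titles are hypothetical, not FORUM defaults.
from pydicom import dcmread
from pynetdicom import AE

ds = dcmread("fundus_photo.dcm")             # any DICOM object exported by an instrument

ae = AE(ae_title="INSTRUMENT")
ae.add_requested_context(ds.SOPClassUID)     # negotiate the object's own SOP class

assoc = ae.associate("archive.example.local", 11112, ae_title="ARCHIVE")
if assoc.is_established:
    status = assoc.send_c_store(ds)          # returns a Dataset with a Status element
    print(f"C-STORE status: 0x{status.Status:04X}")
    assoc.release()
```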

    This version of FORUM provides additional device functions such as review and annotation functionality of fundus images/movies, display of OCT image stacks, bidirectional data exchange between FORUM Workplaces, customization of document viewing abilities, user interface improvements, and user management updates.

    This version of FORUM has additional non-medical device functions that are performed by non-medical device accessories, such as documentation storage, export of data in various formats, export to the cloud, improved IT integration capability into the existing IT network, image sorting, EMR log in improvements, numerous backend improvements with the purpose of streamlining clinical workflow.

    AI/ML Overview

    Here's an analysis of the provided text regarding the acceptance criteria and study for the device:

    Important Note: The provided text is a 510(k) summary, which focuses on demonstrating substantial equivalence to a predicate device. It usually doesn't contain a detailed breakdown of a separate clinical study with acceptance criteria, sample sizes, and expert adjudication in the same way an AI/ML device would. Instead, it relies on extensive software verification and validation to demonstrate safety and effectiveness.

    Based on the provided text, a direct answer to all your questions in the typical format for a clinical study is not explicitly available for this specific type of device (a medical image management and processing system). However, I can extract the relevant information and infer what's implied.


    Acceptance Criteria and Device Performance Study for FORUM (K213527)

    This submission for FORUM (K213527) is a 510(k) Pre-market Notification for a medical image management and processing system. The acceptance criteria and "study" are primarily focused on demonstrating substantial equivalence to a predicate device (FORUM Archive and Viewer, K122938) through software verification and validation, rather than a traditional multi-reader multi-case clinical study for a diagnostic AI algorithm.

    1. Table of Acceptance Criteria and Reported Device Performance

    Since this is a software system intended for managing and processing existing image data, not generating new diagnostic conclusions, the "acceptance criteria" are related to its functional performance, safety, and equivalence to its predicate.

    Acceptance Criteria Category/Area | Specific Criteria (Implied/Demonstrated) | Reported Device Performance (Demonstrated by Verification & Validation)
    Indications for Use | Equivalence to predicate's IFU; no new risks associated with the updated IFU. | The IFU is "equivalent" to the predicate, with a minor textual change ("removal of the word 'storage' and display... due to an updated definition of MIMS") not constituting a substantial change.
    Functionality (Medical Device Features) | Performance of core functions for patient data management, processing, and review as intended. | All new and/or modified medical device functions (e.g., fundus image processing, image annotations, bidirectional data exchange) were demonstrated through risk analysis and testing to not impact the safety, equivalence, risk profile, and technical specifications as compared to the predicate device.
    Safety and Risk Profile | Risks associated with new/modified functions are mitigated and do not introduce new substantial concerns. | Appropriate risk analysis and testing documentation were provided to demonstrate that modifications do not impact substantial equivalence. The device was considered a "Moderate" level of concern, and verification/validation confirmed no indirect minor injury to patient or operator.
    Technical Specifications | Updated platform/OS and other backend improvements maintain or enhance performance without adverse impact. | Backend improvements (e.g., updated Windows Server/Client versions, addition of Apple OS X BigSur support) were deemed equivalent as they do not impact indications for use, device risk profile, or technical specifications, as demonstrated by risk documentation and testing.
    Non-Medical Device Functions | New non-medical accessories and functions (e.g., cloud connection, documentation storage) do not impact the core medical device functionality or safety. | The addition of non-medical accessories (e.g., for cloud connectivity) and non-medical functions does not impact the functionality or safety of the medical device, as demonstrated by appropriate risk assessments and testing information.
    Software Verification & Validation | All requirements for proposed changes must be met, and testing must be performed according to FDA guidance. | "FORUM (version 4.3) has successfully undergone extensive software verification and validation testing to ensure that all requirements for proposed changes have been met." Documentation provided as recommended by FDA's "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices." All testing followed internally approved procedures.

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size: Not explicitly stated as a number of cases or patients. The "test set" here refers to the software verification and validation activities. These typically involve diverse test cases covering various functionalities, edge cases, and potential failure points, rather than a "patient test set" in a clinical study.
    • Data Provenance: Not specified. For software verification and validation, the "data" would be test data (simulated or real but de-identified) used to exercise the software's functions.

    3. Number of Experts Used to Establish Ground Truth and Qualifications

    • Number of Experts: Not applicable or specified. For this type of software, "ground truth" relates to the expected behavior of the software according to its design specifications. It doesn't involve medical experts adjudicating diagnoses in a test set.
    • Qualifications of Experts: N/A for establishing "ground truth" in this context. Experts would be software engineers, quality assurance personnel, and potentially clinical subject matter experts for reviewing the functional requirements and outputs.

    4. Adjudication Method for the Test Set

    • Adjudication Method: Not applicable. The "ground truth" for software verification is the expected output according to the design specification and requirements. Verification and validation are performed against these predetermined requirements.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was it done? No. This type of study is typically performed for AI-powered diagnostic devices where human readers' performance with and without AI assistance is compared. FORUM is a management and processing system, not an AI diagnostic algorithm that provides assistance to human readers in the diagnostic task itself.
    • Effect Size of Human Readers' Improvement: Not applicable.

    6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study

    • Was it done? No, not in the sense of a standalone diagnostic algorithm's performance. The "standalone" performance for this device would refer to its ability to perform its specified functions (managing, processing, displaying data) correctly and reliably, which was assessed through software verification and validation. It's not a diagnostic algorithm.

    7. Type of Ground Truth Used

    • Type of Ground Truth: Software functional specifications and requirements documents. The "truth" is whether the software behaves as designed and meets its technical and safety requirements.

    8. Sample Size for the Training Set

    • Training Set Sample Size: Not applicable. FORUM is a medical image management and processing system, not a machine learning model that requires a "training set."

    9. How the Ground Truth for the Training Set Was Established

    • How Ground Truth Established: Not applicable, as there is no training set for this type of device.

    K Number
    K141297
    Date Cleared
    2014-10-03

    (137 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    N/A
    Device Name
    FORUM GLAUCOMA WORKPLACE
    Intended Use

    FORUM Glaucoma Workplace is a FORUM software application intended for the management, display, and analysis of visual field and optical coherence tomography data. The FORUM Glaucoma Workplace is indicated as an aid to the detection, measurement, and management of visual field defects and progression of visual field loss.

    FORUM Glaucoma Workplace is also intended for generating reports that contain results from perimetry, optical coherence tomography and fundus photography.

    FORUM Glaucoma Workplace implements Cirrus algorithms and normative databases for retinal nerve fiber layer thickness, ganglion cell plus inner plexiform thickness and optic nerve head measurement and Humphrey Field Analyzer algorithms and databases for visual field measurements and Guided Progression Analysis.

    Device Description

    FORUM Glaucoma Workplace is a FORUM software application intended for the management, display, and analysis of visual field and optical coherence tomography data.

    FORUM Glaucoma Workplace implements Cirrus algorithms and normative databases for retinal nerve fiber layer thickness, ganglion cell plus inner plexiform thickness and optic nerve head measurement and Humphrey Field Analyzer algorithms and databases for visual field measurements and Guided Progression Analysis.

    FORUM Glaucoma Workplace provides a means to review and analyze data from various visual field examinations to identify progressive visual field loss. FORUM Glaucoma Workplace utilizes Humphrey® Field Analyzer (HFA) algorithms and databases including STATPAC and Guided Progression Analysis (GPA) to process visual field data and generate visual field reports. GPA compares visual field test results of follow-up tests to an established baseline over time and determines if there is change that exceeds the expected test-retest variability.

    FORUM Glaucoma Workplace generates combined reports that contain results from perimetry, optical coherence tomography and fundus photography. FORUM Glaucoma Workplace implements Cirrus algorithms and databases for retinal nerve fiber layer (RNFL) thickness, ganglion cell plus inner plexiform layer thickness and optic nerve head (ONH) measurements included in these reports.

    The created reports and the Guided Progression Analysis provide a comprehensive overview of the structural and functional exam results to aid health care professionals in the measurement, and management of visual field defects and progression of visual field loss.

    The following are the main functionalities of FORUM Glaucoma Workplace:

    • Data retrieval and report storage
    • Managing, analyzing and displaying visual field exams and OCT exams
    • Creation of visual field reports and combined reports

    FORUM Glaucoma Workplace retrieves HFA visual field test data from the FORUM Archive, uses the HFA algorithms and databases to process the visual field raw data, then generates and displays visual field reports.

    FORUM Glaucoma Workplace creates combined reports using HFA visual field exam data (functional information) and Cirrus acquisition data (structural information); fundus images stored in FORUM may also be added to the reports. The reports generated by FORUM Glaucoma Workplace are stored as DICOM Encapsulated PDFs in the FORUM Archive. FORUM Glaucoma Workplace displays interactive screens and the generated visual field reports. These reports include those previously offered by the HFA II and HFA II-i: Single Field Analysis; Three in One; Numeric; Suprathreshold; Kinetic Reports; Overview; Guided Progression Analysis (GPA) Summary, Full GPA, GPA Last Three Follow-up and Single Field Analysis (SFA) GPA.
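
    As a hedged illustration of the "DICOM Encapsulated PDF" format referenced above, the sketch below wraps a PDF in the Encapsulated PDF Storage SOP class using the open-source pydicom library. The file names and patient attributes are hypothetical placeholders; a real FORUM report would carry a much fuller attribute set.

```python
# Illustrative sketch: wrap a PDF report as a DICOM Encapsulated PDF object
# (SOP Class 1.2.840.10008.5.1.4.1.1.104.1) using pydicom. File names and
# patient identifiers are hypothetical placeholders.
from datetime import datetime
from pydicom.dataset import Dataset, FileMetaDataset
from pydicom.uid import ExplicitVRLittleEndian, generate_uid

ENCAPSULATED_PDF_STORAGE = "1.2.840.10008.5.1.4.1.1.104.1"

with open("gpa_report.pdf", "rb") as f:
    pdf_bytes = f.read()

file_meta = FileMetaDataset()
file_meta.MediaStorageSOPClassUID = ENCAPSULATED_PDF_STORAGE
file_meta.MediaStorageSOPInstanceUID = generate_uid()
file_meta.TransferSyntaxUID = ExplicitVRLittleEndian

ds = Dataset()
ds.file_meta = file_meta
ds.is_little_endian = True
ds.is_implicit_VR = False
ds.SOPClassUID = ENCAPSULATED_PDF_STORAGE
ds.SOPInstanceUID = file_meta.MediaStorageSOPInstanceUID
ds.Modality = "DOC"
ds.PatientName = "Example^Patient"                       # hypothetical
ds.PatientID = "0000001"                                 # hypothetical
ds.ContentDate = datetime.now().strftime("%Y%m%d")
ds.DocumentTitle = "Guided Progression Analysis Report"
ds.MIMETypeOfEncapsulatedDocument = "application/pdf"
ds.EncapsulatedDocument = pdf_bytes

ds.save_as("gpa_report_encapsulated.dcm", write_like_original=False)
```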

    FORUM Glaucoma Workplace manages Cirrus OCT data to generate combined functional (perimetry) and structural (OCT) reports. These combined reports contain the results from perimetry, OCT and fundus photography.

    FORUM Glaucoma Workplace processes Cirrus OCT data by implementing the Cirrus algorithms and databases offered by Cirrus HD-OCT and CIRRUS photo. The databases offered by Cirrus HD-OCT are used within FORUM Glaucoma Workplace for comparison to Cirrus HD-OCT data; the databases offered by CIRRUS photo are used within FORUM Glaucoma Workplace for comparison to CIRRUS photo data.
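
    The normative-database comparison mentioned above can only be illustrated in generic terms, since the Cirrus normative data are proprietary. The sketch below uses made-up percentile cutoffs purely to show the shape of a "within normal limits / borderline / outside normal limits" classification.

```python
# Purely illustrative normative-database classification. The percentile
# cutoff values below are hypothetical; real Cirrus normative limits are
# proprietary, age-matched and parameter-specific.
HYPOTHETICAL_CUTOFFS_UM = {
    # parameter: (1st percentile, 5th percentile) in microns -- made-up numbers
    "average_rnfl_thickness": (68.0, 80.0),
}

def classify(parameter: str, value_um: float) -> str:
    p1, p5 = HYPOTHETICAL_CUTOFFS_UM[parameter]
    if value_um < p1:
        return "outside normal limits"   # below the 1st percentile
    if value_um < p5:
        return "borderline"              # between the 1st and 5th percentiles
    return "within normal limits"

print(classify("average_rnfl_thickness", 74.3))   # -> borderline
```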

    FORUM Glaucoma Workplace provides two types of combined reports:

      1. 24-2/30-2 and RNFL (for Cirrus HD-OCT and CIRRUS photo data): This report presents the visual field test result comprised of either the 24-2 or 30-2 test pattern combined with a Retinal Nerve Fiber Layer (RNFL) report.
      2. 10-2 and GCA (only for Cirrus HD-OCT data): This report presents a visual field test result comprised of the central 10-2 test pattern combined with a Ganglion Cell Analysis (GCA) report.

    Elements from the visual field reports that are provided in the Combined Reports include the Graytone plot, Pattern Deviation and Total Deviation plots (using probability symbols) and a key to the probability symbols. In addition, the Reliability Indices (Fixation Losses; False Positive errors; False Negative errors) and Global Indices [Visual Field Index (VFI); Mean Deviation (MD); Pattern Standard Deviation (PSD) and Glaucoma Hemifield Test (GHT)] are provided.
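
    As a rough, non-authoritative illustration of what the global indices above summarize: Mean Deviation reflects the overall depression of the field, and Pattern Standard Deviation the localized non-uniformity of loss. The sketch below uses unweighted statistics on made-up values; the actual STATPAC indices use age-corrected, location-weighted normative data.

```python
# Purely illustrative, unweighted stand-ins for the MD and PSD indices.
# STATPAC's real indices use age-corrected, location-weighted normative
# statistics; the pointwise values below are made-up.
from statistics import mean, pstdev

total_deviation_db = [-1.8, -2.4, 0.3, -5.1, -0.7, -3.9]    # hypothetical, in dB
pattern_deviation_db = [-0.9, -1.5, 1.2, -4.2, 0.2, -3.0]   # hypothetical, in dB

md_like = mean(total_deviation_db)        # overall depression of the field
psd_like = pstdev(pattern_deviation_db)   # localized non-uniformity of loss

print(f"MD-like index:  {md_like:+.2f} dB")
print(f"PSD-like index: {psd_like:.2f} dB")
```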

    Elements from the Cirrus OCT reports that are provided in the Combined Report include the Retinal Nerve Fiber Layer Thickness (RNFL) Deviation Map, Average RNFL Thickness and Optic Nerve Head Summary. FORUM Glaucoma Workplace also provides the Ganglion Cell Analysis (GCA) Thickness Deviation Map and GCA parameters table for Cirrus HD-OCT data.

    After launching FORUM Glaucoma Workplace from the FORUM application, the user can select from four tabs: Visual Fields; Overview; GPA and Create Reports. Within these tabs, FORUM Glaucoma Workplace provides tools for the management, display and analysis of visual field exam data and the creation of reports.

    Visual Fields Tab

    FORUM Glaucoma Workplace displays a range of visual field tests (Threshold, Suprathreshold and Kinetic) that have been stored in FORUM. The exam list includes the exam date, test pattern, test strategy, and the stimulus color, size, and background for each selected patient. From the Visual Fields tab, users can create reports that can later be retrieved in FORUM Viewer and/or printed.

    Overview Tab

    FORUM Glaucoma Workplace creates and displays visual field reports for visual field tests provided the visual field examination results have been stored in FORUM. These reports include the Overview and Single Field Analysis. The Overview report contains the data of all existing tests selected. The Single Field Analysis report contains data from a single central threshold test.

    GPA Tab

    FORUM Glaucoma Workplace contains the same GPA algorithms and databases as offered in the Humphrey Field Analyzer II and II-i and allows GPA to be performed on a computer running FORUM independent of and apart from the visual field instrument itself. Within the GPA tab, GPA information is provided on interactive screens.

    GPA analysis can be performed for any patient who has at least two baseline visual field tests. These tests must have been performed with the Full Threshold, Swedish Interactive Threshold Algorithm (SITA) Standard, or SITA Fast strategies. Also, at least one follow-up visual field test must have been performed using either the SITA Standard or SITA Fast test strategy. With FORUM Glaucoma Workplace, the user can set an optional, second baseline.
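
    To make the baseline/follow-up comparison concrete, the sketch below shows a simplified, event-style progression check. It is not the proprietary GPA algorithm: the per-point variability limit and the "three flagged points repeated on consecutive tests" rule are hypothetical stand-ins chosen only to show the general shape of event-based progression flagging.

```python
# Simplified, illustrative event-style progression check -- NOT the
# proprietary Guided Progression Analysis. The per-point variability limit
# and the "three repeated flagged points" rule are hypothetical stand-ins.
from statistics import mean

VARIABILITY_DB = 4.0   # hypothetical per-point test-retest limit, in dB
POINTS_REQUIRED = 3    # hypothetical number of repeatedly flagged points

def flagged_points(baseline_exams, follow_up):
    """Indices of test points that worsened beyond the variability limit."""
    baseline_avg = [mean(values) for values in zip(*baseline_exams)]
    return {i for i, (b, f) in enumerate(zip(baseline_avg, follow_up))
            if (f - b) < -VARIABILITY_DB}

def progression_label(baseline_exams, follow_ups):
    """Label the series based on flags repeated across consecutive follow-ups."""
    flags = [flagged_points(baseline_exams, f) for f in follow_ups]
    if len(flags) >= 3 and len(flags[-1] & flags[-2] & flags[-3]) >= POINTS_REQUIRED:
        return "Likely Progression"
    if len(flags) >= 2 and len(flags[-1] & flags[-2]) >= POINTS_REQUIRED:
        return "Possible Progression"
    return "No progression flagged"

# Hypothetical six-point fields: two baseline exams and three follow-ups.
baselines = [[-1.0, -2.0, 0.0, -1.5, -0.5, -2.5],
             [-1.2, -1.8, 0.2, -1.7, -0.3, -2.3]]
follow_ups = [[-6.0, -7.5, 0.1, -7.0, -0.4, -2.4],
              [-6.5, -8.0, 0.0, -7.2, -0.6, -2.6],
              [-7.0, -8.5, -0.1, -7.5, -0.5, -2.5]]
print(progression_label(baselines, follow_ups))   # -> Likely Progression
```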

    From the GPA tab, users can create four types of GPA reports: Full GPA, GPA Summary, GPA Last Three Follow-up and Single Field Analysis (SFA) GPA. A Single Field Analysis report can also be created within the GPA tab.

    FORUM Glaucoma Workplace allows the user to interact with the available data. When viewing the GPA on the screen, the user can hold the mouse pointer over a particular area and a small tooltip will appear with details regarding the particular test. In addition, the user can add notes about an exam through the Comments feature and view previous comments about any exam.

    Create Reports Tab

    FORUM Glaucoma Workplace provides the user with the option of creating different report types, such as Single Field Analysis, Kinetic, or Suprathreshold using exam data stored in FORUM. Several reports of the same type can also be generated in one simple procedure, for example, if the user wants to create or print Single Field Analysis reports for every Threshold exam for a particular patient.

    Technological Characteristics

    FORUM Glaucoma Workplace is connected to FORUM via an internal interface; it consists of a server and client that integrate into an existing FORUM Archive and Viewer installation. Once FORUM Glaucoma Workplace is installed and licensed, the new functionality becomes available in FORUM Viewer.

    The FORUM Glaucoma Workplace server is installed on the FORUM server. The data access components are located on the server. The server installation enables FORUM Glaucoma Workplace to retrieve HFA and OCT exam data stored in the FORUM Archive. It also contains the algorithms and databases for data management and creation of visual field reports and reports that contain results from perimetry, optical coherence tomography, and fundus photography (Combined Reports).

    The client is installed on the FORUM Viewer. The display components are located on the client. The client installation enables FORUM Glaucoma Workplace to display visual field results, optical coherence tomography data and the user interaction information.

    The reports are displayed on a computer monitor with interactive screens using the FORUM Viewer. The created reports may be stored as DICOM Encapsulated PDFs in the FORUM Archive.

    AI/ML Overview

    Here's a breakdown of the requested information based on the provided text, outlining the acceptance criteria and study details for the FORUM® Glaucoma Workplace device:

    The provided document describes the FORUM® Glaucoma Workplace, a software application for managing, displaying, and analyzing visual field and optical coherence tomography (OCT) data in the context of glaucoma. It states that the device implements existing validated algorithms and normative databases (Cirrus/Humphrey Field Analyzer) for its analyses and report generation.

    1. Table of Acceptance Criteria and Reported Device Performance:

    While the document doesn't present a formal "acceptance criteria" table with specific quantitative thresholds, it describes the expected performance and verification that was conducted. The core acceptance criterion can be inferred as the device performing equivalently and as intended compared to the predicate devices and the original algorithms it implements.

    Acceptance Criteria (Inferred from testing) | Reported Device Performance (Summary from document)
    Functional Equivalence: Device performs data management, display, and analysis of visual field and OCT data as described in the indications for use. | Verified to manage, display, and analyze visual field and OCT exams, and create various reports as specified.
    Algorithm Implementation Equivalence: FORUM Glaucoma Workplace's implementation of Cirrus and HFA algorithms (e.g., STATPAC, GPA, RNFL thickness, GCA, ONH measurements) produces results equivalent to the original devices. | Visual field reports generated on HFA II-i and OCT test reports generated on Cirrus HD-OCT and CIRRUS photo were compared to the reports generated by FORUM Glaucoma Workplace using the same test data. Results were found to be equivalent.
    Report Generation: Device generates the specified visual field reports (e.g., Single Field Analysis, Overview, GPA, Kinetic, Suprathreshold) and combined reports (24-2/30-2 and RNFL, 10-2 and GCA) accurately. | Verification demonstrated the device's ability to generate all specified reports (visual field, combined) correctly and store them as DICOM Encapsulated PDFs.
    User Interaction/Interface: User interface and interactive features (e.g., changing baselines in GPA, comments, synchronized scrolling) function correctly. | Validation participants (ophthalmologists) confirmed the functionality and usability of the interactive screens and features.
    Software Compatibility: Device operates correctly on specified operating systems (server and client). | Evaluated and determined to be suitable for the specified Microsoft Windows and OS X operating systems.
    Safety and Effectiveness Equivalence: The device is as safe and effective as the predicate devices. | All necessary testing conducted to ensure safety and effectiveness equivalence to predicate devices.

    2. Sample Size Used for the Test Set and Data Provenance:

    • Test Set Sample Size: The document mentions "representative data (sample data that is representative of true clinical cases)" was used for clinical functionalities validation by ophthalmologists. However, no specific numerical sample size for this test set is provided.
    • Data Provenance: The data used for comparative testing (HFA II-i, Cirrus HD-OCT, CIRRUS photo data compared against FORUM Glaucoma Workplace generated reports) is implied to be existing clinical data from these devices. The document does not specify the country of origin of this data, nor explicitly states if it was retrospective or prospective. Given it's "test data to verify that the results contained in both reports were equivalent," it's likely pre-existing (retrospective) data.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts:

    • Number of Experts: "Ophthalmologists in three countries" participated in the validation of clinical functionalities. The exact number of ophthalmologists is not specified.
    • Qualifications of Experts: They are identified as "ophthalmologists," implying medical doctors specializing in eye care, which is highly relevant to the device's application. No specific years of experience or sub-specialization are mentioned.

    4. Adjudication Method for the Test Set:

    The document describes the ophthalmologists completing "questionnaires rating the various aspects of the software." It does not explicitly mention a formal adjudication method like 2+1 or 3+1 for resolving discrepancies in evaluations. The validation seems to have focused on confirming functionality and usability, rather than a diagnostic performance comparison that would typically require adjudicators for ground truth.

    5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs Without AI Assistance:

    • MRMC Study: No, a multi-reader multi-case (MRMC) comparative effectiveness study was not conducted as described in the provided text. The study involved ophthalmologists validating the software's functionalities and output, not comparing human reader performance with and without AI assistance for diagnostic tasks.
    • Effect Size: Therefore, there is no reported effect size for human reader improvement with or without AI assistance from this documentation. The device is primarily an advanced PACS/data management and analysis tool, not a new AI diagnostic algorithm.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done:

    Yes, a form of standalone performance evaluation was done indirectly. The document states: "As part of the verification testing, the visual field reports generated on the HFA II-i and OCT test reports generated on the Cirrus HD-OCT and CIRRUS photo were compared to the reports generated by FORUM Glaucoma Workplace using the same test data to verify that the results contained in both reports were equivalent." This comparison verifies that the algorithms implemented within FORUM Glaucoma Workplace (which are existing, validated algorithms from the HFA and Cirrus devices) produce the same outputs as the original devices when given the same input data, essentially confirming the "algorithm only" performance.

    7. The Type of Ground Truth Used:

    The ground truth for the comparison appears to be the outputs (reports/results) from the predicate devices themselves (HFA II-i, Cirrus HD-OCT, CIRRUS photo). The FORUM Glaucoma Workplace's outputs (reports and analyses) were verified to be equivalent to these established predicate device outputs. For the clinical functionalities validation, the "ground truth" was likely the expected correct functioning and usability based on expert opinion and predefined functional requirements, evaluated by ophthalmologists.

    8. The Sample Size for the Training Set:

    The document does not specify any training set sample size. This is expected, as FORUM Glaucoma Workplace implements existing and validated algorithms and normative databases from the HFA and Cirrus devices. It does not describe a new, independently developed machine learning algorithm that would require its own separate training set. The "training" of the algorithms (e.g., GPA, normative databases) would have been done during the development and validation of the original HFA and Cirrus devices, which are referenced as predicates.

    9. How the Ground Truth for the Training Set Was Established:

    As no new algorithm with an independent training set is described, this question is not directly applicable to the FORUM Glaucoma Workplace itself. The ground truth for the underlying algorithms (HFA's GPA, Cirrus's normative databases, etc.) would have been established during their original development and validation, likely through extensive clinical studies and expert consensus on diagnostic criteria for glaucoma and visual field/OCT measurements. The current submission relies on the established validity of these pre-existing algorithms.


    K Number
    K130648
    Date Cleared
    2013-07-23

    (134 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Device Name
    FORUM GLAUCOMA WORKPLACE
    Intended Use

    FORUM Glaucoma Workplace is a FORUM software application intended for the management, display, and analysis of visual field data. The FORUM Glaucoma Workplace is indicated as an aid to the detection, measurement, and management of progression of visual field loss.

    Device Description

    FORUM Glaucoma Workplace is a FORUM software application that provides a means to review and analyze data from various visual field examinations to identify statistically significant and progressive visual field loss. FORUM Glaucoma Workplace utilizes Humphrey® Field Analyzer (HFA) algorithms and databases including STATPAC and Guided Progression Analysis (GPA) to process visual field data and generate visual field reports. GPA compares visual field test results of follow-up tests to an established baseline over time and determines if there is statistically significant change.

    The following are the main functionalities of FORUM Glaucoma Workplace:

    • Data retrieval and report storage
    • Managing, analyzing and displaying visual field exams
    • Creation of visual field reports

    FORUM Glaucoma Workplace retrieves HFA visual field test data from the FORUM Archive, uses the HFA algorithms and databases to process the visual field raw data, then generates and displays visual field reports. The reports generated by FORUM Glaucoma Workplace are stored as DICOM Encapsulated PDFs in the FORUM Archive.

    AI/ML Overview

    Here's an analysis of the acceptance criteria and study information for the FORUM Glaucoma Workplace device, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided text does not explicitly state quantitative (e.g., sensitivity, specificity, accuracy) acceptance criteria with numerical targets. Instead, the acceptance criteria are generally framed around demonstrating functional equivalence to predicate devices and adherence to design specifications. The performance testing focused on verifying that the device performs as intended and that its generated reports are equivalent to those from predicate devices.

    Acceptance Criterion (Implicit) | Reported Device Performance
    Functional Equivalence: Management, display, and analysis of visual field data | FORUM Glaucoma Workplace provides management, analysis, and display of visual field exams, creating reports. It utilizes Humphrey® Field Analyzer (HFA) algorithms and databases (STATPAC and GPA) to process visual field data and generate reports. It retrieves HFA visual field test data, processes it, generates and displays reports. The reports contain the same information as previously provided on the HFA instrument and utilize the same algorithms and databases.
    Functional Equivalence: Detection, measurement, and management of progression of visual field loss (GPA functionality) | FORUM Glaucoma Workplace contains the same GPA algorithms and databases as offered in the Humphrey® Field Analyzer II and II-i. It enables progression analyses, compares follow-up test results to baselines, and determines statistically significant change, providing "Possible Progression" or "Likely Progression" messages consistent with the predicate. It offers the same GPA report types (Full GPA, GPA Summary, GPA Last Three Follow-up, SFA GPA).
    Report Equivalence: Generated visual field reports (Single Field Analysis, Overview, GPA) match those of predicate devices | The visual field reports (Single Field Analysis, Overview, and Guided Progression Analysis) generated on the HFA II-i were compared to the reports generated by FORUM Glaucoma Workplace using the same test data to verify that the results contained in both reports were equivalent. This comparison was successful.
    Software Performance: Reliability, stability, and proper operation across supported operating systems | Verification and validation activities, including tests accompanying development (code inspections), module and integration testing, and system verification, were performed. The client and server operating systems were evaluated. Results determined suitability for various Windows client and server operating systems, including Windows XP, Windows 7, Windows Server 2003, and Windows Server 2008 R2. "Verification and validation activities were successfully completed and prove that the product FORUM Glaucoma Workplace meets its requirements and performs as intended."
    Usability/Clinical Functionality: Meets user requirements in a clinical environment | Validation of clinical functionalities was completed by ophthalmologists using the software with representative data and executing test cases simulating clinical use. They completed questionnaires rating various aspects of the software. (No specific rating results are provided, but the overall conclusion indicates successful completion.)

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Description: The verification testing involved comparing visual field reports generated on the HFA II-i to reports generated by FORUM Glaucoma Workplace using the same test data. Clinical functionality validation used "representative data (sample data that is representative of true clinical cases)."
    • Sample Size for Test Set: Not explicitly stated. The document mentions "the same test data" for report comparison and "representative data" for clinical validation, but no specific number of cases or patients is provided for either.
    • Data Provenance:
      • For the report comparison: The data originated from the HFA II-i, which is a predicate device.
      • For clinical functionality validation: "representative data (sample data that is representative of true clinical cases)". The country of origin of this specific data is not stated or implied. However, the validation participants were ophthalmologists in "two countries."
      • Retrospective or Prospective: Not explicitly stated, but the mention of "same test data" and "representative data" suggests it was likely retrospective (pre-existing data).

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications

    • Number of Experts: Unclear how many individual ophthalmologists participated in the clinical functionality validation. It states "ophthalmologists in two countries."
    • Qualifications of Experts: Ophthalmologists. No specific experience level (e.g., "10 years of experience") is provided.

    4. Adjudication Method for the Test Set

    • Adjudication Method: Not explicitly described. For the report comparison, it states "to verify that the results contained in both reports were equivalent," implying direct comparison. For clinical validation, ophthalmologists "completed questionnaires rating the various aspects of the software," which doesn't suggest a formal adjudication process for establishing a ground truth.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was an MRMC study done? No. The submission describes functional equivalence testing and clinical validation, but not a comparative effectiveness study designed to measure human reader performance with and without AI assistance.
    • Effect Size: Not applicable, as no MRMC study was conducted.

    6. Standalone Performance Study (Algorithm Only Without Human-in-the-Loop Performance)

    • Was a standalone study done? Yes, implicitly. The core of the "Performance Data" section describes verification that the FORUM Glaucoma Workplace's algorithms and data processing produce results "equivalent" to the predicate HFA II-i algorithms. This involved comparing the outputs of the software (reports) directly against the predicate device's outputs using the "same test data." This is a form of standalone performance assessment, as it focuses on the algorithm's output matching a known, accepted standard.

    7. Type of Ground Truth Used

    • The ground truth for the comparison of reports and algorithms was based on the outputs and accepted performance of predicate devices (Humphrey® Field Analyzer II and II-i, and their GPA/STATPAC algorithms). Essentially, the "ground truth" was established by the existing, legally marketed and deemed safe/effective predicate technologies.

    8. Sample Size for the Training Set

    • Not applicable / Not explicitly stated. The FORUM Glaucoma Workplace primarily implements existing, validated HFA algorithms (STATPAC and GPA). The text does not describe a new machine learning algorithm that would require a distinct "training set" in the conventional sense of AI/ML development. It leverages established algorithms and databases.

    9. How the Ground Truth for the Training Set Was Established

    • Not applicable / Not explicitly stated. As there's no mention of a new machine learning model being trained by the applicant, the concept of a "training set ground truth" is not relevant in this submission. The algorithms themselves (STATPAC, GPA) were developed and validated years prior by the manufacturer of the Humphrey Field Analyzer.

    K Number
    K122938
    Date Cleared
    2012-11-02

    (39 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Device Name
    FORUM, FORUM ARCHIVE, FORUM ARCHIVE & VIEWER, FORUM ASSIST MATCH
    Intended Use

    FORUM is a software system intended for use in storage, management, processing, and display of patient, diagnostic, video and image data and measurement from computerized diagnostic instruments or documentation systems through networks. It is intended to work with other FORUM applications.

    FORUM is intended for use in review of patient, diagnostic and image data and measurement by trained healthcare professionals.

    Device Description

    FORUM is a computer software system designed for storage, processing, and review of images, videos and reports originating from computerized diagnostic instruments or other documentation systems.

    FORUM is available in two different product variants: FORUM Archive and FORUM Archive & Viewer.

    The FORUM Archive consists of a server and a client application. The server offers a DICOM interface to diagnostic instruments via a network. On this server, medical documents including reports, images, videos or raw data and patient data are archived. All data can be retrieved via the network by instruments or other applications using the DICOM interface. The client application provides a graphical user interface (GUI) for administering the server and the data stored therein.
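
    Data retrieval over the DICOM interface can be sketched with a study-level query. This again assumes the open-source pydicom/pynetdicom libraries, with a hypothetical host, port, AE titles and patient ID rather than anything FORUM-specific.

```python
# Minimal, illustrative study-level C-FIND query against a DICOM archive,
# assuming pydicom/pynetdicom. Host, port and patient ID are hypothetical.
from pydicom.dataset import Dataset
from pynetdicom import AE
from pynetdicom.sop_class import StudyRootQueryRetrieveInformationModelFind

query = Dataset()
query.QueryRetrieveLevel = "STUDY"
query.PatientID = "0000001"          # hypothetical matching key
query.StudyDate = ""                 # empty return keys: ask the archive to fill them in
query.StudyDescription = ""

ae = AE(ae_title="WORKSTATION")
ae.add_requested_context(StudyRootQueryRetrieveInformationModelFind)

assoc = ae.associate("archive.example.local", 11112, ae_title="ARCHIVE")
if assoc.is_established:
    for status, identifier in assoc.send_c_find(query, StudyRootQueryRetrieveInformationModelFind):
        if status and status.Status in (0xFF00, 0xFF01) and identifier:
            print(identifier.StudyDate, identifier.StudyDescription)
    assoc.release()
```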

    FORUM Viewer serves as an additional module to the client application that allows health care professionals to display and review the data stored in FORUM Archive. FORUM Viewer enables health care professionals to perform measurements in fundus images, based on the scaling information which is provided in the DICOM header and add comments to the saved data. FORUM Viewer provides the option for data transfer to and from other FORUM installations and the ability to import non-DICOM data. FORUM Viewer also includes a modality worklist (scheduling).
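
    The scaling-based line measurement described above can be illustrated with a short sketch that converts a pixel-space distance to millimeters using the Pixel Spacing attribute from the DICOM header. It assumes pydicom; the file name, endpoint coordinates and the choice of Pixel Spacing as the scaling attribute are illustrative, since some ophthalmic objects carry scaling information in other attributes.

```python
# Illustrative conversion of a line measurement from pixels to millimeters
# using the Pixel Spacing attribute in the DICOM header (pydicom assumed).
# The file name and endpoint coordinates are hypothetical.
import math
from pydicom import dcmread

ds = dcmread("fundus_photo.dcm")
row_spacing_mm, col_spacing_mm = (float(v) for v in ds.PixelSpacing)  # mm per pixel

# Two endpoints of a user-drawn line, in (row, column) pixel coordinates.
(r1, c1), (r2, c2) = (120, 240), (180, 410)

length_mm = math.hypot((r2 - r1) * row_spacing_mm,
                       (c2 - c1) * col_spacing_mm)
print(f"Measured length: {length_mm:.2f} mm")
```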

    AI/ML Overview

    The provided text is a 510(k) summary for the FORUM™ software system. It describes the device's intended use, comparison to predicate devices, and indicates that performance testing was conducted. However, it does not contain the specific details required to answer all parts of your request about acceptance criteria and a study proving those criteria.

    The document makes general statements about performance but lacks the quantitative data, study design, and specifics typically found in a detailed performance evaluation section.

    Here's a breakdown of what can and cannot be answered based only on the provided text:

    1. A table of acceptance criteria and the reported device performance

    Cannot be fully extracted.
    The document states: "Performance testing was conducted on FORUM and was found to perform as intended. FORUM is DICOM compliant according to its DICOM conformance statement."

    This is a general statement and does not provide specific acceptance criteria (e.g., minimum accuracy, sensitivity, specificity, or quantitative measurement error thresholds) or reported device performance metrics against those criteria. The only specific performance claim is DICOM compliance.

    2. Sample size used for the test set and the data provenance

    Cannot be extracted.
    The document does not mention a "test set" in the context of device performance, nor does it provide any sample sizes or data provenance (country of origin, retrospective/prospective).

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    Cannot be extracted.
    Since no specific test set or ground truth establishment process is described beyond general "performance testing," this information is not available.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set

    Cannot be extracted.
    No adjudication method is described.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs without AI assistance

    No.
    The document describes a software system for image management, display, and measurement. It is not an AI-powered diagnostic device in the sense of directly assisting human readers in interpreting images or providing diagnostic output that would necessitate an MRMC study to compare human performance with and without AI assistance. The "FORUM ASSIST match" accessory software is mentioned for identifying potential duplicates, but no performance study for this specific feature or human improvement is detailed.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    Partially applicable, but without metrics.
    The device is described as a software system performing functions like storage, processing, display, and measurement (specifically, line measurement in fundus images). The text states, "The line measurement function in FORUM has been verified and validated and the results indicate that the difference does not raise new questions of safety and effectiveness." This implies standalone verification of the measurement function, but no specific performance metrics or acceptance criteria for this measurement are provided.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    Cannot be extracted definitively.
    For the mention of the "line measurement function," it's implied that some form of ground truth was used for verification and validation, but the type of ground truth (e.g., comparison to manual measurements by experts, a physical phantom, etc.) is not specified.

    8. The sample size for the training set

    N/A (Not Applicable in the traditional AI sense).
    This is not an AI/ML device in the context of supervised learning requiring a "training set." It's a software system for image management and measurement. Therefore, the concept of a training set as typically understood for AI algorithms does not apply.

    9. How the ground truth for the training set was established

    N/A. As above, no training set is described.


    Summary of what is present regarding performance:

    • General Statement: "Performance testing was conducted on FORUM and was found to perform as intended."
    • DICOM Compliance: "FORUM is DICOM compliant according to its DICOM conformance statement."
    • Line Measurement Function: "The line measurement function in FORUM has been verified and validated and the results indicate that the difference does not raise new questions of safety and effectiveness." This points to a specific function being tested, but without details on how or to what standard.
    • HL7 Standard: The HL7 interface for EMR integration "has been verified with FORUM and the results indicate that the difference does not raise new questions of safety and effectiveness." This confirms testing for an interface standard.

    Conclusion:

    The provided 510(k) summary focuses on demonstrating substantial equivalence to predicate devices based on functional characteristics and high-level claims of performance testing and compliance with standards (DICOM, HL7). It does not provide detailed clinical study results, specific acceptance criteria, or quantitative performance metrics as would be found for a device requiring more rigorous clinical or algorithmic performance evaluation (e.g., an AI diagnostic aid). This is typical for a device like an image management system where the primary concern is proper data handling, display, and adherence to communication standards rather than diagnostic accuracy.


    K Number
    K090439
    Device Name
    FORUM
    Date Cleared
    2009-03-25

    (33 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Intended Use

    FORUM is a software system application intended for use in storing, managing, and displaying patient data, diagnostic data, videos and images from computerized diagnostic instruments or video documentation systems through networks.

    Device Description

    FORUM is a personal computer software system designed for storage, retrieval, and review of DICOM images, videos and reports originating from ophthalmic instruments and surgical microscopy. FORUM consists of two components: the FORUM Archive and the FORUM Viewer. The FORUM Archive, which contains both a server and client application, provides an archive for storage and administration of medical documents and patient data. The FORUM Viewer is an additional module to the client application which allows images, reports and videos stored in the archive to be reviewed. The FORUM Viewer also includes a modality worklist scheduling function.

    When utilized together, the FORUM Archive and Viewer provide a complete workflow cycle from administering patient information via scheduling patients for examinations on connected instruments, through archiving the results of the examinations to retrieval and review of examination data.
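
    The modality worklist (scheduling) function can be sketched as a worklist C-FIND query from an instrument's point of view, once more assuming the open-source pydicom/pynetdicom libraries with hypothetical connection details and query values.

```python
# Minimal, illustrative modality worklist (MWL) C-FIND query, assuming
# pydicom/pynetdicom. Host, port, AE titles and the scheduled date are hypothetical.
from pydicom.dataset import Dataset
from pynetdicom import AE
from pynetdicom.sop_class import ModalityWorklistInformationFind

step = Dataset()
step.Modality = "OPT"                          # e.g., optical coherence tomography
step.ScheduledProcedureStepStartDate = "20240101"

query = Dataset()
query.PatientName = ""                         # empty return keys
query.PatientID = ""
query.ScheduledProcedureStepSequence = [step]

ae = AE(ae_title="INSTRUMENT")
ae.add_requested_context(ModalityWorklistInformationFind)

assoc = ae.associate("forum-server.example.local", 11112, ae_title="WORKLIST")
if assoc.is_established:
    for status, identifier in assoc.send_c_find(query, ModalityWorklistInformationFind):
        if status and status.Status in (0xFF00, 0xFF01) and identifier:
            print(identifier.PatientName, identifier.PatientID)
    assoc.release()
```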

    AI/ML Overview

    Here's an analysis of the provided text regarding the FORUM™ device's acceptance criteria and study information:

    The supplied documentation (510(k) summary) for the FORUM™ device does not contain acceptance criteria or detailed study results proving the device meets specific performance criteria. Instead, it focuses on demonstrating substantial equivalence to a predicate device (Nidek Advanced Vision Information System (NAVIS)).

    The relevant sections state:

    • "Evaluation performed on FORUM supports the indications for use statement, demonstrates that the device is substantially equivalent to the predicate device and does not raise new questions regarding safety and effectiveness."
    • "Performance testing was conducted on FORUM and was found to perform as intended. FORUM is DICOM compliant according to its DICOM conformance statement."
    • "As described in this 510(k) Summary, all testing deemed necessary was conducted on FORUM to ensure that the device is safe and effective for its intended use when used in accordance with its Instructions for Use."

    This indicates that internal performance testing was conducted to ensure the device functions as intended and is DICOM compliant, but specific quantitative acceptance criteria and the results of those tests are not disclosed in this summary. The primary "proof" of meeting requirements is established through the argument of substantial equivalence.

    Therefore, many of the requested items cannot be extracted from the provided text.

    Here is a summary of what can be inferred or explicitly stated based on the provided text, and what is missing:


    1. Acceptance Criteria and Device Performance

    The core assertion in the document is that the FORUM™ device performs "as intended" and is "substantially equivalent" to the predicate device. Specific quantitative acceptance criteria (e.g., minimum accuracy, sensitivity, specificity, or system response times) and their corresponding reported device performance values are not provided.

    Acceptance Criteria (Not Explicitly Stated/Quantified in document) | Reported Device Performance (as stated in document)
    Functional equivalence to predicate device (NAVIS) | FORUM™ is "functionally equivalent" to NAVIS.
    Perform as intended for storing, managing, and displaying data | "Performance testing was conducted on FORUM and was found to perform as intended."
    DICOM compliance | "FORUM is DICOM compliant according to its DICOM conformance statement."
    Safety and Effectiveness (no new questions) | "does not raise new questions regarding safety and effectiveness."

    2. Sample size used for the test set and the data provenance

    • Sample size for test set: Not specified.
    • Data provenance: Not specified (e.g., country of origin, retrospective or prospective).

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • Number of experts: Not applicable, as detailed test set and ground truth establishment are not described. The document pertains to a Picture Archiving and Communications System (PACS) rather than a diagnostic AI algorithm that would typically require expert-established ground truth for performance metrics like accuracy.
    • Qualifications of experts: Not applicable.

    4. Adjudication method for the test set

    • Adjudication method: Not applicable, as detailed test set and ground truth establishment are not described.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs without AI assistance

    • MRMC study: No. This device is a PACS system, not an AI-powered diagnostic tool. The document does not describe any MRMC study or AI assistance.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    • Standalone study: Not applicable in the context of an AI algorithm. The device itself is a standalone software system for managing and displaying data. The "performance testing" mentioned refers to its functionality as a PACS, not an AI algorithm.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    • Type of ground truth: Not applicable. For a PACS system, "ground truth" would typically relate to successful storage, retrieval, display of data, and DICOM compliance, generally verified through functional testing and adherence to standards rather than expert clinical diagnoses or pathology.

    8. The sample size for the training set

    • Sample size for training set: Not applicable. The device is a PACS system, not a machine learning or AI model that requires a training set.

    9. How the ground truth for the training set was established

    • Ground truth for training set: Not applicable, as the device is not an AI model.

