
510(k) Data Aggregation

    K Number
    K252476
    Manufacturer
    Brainlab AG
    Date Cleared
    2025-10-16 (70 days)
    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    The software displays medical images and data. It also includes functions for image review, image manipulation, basic measurements and 3D visualization.

    Device Description

    Viewer is software for viewing DICOM data, such as native slices generated by medical imaging devices; axial, coronal, and sagittal reconstructions; and data-specific volume-rendered views (e.g., skin, vessels, bone). Viewer supports basic manipulation such as windowing, reconstruction, or alignment, and it provides basic measurement functionality for distances and angles. Viewer is not intended for diagnosis nor for treatment planning.
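
    The submission includes no source code, but two of the functions named above have well-known textbook forms. Purely as an illustration, the sketch below applies a linear window (center/width) transform to a slice and measures an in-plane distance using the DICOM pixel spacing; the pydicom usage, the file name slice.dcm, and the window values are assumptions for the example, not Brainlab's implementation.

        import numpy as np
        import pydicom  # assumed dependency: pip install pydicom

        def apply_window(pixels, center, width):
            """Map raw intensities to 0-255 display values with a linear window."""
            lo, hi = center - width / 2.0, center + width / 2.0
            clipped = np.clip(pixels.astype(np.float64), lo, hi)
            return ((clipped - lo) / (hi - lo) * 255.0).astype(np.uint8)

        def distance_mm(p1, p2, row_spacing_mm, col_spacing_mm):
            """Euclidean distance between two (row, col) points on one slice, in mm."""
            dr = (p1[0] - p2[0]) * row_spacing_mm
            dc = (p1[1] - p2[1]) * col_spacing_mm
            return float(np.hypot(dr, dc))

        ds = pydicom.dcmread("slice.dcm")  # hypothetical input file
        # Rescale slope/intercept handling is omitted here for brevity.
        display = apply_window(ds.pixel_array, center=40, width=400)  # soft-tissue window
        row_mm, col_mm = map(float, ds.PixelSpacing)
        print(distance_mm((100, 120), (180, 260), row_mm, col_mm))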

    The Subject Device (Viewer) for which we are seeking clearance consists of the following software modules.

    • Viewer 5.4.2 (General Viewing)
    • Universal Atlas Performer 6.0
    • Universal Atlas Transfer Performer 6.0

    Universal Atlas Performer

    Software for analyzing and processing medical image data with Universal Atlas to create different output results for further use by Brainlab applications.

    Universal Atlas Transfer Performer

    Software that provides medical image data auto-segmentation information to Brainlab applications.

    When installed on a server, Viewer can be used on mobile devices like tablets. No specific application or user interface is provided for mobile devices. In mixed reality, the data and the views are selected and opened via desktop PC. The views are then "cloned" into the virtual image space of connected mixed reality glasses. Multiple users in the same room can connect to the Viewer session and view/review the data (such as already saved surgical plans) on their mixed reality glasses.
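
    The document does not describe the transport behind this session sharing. Purely as an illustrative sketch of the pattern it describes (one desktop publishing view state, every connected headset receiving a clone of it), the snippet below uses the third-party websockets package; the handler, port, and message format are invented for the example.

        import asyncio
        import websockets  # third-party package: pip install websockets

        CLIENTS = set()  # every connected participant (desktop or headset)

        async def handler(ws):
            CLIENTS.add(ws)
            try:
                async for message in ws:
                    # The desktop publishes serialized view state; relay ("clone")
                    # it to every other participant in the session.
                    websockets.broadcast(CLIENTS - {ws}, message)
            finally:
                CLIENTS.discard(ws)

        async def main():
            async with websockets.serve(handler, "0.0.0.0", 8765):
                await asyncio.Future()  # run until cancelled

        if __name__ == "__main__":
            asyncio.run(main())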

    AI/ML Overview

    The provided document describes the FDA 510(k) clearance for Brainlab AG's Viewer device. However, it explicitly states, "Viewer is not intended for diagnosis nor for treatment planning." This means the device primarily focuses on image display, manipulation, and basic measurements rather than making diagnostic or clinical decisions.

    As such, the performance data presented is related to technical functionality and accuracy of measurements rather than diagnostic accuracy against a ground truth for a medical condition. Therefore, many of the requested sections regarding diagnostic performance, ground truth, experts, and comparative effectiveness studies are not applicable in the context of this specific regulatory submission.

    Here's a breakdown of the requested information based on the provided document:


    Acceptance Criteria and Reported Device Performance

    Given the nature of the device (medical image management and processing system, not for diagnosis), the acceptance criteria and performance reported are largely functional and technical.

    Software Functionality
      Criteria/Test: Successful implementation of product specifications, incremental testing for different release candidates, testing of risk control measures, compatibility testing, and cybersecurity tests (general verification and validation).
      Outcome: Passed. Documentation indicating successful completion of these tests was provided, as recommended by FDA guidance (Enhanced Documentation level).

    Ambient Light Test
      Criteria/Test: Determine Magic Leap 2 display quality for sufficient visualization in a variety of ambient lighting conditions.
      Outcome: Passed. The display quality was determined to be sufficient (specific results not detailed beyond "sufficient visualization").

    Hospital Environment Tests
      Criteria/Test: Test compatibility of the Subject Device with various hardware platforms and compatible software.
      Outcome: Passed. Compatibility was confirmed (specific platforms/software not detailed).

    Display Quality Tests
      Criteria/Test: Measure and compare optical transmittance, luminance non-uniformity, and Michelson contrast of the head-mounted display (Magic Leap 2) to ensure seamless integration of real and virtual content and maintenance of high visibility and image quality; tests were conducted with and without segmented dimming.
      Outcome: Passed. The tests confirmed seamless integration, high visibility, and image quality (specific numerical results not detailed, but the outcome implies internal quality standards were met).

    Measurement Accuracy Test
      Criteria/Test: Three inexperienced test persons must be able to place distance measurements using a mixed reality user interface (Magic Leap controller) with a maximal deviation of less than one millimeter in each axis compared to mouse and touch on desktop as input methods.
      Outcome: Passed. The test concluded that the specified accuracy was achieved, i.e., the maximal deviation was less than one millimeter per axis.
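
    The Michelson contrast and luminance non-uniformity named in the Display Quality row are standard photometric quantities. The submission does not give the measurement procedure or numbers; the sketch below shows how such metrics are conventionally computed from a grid of luminance readings, with all sample values invented.

        import numpy as np

        def michelson_contrast(l_max, l_min):
            """Michelson contrast: (Lmax - Lmin) / (Lmax + Lmin)."""
            return (l_max - l_min) / (l_max + l_min)

        def luminance_non_uniformity(samples):
            """One common definition: (Lmax - Lmin) / Lmax across a grid of
            luminance readings, expressed as a percentage."""
            return float((samples.max() - samples.min()) / samples.max() * 100.0)

        # Hypothetical luminance readings (cd/m^2) from a 3x3 grid on the display.
        grid = np.array([[118.0, 121.0, 117.0],
                         [120.0, 125.0, 119.0],
                         [116.0, 122.0, 118.0]])
        print(f"Michelson contrast (grid extremes): {michelson_contrast(grid.max(), grid.min()):.3f}")
        print(f"Luminance non-uniformity: {luminance_non_uniformity(grid):.1f}%")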

    Study Details

    1. Sample size used for the test set and the data provenance:

      • Measurement Accuracy Test: 3 inexperienced test persons. No information on the number of measurements or specific datasets used.
      • Other Tests (Ambient Light, Hospital Environment, Display Quality): The sample sizes for these bench tests are not explicitly stated in terms of patient data or specific items tested, but represent functional validation of the system and its components.
      • Data Provenance: Not applicable for the described functional and accuracy tests. The device deals with DICOM data, but the specific source of that data for these tests is not mentioned as the tests focus on the device's capabilities rather than clinical diagnostic performance on a dataset.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • Measurement Accuracy Test: No experts were explicitly mentioned as establishing "ground truth" for the measurement accuracy test; the comparison was between input methods (Magic Leap controller vs. mouse/touch). The implicit ground truth would be the known distances within the virtual environment, or the established accuracy of the mouse and touch methods themselves, which are assumed to be accurate. The three "inexperienced test persons" were the subjects performing the measurements, not experts establishing ground truth.
      • Other Tests: Not applicable, as these were functional and technical performance tests not involving clinical ground truth.
    3. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

      • Not applicable. The tests described are bench tests and functional validations, not clinical studies requiring adjudication of findings.
    4. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI assistance versus without:

      • No MRMC comparative effectiveness study was done or is applicable. This device is cleared as a "Medical Image Management And Processing System" and explicitly states it is "not intended for diagnosis nor for treatment planning." Therefore, there is no AI assistance for human readers in a diagnostic context described, and no effect size would be reported.
    5. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:

      • Not applicable in the conventional sense of a diagnostic algorithm. The device's core function is to display and manipulate images with some basic automated measurements. The measurement accuracy test (3D measurement placement) involves human interaction with the device.
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc):

      • For the Measurement Accuracy Test, the ground truth appears to be the established accuracy of the desktop input methods (mouse and touch) for placing measurements, against which the mixed reality controller was expected to perform comparably; a sketch of this per-axis comparison appears after this list. It is not expert consensus on a clinical condition, pathology, or outcomes data.
      • For other tests (Ambient Light, Hospital Environment, Display Quality), the "ground truth" refers to engineering specifications and visual quality standards.
    7. The sample size for the training set:

      • Not applicable. This document describes a software update and clearance for an image viewing and manipulation device, not an AI/ML algorithm that requires a "training set" for diagnostic or predictive tasks.
    8. How the ground truth for the training set was established:

      • Not applicable, as there was no training set mentioned.
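
    The submission reports only the pass/fail outcome of the measurement accuracy comparison described above. As a minimal sketch of how such a per-axis check could be computed, assuming hypothetical 3D coordinates and the 1 mm criterion from the acceptance table:

        import numpy as np

        # Hypothetical 3D endpoints (mm) of the same measurement placed with two
        # input methods: mixed reality controller vs. desktop mouse.
        controller_points = np.array([[10.2, 34.9, 72.1],
                                      [55.6, 12.3, 40.8]])
        mouse_points = np.array([[10.0, 35.0, 72.0],
                                 [55.0, 12.0, 41.0]])

        # Maximal absolute deviation along each axis (x, y, z), in mm.
        per_axis_max = np.abs(controller_points - mouse_points).max(axis=0)

        # Acceptance criterion from the submission: less than 1 mm in each axis.
        passed = bool((per_axis_max < 1.0).all())
        print(f"max deviation per axis (mm): {per_axis_max}, passed: {passed}")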