Search Results

Found 2 results

510(k) Data Aggregation

    K Number: K252476
    Manufacturer: Brainlab AG
    Date Cleared: 2025-10-16 (70 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices
    Predicate For: N/A
    Intended Use

    The software displays medical images and data. It also includes functions for image review, image manipulation, basic measurements and 3D visualization.

    Device Description

    Viewer is software for viewing DICOM data, such as native slices generated by medical imaging devices; axial, coronal, and sagittal reconstructions; and data-specific volume-rendered views (e.g., skin, vessels, bone). Viewer supports basic manipulation such as windowing, reconstruction, and alignment, and it provides basic measurement functionality for distances and angles. Viewer is not intended for diagnosis or treatment planning.
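Windowing, mentioned above, maps raw pixel intensities into a displayable grayscale range using a window center and width. Below is a minimal sketch of the standard DICOM linear VOI transformation; the function name and parameters are illustrative and not taken from the Viewer product.

```python
def apply_window(pixels, center, width, y_min=0, y_max=255):
    """Linear VOI windowing (sketch following DICOM PS3.3 C.11.2.1.2).

    Values below the window map to y_min, values above map to y_max,
    and values inside the window are scaled linearly in between.
    """
    lo = center - 0.5 - (width - 1) / 2
    hi = center - 0.5 + (width - 1) / 2
    out = []
    for x in pixels:
        if x <= lo:
            out.append(y_min)
        elif x > hi:
            out.append(y_max)
        else:
            out.append(round(((x - (center - 0.5)) / (width - 1) + 0.5)
                             * (y_max - y_min) + y_min))
    return out

# Example: a soft-tissue CT window (center 40 HU, width 400 HU)
print(apply_window([-1000, 40, 1000], center=40, width=400))  # → [0, 128, 255]
```

Air (-1000 HU) is clipped to black, bone (1000 HU) to white, and values inside the window are spread across the available grayscale range.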

    The Subject Device (Viewer) for which we are seeking clearance consists of the following software modules.

    • Viewer 5.4.2 (General Viewing)
    • Universal Atlas Performer 6.0
    • Universal Atlas Transfer Performer 6.0

    Universal Atlas Performer

    Software for analyzing and processing medical image data with Universal Atlas to create different output results for further use by Brainlab applications.

    Universal Atlas Transfer Performer

    Software that provides medical image data auto-segmentation information to Brainlab applications.

    When installed on a server, Viewer can be used on mobile devices like tablets. No specific application or user interface is provided for mobile devices. In mixed reality, the data and the views are selected and opened via desktop PC. The views are then "cloned" into the virtual image space of connected mixed reality glasses. Multiple users in the same room can connect to the Viewer session and view/review the data (such as already saved surgical plans) on their mixed reality glasses.

    AI/ML Overview

    The provided document describes the FDA 510(k) clearance for Brainlab AG's Viewer device. However, it explicitly states, "Viewer is not intended for diagnosis nor for treatment planning." This means the device primarily focuses on image display, manipulation, and basic measurements rather than making diagnostic or clinical decisions.

    As such, the performance data presented is related to technical functionality and accuracy of measurements rather than diagnostic accuracy against a ground truth for a medical condition. Therefore, many of the requested sections regarding diagnostic performance, ground truth, experts, and comparative effectiveness studies are not applicable in the context of this specific regulatory submission.

    Here's a breakdown of the requested information based on the provided document:


    Acceptance Criteria and Reported Device Performance

    Given the nature of the device (medical image management and processing system, not for diagnosis), the acceptance criteria and performance reported are largely functional and technical.

    | Acceptance Criteria Category | Specific Criteria/Test Description | Reported Device Performance/Outcome |
    |---|---|---|
    | Software Functionality | Successful implementation of product specifications, incremental testing for different release candidates, testing of risk control measures, compatibility testing, cybersecurity tests (general V&V). | Passed: documentation indicating successful completion of these tests was provided, as recommended by FDA guidance (enhanced documentation level). |
    | Ambient Light Test | Determine Magic Leap 2 display quality for sufficient visualization in a variety of ambient lighting conditions. | Passed: the display quality was determined to be sufficient (specific results not detailed beyond "sufficient visualization"). |
    | Hospital Environment Tests | Test compatibility of the Subject Device with various hardware platforms and compatible software. | Passed: compatibility was confirmed (specific platforms/software not detailed). |
    | Display Quality Tests | Measure and compare optical transmittance, luminance non-uniformity, and Michelson contrast of the head-mounted display (Magic Leap 2) to ensure seamless integration of real and virtual content and maintenance of high visibility and image quality; tests conducted with and without segmented dimming. | Passed: the tests confirmed seamless integration, high visibility, and image quality (specific numerical results not detailed, but the outcome implies internal quality standards were met). |
    | Measurement Accuracy Test | Three inexperienced test persons place distance measurements using a Mixed Reality user interface (Magic Leap controller) with a maximal deviation of less than one millimeter in each axis compared to mouse and touch on desktop as input methods. | Passed: the specified accuracy was achieved, i.e., the maximal deviation was less than one millimeter. |
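The pass criterion for the measurement accuracy test (per-axis deviation under one millimeter between MR-placed and desktop-placed measurement points) reduces to a simple per-axis comparison. The following sketch illustrates that check; the function names and sample coordinates are hypothetical, not from the submission.

```python
def max_axis_deviation_mm(point_mr, point_ref):
    """Largest absolute per-axis difference (mm) between two 3D points."""
    return max(abs(a - b) for a, b in zip(point_mr, point_ref))

def passes_accuracy_test(placements, tolerance_mm=1.0):
    """True if every MR placement deviates < tolerance_mm on each axis."""
    return all(max_axis_deviation_mm(mr, ref) < tolerance_mm
               for mr, ref in placements)

# Hypothetical data: (MR-controller point, mouse/touch reference point), mm
placements = [
    ((10.2, 34.1, 5.0), (10.0, 34.8, 5.3)),
    ((72.5, 12.9, 40.1), (72.0, 13.0, 40.9)),
]
print(passes_accuracy_test(placements))  # → True
```

Note that the criterion is per-axis, not Euclidean: a point could pass with, say, 0.9 mm deviation on each of three axes even though its straight-line error exceeds 1 mm.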

    Study Details

    1. Sample size used for the test set and the data provenance:

      • Measurement Accuracy Test: 3 inexperienced test persons. No information on the number of measurements or specific datasets used.
      • Other Tests (Ambient Light, Hospital Environment, Display Quality): The sample sizes for these bench tests are not explicitly stated in terms of patient data or specific items tested, but represent functional validation of the system and its components.
      • Data Provenance: Not applicable for the described functional and accuracy tests. The device deals with DICOM data, but the specific source of that data for these tests is not mentioned as the tests focus on the device's capabilities rather than clinical diagnostic performance on a dataset.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • Measurement Accuracy Test: No experts were explicitly mentioned for establishing "ground truth" for the measurement accuracy test. The comparison was between different input methods (Magic Leap controller vs. mouse/touch). The "ground truth" for these measurements would likely be the known distances within the virtual environment or the established accuracy of the mouse/touch methods themselves, assumed to be accurate. The "inexperienced test persons (3)" were the subjects performing the measurements, not experts establishing ground truth.
      • Other Tests: Not applicable, as these were functional and technical performance tests not involving clinical ground truth.
    3. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

      • Not applicable. The tests described are bench tests and functional validations, not clinical studies requiring adjudication of findings.
    4. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:

      • No MRMC comparative effectiveness study was done or is applicable. This device is cleared as a "Medical Image Management And Processing System" and explicitly states it is "not intended for diagnosis nor for treatment planning." Therefore, there is no AI assistance for human readers in a diagnostic context described, and no effect size would be reported.
    5. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:

      • Not applicable in the conventional sense of a diagnostic algorithm. The device's core function is to display and manipulate images with some basic automated measurements. The measurement accuracy test (3D measurement placement) involves human interaction with the device.
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

      • For the Measurement Accuracy Test, the ground truth appears to be based on the established accuracy of desktop input methods (mouse and touch) for placing measurements, and the expectation of how accurately the mixed reality controller should perform relative to those. It is not an expert consensus on a clinical condition, pathology, or outcomes data.
      • For other tests (Ambient Light, Hospital Environment, Display Quality), the "ground truth" refers to engineering specifications and visual quality standards.
    7. The sample size for the training set:

      • Not applicable. This document describes a software update and clearance for an image viewing and manipulation device, not an AI/ML algorithm that requires a "training set" for diagnostic or predictive tasks.
    8. How the ground truth for the training set was established:

      • Not applicable, as there was no training set mentioned.

    K Number: K232759
    Manufacturer: Brainlab AG
    Date Cleared: 2024-05-21 (256 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices
    Predicate For:
    Intended Use

    The software displays medical images and data. It also includes functions for image review, image manipulation, basic measurements and 3D visualization.

    Device Description

    Viewer is software for viewing DICOM data, such as native slices generated by medical imaging devices; axial, coronal, and sagittal reconstructions; and data-specific volume-rendered views (e.g., skin, vessels, bone). Viewer supports basic manipulation such as windowing, reconstruction, and alignment, and it provides basic measurement functionality for distances and angles. Viewer is not intended for diagnosis or treatment planning. The Subject Device (Viewer) for which we are seeking clearance consists of the following software modules.

    • Viewer 5.4 (General Viewing)
    • Universal Atlas Performer 6.0
    • Universal Atlas Transfer Performer 6.0
      Universal Atlas Performer: Software for analyzing and processing medical image data with Universal Atlas to create different output results for further use by Brainlab applications.
      Universal Atlas Transfer Performer: Software that provides medical image data auto-segmentation information to Brainlab applications.
      When installed on a server, Viewer can be used on mobile devices like tablets. No specific application or user interface is provided for mobile devices. In mixed reality, the data and the views are selected and opened via desktop PC. The views are then rendered on the connected stereoscopic head-mounted display. Multiple users in the same room can connect to the Viewer session and view/review the data (such as already saved surgical plans) on their mixed reality glasses.
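The basic distance and angle measurements described above reduce to standard 3D vector arithmetic. A minimal sketch follows; the identifiers are illustrative and not taken from the product.

```python
import math

def distance_mm(p, q):
    """Euclidean distance between two 3D points (mm)."""
    return math.dist(p, q)

def angle_deg(vertex, a, b):
    """Angle at `vertex` formed by the rays to points a and b, in degrees."""
    u = [ai - vi for ai, vi in zip(a, vertex)]
    v = [bi - vi for bi, vi in zip(b, vertex)]
    dot = sum(ui * vi for ui, vi in zip(u, v))
    norm = (math.sqrt(sum(ui * ui for ui in u))
            * math.sqrt(sum(vi * vi for vi in v)))
    # Clamp to [-1, 1] to guard against floating-point drift before acos
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

print(distance_mm((0, 0, 0), (3, 4, 0)))           # → 5.0
print(angle_deg((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # → 90.0
```

In a real viewer these points would come from user-placed annotations in the DICOM patient coordinate system, so pixel coordinates must first be scaled by the slice spacing and pixel spacing.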
    AI/ML Overview

    The Brainlab AG Viewer (5.4) and associated products (Elements Viewer, Mixed Reality Viewer, Smart Layout, Elements Viewer Smart Layout) are a medical image management and processing system. The device displays medical images and data, and includes functions for image review, manipulation, basic measurements, and 3D visualization.

    Here's an analysis of the acceptance criteria and supporting studies based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided text details various tests performed to ensure the device's performance, particularly focusing on the mixed reality aspects and measurement accuracy. However, specific numerical "acceptance criteria" (e.g., "accuracy must be >95%") and corresponding reported performance values are not explicitly stated in a detailed quantitative manner in the summary. Instead, the document describes the types of tests conducted and generally states that they were successful or ensured certain qualities.

    | Test Category | Acceptance Criteria (Implied/General) | Reported Device Performance (General) |
    |---|---|---|
    | Software Verification & Validation | Successful implementation of product specifications, incremental testing for different release candidates, testing of risk control measures, compatibility testing, cybersecurity tests. | Documentation provided as recommended by FDA guidance. Successful implementation, testing of risk controls, compatibility, and cybersecurity acknowledged for an enhanced documentation level. |
    | Ambient Light | Sufficient visualization in a variety of ambient lighting conditions with Magic Leap 2. | Test conducted to determine Magic Leap 2 display quality for sufficient visualization in a variety of ambient lighting conditions. (Implied successful.) |
    | Hospital Environment | Compatibility with various hardware platforms and compatible software. | Test conducted to confirm compatibility of the Subject Device with various hardware platforms and compatible software. (Implied successful.) |
    | Display Quality | Seamless integration of real and virtual content; maintenance of high visibility and image quality (optical transmittance, luminance non-uniformity, Michelson contrast). | Tests carried out to measure and compare optical transmittance, luminance non-uniformity, and Michelson contrast of the head-mounted display, both with and without segmented dimming, to ensure seamless integration of real and virtual content and maintenance of high visibility and image quality. (Implied successful.) |
    | Measurement Accuracy | Accurate 3D measurement placement using the Mixed Reality user interface (Magic Leap control), comparable to mouse and touch input. | Tests performed to evaluate the accuracy of 3D measurement placement using a Mixed Reality user interface (Magic Leap control) in relation to mouse and touch as input methods. (Implied successful; supports equivalence to the predicate's measurement capabilities, with added 3D functionality in MR.) |
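The display-quality metrics named in the table have standard definitions in display metrology. The sketch below uses one common formulation; the exact formulas and thresholds used in the submission are not stated, so treat these as illustrative.

```python
def michelson_contrast(l_max, l_min):
    """Michelson contrast: (Lmax - Lmin) / (Lmax + Lmin), in [0, 1]."""
    return (l_max - l_min) / (l_max + l_min)

def luminance_nonuniformity(samples):
    """One common definition: (Lmax - Lmin) / Lmax over sampled screen points."""
    return (max(samples) - min(samples)) / max(samples)

# Hypothetical luminance samples (cd/m^2) measured across the display
samples = [180.0, 195.0, 200.0, 188.0]
print(round(michelson_contrast(max(samples), min(samples)), 3))  # → 0.053
print(round(luminance_nonuniformity(samples), 3))                # → 0.1
```

For a see-through head-mounted display such as the Magic Leap 2, these measurements would typically be repeated under different ambient-light levels and dimming settings, since optical transmittance couples the real background into the perceived contrast of the virtual content.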

    2. Sample Size for the Test Set and Data Provenance

    The document does not explicitly state the sample sizes used for any of the described tests (Ambient Light, Hospital Environment, Display Quality, Measurement Accuracy).

    Regarding data provenance, the document does not specify the country of origin for any data, nor whether the data used in testing was retrospective or prospective.

    3. Number of Experts and Qualifications for Ground Truth

    The document does not provide information on the number of experts used to establish ground truth for any of the described tests, nor their qualifications.

    4. Adjudication Method for the Test Set

    The document does not describe any adjudication method (e.g., 2+1, 3+1, none) used for the test set.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    The document does not mention a multi-reader multi-case (MRMC) comparative effectiveness study. The study is focused on the device's technical performance and accuracy, not on human reader improvement with or without AI assistance.

    6. Standalone (Algorithm Only) Performance Study

    The document describes the "Viewer" as software for displaying and manipulating medical images. Although it is a software-only device, the tests described (e.g., display quality, measurement accuracy) assess the software's performance in its intended functions rather than contrasting algorithm-only performance with human performance. They are best read as instrumental performance validation: the Mixed Reality functionality requires a human operator, but its underlying software/hardware performance (e.g., accuracy of 3D measurement placement) was evaluated directly.

    7. Type of Ground Truth Used

    The document does not explicitly state the type of ground truth used for any of the tests. For "Measurement accuracy test," it can be inferred that a known, precisely measured physical or digital standard would have been used as ground truth for comparison. For other tests like display quality or compatibility, the ground truth would be conformance to established technical specifications or standards for optical properties and functional compatibility, respectively.

    8. Sample Size for the Training Set

    The document does not provide any information regarding a training set sample size. This is consistent with the device being primarily a viewing, manipulation, and measurement tool rather than an AI/ML diagnostic algorithm that requires a "training set" in the conventional sense. The "Universal Atlas Performer" and "Universal Atlas Transfer Performer" modules do involve "analyzing and processing medical image data with Universal Atlas to create different output results" and providing medical image data auto-segmentation information, which might imply some form of algorithmic learning or rule-based processing. However, no details on training sets for these specific components are included.

    9. How the Ground Truth for the Training Set Was Established

    As no training set is mentioned or detailed, there is no information on how its ground truth might have been established.

