Search Results

Found 2 results

510(k) Data Aggregation

    K Number: K232759
    Manufacturer: Brainlab AG
    Date Cleared: 2024-05-21 (256 days)
    Regulation Number: 892.2050
    Matched on Device Name: Viewer (5.4); Elements Viewer; Mixed Reality Viewer; Smart Layout; Elements Viewer Smart Layout

    Intended Use

    The software displays medical images and data. It also includes functions for image review, image manipulation, basic measurements and 3D visualization.

    Device Description

    Viewer is software for viewing DICOM data, such as native slices generated with medical imaging devices, axial, coronal, and sagittal reconstructions, and data-specific volume-rendered views (e.g., skin, vessels, bone). Viewer supports basic manipulation such as windowing, reconstruction, or alignment, and it provides basic measurement functionality for distances and angles. Viewer is not intended for diagnosis or treatment planning. The Subject Device (Viewer) for which we are seeking clearance consists of the following software modules:

    • Viewer 5.4 (General Viewing)
    • Universal Atlas Performer 6.0: software for analyzing and processing medical image data with Universal Atlas to create different output results for further use by Brainlab applications
    • Universal Atlas Transfer Performer 6.0: software that provides medical image data autosegmentation information to Brainlab applications

    When installed on a server, Viewer can be used on mobile devices such as tablets; no dedicated mobile application or user interface is provided. In mixed reality, the data and views are selected and opened via a desktop PC and then rendered on the connected stereoscopic head-mounted display. Multiple users in the same room can connect to the Viewer session and view/review the data (such as already saved surgical plans) on their mixed reality glasses.
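    To make the description above concrete, here is a minimal sketch of two display operations it names, intensity windowing and axial/coronal/sagittal reslicing, using NumPy on a synthetic volume. This illustrates the general technique only; it is not Brainlab's implementation, and the window values, function names, and array shapes are assumptions.

```python
# Minimal sketch of DICOM-viewer display operations: linear windowing and
# orthogonal reslicing. Synthetic data; illustrative only.
import numpy as np

def apply_window(volume: np.ndarray, center: float, width: float) -> np.ndarray:
    """Map raw intensities to 8-bit display values using a linear window."""
    lo, hi = center - width / 2.0, center + width / 2.0
    clipped = np.clip(volume, lo, hi)
    return ((clipped - lo) / (hi - lo) * 255.0).astype(np.uint8)

def orthogonal_slices(volume: np.ndarray, i: int, j: int, k: int):
    """Return axial, coronal, and sagittal planes through voxel (i, j, k)."""
    return volume[i, :, :], volume[:, j, :], volume[:, :, k]

# Example: a soft-tissue-style window (center 40, width 400, assumed values)
# applied to a synthetic CT-like volume.
vol = np.random.randint(-1000, 1000, size=(64, 64, 64)).astype(np.float32)
display = apply_window(vol, center=40, width=400)
axial, coronal, sagittal = orthogonal_slices(display, 32, 32, 32)
```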
    AI/ML Overview

    The Brainlab AG Viewer (5.4) and associated products (Elements Viewer, Mixed Reality Viewer, Smart Layout, Elements Viewer Smart Layout) are a medical image management and processing system. The device displays medical images and data, and includes functions for image review, manipulation, basic measurements, and 3D visualization.

    Here's an analysis of the acceptance criteria and supporting studies based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided text details various tests performed to ensure the device's performance, particularly focusing on the mixed reality aspects and measurement accuracy. However, specific numerical "acceptance criteria" (e.g., "accuracy must be >95%") and corresponding reported performance values are not explicitly stated in a detailed quantitative manner in the summary. Instead, the document describes the types of tests conducted and generally states that they were successful or ensured certain qualities.

    Software Verification & Validation
        Acceptance criteria (implied/general): Successful implementation of product specifications, incremental testing for different release candidates, testing of risk control measures, compatibility testing, and cybersecurity tests.
        Reported performance: Documentation provided as recommended by FDA guidance; successful implementation, testing of risk controls, compatibility, and cybersecurity acknowledged for an enhanced documentation level.

    Ambient Light
        Acceptance criteria (implied/general): Sufficient visualization in a variety of ambient lighting conditions with Magic Leap 2.
        Reported performance: Test conducted to determine Magic Leap 2 display quality for sufficient visualization in a variety of ambient lighting conditions (implied successful).

    Hospital Environment
        Acceptance criteria (implied/general): Compatibility with various hardware platforms and compatible software.
        Reported performance: Test conducted to verify compatibility of the Subject Device with various hardware platforms and compatible software (implied successful).

    Display Quality
        Acceptance criteria (implied/general): Seamless integration of real and virtual content; maintenance of high visibility and image quality (optical transmittance, luminance non-uniformity, Michelson contrast).
        Reported performance: Tests measured and compared optical transmittance, luminance non-uniformity, and Michelson contrast of the head-mounted display, both with and without segmented dimming (implied successful).

    Measurement Accuracy
        Acceptance criteria (implied/general): Accurate 3D measurement placement using the Mixed Reality user interface (Magic Leap control), comparable to mouse and touch input.
        Reported performance: Tests evaluated the accuracy of 3D measurement placement using the Mixed Reality user interface relative to mouse and touch input (implied successful; supports equivalence to the predicate's measurement capabilities, with added 3D functionality in mixed reality).
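    The Display Quality row names two standard optical metrics. The sketch below computes textbook definitions of Michelson contrast and luminance non-uniformity; the submission does not disclose which variants, instruments, or measurement grids Brainlab used, so the nine-point grid and luminance values here are invented for illustration.

```python
# Hedged sketch of the optical display metrics named in the table.
# Standard textbook formulas; measurement values are illustrative.
import numpy as np

def michelson_contrast(l_max: float, l_min: float) -> float:
    """Michelson contrast: (Lmax - Lmin) / (Lmax + Lmin)."""
    return (l_max - l_min) / (l_max + l_min)

def luminance_non_uniformity(samples: np.ndarray) -> float:
    """Non-uniformity over a grid of luminance samples (cd/m^2), here
    expressed as the maximum deviation from the mean, as a fraction."""
    mean = samples.mean()
    return float(np.abs(samples - mean).max() / mean)

# Example: a nine-point luminance grid measured on an HMD test pattern
# (assumed values), plus a white-vs-black contrast reading.
grid = np.array([[210.0, 205.0, 208.0],
                 [212.0, 215.0, 209.0],
                 [207.0, 211.0, 206.0]])
print(michelson_contrast(l_max=215.0, l_min=0.4))
print(luminance_non_uniformity(grid))
```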

    2. Sample Size for the Test Set and Data Provenance

    The document does not explicitly state the sample sizes used for any of the described tests (Ambient Light, Hospital Environment, Display Quality, Measurement Accuracy).

    Regarding data provenance, the document does not specify the country of origin for any data, nor whether the data used in testing was retrospective or prospective.

    3. Number of Experts and Qualifications for Ground Truth

    The document does not provide information on the number of experts used to establish ground truth for any of the described tests, nor their qualifications.

    4. Adjudication Method for the Test Set

    The document does not describe any adjudication method (e.g., 2+1, 3+1, none) used for the test set.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    The document does not mention a multi-reader multi-case (MRMC) comparative effectiveness study. The study is focused on the device's technical performance and accuracy, not on human reader improvement with or without AI assistance.

    6. Standalone (Algorithm Only) Performance Study

    The document describes the "Viewer" as software for displaying and manipulating medical images. While it's a software-only device, the tests described (e.g., display quality, measurement accuracy) inherently assess the algorithm's performance in its intended functions without direct human-in-the-loop impact on the results being measured during those specific tests. However, it's not explicitly framed as an "algorithm-only" performance study in contrast to human performance, but rather as instrumental performance validation. The Mixed Reality functionality, while requiring a human operator, still has its underlying software/hardware performance (e.g., accuracy of 3D measurement placement) evaluated.

    7. Type of Ground Truth Used

    The document does not explicitly state the type of ground truth used for any of the tests. For "Measurement accuracy test," it can be inferred that a known, precisely measured physical or digital standard would have been used as ground truth for comparison. For other tests like display quality or compatibility, the ground truth would be conformance to established technical specifications or standards for optical properties and functional compatibility, respectively.
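    Assuming the inferred ground truth above (a known, precisely defined digital standard), a measurement-accuracy comparison could look like the following minimal sketch, where simulated controller placements are compared against known landmark coordinates. All numbers are illustrative; the submission reports no quantitative results.

```python
# Sketch of the inferred ground-truth approach for the measurement accuracy
# test: compare measured 3D point placements against known positions.
import numpy as np

rng = np.random.default_rng(0)

truth = rng.uniform(0, 100, size=(20, 3))               # known landmarks (mm, assumed)
placed = truth + rng.normal(0, 0.5, size=truth.shape)   # simulated MR placements

errors = np.linalg.norm(placed - truth, axis=1)         # per-point 3D error
print(f"mean error {errors.mean():.2f} mm, "
      f"95th percentile {np.percentile(errors, 95):.2f} mm")
```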

    8. Sample Size for the Training Set

    The document does not provide any information regarding a training set sample size. This is consistent with the device being primarily a viewing, manipulation, and measurement tool rather than an AI/ML diagnostic algorithm that requires a "training set" in the conventional sense. The "Universal Atlas Performer" and "Universal Atlas Transfer Performer" modules do involve "analyzing and processing medical image data with Universal Atlas to create different output results" and "provides medical image data autosegmentation information," which might imply some form of algorithmic learning or rule-based processing. However, no details on training sets for these specific components are included.

    9. How the Ground Truth for the Training Set Was Established

    As no training set is mentioned or detailed, there is no information on how its ground truth might have been established.


    K Number: K191014
    Device Name: Elements Viewer
    Manufacturer: Brainlab AG
    Date Cleared: 2020-01-23 (281 days)
    Regulation Number: 892.2050
    Matched on Device Name: Elements Viewer

    Intended Use

    Viewer is a software device for display of medical images and other healthcare data. It includes functions for image review, image manipulation, basic measurements and 3D visualization (Multiplanar reconstructions and 3D volume rendering). It is not intended for primary image diagnosis or the review of mammographic images.

    Device Description

    Viewer is software for viewing DICOM data. The device provides basic measurement functionality for distances and angles.

    These are the operating principles:

    • On desktop PCs, interaction with the software is mainly performed with mouse and/or keyboard.
    • On touch screen PCs and mobile devices, the software is mainly used through a touch screen interface.
    • On Mixed Reality glasses, interaction is performed with a dedicated pointing device.

    The subject device provides or integrates the following frequently used functions (a minimal sketch of the distance and angle computations appears after this list):

    • Select medical images and other healthcare data to be displayed
    • Select views (e.g. axial, coronal & sagittal reconstruction views and 3D volume rendering views)
    • Change view layout (e.g. maximize/minimize views, close/open/reorder views)
    • Manipulate views (e.g. scroll, zoom, pan, change windowing)
    • Perform measurements (e.g. distance or angle measurements)
    • Place annotations at points of interest
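    As referenced above, here is a minimal sketch of how distance and angle measurements can be computed from voxel coordinates and pixel spacing. The spacing values, point coordinates, and helper names are assumptions for illustration, not the device's actual code.

```python
# Sketch of distance and angle measurements in physical units,
# given voxel coordinates and an assumed pixel spacing.
import numpy as np

SPACING = np.array([0.5, 0.5, 1.0])  # mm per voxel along x, y, z (assumed)

def distance_mm(p, q):
    """Euclidean distance between two voxel coordinates, in millimetres."""
    return float(np.linalg.norm((np.asarray(q) - np.asarray(p)) * SPACING))

def angle_deg(vertex, a, b):
    """Angle at `vertex` formed by rays to points a and b, in degrees."""
    u = (np.asarray(a) - np.asarray(vertex)) * SPACING
    v = (np.asarray(b) - np.asarray(vertex)) * SPACING
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

print(distance_mm((10, 10, 5), (40, 10, 5)))         # 15.0 mm at 0.5 mm spacing
print(angle_deg((0, 0, 0), (10, 0, 0), (0, 10, 0)))  # 90.0 degrees
```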
    AI/ML Overview

    The provided document is a 510(k) summary for the "Viewer" device from Brainlab AG. It describes the device, its intended use, and its comparison to a predicate device and a reference device to demonstrate substantial equivalence. However, it does not contain the detailed information needed to populate a table of acceptance criteria or to describe a study proving the device meets such criteria: device performance metrics (e.g., sensitivity, specificity, accuracy), sample sizes, ground truth establishment, and multi-reader multi-case studies for AI components are all absent.

    The document primarily focuses on verifying the software's functionality, user interface, DICOM compatibility, and integration, rather than clinical performance metrics of an AI algorithm. The device is a "Picture Archiving And Communications System" (PACS) that displays medical images and other healthcare data and is not intended for primary image diagnosis. This indicates that the regulatory requirements for performance metrics such as sensitivity and specificity, which are common for AI algorithms involved in diagnosis, would not apply to this specific device.

    Therefore, most of the information requested in your prompt cannot be extracted from this document because the device described is not an AI diagnostic algorithm, and the provided text focuses on software functionality verification rather than clinical performance studies.

    Here's what can be extracted and what cannot:

    1. A table of acceptance criteria and the reported device performance

    User interface
        Test method: Interactive testing of the user interface.
        Reported performance: All tests passed.

    DICOM compatibility
        Test method: Interactive testing with companywide test data, which are identical for consecutive versions of the software.
        Reported performance: All tests passed.

    Views
        Test method: Interactive testing of the user interface.
        Reported performance: All tests passed.

    Unit tests / automated tests
        Test method: Automated or semi-automated Cucumber tests or unit tests are written at the applicable level for new Viewer functionality relative to previous versions; existing tests have to pass.
        Reported performance: All tests passed.

    Integration test
        Test method: Interactive testing on various platforms and in combination with other products following test protocols, combined with explorative testing. The software is developed with daily builds, which are exploratively tested.
        Reported performance: All tests passed.

    Usability
        Test method: Usability tests (ensure the user interface can be used safely and effectively).
        Reported performance: All tests passed.

    Communication & cybersecurity
        Test method: Verification of communication and cybersecurity between Viewer and Magic Leap Mixed Reality glasses.
        Reported performance: Successfully passed.

    Missing Information/Not Applicable: The document does not provide acceptance criteria or performance metrics related to diagnostic accuracy (e.g., sensitivity, specificity, AUC) because the device is explicitly stated as not intended for primary image diagnosis.

    2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

    • Sample Size for Test Set: Not specified. The verification tests mention "companywide test data" and "various platforms and combination with other products" but do not provide specific numbers of cases or images.
    • Data Provenance: Not specified. The document mentions "companywide test data" but does not detail the country of origin or whether it was retrospective or prospective.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    • Not applicable/Not specified. Since the device is not for primary diagnosis and the tests focus on software functionality, there is no mention of experts establishing ground truth for diagnostic purposes. The "ground truth" for the software functionality tests would be the expected behavior of the software.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    • Not specified. The testing methods described are interactive testing, automated/semi-automated tests, and usability tests. There is no mention of an adjudication method typical for diagnostic performance studies.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of human reader improvement with vs. without AI assistance?

    • Not applicable. The document does not describe an AI algorithm intended to assist human readers in diagnosis. It's a DICOM viewer. Therefore, an MRMC study comparing human readers with and without AI assistance was not performed or reported.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    • Not applicable. This device is a viewer, not a standalone diagnostic algorithm.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    • The concept of "ground truth" in the context of diagnostic accuracy (e.g., pathology, expert consensus) does not apply here as the device is not for primary diagnosis. For its stated functions, the "ground truth" would be the expected, correct functioning of the software features (e.g., correct display of DICOM data, accurate measurements of distance/angle).

    8. The sample size for the training set

    • Not applicable/Not specified. The device is a viewer, not an AI model that undergoes a "training" phase with a dataset.

    9. How the ground truth for the training set was established

    • Not applicable. (See point 8).
