510(k) Data Aggregation

    K Number: K253639
    Device Name: View
    Date Cleared: 2026-01-08 (50 days)
    Regulation Number: 892.2050
    Age Range: N/A
    Reference & Predicate Devices: N/A
    Predicate For: N/A

    K Number: K252476
    Date Cleared: 2025-10-16 (70 days)
    Regulation Number: 892.2050
    Age Range: All
    Predicate For: N/A

    Intended Use

    The software displays medical images and data. It also includes functions for image review, image manipulation, basic measurements and 3D visualization.

    Device Description

    Viewer is software for viewing DICOM data, such as native slices generated with medical imaging devices; axial, coronal, and sagittal reconstructions; and data-specific volume-rendered views (e.g., skin, vessels, bone). Viewer supports basic manipulation such as windowing, reconstruction, or alignment, and it provides basic measurement functionality for distances and angles. Viewer is not intended for diagnosis nor for treatment planning.
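
    Windowing, as mentioned above, is the standard DICOM window/level mapping from stored pixel values to a display range. As an illustrative aside only (a minimal sketch, not Brainlab's implementation; the file name and window values are assumptions), it can be expressed as follows:

```python
# Minimal window/level ("windowing") sketch for a single DICOM slice.
# Illustrative only; the file name and window values are assumptions.
import numpy as np
import pydicom

ds = pydicom.dcmread("slice.dcm")  # hypothetical single-frame slice
pixels = ds.pixel_array.astype(np.float32)

# Apply rescale slope/intercept if present (e.g., to Hounsfield units for CT).
pixels = pixels * float(getattr(ds, "RescaleSlope", 1.0)) + float(getattr(ds, "RescaleIntercept", 0.0))

# Map [center - width/2, center + width/2] to an 8-bit display range.
center, width = 40.0, 400.0  # example soft-tissue window; values are illustrative
low, high = center - width / 2.0, center + width / 2.0
display = ((np.clip(pixels, low, high) - low) / (high - low) * 255.0).astype(np.uint8)
```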

    The Subject Device (Viewer) for which we are seeking clearance consists of the following software modules.

    • Viewer 5.4.2 (General Viewing)
    • Universal Atlas Performer 6.0
    • Universal Atlas Transfer Performer 6.0

    Universal Atlas Performer

    Software for analyzing and processing medical image data with Universal Atlas to create different output results for further use by Brainlab applications

    Universal Atlas Transfer Performer

    Software that provides medical image data auto-segmentation information to Brainlab applications

    When installed on a server, Viewer can be used on mobile devices like tablets. No specific application or user interface is provided for mobile devices. In mixed reality, the data and the views are selected and opened via desktop PC. The views are then "cloned" into the virtual image space of connected mixed reality glasses. Multiple users in the same room can connect to the Viewer session and view/review the data (such as already saved surgical plans) on their mixed reality glasses.

    AI/ML Overview

    The provided document describes the FDA 510(k) clearance for Brainlab AG's Viewer device. However, it explicitly states, "Viewer is not intended for diagnosis nor for treatment planning." This means the device primarily focuses on image display, manipulation, and basic measurements rather than making diagnostic or clinical decisions.

    As such, the performance data presented is related to technical functionality and accuracy of measurements rather than diagnostic accuracy against a ground truth for a medical condition. Therefore, many of the requested sections regarding diagnostic performance, ground truth, experts, and comparative effectiveness studies are not applicable in the context of this specific regulatory submission.

    Here's a breakdown of the requested information based on the provided document:


    Acceptance Criteria and Reported Device Performance

    Given the nature of the device (medical image management and processing system, not for diagnosis), the acceptance criteria and performance reported are largely functional and technical.

    Software Functionality
      Criteria/test: Successful implementation of product specifications, incremental testing for different release candidates, testing of risk control measures, compatibility testing, cybersecurity tests (general V&V).
      Reported outcome: Passed. Documentation indicating successful completion of these tests was provided, as recommended by FDA guidance (Enhanced level).

    Ambient Light Test
      Criteria/test: Determine Magic Leap 2 display quality for sufficient visualization in a variety of ambient lighting conditions.
      Reported outcome: Passed. The display quality was determined to be sufficient. (Specific results not detailed beyond "sufficient visualization".)

    Hospital Environment Tests
      Criteria/test: Test compatibility of the Subject Device with various hardware platforms and compatible software.
      Reported outcome: Passed. Compatibility was confirmed. (Specific platforms/software not detailed.)

    Display Quality Tests
      Criteria/test: Measure and compare optical transmittance, luminance non-uniformity, and Michelson contrast of the head-mounted display (Magic Leap 2) to ensure seamless integration of real and virtual content and maintenance of high visibility and image quality. Tests were conducted with and without segmented dimming.
      Reported outcome: Passed. The tests ensured seamless integration, high visibility, and image quality. (Specific numerical results not detailed, but the outcome implies internal quality standards were met.)

    Measurement Accuracy Test
      Criteria/test: Inexperienced test persons (3) able to place distance measurements using a Mixed Reality user interface (Magic Leap controller) with a maximal deviation of less than one millimeter in each axis, compared to mouse and touch on desktop as input methods.
      Reported outcome: Passed. The test concluded that the specified accuracy was achieved, i.e., the maximal deviation was less than one millimeter.
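
    The Measurement Accuracy Test row above states the criterion as a maximal deviation of less than one millimeter in each axis between controller-placed and desktop-placed measurement points. A minimal sketch of such a per-axis check is shown below; the coordinates are invented and this is not the submission's actual test code.

```python
# Per-axis deviation check for paired 3D measurement points (values in mm).
# Coordinates are invented for illustration.
import numpy as np

controller_points = np.array([[10.2, 33.9, 51.1],
                              [72.4, 18.0, 44.6]])
desktop_points = np.array([[10.6, 33.5, 50.8],
                           [72.1, 18.4, 44.9]])

per_axis_dev = np.abs(controller_points - desktop_points)  # |dx|, |dy|, |dz| per point
worst_case = per_axis_dev.max(axis=0)                      # worst deviation along each axis

meets_criterion = bool((worst_case < 1.0).all())           # criterion: < 1 mm in each axis
print(worst_case, meets_criterion)
```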

    Study Details

    1. Sample size used for the test set and the data provenance:

      • Measurement Accuracy Test: 3 inexperienced test persons. No information on the number of measurements or specific datasets used.
      • Other Tests (Ambient Light, Hospital Environment, Display Quality): The sample sizes for these bench tests are not explicitly stated in terms of patient data or specific items tested, but represent functional validation of the system and its components.
      • Data Provenance: Not applicable for the described functional and accuracy tests. The device deals with DICOM data, but the specific source of that data for these tests is not mentioned as the tests focus on the device's capabilities rather than clinical diagnostic performance on a dataset.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • Measurement Accuracy Test: No experts were explicitly mentioned for establishing "ground truth" for the measurement accuracy test. The comparison was between different input methods (Magic Leap controller vs. mouse/touch). The "ground truth" for these measurements would likely be the known distances within the virtual environment or the established accuracy of the mouse/touch methods themselves, assumed to be accurate. The "inexperienced test persons (3)" were the subjects performing the measurements, not experts establishing ground truth.
      • Other Tests: Not applicable, as these were functional and technical performance tests not involving clinical ground truth.
    3. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

      • Not applicable. The tests described are bench tests and functional validations, not clinical studies requiring adjudication of findings.
    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance:

      • No MRMC comparative effectiveness study was done or is applicable. This device is cleared as a "Medical Image Management And Processing System" and explicitly states it is "not intended for diagnosis nor for treatment planning." Therefore, there is no AI assistance for human readers in a diagnostic context described, and no effect size would be reported.
    5. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done:

      • Not applicable in the conventional sense of a diagnostic algorithm. The device's core function is to display and manipulate images with some basic automated measurements. The measurement accuracy test (3D measurement placement) involves human interaction with the device.
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

      • For the Measurement Accuracy Test, the ground truth appears to be based on the established accuracy of desktop input methods (mouse and touch) for placing measurements, and the expectation of how accurately the mixed reality controller should perform relative to those. It is not an expert consensus on a clinical condition, pathology, or outcomes data.
      • For other tests (Ambient Light, Hospital Environment, Display Quality), the "ground truth" refers to engineering specifications and visual quality standards.
    7. The sample size for the training set:

      • Not applicable. This document describes a software update and clearance for an image viewing and manipulation device, not an AI/ML algorithm that requires a "training set" for diagnostic or predictive tasks.
    8. How the ground truth for the training set was established:

      • Not applicable, as there was no training set mentioned.

    K Number: K251820
    Date Cleared: 2025-09-12 (91 days)
    Regulation Number: 892.1550
    Age Range: All
    Predicate For: N/A

    Intended Use

    This device is a general purpose diagnostic ultrasound system intended for use by qualified and trained healthcare professionals for ultrasound imaging, measurement, display and analysis of the human body and fluid, which is intended to be used in a hospital or medical clinic.

    The ultrasound system is intended for use by a qualified physician for ultrasound evaluation of Ophthalmic; Fetal; Abdominal (renal, GYN/Pelvic); Intra-operative (abdominal, thoracic(cardiac), and vascular); Intra-operative (Neuro); Laparoscopic; Pediatric; Small organ (thyroid, breast, testes, etc); Neonatal cephalic; Adult Cephalic/Transcranial; Trans-rectal; Trans-vaginal; Trans-esoph.(non-Card); Musculoskeletal (Conventional); Musculoskeletal (Superficial); Cardiac Adult; Cardiac Pediatric; Trans-esoph. (Cardiac); Intra-cardiac; Peripheral vessel.

    Modes of operation include: B (Includes B-Mode and Harmonic (Contrast) imaging (HI)), M, PWD (Includes PWD-Mode imaging and High Pulse Repetition Rate PWD-Mode (HPRF)), CWD, Color Doppler (Includes Color Doppler (CD), Directional Power Doppler (DPD), and Power Doppler (PD)), Combined Modes (Includes B+M, B+M+CM, M+CM, B+CD+M+CM, B+CD+PWD where CD could represent (CD, DPD, PD, or BD)), Color M-Mode (CM), 3D/4D, CEUS (Contrast agent for Liver), ARFI, 2D SWI, Freehand tissue elasticity, CEUS (Contrast agent for LVO).

    Device Description

    This device is a general purpose diagnostic ultrasound system intended for use by qualified and trained healthcare professionals for ultrasound imaging, measurement, display and analysis of the human body and fluid, which is intended to be used in a hospital or medical clinic.

    AI/ML Overview

    N/A


    K Number: K251250
    Date Cleared: 2025-09-05 (135 days)
    Regulation Number: 892.1550
    Age Range: N/A
    Reference & Predicate Devices: N/A
    Predicate For: N/A

    K Number: K251211

    Intended Use

    ViewFlex™ Xtra ICE Catheter
    The ViewFlex™ Xtra ICE Catheter is indicated for use in adult and adolescent pediatric patients to visualize cardiac structures, blood flow and other devices within the heart.

    ViewFlex™ Eco Reprocessed ICE Catheter
    The ViewFlex™ Eco Reprocessed Catheter is indicated for use in adult and adolescent pediatric patients to visualize cardiac structures, blood flow and other devices within the heart.

    Advisor™ HD Grid Mapping Catheter, Sensor Enabled™
    The Advisor™ HD Grid Mapping Catheter, Sensor Enabled™, is indicated for multiple electrode electrophysiological mapping of cardiac structures in the heart, i.e., recording or stimulation only. This catheter is intended to obtain electrograms in the atrial and ventricular regions of the heart.

    Advisor™ HD Grid X Mapping Catheter, Sensor Enabled™
    The Advisor™ HD Grid X Mapping Catheter, Sensor Enabled™, is indicated for multiple electrode electrophysiological mapping of cardiac structures in the heart, i.e., recording or stimulation only. This catheter is intended to obtain electrograms in the atrial and ventricular regions of the heart.

    Agilis™ NxT Steerable Introducer
    The Agilis™ NxT Steerable Introducer is indicated for the introduction of various cardiovascular catheters into the heart, including the left side of the heart, during the treatment of cardiac arrhythmias.

    Agilis™ NxT Steerable Introducer Dual-Reach™
    The Agilis™ NxT Steerable Introducer Dual-Reach™ is indicated for the introduction of various cardiovascular catheters into the heart, including the left side of the heart, during the treatment of cardiac arrhythmias.

    Device Description

    The Agilis™ NxT Steerable Introducer Dual-Reach™ is a sterile, single-use device that consists of a dilator and steerable introducer, which is designed to provide flexible catheter positioning in the cardiac anatomy. The inner diameter of the steerable introducer is 13F. The steerable introducer includes a hemostasis valve to minimize blood loss during catheter introduction and/or exchange. It has a sideport with a three-way stopcock for air or blood aspiration, fluid infusion, blood sampling, and pressure monitoring. The handle is equipped with a rotating collar to deflect the tip clockwise ≥180° and counterclockwise ≥90°. The steerable introducer features distal vent holes to facilitate aspiration and minimize cavitation and a radiopaque tip marker to improve fluoroscopic visualization.

    AI/ML Overview

    This FDA 510(k) clearance letter (K251211) and its accompanying 510(k) summary pertain to a change in workflow for several existing cardiovascular catheters, specifically allowing for a "Zero/Low Fluoroscopy Workflow."

    The key phrase here is "Special 510(k) – Zero/Low Fluoroscopy Workflow". This type of submission is for modifications to a previously cleared device that do not significantly alter its fundamental technology or intended use, but rather introduce a change in how it's used or processed.

    Crucially, this submission does NOT describe a new AI/software device that requires extensive performance testing against acceptance criteria in the manner you've outlined for AI/ML devices. Instead, it's about demonstrating that existing devices, when used with a new, less-fluoroscopy-dependent workflow, remain as safe and effective as before.

    Therefore, many of the questions you've asked regarding acceptance criteria, study details, ground truth, and expert adjudication are not applicable to the information provided in this 510(k) document. The document explicitly states:

    • "Bench-testing was not necessary to validate the Clinical Workflow modifications."
    • "Substantial Equivalence of the subject devices to the predicate devices using the zero/low fluoroscopy workflow has been supported through a summary of clinical data across multiple studies in which investigators used alternative visualization methods."

    This indicates that the "study" proving the device (or rather, the new workflow) meets acceptance criteria is a summary of existing clinical data where alternative visualization methods were already employed, rather than a prospective, controlled study of a new AI algorithm.

    Based on the provided document, here's what can be answered:

    1. A table of acceptance criteria and the reported device performance:

    • Acceptance Criteria: The implicit acceptance criterion is that the devices, when used with "zero/low fluoroscopy workflow," maintain substantial equivalence to their predicate devices in terms of safety and effectiveness. This means they must continue to perform as intended for visualizing cardiac structures, blood flow, mapping, or introducing catheters.
    • Reported Device Performance: The document states that "Substantial Equivalence... has been supported through a summary of clinical data across multiple studies in which investigators used alternative visualization methods." This implies that the performance (e.g., adequate visualization, successful mapping, successful catheter introduction) was maintained. Specific quantitative metrics of performance (e.g., accuracy, sensitivity, specificity, or inter-reader agreement for a diagnostic AI) are not provided or applicable here as this is not an AI/ML diagnostic clearance.

    2. Sample size used for the test set and the data provenance:

    • Sample Size: Not specified. The document refers to "a summary of clinical data across multiple studies." This suggests an aggregation of results from existing (likely retrospective) patient data where alternative visualization techniques (allowing for "zero/low fluoroscopy") were already utilized clinically. It's not a new, single, prospectively designed test set for an AI algorithm.
    • Data Provenance: Not specified regarding country of origin or specific patient demographics. It is implied to be clinical data collected from studies where these types of procedures were performed using alternative visualization. The data would be retrospective as it's a "summary of clinical data" that already exists.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Not applicable in the context of this 510(k). Ground truth in an AI/ML context typically refers to adjudicated labels for images or signals. Here, the "ground truth" is inferred from standard clinical practice and outcomes in the historical data summarized. There's no mention of a specific expert panel for new ground truth establishment for a diagnostic AI.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

    • Not applicable. This is not a study requiring adjudication of diagnostic outputs by multiple readers.

    5. If a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance:

    • No. This is not an AI-assisted diagnostic device. The workflow change is about using alternative non-fluoroscopic imaging modalities (e.g., intracardiac echocardiography, electro-anatomical mapping systems), not about AI improving human reader performance.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:

    • No. This is not an AI algorithm. The predicate devices are physical catheters.

    7. The type of ground truth used:

    • The "ground truth" is inferred from clinical outcomes and established clinical practice using the devices with alternative visualization methods in real-world scenarios. It's not a specific, adjudicated dataset for an AI algorithm. The performance of the devices (such as successful navigation, visualization, and mapping) under the "zero/low fluoroscopy" workflow is assumed to be equivalent to their performance under full fluoroscopy, as demonstrated by prior clinical use where such methods were employed.

    8. The sample size for the training set:

    • Not applicable. There is no AI model being trained discussed in this document.

    9. How the ground truth for the training set was established:

    • Not applicable. No training set for an AI model.

    In summary:

    This 510(k) is for a workflow modification for existing medical devices (catheters), not for an AI/ML diagnostic or assistive software. Therefore, the detailed data performance evaluation typically required for AI models against specific acceptance criteria (as requested in your template) is not presented or relevant in this clearance letter. The "proof" relies on the concept of substantial equivalence to previously cleared predicate devices, supported by a summary of existing clinical data that used alternative visualization methods, implying that the devices function safely and effectively even with reduced fluoroscopy.


    K Number: K251231
    Date Cleared: 2025-05-20 (28 days)
    Regulation Number: 870.1200
    Age Range: All
    Predicate For: N/A

    Intended Use

    The ViewFlex™ X ICE Catheter Sensor Enabled™ is indicated for use in adult and adolescent pediatric patients for intra-cardiac and intra-luminal visualization of cardiac and great vessels anatomy and physiology, as well as visualization of other devices in the heart. When used with a compatible three-dimensional mapping system, the catheter provides location information.

    Device Description

    The ViewFlex™ X ICE Catheter Sensor Enabled™ (SE) is a sterile, single use, temporary, radiopaque, intracardiac ultrasound catheter. The catheter shaft is a 9 French (F) catheter constructed with flexible tubing with a useable length of 90 cm. The shaft is compatible with a 10 French or larger introducer for insertion into the femoral or jugular veins. The catheter tip is a 64‑element linear phased array transducer. The distal portion of the shaft is deflectable utilizing two handle mechanisms which create four deflection directions including left, right, anterior and posterior. The distal tip contains an ultrasound transducer and 3-D location sensor providing 2-D imaging and 3-D location and orientation information when used with a compatible ultrasound system and the EnSite X Cardiac Mapping System.

    AI/ML Overview

    This document is a 510(k) clearance letter for the ViewFlex™ X ICE Catheter, Sensor Enabled™. The provided text does not contain any information regarding acceptance criteria or the study that proves the device meets those criteria, especially in the context of an AI/ML-enabled device as implied by the prompt's request for information about human readers, AI assistance, ground truth, and training sets.

    The device described is an intracardiac ultrasound catheter that provides 2-D imaging and 3-D location/orientation information. The mention of "Sensor Enabled™" and "3-D location sensor" suggests a technological upgrade, but there is no indication that this involves the use of artificial intelligence or machine learning for diagnostic interpretation.

    Therefore, I cannot provide the requested information based on the provided text. The prompt's questions (acceptance criteria, study details, sample sizes, expert qualifications, adjudication, MRMC studies, standalone performance, ground truth types, training set details) are all relevant to the evaluation of AI/ML-enabled medical devices, which is not what this 510(k) summary describes.

    In summary, the provided document explains the ViewFlex™ X ICE Catheter, Sensor Enabled™ as a traditional medical device (an intravascular ultrasound catheter) with an added 3-D location sensor for mapping systems. It details the substantial equivalence to a predicate device based on non-clinical testing (bench design verification, biocompatibility, mechanical integrity, etc.). It does not mention any AI or ML components, nor does it describe studies with human readers, AI assistance, or data sets for machine learning model evaluation.


    K Number: K242244
    Device Name: Viewer+
    Date Cleared: 2025-03-14 (226 days)
    Regulation Number: 864.3700
    Age Range: All
    Predicate For: N/A

    Intended Use

    For In Vitro Diagnostic Use

    Viewer+ is a software only device intended for viewing and management of digital images of scanned surgical pathology slides prepared from formalin-fixed paraffin embedded (FFPE) tissue. It is an aid to the pathologist to review, interpret and manage digital images of pathology slides for primary diagnosis. Viewer+ is not intended for use with frozen sections, cytology, or non-FFPE hematopathology specimens.

    It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the quality of the images obtained and, where necessary, use conventional light microscopy review when making a diagnostic decision. Viewer+ is intended for use with Hamamatsu NanoZoomer S360MD Slide scanner and BARCO MDPC-8127 display.

    Device Description

    Viewer+, version 1.0.1, is a web-based software device that facilitates the viewing and navigating of digitized pathology images of slides prepared from FFPE-tissue specimens acquired from Hamamatsu NanoZoomer S360MD Slide scanner and viewed on BARCO MDPC-8127 display. Viewer+ renders these digitized pathology images for review, management, and navigation for pathology primary diagnosis.

    Viewer+ is operated as follows:

      1. Image acquisition is performed using the NanoZoomer S360MD Slide scanner according to its Instructions for Use. The operator performs quality control of the digital slides per the instructions of the NanoZoomer and lab specifications to determine whether re-scans are necessary.
      2. Once image acquisition is complete and the image becomes available in the scanner's database file system, a separate medical image communications software application (not part of the device) automatically uploads the image and its corresponding metadata to persistent cloud storage. Image and data integrity checks are performed during the upload to ensure data accuracy (a minimal sketch of such a check follows this list).
      3. The subject device enables the reading pathologist to open a patient case, view the images, and perform actions such as zooming, panning, measuring distances and areas, and annotating images as needed. After reviewing all images for a case, the pathologist will render a diagnosis.
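
    Step 2 notes that image and data integrity checks are performed during upload but does not describe the mechanism. One common approach is a checksum comparison; the sketch below assumes a SHA-256 digest and a hypothetical file name, and is not the vendor's documented method.

```python
# Checksum-style upload integrity check (illustrative assumption, not the documented mechanism).
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file in 1 MiB chunks and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

local_digest = sha256_of(Path("scan_0001.ndpi"))  # hypothetical local slide file
# After upload, the storage service would report its own digest; the transfer is
# accepted only if that remote digest matches local_digest.
```
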
    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study details for the Viewer+ device, based on the provided FDA 510(k) summary:

    1. Table of Acceptance Criteria and Reported Device Performance

    Pixel-wise comparison (of images reproduced by Viewer+ and NZViewMD for the same file generated from the NanoZoomer S360MD Slide Scanner)
      Reported performance: The 95th percentile of pixel-wise differences between Viewer+ and NZViewMD was less than 3 CIEDE2000, indicating their output images are pixel-wise identical and visually adequate.

    Turnaround time (for opening, panning, and zooming an image)
      Reported performance: Found to be adequate for the intended use of the device.

    Measurement accuracy (using scanned images of biological slides)
      Reported performance: Viewer+ was found to perform accurate measurements with respect to its intended use.

    Usability testing
      Reported performance: Demonstrated that the subject device is safe and effective for the intended users, uses, and use environments.
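
    The pixel-wise criterion above (95th percentile of CIEDE2000 differences below 3) can be reproduced in outline with standard image libraries. The sketch below is illustrative only; the file names are hypothetical and the summary does not describe the actual test pipeline.

```python
# Pixel-wise CIEDE2000 comparison with a 95th-percentile check (illustrative only).
import numpy as np
from skimage import io, img_as_float
from skimage.color import rgb2lab, deltaE_ciede2000

img_a = img_as_float(io.imread("region_viewer_plus.png")[..., :3])  # Viewer+ rendering (hypothetical file)
img_b = img_as_float(io.imread("region_nzviewmd.png")[..., :3])     # NZViewMD rendering (hypothetical file)

delta_e = deltaE_ciede2000(rgb2lab(img_a), rgb2lab(img_b))  # per-pixel color difference

p95 = np.percentile(delta_e, 95)
print(p95, p95 < 3.0)  # acceptance criterion from the summary: 95th percentile < 3
```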

    2. Sample Size Used for the Test Set and Data Provenance

    The document does not explicitly state the specific sample size of images or cases used for the "Test Set" in the performance studies. It mentions "scanned images of the biological slides" for measurement accuracy and "images reproduced by Viewer+ and NZViewMD for the same file" for pixel-wise comparison.

    The data provenance (country of origin, retrospective/prospective) is also not specified in the provided text.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    The document does not specify the number of experts or their qualifications used to establish ground truth for the test set. It mentions that the device is "an aid to the pathologist" and that "It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the quality of the images obtained and, where necessary, use conventional light microscopy review when making a diagnostic decision." However, this relates to the intended use and not a specific part of the performance testing described.

    4. Adjudication Method for the Test Set

    The document does not describe any specific adjudication method (e.g., 2+1, 3+1) used for establishing ground truth or evaluating the test set results. The pixel-wise comparison relies on quantitative color differences, and usability is assessed according to FDA guidance.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No multi-reader multi-case (MRMC) comparative effectiveness study comparing human readers with and without AI assistance is mentioned or implied in the provided text. The device is a "viewer" and not an AI-assisted diagnostic tool that would typically involve such a study.

    6. Standalone Performance (Algorithm Only without Human-in-the-Loop)

    The performance tests described (pixel-wise comparison, turnaround time, measurements) primarily relate to the technical functionality of the Viewer+ software itself, which is a viewing and management tool. These tests can be interpreted as standalone assessments of the software's performance in rendering images and providing basic functions like measurements. However, it's crucial to note that Viewer+ is an "aid to the pathologist" and not intended to provide automated diagnoses without human intervention. The "standalone" performance here refers to its core functionalities as a viewer, not as an autonomous diagnostic algorithm.

    7. Type of Ground Truth Used

    • Pixel-wise comparison: The ground truth for this test was the image reproduced by the predicate device's software (NZViewMD) for the same scanned file. The comparison was quantitative (CIEDE2000).
    • Measurements: The ground truth would likely be established by known physical dimensions on the biological slides, verified by other means, or through precise calibration. The document states "Measurement accuracy has been verified using scanned images of the biological slides."
    • Usability testing: The ground truth here is the fulfillment of usability requirements and user satisfaction/safety criteria, as assessed against FDA guidance.

    8. Sample Size for the Training Set

    The document does not mention the existence of a "training set" in the context of the Viewer+ device. This is a software-only device for viewing and managing images, not an AI/ML algorithm that typically requires a training set for model development.

    9. How the Ground Truth for the Training Set Was Established

    As no training set is mentioned for this device, information on how its ground truth was established is not applicable.


    K Number: K241300
    Device Name: ViewPoint 6
    Date Cleared: 2024-07-02 (54 days)
    Regulation Number: 892.2050
    Age Range: All
    Predicate For: N/A

    Intended Use

    ViewPoint 6 is intended to be used in medical practices and in clinical departments and serves the purposes of diagnostic interpretation of images, electronic documentation of examinations in the form of text and images and generation of medical reports primarily for diagnostic ultrasound.

    ViewPoint 6 provides the user the ability to include images, drawings, and charts into medical reports. ViewPoint 6 is designed to accept, transfer, display, calculate, store and process medical images and data, and enables the user to measure and annotate the images. The medical images, which ViewPoint 6 displays to the user, can be used for diagnostic purposes.

    ViewPoint 6 is intended for professional use only. ViewPoint 6 is not intended to be used as an automated diagnosis system.

    ViewPoint 6 is not intended to operate medical devices in surgery related procedures.

    Device Description

    ViewPoint 6 is an image archiving and reporting software for medical practices and clinical radiological departments. It is used for diagnostic interpretation of images and other data. ViewPoint 6 is for professional use only and enables quick diagnostic reporting with standardized terminology. It is designed with intuitive graphical user interfaces (GUIs) and is based on Microsoft Windows® with defined hardware requirements.

    ViewPoint 6 provides exam-type-specific reporting forms for various medical care areas. Forms are composed of different sections with data entry fields. The documentation can include measurements, exam findings, images and graphs. All data is saved in the ViewPoint 6 database and can be compiled into a professional report. Images and image sequences can be reviewed in the ViewPoint 6 display area based on user preference.

    ViewPoint 6 supports both a single workstation and a client/browser - server setup. The number of user licenses determines how many workstations in the network have concurrent access to the database. Access can be limited to read-only functionality.

    ViewPoint 6 software is a server-based application with a client-server architecture, accessed via client computers or mobile devices as well as browser-based systems. ViewPoint 6 is installed on client-provided servers within a hospital network.

    The software comes with features to view, annotate, measure, calculate, save and retrieve clinical data (including images via DICOM format) to support patient documentation and record keeping related to ultrasound image scans. Additionally, the software is available for patient administrative tasks such as appointment scheduling and exam billing.

    This product does not control or alter any of the medical devices providing data across the hospital network.
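
    Because ViewPoint 6 stores and reports on DICOM data, a small, hypothetical sketch of the kind of header fields a reporting workflow might pull from a study is shown below. The tags are standard DICOM attributes; the file name and the mapping to report fields are assumptions, not a description of ViewPoint 6's internals.

```python
# Reading a few standard DICOM header fields of the kind a report might reference.
# Illustrative only; not ViewPoint 6's implementation.
import pydicom

ds = pydicom.dcmread("us_exam_001.dcm", stop_before_pixels=True)  # hypothetical file

report_fields = {
    "patient_id": ds.get("PatientID", ""),
    "study_date": ds.get("StudyDate", ""),
    "modality": ds.get("Modality", ""),
    "description": ds.get("StudyDescription", ""),
}
print(report_fields)
```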

    AI/ML Overview

    The provided text is a 510(k) Summary for a medical device called "ViewPoint 6." This type of document focuses on demonstrating substantial equivalence to a predicate device and typically does not contain detailed primary study results with acceptance criteria and specific performance metrics as would be found in a clinical study report. It primarily outlines the scope of V&V activities and voluntary standards adhered to.

    Based on the provided text, the following information can be extracted regarding the device performance and acceptance criteria:

    1. A table of acceptance criteria and the reported device performance

    The document does not provide a specific table of acceptance criteria with corresponding reported device performance values in terms of clinical accuracy (e.g., sensitivity, specificity). Instead, it states that:

    "Successful completion of design verification and validation testing was performed to confirm that software and user requirements have been met."

    This implies that the acceptance criteria are tied to the fulfillment of software and user requirements, which are assessed through various testing activities, but the specific numerical performance metrics are not detailed in this summary. The general statement about "Performance dependent on customer hardware but minimum hardware requirements for acceptable performance are defined in the System Requirements" suggests that performance acceptance is related to system specifications rather than clinical efficacy metrics.

    2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

    The document does not specify any sample sizes for a clinical test set, nor does it provide data provenance (e.g., country of origin, retrospective or prospective nature). The V&V activities mentioned (Risk Analysis, Requirements Reviews, Design Reviews, Testing on unit level, Integration testing, Performance testing, Safety testing) are typically software engineering and system-level tests and do not involve a clinical test set with patient data for performance evaluation in the context of diagnostic accuracy.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    No clinical test set is described, and therefore, there is no information about the number or qualifications of experts used to establish ground truth.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    No clinical test set is described, so no adjudication method is mentioned.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance

    The document states:

    "The similarities and differences between the subject device and the predicate device, were determined not to have a significant impact on the device's performance, the clinical performance, and the actual use scenarios. Therefore, the subject of this premarket submission, ViewPoint 6, did not require clinical studies to support substantial equivalence."

    This explicitly indicates that no MRMC comparative effectiveness study, or any clinical study, was conducted or deemed necessary for this 510(k) submission. Therefore, there is no information about AI assistance or its effect size on human reader improvement.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    The device is described as "an image archiving and reporting software for medical practices and clinical radiological departments. It is used for diagnostic interpretation of images and other data." The indications for use state it "is not intended to be used as an automated diagnosis system." This confirms it's a tool to assist clinicians, not a standalone diagnostic algorithm. No standalone algorithmic performance study was conducted or mentioned.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    As no clinical studies were performed for diagnostic accuracy, no specific ground truth type (expert consensus, pathology, outcomes data, etc.) is mentioned. The V&V activities focused on software and system requirements.

    8. The sample size for the training set

    The document does not refer to any AI/ML components that would require a training set. Therefore, no training set sample size is provided.

    9. How the ground truth for the training set was established

    Since no training set is mentioned for AI/ML, there is no information on how its ground truth would be established.

    In summary:

    This 510(k) submission for ViewPoint 6 primarily relies on demonstrating substantial equivalence to a predicate device (ViewPoint 6 v6.12 K203677) through software validation and verification activities, adherence to voluntary standards, and a comparison of technological characteristics. It explicitly states that clinical studies were not required because the changes from the predicate device were not deemed to have a significant impact on clinical performance. Therefore, detailed information regarding clinical performance acceptance criteria, sample sizes for test or training sets, expert adjudication, or AI performance metrics is not present in this document.


    K Number: K232759
    Date Cleared: 2024-05-21 (256 days)
    Regulation Number: 892.2050
    Age Range: All

    Intended Use

    The software displays medical images and data. It also includes functions for image review, image manipulation, basic measurements and 3D visualization.

    Device Description

    Viewer is software for viewing DICOM data, such as native slices generated with medical imaging devices; axial, coronal, and sagittal reconstructions; and data-specific volume-rendered views (e.g., skin, vessels, bone). Viewer supports basic manipulation such as windowing, reconstruction, or alignment, and it provides basic measurement functionality for distances and angles. Viewer is not intended for diagnosis nor for treatment planning. The Subject Device (Viewer) for which we are seeking clearance consists of the following software modules.

    • Viewer 5.4 (General Viewing)
    • Universal Atlas Performer 6.0
    • Universal Atlas Transfer Performer 6.0

    Universal Atlas Performer: Software for analyzing and processing medical image data with Universal Atlas to create different output results for further use by Brainlab applications.

    Universal Atlas Transfer Performer: Software that provides medical image data auto-segmentation information to Brainlab applications.

    When installed on a server, Viewer can be used on mobile devices like tablets. No specific application or user interface is provided for mobile devices. In mixed reality, the data and the views are selected and opened via desktop PC. The views are then rendered on the connected stereoscopic head-mounted display. Multiple users in the same room can connect to the Viewer session and view/review the data (such as already saved surgical plans) on their mixed reality glasses.

    AI/ML Overview

    The Brainlab AG Viewer (5.4) and associated products (Elements Viewer, Mixed Reality Viewer, Smart Layout, Elements Viewer Smart Layout) are a medical image management and processing system. The device displays medical images and data, and includes functions for image review, manipulation, basic measurements, and 3D visualization.

    Here's an analysis of the acceptance criteria and supporting studies based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided text details various tests performed to ensure the device's performance, particularly focusing on the mixed reality aspects and measurement accuracy. However, specific numerical "acceptance criteria" (e.g., "accuracy must be >95%") and corresponding reported performance values are not explicitly stated in a detailed quantitative manner in the summary. Instead, the document describes the types of tests conducted and generally states that they were successful or ensured certain qualities.

    Software Verification & Validation
      Acceptance criteria (implied/general): Successful implementation of product specifications, incremental testing for different release candidates, testing of risk control measures, compatibility testing, cybersecurity tests.
      Reported performance (general): Documentation provided as recommended by FDA guidance. Successful implementation, testing of risk controls, compatibility, and cybersecurity acknowledged for an enhanced level.

    Ambient Light
      Acceptance criteria (implied/general): Sufficient visualization in a variety of ambient lighting conditions with Magic Leap 2.
      Reported performance (general): Test conducted to determine Magic Leap 2 display quality for sufficient visualization in a variety of ambient lighting conditions. (Implied successful.)

    Hospital Environment
      Acceptance criteria (implied/general): Compatibility with various hardware platforms and compatible software.
      Reported performance (general): Test conducted to verify compatibility of the Subject Device with various hardware platforms and compatible software. (Implied successful.)

    Display Quality
      Acceptance criteria (implied/general): Seamless integration of real and virtual content; maintenance of high visibility and image quality (optical transmittance, luminance non-uniformity, Michelson contrast).
      Reported performance (general): Tests carried out to measure and compare optical transmittance, luminance non-uniformity, and Michelson contrast of the head-mounted display to ensure seamless integration of real and virtual content and maintenance of high visibility and image quality, both with and without segmented dimming. (Implied successful.)

    Measurement Accuracy
      Acceptance criteria (implied/general): Accurate 3D measurement placement using the Mixed Reality user interface (Magic Leap controller), comparable to mouse and touch input.
      Reported performance (general): Tests performed to evaluate the accuracy of 3D measurement placement using a Mixed Reality user interface (Magic Leap controller) in relation to mouse and touch as input methods. (Implied successful; supports equivalence to the predicate's measurement capabilities, with added 3D functionality in mixed reality.)
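
    The Display Quality row refers to optical transmittance, luminance non-uniformity, and Michelson contrast of the head-mounted display. The sketch below shows how the latter two metrics are conventionally computed from luminance measurements; the values and the non-uniformity definition are illustrative assumptions, since the submission does not give its measurement procedure.

```python
# Conventional display metrics from luminance measurements (cd/m^2); values are invented.
import numpy as np

# Michelson contrast from a white and a black test patch.
l_white, l_black = 415.0, 2.3
michelson_contrast = (l_white - l_black) / (l_white + l_black)

# One common luminance non-uniformity definition: (max - min) / max over a
# measurement grid on a full-white field, expressed as a percentage.
white_field = np.array([[412.0, 405.0, 398.0],
                        [420.0, 415.0, 401.0],
                        [409.0, 403.0, 395.0]])
non_uniformity_pct = 100.0 * (white_field.max() - white_field.min()) / white_field.max()

print(round(michelson_contrast, 3), round(non_uniformity_pct, 1))
```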

    2. Sample Size for the Test Set and Data Provenance

    The document does not explicitly state the sample sizes used for any of the described tests (Ambient Light, Hospital Environment, Display Quality, Measurement Accuracy).

    Regarding data provenance, the document does not specify the country of origin for any data, nor whether the data used in testing was retrospective or prospective.

    3. Number of Experts and Qualifications for Ground Truth

    The document does not provide information on the number of experts used to establish ground truth for any of the described tests, nor their qualifications.

    4. Adjudication Method for the Test Set

    The document does not describe any adjudication method (e.g., 2+1, 3+1, none) used for the test set.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    The document does not mention a multi-reader multi-case (MRMC) comparative effectiveness study. The study is focused on the device's technical performance and accuracy, not on human reader improvement with or without AI assistance.

    6. Standalone (Algorithm Only) Performance Study

    The document describes the "Viewer" as software for displaying and manipulating medical images. While it's a software-only device, the tests described (e.g., display quality, measurement accuracy) inherently assess the algorithm's performance in its intended functions without direct human-in-the-loop impact on the results being measured during those specific tests. However, it's not explicitly framed as an "algorithm-only" performance study in contrast to human performance, but rather as instrumental performance validation. The Mixed Reality functionality, while requiring a human operator, still has its underlying software/hardware performance (e.g., accuracy of 3D measurement placement) evaluated.

    7. Type of Ground Truth Used

    The document does not explicitly state the type of ground truth used for any of the tests. For "Measurement accuracy test," it can be inferred that a known, precisely measured physical or digital standard would have been used as ground truth for comparison. For other tests like display quality or compatibility, the ground truth would be conformance to established technical specifications or standards for optical properties and functional compatibility, respectively.

    8. Sample Size for the Training Set

    The document does not provide any information regarding a training set sample size. This is consistent with the device being primarily a viewing, manipulation, and measurement tool rather than an AI/ML diagnostic algorithm that requires a "training set" in the conventional sense. The "Universal Atlas Performer" and "Universal Atlas Transfer Performer" modules do involve "analyzing and processing medical image data with Universal Atlas to create different output results" and "provides medical image data autosegmentation information," which might imply some form of algorithmic learning or rule-based processing. However, no details on training sets for these specific components are included.

    9. How the Ground Truth for the Training Set Was Established

    As no training set is mentioned or detailed, there is no information on how its ground truth might have been established.


    K Number: K231588
    Date Cleared: 2023-06-30 (29 days)
    Regulation Number: 870.1200
    Age Range: All

    Intended Use

    The ViewFlex™ Eco Reprocessed ICE Catheter is indicated for use in adult and adolescent pediatric patients to visualize cardiac structures, blood flow and other devices within the heart.

    Device Description

    The ViewFlex™ Eco Reprocessed ICE Catheter is a temporary intracardiac ultrasound catheter intended for use in patients to accurately visualize cardiac structures, blood flow and other devices within the heart when connected to a compatible intracardiac ultrasound console via the compatible ViewFlex™ Catheter Interface Module. Examples of the types of other devices that can be visualized include, and are not limited to, intracardiac catheters, septal occluders, delivery wires, delivery sheaths, sizing balloons and transseptal needles. The use of these images is limited to visualization with no direct or indirect diagnostic use. The ViewFlex™ Eco Reprocessed ICE Catheter has a useable length of 90 cm, with a 9 French (F) shaft with an ultrasound transducer. A 10F introducer is recommended for use with this catheter for insertion into the femoral or jugular veins. The catheter tip has four-directional deflection allowing for Left-Right and Posterior-Anterior deflection, with an angle of at least 120 degrees in each direction.
    The ViewFlex™ Eco Reprocessed ICE Catheter is compatible with the ViewMate™ Z, ViewMate™, ViewMate™ Multi and Philips CX50 ultrasound consoles.
    The ViewFlex™ Eco Reprocessed ICE Catheter is reprocessed by Abbott no more than two (2) times. Each catheter includes marking on the proximal handle and connector that identify the catheter status. The device is taken out of service after reaching the maximum number of reprocessing cycles. Abbott restricts its reprocessing to exclude devices previously reprocessed by other reprocessors.
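
    The reprocessing limit described above (no more than two reprocessing cycles, after which the device is taken out of service) amounts to a simple counting rule. The sketch below is illustrative only; in practice the cycle count is read from the markings on the handle and connector, not from software.

```python
# Minimal sketch of the stated reprocessing limit (illustrative only).
MAX_REPROCESSING_CYCLES = 2

def may_be_reprocessed(completed_cycles: int) -> bool:
    """Return True if another reprocessing cycle is allowed under the stated limit."""
    return completed_cycles < MAX_REPROCESSING_CYCLES

print(may_be_reprocessed(1))  # True: a second cycle is allowed
print(may_be_reprocessed(2))  # False: maximum reached, take out of service
```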

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study information based on the provided FDA 510(k) summary for the ViewFlex™ Eco Reprocessed ICE Catheter:

    Summary of Acceptance Criteria and Device Performance:

    Biocompatibility: Met established performance specifications.
    Design Validation: Met established performance specifications.
    Design Verification: Met established performance specifications.
    Cleaning Validation: Met established performance specifications.
    Risk Management: No new or modified hazards identified as a result of the proposed modification (additional reprocessing cycle).

    Study Information:

    1. Table of Acceptance Criteria and Reported Device Performance: This information is provided in the table above. The document states that all testing performed met the established performance specifications for each category.

    2. Sample Size Used for the Test Set and Data Provenance:

      • Sample Size: The document does not explicitly state the sample size for the test set used in the non-clinical performance evaluation. It only mentions that "Design verification activities were performed with their respective acceptance criteria."
      • Data Provenance: Not specified, but generally, non-clinical tests like these are conducted in a laboratory setting by the manufacturer (Abbott Medical, USA).
    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications:

      • Not applicable. This device is a medical catheter and the tests described are primarily engineering and biocompatibility evaluations, not diagnostic image interpretation studies requiring expert clinicians to establish ground truth.
    4. Adjudication Method for the Test Set:

      • Not applicable. The tests are objective measurements against predefined engineering and material specifications, not subjective interpretations requiring adjudication.
    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done:

      • No. The document explicitly states: "No clinical investigations have been performed for the subject or predicate devices. Clinical data is not required for the demonstration of substantial equivalent based on the risk assessment in Section 10 and verification in Section 12 and validation in Section 13 summarized in this 510(k)." Therefore, no MRMC study was conducted.
    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done:

      • Not applicable. This is a physical medical device (catheter), not an AI algorithm. Its performance is evaluated through engineering, material, and reprocessing verification and validation, not standalone algorithmic performance.
    7. The Type of Ground Truth Used:

      • The "ground truth" for the non-clinical studies (biocompatibility, design validation/verification, cleaning validation) would be defined by established engineering specifications, material standards, and regulatory guidance documents. For example, for biocompatibility, it would be adherence to ISO 10993 standards and a lack of adverse biological responses. For cleaning validation, it would be a verifiable reduction in contaminants below a specified threshold.
    8. The Sample Size for the Training Set:

      • Not applicable. As this is not an AI/algorithm-based device, there is no "training set" in the context of machine learning. The "development" and "testing" referred to are related to engineering design and physical performance testing.
    9. How the Ground Truth for the Training Set Was Established:

      • Not applicable, as there is no training set for an AI algorithm.
