510(k) Data Aggregation

    K Number: K240465
    Date Cleared: 2024-06-21 (126 days)
    Regulation Number: 892.1650
    Reference Devices: K151000, K173664

    Intended Use

    The O-arm™ O2 Imaging System is a mobile x-ray system, designed for 2D and 3D imaging for adult and pediatric patients weighing 60 lbs or greater and having an abdominal thickness greater than 16 cm, and is intended to be used where a physician benefits from 2D and 3D information of anatomic structures and objects with high x-ray attenuation such as bony anatomy and metallic objects. The O-arm™ O2 Imaging System is compatible with certain image guided surgery systems.

    Device Description

    The O-arm™ O2 Imaging System is a mobile x-ray system that provides 3D and 2D imaging. It consists of two main assemblies that are used together: the Image Acquisition System (IAS) and the Mobile View Station (MVS). The two units are interconnected by a single cable that provides power and signal data. The IAS has an internal battery pack that powers motorized transportation and gantry positioning; the battery pack also powers the X-ray tank. The MVS has an internal UPS to support its function when mains power is disconnected. The O-arm™ O2 Imaging System operates off standard line voltage: 100, 120, or 240 VAC; 50 Hz or 60 Hz; power requirement 1440 VA.

    AI/ML Overview

    The Medtronic O-arm™ O2 Imaging System with 4.3.0 software introduces three new features: Medtronic Implant Resolution (MIR, referred to as KCMAR in the document), 3D Long Scan (3DLS), and Spine Smart Dose (SSD). The device's performance was evaluated through various studies to ensure substantial equivalence to the predicate device (O-arm™ O2 Imaging System 4.2.0 software) and to verify that the new features function as intended without raising new safety or effectiveness concerns.

    Here's a breakdown of the requested information:

    1. Table of Acceptance Criteria and Reported Device Performance

    | Feature/Metric | Acceptance Criteria (Implicit from Study Design/Results) | Reported Device Performance |
    | --- | --- | --- |
    | Spine Smart Dose (SSD) | Clinical equivalence to the predicate 3D acquisition modes (Standard and HD), as judged by board-certified neuroradiologists. | Deemed clinically equivalent to the O-arm™ O2 Imaging System 4.2.x Standard and predicate High-Definition modes by board-certified neuroradiologists in a blinded review of 100 clinical image pairs. |
    | SSD Image Quality (Bench Testing) | Meet system-level requirements for 3D line pair, contrast, MTF, uniformity, and geometric accuracy. | Met all system-level requirements. |
    | SSD Navigational Accuracy (Bench Testing) | Meet system-level requirements in terms of millimeters. | Met all system-level requirements. |
    | Medtronic Implant Resolution (KCMAR) | Clinical utility of KCMAR images statistically better than corresponding non-KCMAR images from the predicate device, as judged by board-certified radiologists. | Statistically better clinical value compared to corresponding images from the predicate device (O-arm™ O2 Imaging System version 4.2.0) under the specified indications. |
    | KCMAR Metal Artifact Reduction (Bench Testing) | Qualitative comparison demonstrating metal artifact reduction between non-KCMAR and KCMAR processed images. | Demonstrated metal artifact reduction. |
    | KCMAR Implant Location Accuracy (Bench Testing) | Quantitative assessment of implant location accuracy (millimeters and degrees) against system requirements. | Met all system-level requirements. |
    | 3D Long Scan (3DLS) Clinical Utility | Clinical utility of Standard 3DLS and SSD 3DLS statistically equivalent to the corresponding Standard acquisition mode of the predicate system, as judged by board-certified radiologists. | Statistically equivalent clinical utility compared to the corresponding Standard acquisition mode available in the predicate system (version 4.2.0). |
    | 3DLS Image Quality (Bench Testing) | Meet system-level requirements for 3D line pair, contrast, MTF, and geometric accuracy. | Met all system-level requirements. |
    | 3DLS Navigational Accuracy (Bench Testing) | Meet system-level requirements in terms of millimeters. | Met all system-level requirements. |
    | Usability (3DLS, SSD, KCMAR) | Pass summative validation covering critical tasks and new workflows for intended users in simulated use environments. | Passed summative validation, providing objective evidence of safety and effectiveness for the intended users, uses, and environments. |
    | Dosimetry (SSD, 3DLS) | Confirm dose accuracy (kV, mA, CTDI, DLP) meets system-level requirements for the new acquisition features (see the illustrative check after this table). | All dosimetry testing passed system-level requirements. |
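
    The dosimetry row is reported only as pass/fail against unspecified system-level requirements. As a purely illustrative sketch of how such a check might be scripted, the Python below compares measured kV, mA, and CTDIvol values against hypothetical tolerances and derives DLP from the standard relationship DLP = CTDIvol × scan length; none of the numeric values or tolerances come from the submission.

```python
# Illustrative only: the tolerance percentages and measured values are
# hypothetical, not taken from the 510(k) summary. Only the relationship
# DLP = CTDIvol * scan length is standard dosimetry.

def within_tolerance(nominal: float, measured: float, tolerance_pct: float) -> bool:
    """True if the measured value is within +/- tolerance_pct of nominal."""
    return abs(measured - nominal) <= nominal * tolerance_pct / 100.0

def dlp_mGy_cm(ctdi_vol_mGy: float, scan_length_cm: float) -> float:
    """Dose-length product from volume CTDI and scan length."""
    return ctdi_vol_mGy * scan_length_cm

if __name__ == "__main__":
    # Hypothetical nominal vs. measured values for one 3D acquisition mode.
    checks = {
        "kV": (120.0, 119.2, 5.0),            # (nominal, measured, tolerance %)
        "mA": (100.0, 102.5, 10.0),
        "CTDIvol (mGy)": (12.0, 11.6, 20.0),
    }
    for name, (nominal, measured, tol) in checks.items():
        status = "PASS" if within_tolerance(nominal, measured, tol) else "FAIL"
        print(f"{name}: nominal={nominal}, measured={measured} -> {status}")

    print("DLP =", dlp_mGy_cm(11.6, 16.0), "mGy*cm")
```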

    2. Sample Size for the Test Set and Data Provenance

    • Spine Smart Dose (SSD) Clinical Equivalence:
      • Sample Size: 100 clinical image pairs.
      • Data Provenance: "Clinical" images, suggesting retrospective or prospective clinical data. No specific country of origin is mentioned.
    • KCMAR Clinical Equivalence:
      • Sample Size:
        • Initial study: 40 image pairs from four cadavers (small, medium, large, and extra-large habitus).
        • Subsequent study: 33 image pairs from two cadavers (small and extra-large habitus).
      • Data Provenance: Cadavers (ex-vivo data). No country of origin specified.
    • 3D Long Scan (3DLS) Clinical Utility:
      • Sample Size: 45 paired samples from acquisitions of three cadavers (small, medium, and extra-large habitus). Two cadavers were instrumented with pedicle screw hardware.
      • Data Provenance: Cadavers (ex-vivo data). No country of origin specified.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • Spine Smart Dose (SSD) Clinical Equivalence: The document refers to a "board-certified neuroradiologist" (singular) for the blinded review "involving 100 clinical image pairs"; it does not state whether additional readers participated.
    • KCMAR Clinical Equivalence: "Board-certified radiologists" (plural).
    • 3D Long Scan (3DLS) Clinical Utility: "Board-certified radiologists" (plural).

    4. Adjudication Method for the Test Set

    The document does not explicitly state an adjudication method (e.g., 2+1, 3+1). It describes a "blinded review" for SSD and "clinical utility scores (1-5 scale)" for KCMAR and 3DLS, implying individual assessments that were then potentially aggregated or statistically compared.
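
    The summary does not name the statistical method behind "statistically better" or "statistically equivalent." As one plausible (assumed, not documented) approach, paired 1-5 utility scores from the two reconstruction modes could be compared with a Wilcoxon signed-rank test, sketched below with simulated scores:

```python
# Hypothetical analysis sketch: the 510(k) summary does not state which test
# was used. A one-sided Wilcoxon signed-rank test on paired 1-5 utility scores
# is shown purely as an illustration, with simulated data.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)

# One score per image pair for each reconstruction mode (simulated).
scores_predicate = rng.integers(2, 6, size=40)                               # e.g. non-KCMAR images
scores_new = np.clip(scores_predicate + rng.integers(0, 2, size=40), 1, 5)   # e.g. KCMAR images

# "Statistically better" corresponds to a one-sided alternative favoring the new mode.
stat, p_value = wilcoxon(scores_new, scores_predicate, alternative="greater")
print(f"Wilcoxon statistic = {stat:.1f}, one-sided p = {p_value:.4f}")
```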

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and If So, the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance

    The studies for SSD, KCMAR, and 3DLS involved multiple readers (board-certified radiologists/neuroradiologists) evaluating images.

    • SSD: Compared "O-arm™ O2 Imaging System 4.3.0 SSD images" to "O-arm™ O2 Imaging System 4.2.x Standard and Predicate High-Definition modes." The outcome was clinical equivalence, not an improvement in human reader performance with AI assistance. It states the SSD leverages Machine Learning technology to reduce noise.
    • KCMAR: Compared images reconstructed "without KCMAR feature" to images "with KCMAR feature." The outcome was "statistically better" clinical value for KCMAR. This indicates that the feature itself (which uses an algorithm for metal artifact reduction) resulted in better images, which would indirectly benefit the reader, but it doesn't quantify an improvement in human reader performance directly.
    • 3DLS: Compared "Standard 3DLS and SSD 3DLS" to "corresponding Standard acquisition mode." The outcome was "statistically equivalent clinical utility." This specifically relates to the utility of the scan modes, not an AI-assisted interpretation by readers.

    Therefore, while MRMC-like studies were conducted to assess the performance of the features, the focus was on the characteristics of the images produced by the device (clinical equivalence/utility/better value) rather than quantifying an effect size of how much human readers improve with AI versus without AI assistance in their diagnostic accuracy or efficiency.

    6. If a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Assessment Was Done

    Yes, aspects of standalone performance were evaluated through bench testing.

    • SSD Bench Testing: Evaluated image quality parameters (3D Line pair, Contrast, MTF, Uniformity, Geometric accuracy) and navigational accuracy.
    • KCMAR Bench Testing: Qualitatively compared metal artifact reduction and quantitatively assessed implant location accuracy.
    • 3DLS Bench Testing: Verified system-level requirements for image quality (3D Line pair, Contrast, MTF, Geometric accuracy) and navigational accuracy.

    These bench tests assess the algorithmic output directly against defined performance metrics, independent of human interpretation.
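
    The bench results above are summarized only as "met all system-level requirements." As a hypothetical illustration of how a geometric or navigational accuracy check against a millimeter requirement might be scripted (the fiducial coordinates and the 2 mm threshold below are invented, not from the submission):

```python
# Hypothetical geometric-accuracy bench check: compare fiducial positions
# measured in a reconstructed volume against the known phantom positions and
# verify the worst-case error stays below a millimeter requirement.
# All numbers here are invented.
import numpy as np

true_positions = np.array([[0.0, 0.0, 0.0],
                           [50.0, 0.0, 0.0],
                           [0.0, 50.0, 0.0],
                           [0.0, 0.0, 50.0]])            # known phantom fiducials (mm)
measured_positions = true_positions + np.array([[0.3, -0.2, 0.1],
                                                [-0.4, 0.2, 0.3],
                                                [0.1, 0.5, -0.2],
                                                [0.2, -0.1, 0.4]])  # positions found in the volume (mm)

errors_mm = np.linalg.norm(measured_positions - true_positions, axis=1)
requirement_mm = 2.0   # hypothetical system-level requirement

print("Per-fiducial error (mm):", np.round(errors_mm, 2))
print("Max error (mm):", round(float(errors_mm.max()), 2),
      "PASS" if errors_mm.max() <= requirement_mm else "FAIL")
```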

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    • Clinical Equivalence/Utility for SSD, KCMAR, 3DLS: Ground truth was established by expert assessment/consensus from board-certified neuroradiologists/radiologists providing clinical utility scores and making equivalence/superiority judgments.
    • Bench Testing: Ground truth was based on phantom measurements and objective system-level requirements for image quality, geometric accuracy, and navigational accuracy.

    8. The Sample Size for the Training Set

    The document states that the Spine Smart Dose (SSD) feature "leverages Machine Learning technology with existing O-arm™ images to achieve reduction in dose..." However, it does not specify the sample size of the training set used for this Machine Learning model.

    9. How the Ground Truth for the Training Set was Established

    For the Spine Smart Dose (SSD) feature, which uses Machine Learning, the document mentions "existing O-arm™ images." It does not explicitly state how the ground truth for these training images was established. Typically, for such denoising or image enhancement tasks, the "ground truth" might be considered the higher-dose, higher-quality images, with the ML model learning to reconstruct a similar quality image from lower-dose acquisitions. The document implies that the model's output (low-dose reconstruction) was then validated against expert opinion for clinical equivalence.
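
    To make the preceding paragraph concrete, the sketch below shows what training a dose-reduction (denoising) model against higher-dose targets could look like in principle. The network, loss, and data are placeholders chosen for illustration; the summary does not describe Medtronic's actual model, training data, or pipeline.

```python
# Hypothetical denoising-training sketch (PyTorch). Nothing here reflects the
# actual SSD model: the tiny network, MSE loss, and synthetic volumes only
# illustrate the "low-dose input -> high-dose target" pairing described above.
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Minimal 3D convolutional denoiser, for illustration only."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(8, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        # Predict the noise residual and subtract it from the input volume.
        return x - self.net(x)

model = TinyDenoiser()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Synthetic stand-ins: "high-dose" volumes as targets, "low-dose" volumes as
# noisier acquisitions of the same anatomy.
high_dose = torch.rand(4, 1, 16, 16, 16)
low_dose = high_dose + 0.1 * torch.randn_like(high_dose)

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(low_dose), high_dose)
    loss.backward()
    optimizer.step()
```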


    K Number: K200074
    Date Cleared: 2020-04-24 (101 days)
    Regulation Number: 892.1650
    Reference Devices: K151000

    Intended Use

    The O-arm O2 Imaging System is a mobile x-ray system designed for adult and pediatric patients weighing 60 lbs or greater and having an abdominal thickness greater than 16 cm, and is intended to be used where a physician benefits from 2D and 3D information of anatomical structures and objects with high x-ray attenuation such as bony anatomy and metallic objects.

    The O-arm O2 Imaging System is compatible with certain image guided surgery systems.

    Device Description

    The O-arm™ O2 Imaging System is a mobile x-ray system that provides 3D and 2D imaging. The O-arm™ O2 Imaging System was originally cleared for market under the original 510(k) K151000 and subsequently via Special 510(k) K173664. The device is classified under primary product code OWB (secondary product codes OXO, JAA) per 21 CFR 892.1650.

    This submission for the O-arm™ O2 Imaging System 4.2.0 software release introduces the following features:

    • . "2D Long Film" Imaging Protocol (Intraoperative Radiographic Scan)
    • . Gantry Rotor Angle and Tilt Angle Display
    • User Access Enhancements ●

    The O-arm™ O2 Imaging System consists of two main assemblies that are used together:

    • The Image Acquisition System (IAS)
    • The Mobile View Station (MVS)

    The two units are interconnected by a single cable that provides power and signal data. The IAS has an internal battery pack that provides power for motorized transportation and gantry positioning. In addition, the battery pack is used to power the X-ray tank. The MVS has an internal UPS to support its function when mains power is disconnected.

    The O-arm™ O2 Imaging System operates off standard line voltage within the following voltages:

    • 100, 120, or 240 VAC
    • 50 Hz or 60 Hz
    • Power requirement 1440 VA

    AI/ML Overview

    The provided text describes a 510(k) premarket notification for the Medtronic O-arm™ O2 Imaging System 4.2.0 software. This submission aims to demonstrate substantial equivalence to a predicate device (O-arm™ O2 Imaging System 4.1.0 software).

    The key takeaway is that this submission modifies an existing imaging device, adding new features, rather than being a new AI/ML-driven diagnostic/CADe device that would typically have specific AI-driven acceptance criteria. Therefore, the "acceptance criteria" and "study" described in the document are primarily focused on demonstrating that the modified device remains safe and effective and performs as intended, similar to the predicate device, for its original intended use. There is no mention of AI/ML components in this submission.

    Given this context, I will address the questions as they relate to the information provided about the device's modifications and the performance testing conducted to support its substantial equivalence.

    1. A table of acceptance criteria and the reported device performance

    Based on the provided document, the acceptance criteria are implicitly tied to demonstrating substantial equivalence to the predicate device and showing that the new features function as intended without introducing new risks. The "reported device performance" is described through various testing types.

    | Acceptance Criteria (Implicit) | Reported Device Performance (as tested) |
    | --- | --- |
    | Maintain Intended Use: The device continues to fulfill its stated Indications for Use. | The Indications for Use for the O-arm™ O2 Imaging System 4.2.0 software are identical to the predicate device, with only a minor phrasing change ("fluoroscopic" removed, explained by the presence of multiple 2D modes), indicating the device maintains its core purpose. |
    | Safety Conformity: The device meets relevant electrical, electromagnetic compatibility (EMC), radiation safety, and usability standards. | Tested to AAMI/ANSI ES 60601-1:2005 + A1:2012 (basic safety and essential performance); IEC 60601-1-2:2014 (EMC); IEC 60601-1-3:2008 + A1:2013 (radiation protection); IEC 60601-2-28:2010 (X-ray source assemblies safety); and IEC 60601-2-43:2010 + A1:2017 (specific safety requirements); supported by software verification and validation testing, hardware verification, and a dosimetry report measuring radiation dose for the various modes. |
    | Performance of New Features: New features (2D Long Film, Gantry Rotor/Tilt Angle Display, User Access Enhancements) function as designed. | "2D Long Film" Imaging Protocol added as an automatically stitched 2D radiographic (Long Film) feature that leverages gantry motion for sequential image acquisition and stitching; Gantry Rotor Angle and Tilt Angle Display added with real-time display on the pendant; User Access Enhancements (LDAP, etc.) added for cybersecurity and user management. Each implies successful implementation. |
    | Image Quality Equivalence: Image quality of the modified device is comparable to the predicate device, especially for existing imaging protocols. | The Image Quality Assessment provides a quantitative comparison of the O-arm™ O2 Imaging System with 4.2.0 against the predicate O-arm™ O2 Imaging System with 4.1.0, demonstrating comparable image quality. |
    | Clinical Utility (New Features): The new features are clinically useful and do not negatively impact existing clinical utility. | The O-arm™ Cadaver Image Pair Study evaluated the clinical utility of images obtained using the O-arm™ O2 Imaging System with 4.2.0 compared to images obtained using the predicate system with 4.1.0, aiming to confirm clinical utility. |
    | No New Risks: The modifications do not introduce new safety or effectiveness concerns. | The conclusion states: "These aspects, along with the functional testing conducted to the FDA recognized standards, demonstrate that O-arm™ O2 Imaging System with 4.2.0 software does not raise new risks of safety and effectiveness when compared to the predicate." |

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    The document does not specify exact sample sizes for all tests beyond general statements like "Usability Testing was conducted" or "The Image Quality Assessment... provides a quantitative image quality assessment."

    The "O-arm™ Cadayer Image Pair Study" is the most relevant test mentioned for clinical utility:

    • Sample Size: Not explicitly stated in the provided text.
    • Data Provenance: Not explicitly stated (e.g., country of origin). It is likely a controlled, prospective study given the nature of a "Cadaver Image Pair Study" comparing device versions, but the specific type (retrospective vs. prospective) is not detailed.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    This information is not provided in the excerpt. For the "O-arm™ Cadaver Image Pair Study," it is highly probable that medical professionals (e.g., radiologists, surgeons) were involved in evaluating image utility, but their number and specific qualifications are not mentioned.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    The document does not specify any adjudication methods for the "O-arm™ Cadaver Image Pair Study" or other performance tests.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

    • MRMC Study: The "O-arm™ Cadaver Image Pair Study" could have involved multiple readers evaluating images; however, the document does not explicitly state that it was an MRMC study, nor does it describe its design in detail.
    • AI Assistance: This device and the listed modifications do not involve AI/ML. The effect size of human readers improving with AI assistance is therefore not applicable and not discussed. The study's purpose was to compare the image utility of the new software version to the predicate software version of the imaging system itself.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done

    • Standalone Performance: This question typically applies to AI/ML algorithms. Since the O-arm™ O2 Imaging System 4.2.0 software does not contain an AI/ML algorithm within the scope of this 510(k) submission, the concept is not applicable. The device is an imaging system whose performance is evaluated as a system (hardware and software combined), not as a standalone algorithm. The "2D Long Film" feature is described as an automated stitching process, a traditional image processing function rather than an AI/ML algorithm requiring separate "standalone" validation (an illustrative stitching sketch follows below).
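
    As a rough sketch of the kind of "traditional image processing" such automated stitching involves (the cross-correlation overlap search and the blending below are assumptions chosen for illustration; the submission does not describe the actual stitching algorithm):

```python
# Hypothetical stitching sketch: align two overlapping 2D frames by finding the
# row overlap that maximizes normalized correlation, then blend the overlap.
# This is not the O-arm's algorithm; it only illustrates translation-based
# stitching of sequentially acquired frames.
import numpy as np

def estimate_overlap(top: np.ndarray, bottom: np.ndarray, max_overlap: int) -> int:
    """Return the overlap (in rows) that best matches the bottom strip of `top`
    to the top strip of `bottom`."""
    best_overlap, best_score = 1, -np.inf
    for overlap in range(1, max_overlap + 1):
        a = top[-overlap:, :].ravel().astype(float)
        b = bottom[:overlap, :].ravel().astype(float)
        a = (a - a.mean()) / (a.std() + 1e-9)
        b = (b - b.mean()) / (b.std() + 1e-9)
        score = float(np.dot(a, b)) / a.size
        if score > best_score:
            best_score, best_overlap = score, overlap
    return best_overlap

def stitch(top: np.ndarray, bottom: np.ndarray, overlap: int) -> np.ndarray:
    """Concatenate two frames, averaging the overlapping rows."""
    blended = (top[-overlap:, :] + bottom[:overlap, :]) / 2.0
    return np.vstack([top[:-overlap, :], blended, bottom[overlap:, :]])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    scene = rng.random((300, 128))               # synthetic "long" anatomy
    frame1, frame2 = scene[:180], scene[160:]    # two acquisitions, 20 rows shared
    overlap = estimate_overlap(frame1, frame2, max_overlap=40)
    long_film = stitch(frame1, frame2, overlap)
    print("Estimated overlap:", overlap, "rows; stitched shape:", long_film.shape)
```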

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    For the "O-arm™ Cadayer Image Pair Study," the "ground truth" would likely be the clinical utility or perceptual quality of the images as evaluated by medical professionals. This would closer resemble expert opinion/consensus on image quality and usefulness in a clinical context, rather than a definitive "true positive/negative" based on pathology or outcomes data, as it's a comparison of imaging system versions. However, the document doesn't explicitly define what "ground truth" means for this specific study.

    8. The sample size for the training set

    This device does not appear to utilize AI/ML algorithms that would necessitate a "training set." Therefore, this question is not applicable.

    9. How the ground truth for the training set was established

    As there is no mention of a "training set" for AI/ML, this question is not applicable.
