Found 4 results

510(k) Data Aggregation

    K Number: K240465
    Date Cleared: 2024-06-21 (126 days)
    Regulation Number: 892.1650
    Device Name: O-arm O2 Imaging System

    Intended Use

    The O-arm™ O2 Imaging System is a mobile x-ray system, designed for 2D and 3D imaging for adult and pediatric patients weighing 60 lbs or greater and having an abdominal thickness greater than 16 cm, and is intended to be used where a physician benefits from 2D and 3D information of anatomic structures and objects with high x-ray attenuation such as bony anatomy and metallic objects. The O-arm™ O2 Imaging System is compatible with certain image guided surgery systems.

    Device Description

    The O-arm™ O2 Imaging System is a mobile x-ray system that provides 3D and 2D imaging. The O-arm™ O2 Imaging System consists of two main assemblies that are used together: The Image Acquisition System (IAS) and The Mobile View Station (MVS). The two units are interconnected by a single cable that provides power and signal data. The IAS has an internal battery pack that provides power for motorized transportation and gantry positioning. In addition, the battery pack is used to power the X-ray tank. The MVS has an internal UPS to support its function when mains power is disconnected. The O-arm™ O2 Imaging System operates off standard line voltage within the following voltages: VAC 100, 120 or 240, Frequency 60Hz or 50Hz, Power Requirements 1440 VA.
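The power figures at the end of the paragraph (1440 VA at 100, 120, or 240 VAC) imply a maximum line current via simple arithmetic: current equals apparent power divided by line voltage. A minimal sketch; the helper name is ours, and only the 1440 VA and voltage figures come from the text:

```python
# Hypothetical helper (name is ours): maximum line current implied by the
# stated 1440 VA power requirement at each supported supply voltage.
def max_line_current(volt_amps: float, voltage: float) -> float:
    # Apparent power (VA) divided by line voltage gives current in amperes.
    return volt_amps / voltage

for v in (100, 120, 240):
    print(f"{v} VAC -> {max_line_current(1440, v):.1f} A")  # 14.4 A, 12.0 A, 6.0 A
```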

    AI/ML Overview

    The Medtronic O-arm™ O2 Imaging System with 4.3.0 software introduces three new features: Medtronic Implant Resolution (MIR) (referred to as KCMAR in the document), 3D Long Scan (3DLS), and Spine Smart Dose (SSD). The device's performance was evaluated through various studies to ensure substantial equivalence to the predicate device (O-arm™ O2 Imaging System 4.2.0 software) and to verify that the new features function as intended without raising new safety or effectiveness concerns.

    Here's a breakdown of the requested information:

    1. Table of Acceptance Criteria and Reported Device Performance

    | Feature/Metric | Acceptance Criteria (Implicit from Study Design/Results) | Reported Device Performance |
    |---|---|---|
    | Spine Smart Dose (SSD) | Clinical equivalence to predicate 3D acquisition modes (Standard and HD) as judged by board-certified neuroradiologists. | Deemed clinically equivalent to O-arm™ O2 Imaging System 4.2.x Standard and predicate High-Definition modes by board-certified neuroradiologists in a blinded review of 100 clinical image pairs. |
    | SSD Image Quality (Bench Testing) | Meet system-level requirements for 3D line pair, contrast, MTF, uniformity, and geometric accuracy. | Met all system-level requirements. |
    | SSD Navigational Accuracy (Bench Testing) | Meet system-level requirements in millimeters. | Met all system-level requirements. |
    | Medtronic Implant Resolution (KCMAR) | Clinical utility of KCMAR images statistically better than corresponding non-KCMAR images from the predicate device, as judged by board-certified radiologists. | Statistically better clinical value compared to corresponding images from the predicate device (O-arm O2 Imaging System version 4.2.0) under the specified indications. |
    | KCMAR Metal Artifact Reduction (Bench Testing) | Qualitative comparison demonstrating metal artifact reduction between non-KCMAR and KCMAR processed images. | Demonstrated metal artifact reduction. |
    | KCMAR Implant Location Accuracy (Bench Testing) | Quantitative assessment of implant location accuracy in millimeters and degrees against system requirements. | Met all system-level requirements. |
    | 3D Long Scan (3DLS) Clinical Utility | Clinical utility of Standard 3DLS and SSD 3DLS statistically equivalent to the corresponding Standard acquisition mode of the predicate system, as judged by board-certified radiologists. | Statistically equivalent clinical utility compared to the corresponding Standard acquisition mode available in the predicate system (version 4.2.0). |
    | 3DLS Image Quality (Bench Testing) | Meet system-level requirements for 3D line pair, contrast, MTF, and geometric accuracy. | Met all system-level requirements. |
    | 3DLS Navigational Accuracy (Bench Testing) | Meet system-level requirements in millimeters. | Met all system-level requirements. |
    | Usability (3DLS, SSD, KCMAR) | Pass summative validation of critical tasks and new workflows with intended users in simulated use environments. | Passed summative validation, providing objective evidence of safety and effectiveness for intended users, uses, and environments. |
    | Dosimetry (SSD, 3DLS) | Dose accuracy (kV, mA, CTDI, DLP) meets system-level requirements for the new acquisition features. | All dosimetry testing passed system-level requirements. |
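The dosimetry metrics named in the last row (CTDI, DLP) are related by standard arithmetic: the dose-length product is the volume CTDI multiplied by the irradiated scan length. A minimal sketch with invented values, not figures from the submission:

```python
def dose_length_product(ctdi_vol_mgy: float, scan_length_cm: float) -> float:
    # DLP (mGy*cm) = CTDIvol (mGy) x irradiated scan length (cm).
    return ctdi_vol_mgy * scan_length_cm

# Illustrative values only -- not taken from the submission.
print(dose_length_product(10.0, 16.0))  # -> 160.0
```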

    2. Sample Size for the Test Set and Data Provenance

    • Spine Smart Dose (SSD) Clinical Equivalence:
      • Sample Size: 100 clinical image pairs.
      • Data Provenance: Described only as "clinical" images; whether the data were retrospective or prospective, and the country of origin, are not stated.
    • KCMAR Clinical Equivalence:
      • Sample Size:
        • Initial study: 40 image pairs from four cadavers (small, medium, large, and extra-large habitus).
        • Subsequent study: 33 image pairs from two cadavers (small and extra-large habitus).
      • Data Provenance: Cadavers (ex-vivo data). No country of origin specified.
    • 3D Long Scan (3DLS) Clinical Utility:
      • Sample Size: 45 paired samples from acquisitions of three cadavers (small, medium, and extra-large habitus). Two cadavers were instrumented with pedicle screw hardware.
      • Data Provenance: Cadavers (ex-vivo data). No country of origin specified.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • Spine Smart Dose (SSD) Clinical Equivalence: The document mentions a "board-certified neuroradiologist involving 100 clinical image pairs". The singular phrasing suggests one reader, but the exact count is not explicitly stated.
    • KCMAR Clinical Equivalence: "Board-certified radiologists" (plural).
    • 3D Long Scan (3DLS) Clinical Utility: "Board-certified radiologists" (plural).

    4. Adjudication Method for the Test Set

    The document does not explicitly state an adjudication method (e.g., 2+1, 3+1). It describes a "blinded review" for SSD and "clinical utility scores (1-5 scale)" for KCMAR and 3DLS, implying individual assessments that were then potentially aggregated or statistically compared.
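One plausible way individual 1-5 utility scores on paired images could be aggregated statistically is a paired sign test; the document does not specify the method, and the scores below are invented for illustration:

```python
from math import comb

def sign_test_p(pairs):
    # Two-sided exact sign test on (feature, predicate) score pairs; ties dropped.
    diffs = [a - b for a, b in pairs if a != b]
    n = len(diffs)
    k = sum(d > 0 for d in diffs)
    lo = min(k, n - k)
    # Tail probability under Binomial(n, 0.5), doubled for a two-sided test.
    tail = sum(comb(n, i) for i in range(lo + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Invented 1-5 utility scores for 8 image pairs (feature, predicate):
scores = [(4, 3), (5, 4), (4, 4), (3, 2), (5, 3), (4, 2), (3, 3), (5, 4)]
print(sign_test_p(scores))  # -> 0.03125
```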

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, if so, what was the effect size of how much human readers improve with AI vs without AI assistance

    The studies for SSD, KCMAR, and 3DLS involved multiple readers (board-certified radiologists/neuroradiologists) evaluating images.

    • SSD: Compared "O-arm™ O2 Imaging System 4.3.0 SSD images" to "O-arm™ O2 Imaging System 4.2.x Standard and Predicate High-Definition modes." The outcome was clinical equivalence, not an improvement in human reader performance with AI assistance. The document states that SSD leverages Machine Learning technology to reduce noise.
    • KCMAR: Compared images reconstructed "without KCMAR feature" to images "with KCMAR feature." The outcome was "statistically better" clinical value for KCMAR. This indicates that the feature itself (which uses an algorithm for metal artifact reduction) resulted in better images, which would indirectly benefit the reader, but it doesn't quantify an improvement in human reader performance directly.
    • 3DLS: Compared "Standard 3DLS and SSD 3DLS" to "corresponding Standard acquisition mode." The outcome was "statistically equivalent clinical utility." This specifically relates to the utility of the scan modes, not an AI-assisted interpretation by readers.

    Therefore, while MRMC-like studies were conducted to assess the performance of the features, the focus was on the characteristics of the images produced by the device (clinical equivalence/utility/better value) rather than quantifying an effect size of how much human readers improve with AI versus without AI assistance in their diagnostic accuracy or efficiency.
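Equivalence conclusions like those above are often supported statistically by two one-sided tests (TOST); the document does not say which method was used. A hedged z-approximation sketch with invented paired differences and an invented equivalence margin:

```python
from math import erf, sqrt

def norm_cdf(z: float) -> float:
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def tost_p(diffs, margin):
    # Two one-sided z-tests on paired differences: reject both
    # H0a: mean <= -margin and H0b: mean >= +margin to claim equivalence.
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    se = sqrt(var / n)
    p_lower = 1.0 - norm_cdf((mean + margin) / se)
    p_upper = norm_cdf((mean - margin) / se)
    return max(p_lower, p_upper)  # equivalence claimed if below alpha

# Invented paired score differences (feature minus predicate), +/-0.5 margin:
diffs = [0.05, -0.05, 0.1, -0.1, 0.0, 0.02, -0.02, 0.03, -0.03, 0.01] * 3
print(tost_p(diffs, margin=0.5) < 0.05)  # small p -> consistent with equivalence
```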

    6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) was done

    Yes, aspects of standalone performance were evaluated through bench testing.

    • SSD Bench Testing: Evaluated image quality parameters (3D Line pair, Contrast, MTF, Uniformity, Geometric accuracy) and navigational accuracy.
    • KCMAR Bench Testing: Qualitatively compared metal artifact reduction and quantitatively assessed implant location accuracy.
    • 3DLS Bench Testing: Verified system-level requirements for image quality (3D Line pair, Contrast, MTF, Geometric accuracy) and navigational accuracy.

    These bench tests assess the algorithmic output directly against defined performance metrics, independent of human interpretation.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    • Clinical Equivalence/Utility for SSD, KCMAR, 3DLS: Ground truth was established by expert assessment/consensus from board-certified neuroradiologists/radiologists providing clinical utility scores and making equivalence/superiority judgments.
    • Bench Testing: Ground truth was based on phantom measurements and objective system-level requirements for image quality, geometric accuracy, and navigational accuracy.

    8. The Sample Size for the Training Set

    The document states that the Spine Smart Dose (SSD) feature "leverages Machine Learning technology with existing O-arm™ images to achieve reduction in dose..." However, it does not specify the sample size of the training set used for this Machine Learning model.

    9. How the Ground Truth for the Training Set was Established

    For the Spine Smart Dose (SSD) feature, which uses Machine Learning, the document mentions "existing O-arm™ images." It does not explicitly state how the ground truth for these training images was established. Typically, for such denoising or image enhancement tasks, the "ground truth" might be considered the higher-dose, higher-quality images, with the ML model learning to reconstruct a similar quality image from lower-dose acquisitions. The document implies that the model's output (low-dose reconstruction) was then validated against expert opinion for clinical equivalence.
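The training setup hinted at above (low-dose inputs paired with higher-quality targets, scored by a loss such as MSE) can be illustrated with a toy example. Everything here is invented: a moving-average smoother stands in for the ML denoiser, and a noisy sine wave stands in for a low-dose acquisition.

```python
import math
import random

def moving_average(signal, k=5):
    # Toy "denoiser": a centered moving average stands in for the ML model.
    half = k // 2
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

random.seed(0)
clean = [math.sin(i / 10) for i in range(200)]        # stand-in "high-dose" target
noisy = [c + random.gauss(0, 1.0) for c in clean]     # stand-in "low-dose" input
denoised = moving_average(noisy, k=5)

# Training a real model would minimize a loss like MSE against the
# higher-quality target; here smoothing already reduces the error.
print(mse(noisy, clean) > mse(denoised, clean))  # -> True
```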


    K Number: K200074
    Date Cleared: 2020-04-24 (101 days)
    Regulation Number: 892.1650
    Device Name: O-arm O2 Imaging System

    Intended Use

    The O-arm O2 Imaging System is a mobile x-ray system designed for adult and pediatric patients weighing 60 lbs or greater and having an abdominal thickness greater than 16 cm, and is intended to be used where a physician benefits from 2D and 3D information of anatomical structures and objects with high x-ray attenuation such as bony anatomy and metallic objects.

    The O-arm O2 Imaging System is compatible with certain image guided surgery systems.

    Device Description

    The O-arm™ O2 Imaging System is a mobile x-ray system that provides 3D and 2D imaging. It was originally cleared for market under the original 510(k) K151000 and subsequently modified via special 510(k) K173664. The device is classified under primary product code OWB (secondary product codes OXO, JAA), ref. 21 CFR 892.1650.

    This submission for the O-arm™ O2 Imaging System 4.2.0 software release introduces the following features:

    • "2D Long Film" Imaging Protocol (Intraoperative Radiographic Scan)
    • Gantry Rotor Angle and Tilt Angle Display
    • User Access Enhancements

    The O-arm™ O2 Imaging System consists of two main assemblies that are used together:

    • The Image Acquisition System (IAS)
    • The Mobile View Station (MVS)

    The two units are interconnected by a single cable that provides power and signal data. The IAS has an internal battery pack that provides power for motorized transportation and gantry positioning. In addition, the battery pack is used to power the X-ray tank. The MVS has an internal UPS to support its function when mains power is disconnected.

    The O-arm™ O2 Imaging System operates off standard line voltage within the following voltages:

    • VAC 100, 120 or 240
    • Frequency 60Hz or 50Hz
    • Power Requirements 1440 VA
    AI/ML Overview

    The provided text describes a 510(k) premarket notification for the Medtronic O-arm™ O2 Imaging System 4.2.0 software. This submission aims to demonstrate substantial equivalence to a predicate device (O-arm™ O2 Imaging System 4.1.0 software).

    The key takeaway is that this submission modifies an existing imaging device, adding new features, rather than being a new AI/ML-driven diagnostic/CADe device that would typically have specific AI-driven acceptance criteria. Therefore, the "acceptance criteria" and "study" described in the document are primarily focused on demonstrating that the modified device remains safe and effective and performs as intended, similar to the predicate device, for its original intended use. There is no mention of AI/ML components in this submission.

    Given this context, I will address the questions as they relate to the information provided about the device's modifications and the performance testing conducted to support its substantial equivalence.

    1. A table of acceptance criteria and the reported device performance

    Based on the provided document, the acceptance criteria are implicitly tied to demonstrating substantial equivalence to the predicate device and showing that the new features function as intended without introducing new risks. The "reported device performance" is described through various testing types.

    | Acceptance Criteria (Implicit) | Reported Device Performance (as tested) |
    |---|---|
    | Maintain Intended Use: the device continues to fulfill its stated Indications for Use. | The Indications for Use for the O-arm™ O2 Imaging System 4.2.0 software are identical to the predicate device, with only a minor phrasing change ("fluoroscopic" removed, explained by the presence of multiple 2D modes). The device maintains its core purpose. |
    | Safety Conformity: the device meets relevant electrical, electromagnetic compatibility (EMC), radiation safety, and usability standards. | Tested to AAMI/ANSI ES 60601-1:2005+A1:2012 (Basic Safety & Essential Performance); IEC 60601-1-2:2014 (EMC); IEC 60601-1-3:2008+A1:2013 (Radiation Protection); IEC 60601-2-28:2010 (X-ray Source Assemblies Safety); IEC 60601-2-43:2010+A1:2017 (Specific Safety Requirements); software verification and validation testing; hardware verification; and a dosimetry report (measuring radiation dose for various modes). |
    | Performance of New Features: new features (2D Long Film, Gantry Rotor/Tilt Angle Display, User Access Enhancements) function as designed. | "2D Long Film" Imaging Protocol: added as an "Automatically stitched 2D Radiographic (Long Film)" feature, leveraging gantry motion for sequential image acquisition and stitching. Gantry Rotor Angle and Tilt Angle Display: added, providing a real-time display on the pendant. User Access Enhancements (LDAP, etc.): added, implementing cybersecurity and user-management functionality. |
    | Image Quality Equivalence: image quality of the modified device is comparable to the predicate device, especially for existing imaging protocols. | Image Quality Assessment: a "quantitative image quality assessment of the O-arm O2 Imaging System with 4.2.0 in comparison to the predicate O-arm™ O2 Imaging System with 4.1.0" demonstrated comparable image quality. |
    | Clinical Utility (New Features): the new features are clinically useful and do not negatively impact existing clinical utility. | O-arm™ Cadaver Image Pair Study: "evaluated the clinical utility of the images obtained using the O-arm™ O2 Imaging System with 4.2.0 compared to the images obtained using the predicate O-arm™ O2 Imaging System with 4.1.0." |
    | No New Risks: the modifications do not introduce new safety or effectiveness concerns. | The conclusion states: "These aspects, along with the functional testing conducted to the FDA recognized standards, demonstrate that O-arm™ O2 Imaging System with 4.2.0 software does not raise new risks of safety and effectiveness when compared to the predicate." |
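The "2D Long Film" feature described above relies on automated stitching of sequentially acquired images. As a hedged illustration of the general idea only (not Medtronic's algorithm), overlapping strips can be registered by searching for the overlap length that minimizes the squared difference, then blended; the function names and data are invented:

```python
def best_overlap(a, b):
    # Find the overlap length minimizing mean squared difference between
    # the tail of strip `a` and the head of strip `b`.
    def msd(k):
        return sum((x - y) ** 2 for x, y in zip(a[-k:], b[:k])) / k
    return min(range(1, min(len(a), len(b)) + 1), key=msd)

def stitch(a, b):
    # Concatenate two overlapping strips, averaging the overlap region.
    k = best_overlap(a, b)
    blended = [(x + y) / 2 for x, y in zip(a[-k:], b[:k])]
    return a[:-k] + blended + b[k:]

strip1 = [0, 1, 2, 3, 4, 5, 6]
strip2 = [5, 6, 7, 8, 9]          # overlaps strip1 by two samples
print(stitch(strip1, strip2))     # -> [0, 1, 2, 3, 4, 5.0, 6.0, 7, 8, 9]
```

Real stitching works on 2D detector images and must handle gantry-motion geometry; this 1D sketch only shows the register-then-blend idea.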

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    The document does not specify exact sample sizes for all tests beyond general statements like "Usability Testing was conducted" or "The Image Quality Assessment... provides a quantitative image quality assessment."

    The "O-arm™ Cadaver Image Pair Study" is the most relevant test mentioned for clinical utility:

    • Sample Size: Not explicitly stated in the provided text.
    • Data Provenance: Not explicitly stated (e.g., country of origin). A cadaver image pair study comparing device versions is likely a controlled, prospective study, but the document does not say so explicitly.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    This information is not provided in the excerpt. For the "O-arm™ Cadaver Image Pair Study," it is highly probable that medical professionals (e.g., radiologists, surgeons) evaluated image utility, but their number and specific qualifications are not mentioned.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    The document does not specify any adjudication method for the "O-arm™ Cadaver Image Pair Study" or other performance tests.

    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    • MRMC Study: The "O-arm™ Cadaver Image Pair Study" could plausibly have involved multiple readers evaluating images, but the document neither states that it was an MRMC study nor describes its design in detail.
    • AI Assistance: This device and the listed modifications do not involve AI/ML. The effect size of human readers improving with AI assistance is therefore not applicable and not discussed. The study's purpose was to compare the image utility of the new software version to the predicate software version of the imaging system itself.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    • Standalone Performance: This question is typically asked for AI/ML algorithms. Since the O-arm™ O2 Imaging System 4.2.0 software does not contain an AI/ML algorithm within the scope of this 510(k) submission, this concept is not applicable. The device is an imaging system, and its performance is evaluated as a system (hardware and software combined), not as a standalone algorithm. The "2D Long Film" feature is described as an automated stitching process which is a traditional image processing function, not an AI/ML algorithm requiring separate "standalone" validation.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    For the "O-arm™ Cadaver Image Pair Study," the "ground truth" would likely be the clinical utility or perceptual quality of the images as evaluated by medical professionals. This more closely resembles expert opinion/consensus on image quality and usefulness in a clinical context than a definitive "true positive/negative" based on pathology or outcomes data, since the study compares imaging system versions. The document does not explicitly define what "ground truth" means for this study.

    8. The sample size for the training set

    This device does not appear to utilize AI/ML algorithms that would necessitate a "training set." Therefore, this question is not applicable.

    9. How the ground truth for the training set was established

    As there is no mention of a "training set" for AI/ML, this question is not applicable.


    K Number: K173664
    Date Cleared: 2017-12-29 (30 days)
    Regulation Number: 892.1650
    Device Name: Medtronic O-arm O2 Imaging System

    Intended Use

    The O-arm O2 Imaging System is a mobile x-ray system, designed for 2D fluoroscopic and 3D imaging for adult and pediatric patients weighing 60 lbs or greater and having an abdominal thickness greater than 16cm, and is intended to be used where a physician benefits from 2D and 3D information of anatomic structures and objects with high x-ray attenuation such as bony anatomy and metallic objects.

    The O-arm O2 Imaging System is compatible with certain image guided surgery systems.

    Device Description

    The O-arm O2 Imaging System is a mobile x-ray system that provides 3D imaging as well as 2D fluoroscopic imaging. It was originally cleared for market under 510(k) K151000. The device is classified under primary product code OWB (secondary product codes OXO, JAA) ref 21 CFR 892.1650.

    This is a special submission for the addition of the following software features:

    • Angular Annotation
    • Easy Image Transfer
    • Enhance Dynamic Range
    • Cybersecurity Enhancements

    These changes are only changes in software. There are no changes related to radiation performance. There are no changes to the device hardware required for these software changes. These features are described in more detail below and in the Device Description. This software also includes defect corrections.

    The O-arm O2 Imaging System consists of two main assemblies that are used together:

    • The Image Acquisition System (IAS)
    • The Mobile View Station (MVS)

    The two units are interconnected by a single cable that provides power and signal data. The IAS has an internal battery pack that provides power for motorized transportation and gantry positioning. In addition the battery pack is used to power the X-ray tank. The MVS has an internal UPS to support its function when mains power is disconnected.

    The O-arm O2 operates off standard line voltage within the following voltages:

    • VAC 100, 120 or 240
    • Frequency 60Hz or 50Hz
    • Power Requirements 1440 VA
    AI/ML Overview

    This FDA 510(k) summary focuses on a software update (revision 4.1) for the Medtronic O-arm O2 Imaging System. The submission claims substantial equivalence to the previously cleared device and therefore does not include detailed clinical performance studies to establish new acceptance criteria. Instead, it relies on performance testing to demonstrate that the updated software does not negatively impact the device's intended performance.

    Here's a breakdown of the requested information based on the provided document:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly define new acceptance criteria directly related to clinical performance metrics (like sensitivity, specificity, accuracy) for the new software features. Because this is a special 510(k) for software changes, the acceptance criteria are primarily focused on maintaining the safety and effectiveness of the previously cleared device. The "reported device performance" in this context refers to the successful completion of testing according to recognized standards, indicating that the device continues to perform as intended and its safety and effectiveness are not compromised by the software changes.

    The performance testing conducted was against these standards to demonstrate that the product "will perform as intended according to the outlined design requirements."

    | Acceptance Criteria (Implied) | Reported Device Performance |
    |---|---|
    | Device performs as intended according to outlined design requirements (including new software features) and applicable recognized standards. | Performance testing was conducted in accordance with AAMI/ANSI ES 60601-1:2012 (Basic Safety & Essential Performance); IEC 60601-1-2:2007 (Electromagnetic Compatibility); IEC 60601-1-3:2008 (Radiation Protection); IEC 60601-2-28:2010 (X-ray Source Assemblies & Tube Assemblies Safety); and IEC 60601-2-43:2010 (X-ray Equipment for Interventional Procedures). A risk/hazard analysis was performed, with verification and/or validation activities based on that analysis using appropriate methods/tests and pass/fail criteria. Conclusion: "The O-arm O2 Imaging system is similar in technological characteristics, imaging performance and indications for use as the predicate devices listed. These aspects, along with the functional testing conducted to the FDA recognized standards, demonstrate that O-arm O2 Imaging System with 4.1 software does not raise new risks of safety and effectiveness when compared to the predicates." |
    | Cybersecurity enhancements effectively address identified risks. | Cybersecurity enhancements (removal of hardcoded passwords, added user authentication, software integrity check) were implemented within a risk/hazard analysis framework. |
    | New software features maintain the safety and effectiveness of the device as compared to the predicate. | Functional testing conducted to FDA recognized standards demonstrates no new risks to safety and effectiveness. |

    2. Sample Size Used for the Test Set and Data Provenance

    This document does not describe a clinical study with a "test set" in the traditional sense of evaluating diagnostic or clinical performance of the device's new software features on patient data. The "testing" mentioned refers to engineering and quality assurance tests against recognized standards. Therefore, specific details about sample size (e.g., number of patients/cases) for a clinical test set or data provenance (country, retrospective/prospective) are not provided as they are not relevant to this type of 510(k) submission.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    Not applicable. As noted above, there was no clinical "test set" or ground truth established by experts for the performance evaluation described in this document. The evaluation focuses on engineering performance, safety, and functionality.

    4. Adjudication Method for the Test Set

    Not applicable. There was no clinical "test set" requiring adjudication.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done. This type of study would typically be conducted to evaluate the impact of a device on human reader performance, which is not the focus of this software update notification.

    6. Standalone Performance Study

    The document does not describe a "standalone" performance study in the sense of evaluating the algorithm's performance on its own against a ground truth. The performance testing conducted was to verify that the upgraded system (hardware + new software) meets safety and performance standards for an imaging device.

    7. Type of Ground Truth Used

    Not applicable. For this special 510(k) submission focused on software updates, the "ground truth" refers to compliance with engineering standards and predefined design requirements, rather than clinical ground truth (e.g., pathology, expert consensus on images).

    8. Sample Size for the Training Set

    Not applicable. This document is a 510(k) submission for a software update to an existing medical imaging system. It does not describe the development of a new AI/machine learning algorithm that would require a dedicated training set. The software features added (Angular Annotation, Easy Image Transfer, Enhance Dynamic Range, Cybersecurity Enhancements) are functional improvements, not machine learning algorithms that are "trained" on data.

    9. How the Ground Truth for the Training Set Was Established

    Not applicable, as there was no training set for a new AI/ML algorithm.


    K Number: K151000
    Date Cleared: 2015-08-06 (113 days)
    Regulation Number: 892.1650
    Device Name: O-Arm O2 Imaging System

    Intended Use

    The O-arm® O2 Imaging System is a mobile x-ray system designed for 2D fluoroscopic and 3D imaging for adult and pediatric patients weighing 60 lbs or greater and having an abdominal thickness greater than 16cm, and is intended to be used where a physician benefits from 2D and 3D information of anatomic structures and objects with high x-ray attenuation such as bony anatomy and metallic objects.

    The O-arm® O2 Imaging System is compatible with certain image guided surgery systems.

    Device Description

    The O-arm® O2 Imaging System is a mobile x-ray system that provides 3D imaging as well as 2D fluoroscopic imaging. It was originally cleared for market in 2005 via K050996, with additional submissions in 2006 (K060344) and 2009 (K092564). The device is classified under primary product code OWB (secondary product codes OXO, JAA), ref. 21 CFR 892.1650.

    The O-arm® O2 Imaging System, also referred to as "O-arm® O2", adds an extended field-of-view imaging mode offering twice the lateral field of view of the prior design, providing clinicians further visualization options in larger anatomic regions and structures. It accomplishes this with essentially the same hardware design as described herein.

    The system consists of two parts: the O-arm® Image Acquisition System (IAS), comprising an x-ray generator, an amorphous silicon flat-panel x-ray detector, and the x-ray control user interface; and the Mobile View Station (MVS), comprising the image processors, a user interface for image and patient handling, and the viewing monitor.

    The O-arm® O2 Imaging System consists of two main assemblies that are used together during fluoroscopic imaging:
    • The Mobile View Station (MVS)
    • The Image Acquisition System (IAS)

    The two units are interconnected by a single cable that provides power and signal data. The O-arm® IAS has an internal battery pack that provides power for motorized transportation and gantry positioning. In addition the battery pack is used to power the X-ray tank. The MVS has an internal UPS to support its function when mains power is disconnected.

    The O-arm® operates off standard line voltage within the following voltages:
    • VAC 100, 120 or 240
    • Frequency 60Hz or 50Hz
    • Power Requirements 1440 VA
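    Because the apparent-power rating (1440 VA) is fixed across the supported line voltages, the worst-case RMS line current the facility circuit must supply follows from I = S / V. A minimal sketch using the values above (the helper name is ours, not from the specification):

    ```python
    def max_line_current_amps(apparent_power_va: float, line_voltage_vac: float) -> float:
        """Worst-case RMS line current drawn at a given supply voltage (I = S / V)."""
        return apparent_power_va / line_voltage_vac

    # 1440 VA rating evaluated at each supported line voltage
    for vac in (100, 120, 240):
        print(f"{vac} VAC -> {max_line_current_amps(1440, vac):.1f} A")
    # 100 VAC -> 14.4 A; 120 VAC -> 12.0 A; 240 VAC -> 6.0 A
    ```

    As expected, the lowest supported voltage (100 VAC) implies the highest line current, which is the figure that matters when sizing the branch circuit.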

    AI/ML Overview

    The provided text describes the O-arm® O2 Imaging System and its substantial equivalence to a predicate device, the O-arm® 1000. The information given does not detail specific acceptance criteria for performance metrics (such as sensitivity, specificity, accuracy) that would typically be presented for an AI/ML device, nor does it describe a study specifically designed to prove the device meets such criteria in the context of an AI/ML product.

    Instead, the document details performance testing related to electrical safety, electromagnetic compatibility, radiation protection, and software/hardware verification, which are standard for medical imaging devices. It also mentions an "Image Quality Assessment" and a "Cadaver Image Pair Study" for comparison with the predicate device, but without specific acceptance criteria or detailed results of those studies in terms of quantitative performance metrics.

    Therefore, many of the requested items (e.g., sample size for test set, number of experts, adjudication method, MRMC study, standalone performance, ground truth for training set) are not discernible from the provided text, as the submission appears to focus on demonstrating substantial equivalence to a predicate device through engineering and safety testing rather than a clinical performance study with specific AI-related metrics.

    Here is the information that can be extracted or reasonably inferred from the provided text, formatted as requested where possible:

    1. A table of acceptance criteria and the reported device performance

    Based on the provided text, the acceptance criteria are largely related to compliance with electrical, safety, radiological, and software/hardware engineering standards, and demonstrating image quality comparable to the predicate device. Specific quantitative performance metrics with defined acceptance thresholds (e.g., for diagnostic accuracy, sensitivity, specificity) are not explicitly stated in the provided document. The device performance is generally reported as "will perform as intended" or "meets all prescribed design inputs" through compliance testing and comparative studies.

    | Acceptance Criteria Category | Reported Device Performance |
    | --- | --- |
    | Electrical Safety | Compliant with AAMI/ANSI ES 60601-1:2012 |
    | Electromagnetic Compatibility | Compliant with IEC 60601-1-2:2007 |
    | Radiation Protection | Compliant with IEC 60601-1-3:2008, IEC 60601-2-28:2010, and IEC 60601-2-43:2010 |
    | Software Verification | Verification and validation testing confirmed the software performs as intended |
    | Hardware Verification | Hardware requirements identified for the system perform as intended |
    | Image Quality Assessment | Quantitative image quality assessment conducted in comparison to the predicate O-arm® 1000 device (specific results/metrics not provided) |
    | Dosimetry | Dosimetry measurements for various modes documented (specific results/metrics not provided) |
    | Usability | Usability testing conducted according to FDA guidance; users performed imaging functions under simulated use conditions |
    | Clinical Utility | O-arm® Cadaver Image Pair Study evaluated clinical utility compared to the predicate O-arm® 1000 and the reference Artis Zeego device (specific results/metrics not provided) |

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    • Test Set Sample Size:
      • The "O-arm® Cadaver Image Pair Study" is mentioned, which implies a test set of cadaver images. However, the specific sample size (number of cadavers or images) for this study is not provided.
      • For other engineering and safety tests, the "test set" would refer to the device itself or components under various test conditions, not patient data.
    • Data Provenance:
      • The "O-arm® Cadaver Image Pair Study" used cadaver images. Further details on the origin (e.g., country) or whether they were retrospective/prospective are not provided.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    This information is not provided in the document. The text does not describe a process for establishing ground truth, especially not with expert readers, for the comparative image quality or clinical utility studies.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    This information is not provided in the document.

    5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI assistance versus without it

    An MRMC study, particularly in the context of human readers improving with AI assistance, is not described in the document. The "O-arm® Cadaver Image Pair Study" is a comparative study of the device's images against a predicate and reference, not a study of human reader performance with or without AI assistance.

    6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    The device is an imaging system (hardware and associated software for image acquisition and reconstruction), not a standalone AI algorithm for interpretation. Therefore, a "standalone" AI algorithm performance study as typically understood is not applicable in this context. The closest would be the "Image Quality Assessment," which directly evaluates the system's output.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    The document does not explicitly state the type of ground truth used for any of its comparative or image quality studies. For an "Image Quality Assessment," the ground truth might be based on physical phantoms with known properties or established image quality metrics. For the "Cadaver Image Pair Study," the ground truth for "clinical utility" would likely refer to the ability to visualize specific anatomical structures or metallic objects, but how this was objectively established is not detailed.

    8. The sample size for the training set

    The device described is an imaging system (hardware and software) that performs 2D fluoroscopic and 3D imaging and image reconstruction. It is not presented as a machine learning inference algorithm that would typically undergo a "training" phase with a large dataset. Therefore, a "training set sample size" is not applicable in the context of this device description.

    9. How the ground truth for the training set was established

    As the device is an imaging system and not explicitly described as an AI/ML inference algorithm, the concept of a "training set" and its "ground truth" establishment is not applicable from the provided text.
