K Number
K151730
Date Cleared
2015-07-23

(27 days)

Product Code
Regulation Number
870.1290
Panel
CV
Reference & Predicate Devices
Intended Use

The Hansen Medical Magellan Robotic System and accessory components are intended to be used to facilitate navigation to anatomical targets in the peripheral vasculature and subsequently provide a conduit for manual placement of therapeutic devices.

The Magellan Robotic System is intended to be used with compatible Hansen Medical robotically steerable catheters.

Device Description

The Hansen Medical Magellan Robotic System and Accessory Components are designed to facilitate navigation to anatomical targets in the peripheral vasculature and subsequently provide a conduit for manual placement of therapeutic devices. The fundamental concept of the system is a master/slave control system that enables and visualizes positioning of a steerable catheter tip at a desired point inside the vasculature, while allowing the physician to remain seated and away from the x-ray radiation source. The modification to the Magellan Robotic System is a software update referred to as Magellan v1.9.1.

AI/ML Overview

This document is a 510(k) premarket notification for the Hansen Medical Magellan Robotic System and Accessory Components (K151730). It primarily details a software update (Magellan v1.9.1) to an already cleared device (K141614). The core argument for substantial equivalence relies on the fact that the modifications do not change the intended use, fundamental scientific technology, or operating principles.

As such, this submission does not describe a study to prove a device meets acceptance criteria in the way a new or significantly modified device submission might. Instead, it aims to demonstrate that a software update to an existing device does not degrade performance and maintains substantial equivalence.

Therefore, many of the requested elements (like sample size for test sets, number of experts for ground truth, MRMC studies, standalone performance details, training set size, etc.) are not applicable or not provided in this type of submission because the focus is on maintaining existing safety and effectiveness rather than establishing new performance benchmarks.

However, I can extract information related to the acceptance criteria and the study type that was mentioned:


1. Table of acceptance criteria and the reported device performance

Acceptance Criteria Type | Reported Device Performance
Software Verification Testing | All pre-determined acceptance criteria were met.
System Validation Testing | All pre-determined acceptance criteria were met.

Important Note: The document states that "All of the pre-determined acceptance criteria were met," but it does not explicitly list what those specific acceptance criteria were (e.g., specific thresholds for accuracy, reliability, or safety metrics). The document focuses on confirming that the updated software did not introduce new risks or deviations from the predicate device's expected performance.

2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

  • Sample Size: Not specified.
  • Data Provenance: Not specified. Given the nature of software verification and system validation, these would typically be internal laboratory tests.

3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

  • Not applicable/Not specified. This type of information is typically relevant for clinical studies or studies involving human interpretation of data, which was not the focus here. The performance was assessed through engineering and system-level tests.

4. Adjudication method (e.g., 2+1, 3+1, none) for the test set

  • Not applicable/Not specified. Adjudication methods are typically used in clinical trials where multiple human readers assess cases, which is not the case for this software update submission.

5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

  • No. An MRMC comparative effectiveness study was not done. This submission concerns a software update to a robotic navigation system, not an AI-assisted diagnostic tool for human readers.

6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

  • Yes, in a sense. The "Software Verification Testing" and "System Validation Testing" represent standalone evaluations of the updated software and system to confirm they perform as expected without human intervention impacting the robotic movement or calculations directly. However, the performance is evaluated against the system's designed specifications, not necessarily an "algorithm only" in the AI sense.

7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

  • For "Software Verification Testing" and "System Validation Testing," the ground truth would be based on the design specifications, functional requirements, and safety standards established for the device. For example, a navigation system's ground truth could be its ability to accurately move the catheter to a programmed position within a defined tolerance. It does not involve medical ground truth like pathology or expert consensus on a diagnosis.
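A verification check of the kind described above can be sketched in a few lines. This is a hypothetical illustration only: the function names and the 1.0 mm tolerance are assumptions for the sake of example, not values taken from the 510(k) submission, which does not disclose its specific acceptance criteria.

```python
import math

# Assumed spec limit for illustration; the actual acceptance
# criteria in K151730 are not disclosed in the summary.
TIP_POSITION_TOLERANCE_MM = 1.0

def position_error_mm(commanded, reached):
    """Euclidean distance (mm) between the commanded target and the
    position the catheter tip actually reached."""
    return math.dist(commanded, reached)

def within_spec(commanded, reached, tol=TIP_POSITION_TOLERANCE_MM):
    """Pass/fail against the pre-determined positional tolerance,
    i.e. the design specification serving as 'ground truth'."""
    return position_error_mm(commanded, reached) <= tol

# Example test case: commanded 3-D target vs. measured tip position (mm).
commanded = (10.0, 25.0, 40.0)
reached = (10.3, 24.8, 40.4)
print(within_spec(commanded, reached))  # True: error is about 0.54 mm
```

The point of the sketch is that pass/fail is decided by an engineering specification, not by clinical adjudication, which is why expert-consensus or pathology ground truth does not apply here.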

8. The sample size for the training set

  • Not applicable/Not specified. This document pertains to a software update to an existing robotic control system, not a machine learning or AI model that uses a training set in the conventional sense.

9. How the ground truth for the training set was established

  • Not applicable. (See #8)

§ 870.1290 Steerable catheter control system.

(a) Identification. A steerable catheter control system is a device that is connected to the proximal end of a steerable guide wire that controls the motion of the steerable catheter.

(b) Classification. Class II (performance standards).