K Number
K191660
Date Cleared
2019-07-20

(29 days)

Product Code
Regulation Number
870.1425
Panel
CV
Reference & Predicate Devices
Intended Use

The CARTO® 3 System is intended for catheter-based cardiac electrophysiological (EP) procedures. The CARTO® 3 System provides information about the electrical activity of the heart and about catheter location during the procedure. The system can be used on patients who are eligible for a conventional electrophysiological procedure. The system has no special contraindications.

Device Description

The CARTO® 3 EP Navigation System, Version 7.1 is a catheter-based atrial and ventricular mapping diagnostic system designed to acquire and analyze data points and to use this information to display 3D anatomical and electroanatomical diagnostic maps of the human heart. The location information needed to create the cardiac maps, together with the local electrograms, is acquired using specialized mapping catheters and reference devices. The system displays electrograms and cardiac maps based on the intracardiac signals received from the catheters. The CARTO® 3 System V7.1 uses the same two distinct types of location technology as the predicate device: magnetic sensor technology and Advanced Catheter Location (ACL) technology.
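As a rough illustration only (a minimal sketch with hypothetical names, not Biosense Webster's actual data model), the kind of data such a system aggregates can be pictured as acquired points that each pair a 3D catheter location with a local electrogram, from which an electroanatomical map is built:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class MappingPoint:
    """One acquired point: a 3D catheter-tip location plus its local electrogram."""
    position_mm: Tuple[float, float, float]  # location from magnetic / ACL sensing
    electrogram_mv: List[float]              # sampled intracardiac signal
    local_activation_time_ms: float          # annotation relative to a reference signal

@dataclass
class ElectroanatomicalMap:
    """Collection of points from which a 3D electroanatomical map would be rendered."""
    chamber: str
    points: List[MappingPoint] = field(default_factory=list)

    def add_point(self, point: MappingPoint) -> None:
        self.points.append(point)

    def activation_window_ms(self) -> Tuple[float, float]:
        """Earliest and latest local activation times (the span a LAT color scale would cover)."""
        times = [p.local_activation_time_ms for p in self.points]
        return min(times), max(times)
```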

The CARTO® 3 System V7.1 consists of the following hardware components:

  • Patient Interface Unit (PIU) and Cables
  • 3D Graphics Workstation
  • Wide-Screen monitors, keyboard, and mouse
  • Intracardiac In Port
  • Intracardiac Out Port
  • Power Supply
  • Patches Connection Box and Cables
  • Pedals
  • Location Pad

All hardware components of the CARTO® 3 System V7.1 are identical to those described for the predicate device.

AI/ML Overview

The provided text is a 510(k) summary for the Biosense Webster CARTO® 3 EP Navigation System, Version 7.1. This document primarily focuses on demonstrating substantial equivalence to a predicate device rather than detailing a specific clinical study with acceptance criteria for an AI/algorithm-driven diagnostic device.

Many of the requested details (acceptance criteria, study design, sample sizes, expert ground-truth adjudication, MRMC studies, standalone performance, and ground-truth establishment for training data specific to an AI/algorithm) are therefore not present in this document. The CARTO® 3 System is a navigation and mapping system for electrophysiological procedures, and its V7.1 update primarily involves software modifications to existing functions and usability improvements rather than a new AI diagnostic algorithm that would require detailed validation against ground truth, as is typical for imaging AI.

However, the information that is available is extracted below, and what the document does not provide is stated explicitly.


Analysis of Acceptance Criteria and Device Performance (based on provided text)

The document describes the CARTO® 3 EP Navigation System, Version 7.1, as having undergone "extensive bench and pre-clinical testing under simulated clinical conditions to verify the new and modified features and to demonstrate with regression testing that these modifications did not negatively affect existing features in the predicate and reference devices." It concludes that "All testing passed in accordance with appropriate test criteria and standards, and the modified device did not raise new questions of safety or effectiveness."

This indicates that the acceptance criteria were primarily related to functional verification, regression testing, and ensuring the modified device's performance was at least as good as, and did not negatively impact, the predicate device. Specific quantitative acceptance criteria or raw performance numbers are not provided in this 510(k) summary.

Therefore, a table of specific quantitative acceptance criteria and reported device performance (e.g., sensitivity, specificity, accuracy) cannot be created from this document. The performance is generally stated as "all testing passed."
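As a purely illustrative sketch (the actual test criteria and tolerances are not disclosed in the 510(k) summary), acceptance criteria of this kind typically reduce to pass/fail checks against predefined tolerances, for example a regression check that the modified build annotates the same recorded signals consistently with the baseline build:

```python
# Hypothetical regression check; all values and the 1 ms tolerance are illustrative.
BASELINE_LAT_MS = [12.0, 45.5, 78.2]   # annotations from the baseline / predicate software
MODIFIED_LAT_MS = [12.1, 45.4, 78.2]   # annotations from the modified software on the same signals
TOLERANCE_MS = 1.0

def test_lat_annotation_regression():
    """Modified build's local activation time annotations must stay within tolerance of the baseline."""
    for old, new in zip(BASELINE_LAT_MS, MODIFIED_LAT_MS):
        assert abs(new - old) <= TOLERANCE_MS

if __name__ == "__main__":
    test_lat_annotation_regression()
    print("regression check passed")
```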


Details Noted from the Document:

  1. A table of acceptance criteria and the reported device performance:

    • Not provided in detail. The document states: "All testing passed in accordance with appropriate test criteria and standards." No specific quantitative metrics (e.g., % accuracy, sensitivity, specificity, or specific error tolerances) are listed as acceptance criteria, nor are corresponding numerical results. The performance is generally concluded as not negatively affecting existing features and not raising new safety/effectiveness questions.
  2. Sample sizes used for the test set and the data provenance:

    • Not specified. The document mentions "extensive bench and pre-clinical testing under simulated clinical conditions." It does not provide the number of cases, patients, or specific data provenance (e.g., country, retrospective/prospective).
  3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Not specified. Given the nature of the device (navigation and mapping), ground truth for performance testing would likely involve physical accuracy measurements and functional verification against known standards, rather than expert interpretation of medical images. The document does not describe a process involving expert readers for ground truth establishment.
  4. Adjudication method for the test set:

    • Not applicable/Not specified. As there's no mention of expert interpretation or ground truth derived from multiple experts, an adjudication method is not described.
  5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance:

    • Not applicable/Not specified. This type of study is typically done for AI-assisted diagnostic imaging, where the AI provides interpretations or assists human readers. The CARTO® 3 System is a navigation and mapping system used during EP procedures, not a diagnostic imaging AI in the context of interpreting images or assisting human readers with a diagnostic task.
  6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:

    • Not explicitly described in terms of a standalone diagnostic algorithm performance. The testing encompassed "bench and pre-clinical testing" which implies functional verification of the system's capabilities (e.g., mapping accuracy, signal processing, display features). This is different from the standalone performance metrics (like AUC for classification) seen in diagnostic AI algorithms.
  7. The type of ground truth used:

    • Inferred to be functional measurements and physical verification. For a mapping and navigation system, ground truth would likely relate to the accuracy of 3D anatomical and electroanatomical mapping, catheter location, and signal acquisition. It would be established through precise physical measurements, calibration, and comparison against known standards in simulated environments, rather than expert consensus on medical images or pathology (a bench-style sketch of such a comparison follows this list).
  8. The sample size for the training set:

    • Not applicable/Not specified. This device is described as a "programmable diagnostic computer" that processes signals and displays maps. The description of modifications primarily involves merging existing functions and improving user experience ("LAT Histogram," "Map Consistency Display," "Parallel Mapping," "Advanced Reference Annotation," "Power line noise rejection," "GUI improvement"). This suggests software development and refinement rather than a machine learning model that requires a distinct "training set" for model parameters. The document does not indicate the use of a machine learning or deep learning algorithm requiring a separate training set.
  9. How the ground truth for the training set was established:

    • Not applicable/Not specified. As no training set for an AI/ML model is mentioned, this information is not relevant to the document provided.
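To make the "functional measurements and physical verification" ground truth in item 7 concrete, here is a hedged bench-style sketch (hypothetical positions and values; the actual protocol is not described in the document): system-reported catheter positions are compared against positions physically fixed by a test jig or phantom, and accuracy is summarized as mean and maximum error.

```python
import math
from statistics import mean

def localization_errors_mm(reported, known):
    """Per-point Euclidean error between system-reported and physically known positions."""
    return [math.dist(r, k) for r, k in zip(reported, known)]

# Illustrative phantom data: positions fixed by the test fixture vs. positions reported by the system.
known_positions = [(0.0, 0.0, 0.0), (15.0, 0.0, 0.0), (0.0, 25.0, 10.0)]
reported_positions = [(0.4, -0.2, 0.1), (15.3, 0.1, -0.3), (-0.2, 25.4, 10.2)]

errors = localization_errors_mm(reported_positions, known_positions)
print(f"mean error = {mean(errors):.2f} mm, max error = {max(errors):.2f} mm")
```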

In summary: The provided 510(k) summary for the CARTO® 3 EP Navigation System, Version 7.1, focuses on demonstrating substantial equivalence through functional and regression testing ("bench and pre-clinical testing") that verified the new features and confirmed that existing features were not negatively impacted. It does not present a detailed clinical study with quantitative performance metrics, specific acceptance criteria, or ground-truth establishment methods of the kind typically associated with validating an AI/algorithm-driven diagnostic device. The device is a navigation and mapping system, not a diagnostic AI in the sense addressed by the questions above.

§ 870.1425 Programmable diagnostic computer.

(a) Identification. A programmable diagnostic computer is a device that can be programmed to compute various physiologic or blood flow parameters based on the output from one or more electrodes, transducers, or measuring devices; this device includes any associated commercially supplied programs.

(b) Classification. Class II (performance standards).