The EnSite™ X EP System is indicated as a diagnostic tool in patients for whom electrophysiology studies have been indicated.
The EnSite™ X EP System provides information about the electrical activity of the heart and displays catheter location during conventional electrophysiological (EP) procedures.
EnSite™ X EP System Contact Force Software License:
When used with the TactiSys™ Quartz Equipment, the EnSite™ X EP System Contact Force Module is intended to provide visualization of force information from compatible catheters.
EnSite™ X EP System Surface Electrode Kit:
The EnSite™ X EP Surface Electrode Kit is indicated for use with the EnSite™ X EP System in accordance with the EnSite™ X EP System indications for use.
The EnSite™ X EP System is a catheter navigation and mapping system. It displays the three-dimensional (3-D) position of conventional and Sensor Enabled™ (SE) electrophysiology catheters, and it displays cardiac electrical activity both as waveform traces and as 3-D isopotential and isochronal maps of the cardiac chamber. The contoured surfaces of the 3-D maps are based on the anatomy of the patient's own cardiac chamber. The system creates a model by collecting and labeling anatomic locations within the chamber. A surface is created by moving a selected catheter to locations within a cardiac structure: as the catheter moves, points are collected at and between all electrodes on the catheter, and a surface is wrapped around the outermost points.
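The point-collection step described above can be sketched in code. This is a minimal, hypothetical illustration only: the function name `collect_points` and the use of linear interpolation between adjacent electrodes are assumptions for the sketch, not the system's actual proprietary sampling or surface-reconstruction method.

```python
import numpy as np

def collect_points(electrode_positions, n_between=3):
    """Collect mapping points at each electrode and at evenly spaced
    locations between adjacent electrodes along the catheter.

    Hypothetical sketch: the real system's sampling is proprietary; a
    surface (e.g., a hull or mesh) would then be wrapped around the
    outermost collected points.
    """
    pts = [np.asarray(p, dtype=float) for p in electrode_positions]
    cloud = list(pts)  # points collected at the electrodes themselves
    # Linearly interpolate extra points between each adjacent electrode pair.
    for a, b in zip(pts, pts[1:]):
        for t in np.linspace(0.0, 1.0, n_between + 2)[1:-1]:
            cloud.append((1.0 - t) * a + t * b)
    return np.array(cloud)
```

As the catheter is moved through the chamber, successive calls like this would accumulate a point cloud, and the chamber surface would then be fit around the outermost points.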
The provided document is a 510(k) summary for the EnSite™ X EP System, outlining its substantial equivalence to a predicate device. This type of submission focuses on demonstrating safety and effectiveness relative to an already legally marketed device, not necessarily on the novel AI-algorithm performance studies that might be found in a De Novo submission.
Therefore, the specific information requested about acceptance criteria and study details often associated with AI/ML device performance (like sample size for test sets, data provenance, expert ground truth adjudication, MRMC studies, standalone performance, and training set details) is not directly available in this document.
The document discusses "Design verification activities" and "Performance Testing of updated feature functionality" for the software updates, but these are general engineering and software validation tests rather than clinical performance studies demonstrating diagnostic accuracy with human readers or standalone AI performance.
Here's how to address the requested points based on the available information:
1. A table of acceptance criteria and the reported device performance
The document states that "Design verification activities were performed and met their respective acceptance criteria to ensure that the devices in scope of this submission are safe and effective." However, it does not provide a table specifying the explicit acceptance criteria for each software update or the detailed reported performance metrics against those criteria. It lists the types of testing, implying that the outcomes of these tests met their internal acceptance criteria.
2. Sample size used for the test set and the data provenance
Not explicitly stated for the "performance testing of updated feature functionality." The document mentions "Bench studies to evaluate substantial equivalence" and "Preclinical Validation Testing to confirm the system could meet user requirements." These usually involve in-vitro or simulated data, rather than large clinical test sets with specified patient populations. The provenance of such data (e.g., country of origin, retrospective/prospective) is not mentioned.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
Not applicable or not specified. This level of detail for expert ground truth is typically provided for diagnostic accuracy studies involving human interpretation of clinical data, which is not the primary focus of this 510(k) for software updates to an existing electrophysiology system.
4. Adjudication method for the test set
Not applicable or not specified.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
No MRMC comparative effectiveness study is mentioned. This submission is about software updates to an existing system, not the introduction of an AI algorithm requiring comparative effectiveness claims with human readers. The new features (e.g., calculated waveforms, activation direction arrows, wave speed maps, deflection direction indicators, real-time map points) are presented as direct display enhancements or functional improvements rather than AI-driven diagnostic assistance to human readers.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
This refers to an "EnSite™ X EP System" which is a catheter navigation and mapping system, intended to be used by trained medical professionals. It's an interactive diagnostic tool, not a standalone AI algorithm producing a diagnosis without human interaction. Therefore, a standalone (algorithm only) performance study as typically understood for AI/ML devices is not applicable or described here. Its function is to provide information for a human clinician.
7. The type of ground truth used
For the software updates, the "ground truth" would likely be defined by internal engineering specifications, established scientific principles of electrophysiology, and potentially comparisons against existing validated methods or simulations, rather than clinical 'ground truth' such as pathology or long-term outcomes data, which is more relevant for diagnostic accuracy claims. The validation would ensure the calculated waveforms, map displays, etc., are accurate representations of the underlying electrophysiological data according to accepted standards.
8. The sample size for the training set
Not applicable. The document describes software updates for an electrophysiology system, not a machine learning algorithm that undergoes a training phase with a specific dataset.
9. How the ground truth for the training set was established
Not applicable, as there is no mention of a machine learning model with a training set.
§ 870.1425 Programmable diagnostic computer.
(a)
Identification. A programmable diagnostic computer is a device that can be programmed to compute various physiologic or blood flow parameters based on the output from one or more electrodes, transducers, or measuring devices; this device includes any associated commercially supplied programs.
(b)
Classification. Class II (performance standards).