510(k) Data Aggregation (68 days)
EP-WORKMATE, EP-NURSEMATE, EP-NURSEMATE WITH PHYSIO MODULE
The WorkMate™ Claris™ System is indicated for use during clinical electrophysiology procedures.
The WorkMate Claris System is a computer-based electrophysiological recording and monitoring system that is used to capture, display, store, and retrieve surface and intracardiac electrical signals during electrophysiology studies. It consists of a computer, two 23" high-resolution monitors, a multi-channel signal amplifier and filtering system (signal conditioning unit), one or two catheter input modules (CIMs), a printer, and carts. The system may also be configured with an integrated EP-4™ Cardiac Stimulator and touch-screen computer monitor (cleared in K092913).
The WorkMate Claris System is connected to electrophysiology catheters that are guided into various locations within the heart, and to surface electrocardiogram (ECG) cables. Intracardiac and ECG signals are then acquired from electrodes on the indwelling catheters and ECG leads connected to the amplifier, which amplifies and conditions the signals before they are received by the WorkMate Claris System computer for display, measurement and storage.
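The amplification-and-conditioning stage described above can be illustrated with a minimal sketch. This is not the device's implementation; the gain and band-pass cutoffs (`gain`, `low_hz`, `high_hz`) are illustrative assumptions, not the WorkMate Claris System's actual settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def condition_signal(raw, fs, low_hz=30.0, high_hz=250.0, gain=1000.0):
    """Amplify and band-pass filter one raw electrogram channel.

    fs is the sampling rate in Hz. The gain and cutoff frequencies are
    illustrative placeholders, not the device's specifications.
    """
    nyq = fs / 2.0
    # Second-order Butterworth band-pass; filtfilt applies it forward
    # and backward for zero phase distortion.
    b, a = butter(2, [low_hz / nyq, high_hz / nyq], btype="band")
    return filtfilt(b, a, raw * gain)

# A 100 Hz component falls inside the pass band; a 5 Hz drift does not.
t = np.arange(0.0, 1.0, 0.001)  # 1 s at 1 kHz
passed = condition_signal(np.sin(2 * np.pi * 100 * t), fs=1000.0)
blocked = condition_signal(np.sin(2 * np.pi * 5 * t), fs=1000.0)
```

Zero-phase filtering (`filtfilt`) is used here because phase shifts would distort the timing measurements made downstream; a real-time system would instead use a causal filter with a known group delay.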
During the procedure, cardiac signals are acquired and an automated software waveform detector (trigger) performs online recognition of cardiac activation on preselected leads. Temporal interval measurements are computed on multiple channels on a beat-by-beat basis and dynamically displayed on the real-time display. Menu-driven software is utilized for data acquisition and analysis, interval posting, and instant data retrieval with waveform markers and intervals displayed.
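The trigger-and-interval workflow above can be sketched as a simple threshold detector followed by a beat-to-beat interval computation. This is a hedged illustration only: the threshold ratio, refractory period, and detection logic are assumptions for demonstration, not the system's proprietary algorithm.

```python
import numpy as np

def detect_activations(signal, fs, threshold_ratio=0.6, refractory_ms=200):
    """Return sample indices where the rectified signal crosses an
    adaptive threshold, enforcing a refractory period between beats.

    threshold_ratio and refractory_ms are illustrative parameters.
    """
    rectified = np.abs(signal - np.median(signal))
    threshold = threshold_ratio * rectified.max()
    refractory = int(fs * refractory_ms / 1000)
    activations, last = [], -refractory
    for i, v in enumerate(rectified):
        if v >= threshold and i - last >= refractory:
            activations.append(i)
            last = i
    return np.array(activations)

def beat_intervals_ms(activations, fs):
    """Convert successive activation indices to intervals in ms."""
    return np.diff(activations) / fs * 1000.0

# Synthetic trace: 1 kHz sampling with a spike every 800 ms (75 bpm).
fs = 1000
signal = np.zeros(5 * fs)
signal[::800] = 1.0
acts = detect_activations(signal, fs)
print(beat_intervals_ms(acts, fs))  # → [800. 800. 800. 800. 800. 800.]
```

A production trigger would adapt its threshold continuously and run per channel; here a single fixed pass keeps the beat-by-beat idea visible.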
Signals are also presented on a review monitor for measurement and analysis. Continuous capture of the digitized signals can be invoked, and the user can also retrieve and display earlier passages of the current study without interrupting the real-time display. The system can also acquire, display, and record data from other interfaced devices in use during the procedure, such as imaging devices and ablation generators.
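Retrieving earlier passages without interrupting live acquisition is the classic ring-buffer pattern. The sketch below is an assumption-laden illustration (the class name, capacity, and indexing scheme are invented for this example), not the system's storage design.

```python
from collections import deque

class CaptureBuffer:
    """Minimal sketch of continuous capture: new samples append to a
    bounded ring buffer while earlier passages can be read back at any
    time without pausing acquisition. Capacity is illustrative."""

    def __init__(self, capacity):
        self._buf = deque(maxlen=capacity)
        self._dropped = 0  # count of samples that fell off the front

    def append(self, sample):
        if len(self._buf) == self._buf.maxlen:
            self._dropped += 1
        self._buf.append(sample)

    def passage(self, start, end):
        """Return samples [start, end) by absolute index, if retained."""
        lo = max(start - self._dropped, 0)
        hi = max(end - self._dropped, 0)
        return list(self._buf)[lo:hi]

buf = CaptureBuffer(capacity=8)
for i in range(12):          # acquisition keeps running...
    buf.append(i)
print(buf.passage(6, 10))    # → [6, 7, 8, 9]  (review of an earlier passage)
```

In a real system the buffer would hold fixed-size signal frames backed by disk storage rather than individual Python integers, but the decoupling of writer (acquisition) from reader (review) is the same.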
The WorkMate Scribe Module consists of a PC, a touch-screen LCD monitor, and a cart, connected via Ethernet to a WorkMate Claris System. Vital signs measurements can be imported from an optional external Physiological Module (the Smiths Medical Advisor™ Vital Signs Monitor, herein referred to as the Physio Monitor). Patient data stored on the WorkMate Claris System can be reviewed, measured, and annotated, and real-time signals currently being acquired by the WorkMate Claris System can be viewed. The product is an add-on extension of the WorkMate Claris System that allows a second user to view and annotate a study in parallel with the System user.
The provided text describes the St. Jude Medical WorkMate Claris System, a computer-based electrophysiological recording and monitoring system, and its associated Scribe Module. However, it does not explicitly detail a study conducted to demonstrate the device's fulfillment of specific acceptance criteria in the manner requested.
Instead, the document focuses on the regulatory submission process (510(k)) and demonstrates substantial equivalence to predicate devices. The "Summary on Non-Clinical Testing" section mentions that the device was designed and tested to applicable safety standards and St. Jude Medical SOPs, including design controls and risk analysis. It states that "Design verification activities for mechanical and functional testing were performed with their respective acceptance criteria to ensure that the hardware and limited software modifications do not affect the safety or effectiveness of the device. All testing performed met the established performance specifications." This indicates that internal testing was conducted against performance specifications, which serve as acceptance criteria, but no specific study details are provided.
Therefore, many of the requested details cannot be extracted from this document, as a formal clinical or comparative effectiveness study with specified sample sizes, ground truth establishment, or expert adjudication, as commonly seen for AI/ML device evaluations, is not described.
Here's a breakdown of what can be gleaned and what is missing:
1. Table of Acceptance Criteria and Reported Device Performance
- Acceptance Criteria: The document mentions "established performance specifications" and implies that "design verification activities for mechanical and functional testing were performed with their respective acceptance criteria." However, the specific, quantitative acceptance criteria themselves are not listed.
- Reported Device Performance: Similarly, the document states "All testing performed met the established performance specifications," but the specific performance results that demonstrate this achievement are not provided.
| Acceptance Criteria (Inferred) | Reported Device Performance (Inferred) |
|---|---|
| Design verification activities for mechanical and functional testing performed to ensure hardware and software modifications do not affect safety or effectiveness. | All testing performed met the established performance specifications. |
| Adherence to applicable safety standards and St. Jude Medical SOPs (design controls, risk analysis). | Designed and tested to applicable safety standards and St. Jude Medical SOPs. |
2. Sample size used for the test set and the data provenance
Not explicitly stated in the document. The document refers to "design verification activities" and "bench testing," which implies internal testing rather than a clinical study with a "test set" in the context of data-driven performance evaluation.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
Not applicable/mentioned. The document does not describe a process of establishing ground truth using experts, as might be done for AI/ML performance evaluation. The testing appears to be primarily focused on meeting hardware and software functional specifications.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
Not applicable/mentioned. An adjudication method is typically used to establish ground truth in studies involving human interpretation or performance assessment, which is not detailed here.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance
No MRMC comparative effectiveness study is described. The device is a "Programmable Diagnostic Computer" for recording and monitoring, not an AI-assisted diagnostic tool that would typically be evaluated with an MRMC study.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
This document describes a medical device system, not a standalone algorithm. The "automated software waveform detector (trigger)" performs online recognition, which is an algorithmic function. The document states that "design verification activities... for... limited software modifications do not affect the safety or effectiveness," implying standalone software testing was part of the internal verification, but no detailed study or results focusing solely on this algorithmic performance are provided.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
Not explicitly mentioned for any "test set." The "ground truth" for the device's functional performance would have been defined by its established engineering specifications and expected operational parameters, as opposed to clinical "ground truth" in the diagnostic sense.
8. The sample size for the training set
Not applicable/mentioned. The device is not described as an AI/ML device that undergoes a training phase using a specific dataset.
9. How the ground truth for the training set was established
Not applicable/mentioned. (Refer to point 8).