The VX1+ assists operators in the real-time manual or automatic annotation of 3D anatomical and electrical maps of human atria for the presence of multipolar intra-cardiac atrial electrograms exhibiting spatiotemporal dispersion during atrial fibrillation or atrial tachycardia.
The clinical significance of utilizing the VX1+ software to help identify areas with intra-cardiac atrial electrograms exhibiting spatiotemporal dispersion for catheter ablation of atrial arrhythmias, such as atrial fibrillation, has not been established by clinical investigations.
The VX1+ is a machine learning and deep learning-based algorithm designed to assist operators in the real-time manual or automatic annotation of 3D anatomical and electrical maps of the human heart for the presence of electrograms exhibiting spatiotemporal dispersion, i.e., dispersed electrograms (DEs).
The VX1+ device is a non-sterile, reusable medical device composed of a computing platform and a software application. VX1+ works with all existing 510(k)-cleared catheters that meet specific dimension requirements and with one of three data acquisition systems:
- two compatible EP recording systems (identical to the VX1, Volta Medical, K201298): the LabSystem Pro EP Recording System (Boston Scientific, K141185) or the MacLab CardioLab EP Recording System (General Electric, K130626),
- a 3D mapping system (new relative to the VX1): the EnSite X 3D mapping system (Abbott, K221213).
A connection cable is used to connect the corresponding data acquisition system to the VX1+ system, depending on the type of communication used:
- Unidirectional analog communication with the EP recording systems via a custom-made cable (two variants: DSUB, Octopus) and an Advantech PCI-1713U analog-to-digital converter, which acquires the analog data, digitizes it, and transmits the digital signals to the computer that hosts the VX1+ software.
- Bidirectional digital communication with the EnSite X 3D mapping system via an Ethernet cable (four lengths: 20, 10, 5, or 2 m), which transmits the digital signals directly to the computer.
The computer and its attached display are located outside the sterile operating-room area. The VX1+ software analyzes the patient's electrograms in real time to cue operators to intracardiac electrograms of interest in atrial regions harboring DEs, and also provides a cycle-length estimate from electrograms recorded with the mapping and coronary sinus catheters. The results of the analysis are presented graphically on the attached computer display and/or on a secondary medical screen or an operating-room widescreen. The identified regions of interest are tagged in the corresponding 3D mapping system either manually (all configurations) or automatically (only available with digital bidirectional communication with the EnSite X 3D mapping system).
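The document mentions a cycle-length estimation but does not describe how it is computed. As an illustration only, a common generic approach is to take the median interval between successive activation times detected on an electrogram channel; the sketch below is hypothetical and not the VX1+ method (the function name and example timestamps are invented):

```python
import numpy as np

def estimate_cycle_length(activation_times_ms):
    """Estimate atrial cycle length (ms) as the median interval
    between successive activation timestamps.

    activation_times_ms: sorted 1-D sequence of activation times in ms.
    Returns NaN if fewer than two activations are available.
    """
    t = np.asarray(activation_times_ms, dtype=float)
    if t.size < 2:
        return float("nan")
    intervals = np.diff(t)  # inter-activation intervals in ms
    return float(np.median(intervals))

# Hypothetical activations roughly every 180 ms
times = [0, 182, 360, 545, 721, 900]
print(estimate_cycle_length(times))  # → 179.0
```

The median is preferred over the mean in this kind of sketch because it is robust to occasional missed or double-counted activations.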
The provided text describes the acceptance criteria and a study for the Volta Medical VX1+ device. However, it does not contain a detailed table of acceptance criteria with specific performance metrics (e.g., sensitivity, specificity, accuracy, F1-score) and corresponding reported device performance, nor does it detail a multi-reader multi-case (MRMC) comparative effectiveness study.
Based on the available information, here's a breakdown of what can be extracted and what is missing:
Acceptance Criteria and Device Performance
The document describes non-clinical and clinical tests performed, implying certain underlying acceptance criteria were met for substantial equivalence to the predicate device (VX1). However, explicit quantitative acceptance criteria (e.g., "sensitivity > 90%") are not provided in the text. The reported device performance is described generally as "acceptably correlate" and "reliably assists."
Table of Acceptance Criteria and Reported Device Performance (as inferred and with missing specifics):
| Criterion Description (Inferred) | Acceptance Criteria (not explicitly stated in the document) | Reported Device Performance (from document) |
|---|---|---|
| Non-Clinical – Algorithm Performance (Dispersion Adjudication Correlation) | Not explicitly stated (e.g., a specific correlation coefficient or concordance rate). | VX1+ dispersion algorithm "acceptably correlate[s] with unlimited-time expert visual analysis" (replayed from VX1's 510(k) study). |
| Non-Clinical – Usability | Not explicitly stated (e.g., number of critical usability errors < X). | Usability evaluation "did not raise any safety issues and confirmed the relevance of the related risks identified." |
| Clinical – Reliability of Dispersion Detection & Auto-Tagging | Not explicitly stated (e.g., a specific agreement rate with human operators or ground truth). | "The results indicate that VX1+ reliably assists operators in the detection and auto-tagging of regions harboring dispersed electrograms during AF/AT." |
| Clinical – Safety Profile | Not explicitly stated (e.g., absence of critical adverse events). | "[with] no associated additional risks or procedure time." |
Study Details
- Sample sizes used for the test set and the data provenance:
- Clinical Study (OUS):
- Sample Size: 22 patients.
- Data Provenance: OUS (outside the US) clinical study; the text does not specify the country beyond "OUS." It is described as a prospective clinical study that "involved 1 center, 4 operators, and 22 patients" and was "aimed at evaluating the reliability of VX1+ detection of dispersed electrograms and automatic tagging function."
- Non-Clinical (Algorithm Performance):
- The text alludes to a "Reader Study described in VX1's 510(k) (K201298) and intended to show that the algorithm's adjudications acceptably correlate with unlimited-time expert visual analysis, was replayed with VX1+ dispersion algorithm." The specific sample size for this "replayed" test set is not provided in the current document, nor is its provenance explicitly stated, other than being "replayed" data.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- For the non-clinical algorithm performance, it refers to "unlimited-time expert visual analysis" for ground truth. The number and qualifications of these experts are not detailed in this document; they would presumably be in the predicate VX1's 510(k) (K201298).
- For the clinical study, the text states "4 operators" were involved. It's unclear if these operators are considered the "experts" for ground truth or if an independent "expert" review was performed. Their qualifications are not stated.
- Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- The document does not specify the adjudication method used for either the non-clinical re-analysis or the clinical study.
- Whether a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- A formal MRMC comparative effectiveness study demonstrating human reader improvement with AI assistance vs. without AI assistance is not explicitly described in this document. The clinical study aimed at evaluating the reliability of VX1+ detection and auto-tagging, and its assistance to operators, rather than directly measuring an improvement in human reader performance (e.g., diagnostic accuracy or speed).
- Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:
- Yes, the "Reader Study described in VX1's 510(k) (K201298) and intended to show that the algorithm's adjudications acceptably correlate with unlimited-time expert visual analysis, was replayed with VX1+ dispersion algorithm." This suggests a standalone evaluation of the algorithm's output against an expert-derived ground truth.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- The ground truth for the non-clinical algorithm evaluation was based on "unlimited-time expert visual analysis." This implies expert review/consensus.
- For the clinical study, the ground truth is implicitly tied to the "reliable assistance" to operators in identifying dispersed electrograms, suggesting it was established through the clinical workflow and potentially by the operating physicians themselves. The method of establishing definitive ground truth (e.g., independent adjudication, follow-up outcomes) is not explicitly stated.
- The sample size for the training set:
- The document does not provide the sample size for the training set used for the VX1+ machine and deep learning-based algorithm.
- How the ground truth for the training set was established:
- The document does not provide details on how the ground truth for the training set was established.