510(k) Data Aggregation
(29 days)
DQK
The intended use of the CARTO™ 3 System is catheter-based cardiac electrophysiological (EP) procedures. The CARTO™ 3 System provides information about the electrical activity of the heart and about catheter location during the procedure. The system can be used on patients who are eligible for a conventional electrophysiological procedure. The system has no special contraindications.
The CARTO™ 3 EP Navigation System V8.0 is a catheter-based atrial and ventricular mapping system designed to acquire and analyze the navigation catheter's location and intracardiac ECG signals, and to use this information to display 3D anatomical and electroanatomical maps of the human heart. The location information needed to create the cardiac maps and the local electrograms are acquired using specialized mapping catheters and reference devices. The CARTO™ 3 System uses two distinct types of location technology – magnetic sensor technology and Advanced Catheter Location (ACL) technology.
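The description above does not explain how the two location technologies are combined. Purely as an illustration of the general idea behind hybrid magnetic/impedance localization, and not Biosense Webster's actual algorithm, the sketch below fits an affine calibration that maps impedance-derived (ACL-style) coordinates into the magnetic-sensor coordinate frame using electrodes measured by both technologies, then applies it to electrodes that have only impedance readings. All function names and the toy data are assumptions.

```python
import numpy as np

def fit_affine(imp_xyz, mag_xyz):
    """Least-squares affine map (A, b) such that imp @ A.T + b ~= mag.

    imp_xyz, mag_xyz: (N, 3) paired positions for electrodes that carry both
    an impedance-based and a magnetic-sensor location estimate (N >= 4).
    """
    n = imp_xyz.shape[0]
    X = np.hstack([imp_xyz, np.ones((n, 1))])            # homogeneous coordinates
    coef, *_ = np.linalg.lstsq(X, mag_xyz, rcond=None)   # shape (4, 3)
    return coef[:3].T, coef[3]                           # A (3x3), b (3,)

def locate(imp_only_xyz, A, b):
    """Map impedance-only electrode positions into the magnetic frame."""
    return imp_only_xyz @ A.T + b

# Toy example (assumed data): 8 dual-technology electrodes for calibration,
# then two sensorless electrodes are located in the magnetic frame.
rng = np.random.default_rng(0)
true_A, true_b = np.diag([1.1, 0.9, 1.05]), np.array([2.0, -1.0, 0.5])
imp = rng.normal(size=(8, 3))
mag = imp @ true_A.T + true_b + rng.normal(scale=0.01, size=(8, 3))
A, b = fit_affine(imp, mag)
print(locate(rng.normal(size=(2, 3)), A, b))
```

A real system has to compensate for non-uniform current fields, respiration, and drift, so the actual calibration is far more elaborate; the least-squares fit only captures the basic principle of anchoring an impedance-derived field to a magnetic reference.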
The CARTO™ 3 System V8.1 consists of the following hardware components:
- Patient Interface Unit (PIU)
- Workstation with Graphic User Interface (GUI)
- Wide-Screen monitors, keyboard, and mouse
- Intracardiac In Port
- Intracardiac Out Port
- Power Supply
- Patches Connection Box and Cables (PU)
- Pedals
- Location Pad (LP)
- Signal Processing Unit (SPU)
All hardware components of the CARTO™ 3 system V8.1 are the same as those found in the predicate device.
The provided FDA 510(k) clearance letter for the CARTO™ 3 EP Navigation System V8.1 does not contain the detailed information necessary to fully answer all aspects of your request regarding acceptance criteria and the study that proves the device meets them. The document primarily focuses on demonstrating substantial equivalence to a predicate device (CARTO™ 3 EP Navigation System V8.0) and outlines the V&V testing performed at a high level.
Specifically, the document does not report specific quantitative acceptance criteria or reported device performance metrics in a readily extractable table format. It states that "All tests were successfully completed and met the acceptance criteria" for various testing phases, but the acceptance criteria themselves are not provided. Similarly, actual performance metrics (e.g., accuracy values, false positive rates, etc.) are not listed.
Regarding the "study that proves the device meets the acceptance criteria," the document describes verification and validation testing, but this is not presented as a single, comprehensive "study" with a specific design (like an MRMC study or a standalone performance study) and reported results in the same way one might describe a clinical trial. Instead, it's a summary of different types of engineering and software testing.
Given these limitations, I will extract and infer information where possible based on the provided text, and explicitly state when information is not available in the document.
Overview of Device Acceptance and Performance (Based on Available Information)
The acceptance of the CARTO™ 3 EP Navigation System V8.1 is broadly based on the successful completion of various verification and validation (V&V) tests, ensuring the device meets its design specifications and performs as intended, especially with new features and existing functionalities. The primary "proof" of acceptance is the statement that "All tests were successfully completed and met the acceptance criteria," even if those criteria are not quantitatively detailed.
Since quantitative acceptance criteria and reported numerical performance are not explicitly provided, a table with specific metrics cannot be generated. The document focuses on conceptual and functional "acceptance."
Detailed Breakdown of Available Information:
1. A table of acceptance criteria and the reported device performance
Not explicitly provided in the document in a quantitative, tabular format.
The document states:
- "All tests were successfully completed and met the acceptance criteria" for "Proof of Design."
- "All system features were found to perform according to specifications and met the tests acceptance criteria" for "Functional verification."
- "All tests were successfully completed and met the acceptance criteria" for "Unit Tests."
- "All testing performed were successfully completed and met the acceptance criteria" for "Retrospective Validation Tests."
- "All test protocol steps were successfully completed and expected results were achieved" for "Animal Testing."
While these statements confirm the device met its internal acceptance criteria, the specific numerical values of these criteria (e.g., "accuracy > X mm," "sensitivity > Y%") and the actual measured performance values are not disclosed in this 510(k) letter.
Inferred Performance Claims:
- The device maintains "identical magnetic location sensor and ACL location accuracy" as the predicate device (V8.0). However, the specific accuracy values are not stated for either version.
2. Sample size used for the test set and the data provenance
- Test Set Sample Size: Not explicitly stated.
- For "Retrospective Validation Tests," it mentions "clinical recorded data from historic EP procedures." The number of procedures or specific data points is not provided.
- For "Animal Testing," it indicates "animal testing was conducted," but the number of animals or specific test cases is not provided.
- Data Provenance:
- Country of Origin: Not explicitly stated. The company Biosense Webster has facilities in Irvine, CA, USA, and Yokneam, Israel. The data could originate from clinical sites globally.
- Retrospective or Prospective:
- "Retrospective Validation Tests" explicitly used "clinical recorded data from historic EP procedures." This indicates retrospective data.
- "Animal Testing" would be considered prospective in the context of controlled experimental animal studies.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
Not explicitly stated. The document does not describe the process of establishing ground truth for any of the V&V tests, nor the involvement or qualifications of experts for this purpose.
4. Adjudication method for the test set
Not explicitly stated. Given that expert involvement for ground truth is not detailed, an adjudication method is also not mentioned.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and its effect size
No, an MRMC comparative effectiveness study was NOT done or reported. The document focuses on demonstrating substantial equivalence through technical V&V testing and software feature improvements, not on comparative effectiveness with human readers. The CARTO™ 3 System is a navigation system, not an AI for image interpretation that typically undergoes MRMC studies.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done
Yes, aspects of standalone performance were evaluated, though not explicitly labeled as such.
- "Proof of Design" and "Unit Tests" would inherently involve evaluating the device's (or its software components') performance against design specifications in a standalone manner, without direct human-in-the-loop interaction beyond setup and execution of the tests.
- The statement "identical magnetic location sensor and ACL location accuracy" implies a standalone assessment of the system's core navigational accuracy.
However, specific quantitative metrics for this standalone performance (e.g., location accuracy in mm, precision, etc.) are not provided.
7. The type of ground truth used
Not explicitly stated for specific tests.
Inferred types of ground truth based on the nature of the device and tests:
- Engineering Specifications/Reference Standards: For "Proof of Design," "Functional verification," and "Unit Tests," the ground truth would likely be defined by internal engineering design specifications, simulated environments, and established reference measurements. For accuracy testing of location, highly precise physical measurement systems or phantoms would be used as ground truth.
- Retrospective Clinical Data: For "Retrospective Validation Tests," ground truth would presumably come from existing clinical records of "historic EP procedures," although how this ground truth was established within those records (e.g., confirmed diagnoses, procedural outcomes, expert review) is not detailed.
- Direct Observation/Measurement in Animal Models: For "Animal Testing," ground truth would be based on direct measurements and observations within the animal during the simulated procedures.
8. The sample size for the training set
Not applicable/Not mentioned. The CARTO™ 3 System is described as a navigation system with improved software features (e.g., catheter support, legacy feature enhancements). It is not presented as an AI/ML model that undergoes a distinct "training set" development phase in the typical sense of deep learning models requiring large datasets for training. The changes appear to be more in line with traditional software development and feature integration.
9. How the ground truth for the training set was established
Not applicable/Not mentioned (as it's not described as an AI/ML system with a training set).
Summary of Limitations of the Document for this Request:
The provided FDA 510(k) clearance letter serves its purpose of demonstrating substantial equivalence based on the provided V&V testing summary. However, it is not a detailed technical report or clinical study publication that would typically include the specific quantitative acceptance criteria, full performance metrics, detailed sample sizes, expert qualifications, or ground truth methodologies you are requesting for a comprehensive analysis of a device's proven performance. The document implies successful adherence to internal specifications without detailing those specifications or the resultant performance values.
(99 days)
DQK
The EnSite X EP System is a suggested diagnostic tool in patients for whom electrophysiology studies have been indicated.
The EnSite X EP System provides information about the electrical activity of the heart and displays catheter location during conventional electrophysiological procedures.
The EnSite™ X EP System is a catheter navigation and mapping system. A catheter navigation and mapping system is capable of displaying the 3-dimensional (3-D) position of conventional and Sensor Enabled™ (SE) electrophysiology catheters, as well as displaying cardiac electrical activity as waveform traces and as three-dimensional (3D) isopotential and isochronal maps of the cardiac chamber.
The contoured surfaces of the 3D maps are based on the anatomy of the patient's own cardiac chamber. The system creates a model by collecting and labeling the anatomic locations within the chamber. A surface is created by moving a selected catheter to locations within a cardiac structure. As the catheter moves, points are collected at and between all electrodes on the catheter. A surface is wrapped around the outermost points.
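The letter does not name the surface-reconstruction algorithm. As a minimal sketch of "wrapping a surface around the outermost points", and under the assumption that a convex hull is an acceptable stand-in (commercial mapping systems use more sophisticated, concavity-preserving reconstructions), the example below builds a triangulated shell from simulated electrode locations.

```python
import numpy as np
from scipy.spatial import ConvexHull

# Simulated electrode positions collected as the catheter sweeps a chamber:
# an (n_points, 3) array of x/y/z coordinates in mm (assumed toy data).
rng = np.random.default_rng(1)
points = rng.normal(scale=20.0, size=(500, 3))

hull = ConvexHull(points)                 # wraps the outermost points
surface_points = points[hull.vertices]    # points lying on the wrapped surface
triangles = hull.simplices                # (n_faces, 3) vertex indices per face

print(f"{len(surface_points)} surface points, {len(triangles)} triangular faces, "
      f"enclosed volume ~ {hull.volume:.0f} mm^3")
```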
The provided FDA 510(k) clearance letter for the EnSite™ X EP System (K251234) details the device's regulatory pathway and general testing conducted. However, it does not contain the specific information required to populate a table of acceptance criteria and reported device performance. It focuses on the regulatory aspects, substantial equivalence to a predicate device, and the general types of testing performed (e.g., software verification, amplifier design verification, system design validation) to demonstrate that the device meets user requirements and its intended use.
The document states: "Design verification activities were performed and met their respective acceptance criteria to ensure that the devices in scope of this submission are substantially equivalent to the predicate device." However, the specific acceptance criteria (e.g., a numerical threshold for accuracy or precision) and the reported device performance values against those criteria are not presented in this public clearance letter.
Similarly, the letter does not provide details regarding:
- Sample sizes used for test sets (beyond stating "design verification" and "system design validation" were performed).
- Data provenance (country of origin, retrospective/prospective).
- Number of experts, their qualifications, or adjudication methods for establishing ground truth for any test set.
- Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done, or any effect size for human readers.
- Whether standalone (algorithm-only) performance was assessed.
- The type of ground truth used (expert consensus, pathology, outcomes data).
- The sample size for the training set.
- How ground truth for the training set was established.
This type of detailed performance data is typically found within the confidential 510(k) submission itself, not routinely published in the public clearance letter.
Therefore, based on the provided FDA 510(k) clearance letter for the EnSite™ X EP System, the following information can be extracted regarding the device's acceptance criteria and the study that proves it meets those criteria:
Key Takeaway: The provided FDA 510(k) clearance letter asserts that acceptance criteria were met through various design verification and validation activities, demonstrating substantial equivalence to a predicate device. However, it does not disclose the specific numerical acceptance criteria or the quantitative results of the device's performance against those criteria. The details below are based on what is stated or can be inferred from the document.
1. Table of Acceptance Criteria and Reported Device Performance
As per the provided document, specific numerical acceptance criteria and reported device performance data are not explicitly stated or detailed. The document generally states:
"Design verification activities were performed and met their respective acceptance criteria to ensure that the devices in scope of this submission are substantially equivalent to the predicate device."
And
"System Design Validation to confirm the system could meet user requirements and its intended use after modifications"
Without specific numerical cut-offs or performance metrics (e.g., accuracy, precision, error rates), a table cannot be populated as requested. The clearance indicates that internal testing demonstrated the device met pre-defined acceptance criteria, but those criteria and the actual performance results are not publicly available in this document.
Acceptance Criteria Category (Presumed) | Specific Acceptance Criteria (Not specified in document) | Reported Device Performance (Not specified in document) | Met? (Inferred from clearance) |
---|---|---|---|
System Functionality | (e.g., Catheter position display accuracy, Cardiac electrical activity waveform fidelity, 3D map creation accuracy) | (Specific quantitative results, e.g., X mm accuracy) | Yes (Implied by clearance) |
Safety & Effectiveness | (e.g., Conformity to electromagnetic compatibility, software robustness, risk mitigation effectiveness) | (e.g., Passes all EMC tests, no critical software bugs identified) | Yes (Implied by clearance) |
User Requirements | (e.g., System usability, interface responsiveness) | (e.g., Demonstrates ability to meet intended use) | Yes (Implied by clearance) |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size Used for Test Set: Not specified in the provided document. The document mentions "Design verification activities" and "System Design Validation" but does not give the number of cases, patients, or data points used for these tests.
- Data Provenance (e.g., country of origin of the data, retrospective or prospective): Not specified in the provided document.
3. Number of Experts Used to Establish Ground Truth and Qualifications
- Number of Experts: Not specified.
- Qualifications of Experts: Not specified.
- (For electrophysiology systems, ground truth is commonly established by electrophysiologists, but this document does not confirm that.)
4. Adjudication Method for the Test Set
- Adjudication Method: Not specified. (e.g., 2+1, 3+1, none)
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
- MRMC Study: No indication that an MRMC comparative effectiveness study was performed or required for this 510(k) clearance. The focus of this submission is on substantial equivalence to a predicate device, which often relies on non-clinical testing for software updates or minor changes, rather than clinical efficacy studies comparing human readers with and without AI assistance.
- Effect Size of Human Readers Improvement with AI vs. Without AI Assistance: Not applicable/Not provided, as an MRMC study is not mentioned.
6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done
- Standalone Performance: The document describes "Software Verification at unit, software and system level" and "Amplifier Design Verification," which are types of standalone-like algorithmic or component-level testing. However, the exact metrics and results for pure "algorithm-only" performance (e.g., for automated mapping or analysis features if present) are not detailed. The system is described as a "diagnostic tool" that "provides information" and "displays catheter location," implying human interaction is integral.
7. The Type of Ground Truth Used
- Type of Ground Truth: Not explicitly stated. Given the nature of an EP system, ground truth would likely involve a combination of:
- Validated phantom models: For physical accuracy of catheter tracking and mapping.
- Clinical expert consensus: For validating the interpretation of electrical activity and the accuracy of generated 3D maps or anatomical models.
- Reference measurements: From other validated systems or direct measurements during testing.
- The document implies ground truth was used for "Design verification" and "System Design Validation," which "confirm the system could meet user requirements."
8. The Sample Size for the Training Set
- Training Set Sample Size: Not applicable/Not specified. This 510(k) is for a software update (v5.0) to an existing system (EnSite™ X EP System, predicate K242016). The document describes changes related to compatibility with new catheters and ultrasound systems, rather than the development of entirely new AI/ML algorithms requiring a "training set" in the conventional sense of deep learning. While software is involved, the primary testing discussed is verification and validation, not model training.
9. How the Ground Truth for the Training Set Was Established
- Ground Truth for Training Set Establishment: Not applicable/Not specified, as the document does not indicate the use of a "training set" in the context of machine learning model development. The 'ground truth' concept would apply more to the test and validation steps, as discussed in point 7.
(226 days)
DQK
HemoSphere Advanced Monitor with HemoSphere Swan-Ganz Module: The HemoSphere advanced monitor when used with the HemoSphere Swan-Ganz module and Edwards Swan-Ganz catheters is indicated for use in adult and pediatric critical care patients requiring monitoring of cardiac output (continuous [CO] and intermittent [iCO]) and derived hemodynamic parameters in a hospital environment. Pulmonary artery blood temperature monitoring is used to compute continuous and intermittent CO with thermodilution technologies. It may also be used for monitoring hemodynamic parameters in conjunction with a perioperative goal directed therapy protocol in a hospital environment. Refer to the Edwards Swan-Ganz catheter and Swan-Ganz Jr catheter indications for use statements for information on target patient population specific to the catheter being used. Refer to the Intended Use statement for a complete list of measured and derived parameters available for each patient population.
HemoSphere Advanced Monitor with HemoSphere Oximetry Cable: The HemoSphere Advanced Monitor when used with the HemoSphere Oximetry Cable and Edwards oximetry catheters is indicated for use in adult and pediatric critical care patients requiring monitoring of venous oxygen saturation (SvO2 and ScvO2) and derived hemodynamic parameters in a hospital environment. Refer to the Edwards oximetry catheter indications for use statement for information on target patient population specific to the catheter being used. Refer to the Intended Use statement for a complete list of measured and derived parameters available for each patient population.
HemoSphere Advanced Monitor with HemoSphere Pressure Cable: The HemoSphere advanced monitor when used with the HemoSphere pressure cable is indicated for use in adult and pediatric critical care patients in which the balance between cardiac function, fluid status, vascular resistance and pressure needs continuous assessment. It may be used for monitoring of hemodynamic parameters in conjunction with a perioperative goal directed therapy protocol in a hospital environment. Refer to the Edwards FloTrac sensor, FloTrac Jr sensor, Acumen IQ sensor, and TruWave disposable pressure transducer indications for use statements for information on target patient populations specific to the sensor/transducer being used. The Edwards Acumen Hypotension Prediction Index software feature provides the clinician with physiological insight into a patient's likelihood of future hypotensive events and the associated hemodynamics. The Acumen HPI feature is intended for use in surgical or non-surgical patients receiving advanced hemodynamic monitoring. The Acumen HPI feature is considered to be additional quantitative information regarding the patient's physiological condition for reference only and no therapeutic decisions should be made based solely on the Acumen Hypotension Prediction Index (HPI) parameter. Refer to the Intended Use statement for a complete list of measured and derived parameters available for each patient population.
HemoSphere Advanced Monitor with Acumen Assisted Fluid Management Feature and Acumen IQ Sensor: The Acumen Assisted Fluid Management (AFM) software feature provides the clinician with physiological insight into a patient's estimated response to fluid therapy and the associated hemodynamics. The Acumen AFM software feature is intended for use in surgical patients >=18 years of age, that require advanced hemodynamic monitoring. The Acumen AFM software feature offers suggestions regarding the patient's physiological condition and estimated response to fluid therapy. Acumen AFM fluid administration suggestions are offered to the clinician; the decision to administer a fluid bolus is made by the clinician, based upon review of the patient's hemodynamics. No therapeutic decisions should be made based solely on the Assisted Fluid Management suggestions. The Acumen Assisted Fluid Management software feature may be used with the Acumen AFM Cable and Acumen IQ fluid meter.
HemoSphere Advanced Monitor with HemoSphere Technology Module and ForeSight Oximeter Cable: The non-invasive ForeSight oximeter cable is intended for use as an adjunct monitor of absolute regional hemoglobin oxygen saturation of blood under the sensors in individuals at risk for reduced-flow or no flow ischemic states. The ForeSight Oximeter Cable is also intended to monitor relative changes of total hemoglobin of blood under the sensors. The ForeSight Oximeter Cable is intended to allow for the display of StO2 and relative change in total hemoglobin on the HemoSphere advanced monitor.
- When used with large sensors, the ForeSight Oximeter Cable is indicated for use on adults and transitional adolescents >=40 kg.
- When used with medium sensors, the ForeSight Oximeter Cable is indicated for use on pediatric subjects >=3 kg.
- When used with small sensors, the ForeSight Oximeter Cable is indicated for cerebral use on pediatric subjects
The HemoSphere Advanced Monitor was designed to simplify the customer experience by providing one platform with modular solutions for all hemodynamic monitoring needs. The user can choose from available optional sub-system modules or use multiple sub-system modules at the same time. This modular approach provides the customer with the choice of purchasing and/or using specific monitoring applications based on their needs. Users are not required to have all of the modules installed at the same time for the platform to function.
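The indications above name thermodilution as the technology behind continuous and intermittent cardiac output but give no computational detail. The sketch below is a textbook Stewart-Hamilton calculation for bolus (intermittent) thermodilution, not Edwards' proprietary implementation; the lumped computation constant and the synthetic washout curve are assumptions.

```python
import numpy as np

def thermodilution_co(blood_temp_c, time_s, injectate_temp_c, baseline_temp_c,
                      injectate_volume_ml=10.0, computation_constant=1.08):
    """Bolus (intermittent) cardiac output via the Stewart-Hamilton equation.

    blood_temp_c : sampled pulmonary-artery temperature washout curve (deg C)
    time_s       : matching sample times (s)
    computation_constant : assumed lumped constant; in practice it folds in
        injectate/blood density and specific-heat ratios plus catheter
        dead-space corrections supplied by the catheter manufacturer.
    Returns cardiac output in L/min.
    """
    delta_t = np.clip(baseline_temp_c - np.asarray(blood_temp_c), 0.0, None)
    # Area under the temperature-drop curve (trapezoidal rule), in deg C * s.
    area = float(np.sum(0.5 * (delta_t[1:] + delta_t[:-1]) * np.diff(time_s)))
    numerator = injectate_volume_ml * (baseline_temp_c - injectate_temp_c) * computation_constant
    return (numerator / area) * 60.0 / 1000.0            # mL/s -> L/min

# Synthetic washout curve: a ~0.6 deg C dip lasting a few seconds after injection.
t = np.linspace(0.0, 20.0, 400)
curve = 37.0 - 0.6 * np.exp(-((t - 5.0) ** 2) / 4.0)
print(f"CO ~ {thermodilution_co(curve, t, injectate_temp_c=20.0, baseline_temp_c=37.0):.1f} L/min")
```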
The provided FDA 510(k) clearance letter and summary for the Edwards Lifesciences HemoSphere Advanced Monitor (HEM1) and associated components outlines the device's indications for use and the testing performed to demonstrate substantial equivalence to predicate devices. However, it does not contain the detailed acceptance criteria or the specific study results (performance data) in the format typically required to answer your request fully, especially for acceptance criteria and performance of an AI/algorithm-based feature like the Hypotension Prediction Index (HPI) or Assisted Fluid Management (AFM).
The document states:
- "Completion of all verification and validation activities demonstrated that the subject devices meet their predetermined design and performance specifications."
- "Measured and derived parameters were tested using a bench simulation. Additionally, system integration and mechanical testing was successfully conducted to verify the safety and effectiveness of the device. All tests passed."
- "Software verification testing was conducted, and documentation was provided per FDA's Guidance for Industry and FDA Staff, "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices". All tests passed."
This indicates that internal performance specifications were met, but the specific metrics, thresholds, and study designs for achieving those specifications are not detailed in this public summary.
Therefore, I cannot populate the table with specific numerical performance data against acceptance criteria for the HPI or AFM features, nor can I provide details on sample size, expert ground truth establishment, or MRMC studies, as this information is not present in the provided text.
The text primarily focuses on:
- Substantial equivalence to predicate devices.
- Indications for Use for various HemoSphere configurations and modules.
- Description of software and hardware modifications (e.g., integration of HPI algorithm, new finger cuffs).
- General categories of testing performed (Usability, System Verification, Electrical Safety/EMC, Software Verification) with a blanket statement that "All tests passed."
Based on the provided document, here's what can and cannot be stated:
1. A table of acceptance criteria and the reported device performance
Cannot be provided with specific numerical data or thresholds from the given text. The document only states that "all verification and validation activities demonstrated that the subject devices meet their predetermined design and performance specifications." No specific acceptance criteria values (e.g., "Accuracy > X%", "Sensitivity > Y%", "Mean Absolute Error < Z") or the corresponding measured performance results are provided.
(105 days)
DQK
The Globe PF System is indicated for catheter-based cardiac anatomical and electrophysiological mapping and stimulation of cardiac tissue.
The Globe Pulsed Field System (Globe PF System) comprises the following components and accessories to support anatomical and electrophysiological mapping, and pacing stimulation of cardiac tissue:
• Globe Controller: Used for the acquisition and processing of signals for cardiac anatomical and electrophysiological mapping, generation of mapping energy and stimulation pulses.
• Globe Workstation: A PC workstation configured with the Globe Software, which the clinician uses to assess contact between the mapping catheter electrodes and the atrial wall, map the atrial electrical activity, and apply stimulation pulses for diagnostic purposes.
• Globe Positioning System (GPS™) Electrodes and GPS Cable: Surface electrodes and cables for localization of the mapping catheter.
The provided FDA 510(k) clearance letter and summary for the Globe® Pulsed Field System do not contain the detailed information required to answer all parts of your request. This document primarily focuses on establishing substantial equivalence to a predicate device based on intended use, indications for use, and a high-level comparison of technological characteristics.
Specifically, the document does not include:
- Specific acceptance criteria values (e.g., minimum sensitivity, specificity, or accuracy targets).
- The reported device performance against such criteria.
- Detailed information about the study design for clinical or performance evaluation (e.g., test set sample size, provenance, expert qualifications, ground truth establishment methods, or whether MRMC studies were conducted).
- Training set details.
Therefore, I can only provide information directly extractable from the given text.
Here's what can be extracted and what is not available:
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria | Reported Device Performance | Comments |
---|---|---|
Specific quantitative performance metrics (e.g., accuracy, sensitivity, specificity for mapping/stimulation) | NOT PROVIDED | The submission states "The test results demonstrate that the Globe PF System meets the performance criteria for its intended use" but does not specify what those criteria are or the quantitative results. |
Bench testing | Meets performance criteria | Confirmed to be performed. |
Biocompatibility testing | Meets performance criteria | Confirmed to be performed. |
Summative usability testing | Meets performance criteria | Confirmed to be performed. |
Electrical safety and EMC testing | Meets performance criteria | Confirmed to be performed. |
Software verification and validation testing | Meets performance criteria | Confirmed to be performed. |
Cybersecurity testing | Meets performance criteria | Confirmed to be performed. |
Packaging validation | Meets performance criteria | Confirmed to be performed. |
Does not raise new questions on safety or effectiveness compared to the predicate device | Concluded by FDA | This is the overarching "acceptance" by the FDA for 510(k) clearance. |
2. Sample size used for the test set and the data provenance
- Sample Size: NOT PROVIDED. The document mentions "performance testing" but does not specify the sample size for any clinical or test data used to evaluate the device.
- Data Provenance: NOT PROVIDED. No information is given regarding the country of origin of data or whether it was retrospective or prospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- NOT PROVIDED. The document does not detail any expert involvement for ground truth establishment in performance testing.
4. Adjudication method for the test set
- NOT PROVIDED. No information is available regarding any adjudication methods (e.g., 2+1, 3+1) for establishing ground truth or evaluating test results.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
- NOT PROVIDED. The document does not mention any MRMC comparative effectiveness study or any evaluation of human reader improvement with AI assistance. The device description focuses on its mapping and stimulation capabilities, not AI-assisted interpretation.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- NOT PROVIDED. While software verification and validation were performed, the document does not specify whether "standalone" performance (without human-in-the-loop) was a distinct part of the performance evaluation, or what specific algorithms were evaluated in such a manner. The device is a "Programmable diagnostic computer" that aids clinicians.
7. The type of ground truth used
- NOT PROVIDED. The general "performance testing" and "software verification and validation" are mentioned, but the specific type of ground truth (e.g., expert consensus, pathology, outcomes data) used for evaluation is not described. For the CONTACT and FLOW maps, it mentions they are "based on the same principles of operation as the reference device (Swan-Ganz catheter, K160084)" and that "the scientific methods used to evaluate the safety and effectiveness... are adequate." This suggests a comparison to established methods or a reference standard, but not explicit "ground truth" as you might see for diagnostic classifications.
8. The sample size for the training set
- NOT PROVIDED. The document does not mention a "training set" or any details about it. This submission is for a medical device that includes software, but it doesn't specify if it employs machine learning or requires a distinct "training set" in the common understanding of AI/ML development.
9. How the ground truth for the training set was established
- NOT PROVIDED. As no training set is mentioned, naturally, no information on its ground truth establishment is available.
Summary of Device and Study Information (based on available text):
- Device Name: Globe® Pulsed Field System
- Intended Use/Indications for Use: Catheter-based cardiac anatomical and electrophysiological mapping and stimulation of cardiac tissue.
- Study Type: Performance testing (bench, biocompatibility, usability, electrical safety, EMC, software V&V, cybersecurity, packaging validation) to demonstrate substantial equivalence to a predicate device.
- Predicate Device: Affera Integrated Mapping System; Impedance Localization Patch Kit (K241828)
- Reference Device: Swan-Ganz Catheter (K160084) (for CONTACT and FLOW maps)
- Key Finding for Equivalence: "The Globe PF System meets the performance criteria for its intended use and does not raise new questions on safety or effectiveness compared to the predicate device."
The FDA 510(k) clearance letter and summary are high-level documents focused on regulatory substantial equivalence. They typically do not delve into the granular details of performance study designs, such as specific sample sizes, expert qualifications, or ground truth methodologies, to the extent that you are asking. Such detailed information would typically be found in the full 510(k) submission itself, which is not publicly released in its entirety.
(84 days)
DQK
The PhysCade System is intended for the analysis, display, and storage of cardiac electrophysiological data and maps for analysis by a physician.
The clinical significance of utilizing the PhysCade System to help identify areas with intra-cardiac atrial electrograms exhibiting local early activated potentials and other features of interest during atrial arrhythmias has not been established by clinical investigations.
The PhysCade™ System (PhysCade) is an artificial intelligence (AI) enabled device intended to assist clinicians in their management of patients with heart rhythm disorders (arrhythmias). PhysCade is a medical decision support system which post-processes electrograms (EGMs) collected inside the heart during electrophysiology (EP) mapping procedures using compatible diagnostic EP catheters. The PhysCade software has advanced algorithms that analyze the collected EGMs to provide information on regions of interest in the heart that may be useful to the clinician to support clinical decisions together with other available patient-related information.
PhysCade provides specialized analyses of data from a compatible multipolar catheter. The primary output (coPILOT) indicates the predominant earliest site of activation relative to the catheter electrode array. Supporting outputs include determining activation times of successive beats in the EGM signal on each electrode (coMAP), voltage at each electrode, signal quality, and sequences of propagation over multiple beats of the arrhythmia on the catheter.
The PhysCade System consists of a computer workstation, display, and custom software and is not connected to other devices or medical equipment.
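The description above characterizes coMAP and coPILOT only at the level of "activation times on each electrode" and "predominant earliest site of activation." The sketch below is a generic, assumed illustration of that idea (activation marked at the steepest negative unipolar deflection, then the earliest electrode reported); it is not the PhysCade algorithm, which is described as AI-based and proprietary.

```python
import numpy as np

def activation_times(egms, fs_hz):
    """Per-electrode activation time = instant of steepest negative dV/dt.

    egms: (n_electrodes, n_samples) unipolar electrograms for one beat window.
    Returns activation times in milliseconds from the start of the window.
    """
    dvdt = np.diff(egms, axis=1) * fs_hz      # V/s for each electrode
    idx = np.argmin(dvdt, axis=1)             # steepest negative slope
    return idx / fs_hz * 1000.0

def earliest_site(egms, fs_hz):
    """Electrode index and time of earliest activation.

    Illustrative stand-in only, not the coPILOT output.
    """
    at = activation_times(egms, fs_hz)
    e = int(np.argmin(at))
    return e, at[e]

# Toy data: 16 electrodes sampled at 1 kHz, activation sweeping across the array.
fs = 1000
t = np.arange(400) / fs
egms = np.stack([-np.tanh(40 * (t - 0.10 - 0.005 * e)) for e in range(16)])
print(earliest_site(egms, fs))   # -> electrode 0 activates first, at ~100 ms
```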
The provided FDA 510(k) Clearance Letter for the PhysCade System gives some information about the device's development and testing, particularly regarding its AI algorithms. However, it does not explicitly detail the acceptance criteria or the specific results of a study proving the device meets those criteria, nor does it provide the requested levels of detail for the ground truth establishment, expert qualifications, or MRMC study results.
Based on the information provided, here's what can be extracted and inferred for the requested points. Where information is not present, it is explicitly stated.
Acceptance Criteria and Device Performance Study
The document states, "Design Validation confirmed that the AI/ML system is accurate for its intended use." This indicates that performance testing was conducted. However, the specific quantitative acceptance criteria (e.g., a specific F1 score, accuracy, sensitivity, or precision threshold) and the resulting performance metrics the device achieved are not explicitly stated in this document.
Table of Acceptance Criteria and Reported Device Performance:
Performance Metric | Acceptance Criteria (Threshold) | Reported Device Performance |
---|---|---|
Specific quantitative metrics for AI/ML performance | NOT PROVIDED | NOT PROVIDED |
Overall AI/ML system accuracy | "Accurate for its intended use" (Qualitative) | "Accurate for its intended use" |
Study Details:
Sample sizes used for the test set and the data provenance:
- Test Set Sample Size:
- Number of electrograms: ~5 Million
- Number of patients: 109
- Data Provenance: The document does not explicitly state the country of origin.
- Retrospective or Prospective: The document does not explicitly state whether the data was collected retrospectively or prospectively. It references "datasets with the following characteristics" suggesting pre-collected (likely retrospective) data.
Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Number of Experts: NOT PROVIDED.
- Qualifications of Experts: NOT PROVIDED. The document only mentions that the device is for "analysis by a physician" and is "intended to be operated by nurses, technicians, physicians, or other personnel who are trained and approved by each treating facility." It does not specify who established the ground truth or their qualifications.
Adjudication method for the test set:
- Adjudication Method: NOT PROVIDED. The document does not describe how disagreements, if any, among experts establishing ground truth were resolved.
If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance:
- MRMC Study: The document does not mention a multi-reader multi-case (MRMC) comparative effectiveness study comparing human readers with and without AI assistance. The performance testing described focuses on the device's accuracy ("Design Validation confirmed that the AI/ML system is accurate for its intended use"), rather than human performance improvement.
- Effect Size: NOT APPLICABLE as no MRMC study is mentioned.
If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
- Standalone Performance: While not explicitly stated as "standalone performance," the phrase "Design Validation confirmed that the AI/ML system is accurate for its intended use" strongly implies a standalone performance evaluation of the algorithm's output against established ground truth. The device "provides information on regions of interest in the heart that may be useful to the clinician to support clinical decisions," suggesting the AI's output is evaluated directly.
The type of ground truth used:
- Type of Ground Truth: The document states the device "analyzes the collected EGMs to provide information on regions of interest in the heart that may be useful to the clinician." It further notes the "primary output (coPILOT) indicates the predominant earliest site of activation relative to the catheter electrode array." The ground truth would therefore pertain to the identification of these "regions of interest" or the "earliest site of activation" based on expert interpretation of electrophysiological data. The exact method of establishing this ground truth (e.g., expert consensus on EGM analysis, correlation with other diagnostic modalities, or clinical outcomes) is NOT EXPLICITLY STATED.
The sample size for the training set:
- Training Set Sample Size:
- Number of electrograms: ~15 Million
- Number of patients: 174
How the ground truth for the training set was established:
- Ground Truth Establishment for Training Set: The document does not explicitly state how the ground truth for the training set was established. It only describes the characteristics of the training, tuning, and test cohorts. It is commonly understood that ground truth for training data is established by similar means to test data (e.g., expert annotation), but this is not detailed here.
(28 days)
DQK
The OptiMap System is used to analyze electrogram (EGM) signals and display results in a visual format for evaluation by a physician in order to assist in the diagnosis of complex cardiac arrhythmias.
The OptiMap™ System is intended to be used during electrophysiology procedures on patients for whom an electrophysiology procedure has been prescribed and only by qualified medical professionals who are trained in electrophysiology.
The OptiMap™ System is an electrophysiology mapping system for assisting in the diagnosis of complex cardiac arrhythmias. The system consists of several hardware elements including an Amplifier, Cart, Monitor, and Workstation that contains proprietary mapping software. Signals from a 64-electrode mapping basket catheter are transmitted to the Workstation by the Amplifier, processed by the mapping software, and the results are displayed on the Monitor.
The OptiMap System utilizes proprietary algorithms to process intra-cardiac electrogram (EGM) signals from a 64-electrode unipolar mapping basket catheter. The software transforms the time domain waveform information from the electrodes into space domain information which calculates the Electrographic Flow™ (EGF™) vectors for Atrial Fibrillation. The system also has algorithms that display action potential wavefront propagation or Activation Cycle Path (ACP). The ACP maps are a reconstruction of the activation wavefront propagation and may be used to visualize organized atrial arrhythmias.
The software output includes static and dynamic EGF maps that graphically depict the temporal activity and location of sources of EGF with respect to the catheter electrodes. The software displays active sources of flow and passive flow phenomena, detects spatial and temporal stability of sources of flow and detects the prevalence of sources of flow. In addition, the software output includes ACP maps displaying isochrones and a wavefront animation for each cycle.
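Published descriptions of Electrographic Flow apply an optical-flow estimate to voltage frames interpolated from the 64 basket electrodes; the OptiMap implementation itself is proprietary. Under that assumption, the sketch below illustrates only the generic time-domain-to-space-domain step, solving a single least-squares (Lucas-Kanade-style) flow vector per pair of frames. The grid, toy frames, and function name are illustrative, not the EGF algorithm.

```python
import numpy as np

def flow_vector(frame_a, frame_b):
    """One dominant flow vector between two voltage frames.

    frame_a, frame_b: 2-D arrays of interpolated unipolar voltages on a grid
    (rows/columns standing in for the basket's spline/electrode directions).
    Solves least squares for (vx, vy) in Ix*vx + Iy*vy + It = 0.
    """
    iy, ix = np.gradient(frame_a)              # spatial gradients
    it = frame_b - frame_a                     # temporal gradient
    A = np.stack([ix.ravel(), iy.ravel()], axis=1)
    v, *_ = np.linalg.lstsq(A, -it.ravel(), rcond=None)
    return v                                   # (vx, vy) in grid units per frame

# Toy data: a voltage bump drifting one grid cell per frame along +x.
y, x = np.mgrid[0:16, 0:16]
def bump(cx):
    return np.exp(-((x - cx) ** 2 + (y - 8) ** 2) / 6.0)
frames = [bump(4 + k) for k in range(5)]

vectors = [flow_vector(frames[k], frames[k + 1]) for k in range(4)]
print(np.round(np.mean(vectors, axis=0), 2))   # roughly [1.0, 0.0]: flow along +x
```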
The provided text is a 510(k) Clearance Letter and a 510(k) Summary for the OptiMap™ System. While it details the device, its intended use, and substantial equivalence to a predicate, it does not contain the specific performance study results, acceptance criteria, or details regarding the methodologies of testing (e.g., sample sizes, ground truth establishment, expert qualifications, MRMC studies).
The relevant section, "VII. Summary of Non-Clinical Performance Testing," states:
"Software verification and validation testing was completed on the subject device demonstrating that the OptiMap System with Version 1.3 Software (including ACP functionality) successfully performed at the unit, integration and system levels. All open issues from the verification and validation activities have been resolved or documented as unresolved anomalies. The OptiMap System met the acceptance criteria listed in the test protocols, performs as designed, and is suitable for its intended use."
This statement confirms that testing was performed and acceptance criteria were met, but it does not provide the specific criteria or the quantitative results of these tests. Therefore, I cannot populate the requested tables and information based solely on the provided text.
To answer your request, the necessary information (specific performance metrics, acceptance thresholds, sample sizes, ground truth details, etc.) would typically be found in the actual validation study report, which is not part of this 510(k) clearance letter or summary.
If such a document were available, the information would likely be organized as follows:
Acceptance Criteria and Device Performance Study
Since the provided text does not contain the specific performance study details, the following tables and sections are illustrative, showing what information would be required to fulfill the request. This information was not found in the provided 510(k) document.
1. Table of Acceptance Criteria and Reported Device Performance
Performance Metric | Acceptance Criteria | Reported Device Performance |
---|---|---|
(Example: Sensitivity for arrhythmia detection) | (e.g., > 90%) | (e.g., 92.5%) |
(Example: Specificity for arrhythmia detection) | (e.g., > 85%) | (e.g., 88.1%) |
(Example: Accuracy of EGM signal processing) | (e.g., error rate below a pre-specified threshold) | (e.g., measured error rate) |
(Example: Agreement of EGF/ACP maps with expert annotation) | (e.g., Kappa > 0.8) | (e.g., Kappa = 0.85) |
2. Sample Size and Data Provenance
- Test Set Sample Size: (Not provided in the document. Would typically specify number of patient cases, EGM recordings, or arrhythmias analyzed.)
- Data Provenance: (Not provided in the document. Would specify country of origin, if retrospective or prospective data collection, and if multi-center.)
3. Number and Qualifications of Experts for Ground Truth
- Number of Experts: (Not provided in the document. Would specify the count of experts.)
- Qualifications of Experts: (Not provided in the document. Would specify their medical specialization, board certifications, and years of experience, e.g., "3 Board-Certified Electrophysiologists, each with >10 years of experience in cardiac arrhythmia diagnosis and treatment.")
4. Adjudication Method for the Test Set
- Adjudication Method: (Not provided in the document. Common methods include:
- 2+1: Two experts review independently, and a third adjudicates disagreements.
- 3+1: Three experts review independently, and a fourth adjudicates if necessary, or majority agreement is used.
- Consensus: All experts discuss and reach a consensus.
- None: A single expert's reading is considered ground truth, or adjudicated by a pre-defined process.)
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- MRMC Study Conducted?: (Not provided in the document. Typically stated if human-in-the-loop performance with and without AI assistance was evaluated.)
- Effect Size (if applicable): (Not provided in the document. Would quantify the improvement in human reader performance, e.g., "Radiologists' diagnostic accuracy improved by X% (from Y% to Z%) when using OptiMap™ System assistance compared to without assistance.")
6. Standalone (Algorithm Only) Performance Study
- Standalone Performance Study Conducted?: Yes, the summary for "Software verification and validation" implies that the system's performance was evaluated independently, as it describes the system successfully performing at unit, integration, and system levels. However, the specific metrics and results are not detailed.
7. Type of Ground Truth Used
- Type of Ground Truth: (Not provided in the document. For cardiac arrhythmia diagnosis, this could be:
- Expert Consensus: Agreement among multiple expert electrophysiologists based on clinical data.
- Electrogram Analysis: Detailed analysis of raw EGM signals by experts, potentially correlated with clinical outcomes.
- Clinical Outcomes Data: Correlation with patient outcomes (e.g., successful ablation, recurrence of arrhythmia).
- Pathology/Histology: Less common for electrophysiology mapping, but relevant for some cardiac conditions.)
8. Sample Size for the Training Set
- Training Set Sample Size: (Not provided in the document. This is distinct from the test set and crucial for machine learning model development.)
9. How Ground Truth for the Training Set Was Established
- Training Set Ground Truth Establishment: (Not provided in the document. Similar methods to the test set ground truth would apply, but often with a larger scale and potentially more automated or semi-automated labeling steps initially, followed by expert review.)
(60 days)
DQK
DeepRhythmAI is cloud-based software that utilizes AI algorithms to assess cardiac arrhythmias using single- or two-lead ECG data from adult patients. It is intended for use by a healthcare solution integrator to build web, mobile or other types of applications to let qualified healthcare professionals review and confirm the analytic result. The product supports downloading and analyzing data recorded in compatible formats from ECG devices such as Holter, Event recorder, Outpatient Cardiac Telemetry devices or other similar recorders when the assessment of the rhythm is necessary. The product can be electronically interfaced and perform analysis with data transferred from other computer-based ECG systems, such as an ECG management system. DeepRhythmAI can be integrated into medical devices. In this case, the medical device manufacturer will identify the indication for use depending on the application of their device. DeepRhythmAI is not for use in life-supporting or sustaining systems or ECG Alarm devices. Interpretation results are not intended to be the sole means of diagnosis. It is offered to physicians and clinicians on an advisory basis only in conjunction with the physician's knowledge of ECG patterns, patient background, clinical history, symptoms and other diagnostic information.
DeepRhythmAI is cloud-based software utilizing CNN and transformer models for automated analysis of ECG data. It uses a scalable Application Programming Interface (API) to enable easy integration with other medical products. The main component of DeepRhythmAI is an automated proprietary deep-learning algorithm, which measures and analyzes ECG data to provide qualified healthcare professionals with supportive information for review. DeepRhythmAI can be integrated into medical devices. The product supports downloading and analyzing data recorded in compatible formats from ECG devices such as Holter, Event recorder, Outpatient Cardiac Telemetry devices or other similar recorders used when assessment of the rhythm is necessary. The DRAI can also be electronically interfaced and perform analysis with data transferred from other computer-based ECG systems, such as an ECG management system. DeepRhythmAI does not have a user interface; it must therefore be integrated with external visualization software used by ECG technicians for ECG visualization and analysis reporting.
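The description above states only that the algorithm combines CNN and transformer models. The architecture below is a generic, assumed illustration of that pairing for single-lead rhythm classification; the layer sizes, depth, class count, and sampling rate are arbitrary choices, and this is not the vendor's actual model.

```python
import torch
import torch.nn as nn

class EcgCnnTransformer(nn.Module):
    """CNN front-end for local morphology, transformer encoder for long-range
    rhythm context, pooled into per-recording class logits (illustration only)."""

    def __init__(self, n_leads=1, n_classes=5, d_model=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_leads, 32, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.Conv1d(32, d_model, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
        )
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, dim_feedforward=128, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                    # x: (batch, n_leads, n_samples)
        feats = self.cnn(x)                  # (batch, d_model, n_samples // 4)
        feats = feats.transpose(1, 2)        # (batch, seq_len, d_model)
        feats = self.encoder(feats)
        return self.head(feats.mean(dim=1))  # (batch, n_classes)

model = EcgCnnTransformer()
logits = model(torch.randn(2, 1, 2500))      # e.g., 10 s of single-lead ECG at 250 Hz
print(logits.shape)                          # torch.Size([2, 5])
```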
The provided FDA 510(k) clearance letter and summary for DeepRhythmAI offer general statements about performance testing but lack the specific details required to fully address all aspects of the request, especially quantifiable acceptance criteria and the results that prove them. The document primarily focuses on the substantial equivalence argument against a predicate device (which is itself DeepRhythmAI).
Based on the provided text, here's an attempt to extract and infer the information:
Acceptance Criteria and Device Performance:
The document mentions that the device was tested "according to the recognized consensus standards, ANSI/AAMI/IEC 60601-2-47:2012/(R)2016 and AAMI/ANSI/EC57:2012." These standards define performance requirements for ECG analysis devices, including aspects like beat detection accuracy, heart rate accuracy, and arrhythmia detection. However, the exact quantifiable acceptance criteria (e.g., "accuracy must be >X%") and the observed numeric device performance (e.g., "accuracy was Y%") are not reported in the provided text.
The closest we get to "reported performance" is the statement: "Overall, the software verification & validation testing was completed successfully and met all requirements. Testing demonstrated that the subject device performance was deemed to be acceptable." This is a qualitative statement, not quantitative performance data.
Table of Acceptance Criteria and Reported Device Performance:
Acceptance Criteria (Inferred from Standards) | Reported Device Performance (Not Quantified in Doc) |
---|---|
QRS detection accuracy (as per ANSI/AAMI standards) | Met all requirements; performance deemed acceptable. |
Heart rate determination accuracy for non-paced adult (as per ANSI/AAMI standards) | Met all requirements; performance deemed acceptable. |
R-R interval detection accuracy (as per ANSI/AAMI standards) | Met all requirements; performance deemed acceptable. |
Non-paced arrhythmias interpretation accuracy (as per ANSI/AAMI standards) | Met all requirements; performance deemed acceptable. |
Non-paced ventricular arrhythmias calls accuracy (as per ANSI/AAMI standards) | Met all requirements; performance deemed acceptable. |
Atrial fibrillation detection accuracy (as per ANSI/AAMI standards) | Met all requirements; performance deemed acceptable. |
Cardiac beats detection accuracy (Ventricular ectopic beats, Supraventricular ectopic beats) (as per ANSI/AAMI standards) | Met all requirements; performance deemed acceptable. |
Cyber security requirements met | No vulnerabilities identified. |
Software requirements satisfied | All software requirements satisfied. |
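The table above lists the categories of performance that 60601-2-47 and EC57 cover. EC57-style evaluation matches algorithm annotations to reference annotations within a tolerance window and reports sensitivity and positive predictivity. The sketch below shows that generic scoring scheme only; the 150 ms window is the commonly cited EC57 beat-matching tolerance and is used here as an assumption, and this is not the vendor's validation code.

```python
import numpy as np

def beat_detection_scores(ref_ms, det_ms, tol_ms=150.0):
    """Greedy one-to-one matching of detected to reference beats within +/- tol_ms.

    Returns (sensitivity, positive_predictivity) in the EC57 spirit.
    """
    ref = np.sort(np.asarray(ref_ms, dtype=float))
    det = np.sort(np.asarray(det_ms, dtype=float))
    used = np.zeros(len(det), dtype=bool)
    tp = 0
    for r in ref:
        cand = np.where(~used & (np.abs(det - r) <= tol_ms))[0]
        if cand.size:
            used[cand[np.argmin(np.abs(det[cand] - r))]] = True
            tp += 1
    fn = len(ref) - tp                       # reference beats never matched
    fp = int((~used).sum())                  # detections never matched
    se = tp / (tp + fn) if ref.size else float("nan")
    ppv = tp / (tp + fp) if det.size else float("nan")
    return se, ppv

# Toy example: one missed beat and one false detection.
ref = [0, 800, 1600, 2400, 3200]
det = [10, 810, 2000, 2410, 3190]
print(beat_detection_scores(ref, det))   # -> (0.8, 0.8)
```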
Study Details:
Sample size used for the test set and the data provenance:
- Test Set Sample Size: The document states the algorithm was "tested against the proprietary database (MDG validation db) that includes a large number of recordings captured among the intended patient population." The exact number of recordings is not specified, only "a large number."
- Data Provenance: The data comes from a "proprietary database (MDG validation db)." The country of origin is not explicitly stated. The document indicates it includes data for both two-lead and single-lead patch recorders, implying diverse ECG device sources. It is implied to be retrospective data collected for validation purposes.
Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- This information is not provided in the document. The document states a "proprietary database" was used for validation, but it does not detail how the ground truth within this database was established (e.g., by how many cardiologists or expert technicians, or their qualifications).
Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- This information is not provided in the document.
If a multi-reader multi-case (MRMC) comparative effectiveness study was done, if so, what was the effect size of how much human readers improve with AI vs without AI assistance:
- A MRMC comparative effectiveness study involving human readers and AI assistance is not mentioned in the provided text. The study described focuses on the standalone performance of the device against a ground truth. The device "is offered to physicians and clinicians on an advisory basis only" and results are "not intended to be the sole means of diagnosis," indicating a human-in-the-loop context, but no study is presented to quantify this human-AI interaction's effect on reader performance.
If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:
- Yes, a standalone performance study was done. The document states the algorithm was "tested against the proprietary database (MDG validation db)." The entire summary of performance data refers to evaluation of the "DeepRhythmAI software for arrhythmia detection and automated analysis of ECG data." There is no mention of human interaction during this performance evaluation.
The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- The document implies the use of an "MDG validation db" but does not specify the type of ground truth used to annotate this database. It's common for such ECG databases to rely on expert adjudicated annotations, but this is not explicitly stated.
The sample size for the training set:
- The sample size for the training set is not provided. The document only discusses the "MDG validation db" which is used for testing/validation.
How the ground truth for the training set was established:
- As the training set sample size is not provided, neither is information on how its ground truth was established.
Summary of Missing Information:
The provided document, being a 510(k) clearance letter and summary, serves to establish substantial equivalence. It confirms that specific performance testing was conducted according to recognized standards and deemed acceptable, but it does not provide the detailed scientific study results that would include:
- Quantifiable acceptance criteria and the exact numeric performance results for each criterion.
- The raw sample size of the test set.
- Details on the experts involved in ground truth creation for the test set (number, qualifications, adjudication method).
- Information on any MRMC studies or effect sizes of AI assistance on human readers.
- Explicit details about the ground truth methodology for the validation database.
- Any information regarding the training dataset (size, ground truth methodology).
To fully answer the request, one would typically need access to the full 510(k) submission, which contains the detailed V&V (Verification and Validation) reports.
(149 days)
DQK
The Volta AF-Xplorer assists operators in the real-time manual or automatic annotation of 3D anatomical and electrical maps of human atria for the presence of multipolar intra-cardiac atrial electrograms exhibiting spatiotemporal dispersion during atrial fibrillation or atrial tachycardia.
The Volta AF-Xplorer is a machine- and deep-learning-based algorithm designed to assist operators in the real-time manual or automatic annotation of 3D anatomical and electrical maps of the human heart for the presence of electrograms (EGMs) exhibiting spatio-temporal dispersion, i.e., dispersed EGMs.
The Volta AF-Xplorer device is a non-sterile reusable medical device, composed of a computing platform and a software application. Volta AF-Xplorer works with all existing 510(k)-cleared catheters that meet specific dimension requirements and with one of the three specific data acquisition systems:
- Two compatible EP recording systems: the LabSystem Pro EP Recording System (Boston Scientific) (K141185) or the MacLab CardioLab EP Recording System (General Electric) (K130626),
- a 3D mapping system: EnSite X 3D mapping system (Abbott) (K221213).
A connection cable is used to connect the corresponding data acquisition system to the Volta AF-Xplorer system, depending on the type of communication used:
- Unidirectional analog communication with the EP recording systems via a custom-made cable (two different variants: DSUB, Octopus) and an Advantech PCI-1713U analog-to-digital converter, which acquires analog data, digitizes it, and transmits the digital signals to the computer that hosts the Volta AF-Xplorer software.
- Bidirectional digital communication with the EnSite 3D mapping system via an ethernet cable (four different lengths: 20, 10, 5 or 2m) which transmits the digital signals directly to the computer.
The computer and its attached display are located outside the sterile operating room area. The Volta AF-Xplorer software analyzes the patient's electrograms to cue operators in real-time to intra-cardiac electrograms of interest for atrial regions harboring dispersed electrograms as well as a cycle length estimation from electrograms recorded with the mapping and the coronary sinus catheters. The results of the analysis are graphically presented on the attached computer display and/or on a secondary medical screen or on an operating room widescreen. The identified regions of interest are either manually (all configurations) or automatically (only available in digital bidirectional communication with the EnSite X 3D mapping system) tagged in the corresponding 3D mapping system.
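In the clinical literature on dispersion-guided ablation, spatiotemporal dispersion is usually characterized as clusters of multipolar electrograms whose local activations, taken together, span most of the local cycle length; the AF-Xplorer classifier itself is a machine/deep-learning model. The rule-based sketch below is therefore only an assumed, simplified proxy for the concept; the 0.7 threshold and all names are illustrative, not the device's algorithm.

```python
import numpy as np

def dispersion_score(activation_times_ms, cycle_length_ms):
    """Fraction of the local cycle length covered by the bipoles' activations.

    activation_times_ms: one activation time per bipole within a single cycle
    window on the multipolar catheter. Values near 1.0 mean the activations
    tile most of the cycle (the 'dispersion' pattern); values near 0.0 mean
    they cluster, as in an organized, synchronous wavefront.
    """
    at = np.asarray(activation_times_ms, dtype=float)
    return float((at.max() - at.min()) / cycle_length_ms)

def is_dispersed(activation_times_ms, cycle_length_ms, threshold=0.7):
    # The threshold is an arbitrary illustration value, not a validated cut-off.
    return dispersion_score(activation_times_ms, cycle_length_ms) >= threshold

cl = 180.0  # ms, e.g., estimated from the coronary sinus catheter
print(is_dispersed([5, 30, 60, 95, 120, 150, 170], cl))   # True  (spread out)
print(is_dispersed([40, 44, 47, 51, 55, 58, 62], cl))     # False (clustered)
```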
Based on the provided FDA 510(k) clearance letter for the Volta AF-Xplorer, here's a breakdown of the acceptance criteria and the study used to demonstrate device performance. It's important to note that the document primarily focuses on demonstrating substantial equivalence to a predicate device, and the "acceptance criteria" discussed here are implicitly related to clinical effectiveness and safety, rather than specific performance metrics (like sensitivity/specificity) for the algorithm itself.
The core of the "study that proves the device meets acceptance criteria" is the Tailored-AF study, which the manufacturer uses to support an updated Indications for Use statement for the Volta AF-Xplorer. The acceptance criteria are essentially the favorable clinical outcomes demonstrated by this study, which allowed for the removal of cautionary language in the indications for use.
1. Table of Acceptance Criteria and Reported Device Performance
Given that this is a 510(k) clearance for an update based on clinical evidence, the "acceptance criteria" are interpreted as the clinical outcomes required to justify the change in the Indications for Use. The device performance is represented by the outcomes of the Tailored-AF study.
Acceptance Criteria (Implied) | Reported Device Performance (Tailored-AF Study - VX1 device) |
---|---|
Primary Effectiveness: Demonstrated superiority in freedom from AF | 88% of patients in the "Tailored" group (AI-assisted ablation + PVI) achieved freedom from AF (no episodes lasting > 30 seconds after the 3-month blanking period, through 12 months post-ablation, with or without AADs), versus 70% in the "Anatomical" group (PVI-only); an 18% absolute difference, reported as statistically significant by log-rank test. |