510(k) Data Aggregation
(441 days)
KODEX EPD System 1.5.0
The KODEX-EPD™ System 1.5.0 is indicated for catheter-based cardiac electrophysiological (EP) procedures. The KODEX-EPD™ System 1.5.0 provides information about the electrical activity of the heart and about catheter location during the procedure. The system can be used on patients who are eligible for a conventional electrophysiological procedure.
The KODEX-EPD™ System 1.5.0 is a catheter-based cardiac mapping system designed to acquire and analyze individual data points, and use this information to display 3D electro-anatomical maps of the human heart in real-time. The information needed to create the cardiac maps is acquired using standard electrophysiological (EP) catheters and proprietary external sensors.
KODEX-EPD™ continuously collects electromagnetic signals from all sensors and electrodes attached to it. The system then uses these signals to create a 3D image of the chamber and superimposes the real-time catheter position on the chamber image. In addition, the KODEX-EPD™ System 1.5.0 supports representation of the electrical activity of cardiac chambers, based on the intracardiac signals received from all catheters and on body surface signals.
The provided document, a 510(k) Summary for the KODEX-EPD™ System 1.5.0, focuses on demonstrating substantial equivalence to a predicate device rather than providing a detailed acceptance criteria table with specific quantitative performance metrics like sensitivity, specificity, or AUC, as might be found in a study for an AI-driven diagnostic device.
The KODEX-EPD™ System 1.5.0 is a cardiac mapping and navigation system that provides real-time 3D electro-anatomical maps of the heart and catheter location. This is a medical device that generates data for clinical interpretation, rather than an AI or CADe/CADx device that interprets medical images or data. Therefore, the "acceptance criteria" and "study that proves the device meets the acceptance criteria" are framed in terms of meeting design specifications, demonstrating functionality, and showing substantial equivalence to a legally marketed predicate device.
However, I can extract the relevant information from the document that addresses the spirit of your request, adapting it to the nature of this particular device's submission. The document emphasizes functional performance and safety as the primary acceptance criteria for the new features (Cryo Occlusion Viewer and Tissue Engagement Viewer) and the updated system.
Here's an attempt to present the information structured according to your request, with the understanding that the "acceptance criteria" are more about functionality and safety in this context:
Acceptance Criteria and Study for KODEX-EPD™ System 1.5.0
The acceptance criteria for the KODEX-EPD™ System 1.5.0, particularly for its new features (Cryo Occlusion Viewer and Tissue Engagement Viewer), are primarily focused on functionality, accuracy in providing supplemental information, and ensuring that these new features do not raise new questions of safety or effectiveness compared to the predicate device. The studies conducted were verification and validation tests aligned with these criteria.
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria Category | Specific Criteria (Functional/Safety) | Reported Device Performance / Study Finding |
|---|---|---|
| System Functionality | Meets design specifications | "The testing demonstrated that the product meets its performance specifications and performs as intended." |
| | Performs as intended | "The KODEX-EPD™ System 1.5.0 was found to be substantially equivalent to the predicate and reference devices." |
| Cryo Occlusion Viewer | Software requirements met | "verified through system software verification that demonstrated that the software requirements specifications were met" |
| | Regression testing passed (no impact on functionality) | "regression testing that demonstrated that the system functionality was not impacted by upgrade to version 1.5.0." |
| | Compatibility & interoperability with Medtronic Achieve Catheter | "verification of the compatibility and interoperability of the Medtronic Achieve Catheter when connected to the KODEX-EPD™ System." |
| | Validation of occlusion status | "validated through retrospective analysis of KODEX-EPD™ cases for which KODEX occlusion status was compared to venography, an acute animal study and as part of a summative usability study." |
| | Meets specifications & performs as intended | "Verification and validation testing demonstrated that the Cryo Occlusion viewer met its specifications and performed as intended." |
| Tissue Engagement Viewer (TEV) | Indicates catheter tip touch status (Touch / No Touch / High Touch) | "assessed on the bench to verify the capability to indicate if the catheter tip touches the tissue or not" |
| | Indicates touch level with low latency (up to 1 second) | "indicate the touch level in latency up to 1 second" |
| | Indicates high-force touch | "to indicate if the catheter tip touches the tissue with high force." |
| | Validated in animal study (compared to clinical standards) | "validated in an acute GLP animal study in which the clinician first established contact guided by clinical standards (fluoroscopy, EGM, impedance, ICE) and then verified TEV utilizing KODEX-EPD™ SW 1.5.0. At each location the engagement was increased from no touch, to touch to high touch based on these standards and then compared TEV on KODEX-EPD™ SW v.1.5.0." |
| Safety & Effectiveness | No new questions of safety or effectiveness compared to predicate | "The only technological differences... do not present different questions of safety or effectiveness as compared to the legally marketed predicate device because the features are only intended to provide the physician with additional information which supplements common clinical practice." |
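To make the TEV criteria concrete, here is a minimal, purely hypothetical sketch of a three-state touch indicator with a 1-second latency check. Every name and threshold, and the idea of a single scalar "contact signal," are assumptions for illustration; the 510(k) summary does not describe the device's internal algorithm.

```python
# Hypothetical illustration of the TEV criteria: three touch states and
# a <= 1 s latency budget. Thresholds and the scalar "contact signal"
# are invented for illustration, not taken from the 510(k) summary.

NO_TOUCH, TOUCH, HIGH_TOUCH = "No Touch", "Touch", "High Touch"

def classify_touch(signal: float,
                   touch_threshold: float = 0.3,
                   high_threshold: float = 0.7) -> str:
    """Map a normalized contact signal to one of three touch states."""
    if signal >= high_threshold:
        return HIGH_TOUCH
    if signal >= touch_threshold:
        return TOUCH
    return NO_TOUCH

def within_latency(t_contact_s: float, t_indicated_s: float,
                   max_latency_s: float = 1.0) -> bool:
    """Bench-style check: was the state indicated within the latency budget?"""
    return (t_indicated_s - t_contact_s) <= max_latency_s

print(classify_touch(0.1))       # No Touch
print(classify_touch(0.5))       # Touch
print(classify_touch(0.9))       # High Touch
print(within_latency(0.0, 0.8))  # True
```

In a bench test like the one quoted above, each (signal, timestamp) pair recorded at a known applied force would be checked against both the expected state and the latency budget.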
2. Sample Sizes Used for the Test Set and Data Provenance
Due to the nature of the device (a medical system for creating maps/location, not a diagnostic AI), specific "test set" sample sizes in the typical AI sense (e.g., number of images or cases for classification) are not explicitly detailed. Instead, the testing involved:
- Cryo Occlusion Viewer:
  - Retrospective analysis of KODEX-EPD™ cases: The exact number of cases is not specified in the document.
  - Acute animal study: The number of animals is not specified.
  - Summative usability study: The number of participants/cases is not specified.
  - Data Provenance: Not explicitly stated; the clinical cases likely came from centers where KODEX-EPD™ is in use, and "retrospective" implies pre-existing data.
- Tissue Engagement Viewer (TEV):
  - Acute GLP animal study: The number of animals is not specified.
  - Data Provenance: Not explicitly stated beyond "animal study."
- General System Testing:
  - Bench testing, software verification/validation, EMC testing, electrical safety testing, and usability testing were performed. Specific sample sizes (e.g., number of test runs, number of usability participants) are not specified.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications
- Cryo Occlusion Viewer: "KODEX occlusion status was compared to venography." This implies venography served as a ground truth or reference standard. The number or qualifications of experts interpreting the venography or other clinical data for the retrospective analysis is not specified.
- Tissue Engagement Viewer (TEV): "clinician first established contact guided by clinical standards (fluoroscopy, EGM, impedance, ICE)." This indicates that experienced clinicians judging by established clinical methods served as the "ground truth" for the animal study. The number or specific qualifications of these clinicians are not specified.
4. Adjudication Method for the Test Set
- Explicit adjudication methods (e.g., 2+1, 3+1 for discordant reads) are NOT mentioned.
- For the Cryo Occlusion Viewer, the comparison was made against venography, suggesting a direct comparison rather than a human consensus process for ground truth.
- For the TEV, the animal study relied on "clinician-established contact guided by clinical standards." This inherently involves expert judgment but without a specified formal adjudication process for multiple readers.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done
- No, an MRMC comparative effectiveness study was NOT done to compare human readers with vs. without AI assistance. This device is not an AI-driven image interpretation tool in that typical sense. It is a system that generates mapping data and provides additional information (TEV, Cryo Occlusion Viewer) to the clinician. The new features supplement existing clinical practice rather than replacing or directly assisting complex diagnostic interpretation in the manner an MRMC study would evaluate.
6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) was done
- Yes, in spirit, elements of standalone performance were evaluated for the features of the device.
- Cryo Occlusion Viewer: Verified through software verification and compatibility/interoperability testing. Its "validation" involved comparing its output (occlusion status) to venography, which is akin to evaluating the algorithm's output against a reference.
- Tissue Engagement Viewer (TEV): Assessed "on the bench to verify the capability to indicate if the catheter tip touches the tissue or not, indicate the touch level in latency up to 1 second and to indicate if the catheter tip touches the tissue with high force." This is an algorithm-only performance evaluation of its core functionality before human-in-the-loop validation.
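The Cryo Occlusion Viewer validation described above amounts to comparing the algorithm's output against a reference standard (venography). A generic way to summarize such a comparison is a cross-tabulation with overall percent agreement, sketched below. The labels and data are invented for illustration; the 510(k) summary reports neither case counts nor per-case results.

```python
from collections import Counter

def agreement_with_reference(device_calls, reference_calls):
    """Cross-tabulate paired device vs. reference calls and return
    (pair counts, overall percent agreement)."""
    if len(device_calls) != len(reference_calls):
        raise ValueError("paired lists must have equal length")
    pairs = Counter(zip(device_calls, reference_calls))
    agree = sum(n for (dev, ref), n in pairs.items() if dev == ref)
    return pairs, agree / len(device_calls)

# Invented example data -- not from the 510(k) summary.
device = ["occluded", "occluded", "not_occluded", "occluded"]
venography = ["occluded", "not_occluded", "not_occluded", "occluded"]
pairs, pct = agreement_with_reference(device, venography)
print(pct)  # 0.75
```

A real validation would typically go further (e.g., per-class agreement or sensitivity/specificity against the reference), but the document gives no such metrics.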
7. The Type of Ground Truth Used
- Cryo Occlusion Viewer: Compared to venography (a gold standard imaging technique for confirming occlusion) and other unnamed appropriate visualization techniques. Implicitly, clinical outcomes from the retrospective cases also contribute to validation.
- Tissue Engagement Viewer (TEV): Ground truth was established by clinicians guided by established clinical standards (fluoroscopy, EGM, impedance, ICE) in an animal model. This is a form of expert consensus based on established clinical practice.
- For general system performance, internal design specifications and regulatory standards (IEC, ANSI/AAMI) serve as the "ground truth" for verification.
8. The Sample Size for the Training Set
- This device is not an AI/ML model that explicitly undergoes "training" on a distinct dataset in the way a deep learning classification model would. It's a deterministic system whose new features are based on impedance/dielectric mapping principles.
- Therefore, there is no "training set" in the context of AI/ML model development. The system's underlying algorithms are based on established biophysical principles, not learned from data in an iterative training process.
9. How the Ground Truth for the Training Set Was Established
- As there is no "training set" in the AI/ML sense, this question is not applicable. The "ground truth" for the development of the system's principles and algorithms would be fundamental physics, physiology, and engineering principles validated through design and verification testing.
(197 days)
KODEX EPD System
The KODEX-EPD™ System is indicated for catheter-based cardiac electrophysiological (EP) procedures. The KODEX-EPD™ System provides information about the electrical activity of the heart and about catheter location during the procedure. The system can be used on patients who are eligible for a conventional electrophysiological procedure.
The KODEX-EPD™ system is a catheter-based cardiac mapping system designed to acquire and analyze individual data points, and use this information to display 3D electro-anatomical maps of the human heart in real-time. The information needed to create the cardiac maps is acquired using standard EP catheters and proprietary external patches.
KODEX-EPD™ continuously collects electromagnetic signals from all patches and electrodes attached to it. The system then uses these signals to create a 3D image of the chamber and superimposes the real-time catheter position on the chamber image. In addition, the KODEX-EPD™ system supports representation of the electrical activity of cardiac chambers, based on the intracardiac signals received from all catheters and on body surface signals.
The KODEX-EPD™ system includes the KODEX Processing Unit, BS Pin Box, Diagnostic Catheter Connection Box, Recording System Connection Box, Workstation, Foot Pedal and KODEX-EPD™ External Patches.
The provided text describes the KODEX-EPD™ System and its FDA 510(k) clearance, establishing substantial equivalence to a predicate device (CARTO 3 EP Navigation System). The document focuses on regulatory equivalence rather than a study demonstrating the device's adherence to specific AI performance acceptance criteria, as one might expect for an AI/ML-driven diagnostic device.
Based on the provided text, the KODEX-EPD™ System is described as a "catheter-based cardiac mapping system designed to acquire and analyze individual data points, and use this information to display 3D electro-anatomical maps of the human heart in real-time." It also "supports representation of the electrical activity of cardiac chambers." The device's primary function as described is mapping and navigation, rather than an AI-driven diagnostic interpretation. Therefore, the "acceptance criteria" and "device performance" would relate to the accuracy and reliability of its physiological measurements and mapping capabilities, not AI-specific metrics like sensitivity, specificity, or AUC for classification.
Crucially, the document does not explicitly state that the KODEX-EPD™ System uses Artificial Intelligence (AI) or Machine Learning (ML) for its core functionality of acquiring, analyzing, or displaying data. The "Programmable Diagnostic Computer" classification and description of its operation suggest it processes electromagnetic signals and electrophysiological data, which can be complex but not necessarily "AI" in the modern interpretative sense (e.g., image classification, natural language processing).
Given this, I cannot directly provide information on acceptance criteria and study details for an AI-driven device based on the provided text. The "Performance Data" section discusses general device verification and validation, but not the specific metrics typically associated with AI performance validation (e.g., sensitivity, specificity, accuracy, F1-score).
However, I can extract information related to the device's performance validation in general, as presented in the text.
Here's an attempt to answer your questions based solely on the provided text, interpreting "acceptance criteria" and "device performance" in the context of a medical device claiming substantial equivalence to a predicate, rather than an AI device:
1. A table of acceptance criteria and the reported device performance
The document does not provide a table with quantitative acceptance criteria (e.g., specific accuracy thresholds for mapping) or reported performance metrics (e.g., mean mapping error, precision of catheter localization) for the KODEX-EPD™ System. Instead, it states that "the company conducted extensive bench and animal testing which demonstrated that the KODEX-EPD™ System meets its design specifications and is substantially equivalent to the predicate device."
General Statement on Performance:
"The testing demonstrated that the product meets its performance specifications and performs as intended."
"This collection of testing demonstrates the safety and effectiveness of the KODEX-EPD™ System [and] its substantial equivalence to the predicate device."
Implicit "Acceptance Criteria" (Substantial Equivalence):
The core acceptance criterion for FDA 510(k) clearance in this context is demonstrating substantial equivalence to a predicate device. This is achieved by showing that the KODEX-EPD™ System has the same intended use, similar technological characteristics, and that any differences in technological characteristics do not raise new questions of safety or effectiveness.
2. Sample size used for the test set and the data provenance
The document lists "extensive bench and animal testing" and "GLP animal study" as conducted.
- Test Set Sample Size: Not specified for any particular test.
- Data Provenance: Not specified (e.g., country of origin, retrospective/prospective). The studies were conducted by the applicant, Philips Medical Systems Nederland B.V.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
Not applicable/Not mentioned. The device's performance validation (meeting design specifications, safety, effectiveness) would likely involve engineering verification and validation, and animal studies, not expert-labeled ground truth in the way it's used for AI diagnostic systems (e.g., radiologists labeling images).
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
Not applicable/Not mentioned. This concept typically applies to human expert review for establishing ground truth in AI studies, which is not described here.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of human reader improvement with vs. without AI assistance
Not applicable/Not mentioned. The KODEX-EPD™ System as described is a mapping and navigation system, not an AI-assisted diagnostic tool that aids human "readers" (e.g., in interpreting images). Therefore, an MRMC study of this nature would not be relevant to the device's described function.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
The device is inherently "human-in-the-loop" as it's an interactive diagnostic computer used during medical procedures. The document does not describe any "algorithm-only" performance metrics in the way one would for an AI diagnostic algorithm like an image classifier. The performance studies mentioned (bench, animal) validate the integrated system's functionality.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
The documentation refers to "design specifications" and "performance specifications" as the benchmarks. For the animal study, it would be based on physiological measurements and direct observation. For bench testing, it would be engineering tolerances and established physical principles. There is no mention of "expert consensus," "pathology," or "outcomes data" being used as ground truth for validation in the typical sense of AI diagnostics.
8. The sample size for the training set
Not applicable. The document does not indicate the use of AI/ML models that would require a "training set." The system's functionality as described appears to be based on established physics and algorithms rather than learned patterns from a training dataset.
9. How the ground truth for the training set was established
Not applicable, as a training set for an AI/ML model is not mentioned.
Summary based on the provided text:
The provided document (FDA 510(k) clearance letter and summary) focuses on demonstrating substantial equivalence for the KODEX-EPD™ System to a predicate device. It describes the device's intended use, technological characteristics, and general performance validation activities (bench testing, animal studies, software verification, safety, EMC, etc.).
There is no indication or description of the KODEX-EPD™ System employing Artificial Intelligence or Machine Learning for its core functions of data analysis or display in a way that would require specific AI-centric acceptance criteria (e.g., sensitivity, specificity for a diagnostic task), nor are there details of studies (e.g., MRMC studies, training/test set splits with expert ground truth) typically associated with validating AI/ML-driven medical devices. The "Programmable Diagnostic Computer" classification suggests it processes data in a programmatic, deterministic way, not necessarily through learned AI models.