Search Results
Found 2 results
510(k) Data Aggregation
(197 days)
The KODEX-EPD™ System is indicated for catheter-based cardiac electrophysiological (EP) procedures. The KODEX-EPD™ System provides information about the electrical activity of the heart and about catheter location during the procedure. The system can be used on patients who are eligible for a conventional electrophysiological procedure.
The KODEX-EPD™ system is a catheter-based cardiac mapping system designed to acquire and analyze individual data points, and use this information to display 3D electro-anatomical maps of the human heart in real-time. The information needed to create the cardiac maps is acquired using standard EP catheters and proprietary external patches.
KODEX-EPD™ continuously collects electromagnetic signals from all patches and electrodes attached to it. The system then uses these signals to create a 3D image of the chamber and superimposes the real-time catheter position on the chamber image. In addition, the KODEX-EPD™ system supports representation of the electrical activity of cardiac chambers, based on the intracardiac signals received from all catheters and on body-surface signals.
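To make the "superimposes the real-time catheter position on the chamber image" step concrete, here is a minimal geometric sketch: given a reconstructed chamber surface (represented as a point cloud) and an estimated catheter tip position, find the nearest surface vertex for display. This is purely illustrative; the actual KODEX-EPD™ reconstruction and localization algorithms are proprietary and not described in the document, and the function name `project_tip_to_surface` is invented for this sketch.

```python
# Illustrative sketch only: superimposing an estimated catheter tip position
# onto a reconstructed chamber surface by nearest-vertex lookup. Not the
# actual (proprietary) KODEX-EPD algorithm.
import numpy as np

def project_tip_to_surface(tip_xyz, surface_points):
    """Return (index, vertex, distance) of the surface vertex closest to the tip."""
    surface_points = np.asarray(surface_points, dtype=float)
    tip_xyz = np.asarray(tip_xyz, dtype=float)
    dists = np.linalg.norm(surface_points - tip_xyz, axis=1)
    idx = int(np.argmin(dists))
    return idx, surface_points[idx], float(dists[idx])

# Toy chamber "surface": 500 points on a sphere of radius 20 mm
rng = np.random.default_rng(0)
pts = rng.normal(size=(500, 3))
pts = 20.0 * pts / np.linalg.norm(pts, axis=1, keepdims=True)

# A tip estimated just outside the surface snaps to the nearest vertex
idx, nearest, gap = project_tip_to_surface([0.0, 0.0, 21.0], pts)
```

In a real system the surface would be a triangulated mesh updated in real time and the tip estimate would come from the patch/electrode signals; the nearest-neighbor step above only illustrates the final display-side superposition.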
The KODEX-EPD™ system includes the KODEX Processing Unit, BS Pin Box, Diagnostic Catheter Connection Box, Recording System Connection Box, Workstation, Foot Pedal and KODEX-EPD™ External Patches.
The provided text describes the KODEX-EPD™ System and its FDA 510(k) clearance, establishing substantial equivalence to a predicate device (CARTO 3 EP Navigation System). The document focuses on regulatory equivalence rather than a study demonstrating the device's adherence to specific AI performance acceptance criteria, as one might expect for an AI/ML-driven diagnostic device.
Based on the provided text, the KODEX-EPD™ System is described as a "catheter-based cardiac mapping system designed to acquire and analyze individual data points, and use this information to display 3D electro-anatomical maps of the human heart in real-time." It also "supports representation of the electrical activity of cardiac chambers." The device's primary function as described is mapping and navigation, rather than an AI-driven diagnostic interpretation. Therefore, the "acceptance criteria" and "device performance" would relate to the accuracy and reliability of its physiological measurements and mapping capabilities, not AI-specific metrics like sensitivity, specificity, or AUC for classification.
Crucially, the document does not explicitly state that the KODEX-EPD™ System uses Artificial Intelligence (AI) or Machine Learning (ML) for its core functionality of acquiring, analyzing, or displaying data. The "Programmable Diagnostic Computer" classification and description of its operation suggest it processes electromagnetic signals and electrophysiological data, which can be complex but not necessarily "AI" in the modern interpretative sense (e.g., image classification, natural language processing).
Given this, I cannot directly provide information on acceptance criteria and study details for an AI-driven device based on the provided text. The "Performance Data" section discusses general device verification and validation, but not the specific metrics typically associated with AI performance validation (e.g., sensitivity, specificity, accuracy, F1-score).
However, I can extract information related to the device's performance validation in general, as presented in the text.
Here's an attempt to answer your questions based solely on the provided text, interpreting "acceptance criteria" and "device performance" in the context of a medical device claiming substantial equivalence to a predicate, rather than an AI device:
1. A table of acceptance criteria and the reported device performance
The document does not provide a table with quantitative acceptance criteria (e.g., specific accuracy thresholds for mapping) or reported performance metrics (e.g., mean mapping error, precision of catheter localization) for the KODEX-EPD™ System. Instead, it states that "the company conducted extensive bench and animal testing which demonstrated that the KODEX – EPD™ System meets its design specifications and is substantially equivalent to the predicate device."
General Statement on Performance:
"The testing demonstrated that the product meets its performance specifications and performs as intended."
"This collection of testing demonstrates the safety and effectiveness of the KODEX – EPD™ System [and] its substantial equivalence to the predicate device."
Implicit "Acceptance Criteria" (Substantial Equivalence):
The core acceptance criterion for FDA 510(k) clearance in this context is demonstrating substantial equivalence to a predicate device. This is achieved by showing that the KODEX-EPD™ System has the same intended use, similar technological characteristics, and that any differences in technological characteristics do not raise new questions of safety or effectiveness.
2. Sample size used for the test set and the data provenance
The document lists "extensive bench and animal testing" and "GLP animal study" as conducted.
- Test Set Sample Size: Not specified for any particular test.
- Data Provenance: Not specified (e.g., country of origin, retrospective/prospective). The studies were conducted by the applicant, Philips Medical Systems Nederland B.V.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
Not applicable/Not mentioned. The device's performance validation (meeting design specifications, safety, effectiveness) would likely involve engineering verification and validation, and animal studies, not expert-labeled ground truth in the way it's used for AI diagnostic systems (e.g., radiologists labeling images).
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
Not applicable/Not mentioned. This concept typically applies to human expert review for establishing ground truth in AI studies, which is not described here.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
Not applicable/Not mentioned. The KODEX-EPD™ System as described is a mapping and navigation system, not an AI-assisted diagnostic tool that aids human "readers" (e.g., interpreting images). Therefore, a MRMC study of this nature would not be relevant to the device's described function.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
The device is inherently "human-in-the-loop" as it's an interactive diagnostic computer used during medical procedures. The document does not describe any "algorithm-only" performance metrics in the way one would for an AI diagnostic algorithm like an image classifier. The performance studies mentioned (bench, animal) validate the integrated system's functionality.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
The documentation refers to "design specifications" and "performance specifications" as the benchmarks. For the animal study, it would be based on physiological measurements and direct observation. For bench testing, it would be engineering tolerances and established physical principles. There is no mention of "expert consensus," "pathology," or "outcomes data" being used as ground truth for validation in the typical sense of AI diagnostics.
8. The sample size for the training set
Not applicable. The document does not indicate the use of AI/ML models that would require a "training set." The system's functionality as described appears to be based on established physics and algorithms rather than learned patterns from a training dataset.
9. How the ground truth for the training set was established
Not applicable, as a training set for an AI/ML model is not mentioned.
Summary based on the provided text:
The provided document (FDA 510(k) clearance letter and summary) focuses on demonstrating substantial equivalence for the KODEX-EPD™ System to a predicate device. It describes the device's intended use, technological characteristics, and general performance validation activities (bench testing, animal studies, software verification, safety, EMC, etc.).
There is no indication or description of the KODEX-EPD™ System employing Artificial Intelligence or Machine Learning for its core functions of data analysis or display in a way that would require specific AI-centric acceptance criteria (e.g., sensitivity, specificity for a diagnostic task), nor are there details of studies (e.g., MRMC studies, training/test set splits with expert ground truth) typically associated with validating AI/ML-driven medical devices. The "Programmable Diagnostic Computer" classification suggests it processes data in a programmatic, deterministic way, not necessarily through learned AI models.
(126 days)
The CARTO® ENT System is intended for use during intranasal image-guided navigation procedures for patients who are eligible for sinus procedures.
The CARTO® ENT System is intended as an aid for precisely locating anatomical structures during intranasal and paranasal image-guided navigation procedures.
The CARTO® ENT System is intended to be used during intranasal and paranasal surgical procedures to help ENT physicians track and display the real-time location of the tip of navigated instruments relative to pre-acquired reference images, such as CT. The CARTO® ENT device enables ENT physicians to access the sphenoid, frontal, and maxillary sinuses by using the system's magnetic tracking technology. The system incorporates a Navigation Console, Field Ring, Instrument Hub, Patient Tracker, Workstation and accessories. A magnetic field generated by the Field Ring induces a current in the magnetic sensor embedded in the tip of the flexible navigated tool, which helps to accurately calculate the tool tip position. A CT image is imported and registered to the patient coordinates, and a tool tip icon is displayed on top of the registered image, indicating the position of the tool in reference to the patient anatomy. A Patient Tracker is fixed to the patient's forehead to compensate for head movement during the navigation procedure.
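The "registered to the patient coordinates" step can be sketched with a standard point-based rigid registration. The document does not disclose CARTO® ENT's actual registration method; the Kabsch (orthogonal Procrustes) algorithm below is a textbook approach for estimating the rotation and translation that map tracker-space fiducials onto their CT-space counterparts, and the function name `rigid_register` is an assumption of this sketch.

```python
# Hedged sketch: point-based rigid registration of tracker coordinates to CT
# coordinates via the Kabsch algorithm. Not CARTO ENT's disclosed method.
import numpy as np

def rigid_register(src, dst):
    """Least-squares rotation R and translation t such that dst ≈ R @ src + t."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)         # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Toy check: recover a known rotation/translation from four fiducial pairs
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([5.0, -2.0, 1.0])
fiducials = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]], float)
R_est, t_est = rigid_register(fiducials, fiducials @ R_true.T + t_true)

# Once registered, a tracked tool tip can be mapped into CT space for display
tip_ct = R_est @ np.array([3.0, 4.0, 5.0]) + t_est
```

In practice the source points would come from the magnetic tracker (with the Patient Tracker compensating for head motion) and the destination points from landmarks in the imported CT, but the estimation step is the same least-squares fit.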
The CARTO® ENT System is an image-guided surgery system. The information provided describes the acceptance criteria and the study demonstrating its performance.
1. Acceptance Criteria and Reported Device Performance
| Attribute | Acceptance Criteria (Predicate: Fiagon Navigation System) | Reported Device Performance (CARTO® ENT Navigation System) |
| --- | --- | --- |
| Bench test location accuracy | 0.9 mm (standard deviation 0.34 mm) | 0.55 mm (standard deviation 0.7 mm) |
| Simulated use accuracy | 1.79 mm (standard deviation 0.4 mm) | 0.63 mm (standard deviation 0.2 mm) |
| Location update rate | 15 to 45 Hz | 10 Hz |
2. Sample size used for the test set and data provenance
The document does not specify an exact "test set" sample size or data provenance in terms of country of origin or whether it was retrospective or prospective for the accuracy tests.
- Bench Test Location Accuracy: The test involved comparing the CARTO® ENT System's electromagnetic locations to those provided by a "very accurate robot system over the entire navigation volume." No specific sample size (e.g., number of measurements) is given.
- Simulated Use Accuracy: This involved performing "a complete CT image registration and instrument navigation workflow." Again, no specific sample size (e.g., number of simulated procedures or measurements) is provided.
- Pre-clinical (cadaver) tests: These were conducted to mimic surgical procedures "in a simulated clinical environment." No sample size (number of cadavers or procedures) is specified.
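Accuracy figures like those in the table above are presumably summary statistics over many paired measurements. As a minimal sketch (the actual CARTO® ENT bench protocol and sample sizes are not disclosed), this is how a "mean error (standard deviation)" figure could be computed from system positions paired with robot reference positions; the function name `location_accuracy` is invented for illustration.

```python
# Illustrative only: computing a "location accuracy: mean (SD)" summary from
# paired system/reference positions. The actual CARTO ENT bench protocol,
# measurement count, and robot setup are not described in the document.
import numpy as np

def location_accuracy(system_xyz, reference_xyz):
    """Per-point Euclidean errors (mm), with their mean and sample SD."""
    errors = np.linalg.norm(np.asarray(system_xyz, float)
                            - np.asarray(reference_xyz, float), axis=1)
    return errors.mean(), errors.std(ddof=1), errors

# Toy data: reference positions plus small Gaussian measurement noise
rng = np.random.default_rng(1)
ref = rng.uniform(-50, 50, size=(200, 3))            # "robot" positions, mm
meas = ref + rng.normal(scale=0.3, size=ref.shape)   # simulated system readings
mean_err, sd_err, _ = location_accuracy(meas, ref)
```

A real protocol would also specify the sampling grid over the navigation volume and the number of repetitions per position, none of which the document reports.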
3. Number of experts used to establish the ground truth for the test set and their qualifications
This information is not provided. The accuracy tests rely on a "very accurate robot system" for bench accuracy and the overall system workflow for simulated use, rather than expert-established ground truth in the traditional sense of medical image interpretation. For pre-clinical cadaver tests, qualitative estimation of clinical accuracy was performed, but no details on expert involvement or qualifications are given.
4. Adjudication method for the test set
This information is not explicitly provided. Given the nature of the bench and simulated use accuracy tests (comparison to a robot system or workflow performance), an "adjudication method" as typically applied in human reader studies would not be directly applicable.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and its effect size
An MRMC comparative effectiveness study was not done. The document states, "Clinical data was not necessary for the CARTO® ENT System."
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
Yes, the performance tests described (Bench test location accuracy, Simulated Use Accuracy) primarily evaluate the device's intrinsic capabilities and accuracy as an algorithm/system standalone. While a human operates the system during simulated use, the focus is on the system's ability to track and display positions accurately. The "pre-clinical (cadaver) tests" involved a human performing a sinuplasty procedure workflow, but the evaluation of "system clinical accuracy" was qualitative, and the primary accuracy metrics come from controlled technical tests.
7. The type of ground truth used
- Bench test location accuracy: Ground truth was established by a "very accurate robot system."
- Simulated Use Accuracy: Ground truth seems to be derived from the "complete CT image registration and instrument navigation workflow" itself, implying a reference standard within the controlled simulation.
- Pre-clinical (cadaver) tests: The "ground truth" for these tests was for "qualitatively estimate the system clinical accuracy" rather than a defined, measurable anatomical truth.
8. The sample size for the training set
This information is not provided. The document describes a medical navigation system, and while it involves software, there is no mention of a training set in the machine-learning sense, which is where a training set sample size would be relevant. The system's functionality is based on electromagnetic tracking technology, not on a learned model trained from data.
9. How the ground truth for the training set was established
This information is not provided as a "training set" is not mentioned in the context of this device's validation.