Search Results
Found 3 results
510(k) Data Aggregation
(441 days)
The KODEX-EPD™ System 1.5.0 is indicated for catheter-based cardiac electrophysiological (EP) procedures. The KODEX-EPD™ System 1.5.0 provides information about the electrical activity of the heart and about catheter location during the procedure. The system can be used on patients who are eligible for a conventional electrophysiological procedure.
The KODEX-EPD™ System 1.5.0 is a catheter-based cardiac mapping system designed to acquire and analyze individual data points, and use this information to display 3D electro-anatomical maps of the human heart in real-time. The information needed to create the cardiac maps is acquired using standard electrophysiological (EP) catheters and proprietary external sensors.
The KODEX-EPD™ System continuously collects electromagnetic signals from all sensors and electrodes attached to it. The system then uses these signals to create a 3D image of the chamber and superimposes the real-time catheter position on the chamber image. In addition, the KODEX-EPD™ System 1.5.0 supports representation of the electrical activity of cardiac chambers, based on the intracardiac signals received from all catheters and body surface signals.
The provided document, a 510(k) Summary for the KODEX-EPD™ System 1.5.0, focuses on demonstrating substantial equivalence to a predicate device rather than providing a detailed acceptance criteria table with specific quantitative performance metrics like sensitivity, specificity, or AUC, as might be found in a study for an AI-driven diagnostic device.
The KODEX-EPD™ System 1.5.0 is a cardiac mapping and navigation system that provides real-time 3D electro-anatomical maps of the heart and catheter location. This is a medical device that generates data for clinical interpretation, rather than an AI or CADe/CADx device that interprets medical images or data. Therefore, the "acceptance criteria" and "study that proves the device meets the acceptance criteria" are framed in terms of meeting design specifications, demonstrating functionality, and showing substantial equivalence to a legally marketed predicate device.
However, I can extract the relevant information from the document that addresses the spirit of your request, adapting it to the nature of this particular device's submission. The document emphasizes functional performance and safety as the primary acceptance criteria for the new features (Cryo Occlusion Viewer and Tissue Engagement Viewer) and the updated system.
Here's an attempt to present the information structured according to your request, with the understanding that the "acceptance criteria" are more about functionality and safety in this context:
Acceptance Criteria and Study for KODEX-EPD™ System 1.5.0
The acceptance criteria for the KODEX-EPD™ System 1.5.0, particularly for its new features (Cryo Occlusion Viewer and Tissue Engagement Viewer), are primarily focused on functionality, accuracy in providing supplemental information, and ensuring that these new features do not raise new questions of safety or effectiveness compared to the predicate device. The studies conducted were verification and validation tests aligned with these criteria.
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria Category | Specific Criteria (Functional/Safety) | Reported Device Performance / Study Finding |
|---|---|---|
| System Functionality | Meets design specifications | "The testing demonstrated that the product meets its performance specifications and performs as intended." |
| System Functionality | Performs as intended | "The KODEX-EPD™ System 1.5.0 was found to be substantially equivalent to the predicate and reference devices." |
| Cryo Occlusion Viewer | Software requirements met | "verified through system software verification that demonstrated that the software requirements specifications were met" |
| Cryo Occlusion Viewer | Regression testing passed (no impact on functionality) | "regression testing that demonstrated that the system functionality was not impacted by upgrade to version 1.5.0." |
| Cryo Occlusion Viewer | Compatibility & interoperability with Medtronic Achieve Catheter | "verification of the compatibility and interoperability of the Medtronic Achieve Catheter when connected to the KODEX-EPD™ System." |
| Cryo Occlusion Viewer | Validation of occlusion status | "validated through retrospective analysis of KODEX-EPD™ cases for which KODEX occlusion status was compared to venography, an acute animal study and as part of a summative usability study." |
| Cryo Occlusion Viewer | Meets specifications & performs as intended | "Verification and validation testing demonstrated that the Cryo Occlusion viewer met its specifications and performed as intended." |
| Tissue Engagement Viewer (TEV) | Indicate catheter tip touch status (Touch/No Touch/High Touch) | "assessed on the bench to verify the capability to indicate if the catheter tip touches the tissue or not" |
| Tissue Engagement Viewer (TEV) | Indicate touch level with low latency (up to 1 second) | "indicate the touch level in latency up to 1 second" |
| Tissue Engagement Viewer (TEV) | Indicate high-force touch | "to indicate if the catheter tip touches the tissue with high force." |
| Tissue Engagement Viewer (TEV) | Validated in animal study (compared to clinical standards) | "validated in an acute GLP animal study in which the clinician first established contact guided by clinical standards (fluoroscopy, EGM, impedance, ICE) and then verified TEV utilizing KODEX-EPD™ SW 1.5.0. At each location the engagement was increased from no touch, to touch, to high touch based on these standards and then compared TEV on KODEX-EPD™ SW v.1.5.0." |
| Tissue Engagement Viewer (TEV) | Meets specifications & performs as intended | "Verification and validation testing demonstrated that the TEV feature met its specifications and performed as intended." |
| Safety & Effectiveness | No new questions of safety or effectiveness compared to predicate | "The only technological differences... do not present different questions of safety or effectiveness as compared to the legally marketed predicate device because the features are only intended to provide the physician with additional information which supplements common clinical practice." |
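The TEV bench criteria in the table (a three-state touch indication and a display latency of up to 1 second) can be illustrated with a minimal sketch. Everything below is a hypothetical illustration: the thresholds, the normalized engagement signal, and the function names are assumptions, not the proprietary dielectric algorithm used by the KODEX-EPD™ System.

```python
import numpy as np

# Hypothetical thresholds on a normalized engagement value; chosen for
# illustration only, not taken from the 510(k) submission.
TOUCH_THRESHOLD = 0.3
HIGH_TOUCH_THRESHOLD = 0.7

def classify_touch(engagement: float) -> str:
    """Map a normalized tissue-engagement value to the three TEV states."""
    if engagement >= HIGH_TOUCH_THRESHOLD:
        return "High Touch"
    if engagement >= TOUCH_THRESHOLD:
        return "Touch"
    return "No Touch"

def bench_latency_ok(event_times_s, display_times_s, max_latency_s=1.0):
    """Bench-style check of the 'latency up to 1 second' criterion:
    every displayed state change must follow its physical contact
    event by no more than max_latency_s."""
    latencies = np.asarray(display_times_s) - np.asarray(event_times_s)
    return bool(np.all((latencies >= 0) & (latencies <= max_latency_s)))
```

For example, a contact event at t = 0.0 s displayed at t = 0.4 s passes the criterion, while one displayed at t = 1.5 s would fail.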
2. Sample Sizes Used for the Test Set and Data Provenance
Due to the nature of the device (a medical system for creating maps/location, not a diagnostic AI), specific "test set" sample sizes in the typical AI sense (e.g., number of images or cases for classification) are not explicitly detailed. Instead, the testing involved:
- Cryo Occlusion Viewer:
  - Retrospective analysis of KODEX-EPD™ cases: The exact number of cases is not specified in the document.
  - Acute animal study: The number of animals is not specified.
  - Summative usability study: The number of participants/cases is not specified.
  - Data Provenance: Not explicitly stated, but the clinical cases are likely from medical centers where KODEX-EPD™ is used; "retrospective" implies existing data.
- Tissue Engagement Viewer (TEV):
  - Acute GLP animal study: The number of animals is not specified.
  - Data Provenance: Not explicitly stated beyond "animal study."
- General System Testing:
  - Bench testing, software verification/validation, EMC testing, electrical safety testing, and usability testing. Specific sample sizes (e.g., number of test runs, number of usability participants) are not specified.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications
- Cryo Occlusion Viewer: "KODEX occlusion status was compared to venography." This implies venography served as a ground truth or reference standard. The number or qualifications of experts interpreting the venography or other clinical data for the retrospective analysis is not specified.
- Tissue Engagement Viewer (TEV): "clinician first established contact guided by clinical standards (fluoroscopy, EGM, impedance, ICE)." This indicates that experienced clinicians judging by established clinical methods served as the "ground truth" for the animal study. The number or specific qualifications of these clinicians are not specified.
4. Adjudication Method for the Test Set
- Explicit adjudication methods (e.g., 2+1, 3+1 for discordant reads) are NOT mentioned.
- For the Cryo Occlusion Viewer, the comparison was made against venography, suggesting a direct comparison rather than a human consensus process for ground truth.
- For the TEV, the animal study relied on "clinician-established contact guided by clinical standards." This inherently involves expert judgment but without a specified formal adjudication process for multiple readers.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done
- No, an MRMC comparative effectiveness study was NOT done to compare human readers with vs. without AI assistance. This device is not an AI-driven image interpretation tool in that typical sense. It is a system that generates mapping data and provides additional information (TEV, Cryo Occlusion Viewer) to the clinician. The new features supplement existing clinical practice rather than replacing or directly assisting in complex diagnostic interpretation in an MRMC comparative method.
6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) was done
- Yes, in spirit, elements of standalone performance were evaluated for the features of the device.
- Cryo Occlusion Viewer: Verified through software verification and compatibility/interoperability testing. Its "validation" involved comparing its output (occlusion status) to venography, which is akin to evaluating the algorithm's output against a reference.
- Tissue Engagement Viewer (TEV): Assessed "on the bench to verify the capability to indicate if the catheter tip touches the tissue or not, indicate the touch level in latency up to 1 second and to indicate if the catheter tip touches the tissue with high force." This is an algorithm-only performance evaluation of its core functionality before human-in-the-loop validation.
7. The Type of Ground Truth Used
- Cryo Occlusion Viewer: Compared to venography (a gold standard imaging technique for confirming occlusion) and other unnamed appropriate visualization techniques. Implicitly, clinical outcomes from the retrospective cases also contribute to validation.
- Tissue Engagement Viewer (TEV): Ground truth was established by clinicians guided by established clinical standards (fluoroscopy, EGM, impedance, ICE) in an animal model. This is a form of expert consensus based on established clinical practice.
- For general system performance, internal design specifications and regulatory standards (IEC, ANSI/AAMI) serve as the "ground truth" for verification.
8. The Sample Size for the Training Set
- This device is not an AI/ML model that explicitly undergoes "training" on a distinct dataset in the way a deep learning classification model would. It's a deterministic system whose new features are based on impedance/dielectric mapping principles.
- Therefore, there is no "training set" in the context of AI/ML model development. The system's underlying algorithms are based on established biophysical principles, not learned from data in an iterative training process.
9. How the Ground Truth for the Training Set Was Established
- As there is no "training set" in the AI/ML sense, this question is not applicable. The "ground truth" for the development of the system's principles and algorithms would be fundamental physics, physiology, and engineering principles validated through design and verification testing.
(120 days)
Philips Magnetic Resonance (MR) systems are Medical Electrical Systems indicated for use as a diagnostic device. This MR system enables trained physicians to obtain cross-sectional images, spectroscopic images and/or spectra of the internal structure of the head, body or extremities, in any orientation, representing the spatial distribution of protons or other nuclei with spin.
Image appearance is determined by many different physical properties of the tissue and the anatomy, the MR scan technique applied, and presence of contrast agents. The use of contrast agents for diagnostic imaging applications should be performed consistent with the approved labeling for the contrast agent.
The trained clinical user can adjust the MR scan parameters to customize image appearance, accelerate image acquisition, and synchronize with the patient's breathing or cardiac cycle.
The systems can use combinations of images to produce physical parameters and related derived images. These images, spectra, and measurements of physical parameters, when interpreted by a trained physician, provide information that may assist diagnosis and therapy planning. The accuracy of determined physical parameters depends on system and scan parameters, and must be controlled and validated by the clinical user.
In addition the Philips MR systems provide imaging capabilities, such as MR fluoroscopy, to guide and evaluate interventional and minimally invasive procedures in the head, body and extremities. MR Interventional procedures, performed inside or adjacent to the Philips MR system, must be performed with MR Conditional or MR Safe instrumentation as selected and evaluated by the clinical user for use with the specific MR system configuration in the hospital. The appropriateness and use of information from a Philips MR system for a specific interventional procedure and specific MR system configuration must be validated by the clinical user.
The proposed Ingenia 3.0T, Ingenia 3.0T CX, Ingenia Elition and MR 7700 with Distributed Multi Nuclei R5.9 are provided on the 60 cm and 70 cm bore 3.0 Tesla (3.0T) Magnetic Resonance Diagnostic Devices.
This bundled abbreviated 510(k) submission will include modifications of the 3.0T systems, included in the legally marketed predicate device Achieva, Intera, Ingenia, Ingenia CX, Ingenia Elition, and Ingenia Ambition MR Systems R5.7 (K193215, 04/10/2020).
This 510(k) submission will address the following HW and SW modifications for the proposed Ingenia 3.0T, Ingenia 3.0T CX, Ingenia Elition and MR 7700 with Distributed Multi Nuclei R5.9 since the clearance of the last submission for each of the systems:
- New system MR 7700, which contains a modified gradient system compared to Ingenia Elition X
- Modified Multi Nuclei option, now available for all 3.0T systems
This 510(k) submission will also address minor hardware and software enhancements:
- Universal Mains Distribution Unit (uMDU)
- 3.0T 1H RF Amplifier
- Extended Functionality Options
The proposed Ingenia 3.0T, Ingenia 3.0T CX, Ingenia Elition and MR 7700 with Distributed Multi Nuclei R5.9 are intended to be marketed with the following pulse sequences and coils that were previously cleared by FDA:
- mDIXON (K102344)
- SWIp (K131241)
- mDIXON-Quant (K133526)
- mDIXON XD (K143128)
- O-MAR (K143253)
- 3D APT (K172920)
- Coils compatible with Ingenia 3.0T, Ingenia 3.0T CX, Ingenia Elition and MR 7700
The proposed Ingenia 3.0T, Ingenia 3.0T CX, Ingenia Elition and MR 7700 with Distributed Multi Nuclei R5.9 are substantially equivalent to the legally marketed predicate device Achieva, Intera, Ingenia, Ingenia CX, Ingenia Elition, and Ingenia Ambition MR Systems R5.7 (K193215, 04/10/2020).
The provided text describes modifications to existing Philips Magnetic Resonance (MR) systems (Ingenia 3.0T, Ingenia 3.0T CX, Ingenia Elition, and MR 7700) with the addition of "Distributed Multi Nuclei" functionality. The submission argues for substantial equivalence to a legally marketed predicate device (Achieva, Intera, Ingenia, Ingenia CX, Ingenia Elition, and Ingenia Ambition MR Systems R5.7, K193215).
However, the document does not describe a study that proves the device meets specific performance acceptance criteria in the way one might expect for a diagnostic AI device (e.g., sensitivity, specificity, AUC). Instead, it relies on demonstrating substantial equivalence through non-clinical verification and validation testing, and compliance with recognized standards.
Here's an analysis based on the information provided, highlighting what is present and what is missing concerning your request:
1. Table of Acceptance Criteria and Reported Device Performance
This information is not provided in the document as a quantitative table of performance metrics. The document states: "Test results demonstrate that the proposed Ingenia 3.0T, Ingenia 3.0T CX, Ingenia Elition and MR 7700 with Distributed Multi Nuclei R5.9 meet the acceptance criteria and are adequate for its intended use." However, it does not specify what those acceptance criteria are nor what the reported performance values against those criteria are for the functional enhancements (Modified Gradient System, Distributed Multi Nuclei, uMDU, 3.0T 1H RF Amplifier).
The "acceptance criteria" appear to be related to compliance with international, FDA recognized consensus standards, and the intended use of the MR system as a diagnostic device. The performance is implied to be equivalent to the predicate device.
2. Sample Size Used for the Test Set and Data Provenance
This information is not applicable in the context of a clinical study for a diagnostic algorithm. The document describes "non-clinical verification and/or validation tests." These would typically involve engineering tests, phantom studies, and system-level performance checks rather than a test set of patient data with ground truth for diagnostic accuracy.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
This information is not applicable. Since no clinical study or diagnostic accuracy study with a "test set" in the context of expert-established ground truth is described, this detail is not present. The device enables physicians to obtain images and spectra; the interpretation by a "trained physician" is mentioned as part of the intended use, but not as part of a ground truth establishment process for the device's own performance evaluation.
4. Adjudication Method for the Test Set
This information is not applicable for the same reasons as points 2 and 3.
5. Whether a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and If So, the Effect Size of Human Reader Improvement with vs. without AI Assistance
No. The document explicitly states: "The proposed Ingenia 3.0T, Ingenia 3.0T CX, Ingenia Elition and MR 7700 with Distributed Multi Nuclei R5.9 did not require a clinical study since substantial equivalence to the legally marketed predicate device was proven with the verification/validation testing." Therefore, no MRMC study or AI assistance effect size is mentioned.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
This information is not applicable. The device is an MR imaging system itself, not a standalone diagnostic algorithm that would typically undergo such testing. Its function is to acquire images and spectra for human interpretation.
7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.)
This information is not applicable in the context of evaluating diagnostic accuracy using a test set of patient data. The "ground truth" for the non-clinical tests would have been established through engineering specifications, physical measurements, and compliance with recognized standards for MR system performance.
8. The Sample Size for the Training Set
This information is not applicable. The document does not describe an AI or machine learning algorithm that requires a training set in the conventional sense for diagnostic classification. The modifications are to the physical and software components of an MR system for image acquisition and processing capabilities.
9. How the Ground Truth for the Training Set was Established
This information is not applicable for the same reasons as point 8.
Summary of what the document does provide regarding acceptance criteria and proof:
The document establishes "substantial equivalence" to a predicate device (K193215) by demonstrating that the modifications (Modified Gradient System, Distributed Multi Nuclei, uMDU, 3.0T 1H RF Amplifier, and minor software enhancements) do not raise different questions of safety and effectiveness, and that the device continues to meet its intended use.
The proof described is through:
- Non-clinical verification and validation tests: These tests were performed "with regards to the intended use, the technical claims, the requirement specifications and the risk management results."
- Compliance with international and FDA recognized consensus standards: A list of standards (e.g., IEC60601 series, IEC62366-1, IEC 62304, NEMA MS series, ISO 14971, and various FDA guidance documents) is provided.
- Risk management activities: To ensure all identified risks are sufficiently mitigated and overall residual risk is acceptable.
The acceptance criteria implicitly refer to successfully passing these non-clinical tests, complying with all listed standards, adequately mitigating risks, and maintaining the same safety and effectiveness profile as the predicate device such that no new clinical study was deemed necessary. The "reported device performance" is the successful fulfillment of these criteria, leading to the conclusion of substantial equivalence.
(305 days)
The Precise Image is a reconstruction software application for a Computed Tomography X-Ray System intended to produce images of the head and body by computer reconstruction of x-ray transmission data taken at different angles and planes. These devices may include signal analysis and display equipment, supports, components, and accessories. Precise Image has been evaluated and is available on preselected reference protocols for adult subjects. Precise Image is not indicated for use in pediatric subjects.
The CT system with Precise Image is indicated for head, whole body and vascular X-ray Computed Tomography applications. These scanners are intended to be used for diagnostic imaging.
Precise Image uses an Artificial Intelligence powered reconstruction that is designed for low radiation dose, provides lower noise, and improves low contrast detectability.
The proposed Precise Image is a reconstruction software application that may be used on a Philips whole-body computed tomography (CT) X-Ray System. Precise Image is a robust reconstruction software application utilizing technological advancements in Artificial Intelligence and Convolutional Neural Networks (CNNs). When used, Precise Image generates CT images that provide an image appearance similar to traditional FBP images while reducing dose and improving image quality.
The implemented algorithm includes 5 user-adjustable settings to match the Radiologist's preference for dose reduction and image quality.
The proposed Precise Image reconstruction has been trained on and may be used on the currently marketed predicate device Philips Incisive CT System (K180015).
Here's a breakdown of the acceptance criteria and the study details for the Philips Precise Image device, based on the provided text:
Acceptance Criteria and Device Performance
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Low Contrast Resolution (20 cm Catphan phantom) | 5 mm @ 0.3% @ 5.5 mGy CTDIvol (Improved, better low contrast resolution at lower dose levels compared to predicate's 4 mm @ 0.3% @ 22 mGy CTDIvol) |
| Noise Reduction and Low Contrast Detectability | Achieving up to 85% lower noise at 80% lower dose and 60% better low contrast detectability (Improved compared to standard mode, which is the baseline for the claim) |
| Noise Power Spectrum (NPS) Shift | Where noise is reduced by at least 50%, the system shall shift the noise power spectrum of images by no more than 6% as compared to the same data reconstructed without Precise Image. (Will not shift NPS more than 6%) |
| Application | Head, Body, and Vascular (Matches predicate's Head, Body, Vascular, and Cardiac applications in relevant scan types) |
| Scan Regime | Continuous Rotation (Identical to predicate) |
| Scan Field of View | Up to 500 mm (Identical to predicate) |
| Minimum Scan Time | 0.35 sec for 360° rotation (Identical to predicate) |
| Noise in Standard Mode (21.6 cm water-equivalent) | 0.27% at 27 mGy (Identical to predicate) |
| Compliance with Standards and Guidance | Maintains compliance with IEC 60601-1, IEC 60601-1-2, IEC 60601-1-3, IEC 60601-1-6, IEC 60601-2-44, IEC 62304, ISO 10993-1, ISO 14971, Guidance for Industry and FDA Staff – Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices, Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Discussion Paper. |
| Image Quality (Clinical Evaluation) | All images were evaluated to have good image quality by certified radiologists. |
| Diagnostic Confidence, Sharpness, Noise Level, Image Texture, and Artifacts | Evaluated on a five-point Likert scale, demonstrating substantial equivalence to the predicate. |
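The NPS-shift criterion in the table lends itself to a simple computational check. The sketch below is one possible implementation, under stated assumptions: noise is measured as the standard deviation of a noise-only (uniform phantom) ROI, and the noise power spectrum is summarized by its mean spatial frequency. These choices, and all function names, are illustrative assumptions rather than Philips' actual test method.

```python
import numpy as np

def noise_reduction(ref_roi, test_roi):
    """Fractional noise reduction, with noise measured as the standard
    deviation of a noise-only (uniform phantom) ROI."""
    return 1.0 - np.std(test_roi) / np.std(ref_roi)

def nps_mean_frequency(noise_roi):
    """Mean spatial frequency of the 2D noise power spectrum, used here
    as a single-number summary of NPS shape."""
    centered = noise_roi - noise_roi.mean()
    nps = np.abs(np.fft.fft2(centered)) ** 2
    freqs = np.fft.fftfreq(noise_roi.shape[0])
    fx, fy = np.meshgrid(freqs, freqs)
    radius = np.hypot(fx, fy)
    return float((radius * nps).sum() / nps.sum())

def nps_shift_within_limit(ref_roi, test_roi,
                           min_reduction=0.50, max_shift=0.06):
    """Check the tabulated criterion: where noise is reduced by at least
    50%, the NPS summary frequency may shift by no more than 6%."""
    if noise_reduction(ref_roi, test_roi) < min_reduction:
        return True  # criterion only applies at >= 50% noise reduction
    f_ref = nps_mean_frequency(ref_roi)
    f_test = nps_mean_frequency(test_roi)
    return abs(f_test - f_ref) / f_ref <= max_shift
```

A reconstruction that merely scales the noise down preserves the NPS shape exactly, so it passes; a reconstruction that smooths away high frequencies would lower the summary frequency and could fail the 6% limit.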
Study Details
- Sample Size for Test Set and Data Provenance:
- Sample Size: 55 image set pairs.
- Data Provenance: The document states "Sample clinical images are provided with this submission," implying these are real clinical images. No specific country of origin is mentioned, nor is it explicitly stated if the data is retrospective or prospective. However, given they are "clinical images" and used for evaluation, it's highly likely they are retrospective images from existing clinical practice.
- Number of Experts and Qualifications:
- Number of Experts: 6 board-certified radiologists.
- Qualifications: "board certified radiologists." No specific years of experience or subspecialty are provided.
- Adjudication Method for the Test Set:
- The document implies individual evaluations by each of the 6 radiologists on a Likert scale for various image attributes. It does not mention any explicit adjudication method (such as 2+1 or 3+1 consensus) for the ground truth of the test set itself. The radiologists assessed Diagnostic Confidence, Sharpness, Noise Level, Image Texture, and Artifacts. The study compares the proposed device images against predicate device images, with the radiologists providing their individual assessments of these attributes.
- Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:
- Was it done? Yes, a comparative image evaluation study was performed by 6 board-certified radiologists on 55 image set pairs.
- Effect size of human reader improvement with AI vs. without AI assistance: The document states that the evaluation was to assess Diagnostic Confidence, Sharpness, Noise Level, Image Texture, and Artifacts on a five-point Likert scale, demonstrating "substantial equivalence to the currently marketed predicate device Philips Incisive CT (K180015)." It highlights improvements in low contrast resolution, noise reduction, and low contrast detectability of the device itself compared to the predicate/standard mode, but does not quantify human reader improvement (e.g., AUC, sensitivity, specificity) with AI assistance versus without it. The evaluation focused on image quality and characteristics, not on changes in diagnostic accuracy for the human reader.
- Standalone (Algorithm Only) Performance Study:
- Yes, the performance characteristics like "Low Contrast Resolution," "Noise Reduction and Low Contrast Detectability," and "Noise Power Spectrum" are measurements of the algorithm's output (the reconstructed image) and are done in a standalone manner without human intervention influencing these specific metrics. The clinical image evaluation by radiologists also assesses the output of the algorithm relative to the predicate.
- Type of Ground Truth Used:
- For the quantitative technical specifications (e.g., low contrast resolution, noise, NPS), the ground truth is based on phantom measurements (e.g., "20 cm Catphan phantom," "21.6 cm water-equivalent").
- For the clinical image evaluation, the "ground truth" for comparison is the predicate device's images (Incisive CT and Brilliance iCT), with radiologists evaluating the attributes of the Precise Image compared to these established images. There is no mention of a separate, definitive, clinical ground truth (e.g., pathology, clinical outcomes) for the diagnosis from these images. The radiologists are evaluating image quality characteristics and comparing them.
- Sample Size for the Training Set:
- Not explicitly stated in the provided text. The document mentions, "The proposed Precise Image reconstruction has been trained on and may be used on the currently marketed predicate device Philips Incisive CT System (K180015)." However, it doesn't give a specific number of images or cases used for training.
- How the Ground Truth for the Training Set was Established:
- Not explicitly stated in the provided text. It mentions the device "has been trained on" the predicate device's data, implying that the established high-quality images from the predicate device likely served as a reference or ground truth for the AI training process to guide the AI in producing similar or improved image characteristics. However, the specific method of ground truth establishment for training data is not detailed.
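The comparative reading described above (6 radiologists rating 55 image-set pairs on five-point Likert scales) can be summarized with a simple per-reader analysis. The score matrices, the 0.5-point margin, and the function names below are invented for illustration; the document does not describe the statistical method actually used.

```python
import numpy as np

def mean_likert_difference(proposed, predicate):
    """Per-reader mean Likert difference (proposed minus predicate),
    averaged over cases. Inputs are readers-by-cases score matrices."""
    return (np.asarray(proposed, dtype=float)
            - np.asarray(predicate, dtype=float)).mean(axis=1)

def equivalent(proposed, predicate, margin=0.5):
    """Crude equivalence call (hypothetical margin): every reader's mean
    difference lies within +/- margin Likert points."""
    diffs = mean_likert_difference(proposed, predicate)
    return bool(np.all(np.abs(diffs) <= margin))
```

With matrices shaped (6 readers, 55 cases) matching the study description, identical ratings yield a zero mean difference per reader, while a systematic one-point shift across all cases would fall outside a 0.5-point margin.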
Ask a specific question about this device