Indicated for displaying images of the tracheobronchial tree to aid the physician in guiding endoscopic tools or catheters in the pulmonary tract and to enable marker placement within soft lung tissue. It does not make a diagnosis and is not an endoscopic tool. Not for pediatric use.
The superDimension™ navigation system version 7.2 (V7.2) is a device that guides endoscopic tools to a target in or adjacent to the bronchial tree on a path identified by a previous CT scan. The superDimension™ navigation system V7.2 allows visualization of the target and the interior of the bronchial tree; placement of catheters in the bronchial tree; and placement of radiosurgical and dye markers into soft lung tissue to guide radiosurgery and thoracic surgery.
Covidien LLC is introducing the superDimension™ navigation system software release V7.2, a software modification to the predicate device, the superDimension™ navigation system cleared under 510(k) K151376. The V7.2 software includes an optional local registration feature intended to compensate for CT-to-body divergence through incorporation of additional fluoroscopic imaging data acquired during the electromagnetic navigation procedure. Local registration is an optional feature and can be used at the physician's discretion.
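The submission does not describe the local registration algorithm itself. Purely to illustrate the general idea of correcting a CT-based plan with intraprocedural imaging, the following is a minimal, hypothetical sketch of rigid point-set registration (the Kabsch method) between paired landmark positions; the function names, the NumPy dependency, and the choice of a rigid correction are assumptions made for illustration and do not represent Covidien's implementation.

```python
# Illustrative only: generic rigid point-set registration (Kabsch method).
# This is NOT the superDimension V7.2 algorithm; it merely shows how paired
# landmarks from two coordinate frames (e.g., CT plan vs. intraprocedural
# fluoroscopy) could yield a local rigid correction transform.
import numpy as np

def rigid_correction(ct_points: np.ndarray, fluoro_points: np.ndarray):
    """Return rotation R and translation t mapping ct_points onto fluoro_points.

    Both inputs are (N, 3) arrays of corresponding landmark positions (mm).
    """
    ct_centroid = ct_points.mean(axis=0)
    fl_centroid = fluoro_points.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (ct_points - ct_centroid).T @ (fluoro_points - fl_centroid)
    U, _, Vt = np.linalg.svd(H)
    # Correct for possible reflection so R is a proper rotation
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = fl_centroid - R @ ct_centroid
    return R, t

if __name__ == "__main__":
    # Example: a pure shift between plan and intraprocedural landmarks
    rng = np.random.default_rng(0)
    ct = rng.uniform(0, 100, size=(6, 3))              # planned landmark positions (mm)
    fluoro = ct + np.array([2.0, -1.5, 3.0])           # same landmarks, shifted
    R, t = rigid_correction(ct, fluoro)
    planned_target = np.array([50.0, 40.0, 30.0])
    corrected_target = R @ planned_target + t          # target adjusted by the correction
    print(np.round(corrected_target, 2))
```

A correction of this kind would presumably be applied only in the neighborhood of the target, which is consistent with the feature being described as "local" registration, but the submission does not confirm this detail.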
This document describes a 510(k) premarket notification for the superDimension™ Navigation System V7.2. The device is a software modification to a previously cleared predicate device (K151376). The information provided centers on the substantial equivalence argument, particularly how safety and effectiveness are maintained despite the modification.
Based on the provided text, here's an analysis of the acceptance criteria and the study that proves the device meets them:
Key Takeaway: This submission (K173244) is for a software modification (V7.2) to an existing device. Crucially, no clinical studies were required for this submission. The "studies" proving acceptance are primarily design verification and validation tests, and a demonstration of substantial equivalence to the predicate device.
1. Table of Acceptance Criteria and Reported Device Performance
Given that this is a 510(k) for a software update to an already cleared device, the "acceptance criteria" are predominantly about ensuring the updated device continues to meet the safety and effectiveness standards of the predicate, and that the new feature (local registration) functions as intended without introducing new risks or compromising existing functions.
| Acceptance Criterion (Based on Design V&V) | Reported Device Performance |
|---|---|
| **Software Verification Testing** | |
| Target visualization accuracy | Met specifications |
| Target marking accuracy | Met specifications |
| Local registration accuracy | Met specifications |
| Software application testing | Met specifications |
| Regression testing (no impact on unmodified software) | Confirmed no impact |
| **Usability Validation Testing** | |
| Functionality | Confirmed by users |
| User interface | Confirmed by users |
| User needs and intended uses | Conformed to user needs and intended uses |
| Fiducial Marker Board Design Verification | Met specifications |
| Shipping Validation | Met specifications |
| Risk Management | Performed to analyze potential hazards; demonstrated that the addition of the local registration feature does not significantly change device risks |
| Compliance with Standards | Met listed international and FDA-recognized consensus standards (ISO 14971, IEC 62366-1, ISO 15223-1, ASTM D4169, ANSI/AAMI/IEC 62304) |
| No change in intended use / Indications for Use | Indications for use confirmed to be identical to those of the predicate device |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: The document does not specify a numerical "sample size" in terms of patient data or clinical cases for testing. The testing described is primarily engineering and usability validation.
- For software verification, this would typically involve a comprehensive set of test cases designed to exercise all functionalities, including the new local registration feature. The document cites "target visualization and marking accuracy, local registration accuracy, software application testing and regression testing." These tests are likely performed on simulated data, phantom studies, and possibly existing de-identified CT scans (see the illustrative sketch after this list).
- For usability validation, the text mentions "representative users from targeted user groups." No specific number of users is provided.
- For fiducial marker board testing, no specific sample size is mentioned.
- Data Provenance: Not applicable in the context of clinical studies for this submission. The tests are primarily in-house design verification and validation. If any imaging data were used for software testing, its origin (e.g., country) is not specified, but it's implied to be de-identified or synthetic data suitable for testing.
- Retrospective or Prospective: Not applicable as this was not a clinical study involving patients. The design verification and validation were prospective in nature (performed specifically for this submission).
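To make the character of such verification testing concrete, below is a minimal, hypothetical sketch of an accuracy check of the kind implied by "target marking accuracy": measured positions from a phantom run are compared against known ground-truth positions and pass only if the worst-case error stays within a predefined specification. The 5.0 mm limit, the function names, and the sample data are invented for illustration; the actual acceptance limits are not stated in the document.

```python
# Hypothetical verification check: does measured target position error stay
# within a predefined specification? The 5.0 mm limit and the sample data are
# illustrative assumptions, not values from the 510(k) submission.
import math

SPEC_MAX_ERROR_MM = 5.0  # assumed acceptance limit, for illustration only

def target_error_mm(measured, ground_truth):
    """Euclidean distance (mm) between a measured and a ground-truth 3-D point."""
    return math.dist(measured, ground_truth)

def verify_target_marking(cases):
    """Return (passed, worst_error) for a list of (measured, ground_truth) pairs."""
    errors = [target_error_mm(m, g) for m, g in cases]
    worst = max(errors)
    return worst <= SPEC_MAX_ERROR_MM, worst

if __name__ == "__main__":
    phantom_cases = [
        ((10.2, 20.1, 30.4), (10.0, 20.0, 30.0)),
        ((55.8, 41.0, 12.6), (55.0, 40.5, 12.0)),
    ]
    passed, worst = verify_target_marking(phantom_cases)
    print(f"worst error = {worst:.2f} mm, pass = {passed}")
```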
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
- Number of Experts: For usability validation, the document mentions "qualified bronchoscopists and clinicians." A specific number is not provided.
- Qualifications of Experts: They are described as "qualified bronchoscopists and clinicians." This implies medical professionals with relevant experience in the use of such navigation systems in pulmonary procedures. No specific years of experience are listed.
- Ground Truth Establishment: For the technical software verification tests (accuracy, regression), the "ground truth" would be established by engineering specifications and expected outputs generated by the development team. For usability validation, "ground truth" is established by user feedback and observation against defined usability metrics.
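For the regression-testing component mentioned above, a common pattern is to compare the modified software's outputs for a fixed set of test cases against a stored "golden" baseline of expected outputs generated before the change. The sketch below is a hypothetical illustration of that pattern only; the tolerance value and the sample data are assumptions, not details from the submission.

```python
# Hypothetical regression-style check: outputs of an unmodified software path
# are compared against a stored "golden" baseline of expected outputs.
# The tolerance and the sample data are illustrative assumptions only.
import math

TOLERANCE_MM = 0.01  # assumed numeric tolerance, for illustration only

def regression_passes(baseline, current, tol=TOLERANCE_MM):
    """True if every current output matches its baseline within `tol` (mm).

    Both arguments map a test-case id to an [x, y, z] position.
    """
    if baseline.keys() != current.keys():
        return False
    return all(math.dist(baseline[k], current[k]) <= tol for k in baseline)

if __name__ == "__main__":
    golden = {"case_01": [10.0, 20.0, 30.0], "case_02": [5.5, 7.25, 9.0]}
    candidate = {"case_01": [10.0, 20.0, 30.0], "case_02": [5.5, 7.25, 9.0]}
    print(regression_passes(golden, candidate))  # True: behavior unchanged
```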
4. Adjudication Method for the Test Set
- Adjudication Method: Not applicable. This was not a multi-reader assessment of subjective image interpretations. For usability testing, observed behaviors and subjective feedback from users would be collected and analyzed, likely against predefined criteria, but not through a formal adjudication process akin to clinical diagnostic studies.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
- MRMC Study: No, an MRMC comparative effectiveness study was not done. The document explicitly states: "Clinical tests were not required to validate the changes to the superDimension™ navigation system V7.2."
- Effect Size: Not applicable.
6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done
- Standalone Performance: The software verification testing (e.g., target visualization accuracy, target marking accuracy, local registration accuracy) can be viewed as a form of "standalone" evaluation: it assesses the software's computational correctness and accuracy against predefined metrics, independent of real-time human interaction during a procedure. The device itself, however, is inherently intended for human-in-the-loop use and is not a standalone diagnostic AI system.
7. The Type of Ground Truth Used
- Type of Ground Truth:
- For software verification (accuracy): Engineering specifications, mathematically derived correct outputs, or measurements from controlled phantom studies.
- For usability validation: User feedback, observed user interactions, and conformance to predefined user needs and task completion criteria.
- Clinical ground truth (e.g., pathology, outcomes data) was not used, as no clinical studies were performed.
8. The Sample Size for the Training Set
- Sample Size for Training Set: Not applicable. This submission is for a software update for a medical device that guides instruments, not for a machine learning/AI algorithm that requires a "training set" in the conventional sense (e.g., for image classification or prediction). The "software modification" refers to changes in the core application logic and potentially new computational features (like local registration), which are developed through traditional software engineering processes, not machine learning model training.
9. How the Ground Truth for the Training Set Was Established
- Ground Truth for Training Set Establishment: Not applicable, as there was no machine learning training set in the context of this device update. The "ground truth" for the software's functionality and accuracy would be established through its design specifications and algorithms, which are based on established physics, geometry, and engineering principles relevant to navigation and imaging.
§ 892.1750 Computed tomography x-ray system.
(a) Identification. A computed tomography x-ray system is a diagnostic x-ray system intended to produce cross-sectional images of the body by computer reconstruction of x-ray transmission data from the same axial plane taken at different angles. This generic type of device may include signal analysis and display equipment, patient and equipment supports, component parts, and accessories.

(b) Classification. Class II.