510(k) Data Aggregation (177-day review)
The Stealth AXIS Surgical System, with the Stealth AXIS Cranial clinical application, is intended for precise positioning of surgical instruments and as an aid for locating anatomical structures in open, minimally invasive, and percutaneous neurosurgical procedures. Their use is indicated for any medical condition in which the use of stereotactic surgery may be appropriate, and where reference to a rigid anatomical structure, such as the skull, can be identified relative to images of the anatomy.
This can include, but is not limited to, the following cranial procedures (including stereotactic frame-based and stereotactic frame alternatives-based procedures):
- Tumor resections
- General ventricular catheter placement
- Pediatric ventricular catheter placement
- Depth electrode, lead, and probe placement
- Cranial biopsies
The Stealth AXIS Cranial clinical application works in conjunction with the Stealth AXIS Surgical System. The Stealth AXIS Cranial clinical application helps guide surgeons during cranial procedures such as biopsies, tumor resections, shunt placements and depth electrode and probe placement. The system tracks the position of instruments in relation to surgical anatomy and identifies this position on diagnostic or intraoperative images of the patient.
Patient images are transferred to the system, and the Stealth AXIS Cranial clinical application displays the image of the patient anatomy from a variety of perspectives (axial, sagittal, coronal, oblique) and 3-dimensional (3D) renderings of anatomical structures. During navigation, the Stealth AXIS Surgical System identifies the tip location and trajectory of the tracked instrument on images and models the user has selected to display on the monitor. The surgeon may also create and store one or more surgical plan trajectories before surgery and simulate progression along these trajectories. During surgery, the Stealth AXIS Cranial clinical application can display how the actual instrument tip position and trajectory relate to the pre-surgical plan, helping to guide the surgeon along the planned trajectory.
The provided FDA 510(k) clearance letter and summary for the Stealth AXiS Cranial clinical application offer limited detail on the specific acceptance criteria and the study demonstrating that the device meets them, especially concerning the AI/ML components. I will extract and infer as much as possible from the provided text to answer your questions.
It's important to note that the clearance letter itself doesn't present the full study details, but rather summarizes the findings that Medtronic provided to the FDA. Many of the specific details you requested (like exact sample sizes for test sets, data provenance, number and qualifications of experts, and adjudication methods for ground truth) are not explicitly stated in this public document. I will highlight what is present and what is missing.
Acceptance Criteria and Device Performance Study for Stealth AXiS Cranial Clinical Application
1. Table of Acceptance Criteria and Reported Device Performance
The document primarily focuses on overall system accuracy requirements and general software validation. For the AI-enabled "Autotracts" feature, the acceptance criteria are less formally quantified in the provided text.
| Acceptance Criterion (Implicit) | Reported Device Performance |
|---|---|
| System Accuracy (Non-AI Component) | |
| 3D positional accuracy ≤ 2.0 mm (mean error) | Demonstrated performance in 3D positional accuracy with a mean error ≤ 2.0 mm under representative worst-case configuration. |
| Trajectory angle accuracy ≤ 2.0 degrees (mean error) | Demonstrated performance in trajectory angle accuracy with a mean error ≤ 2.0 degrees under representative worst-case configuration. |
| Software Functionality (General) | |
| Product requirements met, device performs as intended | Software verification and validation testing verified the product requirements are met, and the device performs as intended. |
| Usability for intended users, uses, and use environments | Summative usability validation was performed by representative users. The summative evaluations demonstrated the Stealth AXiS™ Cranial clinical application to be substantially equivalent for the intended user, uses, and use environments. |
| AI-enabled Autotracts Feature Acceptance (Implicit) | |
| Reliability in generating patient-specific white matter tracts | "Performance was assessed leveraging expert review to ensure reliability." (No specific quantitative metrics for reliability are provided in this summary, such as sensitivity, specificity, Dice score, etc., which are common for segmentation or tractography models. It implies expert satisfaction with the output.) |
| User control over tract appearance | "Users retain control by adjusting tract appearance via probability thresholds, manually cropping tracts as needed, and ultimately verifying tracts before proceeding." (This is a design feature enabling user acceptance, rather than a quantifiable performance metric, but important for clinical integration.) |
| Spans normal and pathological cases | "Training and validation used hundreds of images from internal studies and public datasets, spanning normal and pathological cases..." (Implicitly, the model is expected to perform adequately across a variety of patient presentations, though specific performance differences between normal/pathological cases are not detailed.) |
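For context, the 2.0 mm positional and 2.0-degree trajectory criteria in the table correspond to standard navigation-accuracy metrics: mean Euclidean error between measured and reference tip positions, and mean angular deviation between measured and reference trajectory directions. A minimal sketch of how such metrics are typically computed (all data and function names here are hypothetical, not from the submission):

```python
import numpy as np

def mean_positional_error(measured, truth):
    """Mean Euclidean distance (mm) between measured and true tip positions."""
    return float(np.mean(np.linalg.norm(measured - truth, axis=1)))

def mean_angle_error(measured_dirs, truth_dirs):
    """Mean angle (degrees) between measured and true trajectory directions."""
    m = measured_dirs / np.linalg.norm(measured_dirs, axis=1, keepdims=True)
    t = truth_dirs / np.linalg.norm(truth_dirs, axis=1, keepdims=True)
    cos = np.clip(np.sum(m * t, axis=1), -1.0, 1.0)
    return float(np.degrees(np.mean(np.arccos(cos))))

# Hypothetical phantom measurements (mm)
truth_pts = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
meas_pts  = np.array([[0.5, 0.0, 0.0], [10.0, 1.0, 0.0]])
pos_err = mean_positional_error(meas_pts, truth_pts)  # mean of 0.5 and 1.0
assert pos_err <= 2.0  # meets the <= 2.0 mm criterion in this toy case
```

In an actual verification protocol the measured points would come from tracked-instrument readings on a phantom with known fiducial geometry; the sketch only shows the arithmetic behind the stated thresholds.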
2. Sample Size for the Test Set and Data Provenance
- Test Set Sample Size: Not explicitly stated in the document for either the general system accuracy or the AI Autotracts feature. The document only mentions "withheld datasets" for validation of the AI model.
- Data Provenance:
- For System Accuracy: "representative worst-case configuration" implies laboratory testing, not patient data in this context.
- For AI Autotracts: "hundreds of images from internal studies and public datasets, spanning normal and pathological cases." The country of origin and whether the data was retrospective or prospective are not specified.
3. Number of Experts and Qualifications for Ground Truth
- Number of Experts: Not explicitly stated. The document mentions "expert-reviewed gold standard annotations" for the training/validation of the Autotracts and "expert review" to assess performance.
- Qualifications of Experts: Not explicitly stated. It is implied that these are experts in brain anatomy, neuroimaging, and neuronavigation, but specific qualifications (e.g., "Radiologist with 10 years of experience") are not provided.
4. Adjudication Method for the Test Set
- Adjudication Method: Not explicitly stated. For the expert-reviewed ground truth, it is unknown if a single expert provided the ground truth, if consensus was reached among multiple experts (e.g., 2+1, 3+1), or if there was no formal adjudication process described beyond "expert-reviewed."
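For reference, a formal scheme such as "2+1" has two experts annotate independently and a third adjudicate only where they disagree. A minimal sketch over per-voxel binary labels (illustrative only; the submission does not describe any such process):

```python
import numpy as np

def adjudicate_2plus1(reader_a, reader_b, adjudicator):
    """Where readers A and B agree, keep their label; where they
    disagree, defer to the adjudicating third expert (2+1 scheme)."""
    agree = reader_a == reader_b
    return np.where(agree, reader_a, adjudicator)

a = np.array([1, 1, 0, 0])
b = np.array([1, 0, 0, 1])
c = np.array([0, 1, 1, 0])  # adjudicator, consulted only at indices 1 and 3
print(adjudicate_2plus1(a, b, c))  # [1 1 0 0]
```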
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- MRMC Study: No. The document states, "No clinical testing was performed," so no multi-reader multi-case (MRMC) comparative effectiveness study was conducted. The usability validation was done by "representative users," but that is distinct from a clinical MRMC study designed to measure the effect of AI assistance on human reader performance.
- Effect Size of AI vs. Without AI Assistance: Because no clinical testing or MRMC study was performed, the document reports no effect size for how much human readers improve with AI versus without AI assistance.
6. Standalone (Algorithm Only) Performance
- Standalone Performance: For the AI-enabled "Autotracts" feature, the description of "Performance was assessed leveraging expert review to ensure reliability" suggests some form of standalone evaluation against expert-derived ground truth. However, specific quantitative standalone metrics (e.g., sensitivity, Dice coefficient for segmentation, average distance error for tractography) are not provided. The phrase "Users retain control by adjusting tract appearance... and ultimately verifying tracts before proceeding" also highlights that the AI's output is intended for human-in-the-loop verification, not necessarily as a standalone diagnostic or planning output without review.
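For context, the Dice coefficient mentioned above is a standard overlap metric for comparing a predicted segmentation (such as an Autotracts output mask) against an expert-annotated reference. A minimal illustrative sketch (hypothetical data; no such metric is reported in the submission):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice overlap between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Hypothetical flattened voxel masks standing in for tract segmentations
pred  = np.array([1, 1, 0, 0, 1])
truth = np.array([1, 0, 0, 1, 1])
print(dice_coefficient(pred, truth))  # 2*2 / (3+3) ≈ 0.667
```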
7. Type of Ground Truth Used
- For System Accuracy: Physical measurements against known standards in a "worst-case configuration."
- For AI Autotracts: "expert-reviewed gold standard annotations" for training and validation, and "expert review" for performance assessment. This implies expert consensus on the definition of white matter tracts based on diffusion MRI images. It does not mention pathology or outcomes data for ground truth for this specific AI feature.
8. Sample Size for the Training Set
- Training Set Sample Size: Not explicitly stated. The document mentions "Training and validation used hundreds of images from internal studies and public datasets." It does not separate the exact number for training versus validation.
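For illustration, a training/validation split with withheld test data is typically produced by a reproducible shuffle-and-partition of case identifiers. A minimal sketch under assumed fractions (the submission states neither the split nor the exact counts):

```python
import random

def split_dataset(case_ids, train_frac=0.7, val_frac=0.15, seed=0):
    """Shuffle case IDs reproducibly and partition them into train /
    validation / withheld test sets. Fractions here are assumptions."""
    ids = list(case_ids)
    random.Random(seed).shuffle(ids)  # fixed seed for reproducibility
    n_train = int(len(ids) * train_frac)
    n_val = int(len(ids) * val_frac)
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]

train, val, test = split_dataset(range(300))  # "hundreds of images"
# three disjoint partitions covering all 300 cases
```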
9. How the Ground Truth for the Training Set Was Established
- Ground Truth Establishment for Training Set: The ground truth for the AI Autotracts training set was established through "expert-reviewed gold standard annotations." This indicates that human experts manually identified or delineated the white matter tracts on the diffusion MRI images, and their work was considered the "gold standard" for the AI model to learn from.