Vascular Navigation PAD 2.0 (Brainlab AG, 259 days)
The software supports image guidance by overlaying vessel anatomy onto live fluoroscopic images in order to navigate guidewires, catheters, stents and other endovascular devices.
The device is indicated for use by physicians for patients undergoing endovascular PAD interventions of the lower limbs including iliac vessels.
The device is intended to be used in adults.
There is no other demographic, ethnic or cultural limitation for patients.
The information provided by the software or system is in no way intended to substitute for, in whole or in part, the physician's judgment and analysis of the patient's condition.
The Subject Device is a standalone medical device software supporting image guidance in endovascular procedures of peripheral artery disease (PAD) in the lower limbs, including the iliac vessels. Running on a suitable platform and connected to an angiographic system, the Subject Device receives and displays the images acquired with the angiographic system as a video stream. It provides the ability to save and process single images out of that video stream and can create a vessel tree consisting of angiographic images. The video stream can then be enriched with the saved vessel tree to continuously localize endovascular devices with respect to the vessel anatomy.
The medical device is intended for use with compatible hardware and software and must be connected to a compatible angiographic system via video connection.
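As a purely generic illustration of the overlay concept described above (and in no way the device's actual rendering pipeline), blending a saved vessel roadmap onto a live fluoroscopic frame can be sketched with OpenCV alpha blending; the frame variables and weighting are hypothetical:

```python
import cv2
import numpy as np

def overlay_roadmap(live_frame: np.ndarray, roadmap: np.ndarray,
                    alpha: float = 0.4) -> np.ndarray:
    """Alpha-blend a saved vessel roadmap onto a live fluoroscopic frame.

    Assumes both inputs are same-sized 8-bit images that are already
    co-aligned; real roadmapping must handle registration and realignment.
    """
    return cv2.addWeighted(roadmap, alpha, live_frame, 1.0 - alpha, 0.0)
```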
Here's a breakdown of the acceptance criteria and study information for the Vascular Navigation PAD 2.0, based on the provided FDA 510(k) clearance letter:
Acceptance Criteria and Device Performance for Vascular Navigation PAD 2.0
1. Table of Acceptance Criteria and Reported Device Performance
Feature/Metric | Acceptance Criteria | Reported Device Performance |
---|---|---|
Video Latency (Added) | $\le$ 250 ms | $\le$ 250 ms (for Ziehm Vision RFD 3D, Siemens Cios Spin, and combined) |
Capture Process Timespan (initiation to animation start) | $\le$ 1s | Successfully passed |
Stitching Timespan (entering stitching to calculation result) | $\le$ 10s | Successfully passed |
Roadmap/Overlay Display Timespan (manual initiation / selection / realignment to updated display) | $\le$ 10s | Successfully passed |
System Stability (Stress and Load, Anti-Virus) | No crashes, responsive application (no significant waiting periods), no significant latencies of touch interaction/animations, normal interaction possible. | Successfully passed |
Level Selection and Overlay Alignment (True-Positive Rate for suggested alignments) | Not explicitly stated as a number, but implied to be high for acceptance. | 95.71 % |
Level Selection and Overlay Alignment (Average Registration Accuracy for proposed alignments) | Not explicitly stated (but the stated "2D deviation for roadmapping $\le$ 5 mm" likely applies here as an overall accuracy goal). | 1.49 ± 2.51 mm |
Level Selection Algorithm Failures | No failures | No failures during the test |
Modality Detection (Prediction Rate in determining image modality) | Not explicitly stated ("consequently, no images were misidentified" implies 100% accuracy) | 99.25 % |
Modality Detection (Accuracy for each possible modality) | Not explicitly stated (but 100% for acceptance) | 100 % |
Roadmapping Accuracy (Overall Accuracy) | $\le$ 5 mm | 1.57 ± 0.85 mm |
Stitching Algorithm (True-Positive Rate for suggested alignments) | $\ge$ 75 % | 95 % |
Stitching Algorithm (False-Positive Rate for incorrect proposal of stitching) | $\le$ 25 % | 6.4 % |
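The stitching true-positive and false-positive rates above follow the usual confusion-matrix definitions. A minimal worked check, with illustrative counts chosen only to roughly reproduce the reported 95% / 6.4% figures (the actual counts are not given in the summary):

```python
def rates(tp: int, fp: int, fn: int, tn: int) -> tuple[float, float]:
    """True-positive and false-positive rates from confusion-matrix counts."""
    return tp / (tp + fn), fp / (fp + tn)

# Hypothetical counts, not the actual test data.
tpr, fpr = rates(tp=95, fp=6, fn=5, tn=88)
assert tpr >= 0.75 and fpr <= 0.25  # acceptance thresholds from the table
print(f"TPR = {tpr:.1%}, FPR = {fpr:.1%}")
```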
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: Not explicitly stated as a single number.
- For Latency Tests: Data from Siemens Cios Spin and Ziehm Vision RFD 3D.
- For Level Selection and Overlay Alignment: Images acquired with Siemens Cios Spin, Ziehm Vision RFD 3D, and GE OEC Elite CFD.
- For Modality Detection: Image data from Siemens Cios Spin, GE OEC Elite CFD, Philips Zenition, and Ziehm Vision RFD 3D.
- For Roadmapping Accuracy: Image data from Siemens Cios Spin.
- For Stitching Algorithm: Image data from Philips Azurion, Siemens Cios Spin, GE OEC Elite CFD, and Ziehm Vision RFD 3D.
- Data Provenance:
- Retrospective/Prospective: Not explicitly stated for all tests. However, the Level Selection and Overlay Alignment and Roadmapping Accuracy tests mention using "cadaveric image data" which implies a controlled, likely prospective, acquisition for testing purposes rather than retrospective clinical data. Other tests reference "independent image data" or data "acquired using" specific devices, suggesting a dedicated test set acquisition.
- Country of Origin: Not specified.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Number of Experts: Not explicitly stated.
- Qualifications of Experts: Not explicitly stated. The document mentions "manually achieved gold standard registrations" for Level Selection and Overlay Alignment and "manually comparing achieved gold standard (GS) stitches" for the Stitching Algorithm, implying human expert involvement in establishing ground truth, but specific details on the number or qualifications of these "manual" reviewers are absent. The phrase "if a human would consider the image pairs matchable" in the stitching section further supports human-determined ground truth.
4. Adjudication Method for the Test Set
- Adjudication Method: Not explicitly described. The ground truth seems to be established through "manually achieved gold standard" or "manual comparison," implying a single expert or a common understanding rather than a formal adjudication process between multiple conflicting expert opinions (e.g., 2+1 or 3+1).
5. Multi Reader Multi Case (MRMC) Comparative Effectiveness Study
- Was it done? No. The submission focuses on standalone technical performance measures and accuracy metrics of the algorithm rather than comparing human reader performance with and without AI assistance.
6. Standalone Performance Study
- Was it done? Yes. The entire "Performance Data" section details the algorithm's performance in various standalone tests, such as latency, stress/load, level selection and overlay alignment, modality detection, roadmapping accuracy, and stitching algorithm performance. The results are quantitative metrics of the device itself.
7. Type of Ground Truth Used
- Type of Ground Truth:
- Expert Consensus / Manual Gold Standard: For Level Selection and Overlay Alignment ("manually achieved gold standard registrations") and for the Stitching Algorithm ("manually comparing achieved gold standard (GS) stitches"). This implies human experts defined the correct alignment or stitch.
- Technical Metrics: For Latency, Capture Process, Stitching Timespan, Roadmap/Overlay Display Timespan, and System Stability, the ground truth is based on objective technical measurements against defined criteria.
- True Modality: For Modality Detection, the ground truth is simply the actual modality of the image (fluoroscopy vs. angiography) as known during test data creation or acquisition.
8. Sample Size for the Training Set
- Sample Size: Not provided. The submission focuses solely on the performance characteristics of the tested device and its algorithms, without detailing the training data or methods used to develop those algorithms.
9. How the Ground Truth for the Training Set Was Established
- How Established: Not provided. As with the training set size, the information about the training process and ground truth for training is outside the scope of the clearance letter's performance data section.
Cranial 4Pi Immobilization (Brainlab AG, 266 days)
Cranial 4Pi is intended for patient immobilization in radiotherapy and radiosurgery procedures.
Cranial 4Pi is indicated for any medical condition in which the use of radiotherapy or radiosurgery may be appropriate for cranial and head & neck treatments.
Cranial 4Pi is an assembly of the following medical device/ accessory groups:
- CRANIAL 4PI OVERLAYS (CRANIAL 4PI CT OVERLAY, CRANIAL 4PI TREATMENT OVERLAY)
- CRANIAL 4PI HEADRESTS (CRANIAL 4PI HEADREST STANDARD, CRANIAL 4PI HEADREST LOW-NECK, CRANIAL 4PI HEADREST PLATFORM)
- CRANIAL 4PI HEADREST INLAYS (CRANIAL 4PI HEADREST INLAY STANDARD, CRANIAL 4PI HEADREST INLAY OPEN FACE, CRANIAL 4PI HEADREST INLAY H&N, CRANIAL 4PI HEAD SUPPORT STANDARD, CRANIAL 4PI HEAD SUPPORT WIDE)
- CRANIAL 4PI MASKS (CRANIAL 4PI BASIC MASK, CRANIAL 4PI OPEN FACE MASK, CRANIAL 4PI EXTENDED MASK, CRANIAL 4PI STEREOTACTIC MASK, CRANIAL 4PI STEREOTACTIC MASK 3.2MM)
- CRANIAL 4PI WEDGES AND SPACERS (CRANIAL 4PI WEDGE 5 DEG., CRANIAL 4PI WEDGE 10 DEG., CRANIAL 4PI SPACER 20MM, CRANIAL 4PI INDEXING PLATE)
The Cranial 4Pi Overlays are medical devices used for fixation of the patient in a CT or linear accelerator environment.
The Cranial 4Pi Headrests and the Cranial 4Pi Headrest Inlays are accessories to the Cranial 4Pi Overlays that allow indication-specific positioning of the patient's head and neck. The Cranial 4Pi Wedges and Spacers are accessories to the Cranial 4Pi Headrest Platform that adapt the inclination of the head support to the patient's neck.
The Cranial 4Pi Masks are accessories to the Cranial 4Pi Overlays used for producing individual custom-made masks for patient immobilization to the Cranial 4Pi Overlay.
The provided text is a 510(k) Clearance Letter and 510(k) Summary for a medical device called "Cranial 4Pi Immobilization." This document focuses on demonstrating substantial equivalence to a predicate device, as required for FDA 510(k) clearance.
However, the provided text does not contain the detailed information typically found in a clinical study report or a pre-market approval (PMA) submission regarding acceptance criteria, study methodologies, or specific performance metrics with numerical results (like sensitivity, specificity, or AUC) that would be used to "prove the device meets acceptance criteria" for an AI/ML-driven device. The document primarily describes the device's components, indications for use, and a comparison to a predicate device to establish substantial equivalence.
The "Performance Data" section primarily addresses biocompatibility, mechanical verification, dosimetry, compatibility with another system, and mask stability. It does not describe a study to prove AI model performance against clinical acceptance criteria. The "Usability Evaluation" section describes a formative usability study, which is different from a performance study demonstrating clinical effectiveness or accuracy.
Therefore, many of the requested elements (especially those related to AI/ML model performance, ground truth establishment, expert adjudication, MRMC studies, or standalone algorithm performance) cannot be extracted from the provided text. The Cranial 4Pi Immobilization device appears to be a physical immobilization system, not an AI/ML diagnostic or prognostic tool.
Given the nature of the document (510(k) for an immobilization device), the concept of "acceptance criteria for an AI model" and "study that proves the device meets the acceptance criteria" in the traditional sense of an AI/ML clinical study does not apply here.
I will answer the questions based on the closest relevant information available in the provided text, and explicitly state where the information is not available or not applicable to the type of device described.
Preamble: Nature of the Device and Submission
The Cranial 4Pi Immobilization device is a physical medical device designed for patient immobilization during radiotherapy and radiosurgery. The 510(k) premarket notification for this device seeks to demonstrate substantial equivalence to an existing predicate device (K202050 - Cranial 4Pi Immobilization). This type of submission typically focuses on comparable intended use, technological characteristics, and safety/performance aspects relevant to the physical device's function (e.g., biocompatibility, mechanical stability, dosimetry interaction).
The provided documentation does not describe an AI/ML-driven component that would require acceptance criteria related to AI model performance (e.g., accuracy, sensitivity, specificity, AUC) or a study to prove such performance. Therefore, many of the questions asking about AI-specific validation (like ground truth, expert adjudication, MRMC studies, training/test sets for AI) are not applicable to this type of device and submission.
1. A table of acceptance criteria and the reported device performance
Based on the provided document, specific numerical "acceptance criteria" and "reported device performance" in the context of an AI/ML model are not available and not applicable. The document focuses on demonstrating substantial equivalence of a physical immobilization device.
However, the "Performance Data" section lists several tests and their outcomes, which serve as evidence that the device performs as intended for its physical function. These are not acceptance criteria for an AI model.
Test Category | Acceptance Criteria (Explicitly Stated or Inferred) | Reported Device Performance (as stated) |
---|---|---|
Biocompatibility | Risk mitigated by limited exposure and intact-skin contact for irritation/sensitization; low unbound residues for the coating; cytotoxicity testing to be performed. | Cytotoxicity testing: the amount of non-reacted educts is considered low. Sensitization testing (ISO 10993-10): no sensitization reactions observed for saline or cottonseed oil extractions; the test article did not elicit sensitization reactions in guinea pigs, with positive controls validating sensitivity. Irritation testing (ISO 10993-23): no irritation observed in rabbits versus control, based on erythema and edema scores for saline and cottonseed oil extracts; the test article met the requirements of the Intracutaneous (Intradermal) Reactivity Test. |
Mechanical Tests | Relevant for fulfillment of IEC 60601-1 requirements. | All mechanical tests relevant for fulfillment of IEC 60601-1 requirements were carried out successfully. |
Dosimetry Tests | Verify that dose attenuation is acceptable. | Tests verifying acceptable dose attenuation with the hardware components were carried out successfully. |
Compatibility Tests | Compatibility with ExacTrac Dynamic 2.0. | Compatibility with ExacTrac Dynamic 2.0 was tested successfully. |
Mask Stability | Cranial 4Pi SRS Mask 3.2 mm (vs. the 2 mm predicate mask) to have higher stability against head movement. | Technical validation showing that the Cranial 4Pi SRS Mask 3.2 mm, with a 3.2 mm top mask sheet instead of 2 mm, has higher stability against head movement was carried out successfully. |
Usability Evaluation | Evaluate the usability of the subject devices. | Formative usability evaluation performed in three different clinics with seven participants; specific findings are not detailed, but the study was performed. |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Sample Size for Test Set: Not applicable/not stated in the context of an AI/ML test set. The usability evaluation involved "seven participants" in "three different clinics." For biocompatibility, animal studies were performed (guinea pigs for sensitization, rabbits for irritation; specific number of animals not stated but implied to be sufficient for ISO standards).
- Data Provenance: Not applicable for an AI/ML test set. The usability evaluation involved "three different clinics" but the country of origin is not specified.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Not applicable. This device is a physical immobilization system, not an AI/ML diagnostic or prognostic tool that requires expert-established ground truth on medical images.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Not applicable. This information is relevant to validating AI/ML diagnostic performance against ground truth, which is not described for this device.
5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
- Not applicable. This is an AI/ML-specific study design. The device is a physical immobilization system, not an AI assistance tool for human readers.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Not applicable. This is an AI/ML-specific validation. There is no AI algorithm component described for this physical device.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- Not applicable. No ground truth for diagnostic or prognostic purposes is established for this physical device. The "performance data" relies on standards compliance (e.g., ISO, IEC), physical measurements, and usability feedback.
8. The sample size for the training set
- Not applicable. There is no AI model described that would require a training set.
9. How the ground truth for the training set was established
- Not applicable. There is no AI model described.
RT Elements 4.5 (Brainlab AG, 123 days)
The device is intended for radiation treatment planning for use in stereotactic, conformal, computer planned, Linac based radiation treatment and indicated for cranial, head and neck and extracranial lesions.
RT Elements are computer-based software applications for radiation therapy treatment planning and dose optimization for linac-based conformal radiation treatments, i.e. stereotactic radiosurgery (SRS), fractionated stereotactic radiotherapy (SRT) or stereotactic ablative radiotherapy (SABR), also known as stereotactic body radiation therapy (SBRT), for use in stereotactic, conformal, computer planned, Linac based radiation treatment of cranial, head and neck, and extracranial lesions.
The device consists of the following software modules: Multiple Brain Mets SRS 4.5, Cranial SRS 4.5, Spine SRS 4.5, Cranial SRS w/ Cones 4.5, RT Contouring 4.5, RT QA 4.5, Dose Review 4.5, Brain Mets Retreatment Review 4.5, and Physics Administration 7.5.
Here's the breakdown of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) clearance letter for RT Elements 4.5, specifically focusing on the AI Tumor Segmentation feature:
Acceptance Criteria and Reported Device Performance
All acceptance criteria apply to the lower bound of the 95% confidence interval; the reported values below are those lower bounds.

Diagnostic Characteristics | Dice (criterion ≥ 0.7) | Recall (criterion ≥ 0.8) | Precision (criterion ≥ 0.8) |
---|---|---|---|
All tumor types | 0.74 | 0.83 | 0.85 |
Metastases to the CNS | 0.73 | 0.82 | 0.83 |
Meningiomas | 0.73 | 0.85 | 0.84 |
Cranial and paraspinal nerve tumors | 0.88 | 0.93 | 0.93 |
Gliomas and glio-/neuronal tumors | 0.76 | 0.74 | 0.88 |
Note: For "Gliomas and glio-/neuronal tumors," the reported 95% CI lower bound for Recall (0.74) falls below the stated acceptance criterion of 0.8; additional clarification from the submission would be needed to understand how this was reconciled for clearance. For all other categories, and overall, the reported performance meets or exceeds the acceptance criteria.
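For reference, Dice, recall, and precision are standard overlap measures on binary segmentation masks. A minimal sketch of how they are conventionally computed (illustrative only, not Brainlab's validation code):

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray) -> dict[str, float]:
    """Dice, recall, and precision for one predicted/ground-truth mask pair."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()    # voxels flagged by both
    fp = np.logical_and(pred, ~truth).sum()   # algorithm-only voxels
    fn = np.logical_and(~pred, truth).sum()   # ground-truth-only voxels
    # Assumes at least one mask is non-empty, so denominators are nonzero.
    return {
        "dice": 2.0 * tp / (2.0 * tp + fp + fn),
        "recall": tp / (tp + fn),
        "precision": tp / (tp + fp),
    }
```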
Study Details for AI Tumor Segmentation
2. Sample size used for the test set and the data provenance:
- Sample Size: 412 patients (595 scans, 1878 annotations)
- Data Provenance: De-identified 3D CE-T1 MR images from multiple clinical sites in the US and Europe. Data was acquired from adult patients with one or multiple contrast-enhancing tumors. ¼ of the test pool corresponded to data from three independent sites in the USA.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Number of Experts: Not explicitly stated as a number, but referred to as an "external/independent annotator team."
- Qualifications of Experts: US radiologists and non-US radiologists. No further details on years of experience or specialization are provided in this document.
4. Adjudication method for the test set:
- The document mentions "a well-defined data curation process" followed by the annotator team, but it does not explicitly describe a specific adjudication method (e.g., 2+1, 3+1) for resolving disagreements among annotators.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance:
- No, a multi-reader multi-case (MRMC) comparative effectiveness study comparing human readers with and without AI assistance was not reported for the AI tumor segmentation. The study focused on standalone algorithm performance against ground truth.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
- Yes, a standalone performance study was done. The validation was conducted quantitatively by comparing the algorithm's automatically-created segmentations with the manual ground-truth segmentations.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Expert Consensus Segmentations: The ground truth was established through "manual ground-truth segmentations, the so-called annotations," performed by the external/independent annotator team of radiologists.
8. The sample size for the training set:
- The sample size for the training set is not explicitly stated in this document. The document mentions that "The algorithm was trained on MRI image data with contrast-enhancing tumors from multiple clinical sites, including a wide variety of scanner models and patient characteristics."
9. How the ground truth for the training set was established:
- How the ground truth for the training set was established is not explicitly stated in this document. It can be inferred that it followed a similar process to the test set, involving expert annotations, but the details are not provided.
Brainlab Elements 7.0 (Brainlab AG, 200 days)
Brainlab Elements Image Fusion is an application for the co-registration of image data within medical procedures by using rigid and deformable registration methods. It is intended to align anatomical structures between data sets. It is not intended for diagnostic purposes.
Brainlab Elements Image Fusion is indicated for planning of cranial and extracranial surgical treatments and preplanning of cranial and extracranial radiotherapy treatments.
Brainlab Elements Image Fusion Angio is a software application that is intended to be used for the co-registration of cerebrovascular image data. It is not intended for diagnostic purposes.
Brainlab Elements Image Fusion Angio is indicated for planning of cranial surgical treatments and preplanning of cranial radiotherapy treatments.
Brainlab Elements Fibertracking is an application for the processing and visualization of cranial white matter tracts based on Diffusion Weighted Imaging (DWI) data for use in treatment planning procedures. It is not intended for diagnostic purposes.
Brainlab Elements Fibertracking is indicated for planning of cranial surgical treatments and preplanning of cranial radiotherapy treatments.
Brainlab Elements Contouring provides an interface with tools and views to outline, refine, combine and manipulate structures in patient image data. It is not intended for diagnostic purposes.
Brainlab Elements Contouring is indicated for planning of cranial and extracranial surgical treatments and preplanning of cranial and extracranial radiotherapy treatments.
Brainlab Elements BOLD MRI Mapping provides tools to analyze blood oxygen level dependent data (BOLD MRI Data) to visualize the activation signal. It is not intended for diagnostic purposes.
Brainlab Elements BOLD MRI Mapping is indicated for planning of cranial surgical treatments.
The Brainlab Elements are applications and background services for processing of medical images including functionalities such as data transfer, image co-registration, image segmentation, contouring and other image processing.
They consist of the following software applications:
- Image Fusion 5.0
- Image Fusion Angio 1.0
- Contouring 5.0
- BOLD MRI Mapping 1.0
- Fibertracking 3.0
This device is a successor of the Predicate Device Brainlab Elements 6.0 (K223106).
Brainlab Elements Image Fusion is an application for the co-registration of image data within medical procedures by using rigid and deformable registration methods.
Brainlab Elements Image Fusion Angio is a software application that is intended to be used for the co-registration of cerebrovascular image data. It allows co-registration of 2D digital subtraction angiography images to 3D vascular images in order to combine flow and location information. In particular, 2D DSA (digital subtraction angiography) sequences can be fused to MRA, CTA and 3D DSA sequences.
Brainlab Elements Contouring provides an interface with tools and views to outline, refine, combine and manipulate structures in patient image data. The output is saved as 3D DICOM segmentation object and can be used for further processing and treatment planning.
BOLD MRI Mapping provides methods to analyze task-based (block-design) functional magnet resonance images (fMRI). It provides a user interface with tools and views in order to visualize activation maps and generate 3D objects that can be used for further treatment planning.
Brainlab Elements Fibertracking is an application for the processing and visualization of information based upon Diffusion Weighted Imaging (DWI) data, i.e. to calculate and visualize cranial white matter tracts in selected regions of interest, which can be used for treatment planning procedures.
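To make the co-registration concept concrete, here is a minimal rigid-registration sketch using the open-source SimpleITK library; the file names are hypothetical, and Brainlab's actual rigid and deformable algorithms are proprietary and not described in the submission:

```python
import SimpleITK as sitk

fixed = sitk.ReadImage("planning_ct.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("preop_mr.nii.gz", sitk.sitkFloat32)

# Initialize a rigid (6-DOF) transform at the images' geometric centers.
initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)  # multimodal metric
reg.SetOptimizerAsRegularStepGradientDescent(
    learningRate=1.0, minStep=1e-4, numberOfIterations=200)
reg.SetInitialTransform(initial, inPlace=False)
reg.SetInterpolator(sitk.sitkLinear)

transform = reg.Execute(fixed, moving)
# Resample the moving image into the fixed image's space for fused display.
aligned = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
```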
The provided text is a 510(k) clearance letter and its summary for Brainlab Elements 7.0. It details various components of the software, their indications for use, device descriptions, and comparisons to predicate devices. Crucially, it includes a "Performance Data" section with information on AI/ML performance tests for the Contouring 5.0 module, specifically for "Elements AI Tumor Segmentation."
Here's a breakdown of the acceptance criteria and the study that proves the device meets them, focusing on the AI/ML component discussed:
Acceptance Criteria and Device Performance (Elements AI Tumor Segmentation, Contouring 5.0)
1. Table of Acceptance Criteria and Reported Device Performance
Metric | Acceptance Criteria (Lower Bound of 95% Confidence Interval) | Reported Device Performance (Mean) |
---|---|---|
Dice Similarity Coefficient (Dice) | ≥ 0.7 | 0.75 |
Precision | ≥ 0.8 | 0.86 |
Recall | ≥ 0.8 | 0.85 |
Sub-stratified Performance:
Diagnostic Characteristics | Mean Dice | Mean Precision | Mean Recall |
---|---|---|---|
All | 0.75 | 0.86 | 0.85 |
Metastases to the CNS | 0.74 | 0.85 | 0.84 |
Meningiomas | 0.76 | 0.89 | 0.90 |
Cranial and paraspinal nerve tumors | 0.89 | 0.97 | 0.97 |
Gliomas and glio-/neuronal tumors | 0.81 | 0.95 | 0.85 |
It's important to note that the acceptance criteria are stated for the lower bound of the 95% confidence intervals, while the reported device performance is presented as the mean values. The text explicitly states, "Successful validation has been completed based on images containing up to 30 cranial metastases, each showing a diameter of at least 3 mm, and images with primary cranial tumors that are at least 10 mm in diameter (for meningioma, cranial/paraspinal nerve tumors, gliomas, glioneuronal and neuronal tumors)." This implies that the lower bounds of the 95% confidence intervals for the reported mean values also met or exceeded the criteria.
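Because the criteria apply to the lower bound of a 95% confidence interval rather than to the mean, a validation script must compute that bound explicitly. A minimal sketch assuming a t-based interval over per-case Dice scores (the submission does not state how the intervals were actually computed):

```python
import numpy as np
from scipy import stats

def ci_lower_bound(scores: np.ndarray, level: float = 0.95) -> float:
    """Lower bound of the two-sided t-based CI for the mean of the scores."""
    margin = stats.sem(scores) * stats.t.ppf((1 + level) / 2, df=len(scores) - 1)
    return scores.mean() - margin

# Synthetic per-case Dice scores for illustration only.
dice_scores = np.random.default_rng(0).normal(0.75, 0.08, 200).clip(0, 1)
assert ci_lower_bound(dice_scores) >= 0.7  # the bound, not the mean, must pass
```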
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: 412 patients (595 scans, 1878 annotations).
- Data Provenance: Retrospective image data sets from multiple clinical sites in the US and Europe. The data contained a homogeneous distribution by gender and a diversity of ethnic groups (White/Black/Latino/Asian). Most data were from patients who underwent stereotactic radiosurgery with diverse MR protocols (mainly 1.5T/3T MRI scans acquired in axial scan orientation). One quarter of the test pool corresponded to data from three independent sites in the USA.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
- Number of Experts: Not explicitly stated as a specific number. The text refers to an "external/independent annotator team."
- Qualifications of Experts: The annotator team included US radiologists and non US radiologists. No further details on their experience (e.g., years of experience) are provided.
4. Adjudication Method for the Test Set
- The text states that the ground truth segmentations, "the so-called annotations," were established by an external/independent annotator team following a "well-defined data curation process." However, the specific adjudication method (e.g., 2+1, 3+1 consensus, or independent review) among these annotators is not detailed in the provided text.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
- No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study evaluating human readers' improvement with AI assistance vs. without AI assistance was not discussed or presented in the provided text. The performance data is for the AI algorithm in a standalone manner.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done
- Yes, a standalone performance evaluation of the "Elements AI Tumor Segmentation" algorithm was performed. The study describes the algorithm's quantitative validation by comparing its automatically-created segmentations directly to "ground-truth annotations." This indicates an algorithm-only performance assessment.
7. The Type of Ground Truth Used
- The ground truth used was expert consensus (or at least expert-generated) segmentations/annotations. The text explicitly states, "The validation was conducted quantitatively by comparing the (manual) ground-truth segmentations, the so-called annotations with the respective automatically-created segmentations. The annotations involved external/independent annotator team including US radiologists and non US radiologists."
8. The Sample Size for the Training Set
- The sample size for the training set is not specified in the provided text. The numbers given (412 patients, 595 scans, 1878 annotations) pertain to the test set used for validation.
9. How the Ground Truth for the Training Set was Established
- The method for establishing ground truth for the training set is not detailed in the provided text. The description of ground truth establishment (expert annotations) is specifically mentioned for the test set used for validation. However, it's highly probable that a similar expert annotation process was used for the training data given the nature of the validation.
Mixed Reality Spine Navigation (Brainlab AG, 261 days)
Mixed Reality Spine Navigation is an accessory to the Spine & Trauma Navigation Medical Device. The software is intended to display 2D navigation screens as well as a floating 3D virtual representation and stereotactic information of tracked instruments on a virtual display using a Mixed Reality headset. The virtual display should not be relied upon solely for absolute positional information and should always be used in conjunction with the displayed stereotaxic information.
The Subject device Mixed Reality Spine Navigation is an accessory to the Spine & Trauma Navigation System. It consists of the software Mixed Reality Spine Navigation 1.0 and the mixed reality headset Magic Leap 2 Medtech (ML2). The software application allows the display in the mixed reality headset of 2D navigation views and a 3D hovering model of the patient anatomy, including stereotactic information of tracked instruments to support the surgeon during pedicle screw placement procedures.
The software Mixed Reality Spine Navigation is installed and running on the Magic Leap 2 MedTech glasses and can only be used in combination with a Brainlab Image Guided Surgery (IGS) platform (Curve Navigation 17700, Kick 2 Navigation Station or Buzz Navigation Ceiling-Mounted), where the Spine & Trauma Navigation software is running. All required navigation data, such as the patient registration, is transferred over Wi-Fi from the IGS platform to the Magic Leap 2 MedTech. Based on these data, 2D views and a 3D model are rendered by the computing unit of the Magic Leap 2 MedTech and displayed stereoscopically in the headset. Thus, navigation information displayed on the screen(s) of the IGS platform can be simultaneously displayed on Magic Leap 2 MedTech during the surgery.
Magic Leap 2 MedTech is an optical see-through head-mounted display: images are projected on semi-transparent optical layers, giving the surgeons the possibility to have virtual content displayed around the patient while having a view of the real world. If needed, corrective lenses can be attached to the headset.
The provided FDA 510(k) clearance letter and summary for the "Mixed Reality Spine Navigation" device indicates that no clinical testing was required. This suggests that the substantial equivalence determination was primarily based on non-clinical performance data, comparisons to predicate devices, and the established safety and effectiveness of the underlying technology.
Therefore, the request for specific details about "acceptance criteria and the study that proves the device meets the acceptance criteria" in the context of clinical performance, ground truth establishment, expert adjudication, sample sizes for test/training sets, and MRMC studies, cannot be fully answered using the provided text. The document explicitly states: "No clinical testing was required for the subject device."
However, based on the information provided, we can infer some "acceptance criteria" related to device performance as compared to the predicate, and how the device implicitly meets them through bench testing and verification/validation activities.
Here's an attempt to answer your questions based solely on the provided document, highlighting where information is absent due to the lack of clinical testing:
Acceptance Criteria and Device Performance for Mixed Reality Spine Navigation
The substantial equivalence of the Mixed Reality Spine Navigation device was established primarily through non-clinical testing and comparison to predicate devices, as no clinical testing was required. Therefore, the "acceptance criteria" are implicitly defined by the equivalence to the predicate device's performance characteristics, including accuracy and the successful execution of its intended function in a simulated environment.
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria Category/Feature (Implicit from Predicate Comparison) | Specific Criterion (Based on Predicate) | Reported Device Performance (from Bench Testing/Verification) |
---|---|---|
Navigation Accuracy | Mean positional error of the placed instrument's tip ≤ 2 mm; mean angular error of the placed instrument's axis ≤ 2° | Mean positional error of the placed instrument's tip ≤ 2 mm; mean angular error of the placed instrument's axis ≤ 2° |
Functional Equivalence | Display 2D navigation screens, a floating 3D virtual representation, and stereotactic information of tracked instruments | All views displayed on the predicate's IGS platform screens can also be displayed; a floating 3D model of the patient anatomy is available on the ML2 MedTech; stereotactic information of tracked instruments is displayed |
Usability/Workflow Compatibility | Support pedicle screw placement workflows | Usability testing validated the pedicle screw placement workflow in a simulated OR environment |
Network Performance | Work as intended with minimum bandwidth requirements | Network performance benchmarking performed to ensure the system works as intended |
Safety and EMC | Compliance with IEC 60601-1 Ed. 3.2 and IEC 60601-1-2 Ed. 4.1 | Testing provided according to IEC 60601-1 Ed. 3.2 (2020-08) and IEC 60601-1-2 Ed. 4.1 (2020-09), consolidated versions |
Biocompatibility | Compliance with ISO 10993-1:2018 | Evaluation carried out in accordance with ISO 10993-1:2018 |
Cleaning Validation | Compliance with FDA guidance | Cleaning validation performed in accordance with FDA guidance |
Optical Properties | Compliance with IEC 63145 standards | Testing provided in accordance with IEC 63145-20-10:2019, IEC 63145-20-20:2019, and IEC 63145-22-10:2020 |
Software Verification & Validation | Compliance with FDA guidance | Successful implementation of product specifications; incremental testing, risk control measures, compatibility, and cybersecurity verified; documentation provided at the Enhanced Documentation level |
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: Not explicitly stated in terms of patient data or clinical cases, as no clinical testing was required. Tests were primarily bench tests and usability tests in a simulated environment.
- Data Provenance: The document does not specify data provenance (country of origin, retrospective/prospective) because clinical data was not used for the clearance. Bench test data and simulated environment data would be internal to the manufacturer's testing labs.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- Not Applicable / Not Specified: For the required non-clinical testing, "ground truth" would be established by validated measurement systems for accuracy tests (e.g., optical tracking systems for positional/angular accuracy) and adherence to engineering specifications. Usability testing would involve trained personnel simulating surgical use, but the document doesn't specify a number of "experts" to establish ground truth in the sense of clinical reference.
4. Adjudication Method for the Test Set
- Not Applicable: Given the nature of bench testing and simulated environment usability tests, no multi-expert adjudication method (like 2+1, 3+1) would be necessary as it's not a diagnostic AI evaluating medical images.
5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study was done
- No: The document explicitly states, "No clinical testing was required for the subject device." Therefore, no MRMC study was conducted.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done
- Partial/Indirect: The device is an accessory to a navigation system and displays information to a human user. Its core function relies on the underlying Spine & Trauma Navigation software. The "Navigation accuracy" metrics (positional and angular error) would reflect the algorithmic performance in conjunction with the optical tracking system in a controlled, non-human-in-the-loop setting for those specific measurements. However, the overall device function is human-in-the-loop as it's a navigation display.
7. The Type of Ground Truth Used
- Engineering Specifications and Simulated Performance Metrics: For accuracy, the ground truth would be based on precise measurement systems in a controlled environment to verify the device's ability to display instrument position and orientation within specified tolerances. For usability, the ground truth would be the successful completion of simulated surgical workflows as defined by the manufacturer's design specifications. No "expert consensus," "pathology," or "outcomes data" was used as ground truth for this device clearance.
8. The Sample Size for the Training Set
- Not Applicable: The document does not describe the use of any AI or machine learning models that would require a "training set" in the traditional sense of medical image analysis AI. The device is a mixed reality display system for an existing navigation platform.
9. How the Ground Truth for the Training Set was Established
- Not Applicable: As there's no mention of a "training set" for an AI model, this question is not relevant to the information provided.
In summary: The FDA clearance for "Mixed Reality Spine Navigation" was based on a combination of demonstrating substantial equivalence to a predicate device, comprehensive software verification and validation, hardware verification including safety and biocompatibility, and bench tests validating usability and network performance in a simulated environment. The absence of clinical testing means that acceptance criteria related to clinical efficacy studies, such as those involving human readers, expert ground truth, and patient outcome data, were not part of this specific 510(k) submission.
Alignment System Cranial (Brainlab AG, 60 days)
Alignment System Cranial is intended to plan and to achieve a trajectory with surgical instruments during cranial stereotactic procedures.
The indications for use are biopsy of intracranial lesions, placement of stereoelectroencephalography (SEEG) electrodes and placement of anchor bolts for laser interstitial thermal therapy (LITT).
The subject device Alignment System Cranial is an image guided surgery system intended to support the surgeon to plan and to achieve a trajectory with surgical instruments during cranial stereotactic procedures using optical tracking technology.
For this purpose, the Alignment System Cranial consists of a combination of hardware and software. The Alignment Software Cranial with LITT 2.1 is installed on an Image Guided Surgery (IGS) platform (Curve, Curve Navigation 17700, Kick 2 Navigation Station or Buzz Navigation) consisting of a computer unit, a touch display and an infrared tracking camera. During surgery, the software tracks the position of instruments in relation to the patient anatomy and identifies this position on pre- or intraoperative images. The position of the surgical instruments is continuously updated on these images by optical tracking. This position information is used by the software to align either passive or active positioning devices to a planned trajectory for subsequent surgical steps.
The Alignment System Cranial has different configurations of hardware devices depending on which positioning device is used and which indication is performed. The Alignment Software Cranial with LITT 2.1 supports the active positioning devices Surgical Base System 1.4 and Cirq Arm System 2.0 (+ Cirq Robotic Alignment Module + Cirq Robotic Disposable Kinematic Unit) as well as the passive positioning device VarioGuide. Both types of positioning devices consist of articulated arms with different joints where additional devices and surgical instruments can be attached to for further manual or robotic alignment to a defined trajectory.
In addition, the subject device offers a set of indication specific instruments to support biopsy, sEEG and LITT procedures. This instrumentation consists of instrument holders, tracking arrays, guide tubes, reduction tube, bone anchors, drill bits and depth stops. None of the instruments is delivered sterile. All patient contacting materials consist of different alloys of stainless steel.
The Alignment Software Cranial with LITT has the following accessories:
- Automatic Registration providing an automatic registration for subsequent use.
- Automatic Registration iMRI providing an automatic image registration for intraoperatively acquired MR images.
The provided text is a 510(k) summary for the "Alignment System Cranial," which includes "Alignment Software Cranial with LITT." It details the device's indications for use, description, and comparison to predicate devices, along with performance data to demonstrate substantial equivalence.
Here's an analysis of the acceptance criteria and the study that proves the device meets them, based on the provided text:
1. A table of acceptance criteria and the reported device performance
Acceptance Criteria | Reported Device Performance | Meets Criteria? |
---|---|---|
Mean Positional Error (instrument tip) ≤ 2 mm | Mean Positional Error: 1.19 mm | Yes |
Mean Angular Error (instrument axis) ≤ 2° | Mean Angular Error: 0.86° | Yes |
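Both error metrics are plain geometric quantities. A minimal sketch of how they might be computed from navigated versus reference poses (illustrative only; the metrology setup behind the reported numbers is not described):

```python
import numpy as np

def tip_error_mm(measured_tip: np.ndarray, true_tip: np.ndarray) -> float:
    """Euclidean distance between navigated and reference tip positions (mm)."""
    return float(np.linalg.norm(measured_tip - true_tip))

def axis_error_deg(measured_axis: np.ndarray, true_axis: np.ndarray) -> float:
    """Angle between navigated and reference instrument axes (degrees)."""
    u = measured_axis / np.linalg.norm(measured_axis)
    v = true_axis / np.linalg.norm(true_axis)
    return float(np.degrees(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))))

# Hypothetical single measurement against the <= 2 mm / <= 2 degree criteria.
print(tip_error_mm(np.array([10.2, 5.1, 3.0]), np.array([10.0, 4.5, 2.0])))
print(axis_error_deg(np.array([0.0, 0.02, 1.0]), np.array([0.0, 0.0, 1.0])))
```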
2. Sample size used for the test set and the data provenance
- Sample Size for Test Set:
- Number of registrations: 6
- Total number of samples: 37 (This likely refers to individual measurements taken over the 6 registrations)
- Data Provenance: The document does not explicitly state the country of origin or whether the data was retrospective or prospective. However, the study was conducted as "System accuracy testing" to evaluate the device in "a realistic clinical setup and representative worst case scenarios," suggesting it was a controlled, prospective study performed by the manufacturer.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
The document does not provide information on the number of experts or their qualifications used to establish ground truth for this system accuracy testing. This type of testing typically relies on metrology standards and physical measurements rather than clinical expert consensus for ground truth.
4. Adjudication method for the test set
The document does not specify an adjudication method. For system accuracy testing based on physical measurements, an adjudication process involving human experts is generally not applicable in the same way it would be for image-reading or diagnostic AI.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and its effect size
No, a multi-reader multi-case (MRMC) comparative effectiveness study was not done. The performance data presented is for the system accuracy of the device concerning its ability to align instruments, not for a diagnostic AI algorithm that human readers would interact with. The document explicitly states: "No clinical testing was needed for the Subject Device since optical tracking technology in the scope of image guided surgery for the included indications for use is well established in the market."
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done
The performance study was for the system's accuracy (hardware + software components), which is a key aspect of its standalone functionality in terms of guiding instruments. While it's not an "algorithm only" study in the sense of a pure AI diagnostic tool, it measures the precision of the device's output without direct human interpretation in the loop of the measurement itself. The "Automatic Registration" features (including one for iMRI) are mentioned as accessories, implying an algorithmic component, but specific performance criteria for these AI/ML-based features are not detailed beyond the general system accuracy. The document mentions that "There have been no changes to the AI/ML algorithm" for surface matching for patient registration, implying its prior validation.
7. The type of ground truth used
The ground truth for the system accuracy testing ("positional and angular navigation accuracy") would have been established through precise physical measurements using calibrated instruments and metrological standards (e.g., a coordinate measuring machine or similar setup to establish a true target position against which the device's reported position is compared). It is not based on expert consensus, pathology, or outcomes data, as this is a measurement of mechanical and software precision.
8. The sample size for the training set
The document does not provide information on the sample size for the training set for any embedded AI/ML components (e.g., the AI/ML based model for landmark delivery in surface matching). The focus of this 510(k) summary is on the system accuracy for the LITT indication, and asserts "no changes to the AI/ML algorithm" for patient registration.
9. How the ground truth for the training set was established
The document does not provide information on how the ground truth for any training set was established for embedded AI/ML components. It only mentions that an existing AI/ML algorithm for surface matching landmarks has not changed.
ExacTrac Dynamic 2.0 / ExacTrac Dynamic Surface (Brainlab AG, 161 days)
ExacTrac Dynamic is intended to position patients at an accurately defined point within the treatment beam of a medical accelerator for stereotactic radiosurgery or radiotherapy procedures, to monitor the patient position and to provide a beam hold signal in case of a deviation in order to treat lesions, tumors and conditions anywhere in the body when radiation treatment is indicated.
ExacTrac Dynamic (ETD) is a patient positioning and monitoring device used in a radiotherapy environment as an add-on system to standard linear accelerators (linacs). It uses radiotherapy treatment plans and the associated computed tomography (CT) data to determine the patient's planned position and compares it via oblique X-ray images to the actual patient position. The calculated correction shift will then be transferred to the treatment machine to align the patient correctly at the machine's treatment position. During treatment, the patient is monitored with a thermal-surface camera and X-ray imaging to ensure that there is no misalignment due to patient movement. Positioning and monitoring are also possible in combination with implanted markers. By defining the marker positions, ExacTrac Dynamic can position the patient by using X-rays and thereafter monitor the position during treatment.
Additionally, ExacTrac Dynamic features a breath-hold (BH) functionality to serve as a tool to assist respiratory motion management. This functionality includes special features and workflows to correctly position the patient at a BH level and thereafter monitor this position using surface tracking. Regardless of the treatment indication, a correlation between the patient's surface and internal anatomy must be evaluated with Image-Guided Radiation Therapy. The manually acquired X-ray images support a visual inspection of organs at risk (OARs). The aim of this technique is to treat the patient only during breath-hold phases in which the treatment target is at a certain position, to reduce respiratory-induced tumor motion and to ensure a certain planned distance to OARs such as the heart. In addition to the X-ray based positioning technique, the system can also monitor the patient after external devices such as cone-beam CT (CBCT) have been used to position the patient.
The ExacTrac Dynamic Surface (ETDS) is a camera-only platform without the X-ray system and is available as a configuration which enables surface-based patient monitoring. This system includes an identical thermal-surface camera, workstation, and interconnection hardware to the linac as the ETD system. The workflows supported by ETDS are surface based only and must be combined with an external IGRT device (e.g., CBCT).
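The beam-hold behavior described above reduces to a tolerance check on the monitored deviation. A minimal sketch of that decision logic, with hypothetical tolerance values that are not ExacTrac's actual thresholds:

```python
from dataclasses import dataclass

@dataclass
class Tolerance:
    translation_mm: float  # hypothetical positional tolerance
    rotation_deg: float    # hypothetical rotational tolerance

def beam_hold_required(dev_mm: float, dev_deg: float, tol: Tolerance) -> bool:
    """Signal a beam hold when the monitored deviation exceeds tolerance."""
    return dev_mm > tol.translation_mm or dev_deg > tol.rotation_deg

# Example: a 1.2 mm deviation exceeds a 0.7 mm tolerance, so the beam is held.
print(beam_hold_required(dev_mm=1.2, dev_deg=0.1,
                         tol=Tolerance(translation_mm=0.7, rotation_deg=0.5)))
```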
The FDA 510(k) summary for Brainlab AG's ExacTrac Dynamic (2.0) and ExacTrac Dynamic Surface provides information regarding its performance testing to demonstrate substantial equivalence to its predicate device, ExacTrac Dynamic 1.1 (K220338).
Here is a breakdown of the requested information based on the provided document:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state a table of "acceptance criteria" side-by-side with "reported device performance" values for all aspects of the device. However, it does reference the AAPM Task Group 147 guidelines as a reference for the accuracy of surface-based monitoring and indicates that specific tests aimed to verify that accuracy specifications were not negatively affected. From the "Bench Tests" section, we can infer the objectives of the tests and how performance was evaluated.
Feature/Test | Acceptance Criteria (Inferred/Referenced) | Reported Device Performance |
---|---|---|
Rigid Body Surface Monitoring Accuracy Test | Feasibility of surface-based patient monitoring for radiotherapy and adherence to AAPM Task Group 147 guidelines for non-radiographic radiotherapy localization and positioning systems. | The accuracy of the surface-based monitoring functionality was "checked using this new camera revision and an in-house phantom" and the goal was to "prove the feasibility". The document implies successful demonstration of feasibility and adherence. |
Workflow & Accuracy Test ExacTrac Dynamic | Accuracy specifications for patient positioning and monitoring at phantom treatment with ExacTrac Dynamic are not affected by relevant conditions, settings, and workflows. | The test was conducted to "verify that accuracy specifications... are not affected". The conclusion of substantial equivalence implies these specifications were met. |
Response Time Measurement | Implicit: To measure the time between phantom movement and the "Beam-off" signal, and the "out of tolerance" signal appearance. No explicit numerical threshold is given in the provided text. | The test "measures the time" and "is tracked." The conclusion of substantial equivalence implies acceptable response times. |
Verification of the Radiation Isocenter Calibration in ETD 2.0 | Not inferior to the previous, well-established Radiation Isocenter Calibration in ETD 1.1 by more than a given threshold. | The test was intended to "demonstrate that the Radiation Isocenter Calibration in ETD 2.0 is not inferior" within the specified threshold. The conclusion of substantial equivalence implies this was demonstrated. |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: The document mentions the use of an "in-house phantom" for the Rigid Body Surface Monitoring Accuracy Test and "phantom treatment" for the Workflow & Accuracy Test. It does not provide specific numerical sample sizes (e.g., number of phantom instances, number of trials).
- Data Provenance: All testing appears to be retrospective (bench tests, phantom studies) and conducted internally (in-house phantom). There is no mention of data from human subjects or specific countries of origin.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
This information is not provided in the document. The testing described focuses on device performance against physical standards (e.g., phantom, previous system performance) and referenced guidelines (AAPM Task Group 147), rather than expert-established ground truth on clinical images.
4. Adjudication Method for the Test Set
This information is not provided in the document. Given the nature of the bench and phantom tests, an adjudication method by experts is not described as it would be for clinical image interpretation studies.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No, an MRMC comparative effectiveness study was not reported. The document states, "No clinical testing was required for the subject device." The testing described focuses on the device's physical performance, accuracy, and workflow.
6. Standalone Performance (Algorithm Only without Human-in-the-Loop)
The described tests, specifically the "Rigid Body Surface Monitoring Accuracy Test," "Workflow & Accuracy Test ExacTrac Dynamic," "Response Time Measurement," and "Verification of the Radiation Isocenter Calibration," appear to evaluate the device's inherent performance characteristics, often in a controlled phantom environment. This implies a focus on the standalone capabilities of the system, even though its ultimate use is in assisting human operators in a radiotherapy setting. The "Beam-off signal" in response to movement is an automated system response, indicating standalone algorithmic functioning.
7. Type of Ground Truth Used
The ground truth for the performance tests appears to be:
- Physical Phantoms: An "in-house phantom" for surface monitoring accuracy and "phantom treatment" for workflow and accuracy.
- Established Reference System Performance: Comparison to the "previous, well-established Radiation Isocenter Calibration in ETD 1.1."
- Industry Guidelines: Reference to the quality assurance guidelines for non-radiographic radiotherapy localization and positioning systems defined by AAPM Task Group 147.
- Expected System Behavior: Verification of expected responses (e.g., "Beam-off signal") to phantom movement.
8. Sample Size for the Training Set
This information is not provided in the document. The 510(k) summary focuses on verification and validation testing, not the development or training of specific algorithms that would require a "training set" in the context of machine learning.
9. How the Ground Truth for the Training Set was Established
As no training set is mentioned (see point 8), this information is not applicable/provided.
(70 days)
Brainlab AG
Spine & Trauma Navigation is intended as an intraoperative image-guided localization system to enable open and minimally invasive surgery. It links a freehand probe, tracked by a passive marker sensor system to virtual computer image space on a patient's preoperative or intraoperative 2D or 3D image data.
Spine & Trauma Navigation enables computer-assisted navigation of medical image data, which can either be acquired preoperatively or intraoperatively by an appropriate image acquisition system. The software offers screw and interbody device planning and navigation with surgical instruments.
The system is indicated for any medical condition in which the use of stereotactic surgery may be appropriate and where a reference to a rigid anatomical structure, such as the skull, the pelvis, a long bone or vertebra can be identified relative to the acquired image (CT, MR, 3D fluoroscopic image reconstruction or 2D fluoroscopic image) and/or an image databased model of the anatomy.
The Spine & Trauma Navigation is an image-guided surgery system for navigated treatments in the fields of spine and trauma surgery, in which the user may use image data based on CT, MR, 3D fluoroscopic image reconstruction (cone beam CT) or 2D fluoroscopic images. It offers different patient image registration methods and instrument calibrations to allow surgical navigation using optical tracking technology. To fulfil this purpose, it consists of software, Image Guided Surgery platforms and surgical instruments.
Modified Drill Guides and Drill Bits have been introduced as part of the Subject Device. The Drill Guide instruments are navigated instruments which support the surgeon in guiding drill bits and K-wires during spinal procedures. They consist of a guide tube, a trocar insert (both available in five different diameters), a body with two available handles, an array and a depth control (available in two different sizes for various drilling depths). The Drill Guide Tubes and Drill Guide Trocar Inserts have patient contact. All instruments are delivered unsterile and require end user sterilization.
The Drill Bits are used for drilling of bone. They are made of stainless steel and are delivered non-sterile. They require steam sterilization onsite before use. There are several variants in terms of diameter, length, and presence of a depth stop feature.
The provided text describes the regulatory clearance for the Brainlab AG Drill Guide, Drill Bit, and Spine & Trauma Navigation system. Here's a summary of the acceptance criteria and the study details:
1. Table of Acceptance Criteria and Reported Device Performance:
Feature | Acceptance Criteria (Subject Device) | Reported Device Performance |
---|---|---|
Navigation Accuracy | Mean Positional Error of the placed instrument's tip ≤ 2 mm | Accuracy testing: Tip position deviation equal to or below 1.7 mm (95th percentile). |
 | Mean Angular Error of the placed instrument's axis ≤ 2° | Accuracy testing: Angular deviation equal to or below 1.7° (95th percentile). |
Assembly Stability | Withstand unintended loads without losing accuracy or function. | The Drill Guide was able to withstand forces without losing accuracy or function. |
Lifecycle Assessment | Maintain accuracy and label readability throughout product lifetime. | Accuracy as well as label readability requirements were met. |
Skiving | Provide a stable hold. | The modified Drill Guide provides a more stable hold with the improved teeth design compared to the predicate Guide Tubes. |
Handling and Interface | Acceptance criteria met for depth control and array attachment. | Acceptance criteria were met in all cases. |
Usability | Safe and effective for use in defined scenarios. | The final design was proven safe and effective for use in the defined use scenarios. |
Mechanical Failure | Strong enough to withstand expected torques and possible bending. | The new worst case drill bit (diameter 2 mm) was tested under different scenarios in order to ensure it is strong enough to withstand the expected torques and possible bending under worst case conditions. |
Biocompatibility | Compliance with ISO 10993-1:2018. | Was evaluated according to ISO 10993-1:2018 "Biological evaluation of medical devices – Part 1: Evaluation and testing within a risk management process." |
Reprocessing Validation | Compliance with ISO 11737-1:2018, ANSI/AAMI ST98:2022, ISO 15883-5:2021-07. | Was evaluated according to ISO 11737-1:2018, ANSI/AAMI ST98:2022 and ISO 15883-5:2021-07. |
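As an aside, the two navigation accuracy metrics in the table are standard and straightforward to compute. The sketch below shows one plausible way to derive positional and angular deviations and summarize them as mean and 95th percentile; the sample data are synthetic, and only the ≤ 2 mm / ≤ 2° criteria come from the table.

```python
# Illustrative only: computing tip-position and axis-angle deviations and
# summarizing them as mean and 95th percentile. Sample data are synthetic.
import numpy as np

def tip_errors_mm(measured, truth):
    """Euclidean tip deviations, one per placement (N x 3 arrays)."""
    return np.linalg.norm(measured - truth, axis=1)

def axis_errors_deg(measured, truth):
    """Angle between measured and true instrument axes (N x 3 unit vectors)."""
    cos = np.clip(np.sum(measured * truth, axis=1), -1.0, 1.0)
    return np.degrees(np.arccos(cos))

rng = np.random.default_rng(0)
truth_pts = rng.uniform(-50, 50, (100, 3))
tip = tip_errors_mm(truth_pts + rng.normal(0, 0.5, (100, 3)), truth_pts)

axes = rng.normal(size=(100, 3))
axes /= np.linalg.norm(axes, axis=1, keepdims=True)
noisy = axes + rng.normal(0, 0.01, (100, 3))
noisy /= np.linalg.norm(noisy, axis=1, keepdims=True)
ang = axis_errors_deg(noisy, axes)

print(f"tip error: mean {tip.mean():.2f} mm, p95 {np.percentile(tip, 95):.2f} mm (criterion <= 2 mm)")
print(f"axis error: mean {ang.mean():.2f} deg, p95 {np.percentile(ang, 95):.2f} deg (criterion <= 2 deg)")
```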
2. Sample Size Used for the Test Set and Data Provenance:
The document doesn't explicitly state sample sizes for all test sets.
- Usability Testing: 15 representative users.
- Other Testing (Accuracy, Assembly Stability, Lifecycle assessment, Skiving, Handling and interface analysis, Mechanical failure testing, Biocompatibility, Reprocessing validation): Sample sizes are not specified in the provided text.
- Data Provenance: Not specified, but generally, these types of performance tests are conducted in a controlled laboratory environment. The document does not mention the country of origin of data or whether it was retrospective or prospective, as clinical data was not required.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts:
- This information is not provided in the document. The performance tests described (e.g., accuracy, mechanical failure) are typically assessed against engineering specifications rather than expert consensus on medical images or diagnoses.
- For usability testing, the 15 "representative users" likely served as the evaluators, but their specific qualifications beyond being "representative" are not detailed.
4. Adjudication Method for the Test Set:
- An adjudication method (like 2+1, 3+1) is not applicable for the reported performance testing, as these are engineering and functional tests.
- For usability testing, the document states "The final design was proven safe and effective," implying an overall assessment rather than a specific multi-reader adjudication process.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs Without AI Assistance:
- No MRMC comparative effectiveness study was done. This device is a navigation system and surgical instruments, not an AI diagnostic or assistive tool that would typically be evaluated in an MRMC study comparing human reader performance with and without AI.
- The document explicitly states: "No clinical testing was required for the subject device."
6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done:
- The "Spine & Trauma Navigation" system is an image-guided surgery system that inherently involves a human surgeon in the loop. The performance tests (e.g., navigation accuracy) evaluate the system's ability to accurately guide instruments, which is a standalone function of the technology when used by a surgeon. It's not an "algorithm-only" performance in the sense of a fully automated diagnostic or interpretive AI.
7. The Type of Ground Truth Used:
- The ground truth for most of the performance tests (e.g., accuracy, mechanical failure, stability) is based on engineering specifications and metrology (precise measurements against known values).
- For biocompatibility and reprocessing, the ground truth is established by adherence to international standards (ISO 10993-1:2018, ISO 11737-1:2018, ANSI/AAMI ST98:2022, ISO 15883-5:2021-07).
- For usability, the ground truth is the satisfaction of predefined acceptance criteria by the representative users in a simulated scenario.
8. The Sample Size for the Training Set:
- This information is not applicable/not provided. The device is an image-guided surgery system and newly modified instruments, not a machine learning or AI model trained on data in the traditional sense that would have a "training set." The development of such physical devices and software systems involves rigorous engineering design, verification, and validation, but not typically a "training set" like that seen in deep learning applications.
9. How the Ground Truth for the Training Set Was Established:
- This information is not applicable, as there is no mention of a "training set" for an AI model.
(256 days)
Brainlab AG
The software displays medical images and data. It also includes functions for image review, image manipulation, basic measurements and 3D visualization.
Viewer is software for viewing DICOM data, such as native slices generated with medical imaging devices, axial, coronal and sagittal reconstructions, and data-specific volume rendered views (e.g., skin, vessels, bone). Viewer supports basic manipulation such as windowing, reconstructions or alignment, and it provides basic measurement functionality for distances and angles. Viewer is not intended for diagnosis or treatment planning. The Subject Device (Viewer) for which we are seeking clearance consists of the following software modules.
- Viewer 5.4 (General Viewing)
- Universal Atlas Performer 6.0
- Universal Atlas Transfer Performer 6.0
Universal Atlas Performer: Software for analyzing and processing medical image data with Universal Atlas to create different output results for further use by Brainlab applications.
Universal Atlas Transfer Performer: Software that provides medical image data autosegmentation information to Brainlab applications.
When installed on a server, Viewer can be used on mobile devices like tablets. No specific application or user interface is provided for mobile devices. In mixed reality, the data and the views are selected and opened via desktop PC. The views are then rendered on the connected stereoscopic head-mounted display. Multiple users in the same room can connect to the Viewer session and view/review the data (such as already saved surgical plans) on their mixed reality glasses.
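Since Viewer's basic manipulations include windowing (noted above), a short sketch may help readers unfamiliar with the term. This is a generic window/level transform using the open-source pydicom package, not code from the cleared product; the file path and window values are placeholders.

```python
# Generic window/level transform (not product code). Assumes pydicom is
# installed and the file carries CT-style RescaleSlope/RescaleIntercept tags.
import numpy as np
import pydicom

def apply_window(pixels: np.ndarray, center: float, width: float) -> np.ndarray:
    """Map stored values to 8-bit display grey levels via a linear window."""
    lo, hi = center - width / 2.0, center + width / 2.0
    scaled = (np.clip(pixels, lo, hi) - lo) / (hi - lo)
    return (scaled * 255).astype(np.uint8)

ds = pydicom.dcmread("slice.dcm")  # placeholder path
hu = ds.pixel_array * float(ds.RescaleSlope) + float(ds.RescaleIntercept)
display = apply_window(hu, center=40, width=400)  # common soft-tissue window
```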
The Brainlab AG Viewer (5.4) and associated products (Elements Viewer, Mixed Reality Viewer, Smart Layout, Elements Viewer Smart Layout) are a medical image management and processing system. The device displays medical images and data, and includes functions for image review, manipulation, basic measurements, and 3D visualization.
Here's an analysis of the acceptance criteria and supporting studies based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The provided text details various tests performed to ensure the device's performance, particularly for the mixed reality aspects and measurement accuracy. However, it does not state specific numerical acceptance criteria (e.g., "accuracy must be >95%") with corresponding reported performance values. Instead, it describes the types of tests conducted and states in general terms that they were successful or ensured certain qualities.
Test Category | Acceptance Criteria (Implied/General) | Reported Device Performance (General) |
---|---|---|
Software Verification & Validation | Successful implementation of product specifications, incremental testing for different release candidates, testing of risk control measures, compatibility testing, cybersecurity tests. | Documentation provided as recommended by FDA guidance. Successful implementation, testing of risk controls, compatibility, and cybersecurity acknowledged for an enhanced level. |
Ambient Light | Sufficient visualization in a variety of ambient lighting conditions with Magic Leap 2. | Test conducted to determine Magic Leap 2 display quality for sufficient visualization in a variety of ambient lighting conditions. (Implied successful) |
Hospital Environment | Compatibility with various hardware platforms and compatible software. | Test conducted to test compatibility of the Subject Device with various hardware platforms and compatible software. (Implied successful) |
Display Quality | Seamless integration of real and virtual content; maintenance of high visibility and image quality (optical transmittance, luminance non-uniformity, Michelson contrast). | Tests carried out to measure and compare optical transmittance, luminance non-uniformity, and Michelson contrast of the head-mounted display to ensure seamless integration of real and virtual content and maintenance of high visibility and image quality, both with and without segmented dimming. (Implied successful) |
Measurement Accuracy | Accurate 3D measurement placement using Mixed Reality user interface (Magic Leap control), comparable to mouse and touch input. | Tests performed to evaluate the accuracy of 3D measurement placement using a Mixed Reality user interface (Magic Leap control) in relation to mouse and touch as input methods. (Implied successful, and supports equivalence to predicate's measurement capabilities, with added 3D functionality in MR) |
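For readers wondering what such an input-method comparison might look like in practice, the sketch below is a hypothetical evaluation loop: the error model, sample counts, and per-method noise levels are invented, and only the idea of comparing Magic Leap control against mouse and touch input comes from the text.

```python
# Hypothetical comparison of 3D measurement-placement accuracy across input
# methods. All data and noise levels are fabricated for illustration.
import numpy as np

def placement_errors_mm(placed: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Distance of each placed 3D point from its intended target (N x 3)."""
    return np.linalg.norm(placed - target, axis=1)

rng = np.random.default_rng(1)
target = rng.uniform(-100, 100, (50, 3))
for method, sigma in [("mouse", 0.3), ("touch", 0.5), ("magic_leap", 0.6)]:
    placed = target + rng.normal(0, sigma, (50, 3))
    err = placement_errors_mm(placed, target)
    print(f"{method:10s} mean error {err.mean():.2f} mm, max {err.max():.2f} mm")
```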
2. Sample Size for the Test Set and Data Provenance
The document does not explicitly state the sample sizes used for any of the described tests (Ambient Light, Hospital Environment, Display Quality, Measurement Accuracy).
Regarding data provenance, the document does not specify the country of origin for any data, nor whether the data used in testing was retrospective or prospective.
3. Number of Experts and Qualifications for Ground Truth
The document does not provide information on the number of experts used to establish ground truth for any of the described tests, nor their qualifications.
4. Adjudication Method for the Test Set
The document does not describe any adjudication method (e.g., 2+1, 3+1, none) used for the test set.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
The document does not mention a multi-reader multi-case (MRMC) comparative effectiveness study. The study is focused on the device's technical performance and accuracy, not on human reader improvement with or without AI assistance.
6. Standalone (Algorithm Only) Performance Study
The document describes the "Viewer" as software for displaying and manipulating medical images. The tests described (e.g., display quality, measurement accuracy) assess the software's performance in its intended functions without a human-in-the-loop effect on the measured results, so they amount to standalone instrumental validation rather than an explicit "algorithm-only versus human" comparison. The Mixed Reality functionality requires a human operator, but its underlying software/hardware performance (e.g., accuracy of 3D measurement placement) was still evaluated in its own right.
7. Type of Ground Truth Used
The document does not explicitly state the type of ground truth used for any of the tests. For "Measurement accuracy test," it can be inferred that a known, precisely measured physical or digital standard would have been used as ground truth for comparison. For other tests like display quality or compatibility, the ground truth would be conformance to established technical specifications or standards for optical properties and functional compatibility, respectively.
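Under the inference above (a known, precisely measured standard as ground truth), distance and angle checks reduce to simple vector geometry, as the following sketch illustrates. The phantom geometry and expected values are invented for demonstration.

```python
# Illustrative check of distance and angle measurements against a known
# digital standard. The geometry below is invented, not from the summary.
import numpy as np

def distance_mm(p: np.ndarray, q: np.ndarray) -> float:
    """Euclidean distance between two 3D points."""
    return float(np.linalg.norm(p - q))

def angle_deg(a: np.ndarray, vertex: np.ndarray, b: np.ndarray) -> float:
    """Angle at `vertex` spanned by points `a` and `b`."""
    u, v = a - vertex, b - vertex
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Known ground truth: two points 50 mm apart, and a 90 degree corner.
p, q = np.array([0.0, 0.0, 0.0]), np.array([50.0, 0.0, 0.0])
corner = np.array([0.0, 0.0, 0.0])
print(distance_mm(p, q))                                                  # 50.0
print(angle_deg(np.array([10.0, 0, 0]), corner, np.array([0, 10.0, 0])))  # 90.0
```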
8. Sample Size for the Training Set
The document does not provide any information regarding a training set sample size. This is consistent with the device being primarily a viewing, manipulation, and measurement tool rather than an AI/ML diagnostic algorithm that requires a "training set" in the conventional sense. The "Universal Atlas Performer" and "Universal Atlas Transfer Performer" modules do involve "analyzing and processing medical image data with Universal Atlas to create different output results" and "provides medical image data autosegmentation information," which might imply some form of algorithmic learning or rule-based processing. However, no details on training sets for these specific components are included.
9. How the Ground Truth for the Training Set Was Established
As no training set is mentioned or detailed, there is no information on how its ground truth might have been established.