Search Results
Found 3 results
510(k) Data Aggregation
(135 days)
Re: K243520
Trade/Device Name: Bullsai Confirm
Regulation Number: 21 CFR 882.5855
Classification: Brain stimulation programming planning software, 21 CFR 882.5855
Performance Data:
The subject device meets the special controls under 21 CFR 882.5855.
Bullsai Confirm provides functionality to assist medical professionals in planning the programming of stimulation for patients receiving approved Abbott deep brain stimulation (DBS) devices.
Bullsai Confirm is intended to assist medical professionals in planning the programming of deep brain stimulation (DBS) by visualizing the Volume of Tissue Activated (VTA) relative to patient anatomy. It is used to visualize patient-specific information within the patient's anatomy. Integrated magnetic resonance imaging (MRI) and computed tomography (CT) images are uploaded to Bullsai Confirm and can be navigated in multiple 2D projections and 3D reconstructions. Abbott DBS lead models are positioned at the corresponding imaging artifacts, and potential stimulation settings and electrode configurations are entered. Bullsai Confirm mathematically combines a finite element (FE)-based electric field model of the lead with an axon-based neural activation model to translate the entered stimulation settings and electrode configurations into a visualized VTA field indicating the shape and the area or volume of anatomy that will be activated by the stimulation. Results, including input image quality assessments, are shared in an output PDF report and visualized in a web-based software interface.
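To make the modeling pipeline concrete, here is a minimal toy sketch of the same idea: a point-source potential standing in for the FE field solution, a discrete activating function standing in for the axon model, and a threshold producing a VTA mask. The simplified physics and every constant are illustrative assumptions, not Abbott's implementation.

```python
import numpy as np

# Toy stand-in for the FE + axon-model pipeline described above. A point
# source in a homogeneous isotropic medium replaces the finite element
# solution; the discrete activating function replaces the axon model.
SIGMA = 0.2       # tissue conductivity (S/m), assumed
I_AMP = 3.0e-3    # stimulation amplitude (A), assumed
VOXEL = 0.5e-3    # voxel edge length (m), assumed
THRESH = 9.0e4    # activating-function threshold (V/m^2), tuned for this toy

# Voxel grid centered on a single active electrode contact at the origin.
ax = np.arange(-40, 41) * VOXEL
x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
r = np.sqrt(x**2 + y**2 + z**2)
r = np.maximum(r, VOXEL)                  # avoid the singularity at the contact

v = I_AMP / (4.0 * np.pi * SIGMA * r)     # extracellular potential (V)

# Discrete activating function along one fiber direction: the second
# difference of the potential, the classic proxy for axonal activation.
f = (np.roll(v, 1, axis=0) - 2.0 * v + np.roll(v, -1, axis=0)) / VOXEL**2
f[0, :, :] = f[-1, :, :] = 0.0            # discard wrap-around edge values

vta_mask = f > THRESH                     # voxels predicted to activate
vta_mm3 = vta_mask.sum() * (VOXEL * 1e3) ** 3
print(f"Estimated VTA: {vta_mm3:.1f} mm^3")
```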
Bullsai Confirm is used to do the following:
- Import DICOM images, including MRI and CT DICOM images, from a picture archiving and communication system (PACS)
- Import preoperative planning outputs (including tractography, structural ROIs, etc.) from AWS S3 Cloud Storage (a generic import sketch appears after the workflow list below)
- Combine MR images, CT images, and patient-specific 3D structures for more detail
- Localize graphical models of compatible DBS leads (based on preoperative imaging)
- Visualize VTA fields relative to structures of interest in the patient anatomy or lead position
The software provides a workflow for clinicians to:
- Create patient-specific stimulation plans for DBS programming
- Export reports that summarize stimulation plans for patients (PNG screenshot)
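As a concrete (and entirely hypothetical) illustration of the two import paths in the first list, the sketch below reads a DICOM series from a local export directory with pydicom and pulls a planning artifact from S3 with boto3. The directory layout, bucket, and key names are invented; this is not Bullsai Confirm's actual interface.

```python
from pathlib import Path

import boto3     # AWS SDK for Python
import pydicom   # DICOM parsing

def load_dicom_series(series_dir: str) -> list[pydicom.Dataset]:
    """Read every DICOM file in a directory exported from a PACS."""
    slices = [pydicom.dcmread(p) for p in sorted(Path(series_dir).glob("*.dcm"))]
    # Sorting by InstanceNumber is a simplification; production code would
    # order slices by ImagePositionPatient along the slice normal.
    return sorted(slices, key=lambda ds: int(ds.InstanceNumber))

def fetch_planning_output(bucket: str, key: str, dest: str) -> str:
    """Download a preoperative planning artifact (e.g., tractography) from S3."""
    s3 = boto3.client("s3")
    s3.download_file(bucket, key, dest)
    return dest

# Hypothetical usage; paths, bucket, and key are invented for illustration.
ct_slices = load_dicom_series("exports/pacs/ct_series")
mri_slices = load_dicom_series("exports/pacs/mri_series")
tracts = fetch_planning_output("planning-outputs", "patient-123/tracts.vtk", "tracts.vtk")
print(len(ct_slices), len(mri_slices), tracts)
```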
Here's a breakdown of the acceptance criteria and study details for Bullsai Confirm, based on the provided FDA 510(k) summary:
Device: Bullsai Confirm
Indication for Use: To assist medical professionals in planning the programming of stimulation for patients receiving approved Abbott deep brain stimulation (DBS) devices.
1. Table of Acceptance Criteria and Reported Device Performance
The provided document doesn't present a specific table of quantitative acceptance criteria with corresponding performance metrics like sensitivity, specificity, or accuracy for the Bullsai Confirm product itself. Instead, the acceptance criteria are described in terms of compliance with regulatory requirements and the fulfillment of specific software functionalities as special controls.
The "Performance Data" section states the device meets the special controls under 21 CFR 882.5855, which are:
| Acceptance Criteria (Special Controls, 21 CFR 882.5855) | Reported Device Performance |
| --- | --- |
| 1. Software verification, validation, and hazard analysis must be performed. | A hazard analysis and software verification and validation testing were performed for Bullsai Confirm. |
| 2. Usability assessment must demonstrate that the intended user(s) can safely and correctly use the device. | Bullsai Confirm underwent formative usability testing. |
| 3. Labeling must include: (a) the implanted brain stimulators with which the device is compatible; (b) instructions for use; (c) instructions and explanations of all user-interface components; and (d) a warning regarding use of the data with respect to not replacing clinical judgment. | The User Manual for Bullsai Confirm contains the labeling statements in accordance with the special controls (implying compliance with all sub-points a-d). |
Note: The document also mentions "technical performance evaluation of the lead artifact detection and registration between image types" but does not provide specific acceptance criteria or performance metrics (e.g., accuracy, precision) for these evaluations. This is a common practice in 510(k) submissions where specific quantitative performance for a planning software like this might not be required in the public summary if the primary claim is substantial equivalence and compliance with special controls.
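For context on what such evaluations usually measure, the sketch below computes two metrics common in this domain: Euclidean lead-localization error and target registration error (TRE) against reference annotations. The data and transform are hypothetical; the summary does not disclose how Abbott quantified these evaluations.

```python
import numpy as np

# Hypothetical reference (ground-truth) and algorithm-detected lead tip
# positions in millimetres for three test cases.
ref_tips = np.array([[10.2, -3.1, 47.5], [11.0, -2.8, 46.9], [9.7, -3.5, 48.1]])
det_tips = np.array([[10.5, -3.0, 47.2], [10.8, -3.1, 47.0], [9.9, -3.2, 47.8]])

loc_err = np.linalg.norm(det_tips - ref_tips, axis=1)   # per-case error (mm)
print(f"Lead localization error: mean {loc_err.mean():.2f} mm, "
      f"max {loc_err.max():.2f} mm")

# Target registration error (TRE): map landmarks through the estimated
# CT-to-MR transform and measure the residual distance to their known
# MR-space positions.
def tre(landmarks_ct, landmarks_mr, rotation, translation):
    mapped = landmarks_ct @ rotation.T + translation
    return np.linalg.norm(mapped - landmarks_mr, axis=1)

lm_ct = np.array([[25.0, 10.0, 30.0], [-12.0, 18.0, 44.0], [5.0, -20.0, 51.0]])
lm_mr = lm_ct + np.array([0.4, -0.3, 0.2])   # pretend true CT->MR offset
R = np.eye(3)                                # estimated rotation (placeholder)
t = np.array([0.3, -0.2, 0.1])               # estimated translation (mm)
print("TRE per landmark (mm):", np.round(tre(lm_ct, lm_mr, R, t), 2))
```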
Study Proving Device Meets Acceptance Criteria
The document outlines that the device's performance was evaluated through various tests to meet the special controls.
2. Sample Sizes Used for the Test Set and Data Provenance
The document does not explicitly state the number of cases or sample sizes used for the "formative usability testing" or the "technical performance evaluation of the lead artifact detection and registration between image types."
- Data Provenance: Not specified (e.g., country of origin, retrospective/prospective). While it mentions importing from PACS and AWS S3, this doesn't detail the origin of the data used for testing.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
The document does not specify the number of experts or their qualifications involved in establishing ground truth for any test sets. The nature of this software (DBS planning assistant) suggests that "ground truth" would likely relate to the accuracy of lead localization, VTA calculation, and anatomical registration, typically assessed by neurosurgeons or neurologists specializing in DBS.
4. Adjudication Method for the Test Set
The document does not describe any specific adjudication method (e.g., 2+1, 3+1 consensus) for establishing ground truth or evaluating device performance. This would typically be detailed if a reader study or performance validation against a consensus "gold standard" was performed and the results reported as part of the summary.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
There is no mention of an MRMC comparative effectiveness study being performed, nor any data on how much human readers improve with AI vs. without AI assistance. The device is described as an assistant for planning, implying a human-in-the-loop, but without a comparative study.
6. Standalone (Algorithm Only) Performance
The document does not provide standalone (algorithm only, without human-in-the-loop) performance metrics. The software is explicitly described as assisting "medical professionals," indicating it's designed for human-in-the-loop use.
7. Type of Ground Truth Used
The specific "type of ground truth" (e.g., expert consensus, pathology, outcomes data) is not explicitly stated for any of the performance evaluations mentioned. For "technical performance evaluation of the lead artifact detection and registration between image types," the ground truth would likely be derived from expert manual localization or highly accurate imaging methods, but this is not detailed.
8. Sample Size for the Training Set
The document provides no information regarding the size of the training set used for any machine learning components (if applicable) within the Bullsai Confirm software. Given the description focusing on finite element models and neural activation models, it's possible the core algorithms are physics-based rather than exclusively data-driven, or that details of data-driven components (if any) are not disclosed in this summary.
9. How Ground Truth for the Training Set Was Established
As no training set size is provided, there is consequently no information on how ground truth for a training set was established.
(124 days)
Re: K213930
Trade/Device Name: Brainlab Elements Guide XT, Guide 3.0
Regulation Number: 21 CFR 882.5855
Product Code: QQC
Brainlab Elements Guide XT provides functionality to assist medical professionals in planning the programming of stimulation for patients receiving approved Boston Scientific deep brain stimulation (DBS) devices.
Brainlab Elements Guide XT is a software application designed to support neurosurgeons and neurologists in Deep Brain Stimulation (DBS) treatments. It enables stimulation field simulation and visualization of the stimulation field model in the context of anatomical images. This functionality can be used to aid in lead parameter adjustment. Brainlab Elements Guide XT is used in combination with other applications that provide functionality for image fusion, segmentation, etc. Guide XT is compatible with selected Boston Scientific leads and implantable pulse generators but does not directly interact with the surgeon. The subject device Brainlab Elements Guide XT consists of multiple software components including the Guide XT 3.0 application that provides the core functionality for stimulation field model creation and visualization in the context of the patient anatomy.
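As an illustration of the core visualization idea, a stimulation-field extent rendered over an anatomical image, here is a minimal matplotlib sketch on synthetic data. It shows the general technique only, not Brainlab's rendering pipeline.

```python
import matplotlib.pyplot as plt
import numpy as np

# Synthetic stand-ins: an anatomical slice and a stimulation-field mask.
# Real software would take these from registered patient images and the
# simulated field model; shapes and values here are illustrative.
rng = np.random.default_rng(0)
anatomy = rng.normal(100, 20, size=(128, 128))          # fake MR slice
yy, xx = np.mgrid[0:128, 0:128]
field_mask = (xx - 70) ** 2 + (yy - 60) ** 2 < 15 ** 2  # fake field extent

plt.imshow(anatomy, cmap="gray")
# A masked array keeps the overlay transparent outside the simulated field.
plt.imshow(np.ma.masked_where(~field_mask, field_mask),
           cmap="autumn", alpha=0.5, interpolation="nearest")
plt.title("Simulated stimulation field over anatomical slice (toy data)")
plt.axis("off")
plt.show()
```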
The provided text does not contain detailed acceptance criteria and performance data for the Brainlab Elements Guide XT device. It broadly states that "the device was demonstrated to be as safe and effective as the predicate device" based on "verification and non-clinical tests."
However, I can extract the information that is present and indicate where specific details are missing based on your request.
Here's a breakdown of the available information:
1. A table of acceptance criteria and the reported device performance
The document does not provide a specific table of acceptance criteria with corresponding performance metrics. It only mentions general testing and verification without quantifiable results.
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
The document states: "Additionally, the performance test for the VTA (Volume of Tissue Activated) simulation was conducted supporting the use of a proprietary heuristic methodology for the prediction of threshold current (Ith) maps in the vicinity of implanted leads stimulated by the Deep Brain Stimulation (DBS) system from Boston Scientific Neuromodulation (BSN)."
- Sample Size for Test Set: Not explicitly stated.
- Data Provenance: Not specified (e.g., country of origin). The study is described as a "performance test for the VTA simulation," suggesting it's likely a retrospective analysis of existing data or a simulated environment, but not explicitly stated as retrospective or prospective human subject data. "Clinical data was leveraged from the existing literature sources to validate the indications and intended use of the device," but this refers to the validation of the indications, not the direct performance test of the device itself. The device "was not used for any clinical tests for the purposes of this 510(k) submission."
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)
This information is not provided. The document describes a "proprietary heuristic methodology" for VTA simulation, but it does not mention expert involvement in establishing ground truth for the performance test set.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
This information is not provided.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, if so, what was the effect size of how much human readers improve with AI vs without AI assistance
The document explicitly states: "The Subject Device was not used for any clinical tests for the purposes of this 510(k) submission." Therefore, an MRMC comparative effectiveness study was not conducted with human readers using the device. The device is a "planning software" to "assist medical professionals in planning," so human-in-the-loop performance would be relevant, but it was not tested in this submission.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done
Yes, a standalone performance test was done for the VTA simulation. The document states: "Additionally, the performance test for the VTA (Volume of Tissue Activated) simulation was conducted supporting the use of a proprietary heuristic methodology for the prediction of threshold current (Ith) maps in the vicinity of implanted leads stimulated by the Deep Brain Stimulation (DBS) system from Boston Scientific Neuromodulation (BSN)."
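The quoted description suggests a threshold-current formulation: for each location near the lead, the model predicts the minimum current Ith needed to activate tissue there, and a location counts as inside the stimulation field whenever the programmed amplitude reaches that threshold. A minimal numpy sketch of that relationship follows; the map values are made up, and the actual maps come from BSN's proprietary heuristic.

```python
import numpy as np

# Hypothetical precomputed threshold-current map: Ith (mA) needed to
# activate tissue at each voxel near the lead. Values are illustrative.
ith_map = np.array([
    [0.8, 1.2, 2.5],
    [1.0, 1.8, 3.9],
    [1.5, 2.7, 5.0],
])

amplitude_ma = 2.0                      # programmed stimulation amplitude
vta_mask = ith_map <= amplitude_ma      # activated wherever Ith is reached
print(vta_mask)
# [[ True  True False]
#  [ True  True False]
#  [ True False False]]
```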
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
The document says only that the VTA simulation performance test was conducted "supporting the use of a proprietary heuristic methodology." This implies the ground truth may have been established through a computational model or prior validated data from Boston Scientific rather than directly from expert consensus, pathology, or outcomes data from independent cases. The document does not definitively state the type of ground truth used for the VTA simulation performance evaluation.
8. The sample size for the training set
The document does not mention a training set or its sample size. The VTA simulation uses a "proprietary heuristic methodology," which might imply a model without explicit training data or a model trained externally, but no details are provided here.
9. How the ground truth for the training set was established
Since no training set is mentioned, this information is not provided.
(201 days)
NEW REGULATION NUMBER: 21 CFR 882.5855
CLASSIFICATION: Class II
PRODUCT CODE: QQC
BACKGROUND
Code: QQC
Device Type: Brain stimulation programming planning software
Class: II
Regulation: 21 CFR 882.5855
The SureTune4 Software is indicated to assist medical professionals in planning the programming of stimulation for patients receiving approved Medtronic deep brain stimulation (DBS) devices.
The SureTune4 Software is intended to assist medical professionals in planning the programming of deep brain stimulation (DBS) by visualizing the Volume of Neural Activation (VNA) relative to patient anatomy. It is used to visualize patient-specific information within the patient's anatomy. Integrated preoperative and postoperative magnetic resonance imaging (MRI), O-arm™, and computed tomography (CT) images are uploaded to SureTune4 and can be navigated in multiple 2D projections and 3D reconstructions. Medtronic DBS lead models are positioned at the corresponding imaging artifacts, and potential stimulation settings and electrode configurations are entered. The SureTune4 Software mathematically combines a finite element (FE)-based electric field model of the lead with an axon-based neural activation model to translate potential stimulation settings and electrode configurations into a visualized VNA field that indicates the shape and the area or volume of anatomy that will be activated by the stimulation.
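The FE-field-plus-axon-model combination described here (the same model class sketched in code for Bullsai Confirm above) is commonly formalized with the activating function from the DBS modeling literature. The formulation below is that standard version, offered as a plausible reading of the summary rather than Medtronic's disclosed mathematics: the FE model yields the extracellular potential $V_e$ for a given setting, and for an axon discretized at nodes spaced $\Delta x$ apart,

$$ f_n = \frac{V_e(x_{n-1}) - 2\,V_e(x_n) + V_e(x_{n+1})}{\Delta x^2}, $$

with node $n$ predicted to fire when $f_n$ exceeds a fiber-diameter-dependent threshold; the VNA is then the region spanned by the activated fibers.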
The SureTune4 software is used to do the following:
- Import MR, O-arm™, and CT patient images over a DICOM network or from physical media (hard drive, USB drive, CD, or DVD)
- Import DICOM archives from StealthStation™ S7™ systems with Cranial 3.x software, StealthStation™ S8 Cranial software systems, and SureTune4 systems over a DICOM network
- Combine MR, O-arm, and CT images for more detail
- Superimpose an anatomical atlas to better understand the position of structures of interest relative to a patient's anatomy
- Manually segment structures of interest to highlight particular brain structures
- Localize graphical Medtronic DBS lead models (based on preoperative imaging)
- Enter electrophysiological annotations
- Visualize VNA fields relative to structures of interest in the patient anatomy or lead position
- Create patient-specific stimulation plans for DBS programming
- Generate reports that summarize stimulation plans for patients
- Export patient sessions to SureTune4 XLS spreadsheets (in Microsoft™ Excel format)
The SureTune4 Software, a brain stimulation programming planning software for Deep Brain Stimulation (DBS) devices, underwent a user needs validation study and a formative usability evaluation to demonstrate its conformity with regulatory requirements. No clinical performance testing was provided.
1. Acceptance Criteria and Reported Device Performance
The acceptance criterion for the user needs validation study was an average score of 3 or higher (on a 1-5 scale) for each user need. The device met this criterion; a toy illustration of this scoring rule follows the table below.
| Acceptance Criteria | Reported Device Performance |
| --- | --- |
| Average score of 3 or higher for each user need in the User Needs Validation Study. | All user needs received an average rating of 3 or greater on a 1-5 scale. |
| Passing results in the Formative Usability Evaluation, identifying use difficulties and assessing safety and efficacy of use, including Task Completion, Trends in Use Difficulties, Qualitative Feedback, and Participant Safety Scores. | The formative usability evaluation "Passed." Some minor use difficulties were observed but were appropriately mitigated. |
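The scoring rule is simple enough to state exactly: a user need passes when the mean of its 1-5 ratings is at least 3. A toy check with hypothetical ratings (the per-need data are not disclosed in the summary):

```python
# Hypothetical ratings (1-5 scale) from the eight study participants for
# two user needs; real per-need data are not published in the summary.
ratings = {
    "visualize VNA vs anatomy": [4, 5, 3, 4, 4, 5, 3, 4],
    "import images over DICOM": [3, 2, 4, 3, 3, 4, 3, 3],
}

for need, scores in ratings.items():
    mean = sum(scores) / len(scores)
    verdict = "PASS" if mean >= 3 else "FAIL"
    print(f"{need}: mean {mean:.2f} -> {verdict}")
```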
2. Sample Size and Data Provenance
User Needs Validation Study:
- Sample Size: 8 subjects (5 Neurologists, 3 Neurosurgeons)
- Data Provenance: Not explicitly stated, but implied to be prospective for the purpose of the study. The country of origin is not specified.
Formative Usability Evaluation:
- Sample Size: 15 subjects (8 Neurosurgeons, 7 Neurologists)
- Data Provenance: Not explicitly stated, but implied to be prospective for the purpose of the study. The country of origin is not specified.
3. Number of Experts and Qualifications for Ground Truth
For both the User Needs Validation Study and the Formative Usability Evaluation, the "ground truth" was established by the participating medical professionals' subjective ratings and observations of the software's performance and usability.
- Number of Experts:
- User Needs Validation Study: 8 (5 Neurologists, 3 Neurosurgeons)
- Formative Usability Evaluation: 15 (8 Neurosurgeons, 7 Neurologists)
- Qualifications of Experts: Neurologists and Neurosurgeons. The level of experience (e.g., years of practice) is not specified.
4. Adjudication Method
Neither study described an explicit adjudication method. The User Needs Validation Study used an average rating, implying that individual ratings were aggregated without a formal adjudication process to resolve discrepancies between subjects. The Formative Usability Evaluation involved observing use difficulties and collecting qualitative feedback, which would likely be summarized rather than formally adjudicated.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No MRMC comparative effectiveness study was mentioned. The studies focused on user needs and usability of the software itself rather than comparing human reader performance with and without AI assistance for diagnosing or interpreting images.
6. Standalone Performance Study
No standalone performance study of the algorithm (i.e., algorithm only without human-in-the-loop performance) was explicitly described in the provided text in the context of clinical performance. The software verification and validation (V&V) testing included "Volume of neural activation generation and visualization" and "Patient image fusion (registration)," which suggest algorithmic evaluations, but these were part of the software development and verification process, not a standalone clinical performance study as typically understood (e.g., measuring diagnostic accuracy). The document states, "Clinical performance testing was not provided for the SureTune4."
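For readers unfamiliar with what "patient image fusion (registration)" verification exercises, a generic rigid CT-to-MR registration with SimpleITK looks like the sketch below. File names and optimizer parameters are placeholders; this is a standard open-source recipe, not SureTune4's implementation.

```python
import SimpleITK as sitk

# Placeholder inputs; any co-registerable MR/CT pair in a format
# SimpleITK can read would do.
fixed = sitk.ReadImage("mr_preop.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("ct_postop.nii.gz", sitk.sitkFloat32)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsRegularStepGradientDescent(
    learningRate=1.0, minStep=1e-4, numberOfIterations=200)
reg.SetInterpolator(sitk.sitkLinear)

# Rigid (6 degree-of-freedom) transform initialized on image geometry.
initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)
reg.SetInitialTransform(initial, inPlace=False)

transform = reg.Execute(fixed, moving)
fused = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
sitk.WriteImage(fused, "ct_in_mr_space.nii.gz")
```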
7. Type of Ground Truth Used
For the User Needs Validation Study and Formative Usability Evaluation, the "ground truth" was based on:
- Expert Consensus/Subjective Feedback: Participants (Neurologists and Neurosurgeons) directly assessed whether the device met their needs and identified usability issues. This is a form of expert subjective evaluation rather than a ground truth derived from objective clinical outcomes or pathology.
For the underlying VNA model and image processing:
- Internal Validation/Benchmarking: The "VNA modeling description and justification, human factors/usability testing, auto-detect lead orientation algorithm accuracy testing, image fusion accuracy testing" passed prespecified acceptance criteria. This implies some form of internal ground truth or reference standard was used for these technical validations, but its specific nature (e.g., phantom data, annotated datasets) is not detailed.
8. Sample Size for the Training Set
The document does not provide information about a specific training set size for an AI/ML algorithm. The SureTune4 Software's description focuses on mathematical modeling of electric fields and neural activation rather than a deep learning model trained on a large dataset. Therefore, the concept of a "training set" as typically used for AI/ML is not directly applicable or described in this context.
9. How the Ground Truth for the Training Set Was Established
As there is no explicit mention of a training set or a deep learning algorithm in the provided text, the method for establishing training-set ground truth is not applicable. The software mathematically combines a finite element (FE)-based electric field model of the lead with an axon-based neural activation model, which suggests a physics-based approach rather than a data-driven machine learning model requiring a labeled training set.