Bullsai Confirm provides functionality to assist medical professionals in planning the programming of stimulation for patients receiving approved Abbott deep brain stimulation (DBS) devices.
Bullsai Confirm is intended to assist medical professionals in planning the programming of deep brain stimulation (DBS) by visualizing the Volume of Tissue Activated (VTA) relative to patient anatomy. It is used to visualize patient-specific information within the patient's anatomy. Integrated magnetic resonance imaging (MRI) and computed tomography (CT) images are uploaded to Bullsai Confirm and can be navigated in multiple 2D projections and 3D reconstructions. Abbott DBS lead models are positioned at the corresponding lead artifacts, and potential stimulation settings and electrode configurations are entered. Bullsai Confirm mathematically combines a finite element (FE)-based electric field model of the lead with an axon-based neural activation model to translate potential stimulation settings and electrode configurations into a visualized VTA field, indicating the shape and the area or volume of anatomy that will be activated by the stimulation. Results, including input image quality assessments, are shared in an output PDF report and visualized in a web-based software interface.
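The summary does not disclose the mathematical details of the FE field model or the axon-based activation model. Purely as a minimal sketch of the general VTA-thresholding idea, assuming a field-magnitude grid has already been computed, the Python fragment below marks voxels that exceed an illustrative activation threshold and reports the resulting volume; the function names, threshold value, and voxel size are assumptions, not Bullsai Confirm's actual algorithm.

```python
import numpy as np

def estimate_vta_mask(e_field_mag_vpm, activation_threshold_vpm=0.2):
    """Mark voxels whose electric-field magnitude exceeds an activation threshold.

    e_field_mag_vpm: 3D array of field magnitudes (V/mm) from a precomputed
    finite element solution of the lead/tissue model (hypothetical input).
    activation_threshold_vpm: illustrative stand-in for the axon-model-derived
    activation criterion; a real value depends on pulse width, axon diameter,
    and the specific neural activation model.
    """
    return e_field_mag_vpm >= activation_threshold_vpm

def vta_volume_mm3(vta_mask, voxel_size_mm=(0.5, 0.5, 0.5)):
    """Convert the activated-voxel count into a volume in cubic millimetres."""
    voxel_volume = float(np.prod(voxel_size_mm))
    return int(vta_mask.sum()) * voxel_volume

if __name__ == "__main__":
    # Synthetic 40x40x40 field grid standing in for an FE solution.
    rng = np.random.default_rng(0)
    field = rng.random((40, 40, 40)) * 0.4  # fake V/mm values
    mask = estimate_vta_mask(field)
    print(f"VTA volume: {vta_volume_mm3(mask):.1f} mm^3")
```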
Bullsai Confirm is used to do the following:
- Import DICOM images, including MRI and CT DICOM images, from a picture archiving and communication system (PACS)
- Import preoperative planning outputs (including tractography, structural ROIs, etc.) from AWS S3 cloud storage (see the import sketch after this list)
- Combine MR images, CT images, and patient-specific 3D structures for more detail
- Localize graphical models of compatible DBS leads (based on preoperative imaging)
- Visualize VTA fields relative to structures of interest in the patient anatomy or lead position
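As referenced in the list above, the following is a minimal sketch of what DICOM and S3 import can look like in code, using pydicom and boto3; the directory layout, bucket, and key names are hypothetical, and this is not the product's actual import interface.

```python
from pathlib import Path

import boto3    # AWS SDK, for downloading planning outputs from S3
import pydicom  # DICOM parsing

def load_dicom_series(series_dir):
    """Read every DICOM file in a directory and sort slices by instance number.

    Assumes the PACS export has already been saved to local disk; a production
    system would query the PACS directly (e.g. via DICOM networking), which is
    not shown here.
    """
    slices = [pydicom.dcmread(str(p)) for p in Path(series_dir).glob("*.dcm")]
    slices.sort(key=lambda ds: int(ds.InstanceNumber))
    return slices

def fetch_planning_output(bucket, key, dest="planning_output.json"):
    """Download a preoperative planning artifact (e.g. tractography or ROI
    file) from S3. Bucket and key names are placeholders."""
    boto3.client("s3").download_file(bucket, key, dest)
    return dest

# Illustrative usage (paths and bucket names are hypothetical):
# ct_slices  = load_dicom_series("exports/ct_series")
# mri_slices = load_dicom_series("exports/mri_series")
# plan_path  = fetch_planning_output("example-planning-bucket", "patient-001/rois.json")
```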
The software provides a workflow for clinicians to:
- Create patient-specific stimulation plans for DBS programming
- Export reports that summarize stimulation plans for patients (PNG screenshot); a toy export sketch follows this list
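The report-export bullet above refers to the sketch below: a toy example that renders a hypothetical stimulation-plan summary to a PNG with matplotlib. The plan fields and layout are invented for illustration; the summary describes the actual output only as a PDF report and PNG screenshot.

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt

def export_plan_summary(plan, out_path="stimulation_plan.png"):
    """Render a plain-text summary of a stimulation plan to a PNG file.

    `plan` is a hypothetical dict of settings (amplitude, pulse width, rate,
    active contacts); the real report layout is not described in the summary.
    """
    lines = [f"{key}: {value}" for key, value in plan.items()]
    fig, ax = plt.subplots(figsize=(4, 3))
    ax.axis("off")
    ax.text(0.02, 0.98, "\n".join(lines), va="top", family="monospace")
    fig.savefig(out_path, dpi=150, bbox_inches="tight")
    plt.close(fig)
    return out_path

# Illustrative usage with made-up settings:
# export_plan_summary({"Amplitude (mA)": 2.5, "Pulse width (us)": 60,
#                      "Rate (Hz)": 130, "Active contacts": "2, 3"})
```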
Here's a breakdown of the acceptance criteria and study details for Bullsai Confirm, based on the provided FDA 510(k) summary:
Device: Bullsai Confirm
Indication for Use: To assist medical professionals in planning the programming of stimulation for patients receiving approved Abbott deep brain stimulation (DBS) devices.
1. Table of Acceptance Criteria and Reported Device Performance
The provided document doesn't present a specific table of quantitative acceptance criteria with corresponding performance metrics like sensitivity, specificity, or accuracy for the Bullsai Confirm product itself. Instead, the acceptance criteria are described in terms of compliance with regulatory requirements and the fulfillment of specific software functionalities as special controls.
The "Performance Data" section states the device meets the special controls under 21 CFR 882.5855, which are:
| Acceptance Criteria (Special Controls, 21 CFR 882.5855) | Reported Device Performance |
| --- | --- |
| 1. Software verification, validation, and hazard analysis must be performed. | A hazard analysis and software verification and validation testing were performed for Bullsai Confirm. |
| 2. Usability assessment must demonstrate that the intended user(s) can safely and correctly use the device. | Bullsai Confirm underwent formative usability testing. |
| 3. Labeling must include: (a) the implanted brain stimulators for which the device is compatible; (b) instructions for use; (c) instructions and explanations of all user-interface components; and (d) a warning regarding use of the data with respect to not replacing clinical judgment. | The User Manual for Bullsai Confirm contains the labeling statements in accordance with the special controls (implying compliance with all sub-points a-d). |
Note: The document also mentions a "technical performance evaluation of the lead artifact detection and registration between image types" but does not provide specific acceptance criteria or performance metrics (e.g., accuracy, precision) for these evaluations. This is common practice in 510(k) summaries, where specific quantitative performance for planning software of this type may not be required in the public summary if the primary claim is substantial equivalence and compliance with special controls.
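For context on what "registration between image types" typically entails (not a description of the manufacturer's method), the sketch below performs a rigid, mutual-information-based CT-to-MRI registration with SimpleITK; the metric, optimizer, and parameter values are illustrative assumptions.

```python
import SimpleITK as sitk

def register_ct_to_mri(mri_path, ct_path):
    """Rigidly register a CT volume to an MRI volume using Mattes mutual
    information, a common approach for cross-modality alignment.
    File paths, metric settings, and optimizer parameters are illustrative."""
    fixed = sitk.Cast(sitk.ReadImage(mri_path), sitk.sitkFloat32)
    moving = sitk.Cast(sitk.ReadImage(ct_path), sitk.sitkFloat32)

    # Initialize with a geometry-centred rigid (Euler) transform.
    initial = sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200)
    reg.SetInitialTransform(initial, inPlace=False)
    reg.SetInterpolator(sitk.sitkLinear)

    transform = reg.Execute(fixed, moving)

    # Resample the CT into the MRI grid for fused visualization.
    resampled_ct = sitk.Resample(moving, fixed, transform,
                                 sitk.sitkLinear, 0.0, sitk.sitkFloat32)
    return transform, resampled_ct
```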
Study Proving Device Meets Acceptance Criteria
The document outlines that the device's performance was evaluated through various tests to meet the special controls.
2. Sample Sizes Used for the Test Set and Data Provenance
The document does not explicitly state the number of cases or sample sizes used for the "formative usability testing" or the "technical performance evaluation of the lead artifact detection and registration between image types."
- Data Provenance: Not specified (e.g., country of origin, retrospective/prospective). While it mentions importing from PACS and AWS S3, this doesn't detail the origin of the data used for testing.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
The document does not specify the number of experts or their qualifications involved in establishing ground truth for any test sets. The nature of this software (DBS planning assistant) suggests that "ground truth" would likely relate to the accuracy of lead localization, VTA calculation, and anatomical registration, typically assessed by neurosurgeons or neurologists specializing in DBS.
4. Adjudication Method for the Test Set
The document does not describe any specific adjudication method (e.g., 2+1, 3+1 consensus) for establishing ground truth or evaluating device performance. This would typically be detailed if a reader study or performance validation against a consensus "gold standard" was performed and the results reported as part of the summary.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
There is no mention of an MRMC comparative effectiveness study being performed, nor any data on how much human readers improve with AI vs. without AI assistance. The device is described as an assistant for planning, implying a human-in-the-loop, but without a comparative study.
6. Standalone (Algorithm Only) Performance
The document does not provide standalone (algorithm only, without human-in-the-loop) performance metrics. The software is explicitly described as assisting "medical professionals," indicating it's designed for human-in-the-loop use.
7. Type of Ground Truth Used
The specific "type of ground truth" (e.g., expert consensus, pathology, outcomes data) is not explicitly stated for any of the performance evaluations mentioned. For "technical performance evaluation of the lead artifact detection and registration between image types," the ground truth would likely be derived from expert manual localization or highly accurate imaging methods, but this is not detailed.
8. Sample Size for the Training Set
The document provides no information regarding the size of the training set used for any machine learning components (if applicable) within the Bullsai Confirm software. Given the description focusing on finite element models and neural activation models, it's possible the core algorithms are physics-based rather than exclusively data-driven, or that details of data-driven components (if any) are not disclosed in this summary.
9. How Ground Truth for the Training Set Was Established
As no training set size is provided, there is consequently no information on how ground truth for a training set was established.
§ 882.5855 Brain stimulation programming planning software.
(a) Identification. The brain stimulation programming planning software is a prescription device intended to assist in planning stimulation programming for implanted brain stimulators.
(b) Classification. Class II (special controls). The special controls for this device are:
(1) Software verification, validation, and hazard analysis must be performed.
(2) Usability assessment must demonstrate that the intended user(s) can safely and correctly use the device.
(3) Labeling must include:
(i) The implanted brain stimulators for which the device is compatible.
(ii) Instructions for use.
(iii) Instructions and explanations of all user-interface components.
(iv) A warning regarding use of the data with respect to not replacing clinical judgment.