510(k) Data Aggregation
(135 days)
Turing Medical Technologies, Inc.
Bullsai Confirm provides functionality to assist medical professionals in planning the programming of stimulation for patients receiving approved Abbott deep brain stimulation (DBS) devices.
Bullsai Confirm is intended to assist medical professionals in planning the programming of deep brain stimulation (DBS) by visualizing the Volume of Tissue Activated (VTA) relative to patient anatomy. It is used to visualize patient-specific information within the patient's anatomy. Integrated magnetic resonance imaging (MRI) and computed tomography (CT) images are uploaded to Bullsai Confirm and can be navigated in multiple 2D projections and 3D reconstructions. Abbott DBS lead models are positioned at the corresponding imaging artifacts, and potential stimulation settings and electrode configurations are entered. Bullsai Confirm mathematically combines a finite element (FE)-based electric field model of the lead with an axon-based neural activation model to translate potential stimulation settings and electrode configurations into a visualized VTA field, indicating the shape and the area or volume of anatomy that will be activated by the stimulation. Results, including input image quality assessments, are shared in an output PDF report and visualized in a web-based software interface.
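The general idea of a VTA computation can be illustrated with a toy sketch. The summary describes a finite element electric field model coupled to axon activation models; the snippet below replaces both with a much simpler assumption, a point-current source in a homogeneous medium whose field magnitude is thresholded on a voxel grid. The conductivity and activation threshold values are illustrative assumptions, not Abbott or Turing Medical parameters.

```python
import math

# Toy VTA sketch: approximate one electrode contact as a point current
# source and mark grid points whose field magnitude exceeds an assumed
# activation threshold. A real system solves an FE field model and couples
# it to axon models; all constants here are illustrative assumptions.

SIGMA = 0.2                 # assumed tissue conductivity, S/m
THRESHOLD_V_PER_M = 200.0   # assumed activation threshold on |E|

def field_magnitude(i_amp, r_m):
    """|E| of a point current source: |E| = I / (4*pi*sigma*r^2)."""
    return i_amp / (4.0 * math.pi * SIGMA * r_m ** 2)

def vta_voxels(i_amp, grid_mm, step_mm=0.5):
    """Return grid points (in mm) whose field exceeds the threshold."""
    activated = []
    for x in grid_mm:
        for y in grid_mm:
            for z in grid_mm:
                r = math.sqrt(x * x + y * y + z * z) * 1e-3  # mm -> m
                if r > 0 and field_magnitude(i_amp, r) >= THRESHOLD_V_PER_M:
                    activated.append((x, y, z))
    return activated

grid = [i * 0.5 for i in range(-10, 11)]   # -5 mm .. 5 mm, 0.5 mm spacing
vox = vta_voxels(3e-3, grid)               # 3 mA stimulation amplitude
volume_mm3 = len(vox) * 0.5 ** 3           # activated volume estimate
```

Under these assumptions the activated region is a sphere around the contact; higher amplitudes enlarge it, which is the qualitative behavior the visualized VTA field conveys to the clinician.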
Bullsai Confirm is used to do the following:
- Import DICOM images from a picture archiving and communication system (PACS), including MRI and CT DICOM images.
- Import preoperative planning outputs (including tractography, structural ROIs, etc.) from AWS S3 Cloud Storage
- Combine MR images, CT images, and patient-specific 3D structures for more detail
- Localize compatible graphical DBS lead models (based on preoperative imaging)
- Visualize VTA fields relative to structures of interest in the patient anatomy or lead position
The software provides a workflow for clinicians to:
- Create patient-specific stimulation plans for DBS programming
- Export reports that summarize stimulation plans for patients (PNG screenshot)
Here's a breakdown of the acceptance criteria and study details for Bullsai Confirm, based on the provided FDA 510(k) summary:
Device: Bullsai Confirm
Indication for Use: To assist medical professionals in planning the programming of stimulation for patients receiving approved Abbott deep brain stimulation (DBS) devices.
1. Table of Acceptance Criteria and Reported Device Performance
The provided document doesn't present a specific table of quantitative acceptance criteria with corresponding performance metrics like sensitivity, specificity, or accuracy for the Bullsai Confirm product itself. Instead, the acceptance criteria are described in terms of compliance with regulatory requirements and the fulfillment of specific software functionalities as special controls.
The "Performance Data" section states the device meets the special controls under 21 CFR 882.5855, which are:
| Acceptance Criteria (Special Controls - 21 CFR 882.5855) | Reported Device Performance |
|---|---|
| 1. Software verification, validation, and hazard analysis must be performed. | A hazard analysis and software verification and validation testing were performed for Bullsai Confirm. |
| 2. Usability assessment must demonstrate that the intended user(s) can safely and correctly use the device. | Bullsai Confirm underwent formative usability testing. |
| 3. Labeling must include: (a) the implanted brain stimulators with which the device is compatible; (b) instructions for use; (c) instructions and explanations of all user-interface components; (d) a warning regarding use of the data with respect to not replacing clinical judgment. | The User Manual for Bullsai Confirm contains the labeling statements in accordance with the special controls (implying compliance with all sub-points a-d). |
Note: The document also mentions "technical performance evaluation of the lead artifact detection and registration between image types" but does not provide specific acceptance criteria or performance metrics (e.g., accuracy, precision) for these evaluations. This is a common practice in 510(k) submissions where specific quantitative performance for a planning software like this might not be required in the public summary if the primary claim is substantial equivalence and compliance with special controls.
Study Proving Device Meets Acceptance Criteria
The document outlines that the device's performance was evaluated through various tests to meet the special controls.
2. Sample Sizes Used for the Test Set and Data Provenance
The document does not explicitly state the number of cases or sample sizes used for the "formative usability testing" or the "technical performance evaluation of the lead artifact detection and registration between image types."
- Data Provenance: Not specified (e.g., country of origin, retrospective/prospective). While it mentions importing from PACS and AWS S3, this doesn't detail the origin of the data used for testing.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
The document does not specify the number of experts or their qualifications involved in establishing ground truth for any test sets. The nature of this software (DBS planning assistant) suggests that "ground truth" would likely relate to the accuracy of lead localization, VTA calculation, and anatomical registration, typically assessed by neurosurgeons or neurologists specializing in DBS.
4. Adjudication Method for the Test Set
The document does not describe any specific adjudication method (e.g., 2+1, 3+1 consensus) for establishing ground truth or evaluating device performance. This would typically be detailed if a reader study or performance validation against a consensus "gold standard" was performed and the results reported as part of the summary.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
There is no mention of an MRMC comparative effectiveness study being performed, nor any data on how much human readers improve with AI vs. without AI assistance. The device is described as an assistant for planning, implying a human-in-the-loop, but without a comparative study.
6. Standalone (Algorithm Only) Performance
The document does not provide standalone (algorithm only, without human-in-the-loop) performance metrics. The software is explicitly described as assisting "medical professionals," indicating it's designed for human-in-the-loop use.
7. Type of Ground Truth Used
The specific "type of ground truth" (e.g., expert consensus, pathology, outcomes data) is not explicitly stated for any of the performance evaluations mentioned. For "technical performance evaluation of the lead artifact detection and registration between image types," the ground truth would likely be derived from expert manual localization or highly accurate imaging methods, but this is not detailed.
8. Sample Size for the Training Set
The document provides no information regarding the size of the training set used for any machine learning components (if applicable) within the Bullsai Confirm software. Given the description focusing on finite element models and neural activation models, it's possible the core algorithms are physics-based rather than exclusively data-driven, or that details of data-driven components (if any) are not disclosed in this summary.
9. How Ground Truth for the Training Set Was Established
As no training set size is provided, there is consequently no information on how ground truth for a training set was established.
(113 days)
Turing Medical Technologies, Inc.
Bullsai is composed of a set of modules intended for analysis and processing of medical images and other healthcare data. It includes functions for image manipulation, basic measurements, and planning.
Bullsai is indicated for use in image processing, registration, atlas-assisted visualization and segmentation, and target export creation and selection of structural MRI images, where an output can be generated for use by a system capable of reading DICOM image sets.
Bullsai is indicated for use in the processing of diffusion-weighted MRI sequences into 3D maps that represent white matter tracts based on diffusion reconstruction methods and for the use of said maps to select and create exports.
Typical users of Bullsai are medical professionals, including but not limited to surgeons, clinicians, and radiologists.
Bullsai is a software-only, cloud-deployed, image processing package which can be used to perform DICOM image processing and analysis.
Bullsai can receive ("import") DICOM images from picture archiving and communication systems (PACS), including Diffusion Weighted Imaging (DWI) and structural brain imaging.
Bullsai removes protected health information (PHI) and links the dataset to an encryption key, which is then used to relink the data back to the patient when the data is exported to a medical imaging platform such as a hospital PACS or other DICOM device.
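The de-identification flow described above, stripping PHI and holding it under a key that later relinks the export to the patient, can be sketched as follows. The field names, the in-memory "key store," and the use of a random token in place of a real encryption key are all assumptions for illustration; a production system would operate on real DICOM tags and encrypted storage.

```python
import secrets

# Sketch of PHI removal with key-based relinking (illustrative only).
# PHI fields are moved out of the dataset into a store keyed by a random
# token; on export, the token recovers the original patient identifiers.

PHI_FIELDS = {"PatientName", "PatientID", "PatientBirthDate"}

def deidentify(dataset, key_store):
    """Strip PHI fields and stash them under a freshly generated token."""
    token = secrets.token_hex(16)
    key_store[token] = {k: dataset[k] for k in PHI_FIELDS if k in dataset}
    scrubbed = {k: v for k, v in dataset.items() if k not in PHI_FIELDS}
    scrubbed["LinkToken"] = token
    return scrubbed

def relink(scrubbed, key_store):
    """Restore the PHI fields using the dataset's link token."""
    restored = dict(scrubbed)
    token = restored.pop("LinkToken")
    restored.update(key_store[token])
    return restored

store = {}
anon = deidentify({"PatientName": "DOE^JANE", "PatientID": "123",
                   "Modality": "MR"}, store)
full = relink(anon, store)
```

The design point is that the scrubbed dataset alone carries no PHI; only the combination of dataset and key store can reconstruct the patient linkage.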
The software provides a workflow for a clinician to:
- Select an image for planning and visualization,
- Validate image quality,
- Export black and white and color DICOMs for use in systems that can view DICOM images.
Bullsai uses advanced MRI processing to deliver patient-specific anatomy and tractography results to support physicians in neurosurgical planning. Bullsai preprocessing steps include denoising, debiasing, skull stripping, and susceptibility distortion and head motion correction, to ensure input data can support generation of tractography and segmentation results. Bullsai uses generalized q-sampling imaging (GQI) to estimate the voxel-level white matter microstructure: GQI is a model-free diffusion reconstruction method that uses the Fourier transform relationship between the diffusion MR signal and the potential diffusion displacement to resolve fiber orientations and the anisotropy of water. Patient-specific anatomical segmentation and GQI-estimated fiber orientations are used in Bullsai as part of an iterative heuristic tractography algorithm that emulates the manual work of neuroanatomists, who iteratively seed tracts and remove aberrant fibers. Results are shared as DICOM outputs which can readily be viewed and edited in standard neurosurgical planning software packages.
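The core mechanic shared by tractography pipelines like the one described is deterministic streamline tracking: starting from a seed, repeatedly step along the locally estimated fiber orientation until a stopping criterion is met. The sketch below shows only that mechanic; GQI reconstruction and the iterative seeding/pruning heuristic are far more involved, and the constant orientation field, step size, and bounds here are assumptions for illustration.

```python
# Minimal deterministic streamline tracking sketch (illustrative only).
# A real pipeline would look up per-voxel fiber orientations estimated by
# GQI; here the orientation field is a constant unit vector along +x.

def orientation(p):
    """Assumed fiber orientation field: everywhere +x (unit vector)."""
    return (1.0, 0.0, 0.0)

def track(seed, step=0.5, max_steps=100, bounds=10.0):
    """Euler-integrate a streamline from a seed through the field,
    stopping when the point leaves the tracking region."""
    path = [seed]
    p = seed
    for _ in range(max_steps):
        d = orientation(p)
        p = (p[0] + step * d[0], p[1] + step * d[1], p[2] + step * d[2])
        if any(abs(c) > bounds for c in p):  # left the tracking mask
            break
        path.append(p)
    return path

streamline = track((0.0, 0.0, 0.0))
```

Real trackers also stop on low anisotropy or sharp turning angles, and the "remove aberrant fibers" step the summary mentions corresponds to pruning streamlines that violate anatomical constraints.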
The provided text contains details about the Bullsai device, its indications for use, and a comparison to a predicate device (Quicktome Software Suite). However, it does not explicitly detail the acceptance criteria for a study or the results of a specific study proving the device meets those criteria, nor does it provide information regarding sample sizes for test/training sets, data provenance, number/qualification of experts, adjudication methods, MRMC studies, or standalone performance metrics.
The "Performance Testing Summary" section states that "Software verification and validation testing were conducted. Documentation and relevant standards were provided," but it does not elaborate on the specifics of these tests or their results against defined acceptance criteria.
The "Validation of differences compared to predicate devices" section mentions three validation activities:
- "Clinical accuracy of the Bullsai device outputs were evaluated and validated by clinical experts for the clinical intended uses of presurgical planning."
- "Bullsai device outputs were validated for compatibility with major neuronavigation software systems."
- "Bullsai device outputs were validated for repeatability and reproducibility across major scanner manufacturer brands."
However, no further details are provided about these validations.
Therefore, based on the provided text, the specific information requested in the numbered points below cannot be fully extracted, as it is not explicitly stated.
Here's a summary of what can be inferred or what is missing:
1. A table of acceptance criteria and the reported device performance:
* Acceptance Criteria: Not explicitly stated in the document.
* Reported Device Performance: Not stated in measurable terms against acceptance criteria. The document mentions that device outputs were "evaluated and validated by clinical experts," validated for compatibility, and validated for repeatability and reproducibility, but no specific performance metrics or thresholds are provided.
2. Sample sizes used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective):
* Sample Size (Test Set): Not stated.
* Data Provenance: Not stated.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience):
* Number of Experts: Not stated, only "clinical experts" are mentioned.
* Qualifications of Experts: Not stated.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:
* Adjudication Method: Not stated.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs. without AI assistance:
* MRMC Study: Not mentioned as being performed. The device description does not imply a human-in-the-loop diagnostic aid but rather image processing and output generation for other systems. The predicate device's differences section states, "Bullsai does not have a viewer and therefore a usability study was not conducted," which suggests a traditional MRMC study involving human reading performance might not be directly applicable or was not undertaken.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
* Standalone Performance: The document implies standalone validation for "clinical accuracy," "compatibility," and "repeatability and reproducibility." However, specific quantitative results from such a standalone study are not provided.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
* Type of Ground Truth: The document states "Clinical accuracy of the Bullsai device outputs were evaluated and validated by clinical experts." This suggests expert consensus or expert review served as the ground truth. No mention of pathology or outcomes data is made.
8. The sample size for the training set:
* Sample Size (Training Set): Not stated.
9. How the ground truth for the training set was established:
* Ground Truth (Training Set): Not stated. The document focuses on validation activities, not training methodologies or ground truth establishment for training data.