510(k) Data Aggregation (112 days)
BRIGHTMATTER PLANNING SOFTWARE
BrightMatter Planning's indications for use are the viewing and simulation of cranial surgical procedures and the review of existing treatment plans. Typical users of the software are medical professionals, including but not limited to surgeons and radiologists.
BrightMatter Planning is treatment planning software that enables the user to view and process medical image data. The software is intended for pre-operative planning of neurosurgical treatments based on image-guided surgical systems. The planning software provides the ability to visualize diagnostic images in 2D and 3D formats and to fuse image datasets. The software automatically segments the skull from the acquired image and generates diffusion tracts from DTI data. The user can also manually annotate regions of interest, producing structures that can subsequently be visualized in 3D. The end result of such processing is a set of images that can be used to develop a treatment plan for a neuronavigational procedure. The treatment plan is developed by a trained person, who can use the software to segment structures, define regions of interest, and establish one or more trajectories. The software, operated on a stand-alone computer workstation, is expected to be used by a Surgical Planner in an office setting, in preparation for one of several possible surgical procedures. The resulting treatment plan can be exported to a PACS for subsequent use in image-guided surgery.
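The automatic tract generation described above is, in essence, streamline tractography over a DTI tensor field. The 510(k) summary does not disclose the algorithm BrightMatter Planning actually uses, so the following is only a minimal sketch of the standard deterministic approach: from a seed point, repeatedly step along the principal eigenvector of the local diffusion tensor until a stopping criterion (such as low fractional anisotropy) is met. All names and parameters here are hypothetical and for illustration only.

```python
# Minimal sketch of deterministic streamline tractography over a DTI
# tensor field. Hypothetical names; not BrightMatter Planning's method.
import numpy as np

def principal_direction(tensor):
    """Unit eigenvector for the largest eigenvalue of a 3x3 diffusion tensor."""
    eigvals, eigvecs = np.linalg.eigh(tensor)
    return eigvecs[:, np.argmax(eigvals)]

def track_fiber(tensor_field, seed, step_size=0.5, max_steps=1000, fa_threshold=0.2):
    """Euler integration along the principal diffusion direction from a seed.

    tensor_field: (X, Y, Z, 3, 3) array of per-voxel diffusion tensors.
    seed: starting position in voxel coordinates.
    Tracking stops on leaving the volume or when fractional anisotropy (FA)
    drops below fa_threshold (a common stopping criterion).
    """
    pos = np.asarray(seed, dtype=float)
    streamline = [pos.copy()]
    prev_dir = None
    for _ in range(max_steps):
        idx = tuple(np.round(pos).astype(int))
        if any(i < 0 or i >= s for i, s in zip(idx, tensor_field.shape[:3])):
            break  # left the volume
        tensor = tensor_field[idx]
        eigvals = np.linalg.eigvalsh(tensor)
        denom = np.sqrt((eigvals ** 2).sum())
        if denom == 0:
            break
        fa = np.sqrt(1.5 * ((eigvals - eigvals.mean()) ** 2).sum()) / denom
        if fa < fa_threshold:
            break  # low anisotropy: likely outside white matter
        direction = principal_direction(tensor)
        if prev_dir is not None and np.dot(direction, prev_dir) < 0:
            direction = -direction  # keep a consistent tracking orientation
        pos = pos + step_size * direction
        streamline.append(pos.copy())
        prev_dir = direction
    return np.array(streamline)

# Example: a uniform synthetic field whose principal axis is x.
field = np.zeros((16, 16, 16, 3, 3))
field[..., 0, 0], field[..., 1, 1], field[..., 2, 2] = 1.0, 0.2, 0.2
print(track_fiber(field, seed=(8, 8, 8)).shape)  # (n_steps, 3) points along x
```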
Here's a breakdown of the acceptance criteria and study information for the BrightMatter Planning Software, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The provided 510(k) summary does not explicitly state quantitative acceptance criteria for the BrightMatter Planning Software. Instead, it focuses on demonstrating substantial equivalence to predicate devices and verifying software functionality. The key performance comparison is for DTI-derived image creation and tractography.
| Acceptance Criteria (Implicit/Derived) | Reported Device Performance |
| --- | --- |
| Equivalence in performance for DTI-derived image creation and tractography compared to a predicate system (Siemens syngo MR B17) | DTI-derived image creation and tractography results were demonstrated to be equivalent in performance to Siemens syngo MR B17 software |
| Software functionality (load/import data, view/adjust data, registration points, image fusion, object creation, advanced object planning, fiber tracking, trajectory planning, save/export plans, 3D functionalities) | Subject device (BrightMatter Planning) provides these functionalities, similar to the predicate iPlan Cranial module |
| Software verification and validation (unit, integration, and system level for each requirement/algorithmic function) | Bench (software validation) testing was conducted for each requirement specification and algorithmic function, at the unit, integration, and system levels |
| Compliance with quality assurance measures during development | Software Development Life Cycle, Software Risk Assessment, Risk Assessment of Off-the-Shelf (OTS) Software, Software Configuration Management and Version Control, and software issue tracking and resolution were applied |
| Safety and effectiveness for intended use | Design validation in actual and simulated use settings supported substantial equivalence and demonstrated safety for intended use |
2. Sample Size Used for the Test Set and Data Provenance
The document does not specify a separate "test set" in the context of clinical data for performance assessment. The "testing" primarily refers to non-clinical software verification and validation.
- Sample Size for Test Set: Not specified. The evaluation was primarily software validation and a comparison against an existing system's output, not a clinical study with patient samples.
- Data Provenance: Not applicable in the context of a "test set" for clinical performance. The comparison for DTI tractography was against the output of Siemens syngo MR B17 software, implying existing data processed by that tool.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications
This information is not provided in the document. The performance assessment is based on comparison to an existing software's output rather than expert-established ground truth on a test image set.
4. Adjudication Method for the Test Set
Not applicable. There is no mention of a test set requiring adjudication from multiple readers.
5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size of Human Improvement with AI vs. Without AI Assistance
No, a Multi Reader Multi Case (MRMC) comparative effectiveness study was not done. The document explicitly states: "This technology is not new, therefore a clinical study was not considered necessary prior to release. Additionally, there was no clinical testing required to support the medical device as the indications for use is equivalent to the predicate device." Therefore, there is no information on the effect size of human improvement with or without AI assistance.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
Yes, a "standalone" evaluation of the algorithm's performance for DTI-derived image creation and tractography was done. The document states: "The DTI-derived image creation and tractography results were compared with Siemens syngo MR B17 software and were demonstrated to be equivalent in performance." This suggests an assessment of the algorithm's output directly against a benchmark, without necessarily involving a human reader in the performance metric itself.
7. The Type of Ground Truth Used
The "ground truth" for the DTI-derived image creation and tractography comparison was the output of a commercially available and cleared software: Siemens syngo MR B17. This serves as a comparative benchmark rather than an independent expert consensus, pathology, or outcomes data.
8. The Sample Size for the Training Set
The document does not mention a training set. This is consistent with a 510(k) submission for a software device that relies on established algorithms and demonstrates equivalence to predicate devices, rather than a new AI/ML device that requires extensive training data.
9. How the Ground Truth for the Training Set Was Established
Not applicable, as no training set is mentioned or implied for this device.