NeuroStar Advanced Therapy System (Version 3.8)
The NeuroStar Advanced Therapy System is indicated for the treatment of depressive episodes and for decreasing anxiety symptoms for those who may exhibit comorbid anxiety symptoms in adult patients suffering from Major Depressive Disorder (MDD) and who failed to achieve satisfactory improvement from previous antidepressant medication treatment in the current episode.
The NeuroStar Advanced Therapy system is intended to be used as an adjunct for the treatment of adult patients suffering from Obsessive-Compulsive Disorder (OCD).
The NeuroStar Advanced Therapy System is a transcranial magnetic stimulation device. Specifically, it is a computerized, electromechanical medical device that produces and delivers non-invasive magnetic fields to induce electrical currents targeting specific regions of the cerebral cortex. Transcranial magnetic stimulation (TMS) is a non-invasive technique used to apply brief magnetic pulses to the brain. The pulses are administered by passing high currents through an electromagnetic coil placed adjacent to a patient's scalp. The pulses induce an electric field in the underlying brain tissue. When the induced field is above a certain threshold and is directed in an appropriate orientation relative to the brain's neuronal pathways, localized axonal depolarizations are produced, thus activating neurons in the targeted brain region.
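The induction mechanism described above follows from Faraday's law: for a spatially uniform field changing at rate dB/dt, the induced electric field at radius r from the coil axis is E = (r/2)·dB/dt. A minimal sketch of this estimate (the pulse parameters below are generic illustrations, not NeuroStar specifications):

```python
# Illustrative estimate of the electric field induced by a TMS pulse.
# From Faraday's law with circular symmetry: E * 2*pi*r = pi*r**2 * dB/dt,
# so E = (r / 2) * dB/dt. All numbers are generic examples, not device specs.

def induced_e_field(radius_m: float, db_dt_t_per_s: float) -> float:
    """Induced E-field magnitude (V/m) at `radius_m` from the coil axis."""
    return 0.5 * radius_m * db_dt_t_per_s

# A typical TMS pulse ramps ~1.5 T in ~100 microseconds -> dB/dt ~ 15 kT/s.
db_dt = 1.5 / 100e-6                      # T/s
e_field = induced_e_field(0.02, db_dt)    # 2 cm from the coil axis
print(f"~{e_field:.0f} V/m")              # on the order of 100 V/m
```

Fields of this magnitude are what the passage means by "above a certain threshold": roughly 100 V/m is commonly cited as sufficient to depolarize cortical axons.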
The NeuroStar Advanced Therapy System is an integrated system consisting of a combination of the following components:
- Mobile Console
- System Software
- Treatment Chair
- Ferromagnetic Treatment Coil
- Head Support System or Headrest
- SenStar® Connect Treatment Link & SenStar® Treatment Link
- Treatment Pack (for use with the SenStar® Connect Treatment Link)
- MDD MT Cap and OCD MT Cap
- TrakStar™ Patient Data Management System
- D-Tect™ MT Accessory
- NeuroSITE
The provided document is a 510(k) summary for the NeuroStar Advanced Therapy System (Version 3.8). This document primarily focuses on demonstrating substantial equivalence to predicate devices for regulatory clearance, rather than detailing clinical study results or acceptance criteria in the comprehensive manner typically found in a clinical study report for an AI/ML device.
However, based on the information provided, we can infer some details relevant to your request, specifically concerning the engineering and human factors testing which serves as the "study" proving the device meets performance criteria, even if not phrased as traditional "acceptance criteria" for an AI/ML algorithm's clinical performance.
Here's an attempt to answer your questions based on the available text:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly state quantitative acceptance criteria for the performance of the new software features, nor does it present device performance in a table with specific metrics. Instead, it states a qualitative conclusion:
| Acceptance Criterion (Inferred) | Reported Device Performance |
|---|---|
| Successful recording, saving, and displaying of the patient's MT and treatment locations by NeuroSITE and Headrest (equivalent to the current Head Support System) | "the results of verification and validation testing confirmed that the NeuroSITE and Headrest can successfully be used with the NeuroStar System software to record, save, and display the patient's MT and treatment locations in a manner that is equivalent to the current Head Support System." |
| Treatment location equivalence between the new components (NeuroSITE/Headrest) and the current Head Support System | "Engineering and Human Factors testing had demonstrated that the treatment location is equivalent whether using the Head Support System or the NeuroSITE" (stated for both the MDD and OCD indications). |
| Clinical outcomes will be the same with the new components | "and clinical outcomes will be the same" (stated for both indications, though this is a prediction based on location equivalence, not a direct clinical outcome measure from this submission's testing). |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document mentions "verification and validation testing" and "Engineering and Human Factors testing." It does not specify the sample size (e.g., number of uses, number of subjects for human factors) used for these tests. It also does not provide details on data provenance (country of origin, retrospective/prospective). This type of detail is typically found in the full test reports, which are summarized in the 510(k).
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
The document does not provide information on the number or qualifications of experts used to establish "ground truth" for the engineering and human factors testing. Given the nature of the change (replacement of physical components for coil positioning), the ground truth for "treatment location equivalence" would likely be based on physical measurements and possibly user observations in human factors studies, rather than expert interpretation of medical images.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
The document does not describe any adjudication method. This type of process is common in image interpretation studies or clinical trials where subjective assessments are made by multiple readers. For engineering and human factors testing of device mechanics and software functionality, adjudication in this sense is typically not applicable.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, the effect size of how much human readers improve with AI vs. without AI assistance
No MRMC comparative effectiveness study was done or mentioned. This submission is for a modification to a Transcranial Magnetic Stimulation (TMS) system, specifically regarding mechanical components and software for coil positioning, not an AI-assisted diagnostic or interpretative device that would typically undergo an MRMC study. The device itself is a treatment device, not primarily an AI-driven interpretive one in the context of MRMC studies.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
The document does not detail separate standalone algorithm performance. The software change (version 3.8) is explicitly linked to the physical components (NeuroSITE and Headrest) for recording, saving, and displaying patient MT and treatment locations. The testing confirmed the system's ability (software with new hardware) to achieve equivalence, implying an integrated performance evaluation rather than a standalone algorithm assessment.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
The document doesn't explicitly state the "type of ground truth." However, based on the context of the device modification (coil positioning), the ground truth for "treatment location equivalence" would most likely be established through:
- Engineering measurements: Precise physical measurements of coil placement relative to anatomical landmarks.
- Human factors evaluation: Observation and measurement of user interaction and accuracy in achieving desired coil positions by operators, potentially against a pre-defined target or the performance of the predicate system.
- Absence of new risks: The document emphasizes that the change "does not introduce any new risks" and that "clinical outcomes will be the same," implying that the ground truth for safety and effectiveness is tied to maintaining the established performance of the predicate device.
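As a concrete illustration of the kind of engineering measurement that could underpin a "treatment location equivalence" claim, the sketch below compares coil placements recorded with the two head supports against a positional tolerance. The coordinates, the 2 mm tolerance, and the comparison procedure are all hypothetical assumptions for illustration, not details drawn from the submission:

```python
import math

# Hypothetical scalp coordinates (mm) of the coil center recorded for the
# same target using the predicate Head Support System vs. the new NeuroSITE.
# All values and the 2 mm tolerance are illustrative assumptions.
PLACEMENTS = [
    # (head_support_xyz, neurosite_xyz)
    ((52.1, 38.4, 61.0), (52.6, 38.1, 60.7)),
    ((51.8, 38.9, 61.3), (51.5, 39.2, 61.8)),
    ((52.4, 38.2, 60.8), (52.0, 38.6, 61.1)),
]
TOLERANCE_MM = 2.0

def displacement_mm(a, b) -> float:
    """Euclidean distance between two 3-D points in millimetres."""
    return math.dist(a, b)

deviations = [displacement_mm(hs, ns) for hs, ns in PLACEMENTS]
equivalent = all(d <= TOLERANCE_MM for d in deviations)
print(f"max deviation: {max(deviations):.2f} mm, equivalent: {equivalent}")
```

In such a scheme the predicate system's placements serve as the reference ("ground truth"), and equivalence holds when every deviation stays within the pre-defined tolerance.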
8. The sample size for the training set
The document does not mention a training set, as this submission is for a hardware and software update to an existing device, not the development or retraining of a new AI/ML algorithm where large training datasets are typically disclosed. The software updates are likely based on engineering design and software development best practices, rather than AI model training.
9. How the ground truth for the training set was established
As no training set is mentioned in the context of AI/ML, this question is not applicable to the information provided. The "ground truth" for the software's functionality would have been established through software requirements, design specifications, and subsequent verification and validation testing against these pre-defined expectations.
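Verification against pre-defined requirements of this kind is often automated as a round-trip check: what the software displays must equal what was recorded and saved. A minimal sketch (the `TreatmentRecord` class and its fields are hypothetical, not the NeuroStar software's actual data model):

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical record of a patient's motor threshold (MT) and treatment
# location; the fields below are illustrative, not the actual data model.
@dataclass(frozen=True)
class TreatmentRecord:
    patient_id: str
    mt_percent: float   # motor threshold, % of max stimulator output
    location_mm: tuple  # coil position relative to the MT spot

def save(record: TreatmentRecord) -> str:
    """'Save' the record (here: serialize to JSON)."""
    return json.dumps(asdict(record))

def load(blob: str) -> TreatmentRecord:
    """'Display' path: deserialize back into a record."""
    d = json.loads(blob)
    return TreatmentRecord(d["patient_id"], d["mt_percent"], tuple(d["location_mm"]))

# Verification requirement: what is displayed must equal what was recorded.
original = TreatmentRecord("anon-001", 62.0, (5.5, 0.0, 0.0))
assert load(save(original)) == original
print("round-trip OK")
```

Here the "ground truth" is simply the pre-defined requirement itself: the test passes only if the save/display cycle preserves the recorded values exactly.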
Summary of what the document focuses on regarding "proof":
The core of the "proof" in this 510(k) submission, regarding the software and hardware changes, is the demonstration through non-clinical testing (engineering and human factors) that the new components perform equivalently to the existing ones in terms of their intended functions, specifically "recording, saving, and displaying the patient's MT and treatment locations" and achieving "treatment location equivalence." The conclusion is that these changes do not introduce new questions of safety or effectiveness. This is a common approach for minor device modifications where clinical performance has already been established by the predicate device.