The device is intended for the spatial positioning and orientation of instrument holders or tool guides to be used by neurosurgeons to guide standard neurosurgical instruments (biopsy needle, stimulation or recording electrode, endoscope). The device is indicated for any neurosurgical procedure in which the use of stereotactic surgery may be appropriate.
The ROSA Brain device is a robotized platform that guides neurosurgical instruments compatible with the diameter of the adaptors supplied by Medtech (for example, a biopsy needle). The device is composed of a compact robotic arm and a touch screen mounted on a robot stand. Different types of instruments may be attached to the robot arm and changed according to the requirements of the procedure to be completed. ROSA Brain is an image-guided device that assists the surgeon in planning the position of instruments or implants on preoperative or intraoperative images. It provides stable, accurate, and reproducible mechanical guidance in accordance with the plan.
The provided document is a 510(k) premarket notification for the ROSA Brain device. It does not describe a study proving the device meets specific acceptance criteria in the context of diagnostic or AI performance.
Instead, the document focuses on demonstrating substantial equivalence to a predicate device (ROSA Surgical Device K101797) based on technological characteristics and non-clinical performance data, primarily related to safety, electrical compatibility, and software validation.
Therefore, many of the requested items (e.g., a table of acceptance criteria vs. reported performance, sample sizes for test/training sets, expert ground truth, an MRMC study, standalone performance) are not applicable to, or extractable from, this specific document, as it pertains to a different type of device clearance (a stereotaxic instrument rather than an AI/diagnostic device).
Here's a breakdown of what can be extracted or inferred from the document:
1. Table of Acceptance Criteria and Reported Device Performance:
The document doesn't present a table of quantitative performance acceptance criteria with corresponding device results in the way it would for an AI or diagnostic device (e.g., sensitivity, specificity, accuracy thresholds). The "performance data" provided relates to compliance with standards and successful software verification/validation.
| Acceptance Criteria (related to general device function/safety) | Reported Device Performance |
| --- | --- |
| Biocompatibility requirements met (ISO 10993) | Met |
| Electrical safety (IEC 60601-1) | Complied |
| Electromagnetic compatibility (IEC 60601-1-2) | Complied |
| Software verification & validation (FDA guidance, IEC 62304) | Verification activities performed; software conforms to user needs and intended use |
| Mechanical and acoustic testing (general function) | Not explicitly detailed as criteria/results, but implied by the device description and predicate comparison |
2. Sample Size for the Test Set and Data Provenance:
Not applicable in the context of an AI/diagnostic test set. The document refers to "software tests" and "verification tests," but these are functional and safety tests, not performance evaluations against a labeled dataset for an AI algorithm.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:
Not applicable in the context of an AI/diagnostic test set. The device is a surgical guidance robot, not an AI diagnostic tool requiring expert-labeled ground truth for performance evaluation.
4. Adjudication Method for the Test Set:
Not applicable.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done:
No. The document explicitly states: "The 510(k) does not contain clinical information for the ROSA Brain" and "The 510(k) does not contain animal study test results for the ROSA Brain." An MRMC study would fall under clinical information.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done:
Not applicable for a surgical robotic system. The device is inherently "human-in-the-loop," assisting a surgeon. Its performance is evaluated through functional metrics and its ability to accurately position instruments, not as a standalone diagnostic algorithm.
7. The Type of Ground Truth Used:
For the software verification and validation, the "ground truth" would be the predefined functional requirements and expected outputs of the software modules. The "Conformity of software with the user needs and intended use of the device" serves as the ultimate validation.
For the mechanical and electrical performance, the ground truth is the established standards (e.g., IEC 60601-1) and the physical measurements/outputs of the device to ensure it meets specifications (e.g., accuracy of positioning).
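To make the mechanical "ground truth" concrete, a bench verification of positioning accuracy typically compares achieved tool-guide positions against planned targets on a phantom and checks each error against a specified tolerance. The sketch below is illustrative only: the 1.0 mm tolerance and the sample measurements are assumptions for demonstration, not values taken from the 510(k) submission.

```python
import math

# Illustrative tolerance; the actual specification would come from the
# device's design requirements, not from this document.
TOLERANCE_MM = 1.0

def positioning_error_mm(planned, achieved):
    """Euclidean distance between planned and achieved 3-D points, in mm."""
    return math.dist(planned, achieved)

def verify_accuracy(measurements, tolerance_mm=TOLERANCE_MM):
    """Return (pass/fail, worst-case error): pass only if every run is in tolerance."""
    errors = [positioning_error_mm(p, a) for p, a in measurements]
    return all(e <= tolerance_mm for e in errors), max(errors)

# Hypothetical phantom runs: (planned_xyz, achieved_xyz) in mm.
runs = [
    ((10.0, 20.0, 30.0), (10.2, 19.9, 30.1)),
    ((15.0, 25.0, 35.0), (14.8, 25.3, 34.9)),
]
passed, worst = verify_accuracy(runs)
print(passed, round(worst, 3))  # worst-case error across all runs
```

Here the "ground truth" is the planned target itself, and the acceptance criterion is the tolerance, mirroring how conformance to a specification (rather than an expert-labeled dataset) drives this kind of verification.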
8. Sample Size for the Training Set:
Not applicable. This device is not an AI model that undergoes "training" on a dataset in the conventional sense. The "training" in this context would be the development and testing of the software and hardware components following engineering principles.
9. How the Ground Truth for the Training Set Was Established:
Not applicable. As above, there's no "training set" in the context of an AI algorithm. The validation of the device's components and system is against design specifications, functional requirements, and established safety/performance standards. The "ground truth" for each specific test during development and verification would be defined by those specifications and the expected, correct behavior or output.
§ 882.4560 Stereotaxic instrument.
(a) Identification. A stereotaxic instrument is a device consisting of a rigid frame with a calibrated guide mechanism for precisely positioning probes or other devices within a patient's brain, spinal cord, or other part of the nervous system.

(b) Classification. Class II (performance standards).