510(k) Data Aggregation
(88 days)
The device is intended for the spatial positioning and orientation of instrument holders or tool guides to be used by trained neurosurgeons to guide standard neurosurgical instruments (biopsy needle, stimulation or recording electrode, endoscope). The device is indicated for any neurosurgical procedure in which the use of stereotactic neurosurgery may be appropriate.
The ROSA One Brain application device is a robotized image-guided device that assists the surgeon during brain surgeries. It provides guidance for any surgical instrument compatible with the diameter of the adaptors supplied by Medtech. It allows the user to plan the position of instruments on medical images and provides stable, accurate, and reproducible guidance in accordance with that planning. The device is composed of a robot stand with a compact robotic arm and a touch screen. Different types of instruments may be attached to the robot arm and changed according to the intended surgical procedure. For brain applications, these neurosurgical instruments (e.g. biopsy needle, stimulation or recording electrode, endoscope) remain applicable to a variety of procedures, as shown below in Figure 5.1 for the placement of recording electrodes. The touchscreen handles communication between the device and its user by indicating the actions to be performed at each step of the procedure. Guidance of instruments is derived from three-dimensional calculations based on the desired surgical planning parameters and the registered spatial position of the patient.
The provided text describes the ROSA ONE Brain application and its substantial equivalence to a predicate device. However, it does not include detailed acceptance criteria and a study proving the device meets those criteria in the way typically expected for an AI/ML medical device submission (e.g., performance metrics like sensitivity, specificity, AUC for a diagnostic algorithm).
Instead, this document focuses on demonstrating substantial equivalence based on engineering and quality control tests rather than clinical performance of an AI algorithm making diagnostic or treatment recommendations. The "performance data" section primarily discusses electrical safety, EMC, software verification, and biocompatibility, along with a statement about system applicative accuracy derived from the predicate device's testing.
Given the information provided, here's a breakdown of what is and is not available in the document regarding your request:
1. A table of acceptance criteria and the reported device performance
Based on the document, the primary "performance data" that could be interpreted as a performance criterion is the "System applicative accuracy."
| Acceptance Criteria (Implied from Predicate) | Reported Device Performance (Inherited from Predicate) |
|---|---|
| Robot arm positioning accuracy < 0.75 mm RMS | < 0.75 mm RMS |
| Device applicative accuracy < 2 mm | < 2 mm |
Note: The document explicitly states: "Testing were performed on the predicate device. The subject devices were evaluated against the predicate testing and determined to be substantially equivalent." This implies that the current device is expected to meet these same performance levels, rather than providing new, independent test results for the current device's accuracy.
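As an illustration only: the document does not describe the actual Medtech/Zimmer measurement procedure, but an RMS positioning-accuracy criterion like the one in the table is typically computed from the Euclidean distances between planned and measured target positions on a bench. A minimal sketch, assuming hypothetical bench data:

```python
import numpy as np

def rms_positioning_error(planned_mm: np.ndarray, measured_mm: np.ndarray) -> float:
    """RMS of Euclidean distances between planned and measured 3-D positions (mm)."""
    errors = np.linalg.norm(planned_mm - measured_mm, axis=1)
    return float(np.sqrt(np.mean(errors ** 2)))

# Hypothetical bench data: planned targets vs. positions measured by an
# external reference system (e.g. a coordinate-measuring machine).
planned = np.array([[10.0, 20.0, 30.0],
                    [15.0, 25.0, 35.0],
                    [20.0, 30.0, 40.0]])
measured = planned + np.array([[ 0.2, -0.1,  0.1],
                               [-0.3,  0.2,  0.0],
                               [ 0.1,  0.1, -0.2]])

rms = rms_positioning_error(planned, measured)
print(f"RMS positioning error: {rms:.3f} mm")
# An acceptance check against the table's criterion would then be: rms < 0.75
```

The function names and data here are invented for illustration; only the < 0.75 mm RMS threshold comes from the document.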
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
This information is not available in the provided text. The accuracy testing was "in vitro" and performed on the predicate device, not necessarily on a "test set" of clinical cases or data in the context of an AI/ML algorithm.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
This information is not available. The document does not describe the establishment of ground truth by experts in the context of clinical image interpretation or AI performance evaluation. The accuracy testing mentioned is an engineering performance bench test.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
This information is not available. Adjudication methods are typically relevant for clinical studies involving human readers or expert consensus on clinical data, which is not described here.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI versus without AI assistance
This information is not available. The document explicitly states "Clinical data were not required to support the safety and effectiveness of ROSA ONE Brain application. All validation was performed based on non-clinical performance tests." Therefore, an MRMC comparative effectiveness study was not performed.
6. If a standalone (i.e. algorithm-only, without human-in-the-loop) performance study was done
A standalone performance test in the typical AI/ML sense (e.g., evaluating an algorithm's diagnostic accuracy on images) was not done or at least not described. The "System applicative accuracy" is a standalone test of the robot's physical positioning capabilities, not an AI algorithm's interpretive performance.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
For the "System applicative accuracy," the ground truth would be precise physical measurements or a known, highly accurate reference system on a test bench. It is not based on expert consensus, pathology, or outcomes data in a clinical sense.
8. The sample size for the training set
This information is not available. The device describes a "robotized image-guided device" that assists surgeons; it does not explicitly mention an AI algorithm that is "trained" on a dataset in the way a diagnostic AI would be. The software verification and validation are for the overall embedded software system, not specifically for an AI model's training.
9. How the ground truth for the training set was established
This information is not available, for the same reasons as point 8. No specific AI training set or its ground truth establishment is described in this document.
Summary of the Study that Proves the Device Meets Acceptance Criteria:
The "study" referenced in the document is a series of non-clinical performance tests, primarily conducted on the predicate device, and the new device (ROSA ONE 3.1.3.2) was evaluated for substantial equivalence against these established tests and performance levels.
- System Applicative Accuracy in vitro testing: This was a performance bench test designed to evaluate the physical accuracy of the robotic arm's positioning. The results stated were "< 0.75 mm RMS" for robot arm positioning accuracy and "< 2 mm" for device applicative accuracy. These tests were performed on the predicate device, and the subject device was deemed substantially equivalent. The specific methodology would involve measuring the robot's ability to reach planned targets with precision using specialized measurement tools, but the details of the "Medtech/Zimmer robotics procedures" are not provided.
- Electrical safety and electromagnetic compatibility (EMC): Testing against IEC 60601-1 and IEC 60601-1-2 standards.
- Biocompatibility testing: Evaluation according to ISO 10993-1, including cytotoxicity, sensitization, irritation, and acute systemic toxicity performed on the predicate device.
- Software Verification and Validation Testing: Conducted according to FDA guidance and IEC 62304 standards, with the software designated as "major" concern level. This involved code inspections, unit tests, integration tests, and verification tests against requirements, followed by validation against user needs.
- Cleaning and Sterilization Validation: Performed according to FDA guidance and ISO/AAMI standards.
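To make the distinction between the two accuracy figures concrete: the applicative accuracy criterion (< 2 mm) covers the full image-to-robot chain per target, so a bench protocol would plausibly check every individual target error rather than an aggregate. A minimal sketch, with invented phantom data (the document does not describe the real protocol, fixtures, or instruments):

```python
import numpy as np

def applicative_errors_mm(planned: np.ndarray, reached: np.ndarray) -> np.ndarray:
    """Per-target Euclidean error (mm) between planned points and the
    positions actually reached through the full image-to-robot chain."""
    return np.linalg.norm(planned - reached, axis=1)

# Hypothetical phantom targets (mm) and reached positions.
planned = np.array([[10.0, 10.0, 10.0],
                    [40.0, 20.0, 60.0],
                    [70.0, 50.0, 30.0],
                    [25.0, 80.0, 45.0]])
reached = planned + np.array([[ 0.5, -0.3,  0.2],
                              [-0.4,  0.1,  0.6],
                              [ 0.2,  0.2, -0.1],
                              [-0.6,  0.4,  0.3]])

errors = applicative_errors_mm(planned, reached)
print(f"max error: {errors.max():.2f} mm")
print(f"pass (< 2 mm): {bool((errors < 2.0).all())}")
```

Checking the maximum (or all) per-target errors, rather than only an RMS, is the stricter reading of a "< 2 mm" criterion; whether the actual protocol did so is not stated in the document.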
In essence, the document confirms that the ROSA ONE Brain application (v.3.1.3.2), as a stereotactic instrument, relies on demonstrating its performance through engineering and quality control tests, showing substantial equivalence to a previously cleared predicate device, rather than a clinical study evaluating an AI's interpretive performance.