
510(k) Data Aggregation

    K Number: K231103
    Manufacturer:
    Date Cleared: 2023-07-20 (92 days)
    Product Code:
    Regulation Number: 882.4560

    Reference & Predicate Devices
    Predicate For: N/A
    Intended Use

    The device is intended for the spatial positioning and orientation of instrument holders or tool guides to be used by trained neurosurgeons to guide standard neurosurgical instruments (biopsy needle, stimulation or recording electrode, endoscope). The device is indicated for any neurosurgical procedure in which the use of stereotactic neurosurgery may be appropriate.

    Device Description

    The ROSA One Brain application device is a robotized image-guided device that assists the surgeon during brain surgeries. It provides guidance of any surgical instruments compatible with the diameter of the adaptors supplied by Medtech. It allows the user to plan the position of instruments or implants on medical images and provides stable, accurate, and reproducible guidance in accordance with the planning. The device is composed of a robot stand with a compact robotic arm and a touchscreen positioned close to the operating table. Different types of instruments may be attached to the robot arm and changed according to the intended surgical procedure. For brain applications, these neurosurgical instruments (e.g., biopsy needle, stimulation or recording electrode, endoscope) remain applicable for a variety of procedures, as shown in Figure 1 for the placement of recording electrodes. The touchscreen handles communication between the device and its user by indicating the actions to be performed at each step of the procedure. Adequate guidance of instruments is obtained from three-dimensional calculations based on the desired surgical planning parameters and registration of the spatial position of the patient.

    AI/ML Overview

    The provided text describes the ROSA ONE Brain application, a robotized image-guided device for neurosurgery. It's an FDA 510(k) submission seeking substantial equivalence to a previously cleared version of the same device. The submission focuses on non-clinical performance data to demonstrate this equivalence.

    Here's an analysis, based on the provided text, of the acceptance criteria and the study offered to show the device meets them:

    1. A table of acceptance criteria and the reported device performance

    The document doesn't explicitly state "acceptance criteria" for each test in a formal table with pass/fail. However, it does outline the tests performed and the results, implying that the predicate device's performance levels define the acceptance criteria for the new version. The most specific performance metric provided is for accuracy.

    Test: System applicative accuracy (in vitro)
        Acceptance criteria (implied): Robot arm positioning accuracy <0.75 mm RMS; device applicative accuracy <2 mm (based on predicate device testing)
        Reported device performance: Robot arm positioning accuracy <0.75 mm RMS; device applicative accuracy <2 mm

    Test: Electrical safety and EMC
        Acceptance criteria (implied): Compliance with IEC 60601-1 and IEC 60601-1-2 standards (based on predicate device)
        Reported device performance: Complies with IEC 60601-1 and IEC 60601-1-2 standards

    Test: Biocompatibility testing
        Acceptance criteria (implied): Compliance with FDA guidance and ISO 10993-1 (cytotoxicity, sensitization, irritation, acute systemic toxicity performed on predicate device)
        Reported device performance: Requirements met; evaluated against predicate testing

    Test: Software Verification and Validation Testing
        Acceptance criteria (implied): Compliance with FDA guidance "General Principles of Software Validation" and IEC 62304:2015 for "major" level of concern software
        Reported device performance: Demonstrated substantially equivalent performance

    Test: Cleaning and Sterilization Validation
        Acceptance criteria (implied): Compliance with FDA guidance "Reprocessing Medical Devices..." and standards such as ISO 17665-1, ISO 17664, ANSI/AAMI ST79, and AAMI TIR 12 (based on predicate device)
        Reported device performance: Evaluated against predicate testing
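To make the accuracy criteria concrete, the check behind the first row can be sketched as follows. This is a hedged illustration only: the coordinate values are invented, and the actual Medtech/Zimmer Biomet bench protocol is not described in the submission beyond the two thresholds.

```python
import math

# Hypothetical bench measurements (mm): planned vs. achieved robot-arm tip
# positions in a phantom's coordinate frame. Values are invented for
# illustration; they are not from the 510(k) submission.
planned  = [(10.0, 20.0, 30.0), (15.0, 25.0, 35.0), (12.0, 22.0, 32.0)]
achieved = [(10.2, 20.1, 29.8), (15.1, 24.9, 35.3), (11.8, 22.2, 32.1)]

def euclidean(p, q):
    """Straight-line distance between two 3-D points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

errors = [euclidean(p, q) for p, q in zip(planned, achieved)]

# RMS positioning error: square root of the mean squared per-target error.
rms = math.sqrt(sum(e ** 2 for e in errors) / len(errors))

# Acceptance checks mirroring the implied criteria in the table above.
arm_ok         = rms < 0.75          # robot-arm positioning accuracy (mm RMS)
applicative_ok = max(errors) < 2.0   # worst-case applicative accuracy (mm)

print(f"RMS error: {rms:.3f} mm, arm_ok={arm_ok}, applicative_ok={applicative_ok}")
```

With these invented measurements the RMS error is about 0.31 mm, so both implied criteria would be met.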

    2. Sample size used for the test set and the data provenance

    • System applicative accuracy: The text states "Performance bench Testing in compliance with internal Medtech/Zimmer Biomet robotics procedures." No specific sample size (number of tests or cases) for this in vitro testing is mentioned. The data provenance is internal company testing.
    • For other tests (Electrical safety, EMC, Biocompatibility, Cleaning- and Sterilization Validation), the testing was largely performed on the predicate device. The subject device (ROSA ONE Brain application v. 3.1.7.0) was then evaluated against these predicate testing results, implying a comparison or re-evaluation rather than new, extensive testing on a new sample set for clinical endpoints.
    • For Software Verification and Validation Testing, testing was performed on the subject device. No specific sample size (number of test cases or runs) is provided.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    This information is not provided in the document. The studies listed are non-clinical (bench testing, software testing, electrical safety, biocompatibility, cleaning/sterilization), which do not typically involve experts establishing ground truth in the same way clinical studies with image interpretation or patient outcomes do.

    4. Adjudication method for the test set

    This information is not applicable as the studies are non-clinical performance and engineering tests, not involving human interpretation of data that would require an adjudication method.

    5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI assistance versus without it

    No MRMC comparative effectiveness study was done. The document explicitly states: "Clinical data were not required to support the safety and effectiveness of ROSA ONE Brain application. All validation was performed based on non-clinical performance tests." Therefore, there is no information about human reader improvement with or without AI assistance.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    While the device is a "robotized image-guided device," the performance data presented is for the entire system's accuracy and various engineering aspects. The "Software Verification and Validation Testing" does cover the algorithm's performance within the broader software system. The reported "Robot arm positioning accuracy <0.75 mm RMS; device applicative accuracy <2 mm" can be considered a standalone performance metric for the device's mechanical and software-guided capabilities, demonstrating its ability to meet a precision target independently of a human's final surgical action. However, it is not "algorithm-only" performance in the sense of an AI diagnostic tool.

    7. The type of ground truth used

    • System applicative accuracy: The ground truth would be precise, known physical measurements and positions in a controlled bench test environment.
    • Electrical safety and EMC: Ground truth is defined by the absolute limits and requirements of the IEC 60601-1 and IEC 60601-1-2 standards.
    • Biocompatibility testing: Ground truth is established by the specified biological responses (e.g., cell viability for cytotoxicity, skin reaction for irritation) determined according to ISO 10993-1.
    • Software Verification and Validation Testing: Ground truth is defined by the software requirements and design specifications, against which the software's behavior is verified and validated.
    • Cleaning- and Sterilization Validation: Ground truth is defined by the absence of viable microorganisms or acceptable residual soil levels, determined according to standards like ISO 17665-1, ISO 17664, ANSI/AAMI ST79, and AAMI TIR 12.
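For the software V&V bullet, "ground truth defined by the requirements" means each test compares observed behavior against a value the specification fixes in advance. A minimal sketch, assuming a hypothetical requirement (invented here, not from the submission) that the planned trajectory length reported to the user must match the entry-to-target distance within 0.01 mm:

```python
import math

def trajectory_length(entry, target):
    """Geometric ground truth: length of the straight planned path (mm)."""
    return math.dist(entry, target)

def verify_trajectory_length_requirement(entry, target, reported_mm):
    """Pass iff the software-reported length matches the spec-defined truth."""
    expected = trajectory_length(entry, target)
    return abs(expected - reported_mm) <= 0.01  # tolerance from the (invented) requirement

# Example verification run with invented coordinates; the true distance is 13.0 mm.
entry, target = (0.0, 0.0, 0.0), (3.0, 4.0, 12.0)
print(verify_trajectory_length_requirement(entry, target, 13.0))   # True
print(verify_trajectory_length_requirement(entry, target, 13.5))   # False
```

The point is that no expert adjudication is needed: the requirement itself supplies the expected value, which is why such testing differs from clinical ground-truthing.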

    8. The sample size for the training set

    This information is not provided and is not applicable to this submission. The device is a robotized stereotaxic instrument, not an AI/ML device that requires a distinct "training set" for model development. The software verification and validation are against pre-defined requirements, not derived from a training dataset.

    9. How the ground truth for the training set was established

    This question is not applicable as there is no mention of a training set for an AI/ML model in this submission.
