
510(k) Data Aggregation

    K Number
    K183325
    Device Name
    Modus Nav
    Date Cleared
    2019-07-14

    (226 days)

    Product Code
    Regulation Number
    882.4560
    Reference & Predicate Devices
    Reference Devices:

    K180394, K033621, K160523

    Intended Use

    Modus Nav is intended as a planning and intraoperative guidance system to enable open and percutaneous computer assisted surgery. The system is indicated for medical conditions requiring neurosurgical cranial procedures where the use of computer assisted planning and surgery may be appropriate. The system can be used for intra-operative guidance where a reference to a rigid anatomical structure can be identified. The user should consult the "Accuracy Characterization" section of the User Manual to assess if the accuracy of the system is suitable for their needs.

    Device Description

    The subject device, Modus Nav, is a modified version of its predicate, BrightMatter Guide with SurfaceTrace Registration. The system is a surgical planning and image guided surgical system that enables open or percutaneous computer-assisted cranial surgery. The system uses optical 3D tracking technology to display the location and orientation of tracked (also known as navigated) surgical instruments relative to the pre-operative scan images of the patient. The system consists of a software application installed on a computer, tracked surgical instruments, and accessories to enable the tracking of those instruments.

    The planning functionality of the device is provided by an already cleared device called BrightMatter Plan 1.6.0 (K180394). The remaining functionality of the system can be broadly grouped into data preparation, registration, and visualization of surgical tools. Data preparation and registration are performed during the initial stages of a surgical procedure, and visualization of the tools is performed as needed during the surgical procedure.

    General use of the system as an image guided surgical tool is composed of the following key steps:

    • Equipment setup
    • Plan selection and data preparation
    • Patient registration
    • Tool localization and visualization
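
The document does not describe the internals of the SurfaceTrace registration step, but point-based rigid registration of this kind is conventionally solved with the Kabsch/Procrustes method: find the rotation and translation that best align measured landmark points to their counterparts in the pre-operative scan. The sketch below is a generic illustration of that technique, not Synaptive's implementation:

```python
import numpy as np

def rigid_register(source, target):
    """Least-squares rigid transform (rotation R, translation t) mapping
    source points onto target points via the Kabsch/Procrustes method.
    Both inputs are Nx3 arrays of corresponding points."""
    src_c = source.mean(axis=0)
    tgt_c = target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against a reflection
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Toy check with synthetic points: recover a known 30-degree rotation
# about z and a known translation (values are illustrative only).
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([10.0, -5.0, 2.0])
pts = np.random.default_rng(0).uniform(-50, 50, size=(6, 3))
R_est, t_est = rigid_register(pts, pts @ R_true.T + t_true)
```

With noise-free correspondences the recovered transform matches the true one to numerical precision; in practice the residual after registration is one contributor to the overall navigation accuracy characterized later in the submission.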

    An optical Tracking Camera provides the position and orientation of the tools with respect to the tracking origin. The navigated surgical tools are tracked using single-use passive reflective markers (K033621) that are attached to the surgical tools prior to each surgical procedure. An external display can be used by the surgical staff if needed, given that the Tracking Camera mounted on a cart maintains a line of sight between the Cranial Reference and the Tracked Surgical Tools. Both the User Cart (also known as Navigation Cart) and Auxiliary Carts are placed outside the sterile field.
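
Conceptually, displaying a tracked tool against the pre-operative scan amounts to chaining homogeneous transforms: the camera's pose of the tool, the camera's pose of the cranial reference, and the registration that relates the reference to image space. The frame names and conventions below are illustrative assumptions, not taken from the submission:

```python
import numpy as np

def hom(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def tool_tip_in_image(T_cam_tool, T_cam_ref, T_image_ref, tip_in_tool):
    """Map a tool-tip point from the tool frame into pre-operative image
    coordinates. T_a_b maps frame-b coordinates into frame a; the frame
    naming and registration direction are hypothetical conventions."""
    tip_h = np.append(tip_in_tool, 1.0)   # homogeneous coordinates
    T_ref_cam = np.linalg.inv(T_cam_ref)  # camera -> cranial reference
    tip_image = T_image_ref @ T_ref_cam @ T_cam_tool @ tip_h
    return tip_image[:3]

# Toy poses: identity rotations, pure translations (illustrative values).
I3 = np.eye(3)
tip = tool_tip_in_image(
    T_cam_tool=hom(I3, [1.0, 2.0, 3.0]),     # tracked tool pose in camera frame
    T_cam_ref=hom(I3, [0.0, 0.0, 10.0]),     # cranial reference pose in camera frame
    T_image_ref=hom(I3, [100.0, 0.0, 0.0]),  # registration: reference -> image
    tip_in_tool=np.array([0.0, 0.0, 0.5]),   # tip offset in the tool frame
)
```

Because every pose is expressed relative to the cranial reference rather than the camera, the camera or cart can be repositioned mid-procedure without invalidating the registration, as long as line of sight is maintained.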

    The primary purpose of this 510(k) submission is to introduce new navigated tools such as the Short Pointer, Shunt Stylet, and the corresponding Calibration Device. It also introduces new software features to support the navigation of these tools, the ability to navigate with Synaptive's Trackable Suction tools, and minor workflow improvements to facilitate the surgical procedure.

    AI/ML Overview

    The provided text describes the Modus Nav system, a surgical planning and image-guided surgical system for neurosurgical cranial procedures. Below is a summary of the acceptance criteria and the study that proves the device meets them, based on the provided document.

    Acceptance Criteria and Reported Device Performance

    The document primarily focuses on demonstrating substantial equivalence to a predicate device (BrightMatter Guide with SurfaceTrace Registration) rather than setting specific clinical performance metrics with target values for new device features. The acceptance criteria are largely centered around functional verification, safety, and equivalence to the predicate.

    | Acceptance Criteria Category | Specific Activity/Test | Reported Device Performance / Documentation Result |
    |---|---|---|
    | Software Verification | Functional verification of integrated software system | Acceptance criteria for all SRS (Software Requirements Specification) items were verified as met. Previously identified errors were tested and verified to no longer occur. |
    | Algorithm Pipeline Verification | Automated performance verification of the core data processing facility (algorithm pipeline) | Performance verified using known data sets, or "truth data sets", to evaluate the image processing pipeline and its outputs. |
    | System Requirements Verification | Biocompatibility testing (bacterial endotoxins, cytotoxicity, irritation/intracutaneous toxicity, sensitization, material-mediated pyrogenicity, acute systemic toxicity, hemocompatibility, extractables) | All biocompatibility tests passed, demonstrating the material is non-endotoxic, non-cytotoxic, non-irritating, non-sensitizing, non-pyrogenic, non-toxic, and non-hemolytic. |
    | | Cleaning validation | Testing passed all acceptance criteria (re-usable tools, per ISO 15883-1). |
    | | Sterilization validation | Testing passed all acceptance criteria (re-usable tools, per AAMI TIR12, AAMI TIR30, ANSI AAMI ISO 17665-1, ANSI AAMI ISO TIR17665-2). |
    | | Medical electrical system safety | External testing against ANSI AAMI IEC ES60601-1 verified electrical and mechanical safety. |
    | | Electromagnetic compatibility | External testing against IEC 60601-1-2 verified operation within safe emission and interference limits. |
    | System Validation | User acceptance testing by intended user group | All acceptance criteria met. |
    | | Human factors validation | All acceptance criteria met (tested per IEC ANSI AAMI 62366 and FDA guidance). |
    | Accuracy Characterization | System accuracy with an accuracy measurement phantom | The Modus Nav system is accurate to within 2 mm and 2 degrees of the physical tip of the tracked tool, equivalent to the predicate device. |
    | Latency Testing | Comparison of video latency with the predicate device | Deemed equivalent to the predicate device. |
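
The 2 mm / 2 degree accuracy criterion implies comparing navigated tip positions and tool orientations against CMM-measured ground truth on the phantom. The document gives no protocol details, so the helper and the measurement values below are purely hypothetical, illustrating only how such a comparison could be scored:

```python
import numpy as np

def accuracy_report(nav_tips, cmm_tips, nav_dirs, cmm_dirs):
    """Per-measurement positional error (mm) and angular error (degrees)
    between navigated readings and CMM ground truth. Tips are Nx3 points;
    directions are Nx3 unit vectors along the tool axis."""
    pos_err = np.linalg.norm(nav_tips - cmm_tips, axis=1)
    cosang = np.clip(np.sum(nav_dirs * cmm_dirs, axis=1), -1.0, 1.0)
    ang_err = np.degrees(np.arccos(cosang))
    return pos_err, ang_err

# Hypothetical phantom measurements (illustrative values, not from the study).
cmm_tips = np.array([[0.0, 0.0, 0.0], [50.0, 0.0, 0.0], [0.0, 50.0, 0.0]])
nav_tips = cmm_tips + np.array([[0.5, 0.0, 0.0], [0.0, 1.2, 0.0], [0.0, 0.0, 0.8]])
cmm_dirs = np.tile([0.0, 0.0, 1.0], (3, 1))
one_deg = np.radians(1.0)
nav_dirs = np.tile([0.0, np.sin(one_deg), np.cos(one_deg)], (3, 1))
pos_err, ang_err = accuracy_report(nav_tips, cmm_tips, nav_dirs, cmm_dirs)
```

Under the stated acceptance bound, every measurement would need `pos_err <= 2.0` and `ang_err <= 2.0`; in these toy numbers the worst case is 1.2 mm and 1.0 degree.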

    Additional Information Regarding the Study:

    1. Sample size used for the test set and the data provenance:

      • Accuracy Characterization: An "accuracy measurement phantom of similar volume to an adult head" was used. The specific number of measurements or trials conducted on this phantom is not specified.
      • Algorithm pipeline verification: "Known data sets" or "truth data sets" were used. The size, type, or provenance (country of origin, retrospective/prospective) of these datasets is not detailed.
      • For other tests like software verification, human factors, and user acceptance, the "test set" refers to the specific test cases, scenarios, or participants involved, but detailed numerical sample sizes are not provided.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • Accuracy Characterization: Ground truth for the accuracy phantom was "obtained using a Coordinate Measurement Machine (CMM)." This implies a metrology standard rather than human experts.
      • Algorithm pipeline verification: "Expert review of output generated by the pipeline" was used. However, the number of experts and their qualifications are not specified.
      • User acceptance testing and Human Factors Validation: These tests were conducted "by intended user in a simulated use environment" and "by intended users," respectively. While these "users" would be qualified medical professionals, their specific number and detailed qualifications are not provided.
    3. Adjudication method for the test set:

      • The document primarily describes verification and validation activities rather than studies requiring adjudicator consensus (like clinical trials for sensitivity/specificity).
      • For the "Algorithm pipeline verification," it mentions "expert review of output," but does not detail an adjudication method (e.g., 2+1, 3+1).
      • For other tests, the "documentation results" simply state that acceptance criteria were met, implying direct pass/fail assessment rather than a multi-reader adjudication process.
    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done:

      • No MRMC comparative effectiveness study was done. The document explicitly states: "This technology is not new; therefore, a clinical study was not considered necessary prior to release. The substantial equivalence of the device is supported by the nonclinical testing."
    5. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:

      • Algorithm pipeline verification: Yes, an "Automated performance verification of the core data processing facility of the software (known as 'algorithm pipeline'). Uses known data sets and expert review of output generated by the pipeline at various stages of processing." This suggests a standalone evaluation of the algorithm's output against known data.
      • Accuracy Characterization: This would also be considered a standalone performance test of the system's accuracy, without human interpretation of images directly impacting the accuracy measurement.
    6. The type of ground truth used:

      • Accuracy Characterization: Ground truth was established using a "Coordinate Measurement Machine (CMM)" on an accuracy phantom, which is a physical measurement standard.
      • Algorithm pipeline verification: "Known data sets or 'truth data sets'" and "expert review" were used to establish ground truth for algorithm outputs. The nature of these "truth data sets" (e.g., expert consensus, pathology, simulated data) is not specified.
      • Software verification: Ground truth refers to the defined "software requirements specifications (SRS) items."
    7. The sample size for the training set:

      • The document does not mention any training sets for machine learning models. The device's functionality as described primarily involves image-guided navigation based on optical tracking and pre-operative scans, rather than an AI/ML component requiring a separate training set for classification or detection tasks. The reference to "algorithm pipeline" suggests image processing, but no specific machine learning training is detailed.
    8. How the ground truth for the training set was established:

      • Not applicable, as no training set for machine learning is explicitly mentioned or described.