510(k) Data Aggregation

    Device Name: SureTune4 Software
    K Number: DEN210003
    Date Cleared: 2021-08-23 (201 days)
    Product Code:
    Regulation Number: 882.5855
    Type: Direct
    Reference & Predicate Devices: N/A

    Intended Use

    The SureTune4 Software is indicated to assist medical professionals in planning the programming of stimulation for patients receiving approved Medtronic deep brain stimulation (DBS) devices.

    Device Description

    The SureTune4 Software is intended to assist medical professionals in planning the programming of deep brain stimulation (DBS) by visualizing the Volume of Neural Activation (VNA) relative to patient anatomy. It is used to visualize patient-specific information within the patient's anatomy. Integrated preoperative and postoperative magnetic resonance imaging (MRI), O-arm™, and computed tomography (CT) images are uploaded to SureTune4 and can be navigated in multiple 2D projections and 3D reconstructions. Medtronic DBS lead models are positioned at the corresponding imaging artifacts, and potential stimulation settings and electrode configurations are entered. The SureTune4 Software mathematically combines a finite element (FE)-based electric field model of the lead with an axon-based neural activation model to translate potential stimulation settings and electrode configurations into a visualized VNA field that indicates the shape and the area or volume of anatomy that will be activated by the stimulation.
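
    The submission does not disclose the details of Medtronic's field or activation models. The following is only a minimal, hypothetical sketch (Python/NumPy) of the two-stage structure the description names: a linear FE solution scaled by the programmed amplitude, with a driving term compared against an axon-model-derived threshold to mark voxels as activated. The function names, the driving term, and the threshold are assumptions, not the SureTune4 implementation.

```python
import numpy as np

def estimate_vna(unit_potential, voxel_mm, amplitude_v, activation_threshold):
    """Hypothetical VNA-style estimate (not Medtronic's algorithm).

    unit_potential       : 3D array of FE-computed potential (V) per 1 V of amplitude
    voxel_mm             : voxel edge length in millimetres
    amplitude_v          : programmed stimulation amplitude in volts
    activation_threshold : scalar threshold on the driving term, standing in for
                           an axon-model-derived activation criterion
    """
    potential = amplitude_v * unit_potential  # scale the linear FE solution
    # Second spatial difference along one axis as a stand-in "activating
    # function" driving term for axonal activation.
    driving = np.gradient(np.gradient(potential, voxel_mm, axis=2), voxel_mm, axis=2)
    return driving > activation_threshold

def vna_volume_mm3(activated, voxel_mm):
    """Volume of the activated region in cubic millimetres."""
    return float(activated.sum()) * voxel_mm ** 3
```

    A real model would also account for pulse width, electrode configuration, tissue conductivity, and fibre orientation; the point here is only the structure of an FE electric field model feeding an axon-based activation criterion.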

    The SureTune4 software is used to do the following:

    • Import MR, O-arm™, and CT patient images over a DICOM network or from physical media (hard drive, USB drive, CD, or DVD); see the import sketch after this list
    • Import DICOM archives from StealthStation™ S7™ systems with Cranial 3.x software, StealthStation™ S8 Cranial software systems, and SureTune4 systems over a DICOM network
    • Combine MR, O-arm, and CT images for more detail
    • Superimpose an anatomical atlas to better understand the position of structures of interest relative to a patient's anatomy
    • Manually segment structures of interest to highlight particular brain structures
    • Localize graphical Medtronic DBS lead models (based on preoperative imaging)
    • Enter electrophysiological annotations
    • Visualize VNA fields relative to structures of interest in the patient anatomy or lead position
    • Create patient-specific stimulation plans for DBS programming
    • Generate reports that summarize stimulation plans for patients
    • Export patient sessions to SureTune4 XLS spreadsheets (in Microsoft™ Excel format)
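
    The document does not describe the import mechanics. Purely as an illustration of reading a DICOM series from physical media, here is a minimal sketch using the open-source pydicom library; the folder path is hypothetical, and pydicom is not claimed to be what SureTune4 uses.

```python
from pathlib import Path
import pydicom

# Minimal sketch: load all DICOM files from a folder (e.g., copied from a USB
# drive) and order the slices by InstanceNumber to reconstruct a series.
series_dir = Path("/media/usb/patient_mr_series")  # hypothetical path

slices = [pydicom.dcmread(p) for p in series_dir.glob("*.dcm")]
slices.sort(key=lambda ds: int(ds.InstanceNumber))

print(f"Loaded {len(slices)} slices, modality: {slices[0].Modality}")
```
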
    AI/ML Overview

    The SureTune4 Software, planning software for programming approved Medtronic deep brain stimulation (DBS) devices, underwent a user needs validation study and a formative usability evaluation to demonstrate its conformity with regulatory requirements. No clinical performance testing was provided.

    1. Acceptance Criteria and Reported Device Performance

    The acceptance criterion for the user needs validation study was an average score of 3 or higher (on a 1-5 scale) for each user need. The device met this criterion.

    Acceptance Criteria: Average score of 3 or higher for each user need in the User Needs Validation Study.
    Reported Device Performance: All user needs received an average rating of 3 or greater on a 1-5 scale.

    Acceptance Criteria: Passing results in the Formative Usability Evaluation, which identified use difficulties and assessed safety and efficacy of use (task completion, trends in use difficulties, qualitative feedback, and participant safety scores).
    Reported Device Performance: The formative usability evaluation "Passed." Some minor use difficulties were observed but were appropriately mitigated.
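
    As a concrete illustration of how the stated criterion operates (the ratings below are invented, not data from the submission): each user need's ratings are averaged across participants, and the study passes only if every average is at least 3 on the 1-5 scale.

```python
# Illustrative only: ratings are made up, not from the SureTune4 submission.
ratings = {
    "Visualize VNA relative to patient anatomy": [4, 5, 4, 3, 4, 5, 4, 4],
    "Plan stimulation settings for DBS programming": [3, 4, 4, 4, 3, 5, 4, 4],
}

averages = {need: sum(r) / len(r) for need, r in ratings.items()}
study_passes = all(avg >= 3.0 for avg in averages.values())

for need, avg in averages.items():
    print(f"{need}: mean {avg:.2f} ({'pass' if avg >= 3.0 else 'fail'})")
print("Acceptance criterion met for all user needs:", study_passes)
```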

    2. Sample Size and Data Provenance

    User Needs Validation Study:

    • Sample Size: 8 subjects (5 Neurologists, 3 Neurosurgeons)
    • Data Provenance: Not explicitly stated, but implied to be prospective for the purpose of the study. The country of origin is not specified.

    Formative Usability Evaluation:

    • Sample Size: 15 subjects (8 Neurosurgeons, 7 Neurologists)
    • Data Provenance: Not explicitly stated, but implied to be prospective for the purpose of the study. The country of origin is not specified.

    3. Number of Experts and Qualifications for Ground Truth

    For both the User Needs Validation Study and the Formative Usability Evaluation, the "ground truth" was established by the participating medical professionals' subjective ratings and observations of the software's performance and usability.

    • Number of Experts:
      • User Needs Validation Study: 8 (5 Neurologists, 3 Neurosurgeons)
      • Formative Usability Evaluation: 15 (8 Neurosurgeons, 7 Neurologists)
    • Qualifications of Experts: Neurologists and Neurosurgeons. The level of experience (e.g., years of practice) is not specified.

    4. Adjudication Method

    Neither study described an explicit adjudication method. The User Needs Validation Study used an average rating, implying that individual ratings were aggregated without a formal adjudication process to resolve discrepancies between subjects. The Formative Usability Evaluation involved observing use difficulties and collecting qualitative feedback, which would likely be summarized rather than formally adjudicated.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No MRMC comparative effectiveness study was mentioned. The studies focused on user needs and usability of the software itself rather than comparing human reader performance with and without AI assistance for diagnosing or interpreting images.

    6. Standalone Performance Study

    No standalone performance study of the algorithm (i.e., algorithm-only performance without a human in the loop) was explicitly described in the provided text in the context of clinical performance. The software verification and validation (V&V) testing included "Volume of neural activation generation and visualization" and "Patient image fusion (registration)," which suggests algorithmic evaluations, but these were part of the software development and verification process, not a standalone clinical performance study as typically understood (e.g., one measuring diagnostic accuracy). The document states, "Clinical performance testing was not provided for the SureTune4."
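
    The text does not say how fusion accuracy was measured. One common way such a verification check can be structured is a target registration error (TRE) comparison against a prespecified limit; the sketch below is hypothetical, and the landmark coordinates and the 1.0 mm limit are assumptions.

```python
import numpy as np

def target_registration_error(fixed_pts, moving_pts, transform):
    """Mean Euclidean distance (mm) between fixed landmarks and moving
    landmarks mapped through a 4x4 homogeneous registration transform."""
    moving_h = np.hstack([moving_pts, np.ones((len(moving_pts), 1))])
    mapped = (transform @ moving_h.T).T[:, :3]
    return float(np.linalg.norm(fixed_pts - mapped, axis=1).mean())

# Hypothetical landmark pairs (mm) and registration result.
fixed = np.array([[10.0, 22.0, 5.0], [40.0, 18.0, 30.0], [25.0, 60.0, 12.0]])
moving = np.array([[10.4, 21.7, 5.2], [39.8, 18.3, 29.6], [25.1, 60.2, 12.3]])
T = np.eye(4)  # identity for illustration; a real check uses the computed fusion

tre = target_registration_error(fixed, moving, T)
print(f"TRE = {tre:.2f} mm; within assumed 1.0 mm criterion: {tre <= 1.0}")
```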

    7. Type of Ground Truth Used

    For the User Needs Validation Study and Formative Usability Evaluation, the "ground truth" was based on:

    • Expert Consensus/Subjective Feedback: Participants (Neurologists and Neurosurgeons) directly assessed whether the device met their needs and identified usability issues. This is a form of expert subjective evaluation rather than a ground truth derived from objective clinical outcomes or pathology.

    For the underlying VNA model and image processing:

    • Internal Validation/Benchmarking: The "VNA modeling description and justification, human factors/usability testing, auto-detect lead orientation algorithm accuracy testing, image fusion accuracy testing" passed prespecified acceptance criteria. This implies some form of internal ground truth or reference standard was used for these technical validations, but its specific nature (e.g., phantom data, annotated datasets) is not detailed.
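
    Similarly, the nature of the "auto-detect lead orientation" accuracy testing is not detailed. A plausible form of such a check, sketched here with invented vectors and an assumed 5-degree limit, is the angular error between the detected orientation-marker direction and a known reference direction (for example, from a phantom with a known lead rotation).

```python
import numpy as np

def angular_error_deg(detected, reference):
    """Angle in degrees between two 3D direction vectors."""
    d = detected / np.linalg.norm(detected)
    r = reference / np.linalg.norm(reference)
    return float(np.degrees(np.arccos(np.clip(np.dot(d, r), -1.0, 1.0))))

detected = np.array([0.05, 0.99, 0.10])   # invented algorithm output
reference = np.array([0.00, 1.00, 0.10])  # invented phantom ground truth

error = angular_error_deg(detected, reference)
print(f"Orientation error = {error:.1f} deg; within assumed 5 deg limit: {error <= 5.0}")
```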

    8. Sample Size for the Training Set

    The document does not provide information about a specific training set size for an AI/ML algorithm. The SureTune4 Software's description focuses on mathematical modeling of electric fields and neural activation rather than a deep learning model trained on a large dataset. Therefore, the concept of a "training set" as typically used for AI/ML is not directly applicable or described in this context.

    9. How the Ground Truth for the Training Set Was Established

    As there is no explicit mention of a training set or a deep learning algorithm in the provided text, the method for establishing ground truth for a training set is not applicable. The software relies on mathematically combining a finite element (FE)-based electric field model of the lead with an axon-based neural activation model, which suggests a physics-based approach rather than a data-driven machine learning model requiring a labeled training set.
