510(k) Data Aggregation

    K Number: K250984
    Manufacturer:
    Date Cleared: 2025-06-27 (88 days)
    Product Code:
    Regulation Number: 876.1500
    Reference & Predicate Devices:
    Predicate For: N/A
    Intended Use

    The Maestro System is intended to hold and position laparoscopes and laparoscopic instruments during laparoscopic surgical procedures.

    Device Description

    The Moon Maestro System is a 2-arm system that uses software and hardware to help surgeons manipulate and maintain instrument position. Motors compensate for the gravitational force applied to laparoscopic instruments without affecting surgeon control. Conventional laparoscopic tools are controlled and maneuvered exclusively by the surgeon, who grasps the handle of the laparoscopic instrument and moves it freely until it reaches the desired position. Once the surgeon's hand force is removed, the Maestro System maintains the specified tool position and instrument tip location.
    This 510(k) is being submitted to add 5G and WiFi capability to the previously cleared Maestro System (K242323). This modification is intended for data offload; only Telemetry and Event logs will be sent over 5G or WiFi. A PCCP is also included for the ScoPilot feature.
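
    As an illustration only (the record fields, endpoint, and JSON encoding below are assumptions, not anything described in the 510(k) summary), a data-offload payload for the Telemetry/Event logs mentioned above might be structured like this minimal Python sketch:

```python
# Hypothetical sketch of a telemetry/event-log record offloaded over WiFi/5G.
# The field names and JSON encoding are illustrative assumptions; the 510(k)
# summary only states that Telemetry and Event logs are offloaded.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class EventLogRecord:
    device_serial: str   # hypothetical identifier for the Maestro unit
    timestamp_utc: str   # ISO-8601 event time
    event_type: str      # e.g. "ARM_HOLD_ENGAGED" (assumed label)
    payload: dict        # event-specific details


def serialize_for_offload(record: EventLogRecord) -> bytes:
    """Encode a single record as JSON bytes for transmission over the
    WiFi/5G data-offload channel (encoding choice is an assumption)."""
    return json.dumps(asdict(record)).encode("utf-8")


if __name__ == "__main__":
    record = EventLogRecord(
        device_serial="MAESTRO-0001",
        timestamp_utc=datetime.now(timezone.utc).isoformat(),
        event_type="ARM_HOLD_ENGAGED",
        payload={"arm": 1, "hold_position_error_mm": 0.4},
    )
    print(serialize_for_offload(record))
```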

    AI/ML Overview

    Here's a summary of the acceptance criteria and the study details for the Maestro System, based on the provided FDA 510(k) clearance letter. It's important to note that the document is focused on a modification to an already cleared device and a Predetermined Change Control Plan (PCCP), so some of the detailed information often found in initial submissions might be less explicit here.

    Acceptance Criteria and Reported Device Performance

    The document describes various performance tests without explicitly listing pass/fail acceptance criteria values. However, it indicates compliance with recognized standards and that validation activities were performed to pre-defined performance requirements for the ScoPilot feature. For the purpose of this summary, I'll extract the performance aspects mentioned.

    Acceptance Criteria Category | Reported Device Performance / Compliance
    Electrical Safety | Complies with IEC 60601-1:2005+A1+A2
    Electromagnetic Compatibility (EMC) | Complies with IEC 60601-1-2:2014+A1, AIM 7351731 Rev. 3.00: 2020 (wireless immunity), and IEEE/ANSI C63.27:2021 (wireless coexistence)
    Software Verification & Validation | Documentation provided according to FDA guidance; included testing of the ScoPilot feature, detection/tracking of instrument tips, motion trajectories, safety limits, and malformed inputs at the video/frame level.
    ML Model Performance (ScoPilot) | Model performance (lower bound of the 95% CI for AP and AR) demonstrated compliance with specifications on an independent test dataset, including brands unseen during training/tuning (see the sketch following this table).
    Payload Capacity | Tested to 4.4 lbs.
    Malformed Input | Tested.
    Force Accuracy | Tested.
    Emergency Stop | Tested.
    Hold Position Accuracy | Tested.
    IFU Inspection | Tested.
    Positioning Guidance & Collision Detection | Tested.
    System Positioning Accuracy | Tested.
    Bedside Joint Control Accuracy | Tested.
    End-to-End Workflow | Tested.
    Design Inspection | Tested.
    System Setup | Tested.
    System Latency | Tested.
    Electro-Cautery Compatibility | Tested.
    System Endurance | Tested.
    Cybersecurity | Tested.
    System Data Logging | Tested.
    System Connectivity | Tested.
    System Cloud Data | Tested.
    OS Performance | Tested (related to OS update).
    ScoPilot Motion Performance | Tested.
    ScoPilot Vision Performance | Tested.
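
    The ML acceptance criterion above compares the lower bound of the 95% confidence interval for AP and AR against predefined specifications. The document does not say how that interval is computed; the following is a minimal sketch assuming a per-video bootstrap percentile interval and an illustrative threshold, neither of which is taken from the submission:

```python
# Minimal sketch: lower bound of a 95% bootstrap CI for a detection metric
# (e.g. per-video Average Precision). The bootstrap approach and the 0.80
# threshold are assumptions for illustration; the 510(k) summary only states
# that the CI lower bound met the predefined specification.
import numpy as np


def ci_lower_bound(per_video_scores, n_boot=10_000, alpha=0.05, seed=0):
    """Return the lower bound of the (1 - alpha) bootstrap percentile CI
    for the mean of per-video metric scores."""
    rng = np.random.default_rng(seed)
    scores = np.asarray(per_video_scores, dtype=float)
    boot_means = np.array([
        rng.choice(scores, size=scores.size, replace=True).mean()
        for _ in range(n_boot)
    ])
    return np.quantile(boot_means, alpha / 2)


if __name__ == "__main__":
    # Illustrative per-video AP values (not real data from the submission).
    ap_per_video = [0.91, 0.88, 0.93, 0.85, 0.90, 0.87, 0.92, 0.89]
    lb = ci_lower_bound(ap_per_video)
    spec = 0.80  # hypothetical acceptance threshold
    print(f"AP 95% CI lower bound = {lb:.3f}; meets spec: {lb >= spec}")
```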

    Study Details

    The primary study mentioned in this document relates to the validation of the ScoPilot ML model and the non-clinical bench studies for the overall system.

    1. Sample size used for the test set and the data provenance:

      • ML Model Validation (ScoPilot): An "independent testing dataset containing videos" was used. The specific number of videos or cases is not provided in this document.
      • Data Provenance: The document does not specify the country of origin or whether the data was retrospective or prospective. It only mentions using data "including brands unseen during training/tuning."
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • This information is not provided in the document for the ScoPilot ML model validation.
    3. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

      • This information is not provided in the document.
    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:

      • An MRMC study, or any study comparing human readers with and without AI assistance, is not mentioned in this document. The device (Maestro System) is an instrument holder and positioner, and the ScoPilot feature assists in instrument tracking and positioning, not in diagnostic interpretation, where MRMC studies are common.
    5. If a standalone study (i.e., algorithm-only, without human-in-the-loop performance) was done:

      • Yes, performance testing for the ScoPilot ML model was done in a standalone manner, with the model being "trained and tuned" and then verified against "predefined performance requirements" on an "independent testing dataset." The performance metrics used were the "lower bound of the 95%CI for AP and AR" (likely Average Precision and Average Recall).
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

      • For the ScoPilot ML model, the ground truth establishment method is not explicitly detailed. It would likely involve manual annotation of instrument tips and surgical tools within the video frames by qualified personnel to create the labels against which the algorithm's detection and tracking are evaluated.
    7. The sample size for the training set:

      • This information is not provided in the document. The text mentions the ML model was "trained and tuned through a K-fold cross-tuning process" (see the sketch after this list for the general pattern).
    8. How the ground truth for the training set was established:

      • This information is not explicitly detailed in the document, but it would align with the method used for the test set (likely manual annotation of surgical tools in video data). The PCCP mentions "Modification retraining the ML model with the addition of newly acquired data enables it to detect surgical instrument classes already claimed, and an increased variety of other brands in the video feed more accurately" and adding "surgical cautery hooks to the ML model class hooks as another surgical instrument class." This implies a process of labeling or annotating new data for these specific elements.
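
    Item 7 above quotes a "K-fold cross-tuning process" used for training and tuning, with a separate independent test dataset reserved for the final performance check. The split mechanics are not described in the document; the sketch below shows the generic pattern, with the detector, scoring function, and dataset sizes left as placeholders:

```python
# Generic sketch of K-fold tuning on a development set, the pattern implied by
# "trained and tuned through a K-fold cross-tuning process" plus an
# "independent testing dataset". The video counts, candidate configurations,
# and scorer are placeholders, not values from the submission.
from sklearn.model_selection import KFold


def kfold_tune(dev_video_ids, candidate_configs, train_and_score, k=5):
    """Pick the config with the best mean validation score across K folds."""
    kf = KFold(n_splits=k, shuffle=True, random_state=0)
    dev_video_ids = list(dev_video_ids)
    best_config, best_score = None, float("-inf")
    for config in candidate_configs:
        fold_scores = []
        for train_idx, val_idx in kf.split(dev_video_ids):
            train_vids = [dev_video_ids[i] for i in train_idx]
            val_vids = [dev_video_ids[i] for i in val_idx]
            fold_scores.append(train_and_score(config, train_vids, val_vids))
        mean_score = sum(fold_scores) / len(fold_scores)
        if mean_score > best_score:
            best_config, best_score = config, mean_score
    return best_config, best_score


if __name__ == "__main__":
    # Toy demonstration with a dummy scorer (no real detector involved).
    def dummy_score(config, train_vids, val_vids):
        return config["conf_threshold"]  # placeholder "validation metric"

    best, score = kfold_tune(range(50),
                             [{"conf_threshold": 0.3}, {"conf_threshold": 0.5}],
                             dummy_score)
    print(best, score)
```

    Under this pattern, the held-out independent test dataset is never used for tuning; it is touched once, to compute the final metrics (e.g., the 95% CI lower bounds for AP and AR) against the predefined specifications.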