510(k) Data Aggregation

    K Number: K242323
    Manufacturer:
    Date Cleared: 2025-03-14 (220 days)
    Product Code:
    Regulation Number: 876.1500
    Reference Devices: K240598, K152848

    Intended Use

    The Maestro System is intended to hold and position laparoscopes and laparoscopic instruments during laparoscopic surgical procedures.

    Device Description

    The Moon Maestro System is a 2-arm system that uses software and hardware to support surgeons in manipulating and maintaining instrument position. Motors compensate for the gravitational force applied to laparoscopic instruments without affecting surgeon control. Conventional laparoscopic tools are exclusively controlled and maneuvered by the surgeon, who grasps the handle of the surgical laparoscopic instrument and moves it freely until the instrument reaches the desired position. Once surgeon hand force is removed, the Maestro System maintains the specified tool position and instrument tip location. This 510(k) is being submitted to implement the ScoPilot feature. ScoPilot is an on-demand, optional, ease-of-use feature of the Maestro System that allows the laparoscope attached to a Maestro Arm to seamlessly follow a desired instrument tip. The surgeon remains in control of laparoscope positioning without having to disengage from the instrument in hand, helping maintain surgical flow and focus.

    AI/ML Overview

    The provided text describes the Moon Surgical Maestro System, including its features and the testing performed for its 510(k) submission. However, the document does not contain a detailed table of acceptance criteria or the reported device performance against those criteria as would typically be found in a study summary with quantifiable results. It lists various tests performed but does not present the specific metrics and their outcomes in a structured format.

    Therefore, I cannot fully complete the requested information for acceptance criteria and reported performance with quantitative data. I can, however, extract related information about the testing and ground truth establishment.

    Here's an attempt to answer your questions based on the provided text, with limitations acknowledged:

    1. Table of acceptance criteria and the reported device performance

    The document states: "The ML model was trained and tuned through a K-fold cross-tuning process to optimize hyperparameters, until it reached our predefined performance requirements. An independent testing dataset containing videos was used to verify that the model performance (lower bound of the 95%CI for AP and AR) is compliant with our specification when using data including brands unseen during training/tuning."

    While this indicates that performance requirements were predefined and that "AP" (presumably Average Precision) and "AR" (presumably Average Recall) were metrics, the specific numerical values for these "predefined performance requirements" (acceptance criteria) and the "compliant" reported performance are not detailed in the provided text.

    Therefore, a table with specific numbers cannot be generated from the given information.
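The "K-fold cross-tuning process" quoted above can be sketched as a generic hyperparameter search over folds. This is a minimal illustration under stated assumptions, not Moon Surgical's actual pipeline: the fold-splitting, scoring callback, and parameter grid are all hypothetical.

```python
import random

def kfold_indices(n, k, seed=0):
    """Split n sample indices into k roughly equal, shuffled folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_tune(samples, candidate_params, train_and_score, k=5):
    """Pick the hyperparameter setting with the best mean validation
    score across k folds (a stand-in for the quoted 'cross-tuning').

    train_and_score(train, val, params) is a caller-supplied function
    that trains on `train` and returns a validation score on `val`.
    """
    folds = kfold_indices(len(samples), k)
    best_params, best_score = None, float("-inf")
    for params in candidate_params:
        scores = []
        for i in range(k):
            val = [samples[j] for j in folds[i]]
            train = [samples[j] for f in folds[:i] + folds[i + 1:] for j in f]
            scores.append(train_and_score(train, val, params))
        mean = sum(scores) / k
        if mean > best_score:
            best_params, best_score = params, mean
    return best_params, best_score
```

Tuning would stop once the selected setting meets the predefined performance requirement, per the quoted text.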

    2. Sample size used for the test set and the data provenance

    • Sample Size for Test Set: The document mentions "An independent testing dataset containing videos" was used. The specific number of videos or cases in this test set is not provided.
    • Data Provenance: The document does not explicitly state the country of origin of the data or whether it was retrospective or prospective.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    The document mentions "ScoPilot Vision Performance" as one of the tests. For the ML model validation, it states: "The ML model was trained and tuned... An independent testing dataset containing videos was used to verify that the model performance...". However, the document does not specify the number of experts or their qualifications used to establish the ground truth for the test set.

    4. Adjudication method for the test set

    The document does not describe any adjudication method (e.g., 2+1, 3+1, none) for the test set.
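For reference, a 2+1 scheme (the kind of adjudication the document does not report) means two primary readers annotate each case and a third breaks ties. As a toy illustration only:

```python
def adjudicate_2plus1(reader_a, reader_b, reader_c):
    """2+1 adjudication: accept agreement between the two primary
    readers; otherwise the third reader's call decides."""
    if reader_a == reader_b:
        return reader_a
    return reader_c
```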

    5. Whether a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with AI vs. without AI assistance

    The document mentions "Human factors testing" and "Cadaver testing." However, there is no mention of a multi-reader multi-case (MRMC) comparative effectiveness study evaluating how much human readers improve with AI vs. without AI assistance. The described "ScoPilot" feature is an "on-demand, optional, ease-of-use feature" that allows the laparoscope to follow a desired instrument tip, aiming to help "maintain surgical flow and focus." This implies a focus on a specific functionality rather than a broad comparative effectiveness study with human readers.

    6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    Yes, a standalone performance evaluation of the ML model was performed. The text states: "An independent testing dataset containing videos was used to verify that the model performance (lower bound of the 95%CI for AP and AR) is compliant with our specification when using data including brands unseen during training/tuning." This describes an algorithm-only evaluation.
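The quoted pass/fail logic (lower bound of the 95% CI for AP compared against a predefined specification) could look like the following percentile-bootstrap sketch. The function names and the use of a per-video AP list are assumptions; the document gives no numerical details or CI method.

```python
import random

def bootstrap_ci_lower(values, n_boot=2000, alpha=0.05, seed=0):
    """Lower bound of a (1 - alpha) percentile-bootstrap CI on the mean."""
    rng = random.Random(seed)
    n = len(values)
    means = sorted(
        sum(rng.choice(values) for _ in range(n)) / n
        for _ in range(n_boot)
    )
    return means[int(alpha / 2 * n_boot)]

def meets_spec(per_video_ap, spec):
    """Pass/fail check in the spirit of the submission: the lower bound
    of the 95% CI for AP must be at or above the predefined spec."""
    return bootstrap_ci_lower(per_video_ap) >= spec
```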

    7. The type of ground truth used

    For the "ScoPilot Vision Performance" and ML model validation, the ground truth would likely involve annotated video frames where the "desired instrument tip" is precisely identified. The text mentions "detection and tracking of specified instrument tips." However, it does not elaborate on how these ground truth annotations (e.g., expert consensus, pathology, outcomes data) were generated. Given the nature of the device (laparoscopic instrument tracking), it would most likely be based on expert manual annotation of video frames.
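If the ground truth was indeed expert manual annotation, one plausible (entirely hypothetical) representation is a per-frame tip coordinate record, with multiple experts' marks merged by averaging:

```python
from dataclasses import dataclass

@dataclass
class TipAnnotation:
    """One expert's mark of an instrument tip in a single video frame
    (hypothetical schema; the document does not describe the format)."""
    video_id: str
    frame: int
    x: float  # tip pixel coordinates
    y: float

def consensus_tip(annotations):
    """Average several experts' tip coordinates for the same frame --
    one plausible way expert-consensus ground truth could be formed."""
    xs = [a.x for a in annotations]
    ys = [a.y for a in annotations]
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```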

    8. The sample size for the training set

    The document states: "The ML model was trained and tuned through a K-fold cross-tuning process to optimize hyperparameters..." The specific sample size (number of videos/frames) for the training set is not provided.

    9. How the ground truth for the training set was established

    The document states "Machine Learning methodology used to develop software algorithm responsible for identifying tool tip." While it indicates that an ML model was trained to identify the tool tip, it does not explicitly state how the ground truth was established for this training set. Similar to the test set, it would logically involve expert annotation of video data to delineate the "tool tip."
