
Search Results

Found 3 results

510(k) Data Aggregation

    K Number: K250984
    Date Cleared: 2025-06-27 (88 days)
    Regulation Number: 876.1500
    Device Name: Maestro System (REF100)
    Intended Use

    The Maestro System is intended to hold and position laparoscopes and laparoscopic instruments during laparoscopic surgical procedures.

    Device Description

    The Moon Maestro System is a 2-arm system which utilizes software and hardware to provide support to surgeons for manipulating and maintaining instrument position. Motors compensate for gravitational force applied to laparoscopic instruments, while surgeon control is not affected. Conventional laparoscopic tools are exclusively controlled and maneuvered by the surgeon, who grasps the handle of the surgical laparoscopic instrument and moves it freely until the instrument is brought to the desired position. Once surgeon hand force is removed, the Maestro system reverts to maintenance of the specified tool position and instrument tip location.
    This 510(k) is being submitted to implement 5G and WiFi capability in the previously cleared Maestro System (K242323). This modification is intended for data offload; only telemetry and event logs will be sent over 5G or WiFi. A PCCP is also implemented for the ScoPilot feature.
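Since the modification restricts wireless offload to two record types, the constraint amounts to an allow-list. Below is a minimal sketch of that idea, assuming hypothetical record-type names and payloads; nothing in it is from the submission:

```python
# Hedged sketch of the data-offload constraint: only telemetry and
# event logs are eligible to leave the device over 5G/WiFi. The record
# type names and filter function are hypothetical, not from the 510(k).
OFFLOAD_ALLOWED = {"telemetry", "event_log"}

def select_for_offload(records):
    """Keep only record types permitted to be sent over 5G/WiFi."""
    return [r for r in records if r["type"] in OFFLOAD_ALLOWED]

records = [
    {"type": "telemetry", "payload": {"arm": 1, "joint_temps_c": [38, 41]}},
    {"type": "video_frame", "payload": "..."},   # never offloaded
    {"type": "event_log", "payload": "E-STOP pressed"},
]
print(select_for_offload(records))  # telemetry and event_log entries only
```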

    AI/ML Overview

    Here's a summary of the acceptance criteria and the study details for the Maestro System, based on the provided FDA 510(k) clearance letter. It's important to note that the document is focused on a modification to an already cleared device and a Predetermined Change Control Plan (PCCP), so some of the detailed information often found in initial submissions might be less explicit here.

    Acceptance Criteria and Reported Device Performance

    The document describes various performance tests without explicitly listing pass/fail acceptance criteria values. However, it indicates compliance with recognized standards and that validation activities were performed to pre-defined performance requirements for the ScoPilot feature. For the purpose of this summary, I'll extract the performance aspects mentioned.

    | Acceptance Criteria Category | Reported Device Performance / Compliance |
    |---|---|
    | Electrical Safety | Complies with IEC 60601-1:2005+A1+A2 |
    | Electromagnetic Compatibility (EMC) | Complies with IEC 60601-1-2:2014+A1, AIM 7351731 Rev. 3.00:2020 (wireless immunity), and IEEE/ANSI C63.27:2021 (wireless coexistence) |
    | Software Verification & Validation | Documentation provided according to FDA guidance; included testing of the ScoPilot feature, detection/tracking of instrument tips, motion trajectories, safety limits, and malformed inputs at the video/frame level |
    | ML Model Performance (ScoPilot) | Lower bound of the 95% CI for AP and AR met specifications on an independent test dataset, including brands unseen during training/tuning |
    | Payload Capacity | Tested to 4.4 lbs |
    | Malformed Input | Tested |
    | Force Accuracy | Tested |
    | Emergency Stop | Tested |
    | Hold Position Accuracy | Tested |
    | IFU Inspection | Tested |
    | Positioning Guidance & Collision Detection | Tested |
    | System Positioning Accuracy | Tested |
    | Bedside Joint Control Accuracy | Tested |
    | End-to-End Workflow | Tested |
    | Design Inspection | Tested |
    | System Setup | Tested |
    | System Latency | Tested |
    | Electro-Cautery Compatibility | Tested |
    | System Endurance | Tested |
    | Cybersecurity | Tested |
    | System Data Logging | Tested |
    | System Connectivity | Tested |
    | System Cloud Data | Tested |
    | OS Performance | Tested (related to OS update) |
    | ScoPilot Motion Performance | Tested |
    | ScoPilot Vision Performance | Tested |

    Study Details

    The primary study mentioned in this document relates to the validation of the ScoPilot ML model and the non-clinical bench studies for the overall system.

    1. Sample size used for the test set and the data provenance:

      • ML Model Validation (ScoPilot): An "independent testing dataset containing videos" was used. The specific number of videos or cases is not provided in this document.
      • Data Provenance: The document does not specify the country of origin or whether the data was retrospective or prospective. It only mentions using data "including brands unseen during training/tuning."
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • This information is not provided in the document for the ScoPilot ML model validation.
    3. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

      • This information is not provided in the document.
    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with AI vs. without AI assistance:

      • A MRMC study or any study comparing human readers with and without AI assistance is not mentioned in this document. The device (Maestro System) is an instrument holder and positioner, and the ScoPilot feature assists in instrument tracking and positioning, not in diagnostic interpretation where MRMC studies are common.
    5. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:

      • Yes, performance testing for the ScoPilot ML model was done in a standalone manner: the model was "trained and tuned" and then verified against "predefined performance requirements" on an "independent testing dataset." The metrics were the lower bound of the 95% CI for AP and AR (likely Average Precision and Average Recall); a generic sketch of this kind of check appears after this list.
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

      • For the ScoPilot ML model, the ground truth establishment method is not explicitly detailed. It would likely involve manual annotation of instrument tips and surgical tools within the video frames by qualified personnel to create the labels against which the algorithm's detection and tracking are evaluated.
    7. The sample size for the training set:

      • This information is not provided in the document. The text mentions the ML model was "trained and tuned through a K-fold cross-tuning process."
    8. How the ground truth for the training set was established:

      • This information is not explicitly detailed in the document, but it would align with the method used for the test set (likely manual annotation of surgical tools in video data). The PCCP mentions "Modification retraining the ML model with the addition of newly acquired data enables it to detect surgical instrument classes already claimed, and an increased variety of other brands in the video feed more accurately" and adding "surgical cautery hooks to the ML model class hooks as another surgical instrument class." This implies a process of labeling or annotating new data for these specific elements.
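As referenced in item 5 above, here is a minimal sketch of the kind of acceptance check the summary quotes: verifying that the lower bound of a bootstrap 95% CI for a per-video metric meets a specification. The per-video scores, the threshold, and the function are all hypothetical; the submission reports no numerical values:

```python
import numpy as np

def ci_lower_bound(per_video_scores, n_boot=10000, alpha=0.05, seed=0):
    """Bootstrap the lower bound of a (1 - alpha) CI for the mean score.

    per_video_scores: per-video AP (or AR) values from the independent
    test set. Returns the alpha/2 percentile of the bootstrap
    distribution of the mean.
    """
    rng = np.random.default_rng(seed)
    scores = np.asarray(per_video_scores, dtype=float)
    boot_means = np.array([
        rng.choice(scores, size=len(scores), replace=True).mean()
        for _ in range(n_boot)
    ])
    return np.percentile(boot_means, 100 * alpha / 2)

# Hypothetical acceptance check: spec and scores are illustrative only.
AP_SPEC = 0.80  # placeholder threshold, not from the 510(k)
ap_scores = [0.91, 0.88, 0.93, 0.85, 0.90]  # made-up per-video AP values
assert ci_lower_bound(ap_scores) >= AP_SPEC, "AP CI lower bound below spec"
```

The same check would apply to AR by substituting per-video recall-based scores.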

    K Number: K242323
    Date Cleared: 2025-03-14 (220 days)
    Regulation Number: 876.1500
    Device Name: Maestro System (REF100)
    Intended Use

    The Maestro System is intended to hold and position laparoscopes and laparoscopic instruments during laparoscopic surgical procedures.

    Device Description

    The Moon Maestro System is a 2-arm system which utilizes software and hardware to provide support to surgeons for manipulating and maintaining instrument position. Motors compensate for gravitational force applied to laparoscopic instruments, while surgeon control is not affected. Conventional laparoscopic tools are exclusively controlled and maneuvered by the surgeon, who grasps the handle of the surgical laparoscopic instrument and moves it freely until the instrument is brought to the desired position. Once surgeon hand force is removed, the Maestro system reverts to maintenance of the specified tool position and instrument tip location. This 510(k) is being submitted to implement the ScoPilot feature. ScoPilot is an on-demand, optional, ease-of-use feature of the Maestro System, allowing the laparoscope which is attached to a Maestro Arm to seamlessly follow a desired instrument tip. The surgeon remains in control of laparoscope positioning, without having to disengage from the instrument in their hand, helping maintain surgical flow and focus.

    AI/ML Overview

    The provided text describes the Moon Surgical Maestro System, including its features and the testing performed for its 510(k) submission. However, the document does not contain a detailed table of acceptance criteria or the reported device performance against those criteria as would typically be found in a study summary with quantifiable results. It lists various tests performed but does not present the specific metrics and their outcomes in a structured format.

    Therefore, I cannot fully complete the requested information for acceptance criteria and reported performance with quantitative data. I can, however, extract related information about the testing and ground truth establishment.

    Here's an attempt to answer your questions based on the provided text, with limitations acknowledged:

    1. Table of acceptance criteria and the reported device performance

    The document states: "The ML model was trained and tuned through a K-fold cross-tuning process to optimize hyperparameters, until it reached our predefined performance requirements. An independent testing dataset containing videos was used to verify that the model performance (lower bound of the 95%CI for AP and AR) is compliant with our specification when using data including brands unseen during training/tuning."

    While this indicates that performance requirements were predefined and that "AP" (presumably Average Precision) and "AR" (presumably Average Recall) were metrics, the specific numerical values for these "predefined performance requirements" (acceptance criteria) and the "compliant" reported performance are not detailed in the provided text.

    Therefore, a table with specific numbers cannot be generated from the given information.
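The quoted passage names the technique without details. As a generic sketch of K-fold hyperparameter tuning in the scikit-learn idiom (the model, grid, and data below are placeholders; the actual ScoPilot model is a vision detector and is not disclosed):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, KFold

# Placeholder data and model: the submission discloses neither.
X, y = make_classification(n_samples=600, n_features=16, random_state=0)

# K-fold cross-validation over a hyperparameter grid. In the described
# process, the tuned model would then be verified once against an
# independent test set including instrument brands unseen during tuning.
search = GridSearchCV(
    estimator=RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 10]},
    cv=KFold(n_splits=5, shuffle=True, random_state=0),
    scoring="average_precision",
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```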

    2. Sample size used for the test set and the data provenance

    • Sample Size for Test Set: The document mentions "An independent testing dataset containing videos" was used. The specific number of videos or cases in this test set is not provided.
    • Data Provenance: The document does not explicitly state the country of origin of the data or whether it was retrospective or prospective.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    The document mentions "ScoPilot Vision Performance" as one of the tests. For the ML model validation, it states: "The ML model was trained and tuned... An independent testing dataset containing videos was used to verify that the model performance...". However, the document does not specify the number of experts or their qualifications used to establish the ground truth for the test set.

    4. Adjudication method for the test set

    The document does not describe any adjudication method (e.g., 2+1, 3+1, none) for the test set.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with AI vs. without AI assistance

    The document mentions "Human factors testing" and "Cadaver testing." However, there is no mention of a multi-reader multi-case (MRMC) comparative effectiveness study evaluating how much human readers improve with AI vs. without AI assistance. The described "ScoPilot" feature is an "on-demand, optional, ease-of-use feature" that allows the laparoscope to follow a desired instrument tip, aiming to help "maintain surgical flow and focus." This implies a focus on a specific functionality rather than a broad comparative effectiveness study with human readers.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done

    Yes, a standalone performance evaluation of the ML model was performed. The text states: "An independent testing dataset containing videos was used to verify that the model performance (lower bound of the 95%CI for AP and AR) is compliant with our specification when using data including brands unseen during training/tuning." This describes an algorithm-only evaluation.

    7. The type of ground truth used

    For the "ScoPilot Vision Performance" and ML model validation, the ground truth would likely involve annotated video frames where the "desired instrument tip" is precisely identified. The text mentions "detection and tracking of specified instrument tips." However, it does not elaborate on how these ground truth annotations (e.g., expert consensus, pathology, outcomes data) were generated. Given the nature of the device (laparoscopic instrument tracking), it would most likely be based on expert manual annotation of video frames.

    8. The sample size for the training set

    The document states: "The ML model was trained and tuned through a K-fold cross-tuning process to optimize hyperparameters..." The specific sample size (number of videos/frames) for the training set is not provided.

    9. How the ground truth for the training set was established

    The document states "Machine Learning methodology used to develop software algorithm responsible for identifying tool tip." While it indicates that an ML model was trained to identify the tool tip, it does not explicitly state how the ground truth was established for this training set. Similar to the test set, it would logically involve expert annotation of video data to delineate the "tool tip."


    K Number: K240598
    Date Cleared: 2024-06-03 (91 days)
    Regulation Number: 878.4960
    Device Name: Maestro System (REF100)
    Intended Use

    The Maestro System is intended to hold and position laparoscopes and laparoscopic instruments during laparoscopic surgical procedures.

    Device Description

    The Moon Maestro System is a 2-arm system which utilizes software and hardware to provide support to surgeons for manipulating and maintaining instrument position. Motors compensate for gravitational force applied to laparoscopic instruments, while surgeon control is not affected. Conventional laparoscopic tools are exclusively controlled and maneuvered by the surgeon, who grasps the handle of the surgical laparoscopic instrument and moves it freely until the instrument is brought to the desired position. Once surgeon hand force is removed, the Maestro system reverts to maintenance of the specified tool position and instrument tip location. This 510(k) is being submitted to implement design changes to the previously cleared Maestro System. The following modifications have been implemented to the Maestro System:

    • System Positioning Guidance
    • System Hold Status Indication
    • Instrument Coupling
    • System Setup
    • Bedside Setup Joint Control

    AI/ML Overview

    The provided text describes a 510(k) premarket notification for the "Maestro System (REF100)". This document is an FDA clearance letter and a 510(k) summary, which outlines the device, its intended use, and a comparison to a predicate device. It also briefly mentions the types of testing performed (design verification and validation testing).

    However, the document does not provide a detailed breakdown of acceptance criteria and the results of a study proving the device meets those criteria, especially in the context of an AI/human-in-the-loop system that would typically have specific performance metrics like sensitivity, specificity, or accuracy.

    The Maestro System is described as a two-arm system that utilizes software and hardware to support surgeons by manipulating and maintaining instrument position in laparoscopic surgical procedures. The modifications made to the device relate to user interface, setup guidance, and instrument coupling, rather than an AI component that would perform diagnostic or interpretive tasks.

    Therefore, many of the requested elements for an AI-powered device's acceptance criteria and study results (e.g., sample size for test set, data provenance, number of experts for ground truth, MRMC study, standalone performance, ground truth type, training set details) are not applicable or not present in this document, as the device is characterized as an operating table accessory with electromechanical functions, not an AI/ML diagnostic or assistive imaging system.

    The document indicates that the device has undergone design verification and validation testing, which are standard for medical devices to ensure they meet their specified requirements and are safe and effective for their intended use. These tests typically focus on engineering and functional performance rather than AI-specific metrics.

    Here's a breakdown based on the available information and an explanation of why other requested items are not provided:


    Acceptance Criteria and Study for Maestro System (REF100)

    Based on the provided 510(k) summary, the Maestro System is an electromechanical device designed to assist in laparoscopic surgery by holding and positioning instruments, effectively an accessory to an operating table. It does not appear to be an AI/ML-driven diagnostic or image analysis system. Therefore, the types of "acceptance criteria" and "study" details requested for AI systems (e.g., sensitivity, specificity, expert consensus for ground truth, MRMC studies) are not pertinent to this device's classification and described functionality.

    The testing performed is primarily focused on the device's mechanical and software functions to ensure safety and effectiveness in its intended use.

    1. Table of Acceptance Criteria and Reported Device Performance:

    The document lists various "Testing Performed" which serve as the verification and validation activities against unstated, but implied, acceptance criteria related to engineering specifications and functional safety. It does not provide explicit numerical acceptance criteria or performance results in a table format typical for AI system performance.

    | Test Category | Specific Tests Performed | Implied Acceptance Criteria (General) | Reported Performance (Generally Stated) |
    |---|---|---|---|
    | Functional Safety | Payload capacity, single fault condition, emergency stop, back-up fault response, drape integrity, system cleaning | Device maintains intended function and safety under various conditions, including faults. | Device found safe and effective; substantial equivalence established. (Specific results not detailed in this summary.) |
    | Accuracy & Precision | Force accuracy, hold position accuracy, positioning guidance and collision detection, system positioning accuracy | Device holds and positions instruments accurately and precisely as intended. | Specific quantitative results not provided, but implicitly met for substantial equivalence. |
    | Software & Control | System end-to-end workflow, bedside joint control, system setup, system latency, LED status, software verification, electrical safety, EMC | Software and controls function correctly, respond as expected, and meet electrical/EMC standards. | All clinical input requirements were validated; software verified; electrical and EMC compliance implied. (Specific results not detailed.) |
    | Usability | Human factors testing, IFU inspection | Device is user-friendly and instructions for use are clear. | Human factors testing performed; implies usability and safety in user interaction. |
    | Physical Integrity | Design inspection, coupler performance | Device components are robust and the instrument coupling works reliably. | Design inspection performed; coupler performance tested. |
    | Clinical Relevance | Cadaver testing | Device functions as intended in a simulated surgical environment. | Cadaver testing performed. |

    2. Sample Size Used for the Test Set and Data Provenance:

    • Not specified for discrete quantitative test sets in the provided summary. The testing appears to be functional and engineering-based rather than data-driven in the sense of AI model validation.
    • Data Provenance: Not applicable in the context of clinical data for AI model training/testing. The "data" here refers to engineering test results.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:

    • Not applicable/Not specified. This device does not generate diagnostic outputs that require expert ground truth labeling in the way an AI diagnostic tool would. Testing likely involves engineers, usability experts, and potentially surgeons during cadaver or human factors testing, but not for "ground truth labeling" of imaging data.

    4. Adjudication Method for the Test Set:

    • Not applicable. No adjudication method for ground truth labeling is mentioned or expected for this type of device.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done:

    • No, not indicated. MRMC studies are typically performed for AI-assisted diagnostic tools (e.g., radiology AI) to assess the impact of AI on human reader performance. This is not pertinent to the Maestro System's described function.

    6. If a Standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:

    • Partially applicable. For an electromechanical device with software, "standalone performance" refers to the device's functional operation independent of human interaction within its specified parameters (e.g., holding force, positioning accuracy). The various engineering and software verification tests (e.g., "Payload capacity," "Hold position accuracy," "Software verification") would assess various aspects of its standalone performance. The document states "Testing described in this 510(k) consisted of verification of all system input requirements and product specifications."

    7. The Type of Ground Truth Used:

    • Engineering specifications and functional requirements. For this type of device, "ground truth" equates to the pre-defined target values for forces, positions, response times, and the successful completion of intended actions (e.g., maintaining position, allowing easy manipulation). This is established through design validation against known physical principles and user requirements, not from clinical outcomes or expert consensus on clinical data.
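To make "engineering specifications as ground truth" concrete: a hold-position-accuracy test, for example, might compare measured tip drift against a fixed tolerance. The tolerance, measurement pairs, and function names below are hypothetical, not values from the 510(k):

```python
import math

# Hypothetical spec: after the surgeon releases the instrument, the tip
# must stay within a fixed tolerance of the commanded hold position.
# The tolerance below is a placeholder, not a value from the 510(k).
HOLD_TOLERANCE_MM = 2.0

def hold_position_error_mm(commanded_xyz, measured_xyz):
    """Euclidean drift between commanded and measured tip positions (mm)."""
    return math.dist(commanded_xyz, measured_xyz)

def hold_position_test(trials):
    """Each trial is a (commanded_xyz, measured_xyz) pair; all must pass."""
    errors = [hold_position_error_mm(c, m) for c, m in trials]
    return all(e <= HOLD_TOLERANCE_MM for e in errors), max(errors)

# Illustrative measurements only.
trials = [((0.0, 0.0, 0.0), (0.3, -0.2, 0.1)),
          ((10.0, 5.0, 2.0), (10.4, 5.1, 1.8))]
passed, worst = hold_position_test(trials)
print(f"pass={passed}, worst drift={worst:.2f} mm")
```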

    8. The Sample Size for the Training Set:

    • Not applicable. This device is not an AI/ML model trained on a dataset in the conventional sense. Its "training" is in the form of engineering design, calibration, and software programming.

    9. How the Ground Truth for the Training Set was Established:

    • Not applicable. As above, there is no "training set" for an AI model. The "ground truth" for the device's design and programming comes from engineering principles, user requirements, and clinical needs defined during the device development process.