K Number
K221410
Device Name
Maestro Platform
Manufacturer
Moon Surgical
Date Cleared
2022-12-02 (200 days)

Product Code
Regulation Number
878.4960
Panel
SU
Reference & Predicate Devices
ENDEX Endoscopic Positioning System (K936308)
Intended Use

The Maestro System is intended to hold and position laparoscopic instruments during laparoscopic surgical procedures.

Device Description

The Moon Surgical Maestro System uses software and hardware to support surgeons in manipulating and maintaining instrument position. Motors compensate for the gravitational force applied to laparoscopic instruments without affecting surgeon control. Conventional laparoscopic tools remain exclusively controlled and maneuvered by the surgeon, who grasps the handle of the surgical laparoscopic instrument and moves it freely until the instrument reaches the desired position. Once the surgeon's hand force is removed, the Maestro System maintains the specified tool position and instrument tip location.
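
As a rough illustration of the gravity-compensation behavior described above, the sketch below computes the counter-torque a single joint motor would apply to cancel the gravity moment on a link holding an instrument. This is a minimal sketch under assumed parameters: the link mass, center-of-mass distance, and joint convention are hypothetical, since the 510(k) summary does not disclose the Maestro arm's kinematics or controller internals.

```python
import math

# Hypothetical single-link parameters; the 510(k) summary does not disclose
# the Maestro arm's kinematics, masses, or geometry.
LINK_MASS_KG = 1.2   # combined mass of link plus held instrument (assumed)
LINK_COM_M = 0.3     # joint axis to center of mass, in meters (assumed)
GRAVITY = 9.81       # gravitational acceleration, m/s^2


def gravity_compensation_torque(joint_angle_rad: float) -> float:
    """Torque (N*m) the joint motor applies to cancel gravity on one link.

    With the joint angle measured from the horizontal, the gravity moment
    about the joint is m*g*L*cos(theta); applying the equal and opposite
    torque leaves the instrument feeling weightless to the surgeon's hand.
    """
    return LINK_MASS_KG * GRAVITY * LINK_COM_M * math.cos(joint_angle_rad)


if __name__ == "__main__":
    for deg in (0, 30, 60, 90):
        tau = gravity_compensation_torque(math.radians(deg))
        print(f"theta = {deg:3d} deg -> compensation torque {tau:5.2f} N*m")
```

A multi-joint arm would generalize this single-link case to the configuration-dependent gravity vector of the full manipulator dynamics, but the idea is the same: the motors supply only the torque gravity demands, leaving the surgeon's inputs unopposed.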

AI/ML Overview

The Moon Surgical Maestro System is a device designed to hold and position laparoscopic instruments during surgical procedures. The provided FDA 510(k) summary outlines its design, intended use, and comparison to a predicate device (ENDEX Endoscopic Positioning System K936308) to establish substantial equivalence.

Here's an analysis of the acceptance criteria and the study that proves the device meets them, based on the provided text:

1. Table of Acceptance Criteria and Reported Device Performance

The document does not explicitly list "acceptance criteria" with specific quantitative thresholds that are then directly matched with reported device performance in a single table. Instead, it describes various performance tests conducted. The "Performance Testing" section states that "All testing had passed in accordance with the pre-specified success criteria, international standards or FDA guidances." This implies that the acceptance criteria were internal to the company, based on these standards and guidances.

However, we can infer some performance aspects based on the comparison table and the performance testing section.

| Acceptance Criteria (Inferred/Stated) | Reported Device Performance |
|---|---|
| Functional & Mechanical | |
| Positional reach & trocar accommodation | Passed |
| Payload capacity | 4.4 lbs tested (vs. predicate's 5 lbs maximum force generated) |
| System cart stability | Passed |
| Emergency stop | Passed |
| Gravity compensation accuracy | Passed |
| Coupler performance | Passed |
| Brake hold | Passed |
| Safety & Electrical | |
| Electrical insulation | Passed |
| Electrical safety | Passed |
| EMC (electromagnetic compatibility) | Passed |
| Single fault condition | Passed (automatic system performance monitoring, redundant encoders, velocity/acceleration/current/torque limits, brakes engage if power is removed; see the monitoring sketch after this table) |
| LED status | Passed (LEDs change colors and pulse to indicate status; red indicates a critical fault) |
| Back-up fault response | Brakes engage on the motorized axes in the event of a fault state to prohibit any arm motion |
| Biocompatibility & Sterilization | |
| Sterilization validation (for couplers) | Passed (steam sterilization) |
| Sterility barrier (drape integrity) | Passed |
| Software | |
| Software validation | Passed |
| Usability | |
| Human factors testing | Passed |
| Cadaver testing | Passed |
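
The single-fault and back-up fault rows describe a monitoring scheme built from redundant encoders, velocity/acceleration/current/torque limits, and brakes that engage on a fault or on power loss. The sketch below shows one minimal way such a monitor might be structured; all thresholds, names, and the brake/LED interfaces are illustrative assumptions, not details from the submission.

```python
from dataclasses import dataclass

# Illustrative thresholds only; the summary names the limit types but gives
# no numeric values.
MAX_VELOCITY_RAD_S = 0.5      # assumed joint velocity limit
MAX_CURRENT_A = 2.0           # assumed motor current limit
MAX_ENCODER_DELTA_RAD = 0.01  # assumed tolerance between redundant encoders


@dataclass
class JointState:
    encoder_a_rad: float   # primary encoder reading
    encoder_b_rad: float   # redundant encoder reading
    velocity_rad_s: float
    motor_current_a: float


def engage_brakes() -> None:
    # Fail-safe brakes would also engage mechanically if power is removed.
    print("brakes engaged on all motorized axes")


def set_led(color: str) -> None:
    print(f"status LED: {color}")


def detect_faults(state: JointState) -> list[str]:
    """Return the single-fault conditions detected for one joint."""
    faults = []
    if abs(state.encoder_a_rad - state.encoder_b_rad) > MAX_ENCODER_DELTA_RAD:
        faults.append("redundant encoder mismatch")
    if abs(state.velocity_rad_s) > MAX_VELOCITY_RAD_S:
        faults.append("velocity limit exceeded")
    if abs(state.motor_current_a) > MAX_CURRENT_A:
        faults.append("current limit exceeded")
    return faults


def monitor_step(state: JointState) -> None:
    """One cycle of automatic system performance monitoring."""
    faults = detect_faults(state)
    if faults:
        engage_brakes()  # prohibit any arm motion in a fault state
        set_led("red")   # red LED indicates a critical fault
        raise RuntimeError("fault state: " + ", ".join(faults))
```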

2. Sample Size Used for the Test Set and the Data Provenance

The document does not specify the sample size used for the various performance tests (e.g., how many units were tested for payload capacity, how many cadaver procedures were performed).

The data provenance is internal to Moon Surgical, described as "Design validation testing." No specific country of origin for the data is mentioned, nor is it explicitly stated whether the tests were retrospective or prospective, though "design validation testing" typically implies prospective testing of the device.

3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts

The document does not specify the number of experts or their qualifications used to establish ground truth for the test set. It mentions "Cadaver testing" and "Human factors testing," which would implicitly involve experts (surgeons and other medical professionals), but details are absent.

4. Adjudication Method for the Test Set

The document does not specify any formal adjudication method (e.g., 2+1, 3+1). The testing appears to be based on pre-specified success criteria and compliance with standards, implying that results were evaluated against these benchmarks.

5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

A Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done. The submission focuses on demonstrating substantial equivalence to a predicate device through technical comparison and performance testing, rather than a clinical effectiveness study involving human readers or operators and their performance with and without AI assistance. The device is a mechanical robotic assist system, not an AI diagnostic tool.

6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

The device is a system that supports surgeons, meaning it inherently has a "human-in-the-loop." Therefore, a "standalone algorithm only" performance evaluation is not applicable in the context of this device. The software's role is to maintain instrument position based on surgeon input (movements and release). Software validation was performed, but this is a component of the system's overall function, not a standalone diagnostic AI.

7. The Type of Ground Truth Used

For the performance testing, the "ground truth" would be defined by:

  • Pre-specified success criteria: Internal benchmarks for how the device should perform.
  • International standards or FDA guidances: Established metrics for device safety and performance.
  • Engineering specifications: Design parameters for mechanical and electrical functions.
  • Clinical feasibility (Cadaver testing): Demonstrating the device's ability to be used effectively in a simulated surgical environment.

8. The Sample Size for the Training Set

The document does not mention a training set in the context of machine learning or AI algorithm development. The "software validation" mentioned refers to validation of the system's control logic and functions, not of a machine learning model trained on a dataset. The device's described function (compensating for gravitational force and maintaining instrument position upon release) relies on electromechanical control and sensor feedback rather than a learned model.
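
To make that distinction concrete, holding a released instrument in place is classically done with deterministic feedback control rather than a trained model. The sketch below shows a simple proportional-derivative (PD) hold loop whose setpoint is simply the joint position captured when hand force is removed; the gains are illustrative assumptions, not values from the submission.

```python
# Assumed PD gains; the submission does not describe the controller internals.
KP = 40.0  # proportional gain, N*m per rad of position error
KD = 2.0   # derivative gain, N*m per rad/s (damps oscillation)


def hold_torque(setpoint_rad: float, angle_rad: float,
                velocity_rad_s: float) -> float:
    """Deterministic PD feedback: drives the joint back toward the position
    captured at release. No dataset or trained model is involved; the
    reference is simply the encoder reading at the moment of release."""
    return KP * (setpoint_rad - angle_rad) - KD * velocity_rad_s
```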

9. How the Ground Truth for the Training Set Was Established

Since a training set for machine learning is not applicable as per point 8, the method for establishing its ground truth is also not provided.

§ 878.4960 Operating tables and accessories and operating chairs and accessories.

(a) Identification. Operating tables and accessories and operating chairs and accessories are AC-powered or air-powered devices, usually with movable components, intended for use during diagnostic examinations or surgical procedures to support and position a patient.

(b) Classification. Class I (general controls). The device is exempt from the premarket notification procedures in subpart E of part 807 of this chapter subject to § 878.9.