K Number
K240598
Manufacturer
Date Cleared
2024-06-03

(91 days)

Product Code
Regulation Number
878.4960
Panel
SU
Reference & Predicate Devices
Intended Use

The Maestro System is intended to hold and position laparoscopes and laparoscopic instruments during laparoscopic surgical procedures.

Device Description

The Moon Maestro System is a two-arm system that uses software and hardware to support surgeons in manipulating and maintaining instrument position. Motors compensate for the gravitational force acting on laparoscopic instruments without affecting surgeon control. Conventional laparoscopic tools are controlled and maneuvered exclusively by the surgeon, who grasps the handle of the laparoscopic instrument and moves it freely until it reaches the desired position. Once the surgeon's hand force is removed, the Maestro System returns to maintaining the specified tool position and instrument tip location (a simplified control-loop sketch follows the list below). This 510(k) is being submitted to implement design changes to the previously cleared Maestro System. The following modifications have been implemented:

  • System Positioning Guidance
  • System Hold Status Indication
  • Instrument Coupling
  • System Setup
  • Bedside Setup Joint Control
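
As an illustration of the hold-versus-free-drive behavior described above, the following minimal Python sketch shows one way such a control cycle could be structured. The names, gains, and force threshold are assumptions for demonstration only and are not taken from the 510(k) or from Moon Surgical's implementation.

```python
# Hypothetical sketch of the hold-vs-free-drive behavior described above.
# Names, thresholds, and the control law are illustrative assumptions,
# not the manufacturer's implementation.

import numpy as np

HAND_FORCE_THRESHOLD_N = 2.0   # assumed force above which the surgeon is "driving"
KP, KD = 80.0, 8.0             # assumed PD gains for the hold controller


def joint_torques(q, q_dot, q_hold, hand_force_n, gravity_torque):
    """Return motor torques for one control cycle.

    q, q_dot        -- current joint positions / velocities (np.ndarray)
    q_hold          -- joint positions captured when the surgeon released the tool
    hand_force_n    -- estimated magnitude of the surgeon's hand force (N)
    gravity_torque  -- model-based torque needed to cancel gravity (np.ndarray)
    """
    if hand_force_n > HAND_FORCE_THRESHOLD_N:
        # Free-drive: cancel gravity only, so the surgeon moves the tool freely.
        return gravity_torque
    # Hold: PD control toward the stored pose, plus gravity compensation,
    # so the tool tip stays where the surgeon left it.
    return gravity_torque + KP * (q_hold - q) - KD * q_dot
```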
AI/ML Overview

The provided text describes a 510(k) premarket notification for the "Maestro System (REF100)". This document is an FDA clearance letter and a 510(k) summary, which outlines the device, its intended use, and a comparison to a predicate device. It also briefly mentions the types of testing performed (design verification and validation testing).

However, the document does not provide a detailed breakdown of acceptance criteria and the results of a study proving the device meets those criteria, especially in the context of an AI/human-in-the-loop system that would typically have specific performance metrics like sensitivity, specificity, or accuracy.

The Maestro System is described as a two-arm system that utilizes software and hardware to support surgeons by manipulating and maintaining instrument position in laparoscopic surgical procedures. The modifications made to the device relate to user interface, setup guidance, and instrument coupling, rather than an AI component that would perform diagnostic or interpretive tasks.

Therefore, many of the requested elements for an AI-powered device's acceptance criteria and study results (e.g., sample size for test set, data provenance, number of experts for ground truth, MRMC study, standalone performance, ground truth type, training set details) are not applicable or not present in this document, as the device is characterized as an operating table accessory with electromechanical functions, not an AI/ML diagnostic or assistive imaging system.

The document indicates that the device has undergone design verification and validation testing, which are standard for medical devices to ensure they meet their specified requirements and are safe and effective for their intended use. These tests typically focus on engineering and functional performance rather than AI-specific metrics.

Here's a breakdown based on the available information and an explanation of why other requested items are not provided:


Acceptance Criteria and Study for Maestro System (REF100)

Based on the provided 510(k) summary, the Maestro System is an electromechanical device designed to assist in laparoscopic surgery by holding and positioning instruments, effectively an accessory to an operating table. It does not appear to be an AI/ML-driven diagnostic or image analysis system. Therefore, the types of "acceptance criteria" and "study" details requested for AI systems (e.g., sensitivity, specificity, expert consensus for ground truth, MRMC studies) are not pertinent to this device's classification and described functionality.

The testing performed is primarily focused on the device's mechanical and software functions to ensure safety and effectiveness in its intended use.

1. Table of Acceptance Criteria and Reported Device Performance:

The document lists various tests under "Testing Performed", which serve as verification and validation activities against unstated but implied acceptance criteria related to engineering specifications and functional safety. It does not provide explicit numerical acceptance criteria or performance results in the tabular format typical of AI system performance reporting (an illustrative verification sketch follows the table below).

| Test Category | Specific Tests Performed | Implied Acceptance Criteria (General) | Reported Performance (Generally Stated) |
|---|---|---|---|
| Functional Safety | Payload capacity, single fault condition, emergency stop, back-up fault response, drape integrity, system cleaning | Device maintains intended function and safety under various conditions, including faults. | Device found to be safe and effective; substantial equivalence established. (Specific results not detailed in this summary.) |
| Accuracy & Precision | Force accuracy, hold position accuracy, positioning guidance and collision detection, system positioning accuracy | Device holds and positions instruments accurately and precisely as intended. | Specific quantitative results not provided, but implicitly met for substantial equivalence. |
| Software & Control | System end-to-end workflow, bedside joint control, system setup, system latency, LED status, software verification, electrical safety, EMC | Software and controls function correctly, respond as expected, and meet electrical/EMC standards. | All clinical input requirements were validated; software verified; electrical and EMC compliance implied. (Specific results not detailed.) |
| Usability | Human factors testing, IFU inspection | Device is user-friendly and instructions for use are clear. | Human factors testing performed, implying usable and safe user interaction. |
| Physical Integrity | Design inspection, coupler performance | Device components are robust and the instrument coupling works reliably. | Design inspection performed; coupler performance tested. |
| Clinical Relevance | Cadaver testing | Device functions as intended in a simulated surgical environment. | Cadaver testing performed. |
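
As a purely illustrative example of how a test such as "Hold position accuracy" might be scored against an engineering specification, the sketch below computes tip-drift statistics from logged positions. The tolerance and the synthetic data are assumptions, since the summary reports no quantitative results.

```python
# Illustrative scoring of a "hold position accuracy" verification test.
# The tolerance and logged data are assumptions for demonstration only.

import numpy as np

HOLD_DRIFT_TOLERANCE_MM = 1.0  # hypothetical acceptance limit


def hold_accuracy(tip_positions_mm: np.ndarray, hold_point_mm: np.ndarray) -> dict:
    """Compute drift statistics of the instrument tip while the system holds position."""
    deviations = np.linalg.norm(tip_positions_mm - hold_point_mm, axis=1)
    return {
        "rms_mm": float(np.sqrt(np.mean(deviations**2))),
        "max_mm": float(deviations.max()),
        "pass": bool(deviations.max() <= HOLD_DRIFT_TOLERANCE_MM),
    }


# Example with synthetic logged data (e.g., tip positions sampled at 100 Hz for 60 s).
log = np.zeros((6000, 3)) + np.random.normal(0.0, 0.1, size=(6000, 3))
print(hold_accuracy(log, np.zeros(3)))
```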

2. Sample Size Used for the Test Set and Data Provenance:

  • Not specified for discrete quantitative test sets in the provided summary. The testing appears to be functional and engineering-based rather than data-driven in the sense of AI model validation.
  • Data Provenance: Not applicable in the context of clinical data for AI model training/testing. The "data" here refers to engineering test results.

3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:

  • Not applicable/Not specified. This device does not generate diagnostic outputs that require expert ground truth labeling in the way an AI diagnostic tool would. Testing likely involves engineers, usability experts, and potentially surgeons during cadaver or human factors testing, but not for "ground truth labeling" of imaging data.

4. Adjudication Method for the Test Set:

  • Not applicable. No adjudication method for ground truth labeling is mentioned or expected for this type of device.

5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done:

  • No, not indicated. MRMC studies are typically performed for AI-assisted diagnostic tools (e.g., radiology AI) to assess the impact of AI on human reader performance. This is not pertinent to the Maestro System's described function.

6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done:

  • Partially applicable. For an electromechanical device with software, "standalone performance" refers to the device's functional operation independent of human interaction within its specified parameters (e.g., holding force, positioning accuracy). The engineering and software verification tests (e.g., "Payload capacity," "Hold position accuracy," "Software verification") assess aspects of this standalone performance. The document states that "Testing described in this 510(k) consisted of verification of all system input requirements and product specifications." A minimal sketch of one such standalone check follows.
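
As a minimal sketch of what a standalone check of one product specification could look like (here, "System latency"), the snippet below times a command-to-motion interval. The callables and the 150 ms limit are hypothetical and do not appear in the 510(k).

```python
# Minimal sketch of a standalone "system latency" check: time from a control
# input event to the first detected arm response. The limit is an assumption.

import time

LATENCY_LIMIT_S = 0.150  # hypothetical acceptance limit


def measure_latency(send_command, motion_detected, timeout_s=1.0):
    """Return seconds between issuing a command and observing motion, or None on timeout."""
    t0 = time.monotonic()
    send_command()
    while time.monotonic() - t0 < timeout_s:
        if motion_detected():
            return time.monotonic() - t0
    return None


# Stand-in callables for illustration; real hardware hooks would replace them.
latency = measure_latency(lambda: None, lambda: True)
print("latency:", latency, "pass:", latency is not None and latency <= LATENCY_LIMIT_S)
```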

7. The Type of Ground Truth Used:

  • Engineering specifications and functional requirements. For this type of device, "ground truth" equates to the pre-defined target values for forces, positions, response times, and the successful completion of intended actions (e.g., maintaining position, allowing easy manipulation). It is established through design validation against known physical principles and user requirements, not from clinical outcomes or expert consensus on clinical data. A hypothetical spec-limit check is sketched below.
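
To make this concrete, the short sketch below represents such engineering "ground truth" as a table of acceptance limits and checks measured results against them. None of the limits are taken from the 510(k); they are illustrative placeholders.

```python
# "Ground truth" as engineering specifications: hypothetical acceptance
# limits (not from the 510(k)) checked against measured results.

SPEC_LIMITS = {
    "force_accuracy_n": 0.5,   # max allowed force error
    "hold_drift_mm": 1.0,      # max tip drift while holding
    "latency_s": 0.150,        # max command-to-motion latency
}


def verify(measured: dict) -> dict:
    """Return per-requirement pass/fail against the specified limits."""
    return {name: measured[name] <= limit for name, limit in SPEC_LIMITS.items()}


print(verify({"force_accuracy_n": 0.3, "hold_drift_mm": 0.6, "latency_s": 0.12}))
```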

8. The Sample Size for the Training Set:

  • Not applicable. This device is not an AI/ML model trained on a dataset in the conventional sense. Its "training" is in the form of engineering design, calibration, and software programming.

9. How the Ground Truth for the Training Set was Established:

  • Not applicable. As above, there is no "training set" for an AI model. The "ground truth" for the device's design and programming comes from engineering principles, user requirements, and clinical needs defined during the device development process.

§ 878.4960 Operating tables and accessories and operating chairs and accessories.

(a) Identification. Operating tables and accessories and operating chairs and accessories are AC-powered or air-powered devices, usually with movable components, intended for use during diagnostic examinations or surgical procedures to support and position a patient.

(b) Classification. Class I (general controls). The device is exempt from the premarket notification procedures in subpart E of part 807 of this chapter subject to § 878.9.