510(k) Data Aggregation
(220 days)
The Maestro System is intended to hold and position laparoscopes and laparoscopic instruments during laparoscopic surgical procedures.
The Moon Maestro System is a 2-arm system that uses software and hardware to support surgeons in manipulating and maintaining instrument position. Motors compensate for the gravitational force applied to laparoscopic instruments without affecting surgeon control. Conventional laparoscopic tools are controlled and maneuvered exclusively by the surgeon, who grasps the handle of the surgical laparoscopic instrument and moves it freely until the instrument reaches the desired position. Once the surgeon's hand force is removed, the Maestro System reverts to maintaining the specified tool position and instrument tip location. This 510(k) is being submitted to implement the ScoPilot feature. ScoPilot is an on-demand, optional, ease-of-use feature of the Maestro System that allows a laparoscope attached to a Maestro Arm to seamlessly follow a desired instrument tip. The surgeon remains in control of laparoscope positioning without having to disengage from the instrument in hand, helping maintain surgical flow and focus.
The provided text describes the Moon Surgical Maestro System, including its features and the testing performed for its 510(k) submission. However, the document does not contain a detailed table of acceptance criteria or the reported device performance against those criteria as would typically be found in a study summary with quantifiable results. It lists various tests performed but does not present the specific metrics and their outcomes in a structured format.
Therefore, I cannot fully complete the requested information for acceptance criteria and reported performance with quantitative data. I can, however, extract related information about the testing and ground truth establishment.
Here's an attempt to answer your questions based on the provided text, with limitations acknowledged:
1. Table of acceptance criteria and the reported device performance
The document states: "The ML model was trained and tuned through a K-fold cross-tuning process to optimize hyperparameters, until it reached our predefined performance requirements. An independent testing dataset containing videos was used to verify that the model performance (lower bound of the 95%CI for AP and AR) is compliant with our specification when using data including brands unseen during training/tuning."
While this indicates that performance requirements were predefined and that "AP" (presumably Average Precision) and "AR" (presumably Average Recall) were metrics, the specific numerical values for these "predefined performance requirements" (acceptance criteria) and the "compliant" reported performance are not detailed in the provided text.
Therefore, a table with specific numbers cannot be generated from the given information.
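For context on the quoted criterion, a "lower bound of the 95% CI" for a metric such as AP is commonly obtained by bootstrap resampling over test videos and comparing that bound against the predefined specification. The sketch below illustrates the pattern only; the per-video AP values and any threshold are hypothetical, not taken from the document:

```python
import random

def bootstrap_ci_lower(values, n_resamples=10_000, alpha=0.05, seed=0):
    """Percentile-bootstrap lower bound of the (1 - alpha) CI for the mean."""
    rng = random.Random(seed)
    n = len(values)
    means = sorted(
        sum(rng.choice(values) for _ in range(n)) / n
        for _ in range(n_resamples)
    )
    return means[int((alpha / 2) * n_resamples)]

# Hypothetical per-video AP scores from an independent test set.
per_video_ap = [0.91, 0.88, 0.94, 0.86, 0.90, 0.93, 0.89, 0.92]

lower = bootstrap_ci_lower(per_video_ap)
# Acceptance would then require `lower >= spec` for some predefined AP spec.
```

The same computation would apply to AR; the submission states only that the resulting bounds were "compliant with our specification," without giving the numbers.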
2. Sample size used for the test set and the data provenance
- Sample Size for Test Set: The document mentions "An independent testing dataset containing videos" was used. The specific number of videos or cases in this test set is not provided.
- Data Provenance: The document does not explicitly state the country of origin of the data or whether it was retrospective or prospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
The document mentions "ScoPilot Vision Performance" as one of the tests. For the ML model validation, it states: "The ML model was trained and tuned... An independent testing dataset containing videos was used to verify that the model performance...". However, the document does not specify the number of experts or their qualifications used to establish the ground truth for the test set.
4. Adjudication method for the test set
The document does not describe any adjudication method (e.g., 2+1, 3+1, none) for the test set.
5. Whether a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance
The document mentions "Human factors testing" and "Cadaver testing." However, there is no mention of a multi-reader multi-case (MRMC) comparative effectiveness study evaluating how much human readers improve with AI vs. without AI assistance. The described "ScoPilot" feature is an "on-demand, optional, ease-of-use feature" that allows the laparoscope to follow a desired instrument tip, aiming to help "maintain surgical flow and focus." This implies a focus on a specific functionality rather than a broad comparative effectiveness study with human readers.
6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done
Yes, a standalone performance evaluation of the ML model was performed. The text states: "An independent testing dataset containing videos was used to verify that the model performance (lower bound of the 95%CI for AP and AR) is compliant with our specification when using data including brands unseen during training/tuning." This describes an algorithm-only evaluation.
7. The type of ground truth used
For the "ScoPilot Vision Performance" and ML model validation, the ground truth would likely involve annotated video frames where the "desired instrument tip" is precisely identified. The text mentions "detection and tracking of specified instrument tips." However, it does not elaborate on how these ground truth annotations (e.g., expert consensus, pathology, outcomes data) were generated. Given the nature of the device (laparoscopic instrument tracking), it would most likely be based on expert manual annotation of video frames.
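For detection metrics like AP and AR, a predicted tip location is typically scored against the expert annotation by an overlap test such as intersection-over-union (IoU). A minimal sketch of that matching step, with illustrative boxes and threshold that are not from the document:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A predicted tip box counts as a true positive when its IoU with the
# expert-annotated ground-truth box meets a threshold (0.5 is a common choice).
is_true_positive = iou((40, 40, 60, 60), (42, 42, 62, 62)) >= 0.5  # True
```

Whether the submission used boxes, keypoints, or another annotation format for the instrument tip is not stated.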
8. The sample size for the training set
The document states: "The ML model was trained and tuned through a K-fold cross-tuning process to optimize hyperparameters..." The specific sample size (number of videos/frames) for the training set is not provided.
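The "K-fold cross-tuning process" quoted above is the standard cross-validation pattern: split the training data into K folds, train on K-1 folds, validate on the held-out fold, and keep the hyperparameters with the best mean validation score. A schematic sketch, where the training and scoring functions and the hyperparameter grid are placeholders rather than anything from the submission:

```python
def kfold_indices(n, k):
    """Split indices 0..n-1 into k roughly equal contiguous folds."""
    folds, start = [], 0
    for i in range(k):
        size = n // k + (1 if i < n % k else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_tune(data, hyperparam_grid, train_fn, score_fn, k=5):
    """Return the hyperparameters with the best mean held-out score."""
    folds = kfold_indices(len(data), k)
    best_hp, best_score = None, float("-inf")
    for hp in hyperparam_grid:
        scores = []
        for i, val_idx in enumerate(folds):
            train_idx = [j for f in folds[:i] + folds[i + 1:] for j in f]
            model = train_fn([data[j] for j in train_idx], hp)
            scores.append(score_fn(model, [data[j] for j in val_idx]))
        mean = sum(scores) / len(scores)
        if mean > best_score:
            best_hp, best_score = hp, mean
    return best_hp
```

Per the document, the independent test set (including instrument brands unseen during training/tuning) would then be scored once with the selected hyperparameters.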
9. How the ground truth for the training set was established
The document states "Machine Learning methodology used to develop software algorithm responsible for identifying tool tip." While it indicates that an ML model was trained to identify the tool tip, it does not explicitly state how the ground truth was established for this training set. Similar to the test set, it would logically involve expert annotation of video data to delineate the "tool tip."
(60 days)
The Senhance® Surgical System is intended to assist in the accurate control of laparoscopic instruments for visualization and endoscopic manipulation of tissue including grasping, cutting, blunt and sharp dissection, approximation, ligation, electrocautery, suturing, mobilization. The Senhance Surgical System is intended for use in laparoscopic gynecological surgery, colorectal surgery, cholecystectomy, and inguinal hernia repair. The system is indicated for adult use. It is intended for use by trained physicians in an operating room environment in accordance with the instructions for use.
The purpose of this submission is to seek clearance for an alternate Node component called the Smart Node (to be marketed as the Intelligent Surgical Unit (ISU)), which introduces enhanced image processing features and augments the endoscope movement capabilities of the TransEnterix® Senhance® Surgical System. The Smart Node adds three new methods of camera control for the surgeon operating at the Senhance Cockpit.
The provided text describes a 510(k) submission for the TransEnterix Senhance Surgical System with an alternate Node component called the Smart Node (marketed as the Intelligent Surgical Unit or ISU). This submission aims to demonstrate substantial equivalence to a previously cleared predicate device (K192877).
However, the document does not contain the following information regarding acceptance criteria and a study proving the device meets those criteria:
- A table of acceptance criteria and the reported device performance: The document lists types of testing performed (Bench Testing, Electrical Safety and Compatibility, Software Verification and Validation, Pre-Clinical Design Validation, Usability Testing) and states that performance was evaluated and requirements were met, but it does not provide specific quantitative acceptance criteria or detailed reported performance values in a table.
- Sample size used for the test set and the data provenance: For the pre-clinical design validation and usability testing, it mentions "a single-center" and "users who represented the intended primary user population," but specific sample sizes for the test set are not provided. Data provenance is not explicitly mentioned beyond the type of model used (live porcine model for pre-clinical, simulated use for usability).
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts: This information is not provided.
- Adjudication method: This information is not provided.
- Multi-reader multi-case (MRMC) comparative effectiveness study: This type of study is not mentioned. The device is a surgical system, not an AI diagnostic tool, so an MRMC study comparing human readers with and without AI assistance is not applicable in this context.
- Standalone performance: The document focuses on the integrated performance of the "Senhance Surgical System with Smart Node" and its comparison to the predicate device. It does not describe a standalone performance study of the Smart Node component in isolation, outside of specific functional tests.
- Type of ground truth used: For the pre-clinical design validation, it states it was conducted in a live porcine model, which "most closely represents the human anatomy," implying the physiological outcomes in this model served as validation. For usability, "user level requirements were assessed and found to be met," suggesting user feedback and task completion were the ground truth.
- Sample size for the training set: The document describes performance testing to support substantial equivalence for a new component (Smart Node) of an existing surgical system. It does not mention a "training set" in the context of machine learning model development. The software verification and validation are for "software modifications to support the subject Smart Node," not the training of a new AI model with a distinct training set.
- How the ground truth for the training set was established: As no training set is described for a machine learning model, this information is not applicable.
Summary of available information related to performance testing:
- Device: TransEnterix® Senhance® Surgical System with Smart Node (Intelligent Surgical Unit (ISU))
- Purpose of Submission: Seek clearance for an alternate Node component (Smart Node) that introduces enhanced image processing features and augments endoscope movement capabilities.
- Performance Tests Conducted:
- Bench Testing: Evaluated the performance of the Smart Node and the overall system, confirming compatibility, reliability, functionality, safety, and efficacy. (Specific criteria/results not provided).
- Electrical Safety and Compatibility: Compliance with IEC 60601-1, IEC 60601-1-2, and IEC 60601-2-18. (No specific numerical results/acceptance criteria given, just compliance statement).
- Software Verification and Validation Testing: Conducted on software modifications for the Smart Node, following FDA guidance for "major" level of concern software. (No specific test results/acceptance criteria given).
- Pre-Clinical Design Validation:
- Environment: Single-center, un-blinded, observational, simulated use design validation.
- Model: Live porcine model.
- Users: Users representing the intended primary user population.
- Ground Truth: Assessed user-level requirements; all found to be met.
- Sample Size/Provenance: Not specified beyond "single-center" and "live porcine model."
- Usability Testing:
- Modifications: Based on new Smart Node features.
- Study Type: Confirmatory summative study.
- Users: Performed by users in a simulated use environment.
- Ground Truth: Users were able to independently perform all critical tasks without use errors that would lead to harm.
- Sample Size/Provenance: Not specified.
Conclusion stated by the submitter: The performance testing supported the safety and functionality of the device and demonstrated that the device is substantially equivalent to the predicate device.