Search Results

Found 6 results

510(k) Data Aggregation

    K Number: K242323
    Date Cleared: 2025-03-14 (220 days)
    Regulation Number: 876.1500

    Why did this record match?
    510(k) Summary Text (full-text search): FQO / 878.4960

    Intended Use

    The Maestro System is intended to hold and position laparoscopes and laparoscopic instruments during laparoscopic surgical procedures.

    Device Description

    The Moon Maestro System is a 2-arm system which utilizes software and hardware to provide support to surgeons for manipulating and maintaining instrument position. Motors compensate for gravitational force applied to laparoscopic instruments, while surgeon control is not affected. Conventional laparoscopic tools are exclusively controlled and maneuvered by the surgeon, who grasps the handle of the surgical laparoscopic instrument and moves it freely until the instrument is brought to the desired position. Once surgeon hand force is removed, the Maestro system reverts to maintenance of the specified tool position and instrument tip location. This 510(k) is being submitted to implement the ScoPilot feature. ScoPilot is an on-demand, optional, ease-of-use feature of the Maestro System, allowing the laparoscope which is attached to a Maestro Arm to seamlessly follow a desired instrument tip. The surgeon remains in control of laparoscope positioning, without having to disengage from the instrument in their hand, helping maintain surgical flow and focus.
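The gravity-compensation behavior described above can be illustrated with the standard static torque balance for a single joint. This is a minimal sketch under assumed, hypothetical masses and geometry; none of these values or function names come from the submission:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def gravity_compensation_torque(mass_kg: float, com_dist_m: float,
                                angle_from_horizontal_rad: float) -> float:
    """Motor torque (N*m) needed to cancel gravity on one arm link.

    mass_kg: combined mass of link plus held instrument (hypothetical)
    com_dist_m: distance from the joint axis to the center of mass
    angle_from_horizontal_rad: 0 when the link is horizontal (worst case)
    """
    return mass_kg * G * com_dist_m * math.cos(angle_from_horizontal_rad)
```

With each motor supplying exactly this torque, the arm is statically balanced: the surgeon feels only the instrument itself, and the arm holds position when the hand force is removed.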

    AI/ML Overview

    The provided text describes the Moon Surgical Maestro System, including its features and the testing performed for its 510(k) submission. However, the document does not contain a detailed table of acceptance criteria or the reported device performance against those criteria as would typically be found in a study summary with quantifiable results. It lists various tests performed but does not present the specific metrics and their outcomes in a structured format.

    Therefore, I cannot fully provide the requested acceptance criteria and reported performance with quantitative data. I can, however, extract related information about the testing and ground truth establishment.

    Here's an attempt to answer your questions based on the provided text, with limitations acknowledged:

    1. Table of acceptance criteria and the reported device performance

    The document states: "The ML model was trained and tuned through a K-fold cross-tuning process to optimize hyperparameters, until it reached our predefined performance requirements. An independent testing dataset containing videos was used to verify that the model performance (lower bound of the 95%CI for AP and AR) is compliant with our specification when using data including brands unseen during training/tuning."

    While this indicates that performance requirements were predefined and that "AP" (presumably Average Precision) and "AR" (presumably Average Recall) were metrics, the specific numerical values for these "predefined performance requirements" (acceptance criteria) and the "compliant" reported performance are not detailed in the provided text.

    Therefore, a table with specific numbers cannot be generated from the given information.
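The quoted acceptance approach — requiring the lower bound of a 95% confidence interval for a metric to clear a predefined threshold — can be sketched as follows. The percentile-bootstrap method and all threshold values here are illustrative assumptions; the submission does not state how the CI was computed:

```python
import random

def ci_lower_bound(scores, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile-bootstrap lower bound of the (1 - alpha) CI for the mean score."""
    rng = random.Random(seed)
    n = len(scores)
    boot_means = sorted(
        sum(rng.choice(scores) for _ in range(n)) / n
        for _ in range(n_boot)
    )
    return boot_means[int((alpha / 2) * n_boot)]

def meets_spec(ap_per_video, ar_per_video, ap_min, ar_min):
    """Compliant only if the CI lower bound of both AP and AR clears its spec."""
    return (ci_lower_bound(ap_per_video) >= ap_min
            and ci_lower_bound(ar_per_video) >= ar_min)
```

Testing against the CI lower bound rather than the point estimate is the conservative choice: a small or lucky test set cannot pass the specification on its mean alone.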

    2. Sample size used for the test set and the data provenance

    • Sample Size for Test Set: The document mentions "An independent testing dataset containing videos" was used. The specific number of videos or cases in this test set is not provided.
    • Data Provenance: The document does not explicitly state the country of origin of the data or whether it was retrospective or prospective.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    The document mentions "ScoPilot Vision Performance" as one of the tests. For the ML model validation, it states: "The ML model was trained and tuned... An independent testing dataset containing videos was used to verify that the model performance...". However, the document does not specify the number of experts or their qualifications used to establish the ground truth for the test set.

    4. Adjudication method for the test set

    The document does not describe any adjudication method (e.g., 2+1, 3+1, none) for the test set.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

    The document mentions "Human factors testing" and "Cadaver testing." However, there is no mention of a multi-reader multi-case (MRMC) comparative effectiveness study evaluating how much human readers improve with AI vs. without AI assistance. The described "ScoPilot" feature is an "on-demand, optional, ease-of-use feature" that allows the laparoscope to follow a desired instrument tip, aiming to help "maintain surgical flow and focus." This implies a focus on a specific functionality rather than a broad comparative effectiveness study with human readers.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done

    Yes, a standalone performance evaluation of the ML model was performed. The text states: "An independent testing dataset containing videos was used to verify that the model performance (lower bound of the 95%CI for AP and AR) is compliant with our specification when using data including brands unseen during training/tuning." This describes an algorithm-only evaluation.

    7. The type of ground truth used

    For the "ScoPilot Vision Performance" and ML model validation, the ground truth would likely involve annotated video frames where the "desired instrument tip" is precisely identified. The text mentions "detection and tracking of specified instrument tips." However, it does not elaborate on how these ground truth annotations (e.g., expert consensus, pathology, outcomes data) were generated. Given the nature of the device (laparoscopic instrument tracking), it would most likely be based on expert manual annotation of video frames.
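Ground truth of this kind is typically evaluated by matching predicted tip locations against annotated ones within a pixel-distance threshold, from which precision and recall (and hence AP/AR) follow. A minimal sketch — the threshold, coordinates, and function names are illustrative, not from the submission:

```python
def match_detections(predicted, annotated, max_dist_px=10.0):
    """Greedily match predicted tip coordinates to ground-truth annotations.

    predicted, annotated: lists of (x, y) pixel coordinates for one frame.
    Returns (true_positives, false_positives, false_negatives).
    """
    remaining = list(annotated)
    tp = 0
    for px, py in predicted:
        # find the nearest not-yet-matched annotation within the threshold
        best = None
        for i, (ax, ay) in enumerate(remaining):
            d = ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
            if d <= max_dist_px and (best is None or d < best[1]):
                best = (i, d)
        if best is not None:
            remaining.pop(best[0])
            tp += 1
    fp = len(predicted) - tp   # predictions with no nearby annotation
    fn = len(remaining)        # annotations no prediction matched
    return tp, fp, fn

def precision_recall(tp, fp, fn):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```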

    8. The sample size for the training set

    The document states: "The ML model was trained and tuned through a K-fold cross-tuning process to optimize hyperparameters..." The specific sample size (number of videos/frames) for the training set is not provided.

    9. How the ground truth for the training set was established

    The document states "Machine Learning methodology used to develop software algorithm responsible for identifying tool tip." While it indicates that an ML model was trained to identify the tool tip, it does not explicitly state how the ground truth was established for this training set. Similar to the test set, it would logically involve expert annotation of video data to delineate the "tool tip."


    K Number: K240598
    Date Cleared: 2024-06-03 (91 days)
    Regulation Number: 878.4960

    Why did this record match?
    510(k) Summary Text (full-text search):

    Gardnerville, Nevada 89460

    Re: K240598

    Trade/Device Name: Maestro System (REF100)
    Regulation Number: 21 CFR 878.4960
    Product Code / Regulation: FQO / 878.4960

    Intended Use

    The Maestro System is intended to hold and position laparoscopes and laparoscopic instruments during laparoscopic surgical procedures.

    Device Description

    The Moon Maestro System is a 2-arm system which utilizes software and hardware to provide support to surgeons for manipulating and maintaining instrument position. Motors compensate for gravitational force applied to laparoscopic instruments, while surgeon control is not affected. Conventional laparoscopic tools are exclusively controlled and maneuvered by the surgeon, who grasps the handle of the surgical laparoscopic instrument and moves it freely until the instrument is brought to the desired position. Once surgeon hand force is removed, the Maestro system reverts to maintenance of the specified tool position and instrument tip location. This 510(k) is being submitted to implement design changes to the previously cleared Maestro System. The following modifications have been implemented to the Maestro System:

    • System Positioning Guidance
    • System Hold Status Indication
    • Instrument Coupling
    • System Setup
    • Bedside Setup Joint Control
    AI/ML Overview

    The provided text describes a 510(k) premarket notification for the "Maestro System (REF100)". This document is an FDA clearance letter and a 510(k) summary, which outlines the device, its intended use, and a comparison to a predicate device. It also briefly mentions the types of testing performed (design verification and validation testing).

    However, the document does not provide a detailed breakdown of acceptance criteria and the results of a study proving the device meets those criteria, especially in the context of an AI/human-in-the-loop system that would typically have specific performance metrics like sensitivity, specificity, or accuracy.

    The Maestro System is described as a two-arm system that utilizes software and hardware to support surgeons by manipulating and maintaining instrument position in laparoscopic surgical procedures. The modifications made to the device relate to user interface, setup guidance, and instrument coupling, rather than an AI component that would perform diagnostic or interpretive tasks.

    Therefore, many of the requested elements for an AI-powered device's acceptance criteria and study results (e.g., sample size for test set, data provenance, number of experts for ground truth, MRMC study, standalone performance, ground truth type, training set details) are not applicable or not present in this document, as the device is characterized as an operating table accessory with electromechanical functions, not an AI/ML diagnostic or assistive imaging system.

    The document indicates that the device has undergone design verification and validation testing, which are standard for medical devices to ensure they meet their specified requirements and are safe and effective for their intended use. These tests typically focus on engineering and functional performance rather than AI-specific metrics.

    Here's a breakdown based on the available information and an explanation of why other requested items are not provided:


    Acceptance Criteria and Study for Maestro System (REF100)

    Based on the provided 510(k) summary, the Maestro System is an electromechanical device designed to assist in laparoscopic surgery by holding and positioning instruments, effectively an accessory to an operating table. It does not appear to be an AI/ML-driven diagnostic or image analysis system. Therefore, the types of "acceptance criteria" and "study" details requested for AI systems (e.g., sensitivity, specificity, expert consensus for ground truth, MRMC studies) are not pertinent to this device's classification and described functionality.

    The testing performed is primarily focused on the device's mechanical and software functions to ensure safety and effectiveness in its intended use.

    1. Table of Acceptance Criteria and Reported Device Performance:

    The document lists various "Testing Performed" which serve as the verification and validation activities against unstated, but implied, acceptance criteria related to engineering specifications and functional safety. It does not provide explicit numerical acceptance criteria or performance results in a table format typical for AI system performance.

    Test Category | Specific Tests Performed | Implied Acceptance Criteria (General) | Reported Performance (Generally Stated)
    Functional Safety | Payload capacity, Single fault condition, Emergency stop, Back-up fault response, Drape integrity, System cleaning | Device maintains intended function and safety under various conditions, including faults. | Device found to be safe and effective; substantial equivalence established. (Specific results not detailed in this summary.)
    Accuracy & Precision | Force accuracy, Hold position accuracy, Positioning guidance and collision detection, System positioning accuracy | Device holds and positions instruments accurately and precisely as intended. | Specific quantitative results not provided, but implicitly met for substantial equivalence.
    Software & Control | System end-to-end workflow, Bedside joint control, System setup, System latency, LED status, Software verification, Electrical safety, EMC | Software and controls function correctly, respond as expected, and meet electrical/EMC standards. | All clinical input requirements were validated. Software verified. Electrical and EMC compliance implied. (Specific results not detailed.)
    Usability | Human factors testing, IFU inspection | Device is user-friendly and instructions for use are clear. | Human factors testing performed. Implies usability and safety in user interaction.
    Physical Integrity | Design inspection, Coupler performance | Device components are robust and the instrument coupling works reliably. | Design inspection performed. Coupler performance tested.
    Clinical Relevance | Cadaver testing | Device functions as intended in a simulated surgical environment. | Cadaver testing performed.

    2. Sample Size Used for the Test Set and Data Provenance:

    • Not specified for discrete quantitative test sets in the provided summary. The testing appears to be functional and engineering-based rather than data-driven in the sense of AI model validation.
    • Data Provenance: Not applicable in the context of clinical data for AI model training/testing. The "data" here refers to engineering test results.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:

    • Not applicable/Not specified. This device does not generate diagnostic outputs that require expert ground truth labeling in the way an AI diagnostic tool would. Testing likely involves engineers, usability experts, and potentially surgeons during cadaver or human factors testing, but not for "ground truth labeling" of imaging data.

    4. Adjudication Method for the Test Set:

    • Not applicable. No adjudication method for ground truth labeling is mentioned or expected for this type of device.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done:

    • No, not indicated. MRMC studies are typically performed for AI-assisted diagnostic tools (e.g., radiology AI) to assess the impact of AI on human reader performance. This is not pertinent to the Maestro System's described function.

    6. If a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Evaluation was Done:

    • Partially applicable. For an electromechanical device with software, "standalone performance" refers to the device's functional operation independent of human interaction within its specified parameters (e.g., holding force, positioning accuracy). The various engineering and software verification tests (e.g., "Payload capacity," "Hold position accuracy," "Software verification") would assess various aspects of its standalone performance. The document states "Testing described in this 510(k) consisted of verification of all system input requirements and product specifications."

    7. The Type of Ground Truth Used:

    • Engineering specifications and functional requirements. For this type of device, "ground truth" equates to the pre-defined target values for forces, positions, response times, and the successful completion of intended actions (e.g., maintaining position, allowing easy manipulation). This is established through design validation against known physical principles and user requirements, not from clinical outcomes or expert consensus on clinical data.
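Verification against engineering specifications of this kind reduces to comparing measured values with predefined limits. A minimal sketch — every spec name and limit below is hypothetical, since the summary reports no numbers:

```python
# Hypothetical spec limits for illustration only -- not values from the 510(k).
SPEC_MAX = {"hold_position_drift_mm": 1.0, "latency_ms": 150.0}  # must not exceed
SPEC_MIN = {"payload_kg": 2.0}                                   # must meet or exceed

def verify(measured: dict) -> dict:
    """Pass/fail per requirement: max-limits must not be exceeded,
    min-limits must be met."""
    results = {k: measured[k] <= lim for k, lim in SPEC_MAX.items()}
    results.update({k: measured[k] >= lim for k, lim in SPEC_MIN.items()})
    return results

def all_pass(measured: dict) -> bool:
    """Overall verification verdict across all requirements."""
    return all(verify(measured).values())
```

Here the "ground truth" is the spec table itself: a test passes when the measurement falls on the right side of its predefined limit, with no expert labeling involved.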

    8. The Sample Size for the Training Set:

    • Not applicable. This device is not an AI/ML model trained on a dataset in the conventional sense. Its "training" is in the form of engineering design, calibration, and software programming.

    9. How the Ground Truth for the Training Set was Established:

    • Not applicable. As above, there is no "training set" for an AI model. The "ground truth" for the device's design and programming comes from engineering principles, user requirements, and clinical needs defined during the device development process.

    K Number: K221410
    Device Name: Maestro Platform
    Date Cleared: 2022-12-02 (200 days)
    Regulation Number: 878.4960

    Why did this record match?
    510(k) Summary Text (full-text search):

    Gardnerville, Nevada 89460

    Re: K221410

    Trade/Device Name: Maestro Platform
    Regulation Number: 21 CFR 878.4960
    Classification Name(s): Table, Operating-Room, AC-Powered
    Product Code / Regulation: FQO / 21 CFR 878.4960

    Intended Use

    The Maestro System is intended to hold and position laparoscopic instruments during laparoscopic surgical procedures.

    Device Description

    The Moon Maestro System utilizes software and hardware to provide support to surgeons for manipulating and maintaining instrument position. Motors compensate for gravitational force applied to laparoscopic instruments, while surgeon control is not affected. Conventional laparoscopic tools are exclusively controlled and maneuvered by the surgeon, who grasps the handle of the surgical laparoscopic instrument and moves it freely until the instrument is brought to the desired position. Once surgeon hand force is removed, the Maestro system reverts to maintenance of the specified tool position and instrument tip location.

    AI/ML Overview

    The Moon Surgical Maestro System is a device designed to hold and position laparoscopic instruments during surgical procedures. The provided FDA 510(k) summary outlines its design, intended use, and comparison to a predicate device (ENDEX Endoscopic Positioning System K936308) to establish substantial equivalence.

    Here's an analysis of the acceptance criteria and the study that proves the device meets them, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly list "acceptance criteria" with specific quantitative thresholds that are then directly matched with reported device performance in a single table. Instead, it describes various performance tests conducted. The "Performance Testing" section states that "All testing had passed in accordance with the pre-specified success criteria, international standards or FDA guidances." This implies that the acceptance criteria were internal to the company, based on these standards and guidances.

    However, we can infer some performance aspects based on the comparison table and the performance testing section.

    Acceptance Criteria (Inferred/Stated) | Reported Device Performance
    Functional & Mechanical
    Positional reach & trocar accommodation | Passed
    Payload capacity | 4.4 lbs tested (vs. predicate's 5 lbs maximum force generated)
    System cart stability | Passed
    Emergency stop | Passed
    Gravity compensation accuracy | Passed
    Coupler performance | Passed
    Brake hold | Passed
    Safety & Electrical
    Electrical insulation | Passed
    Electrical safety | Passed
    EMC (Electromagnetic Compatibility) | Passed
    Single fault condition | Passed (Automatic System Performance Monitoring, redundant encoders, velocity/acceleration/current/torque limits, brakes engage if power removed)
    LED status | Passed (LEDs change colors, pulse for status; red for critical fault)
    Back-up fault response | Brakes engage on motorized axis in the event of a fault state to prohibit any arm motion
    Biocompatibility & Sterilization
    Sterilization validation (for couplers) | Passed (steam sterilization)
    Sterility barrier (drape integrity) | Passed
    Software
    Software validation | Passed
    Usability
    Human factors testing | Passed
    Cadaver testing | Passed
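The single-fault behavior described above (limit monitoring, redundant encoders, brakes engaging on fault or power loss) follows a common fail-safe interlock pattern. A minimal sketch, with all limit values and names hypothetical rather than taken from the 510(k):

```python
from dataclasses import dataclass

# Hypothetical limits for illustration only -- not values from the 510(k).
VEL_LIMIT_RAD_S = 2.0
CURRENT_LIMIT_A = 5.0
ENCODER_TOL_RAD = 0.01

@dataclass
class AxisState:
    velocity: float   # rad/s
    current: float    # motor current, A
    encoder_a: float  # redundant position encoders, rad
    encoder_b: float

def fault_detected(s: AxisState) -> bool:
    """Trip on any limit violation or disagreement between redundant encoders."""
    return (abs(s.velocity) > VEL_LIMIT_RAD_S
            or abs(s.current) > CURRENT_LIMIT_A
            or abs(s.encoder_a - s.encoder_b) > ENCODER_TOL_RAD)

def next_brake_state(brakes_engaged: bool, s: AxisState) -> bool:
    """Latching fail-safe: once engaged (fault or power loss), brakes stay on."""
    return brakes_engaged or fault_detected(s)
```

The latching design choice matters: a transient sensor glitch must not release the brakes, so the only way out of the braked state is an explicit, supervised reset.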

    2. Sample Size Used for the Test Set and the Data Provenance

    The document does not specify the sample size used for the various performance tests (e.g., how many units were tested for payload capacity, how many cadaver procedures were performed).

    The data provenance is internal to Moon Surgical, described as "Design validation testing." No specific country of origin for the data is mentioned, nor is it explicitly stated whether the tests were retrospective or prospective, though "design validation testing" typically implies prospective testing of the device.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts

    The document does not specify the number of experts or their qualifications used to establish ground truth for the test set. It mentions "Cadaver testing" and "Human factors testing" which would implicitly involve experts (surgeons, medical professionals), but details are absent.

    4. Adjudication Method for the Test Set

    The document does not specify any formal adjudication method (e.g., 2+1, 3+1). The testing appears to be based on pre-specified success criteria and compliance with standards, implying that results were evaluated against these benchmarks.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

    A Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done. The submission focuses on demonstrating substantial equivalence to a predicate device through technical comparison and performance testing, rather than a clinical effectiveness study involving human readers or operators and their performance with and without AI assistance. The device is a mechanical robotic assist system, not an AI diagnostic tool.

    6. If a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Evaluation Was Done

    The device is a system that supports surgeons, meaning it inherently has a "human-in-the-loop." Therefore, a "standalone algorithm only" performance evaluation is not applicable in the context of this device. The software's role is to maintain instrument position based on surgeon input (movements and release). Software validation was performed, but this is a component of the system's overall function, not a standalone diagnostic AI.

    7. The Type of Ground Truth Used

    For the performance testing, the "ground truth" would be defined by:

    • Pre-specified success criteria: Internal benchmarks for how the device should perform.
    • International standards or FDA guidances: Established metrics for device safety and performance.
    • Engineering specifications: Design parameters for mechanical and electrical functions.
    • Clinical feasibility (Cadaver testing): Demonstrating the device's ability to be used effectively in a simulated surgical environment.

    8. The Sample Size for the Training Set

    The document does not mention a training set in the context of machine learning or AI algorithm development. The "software validation" mentioned would refer to the validation of its control logic and functions, not typically a machine learning model trained on a dataset. The device's function as described (compensating for gravitational force, maintaining instrument position upon release) relies on electromechanical control and sensor feedback, not a learned model from a training set.

    9. How the Ground Truth for the Training Set Was Established

    Since a training set for machine learning is not applicable as per point 8, the method for establishing its ground truth is also not provided.


    K Number: K090136
    Date Cleared: 2009-03-20 (58 days)
    Regulation Number: 878.4960

    Why did this record match?
    510(k) Summary Text (full-text search):

    5085 SRT Surgical Table

    Operating Table classified as Class I device (Product Code [FQO]) per 21 CFR 878.4960
    44060-1834

    MAR 20 2009

    Re: K090136

    Trade/Device Name: STERIS® 5085 SRT
    Regulation Number: 21 CFR 878.4960

    Intended Use

    The STERIS® 5085 SRT is a general surgical table with high patient weight capacity, extended width capability, and lower minimal table top elevation. The STERIS® 5085 SRT accommodates all general surgical procedures including but not limited to, cardiac and vascular, endoscopic, gynecology, urology, nephrectomy, neurology, ophthalmologic, orthopedics and other procedures requiring intraoperative fluoroscopic C-arm imaging and also supports laparoscopic surgical technique for the largest surgical patients.

    The STERIS® 5085 SRT enables patient transport on hard level surfaces within the surgical suite (from pre-operative areas to the operating room and again from the operating room to post operative recovery).

    Device Description

    The STERIS® 5085 SRT Surgical Table is a mobile, electro-hydraulically operated surgical table designed to support all general surgical procedures including cardiac and vascular, endoscopic, gynecology, urology, nephrectomy, neurology, ophthalmology and orthopedics with the addition of STERIS table accessories. The STERIS® 5085 SRT Surgical Table features powered lateral tilt, Trendelenburg / reverse Trendelenburg, Zip-Slide™ movable tabletop, and adjustable height functions. The STERIS® 5085 SRT has a patient transport feature that allows the user to transport patients to and from the surgical suite on hard level surfaces.

    AI/ML Overview

    The provided text is a 510(k) summary for the STERIS® 5085 SRT Surgical Table, which is a medical device. This document focuses on demonstrating substantial equivalence to predicate devices and adherence to relevant safety and performance standards.

    There is no study described in the provided text that defines acceptance criteria for a device's performance based on diagnostic metrics (like sensitivity, specificity, or accuracy) and then reports on how the device meets those criteria using clinical data or an AI-driven analysis.

    Instead, the document primarily discusses:

    • Device Name, Classification, and Predicate Devices: Identifies the device and its regulatory context.
    • Description of Device and Intended Use: Explains what the device does and for whom it is intended.
    • Safety and Substantial Equivalence: States that the device is substantially equivalent to predicate devices and complies with various voluntary safety standards (UL, IEC, EN/IEC, CAN/CSA). These standards relate to electrical safety, electromagnetic compatibility, usability, and specific requirements for operating tables.

    The "acceptance criteria" presented are in the form of compliance with these safety and performance standards, and the "study that proves the device meets the acceptance criteria" refers to testing conducted to ensure this compliance. However, this is not a study in the context of evaluating a diagnostic or AI-powered medical device's performance against clinical ground truth.

    Therefore, I cannot provide the requested information about acceptance criteria and device performance in the format of diagnostic metrics because this type of study is not presented in the provided text. The document describes compliance with engineering and safety standards, not a clinical performance study using ground truth and diagnostic metrics.

    None of the specific subsections (1-9) of your request can be directly addressed from this document, as they pertain to clinical performance studies, ground truth establishment, expert adjudication, or AI performance, which are not detailed here.


    K Number: K080506
    Date Cleared: 2008-06-09 (105 days)
    Regulation Number: 890.3850

    Why did this record match?
    510(k) Summary Text (full-text search):

    The Martin Examination Table is a 510(k) exempt device, 21 CFR 878.4960, product code LGX.
    Regulation Number: 890.3850 | 878.4960

    Device Description: The Martin Examination Table is a 510(k) exempt device, 21 CFR 878.4960, product code LGX.

    Intended Use

    The Martin Chair Model C4S1 is indicated for providing mobility to persons limited to a sitting position. It is also specifically indicated to transfer a patient to and from the Martin Examination Table.

    The Martin Examination Table is indicated for use during diagnostic examinations or surgical procedures to support and position a patient.

    Device Description

    The Martin Chair Model C4S1 mechanical wheelchair is an indoor/outdoor wheelchair that has a base with two larger rear wheels and two smaller front wheels and a seat. The wheelchair is intended to be manually propelled by a person seated in the wheelchair or by an attendant or clinician. The device is made from composites of steel, plastics and fabrics. The wheelchair is for use by adult persons.

    The wheelchair can be secured to a compatible, electrically elevated examination table which allows for the seat of the wheelchair to become part of the examination table. This removes the need for the patient to be lifted during transfer from the wheelchair to the examination table. The wheelchair is latched to the examination table and the side frame and wheels are removed for the examination. The sides and wheels are replaced prior to lowering the examination table allowing the wheelchair to be used according to its intended use.

    The Martin Examination Table is a 510(k) exempt device, 21 CFR 878.4960, product code LGX. It is an accessory to the Martin Chair. It is a device intended as a powered examination table to provide positioning and support to patients during general examinations and procedures. It is intended for medical purposes as an electrically operated table with movable components that can be adjusted to various positions, the same intended use as other currently marketed powered tables. The Martin Examination Table is a standard powered examination table that includes standard components and features of other currently marketed powered examination tables including side rails for additional safety. The Martin Examination Table includes latches under the seat cushion that are compatible with the fixed metal receivers of the Martin Chair Model C4S1,

    AI/ML Overview

    Here's an analysis of the provided text regarding the acceptance criteria and supporting study for the K080506 Martin Chair C4S1:

    Important Note: The provided document is a 510(k) Summary for a mechanical wheelchair, which is a relatively low-risk Class I device. As such, the depth of performance testing and the types of studies typically required for more complex or higher-risk devices (like those involving AI algorithms, for instance) are not present here. The questions you've asked are more geared towards AI/software as a medical device (SaMD) clearances. I will answer them to the best of my ability based only on the provided text, and will explicitly state if information is not available.


    Acceptance Criteria and Reported Device Performance

    | Acceptance Criteria | Reported Device Performance |
    |---|---|
    | **Mechanical Wheelchair Performance (Martin Chair C4S1)** | |
    | Meets applicable FDA-recognized ANSI/RESNA consensus standards for mechanical wheelchairs. | "The Martin Chair Model C4S1 mechanical wheelchair meets the applicable FDA recognized ANSI/RESNA consensus standards tested by Human Engineering Research Laboratories (HERL) for mechanical wheelchairs and has successfully passed testing." |
    | Meets flame-retardant standards. | "Data within the 510(k) demonstrates successful performance against flame retardant standards." |
    | **Examination Table Performance (Martin Examination Table)** | |
    | Meets UL 60601-1. | The Martin Examination Table was tested in accordance with UL 60601-1. |
    | Meets UL 60601-1-2. | The Martin Examination Table was tested in accordance with UL 60601-1-2. |
    | Meets CSA 22.2 No 601-1. | The Martin Examination Table was tested in accordance with CSA 22.2 No 601-1. |

    Study Details

    1. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

      • Sample Size: Not specified in the provided text. The document states that the devices (Martin Chair C4S1 and Martin Examination Table) were "tested" and "passed testing" according to specific standards (ANSI/RESNA, UL, CSA). These standards typically involve a defined number of test samples (e.g., specific units of the wheelchair or examination table) for various mechanical, safety, and performance evaluations, but the exact number isn't detailed in this summary.
      • Data Provenance: Not explicitly stated. The testing was performed by "Human Engineering Research Laboratories (HERL)" for the wheelchair and implicitly by a qualified entity for the examination table standards (UL/CSA). The country of origin of testing data is not mentioned, but given the US FDA submission, it's presumed to be from a reputable testing facility. The nature of these tests is prospective (the actual devices are subjected to specified physical tests).
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

      • This question is not applicable to the type of device and testing described. The "ground truth" for a mechanical wheelchair and examination table is established by direct physical measurement, stress testing, and functional evaluation against engineering and safety standards, not by expert interpretation of data like in medical imaging. The "experts" involved would be engineers and technicians at the testing laboratories (HERL, UL, CSA labs) who are qualified to perform and interpret the results of these standards-based tests. Their specific numbers and qualifications are not detailed in this 510(k) summary.
    3. Adjudication method (e.g. 2+1, 3+1, none) for the test set

      • Not applicable. Adjudication methods like 2+1 are used for human interpretation tasks, especially in clinical studies or for establishing ground truth in AI model training/testing. This submission pertains to physical device testing against established engineering and safety standards, where outcomes are typically objectively measured (e.g., "passed" or "failed" a specific physical test).
    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI assistance versus without it

      • No. An MRMC study is relevant for AI-assisted image interpretation or diagnostic tools, where human readers (e.g., radiologists) are involved. This submission is for a mechanical wheelchair and examination table, which do not involve human "readers" or AI assistance in a diagnostic context.
    5. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

      • No. This device is a mechanical wheelchair and an examination table; it does not contain an AI algorithm. Therefore, "standalone" algorithm performance is not relevant.
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

      • The "ground truth" for this device's performance is objective compliance with recognized consensus standards (ANSI/RESNA for wheelchairs, UL/CSA for examination tables) and physical performance criteria (e.g., successful flame retardancy, meeting specified mechanical stress tolerances, electrical safety adherence). There is no "expert consensus" or "pathology" in the sense of medical diagnosis; rather, it's engineering and safety validation.
    7. The sample size for the training set

      • Not applicable. This device does not use machine learning or AI, so there is no "training set."
    8. How the ground truth for the training set was established

      • Not applicable, as there is no AI or machine learning "training set" for this device.
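    Since every criterion above resolves to an objective pass/fail outcome against a consensus standard, the validation logic for this kind of device can be sketched in a few lines. The test names and results below are hypothetical illustrations, not values taken from the 510(k):

    ```python
    # Hypothetical tally of standards-based bench results. Each test is an
    # objective pass/fail against a recognized consensus standard -- no
    # expert adjudication or ground-truth labeling is involved.
    results = {
        "ANSI/RESNA static stability": "pass",
        "ANSI/RESNA impact strength": "pass",
        "Flame retardancy (upholstery)": "pass",
        "UL 60601-1 electrical safety": "pass",
    }

    def all_passed(results: dict[str, str]) -> bool:
        """A device clears this kind of testing only if every standard passes."""
        return all(outcome == "pass" for outcome in results.values())

    print(all_passed(results))  # True
    ```

    The contrast with AI/SaMD submissions is exactly this: the "ground truth" is the standard's published tolerance, so a single failed test is decisive and no consensus process is needed.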

    K Number
    K965001
    Date Cleared
    1997-07-31

    (227 days)

    Product Code
    Regulation Number
    876.1500
    Intended Use

    The Intuitive Surgical Endoscopic Instrument Control System is intended for accurate control of selected endoscopic instruments including, rigid endoscopes, blunt endoscopic dissectors and endoscopic retractors during thoracoscopic and laparoscopic surgical procedures. It is intended to be used by professionals in operating room environments.

    Device Description

    The Intuitive Surgical Endoscopic Instrument Control System is an electro-mechanical device consisting of a Surgical Console including "Master Manipulators", articulated Instrument Control Arms or "Slave Manipulators" and Limited Reuse Tools or end effectors.

    AI/ML Overview

    The provided text describes the 510(k) summary for the Intuitive Surgical Endoscopic Instrument Control System (K965001). However, it does not contain the specific details required to complete all sections of your request, particularly regarding detailed acceptance criteria, specific reported device performance metrics against those criteria, and granular information about the study design (e.g., sample sizes for test and training sets, expert qualifications, ground truth establishment methods for training data).

    Here's an analysis based on the available information and what is missing:


    Acceptance Criteria and Study for Intuitive Surgical Endoscopic Instrument Control System (K965001)

    The submission focuses on establishing substantial equivalence to predicate devices rather than demonstrating performance against predefined, quantitative acceptance criteria via a comprehensive clinical study as might be seen for novel devices.

    1. Table of Acceptance Criteria and Reported Device Performance

    Based on the provided text, specific quantitative acceptance criteria and corresponding reported device performance metrics are NOT explicitly stated. The document refers to "internal specification requirements" and "external standard requirements and predicate performance expectations" but does not detail what these are.

    | Acceptance Criteria (Quantitative/Qualitative) | Reported Device Performance |
    |---|---|
    | Reproducibility | "All data fell within... internal specification requirements as well as external standard requirements and predicate performance expectations." |
    | Hysteresis | "All data fell within... internal specification requirements as well as external standard requirements and predicate performance expectations." |
    | Functional Adequacy | "All data fell within... internal specification requirements as well as external standard requirements and predicate performance expectations." |
    | Substantial Equivalence to Predicates | Confirmed through "Design analysis and in vitro data" and comparison of intended use, basic functionality, and tissue effects. |
    | Capability of precisely moving and controlling endoscopic tools | "The Intuitive system is substantially equivalent to both the Computer Motion and Andronic devices in terms of the capability of precisely moving and controlling endoscopic tools." |
    | Tissue Effects | "The Intuitive system is substantially equivalent to the cited predicates in terms of the tissue effects." |
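    The summary names reproducibility and hysteresis as the measured characteristics but does not say how they were quantified. A common bench-test formulation is sketched below under that assumption; the data, units, and function names are hypothetical, not from the submission:

    ```python
    import statistics

    def hysteresis_error(forward, backward):
        """Worst-case gap between positions reached when approaching the same
        commanded points from opposite directions (forward vs. backward
        sweep) -- a standard way to quantify mechanical hysteresis."""
        return max(abs(f - b) for f, b in zip(forward, backward))

    def reproducibility(trials):
        """Worst-case standard deviation of the reached position across
        repeated trials, evaluated at each commanded point."""
        return max(statistics.stdev(point) for point in zip(*trials))

    # Hypothetical bench data, in millimetres.
    fwd = [0.00, 10.02, 20.01, 30.03]   # positions reached sweeping forward
    bwd = [0.05, 10.08, 20.06, 30.03]   # positions reached sweeping backward
    trials = [[10.00, 20.01], [10.01, 20.03], [9.99, 20.02]]

    print(round(hysteresis_error(fwd, bwd), 3))  # 0.06
    print(round(reproducibility(trials), 3))     # 0.01
    ```

    Comparing such worst-case figures against pre-defined tolerances is consistent with the document's statement that "all data fell within" the internal specification requirements, though the actual limits are not disclosed.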

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size (Test Set): Not specified. The document mentions "in vitro test data" and "design analysis" but does not give a number of tests or samples.
    • Data Provenance: The tests are described as "in vitro data," suggesting laboratory or bench testing. No information on country of origin or whether it was retrospective or prospective human clinical data is provided, as it appears to be primarily bench/pre-clinical testing.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Number of Experts: Not specified.
    • Qualifications of Experts: Not specified. It's likely that internal engineers or subject matter experts evaluated the in vitro test results against specifications, but this is not detailed.

    4. Adjudication Method for the Test Set

    • Adjudication Method: Not specified. Given the nature of "in vitro test data," it's more likely that direct measurements were taken and compared against pre-defined specifications rather than requiring expert adjudication in the traditional sense of clinical imaging studies.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • MRMC Study: No. The provided information does not describe any MRMC study comparing human readers with and without AI assistance. This device is an electromechanical control system for surgical instruments, not an AI-powered diagnostic or assistive tool in the context of interpretation.

    6. Standalone (Algorithm Only) Performance Study

    • Standalone Performance Study: Yes, in a sense. The "in vitro test data" and "design analysis" would represent the "standalone" performance of the device's functional characteristics (reproducibility, hysteresis, functional adequacy) in a controlled environment. However, it's not an "algorithm only" study in the typical AI sense, but rather a performance study of the electromechanical system.

    7. Type of Ground Truth Used

    • Type of Ground Truth: The "ground truth" for the in vitro tests would be the pre-defined engineering specifications and performance expectations for reproducibility, hysteresis, and functional adequacy. These would likely be based on established engineering principles, predicate device performance, and clinical needs for precise instrument control.

    8. Sample Size for the Training Set

    • Sample Size (Training Set): Not applicable in the context of machine learning. This device is an electromechanical system, not an AI/ML algorithm that requires a training set. Its design and validation rely on engineering principles, material science, and control systems, not data-driven machine learning models.

    9. How the Ground Truth for the Training Set Was Established

    • Ground Truth (Training Set): Not applicable, as there is no training set for this type of device.
