Search Results

Found 2 results

510(k) Data Aggregation

    K Number
    K231611
    Date Cleared
    2023-08-31

    (90 days)

    Product Code
    Regulation Number
    882.4560
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices:

    K202360, K203346

    Intended Use

    The HOLO Portal™ Surgical Guidance System is indicated as an aid for precisely locating anatomical structures in either open or percutaneous orthopedic procedures in the lumbosacral spine region. Its use is indicated for any medical condition of the lumbosacral spine in which the use of stereotactic surgery may be appropriate, and where reference to a rigid anatomical structure, such as the iliac crest, can be identified relative to intraoperative CT images of the anatomy.

    The HOLO Portal™ Surgical Guidance System simultaneously displays 2D stereotaxic data along with a 3D virtual anatomy model over the patient during surgery. The stereotaxic display is indicated for continuously tracking instrument position and orientation to the registered patient anatomy while the 3D display is indicated for localizing the virtual instrument to the virtual anatomy model over the patient during surgery. The 3D display should not be relied upon solely for absolute positional information and should always be used in conjunction with the displayed 2D stereotaxic information.

    Device Description

    The HOLO Portal™ System is a combination of hardware and software that provides visualization of the patient's internal anatomy and surgical guidance to the surgeon based on patient-specific digital imaging.

    HOLO Portal™ is a navigation system for surgical planning and/or intraoperative guidance during stereotactic surgical procedures. The HOLO Portal™ System consists of two mobile devices: 1) the surgeon workstation, which includes the display unit and the augmented reality visor (optional), and 2) the control workstation, which houses the optical navigation tracker and the computer. The optical navigation tracker utilizes infrared cameras and active infrared lights to triangulate the 3D location of passive markers attached to each system component, determining their 3D positions and orientations in real time. Software algorithms combine tracking information with high-resolution 3D anatomical models to display three-dimensional representations of patient anatomy, as compared to traditional two-dimensional (2D) displays, during surgical procedures.

    AI/ML Overview

    The provided text describes the acceptance criteria and the study conducted for the HOLO Portal™ Surgical Guidance System, primarily focusing on performance testing. Here's a breakdown of the requested information:

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria are not explicitly stated in a quantitative manner (e.g., "positional error must be less than X mm"). Instead, the document refers to "worst-case" measurements and relies on clinical meaningfulness relative to the predicate. The performance data is summarized in tables.

    | Metric | Cadaver Validation Mean | Cadaver Validation Std. Dev. | Cadaver Validation 95% CI Upper Bound | Cadaver Validation 99% CI Upper Bound | Benchtop Verification Mean | Benchtop Verification Std. Dev. | Benchtop Verification 95% CI Upper Bound |
    |---|---|---|---|---|---|---|---|
    | Positional Error [mm] | 2.37 | 0.72 | 2.58 | 2.69 | 1.54 | 0.74 | 1.75 |
    | Angular Error [degrees] | 1.40 | 0.84 | 1.65 | 1.73 | 1.50 | 0.68 | 1.69 |

    Acceptance Criteria Implied: The device must meet "performance requirements under the indications for use" and demonstrate "equivalent safety and efficacy of the system to the cited predicate device." The provided numbers represent the achieved performance, which is presumably within acceptable limits for its intended use and considered substantially equivalent to the predicate. No explicit numerical acceptance thresholds are provided in the text.
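For orientation, the upper bounds in the table are consistent with the standard normal-approximation confidence bound mean + z·SD/√n. The sample size n is not reported anywhere in the document, so the sketch below uses a hypothetical n = 45 purely to illustrate the arithmetic:

```python
import math
from statistics import NormalDist

def ci_upper_bound(mean, sd, n, conf=0.95):
    """Upper bound of a two-sided normal-approximation CI for the mean."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)  # ~1.96 for 95%
    return mean + z * sd / math.sqrt(n)

# Cadaver positional error: mean 2.37 mm, SD 0.72 mm (from the table);
# n = 45 is a hypothetical sample size, NOT stated in the document.
print(round(ci_upper_bound(2.37, 0.72, 45), 2))  # 2.58
```

With these inputs the formula happens to reproduce the tabulated 2.58 mm bound, but since n is unreported this should be read only as a plausibility check, not a reconstruction of the study's statistics.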

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: The document does not explicitly state the number of cadavers or benchtop phantoms used. It mentions "surgical simulations conducted on cadavers" and "rigid benchtop phantoms." It refers to "a set of test samples presenting lumbosacral spine, extracted from stationary and intraoperative Computed Tomography scans" for software validation. The number of measurements (which would be related to the sample size of "pedicle screws" or placements) is not provided.
    • Data Provenance: The document does not specify the country of origin for the data. The studies are described as "surgical simulations conducted on cadavers" and "rigid benchtop phantoms," and software validation used test samples "extracted from stationary and intraoperative Computed Tomography scans." The data are retrospective in the sense that the CT scans for software segmentation were "extracted," but the practical "performance validation" and "verification" studies appear to be prospective experimental setups (cadaver labs, benchtop testing).

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    • Number of Experts: For the software validation study of anatomical segmentation, the ground truth was established by "manual segmentations prepared by trained analysts." The specific number of analysts is not provided.
    • Qualifications of Experts: The experts are simply described as "trained analysts." No specific qualifications (e.g., radiologist, years of experience) are mentioned.

    4. Adjudication Method for the Test Set

    • For the software validation (segmentation comparison), an adjudication method is not explicitly stated. The comparison was based on "mean Sørensen-Dice coefficient (DSC) calculations" against manual segmentations. This implies a direct quantitative comparison rather than a human-based adjudication process for discrepancies.
    • For the positional and angular error measurements in cadaver and benchtop testing, "the 3D (Euclidean) distance between the tips of the virtual and real implants" and "the angle between the 3D trajectories of the virtual and real implants" were measured. This is an objective measurement, not requiring human adjudication.
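The two error metrics described above are straightforward geometric computations: a Euclidean distance between implant tips and the angle between trajectory vectors. A minimal sketch (illustrative coordinates only, not data from the submission):

```python
import math

def positional_error(tip_virtual, tip_real):
    """3D Euclidean distance between virtual and real implant tips."""
    return math.dist(tip_virtual, tip_real)

def angular_error(traj_virtual, traj_real):
    """Angle in degrees between the 3D trajectory vectors of the
    virtual and real implants."""
    dot = sum(a * b for a, b in zip(traj_virtual, traj_real))
    norm = math.hypot(*traj_virtual) * math.hypot(*traj_real)
    cos = max(-1.0, min(1.0, dot / norm))  # clamp against float round-off
    return math.degrees(math.acos(cos))

# Toy example: tips 3 mm apart in y and 4 mm in z -> 5 mm error;
# trajectories (0,0,1) and (0,1,1) -> 45 degree angular error.
print(positional_error((10.0, 20.0, 30.0), (10.0, 23.0, 34.0)))      # 5.0
print(round(angular_error((0.0, 0.0, 1.0), (0.0, 1.0, 1.0)), 1))     # 45.0
```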

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

    No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study comparing human readers with AI vs. without AI assistance was not explicitly mentioned or described in the provided text. The device is a surgical guidance system, not an AI for image interpretation that would typically involve a diagnostic reader study.

    6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done

    Yes, aspects of standalone performance were conducted:

    • Positional and Angular Error (Cadaver and Benchtop): These measurements assess the system's accuracy in tracking and guiding, which is a standalone performance metric of the system itself in a controlled environment.
    • Software Validation (Segmentation): "A set of test samples presenting lumbosacral spine... was subjected to the autonomous spine segmentation process performed by the HOLO Portal System." The quality was determined by comparing the system's segmentation to manual ground truth. This is a clear example of standalone algorithm performance testing.
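The Sørensen-Dice coefficient used to score the autonomous segmentation against the manual ground truth is twice the overlap divided by the combined size of both segmentations. A minimal sketch with toy voxel sets (the set-based representation is an assumption for brevity; real pipelines typically operate on binary mask arrays):

```python
def dice_coefficient(seg_a, seg_b):
    """Sørensen-Dice coefficient between two segmentations given as
    sets of voxel indices; 1.0 means perfect overlap."""
    if not seg_a and not seg_b:
        return 1.0  # two empty segmentations agree by convention
    return 2 * len(seg_a & seg_b) / (len(seg_a) + len(seg_b))

# Toy voxel sets, purely illustrative.
auto = {(0, 0), (0, 1), (1, 0), (1, 1)}    # autonomous segmentation
manual = {(0, 1), (1, 0), (1, 1), (2, 1)}  # analyst's manual segmentation
print(dice_coefficient(auto, manual))  # 0.75
```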

    7. The Type of Ground Truth Used

    • For Positional/Angular Error (Cadaver and Benchtop): The ground truth was based on the "real implants" or "real pedicle screws" placed, with measurements taken relative to these physical realities. This could be considered a form of objective measurement against physical reality.
    • For Software Segmentation: The ground truth was "manual segmentations prepared by trained analysts." This is expert consensus/manual annotation based ground truth.

    8. The Sample Size for the Training Set

    The document does not provide information on the sample size used for the training set of the HOLO Portal™ Surgical Guidance System's software, including the autonomous spine segmentation process.

    9. How the Ground Truth for the Training Set Was Established

    The document does not provide information on how the ground truth for the training set was established. It only describes the establishment of ground truth for the test set (manual segmentations by trained analysts).


    K Number
    K223070
    Date Cleared
    2022-10-28

    (28 days)

    Product Code
    Regulation Number
    882.4560
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices:

    K203346, K160126, K202360, K190360

    Intended Use

    The REMI™ Robotic Navigation System is intended for use as an aid for precisely locating anatomical structures and for the spatial positioning and orientation of a tool holder or guide tube to be used by surgeons for navigating and/or guiding compatible surgical instruments in open or percutaneous spinal procedures in reference to rigid patient anatomy and fiducials that can be identified on a 3D imaging scan. The REMI™ Robotic Navigation System is indicated for assisting the surgeon in placing pedicle screws in the posterior lumbar region (L1-S1). The system is designed for lumbar pedicle screw placement with the patient in the prone position and is compatible with the Accelus LineSider® Spinal System.

    Device Description

    The Remi Robotic Navigation System (Remi) is an image guided system primarily composed of a computer workstation, software, a trajectory system (including a targeting platform), a camera, and various image guided instruments intended for assisting the surgeon in placing screws in the pedicles of the lumbar spine. The system operates in a similar manner to other optical-based image guided systems.

    AI/ML Overview

    The provided text outlines the FDA 510(k) clearance for the REMI Robotic Navigation System, focusing on its substantial equivalence to a predicate device. However, it does not contain a detailed study report or explicit acceptance criteria with reported device performance metrics in the format requested.

    The document primarily focuses on demonstrating that the updated REMI system, with additional compatible 3D imaging systems, is substantially equivalent to its predicate. The "Performance Testing - Bench" section mentions tests conducted but does not provide specific numerical acceptance criteria or performance results.

    Therefore, much of the requested information cannot be extracted directly from the provided text. I will indicate where information is missing or inferred.


    Acceptance Criteria and Device Performance

    | Acceptance Criteria | Reported Device Performance |
    |---|---|
    | Accuracy (Bench), Worst Case | 0.74 ± 0.36 mm (95% CI: 1.46 mm). This is the reported performance of the predicate device; the subject device is stated to be "Same as Predicate." |
    | Image Quality (with added 3D imagers) | Stated to be "equivalent" to the predicate's performance with the Medtronic O-arm. (No specific metric provided.) |
    | Image Transfer Speed (with added 3D imagers) | Stated to be "equivalent" to the predicate's performance with the Medtronic O-arm. (No specific metric provided.) |
    | Image Registration Speed (with added 3D imagers) | Stated to be "equivalent" to the predicate's performance with the Medtronic O-arm. (No specific metric provided.) |
    | Registration Accuracy (with added 3D imagers) | Stated to be "equivalent" to the predicate's performance with the Medtronic O-arm. (No specific metric provided.) |
    | Usability Validation | Testing was done to ensure the risk profile was maintained. (No specific metric or outcome provided.) |
    | Compatibility with PSIS Pins | Biocompatibility assessment for Ti6Al4V ELI (used in the PSIS pins) was included in K190360, a previous clearance for the pedicle screws. |
    | Robot collision avoidance/detection | Manual movement of the Trajectory Platform to the gross location; small fine-tuning of the platform location is automatic but is current-limited so that it ceases when the platform encounters a force greater than 9 lbs. (This describes the predicate; the subject device is "Same as Predicate.") |
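As an arithmetic aside, the reported 1.46 mm bound equals the 0.74 mm mean plus two standard deviations (2 × 0.36 mm), a common rule-of-thumb 95% bound on individual measurements; whether the submission actually derived it that way is an assumption, not something the text states:

```python
# Worst-case bench accuracy figures from the table above.
mean_mm, sd_mm = 0.74, 0.36
print(round(mean_mm + 2 * sd_mm, 2))  # 1.46
```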

    Study Details from the provided text:

    1. Sample size used for the test set and the data provenance:

      • The document mentions "Performance Testing - Bench" and "Verification and validation testing" but does not specify the sample size for any test set or the data provenance (e.g., country of origin, retrospective/prospective). It suggests bench testing was primarily used for equivalence.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • No information provided. The "performance testing" described appears to be technical validation against specified equivalence factors rather than expert review of clinical outcomes or images.
    3. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

      • No information provided.
    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:

      • No, an MRMC study was not described. This device is a robotic navigation system for spinal surgery, not an AI-assisted diagnostic imaging interpretation tool that would typically involve human readers. Its purpose is to aid surgeons in pedicle screw placement.
    5. If a standalone (i.e. algorithm only, without human-in-the-loop performance) study was done:

      • The document does not explicitly describe a "standalone" algorithmic performance test in the context of an AI-only system. The device is a navigation system that guides a human surgeon. Its performance metrics, like accuracy, are inherently tied to the system's ability to guide to a planned trajectory, which can be measured quantitatively in bench tests. The bench testing mentioned covers aspects like "Accuracy," "Image Quality," "Image Transfer Speed," and "Image Registration Speed."
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

      • Based on the description of "Performance Testing - Bench" and "Accuracy verification on anatomical landmarks" (for the predicate), the ground truth for accuracy testing would typically involve precisely measured physical points or targets on a phantom or model, measured by a highly accurate reference system (e.g., CMM). For image quality, transfer, and registration speed, the ground truth would be objectively defined technical specifications or measurements.
    7. The sample size for the training set:

      • No information provided about a "training set." The REMI system is a robotic navigation system, not described as a deep learning or machine learning-based algorithm that typically requires a large training dataset for model development. The system uses pre-programmed logic, image processing, and control algorithms.
    8. How the ground truth for the training set was established:

      • Not applicable, as no training set for an AI/ML model is mentioned.