Search Results

Found 3 results

510(k) Data Aggregation

    K Number: K252054
    Date Cleared: 2025-09-29 (90 days)
    Product Code:
    Regulation Number: 882.4560
    Reference & Predicate Devices:
    Predicate For: N/A

    Intended Use

    SpineAR SNAP is intended for use for pre-operative surgical planning on-screen and in a virtual environment, and intra-operative surgical planning and visualization on-screen and in an augmented environment using the HoloLens2 AR headset display with validated navigation systems as identified in the device labeling.

    SpineAR SNAP is indicated for spinal stereotaxic surgery, and where reference to a rigid anatomical structure, such as the spine, can be identified relative to images of the anatomy. SpineAR is intended for use in spinal implant procedures, such as Pedicle Screw Placement, in the lumbar and thoracic regions with the HoloLens2 AR headset.

    The virtual display should not be relied upon solely for absolute positional information and should always be used in conjunction with the displayed 2D stereotaxic information.

    Device Description

    The SpineAR SNAP does not require any custom hardware and is a software-based device that runs on a high-performance desktop PC assembled using "commercial off-the-shelf" components that meet minimum performance requirements.

    The SpineAR SNAP software transforms 2D medical images into a dynamic interactive 3D scene with multiple points of view for viewing on a high-definition (HD) touch screen monitor. The surgeon prepares a pre-operative plan for stereotaxic spine surgery by inserting guidance objects such as directional markers and virtual screws into the 3D scene. Surgical planning tools and functions are available on-screen and when using a virtual reality (VR) headset. The use of a VR headset for preoperative surgical planning further increases the surgeon's immersion level in the 3D scene by providing a 3D stereoscopic display of the same 3D scene displayed on the touch screen monitor.

    By interfacing to a 3rd party navigation system such as a Medtronic StealthStation S8, the SpineAR SNAP extracts the navigation data (i.e. tool position and orientation) and presents the navigation data within the advanced, interactive, high-quality 3D image, with multiple points of view on a high-definition (HD) touch screen monitor. Once connected, the surgeon can then execute the plan through the intra-operative use of the SpineAR SNAP's enhanced visualization and guidance tools.
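
    At its core, the interfacing step described above amounts to mapping tracked tool poses from the navigation system's coordinate frame into the coordinate frame of the patient images so they can be rendered in the 3D scene. The sketch below is purely illustrative; the function name, the 4x4 homogeneous-transform convention, and the assumption that a registration matrix is available are not taken from the submission.

```python
# Hypothetical sketch: applying an assumed image-from-tracker registration
# transform to a navigated tool-tip position so it can be drawn in the scene.
import numpy as np

def tool_tip_in_image_space(T_image_from_tracker: np.ndarray,
                            tip_in_tracker: np.ndarray) -> np.ndarray:
    """Map a navigated tool-tip position from tracker coordinates into image (DICOM) coordinates."""
    p = np.append(tip_in_tracker, 1.0)            # homogeneous point [x, y, z, 1]
    return (T_image_from_tracker @ p)[:3]

# With an identity registration the point is unchanged.
T = np.eye(4)
print(tool_tip_in_image_space(T, np.array([10.0, -5.0, 30.0])))
```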

    The SpineAR SNAP supports three (3) guidance options from which the surgeon selects the level of guidance that will be shown in the 3D scene. The guidance options are dotted line (indicates deviation distance), orientation line (indicates both distance and angular deviation), and ILS (indicates both distance and angular deviation using crosshairs). Visual color-coded cues indicate alignment of the tracker tip to the guidance object (e.g. green = aligned).
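
    As a rough illustration of how such deviation-based color cues could be computed, the sketch below maps a positional and an angular deviation to a color. The thresholds and the intermediate "yellow" state are invented for the example; the document does not describe the actual tolerances or logic used by the device.

```python
def alignment_color(offset_mm: float, angle_deg: float,
                    tol_mm: float = 2.0, tol_deg: float = 2.0) -> str:
    """Map positional and angular deviation to a simple color cue (illustrative thresholds)."""
    if offset_mm <= tol_mm and angle_deg <= tol_deg:
        return "green"    # aligned with the guidance object
    if offset_mm <= 2 * tol_mm and angle_deg <= 2 * tol_deg:
        return "yellow"   # close, minor correction needed
    return "red"          # off target

print(alignment_color(0.8, 1.1))   # -> "green"
print(alignment_color(3.5, 0.5))   # -> "yellow"
print(alignment_color(9.0, 7.0))   # -> "red"
```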

    The SpineAR SNAP is capable of projecting all the live navigated and guidance information into an AR headset such as the Microsoft HoloLens2 that is worn by the surgeon during surgery. When activated, the surgeon sees a projection of the 3D model along with the optional live navigated DICOM (Floating DICOM) and guidance cues. This AR projection is placed above, not directly over, the patient so that it does not impede the surgeon's field of view while still allowing the surgeon to visualize all the desired information (navigation tracker, DICOM images, guidance data) and maintain focus on the patient and the surgical field of view (see Figure 1).

    SpineAR Software Version SPR.2.0.0 incorporates AI/ML-enabled vertebra segmentation into the clinical workflow to optimize the preparation of a spine surgical plan for screw placement and decompression. The AI/ML device software function is not intended as a diagnostic tool, but as a visualization tool for surgical planning.

    The use of AI/ML-enabled vertebrae segmentation streamlines the initial processing stage by generating a segmented poly object of each volume-rendered vertebra that requires only minimal to no manual processing, which may significantly reduce the overall processing time.
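
    For context only, the sketch below shows one generic way a per-vertebra label mask could be turned into surface meshes ("poly objects") using marching cubes. The document does not describe how SpineAR SNAP actually generates its poly objects, so this is an assumption-laden illustration of the concept, not the device's method.

```python
# Generic illustration: extract one surface mesh per labeled vertebra from a
# segmentation volume using marching cubes (scikit-image).
import numpy as np
from skimage import measure

def vertebra_meshes(label_volume: np.ndarray):
    """Yield (label, vertices, faces) for each labeled vertebra in a segmentation volume."""
    for label in np.unique(label_volume):
        if label == 0:                            # skip background
            continue
        mask = (label_volume == label).astype(np.uint8)
        verts, faces, _normals, _values = measure.marching_cubes(mask, level=0.5)
        yield int(label), verts, faces
```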

    AI/ML Overview

    Here's a detailed breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided FDA 510(k) clearance letter for SpineAR SNAP:

    1. Table of Acceptance Criteria and Reported Device Performance

    Acceptance criterion for AI-enabled vertebra segmentation (all rows): the lower bound of the 95% confidence interval for the Mean Dice Coefficient (MDC) must be > 0.8.

    Anatomy / Modality                  Performance Metric       Reported Device Performance   Meets Criteria
    Individual vertebrae (CT scans)     MDC 95% CI Lower Bound   0.907                         Yes
    Sacrum, excl. S1 (CT scans)         MDC 95% CI Lower Bound   0.861                         Yes
    Individual vertebrae (MRI scans)    MDC 95% CI Lower Bound   0.891                         Yes
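
    For reference, the Dice coefficient behind this table measures the overlap between the AI segmentation and the ground-truth mask, Dice(A, B) = 2|A ∩ B| / (|A| + |B|). The sketch below shows one common way to compute it, together with a percentile-bootstrap lower bound of the 95% CI of the mean; the submission does not state which CI method was actually used, so the bootstrap is an assumption, and the per-case scores are placeholders.

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient between two binary masks: 2*|A∩B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

def ci95_lower_of_mean(scores, n_boot: int = 10_000, seed: int = 0) -> float:
    """Percentile-bootstrap lower bound of the 95% CI of the mean score."""
    rng = np.random.default_rng(seed)
    scores = np.asarray(scores, dtype=float)
    means = [rng.choice(scores, size=scores.size, replace=True).mean()
             for _ in range(n_boot)]
    return float(np.percentile(means, 2.5))

# Acceptance check in the spirit of the table (placeholder per-case Dice values):
per_case_dice = [0.93, 0.91, 0.95, 0.90, 0.94]
print(ci95_lower_of_mean(per_case_dice) > 0.8)
```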

    2. Sample Size Used for the Test Set and Data Provenance

    • CT Performance Validation:
      • Sample Size: 95 scans from 92 unique patients.
      • Data Provenance: Retrospective. The validation set was composed of the entire Spine-Mets-CT-SEG dataset and the original test set from the VerSe dataset.
        • Country of Origin: Diverse, with 60% of scans from the United States and 40% from Europe.
        • Representativeness: Included a balanced distribution of patient sex, a wide age range (18-90), and data from three major scanner manufacturers (Siemens, Philips, GE).
    • Sacrum Validation (CT):
      • Sample Size: 38 scans.
      • Data Provenance: A separate set from the TotalSegmentator dataset, reserved exclusively for testing. Implicitly retrospective.
    • MRI Performance Validation:
      • Sample Size: 31 scans from 15 unique patients.
      • Data Provenance: A portion of the publicly available SPIDER dataset, reserved exclusively for performance validation. Implicitly retrospective.
        • Country of Origin: The training data for the MRI model (SPIDER dataset) was collected from four different hospitals in the Netherlands, suggesting the validation data is also from the Netherlands.
        • Representativeness: Included data from both Philips and Siemens scanners and a balanced distribution of male and female patients.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    The document states that the ground truth segmentation was "provided by expert radiologists." It does not specify the number of experts or their specific qualifications (e.g., years of experience). This information would typically be found in a more detailed study report.

    4. Adjudication Method for the Test Set

    The document does not explicitly state the adjudication method used for establishing the ground truth for the test set. It only mentions that the ground truth was "provided by expert radiologists."

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

    No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not mentioned. The study focused on the standalone performance of the AI algorithm for segmentation. The document mentions "Human Factors and Usability testing," which often involves user interaction, but does not describe a comparative study measuring human reader improvement with AI assistance.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    Yes, a standalone performance study of the AI algorithm was done. The document reports the Mean Dice Coefficient (MDC) and its 95% confidence interval for the AI model's segmentation accuracy against expert-provided ground truth, indicating an algorithm-only performance evaluation.

    7. The Type of Ground Truth Used

    The ground truth used for both training and validation sets was expert consensus / expert-provided segmentation. Specifically, the document states: "This score measures the degree of overlap between the AI's segmentation and the ground truth segmentation provided by expert radiologists."

    8. The Sample Size for the Training Set

    • CT Vertebrae Model Development: A total of 1,244 scans were used for model development (training and tuning).
    • CT Sacrum Model Development: A total of 430 scans were used for model development.
    • MRI Vertebrae Model Development: A total of 348 scans were used for model development.

    9. How the Ground Truth for the Training Set Was Established

    The training data was aggregated from several independent, publicly available academic datasets: VerSe 2020, TotalSegmentator, and SPIDER. For these datasets, the ground truth would have been established by medical experts (radiologists, clinicians) as part of larger research initiatives, typically through manual or semi-automated segmentation followed by expert review and consensus to ensure accuracy and consistency. The document mentions "sacrum ground-truth data" for the TotalSegmentator dataset, implying expert-derived ground truth.


    K Number: K243623
    Date Cleared: 2024-12-24 (29 days)
    Product Code:
    Regulation Number: 882.4560
    Reference & Predicate Devices:
    Predicate For:

    Intended Use

    SpineAR SNAP is intended for use for pre-operative surgical planning on-screen and in a virtual environment, and intraoperative surgical planning and visualization on-screen and in an augmented environment using the HoloLens2 and Magic Leap 1 AR headset displays with validated navigation systems as identified in the device labeling.

    SpineAR SNAP is indicated for spinal stereotaxic surgery, and where reference to a rigid anatomical structure, such as the spine, can be identified relative to images of the anatomy. SpineAR is intended for use in spinal implant procedures, such as Pedicle Screw Placement, in the lumbar and thoracic regions with the Magic Leap 1 and HoloLens2 AR headsets.

    The virtual display should not be relied upon solely for absolute positional information and should always be used in conjunction with the displayed 2D stereotaxic information.

    Device Description

    The SpineAR SNAP does not require any custom hardware and is a software-based device that runs on a high-performance desktop PC assembled using "commercial off-the-shelf" components that meet minimum performance requirements.

    The SpineAR SNAP software transforms 2D medical images into a dynamic interactive 3D scene with multiple points of view for viewing on a high-definition (HD) touch screen monitor. The surgeon prepares a pre-operative plan for stereotaxic spine surgery by inserting guidance objects such as directional markers and virtual screws into the 3D scene. Surgical planning tools and functions are available on-screen and when using a virtual reality (VR) headset. The use of a VR headset for preoperative surgical planning further increases the surgeon's immersion level in the 3D scene by providing a 3D stereoscopic display of the same 3D scene displayed on the touch screen monitor.

    By interfacing to a 3rd party navigation system such as a Medtronic StealthStation S8, the SpineAR SNAP extracts the navigation data (i.e. tool position and orientation) and presents the navigation data within the advanced, interactive, high-quality 3D image, with multiple points of view on a high-definition (HD) touch screen monitor. Once connected, the surgeon can then execute the plan through the intraoperative use of the SpineAR SNAP's enhanced visualization and guidance tools.

    The SpineAR SNAP supports three (3) guidance options from which the surgeon selects the level of guidance that will be shown in the 3D scene. The guidance options are dotted line (indicates deviation distance), orientation line (indicates both distance and angular deviation), and ILS (indicates both distance and angular deviation using crosshairs). Visual color-coded cues indicate alignment of the tracker tip to the guidance object (e.g. green = aligned).

    The 3D scene with guidance tools can also be streamed into an AR wireless headset (Magic Leap 1 or HoloLens2) worn by the surgeon during surgery. The 3D scene and guidance shown within the AR headset are projected above the patient and do not obstruct the surgeon's view of the surgical space.

    AI/ML Overview

    The provided text describes the acceptance criteria and the study conducted to prove that the SpineAR SNAP device meets these criteria, specifically addressing the expansion of its indications for use with the HoloLens2 AR headset to include the thoracic region in addition to the lumbar region.

    Here's a breakdown of the requested information:

    1. A table of acceptance criteria and the reported device performance

    The acceptance criteria are presented as "Accuracy Requirement" in the table, and the "HoloLens2 AR Headset Results" are the reported device performance for the subject device.

    Metric                               Acceptance Criteria   Reported Device Performance (HoloLens2 AR Headset Results)
    Mean Positional/Displacement Error   ≤ 2.0 mm              0.76 mm
    Max Positional/Displacement Error    ≤ 3.0 mm              1.47 mm
    Mean Trajectory/Angular Error        ≤ 2°                  1.73°
    Max Trajectory/Angular Error         ≤ 3°                  2.96°
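
    The sketch below shows one way the two error types in this table can be computed from planned versus achieved screw poses (for example, a pre-surgical plan compared against a post-test CT of the spine model). The helper functions and input values are illustrative only, not the test method described in the submission.

```python
import numpy as np

def positional_error_mm(planned_pt, achieved_pt) -> float:
    """Euclidean distance between planned and achieved screw positions (mm)."""
    return float(np.linalg.norm(np.asarray(achieved_pt, float) - np.asarray(planned_pt, float)))

def angular_error_deg(planned_axis, achieved_axis) -> float:
    """Angle between planned and achieved screw trajectories (degrees)."""
    a = np.asarray(planned_axis, float)
    b = np.asarray(achieved_axis, float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Placeholder measurements for a couple of screws.
pos_err = [positional_error_mm(p, q) for p, q in [([0, 0, 0], [0.5, 0.3, 0.2]),
                                                  ([10, 0, 0], [10.9, 0.4, 0.1])]]
ang_err = [angular_error_deg(a, b) for a, b in [([0, 0, 1], [0.00, 0.02, 1]),
                                                ([0, 0, 1], [0.03, 0.00, 1])]]

print(np.mean(pos_err) <= 2.0, max(pos_err) <= 3.0)   # mean / max positional criteria
print(np.mean(ang_err) <= 2.0, max(ang_err) <= 3.0)   # mean / max angular criteria
```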

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    The document does not specify the exact sample size (number of screws or spine models) used for the test set. It only mentions that the final placement of "each screw in a spine model" was assessed.

    The data provenance is not explicitly stated in terms of country of origin or whether it was retrospective or prospective. However, the study describes a controlled performance test involving a "spine model," suggesting a prospective, in-vitro (non-human) study designed to evaluate accuracy.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    The document does not specify the involvement of human experts for establishing the ground truth of the pedicle screw placement in this particular performance study. The ground truth appears to be established through "post-surgical CT scan" data compared to a "pre-surgical plan."

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    No adjudication method involving human experts is described for this performance study. The assessment of screw placement was based on quantifiable measurements derived from CT scans.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    A Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not performed. This study focuses on the device's accuracy in a non-human, simulated environment (spine model) and compares the HoloLens2 performance to the previously cleared Magic Leap 1. It does not evaluate human reader performance or the improvement with AI assistance.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    This study is essentially a standalone (algorithm only) performance assessment, although the device (SpineAR SNAP) is an augmented reality system intended for human use. The performance test specifically evaluates the accuracy of the system's guidance during simulated pedicle screw placement, independent of the surgeon's skill. The "human-in-the-loop" aspect during clinical use is acknowledged, but the performance testing here isolates the device's accuracy in a controlled setup.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    The ground truth used was objective quantitative measurements derived from post-surgical CT scans of a spine model, which were then compared against the pre-surgical plan. This is a form of objective physical ground truth.

    8. The sample size for the training set

    The document does not provide any information about the training set size or methodology. This submission is for a modification to an existing device (expanding indications for an existing AR headset), not the initial development or de novo clearance of the core algorithm.

    9. How the ground truth for the training set was established

    As no information on a training set is provided, how its ground truth was established is also not detailed. The current study focuses on validation of expanded indications through performance testing only.


    K Number: K213034
    Device Name: SpineAR SNAP
    Date Cleared: 2022-09-29 (373 days)
    Product Code:
    Regulation Number: 882.4560
    Reference & Predicate Devices:
    Predicate For:

    Intended Use

    SpineAR SNAP is intended for use for pre-operative surgical planning on-screen and in a virtual environment, and intraoperative surgical planning and visualization on-screen and in an augmented environment using the HoloLens2 and Magic Leap 1 AR headset displays with validated navigation systems as identified in the device labeling.

    SpineAR SNAP is indicated for spinal stereotaxic surgery, and where reference to a rigid anatomical structure, such as the spine, can be identified relative to images of the anatomy. SpineAR is intended for use in spinal implant procedures, such as Pedicle Screw Placement, in the lumbar and thoracic regions with the Magic Leap 1 AR headset, and in the lumbar region with the HoloLens2 AR headset.

    The virtual display should not be relied upon solely for absolute positional information and should always be used in conjunction with the displayed 2D stereotaxic information.

    Device Description

    The SpineAR SNAP does not require any custom hardware and is a software-based device that runs on a high-performance desktop PC assembled using "commercial off-the-shelf" components that meet minimum performance requirements.

    The SpineAR SNAP software transforms 2D medical images into a dynamic interactive 3D scene with multiple points of view for viewing on a high-definition (HD) touch screen monitor. The surgeon prepares a pre-operative plan for stereotaxic spine surgery by inserting guidance objects such as directional markers and virtual screws into the 3D scene. Surgical planning tools and functions are available on-screen and when using a virtual reality (VR) headset. The use of a VR headset for preoperative surgical planning further increases the surgeon's immersion level in the 3D scene by providing a 3D stereoscopic display of the same 3D scene displayed on the touch screen monitor.

    By interfacing to a 3rd party navigation system such as a Medtronic StealthStation S8, the SpineAR SNAP extracts the navigation data (i.e. tool position and orientation) and presents the navigation data within the advanced, interactive, high-quality 3D image, with multiple points of view on a high-definition (HD) touch screen monitor. Once connected, the surgeon can then execute the plan through the intraoperative use of the SpineAR SNAP's enhanced visualization and guidance tools.

    The SpineAR SNAP supports three (3) guidance options from which the surgeon selects the level of guidance that will be shown in the 3D scene. The guidance options are dotted line (indicates deviation distance), orientation line (indicates both distance and angular deviation), and ILS (indicates both distance and angular deviation using crosshairs). Visual color-coded cues indicate alignment of the tracker tip to the guidance object (e.g. green = aligned).

    The 3D scene with guidance tools can also be streamed into an AR wireless headset (Magic Leap 1 or HoloLens2) worn by the surgeon during surgery. The 3D scene and guidance shown within the AR headset are projected above the patient and do not obstruct the surgeon's view of the surgical space.

    AI/ML Overview

    The provided text describes the SpineAR SNAP device and its performance data to establish substantial equivalence for FDA 510(k) clearance. Here's a breakdown of the requested information based on the document:

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria are implied by the "System Accuracy Requirements" section and the "Performance Data" section.

    Acceptance Criteria                        Reported Device Performance
    Navigation Accuracy                        3D Positional Accuracy: < 2.0 mm (mean positional error); 3D Trajectory Accuracy: < 2 degrees (mean trajectory error); Max Positional/Displacement Error: 2.80 mm; Max Trajectory/Angular Error: 3.00°
    Virtual Screw Library Verification         Virtual screws accurately represent real screws (length, diameter) and are accurately positioned at the tip of the tracked tool.
    Headset Display Performance                Field of View (FOV), resolution, luminance, transmittance, distortion, contrast ratio, temporal, display noise, and motion-to-photon latency meet requirements.
    Projection Latency                         Time delay between instrument movement and display in AR headset < 250 ms.
    Electromagnetic Compatibility (EMC)        Compliance with IEC 60601-1-2:2014+A1:2020
    Wireless Coexistence                       Compliance with AAMI TIR69:2017/(R)2020 and ANSI IEEE C63.27-2017
    Software Verification and Validation       Software meets its requirements specifications.
    Human Factors and Usability Validation     Intended users can safely and effectively perform tasks for intended uses in expected use environments.
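
    As a way to picture the "Projection Latency" row above: on a bench setup, one could timestamp a pose update and the moment the corresponding frame is presented in the headset, then compare the delay against the 250 ms limit. The callbacks below are hypothetical stand-ins, not the actual test harness used in the submission.

```python
import time

LATENCY_LIMIT_S = 0.250  # "< 250 ms" requirement from the table above

def measure_projection_latency(send_pose_update, wait_for_presented_frame) -> float:
    """Delay (seconds) between sending a pose update and the frame that shows it."""
    t_sent = time.perf_counter()
    send_pose_update()                 # push a new instrument pose to the renderer
    wait_for_presented_frame()         # block until the frame containing that pose is displayed
    return time.perf_counter() - t_sent

# Usage sketch: assert measure_projection_latency(send, wait) < LATENCY_LIMIT_S
```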

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size: The document does not explicitly state the sample size (e.g., number of cases or subjects) used for the navigation accuracy testing. It mentions "a spine model," implying a physical phantom rather than patient data.
    • Data Provenance: The data appears to come from bench (phantom) testing of a "spine model" in a controlled environment rather than from patient data. The country of origin of the data is not specified, but the manufacturer is based in Ohio, USA.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    The document does not detail the number or qualifications of experts for establishing ground truth for the test set. For the navigation accuracy, "post-surgical CT scan" was used, which is objective data. For human factors, "users" provided feedback; their specific qualifications beyond being "intended users" are not detailed.

    4. Adjudication Method for the Test Set

    No explicit adjudication method (e.g., 2+1, 3+1) is mentioned. For objective metrics like navigation accuracy (measured from CT), adjudication by experts might not be applicable in the same way as for subjective image interpretation. For human factors, it mentions "users providing feedback," implying a qualitative assessment that might not require formal adjudication.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    The document does not describe an MRMC comparative effectiveness study involving human readers and AI assistance. The study focuses on the device's standalone performance and its impact on the accuracy of a connected navigation system (Medtronic StealthStation S8). The device is an augmented reality visualization tool, not an AI diagnostic tool designed to directly improve human reader accuracy in image interpretation.

    6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) was done

    Yes, the navigation accuracy testing can be considered a form of standalone performance evaluation for the SpineAR SNAP's ability to maintain the accuracy of the connected navigation system. The data presented (mean positional/displacement error, mean trajectory/angular error, projection latency, etc.) reflects the algorithm's performance in conjunction with the navigation system on a physical model, without directly involving human interpretation or decision-making as the primary endpoint. The device itself is software-based and augments visualization rather than providing automated diagnosis.

    7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.)

    • For Navigation Accuracy: "post-surgical CT scan" of the spine model was used to assess the final placement of screws compared to the pre-surgical plan. This is an objective, image-based ground truth.
    • For Virtual Screw Library Verification, Headset Display Performance, Projection Latency, EMC, and Wireless Coexistence: The ground truth typically comes from engineering specifications, established measurement techniques, and industry standards. This is generally quantitative/technical ground truth.
    • For Software Verification and Validation and Human Factors and Usability Validation: The ground truth is established by design requirements and user feedback/observational assessment against defined usability goals.

    8. The Sample Size for the Training Set

    The document does not provide any information about a training set size. This device appears to be primarily an augmented reality visualization and planning tool that integrates with existing navigation systems, rather than a machine learning model that requires a large training dataset.

    9. How the Ground Truth for the Training Set was Established

    Since no training set is mentioned (implying the device is not an AI model requiring a training phase), this question is not applicable based on the provided text.

