510(k) Data Aggregation (373 days)
SpineAR SNAP is intended for use for pre-operative surgical planning on-screen and in a virtual environment, and for intraoperative surgical planning and visualization on-screen and in an augmented reality environment using the HoloLens2 and Magic Leap 1 AR headset displays with validated navigation systems as identified in the device labeling.
SpineAR SNAP is indicated for spinal stereotaxic surgery, and where reference to a rigid anatomical structure, such as the spine, can be identified relative to images of the anatomy. SpineAR is intended for use in spinal implant procedures, such as Pedicle Screw Placement, in the lumbar and thoracic regions with the Magic Leap 1 AR headset, and in the lumbar region with the HoloLens2 AR headset.
The virtual display should not be relied upon solely for absolute positional information and should always be used in conjunction with the displayed 2D stereotaxic information.
The SpineAR SNAP does not require any custom hardware and is a software-based device that runs on a high-performance desktop PC assembled using "commercial off-the-shelf" components that meet minimum performance requirements.
The SpineAR SNAP software transforms 2D medical images into a dynamic, interactive 3D scene with multiple points of view for viewing on a high-definition (HD) touch screen monitor. The surgeon prepares a pre-operative plan for stereotaxic spine surgery by inserting guidance objects such as directional markers and virtual screws into the 3D scene. Surgical planning tools and functions are available on-screen and when using a virtual reality (VR) headset. Using a VR headset for pre-operative planning deepens the surgeon's immersion by presenting a 3D stereoscopic view of the same scene shown on the touch screen monitor.
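To make the planning data concrete, here is a minimal sketch of how a virtual screw guidance object might be represented. The document does not describe SpineAR SNAP's internal data model, so the `VirtualScrew` dataclass, its fields, and the coordinate convention (screw pose expressed in the CT image frame) are all illustrative assumptions.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class VirtualScrew:
    """Hypothetical guidance object for a pre-operative pedicle screw plan.

    Pose is expressed in the image (CT) coordinate frame: `entry_mm` is the
    planned entry point and `direction` a unit vector along the screw axis.
    """
    length_mm: float
    diameter_mm: float
    entry_mm: np.ndarray   # (3,) entry point, millimetres
    direction: np.ndarray  # (3,) unit vector along the planned trajectory

    @property
    def tip_mm(self) -> np.ndarray:
        """Planned screw tip: entry point advanced along the trajectory."""
        return self.entry_mm + self.length_mm * self.direction


# Example: a 45 mm x 6.5 mm lumbar pedicle screw in the plan.
screw = VirtualScrew(
    length_mm=45.0,
    diameter_mm=6.5,
    entry_mm=np.array([12.0, -30.5, 88.0]),
    direction=np.array([0.0, 0.7071, -0.7071]),  # pre-normalised
)
print(screw.tip_mm)
```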
By interfacing with a third-party navigation system such as the Medtronic StealthStation S8, the SpineAR SNAP extracts the navigation data (i.e., tool position and orientation) and renders it within the advanced interactive, high-quality 3D scene, with multiple points of view, on a high-definition (HD) touch screen monitor. Once connected, the surgeon can execute the plan intraoperatively using the SpineAR SNAP's enhanced visualization and guidance tools.
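The document does not specify the StealthStation interface itself, so the following is only a hedged sketch of the underlying geometry: mapping a tracked tool pose from the navigation (tracker) coordinate frame into the image frame via a 4x4 rigid registration transform. The function name and signature are hypothetical.

```python
import numpy as np


def tool_pose_in_image_frame(
    registration: np.ndarray,    # 4x4 rigid transform: tracker frame -> image frame
    tip_tracker_mm: np.ndarray,  # (3,) tool-tip position in the tracker frame
    axis_tracker: np.ndarray,    # (3,) unit vector along the tool shaft
) -> tuple[np.ndarray, np.ndarray]:
    """Map a navigated tool pose into the CT image frame for 3D display.

    Positions transform with rotation plus translation; directions use
    only the rotational part of the transform.
    """
    rotation = registration[:3, :3]
    tip_image = rotation @ tip_tracker_mm + registration[:3, 3]
    axis_image = rotation @ axis_tracker
    return tip_image, axis_image


# Example with an identity registration (tracker and image frames coincide).
T = np.eye(4)
print(tool_pose_in_image_frame(T, np.array([1.0, 2.0, 3.0]), np.array([0.0, 0.0, 1.0])))
```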
The SpineAR SNAP supports three (3) guidance options from which the surgeon selects the level of guidance that will be shown in the 3D scene. The guidance options are dotted line (indicates deviation distance), orientation line (indicates both distance and angular deviation), and ILS (indicates both distance and angular deviation using crosshairs). Visual color-coded cues indicate alignment of the tracker tip to the guidance object (e.g. green = aligned).
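A minimal sketch of the two deviations these guidance options report, and the color-coded alignment cue, is shown below. The 2.0 mm and 2-degree alignment tolerances are illustrative assumptions (the document states green = aligned but not the thresholds), and the function is hypothetical rather than SpineAR SNAP's actual logic.

```python
import numpy as np


def guidance_cues(
    tip_image: np.ndarray,    # (3,) navigated tool tip, image frame (mm)
    axis_image: np.ndarray,   # (3,) unit vector along the tool shaft
    screw_entry: np.ndarray,  # (3,) planned entry point (mm)
    screw_axis: np.ndarray,   # (3,) planned unit trajectory
    pos_tol_mm: float = 2.0,    # assumed tolerance, not from the document
    ang_tol_deg: float = 2.0,   # assumed tolerance, not from the document
) -> dict:
    """Compute the two deviations the guidance options display.

    Deviation distance: Euclidean distance from the tool tip to the planned
    entry point. Angular deviation: angle between tool shaft and plan.
    """
    deviation_mm = float(np.linalg.norm(tip_image - screw_entry))
    cos_angle = float(np.clip(np.dot(axis_image, screw_axis), -1.0, 1.0))
    deviation_deg = float(np.degrees(np.arccos(cos_angle)))
    aligned = deviation_mm <= pos_tol_mm and deviation_deg <= ang_tol_deg
    return {
        "deviation_mm": deviation_mm,
        "deviation_deg": deviation_deg,
        "cue_color": "green" if aligned else "red",  # green = aligned
    }
```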
The 3D scene with guidance tools can also be streamed to a wireless AR headset (Magic Leap 1 or HoloLens2) worn by the surgeon during surgery. The 3D scene and guidance shown within the AR headset are projected above the patient and do not obstruct the surgeon's view of the surgical space.
The provided text describes the SpineAR SNAP device and its performance data to establish substantial equivalence for FDA 510(k) clearance. Here's a breakdown of the requested information based on the document:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are implied by the "System Accuracy Requirements" section and the "Performance Data" section.
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Navigation Accuracy | 3D positional accuracy: < 2.0 mm (mean positional error); 3D trajectory accuracy: < 2 degrees (mean trajectory error); maximum positional/displacement error: 2.80 mm; maximum trajectory/angular error: 3.00° |
| Virtual Screw Library Verification | Virtual screws accurately represent real screws (length, diameter) and are accurately positioned at the tip of the tracked tool. |
| Headset Display Performance | Field of view (FOV), resolution, luminance, transmittance, distortion, contrast ratio, temporal performance, display noise, and motion-to-photon latency meet requirements. |
| Projection Latency | Time delay between instrument movement and display in the AR headset: < 250 ms. |
| Electromagnetic Compatibility (EMC) | Compliance with IEC 60601-1-2:2014+A1:2020. |
| Wireless Coexistence | Compliance with AAMI TIR69:2017/(R)2020 and ANSI IEEE C63.27-2017. |
| Software Verification and Validation | Software meets its requirements specifications. |
| Human Factors and Usability Validation | Intended users can safely and effectively perform tasks for intended uses in expected use environments. |
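As a worked illustration of the navigation accuracy criteria in the table above, the sketch below computes mean and maximum positional and angular errors from planned versus achieved screw poses, as might be measured on a post-surgical CT of a spine model. The function name and input format are assumptions; only the thresholds cited in the comments come from the document.

```python
import numpy as np


def accuracy_summary(planned_tips_mm, actual_tips_mm,
                     planned_axes, actual_axes) -> dict:
    """Mean/max positional and angular error across a set of screws.

    Positions are (N, 3) arrays in millimetres; axes are (N, 3)
    trajectory vectors (normalised here for safety).
    """
    planned_tips = np.asarray(planned_tips_mm, dtype=float)
    actual_tips = np.asarray(actual_tips_mm, dtype=float)
    pos_err = np.linalg.norm(actual_tips - planned_tips, axis=1)

    pa = np.asarray(planned_axes, dtype=float)
    aa = np.asarray(actual_axes, dtype=float)
    pa = pa / np.linalg.norm(pa, axis=1, keepdims=True)
    aa = aa / np.linalg.norm(aa, axis=1, keepdims=True)
    ang_err = np.degrees(np.arccos(np.clip(np.sum(pa * aa, axis=1), -1.0, 1.0)))

    return {
        "mean_pos_mm": float(pos_err.mean()),   # acceptance: < 2.0 mm
        "max_pos_mm": float(pos_err.max()),     # reported max: 2.80 mm
        "mean_ang_deg": float(ang_err.mean()),  # acceptance: < 2 degrees
        "max_ang_deg": float(ang_err.max()),    # reported max: 3.00 degrees
    }
```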
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: The document does not explicitly state the sample size (e.g., number of screw placements or trials) used for the navigation accuracy testing. It mentions "a spine model," implying a physical phantom rather than patient data.
- Data Provenance: The data appears to come from bench (phantom) testing using a "spine model" in a controlled environment, not from patients. The country of origin of the data is not specified, but the manufacturer is based in Ohio, USA.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
The document does not detail the number or qualifications of experts for establishing ground truth for the test set. For the navigation accuracy, "post-surgical CT scan" was used, which is objective data. For human factors, "users" provided feedback; their specific qualifications beyond being "intended users" are not detailed.
4. Adjudication Method for the Test Set
No explicit adjudication method (e.g., 2+1, 3+1) is mentioned. For objective metrics like navigation accuracy (measured from CT), adjudication by experts might not be applicable in the same way as for subjective image interpretation. For human factors, it mentions "users providing feedback," implying a qualitative assessment that might not require formal adjudication.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done and, If So, the Effect Size of Human Reader Improvement with AI vs. Without AI Assistance
The document does not describe an MRMC comparative effectiveness study involving human readers and AI assistance. The study focuses on the device's standalone performance and its impact on the accuracy of a connected navigation system (Medtronic StealthStation S8). The device is an augmented reality visualization tool, not an AI diagnostic tool designed to directly improve human reader accuracy in image interpretation.
6. If a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Evaluation Was Done
Yes, the navigation accuracy testing can be considered a form of standalone performance evaluation for the SpineAR SNAP's ability to maintain the accuracy of the connected navigation system. The data presented (mean positional/displacement error, mean trajectory/angular error, projection latency, etc.) reflects the algorithm's performance in conjunction with the navigation system on a physical model, without directly involving human interpretation or decision-making as the primary endpoint. The device itself is software-based and augments visualization rather than providing automated diagnosis.
7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.)
- For Navigation Accuracy: "post-surgical CT scan" of the spine model was used to assess the final placement of screws compared to the pre-surgical plan. This is an objective, image-based ground truth.
- For Virtual Screw Library Verification, Headset Display Performance, Projection Latency, EMC, and Wireless Coexistence: The ground truth typically comes from engineering specifications, established measurement techniques, and industry standards. This is generally quantitative/technical ground truth.
- For Software Verification and Validation and Human Factors and Usability Validation: The ground truth is established by design requirements and user feedback/observational assessment against defined usability goals.
8. The Sample Size for the Training Set
The document does not provide any information about a training set size. This device appears to be primarily an augmented reality visualization and planning tool that integrates with existing navigation systems, rather than a machine learning model that requires a large training dataset.
9. How the Ground Truth for the Training Set was Established
Since no training set is mentioned (implying the device is not an AI model requiring a training phase), this question is not applicable based on the provided text.