510(k) Data Aggregation (157 days)
Remex Medical Corporation
The Anatase Spine Surgery Navigation System is indicated for precise positioning of surgical instruments or spinal implants during general spinal surgery when reference to a rigid anatomical structure, such as the vertebra, can be identified relative to a patient's fluoroscopic or CT imagery. It is intended as a planning and intraoperative guidance system to enable open or percutaneous image guided surgery by means of registering intraoperative 2D fluoroscopic projections to pre-operative 3D CT imagery.
Example procedures include but are not limited to:
Posterior-approach spinal implant procedures, such as pedicle screw placement, within the lumbar region.
The Anatase Spine Surgery Navigation System, also known as an Image Guided System, is comprised of a platform, clinical software, surgical instruments, and a referencing system. The system uses optical tracking technology to track the position of instruments in relation to the surgical anatomy and identifies this position on diagnostic or intraoperative images of a patient. The system helps guide surgeons during spine procedures such as spinal fusion. The software functionality in terms of its feature sets is categorized as imaging modalities, registration, planning, interfaces with medical devices, and views.
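To illustrate the navigation concept described above, the following sketch shows how a tracked instrument-tip position could be mapped into pre-operative CT coordinates by chaining rigid transforms from optical tracking and 2D/3D registration. This is a minimal, hypothetical example; the transform values, coordinate frames, and function names are assumptions and do not reflect the manufacturer's implementation.

```python
import numpy as np

def to_homogeneous(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 rigid transform from a 3x3 rotation and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# Hypothetical rigid transforms: one from the optical tracker to a reference frame
# fixed to the vertebra, and one from that reference frame to the pre-operative CT
# volume (e.g., the output of 2D/3D registration). Values are illustrative only.
tracker_to_reference = to_homogeneous(np.eye(3), np.array([10.0, -5.0, 30.0]))  # mm
reference_to_ct = to_homogeneous(np.eye(3), np.array([2.5, 1.0, -4.0]))          # mm

# Instrument tip position reported by the tracker, in homogeneous tracker coordinates.
tip_in_tracker = np.array([120.0, 45.0, 300.0, 1.0])

# Chain the transforms to express the tip in CT (image) coordinates, which is where
# a navigation display would overlay the instrument on the patient's imagery.
tip_in_ct = reference_to_ct @ tracker_to_reference @ tip_in_tracker
print("Instrument tip in CT coordinates (mm):", tip_in_ct[:3])
```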
The modified Anatase Spine Surgery Navigation System, the subject of this 510(k) submission, introduces software, hardware, and instrument modifications to the original Surgery Navigation System cleared under 510(k) K180523.
The Anatase Spine Surgery Navigation System, model numbers SNS-Spine2-S and SNS-Spine2-V, is indicated for precise positioning of surgical instruments or spinal implants during general spinal surgery.
Here's a breakdown of the acceptance criteria and study information:
1. Table of Acceptance Criteria and Reported Device Performance:
| Test | Acceptance Criteria/Standard | Reported Device Performance |
|---|---|---|
| Sterilization | ISO 17665-1:2006 | Moist heat sterilization of reusable accessories validated. |
| Repeated Reprocessing | ISO 11737-2:2019 | Reliability of reusable instruments validated. |
| Biocompatibility | FDA guidance on ISO 10993-1 (June 16, 2016); ISO 10993-1:2009 | Accessories in contact with the patient evaluated. |
| Software | FDA guidance for software in medical devices (May 11, 2005) | Software verified and validated. |
| Electrical Safety | ANSI/AAMI ES60601-1:2005/(R)2012, A1:2012, C1:2009/(R)2012, A2:2010/(R)2012 | Complied with requirements. |
| Electromagnetic Compatibility | IEC 60601-1-2:2014 | Complied with requirements. |
| Usability | ANSI/AAMI HE75:2009/(R)2013, IEC 62366-1:2015, IEC 60601-1-6:2010 + A1:2013 | System usability validated. |
| Accuracy | ASTM F2554-18 | Positional accuracy evaluated (specific results not given in the summary). |
| Risk Assessment | ISO 14971:2007 | Effectiveness of risk control measures verified. |
| Design Verification | Not explicitly stated beyond "all design input requirements" | Design output fulfills all design input requirements. |
2. Sample size used for the test set and the data provenance:
The document does not specify sample sizes for any test set or the data provenance (e.g., country of origin, retrospective or prospective). The studies were non-clinical and did not involve patient data.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
This information is not provided in the document as these were non-clinical tests.
4. Adjudication method for the test set:
This information is not provided in the document as these were non-clinical tests.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:
No clinical testing, including MRMC studies, was conducted. The document explicitly states: "No clinical testing has been conducted."
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done:
The provided information addresses the performance of the system as a whole; as an image-guided navigation system, it inherently involves human interaction (a surgeon). Software verification and validation were performed, but the document does not separate human-in-the-loop from algorithm-only performance in the manner typical of a standalone assessment for an AI diagnostic device. The "Accuracy" test does assess the system's ability to track and display positions, which is a standalone metric for the navigation component, but it is not described as AI-specific algorithm performance.
7. The type of ground truth used:
For the accuracy testing, the ground truth would likely be established through precise physical measurements to determine the true positional accuracy of the system against a known standard. However, the document does not specify the exact methodology for establishing the ground truth beyond referencing ASTM F2554-18. For other tests like electrical safety, EMC, and sterilization, the "ground truth" is defined by compliance with the referenced standards.
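As a purely illustrative sketch of how positional accuracy under an ASTM F2554-style protocol might be summarized, the example below compares system-reported positions against reference positions from a calibrated measurement device and computes error statistics. The data points and the 2.0 mm criterion are hypothetical and are not taken from the submission.

```python
import numpy as np

# Hypothetical reference positions (e.g., from a calibrated measurement arm) and the
# corresponding positions reported by the navigation system, in millimetres.
reference = np.array([[0.0, 0.0, 0.0],
                      [25.0, 0.0, 0.0],
                      [25.0, 25.0, 0.0],
                      [0.0, 25.0, 10.0]])
reported = np.array([[0.3, -0.2, 0.1],
                     [25.4, 0.1, -0.3],
                     [24.8, 25.2, 0.2],
                     [-0.1, 24.7, 10.4]])

# Point-wise positional error (Euclidean distance) and summary statistics.
errors = np.linalg.norm(reported - reference, axis=1)
print(f"Mean error: {errors.mean():.2f} mm")
print(f"Max error:  {errors.max():.2f} mm")
print(f"95th percentile: {np.percentile(errors, 95):.2f} mm")

# A hypothetical acceptance criterion, e.g. mean positional error below 2.0 mm.
assert errors.mean() < 2.0, "Positional accuracy criterion not met"
```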
8. The sample size for the training set:
Because this is a navigation system rather than a machine-learning-based diagnostic device, the concept of a "training set" is not directly applicable and is not discussed in the document. Software verification and validation were performed, but details on the data used for these activities are not provided.
9. How the ground truth for the training set was established:
As above, the concept of a "training set" with an established ground truth, as typically understood in AI/machine learning, is not applicable here. Software verification and validation would rely on standard testing methods to confirm that the software performs as designed and meets its requirements.