The Regulus™ Navigator (RN) is an intraoperative guidance device that uses reference markers or anatomical references to localize the surgical field. The RN is intended for intracranial and extracranial use.
The Regulus Navigator (RN) incorporates preoperative CT and MRI images into a surgical computer system. In the operating room, the surgeon selects a minimum of three points (reference markers or anatomical points) on the patient with the RN (OR space) and identifies the corresponding locations on the images (image space). These corresponding point pairs are used to calculate a "transformation matrix" that maps the location of the RN instrument into image space. The instrument's location is displayed interactively as a cursor on the diagnostic images, guiding the surgeon during the intracranial or extracranial procedure.
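The summary does not disclose how the "transformation matrix" is computed. A common choice for this kind of paired-point rigid registration is the SVD-based Kabsch/Horn solution; the Python sketch below illustrates that approach under that assumption. The function names are illustrative, not the device's actual implementation.

```python
import numpy as np

def rigid_registration(or_points: np.ndarray, image_points: np.ndarray):
    """Estimate a rigid transform (R, t) mapping OR-space points onto
    their corresponding image-space points (Kabsch/Horn, via SVD).

    or_points, image_points: (N, 3) arrays of corresponding points, N >= 3.
    Returns (R, t) such that image_point ~= R @ or_point + t.
    """
    src_centroid = or_points.mean(axis=0)
    dst_centroid = image_points.mean(axis=0)
    src = or_points - src_centroid
    dst = image_points - dst_centroid

    # Cross-covariance of the centered point sets, then its SVD.
    H = src.T @ dst
    U, _, Vt = np.linalg.svd(H)

    # Correct for a possible reflection (det = -1) so R is a pure rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst_centroid - R @ src_centroid
    return R, t

def to_image_space(R: np.ndarray, t: np.ndarray, probe_tip: np.ndarray):
    """Map an instrument-tip position from OR space into image space."""
    return R @ probe_tip + t
```

With (R, t) in hand, the tracked tip position can be mapped into image space on every update and drawn as the cursor the summary describes.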
Here's a breakdown of the acceptance criteria and the study details for the Regulus™ Navigator, based on the provided 510(k) summary:
Regulus™ Navigator Acceptance Criteria and Study Details
1. Table of Acceptance Criteria and Reported Device Performance
| Criteria Category | Acceptance Criteria | Reported Device Performance (Regulus™ Navigator) |
|---|---|---|
| Non-Clinical Testing | | |
| CT Phantom Accuracy (Mean) | Not explicitly stated; benchmarked against predicate devices (RMU: 2.10 mm, ISG Viewing Wand: 1-2 mm)* | 1.02 mm (CT) |
| MRI Phantom Accuracy (Mean) | Not explicitly stated; benchmarked against predicate devices (RMU: 2.10 mm, ISG Viewing Wand: 1-2 mm)* | 1.67 mm (MR) |
| Clinical Registration Testing | | |
| Overall Mean Accuracy | Not explicitly stated; benchmarked against predicate devices (RMU: 2.78 mm, ISG Viewing Wand: 2.51 mm)* | 2.56 mm |
| Accuracy ≤ 5 mm | Not explicitly stated | 97% of cases (accuracy was 5 mm or less) |
*Note: The document implicitly uses the performance of the predicate devices as the benchmark for substantial equivalence rather than setting explicit numerical acceptance criteria for the Regulus™ Navigator before testing. The reported performance is nonetheless better than or comparable to that of the predicates.
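For illustration only, the snippet below shows how the two reported clinical aggregates (mean accuracy and the share of cases at or under 5 mm) would be computed from per-case errors. The summary reports only the aggregates (mean 2.56 mm; 97% ≤ 5 mm over 221 patients), so the per-case values here are synthetic placeholders.

```python
import numpy as np

# Synthetic per-case registration errors (mm); the 510(k) summary does
# not publish raw per-case data, so these values are placeholders only.
rng = np.random.default_rng(seed=42)
errors_mm = rng.gamma(shape=4.0, scale=0.64, size=221)  # mean ~ 2.56 mm

mean_accuracy = errors_mm.mean()
pct_within_5mm = 100.0 * np.mean(errors_mm <= 5.0)
print(f"mean registration accuracy: {mean_accuracy:.2f} mm")
print(f"cases with accuracy <= 5 mm: {pct_within_5mm:.0f}%")
```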
2. Sample Size Used for the Test Set and Data Provenance
- Non-Clinical Tests (Phantom Testing):
- CT Scans: 12 phantom tests
- MRI Scans: 9 phantom tests
- Data Provenance: Not explicitly stated, but this implies laboratory/bench testing rather than patient data, most likely performed in the U.S. (the submitter is located in Rochester, MN).
- Clinical Registration Tests:
- Sample Size: 221 patients
- Data Provenance: Not explicitly stated, but given the company's location in Rochester, Minnesota, U.S.A., the data most likely come from U.S. hospital settings. The study appears prospective, as it involved "patients requiring conventional surgery."
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- For Non-Clinical (Phantom) Tests: No human experts are mentioned for establishing ground truth. Phantom testing typically uses physical measurements against known values, so the "ground truth" would be the precisely known configuration of the phantom.
- For Clinical Registration Tests: The document does not specify the number or qualifications of experts used to establish ground truth for clinical registration accuracy. Registration accuracy is itself a technical measurement (likely the RN's displayed location compared with a known anatomical point or a fiducial marker's true location). The surgeon's interactive selection of points (described in Section D.1) is part of the registration workflow rather than an independent ground-truth determination; the clinical study's focus is on measuring the device's accuracy in a real-world surgical setting.
4. Adjudication Method for the Test Set
- For Non-Clinical (Phantom) Tests: No adjudication method is mentioned as it's a technical measurement against a known phantom.
- For Clinical Registration Tests: No adjudication method is explicitly stated. The accuracy measurement (2.56mm) would likely be a direct technical measurement of the system's output against a reference in the surgical field.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- No. An MRMC comparative effectiveness study, in particular one assessing how much human readers improve with versus without AI assistance, was not described or conducted. This submission covers an intraoperative guidance device, not an AI-powered diagnostic imaging tool that human readers interpret. The closest analogue is "Intra-operative Guidance," in which the device supplies positional information to the surgeon, but this is not framed as an "AI assistance" study in the modern sense. The "Tip Location" and "Tip's Eye View" software features could be considered forms of assistance, yet no comparative study of surgeon performance with and without the device is reported.
6. Standalone (Algorithm Only Without Human-in-the-Loop) Performance Study
- Yes, the Non-Clinical Phantom Tests represent the standalone performance of the device's measurement accuracy.
- For CT scans, the average three-dimensional error was 1.02 mm with a standard deviation of 0.16 mm over 12 tests.
- For MRI scans, the average three-dimensional error was 1.67 mm with a standard deviation of 0.42 mm over 9 tests.
- These tests measure the inherent accuracy of the system in a controlled environment, largely independent of human intervention beyond phantom setup; a sketch of this error computation follows below.
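The summary does not describe the phantom protocol in detail, but the reported statistics have the form of per-test 3D Euclidean errors summarized by a mean and standard deviation. A minimal sketch of that computation, with illustrative function names, is:

```python
import numpy as np

def three_d_errors(measured: np.ndarray, known: np.ndarray) -> np.ndarray:
    """Per-test 3D error: Euclidean distance (mm) between each measured
    target position and the phantom's known ground-truth position.

    measured, known: (N, 3) arrays of target coordinates in mm.
    """
    return np.linalg.norm(measured - known, axis=1)

def summarize(errors_mm: np.ndarray):
    """Mean and sample standard deviation, matching the reported form
    (e.g. CT: 1.02 mm mean, 0.16 mm SD over 12 tests)."""
    return float(errors_mm.mean()), float(errors_mm.std(ddof=1))
```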
7. Type of Ground Truth Used
- Non-Clinical (Phantom) Tests: The ground truth for these tests would be the known, precisely defined physical dimensions and locations within the CT/MRI test phantoms. These phantoms are designed with specific, measurable targets.
- Clinical Registration Tests: The ground truth for clinical registration accuracy is likely derived from direct anatomical measurements or the known locations of fiducial markers, compared against the system's reported position. "Expert consensus" is not explicitly stated as the basis for this ground truth; the surgeon's input is crucial for selecting reference points, but the accuracy itself is a technical measurement against those established reference points (a sketch of this kind of check follows).
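If fiducial-based ground truth was used, as speculated above, the standard accuracy check in the navigation literature is the target registration error (TRE): the registration transform is applied to a target not used in the registration, and the result is compared against the target's known image-space location. The sketch below (reusing the hypothetical (R, t) from the registration sketch in the device description) is illustrative, not the study's documented method.

```python
import numpy as np

def target_registration_error(R: np.ndarray, t: np.ndarray,
                              or_target: np.ndarray,
                              image_target: np.ndarray) -> float:
    """Distance (mm) between a target's registered position and its known
    image-space location. The target should be a point *not* used when
    computing (R, t), e.g. a held-out fiducial or anatomical landmark."""
    predicted = R @ or_target + t
    return float(np.linalg.norm(predicted - image_target))
```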
8. Sample Size for the Training Set
- The document does not describe a "training set" in the machine-learning sense. The device appears to be a stereotactic instrument that uses conventional computational geometry and image processing to transform coordinates, not an ML model with a distinct training phase, so the concept of a "training set" as understood in AI/ML does not apply. The software components are described as "treatment planning software" and deterministic features such as "Tip Location" and "Tip's Eye View."
9. How the Ground Truth for the Training Set Was Established
- Because no AI/ML "training set" is described or implied by the device's functionality, the document provides no information on how such a ground truth was established. The device relies on mathematical transformations and measurements; parameters may be calibrated, but there is no "ground truth" for a learning process in the AI sense.
§ 882.4560 Stereotaxic instrument.
(a) Identification. A stereotaxic instrument is a device consisting of a rigid frame with a calibrated guide mechanism for precisely positioning probes or other devices within a patient's brain, spinal cord, or other part of the nervous system.
(b) Classification. Class II (performance standards).