Search Results
Found 2 results
510(k) Data Aggregation
(255 days)
The ViewPoint is intended for use as a device that uses diagnostic images of the patient acquired specifically to assist the physician with presurgical planning and to provide orientation and reference information during intra-operative procedures.
The ViewPoint is indicated for use in:
· Intra-cranial surgical procedures involving space occupying lesions or malformations (including soft tissue, vascular and osseous)
· Spinal surgical procedures involving spinal stabilization, neural decompression, or resection of spinal neoplasms.
Prior to use, the ViewPoint tools must be sterilized. Testing has been completed to validate the use of the Sterrad 100 system for this process. The Guide Block (Non-Trackable and Trackable) and the Trackable Awl are three additional tools now available for the ViewPoint. The Guide Blocks guide and track the trajectory of a biopsy needle during a procedure. The Trackable Awl is a standard awl that has been adapted to include infrared emitting diodes so that the tip of the awl can be tracked similarly to the standard ViewPoint Y-probe.
The provided text describes the ViewPoint Tools - Sterrad, an Image Assisted Surgery Device. The submission is a 510(k) for new tools (Guide Blocks and Trackable Awl) and a new sterilization technique (Sterrad 100 system) for the previously cleared ViewPoint system. The document focuses on demonstrating substantial equivalence to predicate devices rather than a standalone clinical study on the device's diagnostic performance for its intended use.
Here's the breakdown of the acceptance criteria and the study as described in the document:
1. Table of Acceptance Criteria and Reported Device Performance
The submission primarily aims to demonstrate that the new ViewPoint tools (Guide Blocks and Trackable Awl) and the new sterilization technique (Sterrad 100 system) are substantially equivalent to previously cleared predicate devices and sterilization methods. The acceptance criteria are implicit in proving this equivalence, particularly regarding the existing accuracy specifications of the ViewPoint system.
| Parameter | Acceptance Criteria (from Predicate Device) | Reported Device Performance (ViewPoint - Sterrad) |
|---|---|---|
| Tools | Y-Probe, Cable, Head Tracker, Spine Tracker, Drill Guide (Non-Trackable and Trackable) | Y-Probe, Cable, Head Tracker, Spine Tracker, Drill Guide (Non-Trackable and Trackable), Guide Block (Non-Trackable and Trackable), Trackable Awl (New tools claimed equivalent) |
| Material Considerations | Combination of metal and non-metal, IREDs sensitive to heat. | Same. |
| Lumens | Dead-end lumen in LEMO Connector for trackable tools; single-channel stainless steel lumen for drill guides. | Same. |
| Use limits | None specified for predicate. | Same (None). |
| Accuracy (Y-probe) | Repeatability/Resolution: 1mm; Distance measurement: $\pm$ 0.75 mm; 3D Localization: $\leq$ 1.57 mm; Fourth Fiducial Checkpoint: $<$ 5.0 mm | Same (Implied: the new tools/sterilization do not degrade the existing accuracy). |
| Sterilization Technique | Ethylene Oxide | Sterrad (New technique claimed equivalent in effectiveness and non-impact on tool function/accuracy). |
| Intended Use | As device for presurgical planning and intra-operative orientation/reference. | Same. |
| Indications for Use | Intra-cranial and Spinal surgical procedures. | Same. |
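To make the accuracy row concrete: 3D localization error is simply the Euclidean distance between a tracked tool-tip position and its known reference position. The following is a minimal sketch of how such a spec check could be computed; the point coordinates and function name are hypothetical and are not data from the submission.

```python
import math

# Hypothetical reference (ground-truth) and tracked tool-tip
# positions in millimetres, e.g. from a phantom study.
reference = [(10.0, 20.0, 30.0), (15.0, 25.0, 35.0), (12.0, 18.0, 28.0)]
tracked = [(10.4, 20.3, 29.8), (15.2, 24.6, 35.3), (11.7, 18.2, 28.5)]

def localization_error(ref, meas):
    """Euclidean distance (mm) between a reference and a measured point."""
    return math.dist(ref, meas)

errors = [localization_error(r, m) for r, m in zip(reference, tracked)]
max_error = max(errors)

# The spec cited in the table above: 3D localization <= 1.57 mm.
print(f"max 3D localization error: {max_error:.2f} mm")
print("within spec" if max_error <= 1.57 else "OUT OF SPEC")
```

A real validation would use many more measurement points and repeated trials, but the pass/fail arithmetic is this simple.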
Study Proving Acceptance Criteria:
The "study" described is a demonstration of substantial equivalence to predicate devices (K963221 and K970604). This is a regulatory pathway, not a traditional clinical accuracy or effectiveness study. The primary focus is on showing that the modifications (new tools and sterilization method) do not raise new questions of safety or effectiveness and perform as well as the legally marketed predicate devices.
The document states: "The use of the Sterrad 100 system adequately sterilizes the ViewPoint tools for intraoperative procedures and does not affect the accuracy or function of the tools. The Guide Blocks and the Trackable Awl described in this submission are equivalent to the tools described in the 510(k) submissions K963221 and K970604. This equivalence is demonstrated in the following table."
This suggests that:
- Sterilization Validation: Testing was completed to validate the use of the Sterrad 100 system for this process. This implies a sterilization validation study was performed to ensure the Sterrad 100 achieves sterility and does not negatively impact the tools' accuracy or function.
- Tool Equivalence: The new tools (Guide Blocks and Trackable Awl) are stated to be "equivalent" to the predicate tools, implying design and performance similarity where applicable, and that their addition does not change the overall system's fundamental performance characteristics (such as accuracy).
2. Sample Size for the Test Set and Data Provenance
- Sample Size: The document does not specify a distinct "test set" in the context of a clinical performance study with a patient cohort. The submission is focused on demonstrating equivalence through technical comparisons and sterilization validation. For the sterilization validation, the sample size would refer to the number of devices or cycles tested, but this detail is not provided.
- Data Provenance: Not applicable in the context of a clinical test set. The data originates from internal company testing (for sterilization validation and tool comparison) and references previously cleared predicate device characteristics.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- This is not applicable as the submission is not a clinical effectiveness study requiring expert interpretation of results for ground truth establishment. The ground truth for tool accuracy parameters would be established by engineering measurements and metrology standards. For sterilization, ground truth is microbiological sterility testing.
4. Adjudication Method for the Test Set
- Not applicable as there is no human-read test set requiring adjudication in this 510(k) submission.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done
- No, an MRMC comparative effectiveness study was not done or described in this document. The submission focuses on device equivalence, not clinical effectiveness studies with human readers.
6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) was done
- This device is an Image Assisted Surgery Device, implying a human-in-the-loop system. The concept of "standalone" performance for an algorithm without human intervention generally applies to diagnostic AI systems, which is not the primary focus of this submission. The accuracy parameters (e.g., repeatability, 3D localization) relate to the device itself, not a diagnostic algorithm.
7. The Type of Ground Truth Used
- For Accuracy: The ground truth for the stated accuracy parameters (e.g., 3D Localization $\leq$ 1.57 mm) would be based on engineering measurement standards and metrology, verified through controlled laboratory testing.
- For Sterilization: The ground truth for adequate sterilization would be established through microbiological testing (e.g., sterility testing, bioburden reduction validation) following recognized sterilization standards.
- For Equivalence of Tools: The ground truth for equivalence (e.g., material considerations, lumens, form, and fit) is established through design specifications, material certifications, and comparative technical analysis against the predicate devices.
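The microbiological ground truth mentioned above is typically expressed as a log reduction of a biological-indicator spore population under an overkill approach. The sketch below shows only the arithmetic involved; all counts are illustrative, since the actual Sterrad 100 validation data are not included in the document.

```python
import math

# Illustrative biological-indicator counts (CFU): starting spore
# population and survivors after a fractional (half) cycle.
initial_population = 1.0e6  # typical BI spore population
survivors = 1.0             # hypothetical post-half-cycle count

# Log10 reduction achieved by the fractional cycle.
log_reduction = math.log10(initial_population / survivors)

# Overkill validation commonly extrapolates a full cycle to twice the
# fractional-cycle lethality, then checks that it supports a sterility
# assurance level (SAL) of 10^-6.
full_cycle_logs = 2 * log_reduction
sal_exponent = math.log10(initial_population) - full_cycle_logs

print(f"fractional-cycle log reduction: {log_reduction:.1f}")
print(f"predicted SAL: 10^{sal_exponent:.0f}")
print("supports SAL 10^-6" if sal_exponent <= -6 else "insufficient lethality")
```

An actual validation would follow a recognized sterilization standard and include repeated cycles, placement of indicators at worst-case locations (e.g., inside the dead-end lumens noted in the table), and post-cycle functional and accuracy testing of the tools.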
8. The Sample Size for the Training Set
- Not applicable. This is not an AI/ML device submission that involves a training set.
9. How the Ground Truth for the Training Set Was Established
- Not applicable. As above, there is no AI/ML training set.
(90 days)
The intended use of the ViewPoint is unchanged by the 3.0 software, but the indications for use have been expanded to include spinal surgical procedures. The intended use and indications for use are as follows:
The ViewPoint is intended for use as a device which uses diagnostic images of the patient acquired specifically to assist the physician with presurgical planning and to provide orientation and reference information during intra-operative procedures.
The ViewPoint is indicated for use in:
· Intra-cranial surgical procedures involving space occupying lesions or malformations (including soft tissue, vascular and osseous)
· Spinal surgical procedures involving spinal stabilization, neural decompression, or resection of spinal neoplasms.
The new features in the 3.0 Software include a detector positioning feature and support for the following optional hardware accessories: a tracking device, drill guides and a CT spine phantom. The indications for use of the ViewPoint have also been expanded to include use in spinal surgeries.
The provided text describes a 510(k) submission for the ViewPoint - 3.0 Operating Software, focusing on demonstrating substantial equivalence to a predicate device rather than presenting a de novo study with specific acceptance criteria and performance data. Therefore, many of the requested details about acceptance criteria, study design parameters (like sample size, number of experts, adjudication methods), and specific performance metrics for the device itself are not explicitly stated within the provided document.
The document highlights the substantial equivalence of the ViewPoint 3.0 software to its predicate device (ViewPoint & Optical Digitizer Option - K961168, K963221). The "Acceptance Criteria" here are implicitly linked to demonstrating that the new software maintains the safety and effectiveness characteristics of the predicate device, particularly its accuracy and intended use, while expanding its indications to include spinal surgery.
Here's an analysis based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state "acceptance criteria" in a quantitative, measurable form for the ViewPoint 3.0 software as if it were a new device proving its efficacy. Instead, it demonstrates substantial equivalence to a predicate device. The "performance" is primarily described in terms of matching or improving upon the predicate device's specifications.
| Parameter | Predicate Device Performance / Acceptance (Implicit) | ViewPoint with 3.0 Software Performance | Device Meets Acceptance Criteria? |
|---|---|---|---|
| Average Tool Accuracy | 2.0 - 5.0 mm (from K961168) | Same (2.0 - 5.0 mm) | Yes (Matches predicate) |
| Active Digitizer Volume | Silo shape, with 1 meter diameter and 1 meter length (from K963221) | Same (Silo shape, 1m diameter, 1m length) with Detector Positioning Feature added | Yes (Matches predicate, with enhancement) |
| Intended Use | Uses diagnostic images to assist presurgical planning and intra-operative orientation (from K961168) | Same | Yes (Matches predicate) |
| Indications for Use | Intra-cranial surgical procedures (K961168) | Expanded to include Intra-cranial and Spinal Surgical procedures | Yes (Expands on predicate) |
| Tools | A long and short tool with a minimum of four IREDs per tool (K963221) | Same | Yes (Matches predicate) |
| Type of Detector | Infrared signals from diodes detected by a Position Sensor Assembly (K963221) | Same | Yes (Matches predicate) |
| Accessories | MR/CT Head Phantoms (K961168) | Expanded to include MR/CT Head Phantoms, CT Spine Phantom, Tracking device, Drill Guide | Yes (Expands on predicate) |
| Registration Technique | Scanned Fiducials (K961168) | Expanded to include Scanned Fiducials and Anatomical Fiducials | Yes (Expands on predicate) |
| Operating Software Structure | UNIX environment with three processes (Import, Surgery Application, Foot Switch), Graphical User Interface | Same structure. Modified Graphical User Interface with similar functionality | Yes (Matches predicate, with similar functionality) |
| Image Manipulation | MPR and surface rendering | Same | Yes (Matches predicate) |
| Other Features | None (K961168) | Detector Positioning Feature | Yes (New feature, presumably beneficial) |
2. Sample Size for the Test Set and Data Provenance
The document does not describe a specific test set or clinical study for the ViewPoint 3.0 software in this submission. Its approach is to demonstrate substantial equivalence to previously cleared predicate devices. Therefore, details like sample size, country of origin, or retrospective/prospective nature of a test set are not provided for the 3.0 software itself. The basis for "Average Tool Accuracy" (2.0 - 5.0 mm) would have been established in the predicate device’s 510(k) (K961168), but the specifics of that study are not included here.
3. Number of Experts and Qualifications for Ground Truth Establishment (Test Set)
Not applicable, as no new test set study for the 3.0 software is described in this submission. The ground truth for the predicate device's accuracy would have been established during its clearance.
4. Adjudication Method (Test Set)
Not applicable, as no new test set study for the 3.0 software is described.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No MRMC study is mentioned. This submission focuses on hardware and software feature equivalence and expansion of indications rather than comparative effectiveness with or without AI assistance for human readers.
6. Standalone (Algorithm Only Without Human-in-the-Loop Performance)
The ViewPoint system is an "Image Assisted Surgery Device," meaning it is inherently designed for "human-in-the-loop" use to assist the physician. Therefore, a standalone (algorithm only) performance assessment would not be applicable or relevant to its intended use. The "Average Tool Accuracy" is a standalone technical specification of the device's measurement capabilities.
7. Type of Ground Truth Used
For the predicate device's accuracy, the ground truth would likely be established by precise physical measurements (e.g., using a coordinate measuring machine or similar high-precision measurement system on phantoms) to verify the tool's reported position against its true physical position. For the 3.0 software, the ground truth for its new features (like Detector Positioning Feature, support for CT Spine Phantom, Anatomical Fiducials) would be centered around functional verification and validation that these features operate as intended and maintain the previously established accuracy.
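As a concrete illustration of the phantom-based verification described above, "average tool accuracy" can be computed as the mean Euclidean error between CMM-measured fiducial positions and the positions reported by the navigation system. The coordinates below are hypothetical, chosen only to show the calculation against the 2.0 - 5.0 mm band quoted in the table.

```python
import math
import statistics

# Hypothetical phantom fiducial positions (mm): CMM-measured ground
# truth vs. positions reported by the navigation system.
cmm_truth = [(0.0, 0.0, 0.0), (50.0, 0.0, 0.0), (0.0, 50.0, 0.0), (0.0, 0.0, 50.0)]
reported = [(1.8, 1.2, 0.9), (52.1, 1.0, -0.8), (0.7, 51.5, 1.4), (-1.1, 0.9, 51.6)]

# Per-fiducial Euclidean error, then the average across the phantom.
errors = [math.dist(t, r) for t, r in zip(cmm_truth, reported)]
average_accuracy = statistics.mean(errors)

# The predicate spec quoted above: average tool accuracy 2.0 - 5.0 mm.
print(f"average tool accuracy: {average_accuracy:.1f} mm")
print("within stated range" if 2.0 <= average_accuracy <= 5.0 else "outside range")
```

A real verification would average over many fiducials, tool orientations, and positions throughout the digitizer volume rather than four points.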
8. Sample Size for the Training Set
The document does not describe the development or training of an AI algorithm in the contemporary sense. This device (from 1997) is an early image-assisted surgery system, not a machine learning-based AI device that typically relies on "training sets." Its "operating software" refers to conventional programming.
9. How the Ground Truth for the Training Set Was Established
Not applicable, as there is no mention of a training set for a machine learning model.