The 7D Surgical System is a stereotaxic image guidance system intended for the spatial positioning and orientation of neurosurgical instruments used by surgeons. The system is also intended to be used as the primary surgical luminaire during image guided surgery. The device is indicated for cranial surgery where reference to a rigid anatomical structure can be identified.
The 7D Surgical System Cranial Application is intended for use as a stereotaxic image guided surgical navigation system during cranial surgical procedures. The Cranial Application software assists in guiding surgeons during cranial surgical procedures such as biopsies, tumor resections, and shunt and lead placements. The Cranial Application software works in conjunction with the 7D Surgical Machine Vision Guidance System, which consists of clinical software, an optically tracked surgical Pointer, a reference frame instrument, and platform/computer hardware that is substantially equivalent to K162375. Image guidance, or Machine Vision, tracks the position of instruments in relation to the surgical anatomy and identifies this position on DICOM scan images or intraoperative structured light images of the patient. The Cranial software functionality is described in terms of its feature sets, which are categorized as imaging modalities, registration, planning, and views. Feature sets include functionality that contributes to clinical decision making and is necessary to achieve system performance.
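To make the tracking concept concrete, the sketch below chains homogeneous transforms from an optically tracked pointer, through the patient reference frame, into image (DICOM) space. This is a minimal, generic illustration of optical navigation geometry; the frame names, pose values, and pointer-tip offset are illustrative assumptions, not details of the 7D Surgical implementation.

```python
import numpy as np

def make_transform(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# Naming convention: T_a_b maps points expressed in frame b into frame a.
# Poses reported by the optical tracker (hypothetical values, millimetres).
T_tracker_pointer = make_transform(np.eye(3), np.array([10.0, 5.0, 200.0]))    # pointer -> tracker
T_tracker_reference = make_transform(np.eye(3), np.array([0.0, 0.0, 180.0]))   # patient reference -> tracker

# Registration of the patient reference frame to image (DICOM) space, obtained
# in practice from a fiducial or surface registration step.
T_image_reference = make_transform(np.eye(3), np.array([-12.0, 34.0, 56.0]))   # reference -> image

# Calibrated pointer-tip offset in the pointer's own frame (homogeneous coordinates).
tip_in_pointer = np.array([0.0, 0.0, 150.0, 1.0])

# Chain the transforms: pointer -> tracker -> patient reference -> image space.
tip_in_image = (T_image_reference
                @ np.linalg.inv(T_tracker_reference)
                @ T_tracker_pointer
                @ tip_in_pointer)
print("Pointer tip in image coordinates (mm):", tip_in_image[:3])
```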
The 7D Surgical System Cranial Application comprises five major components:
- Cart
- Arm
- Head
- Tracked surgical instruments (7D Surgical System Cranial Instruments)
- Software
The provided document, a 510(k) summary for the 7D Surgical System Cranial Application, outlines the device's acceptance criteria and the studies conducted to demonstrate its safety and effectiveness for FDA clearance.
Here's a breakdown of the requested information:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are generally framed around demonstrating substantial equivalence to predicate devices and meeting established safety and performance standards for image-guided surgical systems. The document doesn't provide specific quantitative "acceptance criteria values" in the format of a typical performance table (e.g., "accuracy must be < X mm"). Instead, it states that tests were performed to verify "absolute accuracy and repeatability of the accuracy of the device, and the navigation accuracy according to ASTM F2554-10." It also mentions evaluating "Target Registration Error (TRE) and Angular Trajectory Error (ATE)" to assess clinical accuracy.
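To make the two clinical-accuracy metrics concrete, the following is a minimal sketch of how TRE (a point-to-point distance) and ATE (an angle between trajectory directions) are typically computed. The sample coordinates are hypothetical and are not values or tolerances from the submission.

```python
import numpy as np

def target_registration_error(p_reported: np.ndarray, p_truth: np.ndarray) -> float:
    """TRE: Euclidean distance (mm) between the navigated and ground-truth target positions."""
    return float(np.linalg.norm(p_reported - p_truth))

def angular_trajectory_error(v_reported: np.ndarray, v_truth: np.ndarray) -> float:
    """ATE: angle (degrees) between the navigated and ground-truth trajectory directions."""
    cos_angle = np.dot(v_reported, v_truth) / (np.linalg.norm(v_reported) * np.linalg.norm(v_truth))
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))

# Hypothetical phantom measurement: one navigated target and trajectory vs. the measured truth.
tre = target_registration_error(np.array([10.2, -4.9, 30.1]), np.array([10.0, -5.0, 30.0]))
ate = angular_trajectory_error(np.array([0.0, 0.1, 1.0]), np.array([0.0, 0.0, 1.0]))
print(f"TRE = {tre:.2f} mm, ATE = {ate:.2f} deg")
```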
The reported performance is summarized qualitatively:
| Acceptance Criterion (Test Type) | Reported Device Performance |
|---|---|
| System Verification (Design Requirement Specs) | Verification successful, all design requirements have been fulfilled. |
| System Validation (Indications for Use & Customer Requirements) | Validation successful, all user needs met. |
| Usability | Validation successful, device safe and effective with respect to use errors. |
| Safety regarding Risk Analysis | Risk Control requirements are effective and mitigate the associated risks to an acceptable level. |
| Biocompatibility (ISO 10993-1) | Compliance with recognized standards has previously been established in the predicate device K142024. (For Cranial Instrumentation) |
| Sterilization (ISO 17665-1) | Compliance with recognized standards has been verified in this application. (For Cranial Instrumentation) |
| Product Safety Standards (IEC 60601 series, IEC 60825-1) | Compliance with recognized standards has been verified in the previous application K162375. Previous test results have not been affected by this change. (For System Cart, as no substantial changes were made) |
| Non-Clinical Accuracy (ASTM F2554-10, TRE, ATE) | All accuracy specifications have been met. (Tested on phantom models in a clinical simulated environment) |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: The document does not specify a numerical sample size for the "Non-Clinical Performance Surgical Simulations Conducted on Phantom Models" or for the system verification/validation tests. It states "phantom models" but not the quantity or specific cases.
- Data Provenance: The studies were non-clinical simulations conducted on phantom models. The testing location is not explicitly stated, but the submitter is based in Toronto, ON, Canada, so the testing likely occurred there. These were bench tests on the finished device rather than prospective clinical trials, so the retrospective/prospective distinction used for clinical data does not strictly apply.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications of Experts
The document does not mention the use of human experts to establish ground truth for the non-clinical phantom studies. Ground truth for accuracy tests on phantom models is typically established by precise physical measurements on the phantom or through highly accurate reference tracking systems, rather than expert consensus on medical images.
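As an illustration of how such a geometric ground truth can be related to navigated measurements without expert readers, the sketch below performs a generic point-based rigid registration (Kabsch/SVD) between physically measured phantom fiducials and the same fiducials as localized by a navigation system, then reports the residual errors. The fiducial layout and noise level are assumptions for illustration, not details of the ASTM F2554-10 protocol or the 7D Surgical test setup.

```python
import numpy as np

def rigid_register(source: np.ndarray, target: np.ndarray):
    """Least-squares rotation R and translation t such that R @ source_i + t ~ target_i."""
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Physically measured fiducial positions on the phantom (hypothetical, mm) ...
measured = np.array([[0.0, 0.0, 0.0], [50.0, 0.0, 0.0], [0.0, 50.0, 0.0], [0.0, 0.0, 50.0]])
# ... and the same fiducials as localized by the navigation system (simulated noise).
rng = np.random.default_rng(0)
navigated = measured + rng.normal(scale=0.2, size=measured.shape)

R, t = rigid_register(navigated, measured)
residuals = np.linalg.norm((navigated @ R.T + t) - measured, axis=1)
print("Fiducial registration error (mm): mean %.3f, max %.3f" % (residuals.mean(), residuals.max()))
```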
4. Adjudication Method for the Test Set
Not applicable. As the tests were non-clinical simulations on phantom models, there was no need for human adjudication of results in the traditional sense of clinical imaging studies.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done, What was the Effect Size of How Much Human Readers Improve with AI vs Without AI Assistance
No MRMC study was performed. The device is a computer-assisted surgical navigation system, not an AI-assisted diagnostic imaging tool. Its purpose is to guide surgeons, not to "improve human readers." The document explicitly states: "A clinical trial was not required to demonstrate safety and effectiveness of the 7D Surgical System Cranial Application. Clinical validation is unnecessary as the 7D Surgical System Cranial Application introduces no new indications for use, and device features are equivalent to the previously cleared predicate device identified."
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
Yes, the accuracy and performance tests (e.g., Non-Clinical Accuracy, System Verification, System Validation, Usability, Biocompatibility, Sterilization, Product Safety standards) are essentially standalone performance evaluations of the device and its components, without a human-in-the-loop study in a clinical setting. The "human-in-the-loop" aspect during surgery is the surgeon using the navigation system, but the testing itself evaluates the system's inherent accuracy and functionality.
7. The Type of Ground Truth Used (Expert Consensus, Pathology, Outcomes Data, etc.)
For the non-clinical accuracy tests (TRE and ATE on phantom models), the ground truth was established through physical measurements or highly accurate reference tracking systems on the phantom. This is inherent in the ASTM F2554-10 standard. The document states: "TRE and ATE evaluates the error discrepancy between the position reported by the image guided surgery system and the ground truth position measured physically or otherwise."
8. The Sample Size for the Training Set
Not applicable. The document describes a medical device (a navigation system), not a machine learning model that requires a "training set" in the context of deep learning for image analysis. The "software" component is described as providing "functional components" and guidance, implying a rule-based or algorithmic system, not a data-driven machine learning model.
9. How the Ground Truth for the Training Set Was Established
Not applicable, as there was no explicit "training set" in the typical machine learning sense. The device's software and algorithms are developed and verified through engineering principles and testing against established performance requirements and standards.