(90 days)
The xvision Spine System, with xvision System Software, is intended as an aid for precisely locating anatomical structures in either open or percutaneous spine procedures. Its use is indicated for any medical condition in which the use of stereotactic surgery may be appropriate, and where reference to a rigid anatomical structure, such as the spine or pelvis, can be identified relative to CT imagery of the anatomy. This can include spinal implant procedures, such as posterior pedicle screw placement in the thoracic and sacro-lumbar region.
The Headset of the xvision Spine System displays 2D stereotaxic screens and a virtual anatomy screen. The stereotaxic screens are indicated for correlating the tracked instrument location to the registered patient imagery. The virtual screen is indicated for displaying the virtual instrument location relative to the virtual anatomy, to assist in percutaneous visualization and trajectory planning.
The virtual display should not be relied upon solely for absolute positional information and should always be used in conjunction with the displayed stereotaxic information.
The xvision Spine (XVS) system is an image-guided navigation system designed to assist surgeons in placing pedicle screws accurately during open or percutaneous computer-assisted spinal surgery. The system consists of dedicated software, a Headset, single-use passive reflective markers, and reusable components. It uses wireless optical tracking technology and displays the location of the tracked surgical instruments, relative to the acquired intraoperative patient scan, over the surgical field. The 2D scan data and 3D reconstructed model, along with tracking information, are projected onto the surgeon's retina through a transparent near-eye-display Headset, allowing the surgeon to look at both the patient and the navigation data at the same time.
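As background on how such an overlay is driven, the sketch below composes the rigid transforms that optical navigation systems of this kind typically chain together: a tool-to-reference pose reported by the tracker and a reference-to-image transform from registration. All matrix values, offsets, and names are illustrative assumptions, not Augmedics' actual implementation.

```python
import numpy as np

def make_transform(R, t):
    """Assemble a 4x4 homogeneous rigid transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical poses (identity rotations for brevity):
# tool pose in the patient-reference frame, reported by the optical tracker
T_ref_from_tool = make_transform(np.eye(3), np.array([10.0, 0.0, 5.0]))
# registration of the reference frame to the intraoperative scan
T_image_from_ref = make_transform(np.eye(3), np.array([-2.0, 1.0, 0.0]))

# Instrument tip in the tool's own frame (known from calibration); homogeneous, mm
tip_tool = np.array([0.0, 0.0, 150.0, 1.0])

# Chain the transforms: tool -> reference -> image coordinates,
# giving the tip position to overlay on the registered scan
tip_image = T_image_from_ref @ T_ref_from_tool @ tip_tool
print(tip_image[:3])  # -> [  8.   1. 155.]
```

In practice each transform in the chain carries its own error, which is why the system-level accuracy figures below aggregate tracking, registration, and calibration error rather than reporting any one of them in isolation.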
Here's an analysis of the acceptance criteria and the study proving the device meets them, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance:
| Acceptance Criteria (Stated) | Reported Device Performance |
| --- | --- |
| System Level Accuracy with a mean 3D positional error of 2.0mm and mean trajectory error of 2° | Phantom and cadaver studies: mean positional error 2.32mm (99% UBL = 2.58mm); mean angular error 1.66° (99% UBL = 1.93°). Statistically significantly lower than 3mm positional and 3° angular error. |
| Clinical accuracy for pedicle screw placement in sacral/lumbar vertebrae (Gertzbein score) | A total accuracy of 97.7% was demonstrated, very similar to the literature control rate of 95%. |
| Electrical safety | Tested in accordance with ANSI AAMI ES60601-1:2005/(R)2012 and A1:2012, C1:2009/(R)2012 and A2:2010/(R)2012. Successfully completed. |
| Electromagnetic compatibility (EMC) | Tested in accordance with IEC 60601-1-2:2014. Successfully completed. |
| Sterilization validation for single-use components | Conducted in accordance with ANSI AAMI ISO 11137-1:2006/(R)2015. Shelf life and packaging testing performed. All tests successfully completed. |
| Cleaning and steam sterilization validation for reusable components | Cleaning: AAMI TIR30:2011. Steam sterilization: ANSI/AAMI/ISO 17665-1:2006/(R)2013 and ANSI/AAMI/ISO 14937:2009/(R)2013. Successfully completed. |
| Biocompatibility of patient-contact materials | Verified according to ISO 10993-1:2018 and the FDA guidance on the use of ISO 10993-1 (June 16, 2016). All tests successfully completed. |
| Software verification and validation | Conducted as required by IEC 62304 and the FDA guidance on general principles of software validation (January 11, 2002). |
Note: While the reported mean positional error (2.32mm) is higher than the stated acceptance criterion (2.0mm), the text explicitly states that it is "statistically significantly lower than 3mm," which implies the result was judged against a broader acceptable threshold or was deemed clinically acceptable despite slightly exceeding the initial numerical target.
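For orientation, the quoted accuracy metrics are conventionally computed as a Euclidean distance between actual and virtual tips, an angle between trajectory unit vectors, and a one-sided upper confidence bound on the mean. The sketch below uses synthetic data, and it assumes the "99% UBL" is a t-distribution upper bound on the mean, since the text does not define the term.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-screw measurements (NOT the submission's data)
actual_tips = rng.normal(0.0, 20.0, (30, 3))                # post-op tip positions, mm
virtual_tips = actual_tips + rng.normal(0.0, 1.5, (30, 3))  # navigated "virtual" tips

# 3D positional error: Euclidean distance between actual and virtual tip
pos_err = np.linalg.norm(actual_tips - virtual_tips, axis=1)

# Angular error: angle between actual and virtual trajectory unit vectors
traj = rng.normal(size=(30, 3))
traj /= np.linalg.norm(traj, axis=1, keepdims=True)
virt = traj + rng.normal(0.0, 0.02, (30, 3))
virt /= np.linalg.norm(virt, axis=1, keepdims=True)
ang_err = np.degrees(np.arccos(np.clip((traj * virt).sum(axis=1), -1.0, 1.0)))

def upper_bound_99(x):
    """One-sided 99% upper confidence bound on the mean (t-distribution)."""
    n = len(x)
    return x.mean() + stats.t.ppf(0.99, n - 1) * x.std(ddof=1) / np.sqrt(n)

print(f"positional: mean {pos_err.mean():.2f} mm, 99% UBL {upper_bound_99(pos_err):.2f} mm")
print(f"angular:    mean {ang_err.mean():.2f} deg, 99% UBL {upper_bound_99(ang_err):.2f} deg")
```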
2. Sample size used for the test set and the data provenance:
- Cadaver Study (Accuracy): The sample size is not explicitly stated as a number of cadavers or individual pedicle screws; the text only mentions generically that "pedicle screws were positioned percutaneously in thoracic and sacro-lumbar vertebrae."
- Provenance: This was an ex-vivo study (cadaver study), implying it likely occurred in a controlled lab environment. No specific country of origin is mentioned, but the submitter is based in Israel.
- Clinical Study (Clinical Accuracy):
- Sample Size: Seventeen (17) subjects.
- Provenance: Prospective, single-arm, multicenter study. No specific country of origin for the clinical sites is provided.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Cadaver Study: The text does not explicitly state the number of experts or their qualifications for establishing ground truth (e.g., measuring actual screw tip positions from post-op scans). This aspect is implicit in the "calculated as the difference between the actual screw tip position... and its virtual tip" description.
- Clinical Study: The ground truth for clinical accuracy was established using the Gertzbein score by "viewing the post-op scans." The number and qualifications of experts (e.g., experienced radiologists, spine surgeons) assessing the Gertzbein score are not specified in the provided text.
4. Adjudication method for the test set:
- The text does not explicitly describe an adjudication method (like 2+1, 3+1) for either the cadaver or the clinical study. It mentions the Gertzbein score being assessed by "viewing the post-op scans," but not how discrepancies among multiple reviewers, if any were used, would be resolved.
5. If a multi reader multi case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs without AI assistance:
- No, a multi-reader multi-case (MRMC) comparative effectiveness study was not done. The studies described focus on the standalone performance of the xvision Spine system, not on its impact on human reader performance or a comparison of human readers with and without AI assistance. The device is an image-guided navigation system for surgical procedures, assisting the surgeon directly during the procedure rather than improving pre-operative image interpretation by radiologists.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
- Yes, standalone performance was evaluated. The "System Level Accuracy" criteria and the results from both the bench testing (phantoms) and the cadaver study directly assess the accuracy of the device itself (positional and angular errors) without human-in-the-loop influence on the measurement of accuracy. The clinical study also measures the system's accuracy in a real-world setting using post-operative scans.
7. The type of ground truth used:
- Bench Testing (Phantoms): The ground truth was established by the known mechanical properties and precise measurements within the phantom setup, as well as presumably known parameters for partial detectability scenarios.
- Cadaver Study: Ground truth was established by comparing the device's recorded "virtual tip" and "virtual trajectory" to the "actual screw tip position" and "screw orientation" derived from post-operative imaging scans.
- Clinical Study: Ground truth for clinical accuracy was based on the Gertzbein score obtained from viewing post-operative imaging scans. The Gertzbein score provides a categorization of screw placement accuracy (e.g., ideal, acceptable, minor breach, major breach); the sketch after this list shows how such grades translate into an overall accuracy rate.
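Assuming the conventional Gertzbein(-Robbins) reading, in which grades A and B (breach under 2mm) are counted as clinically accurate, an accuracy rate like the 97.7% above would be derived as below. The grade counts are hypothetical, since the text gives no per-grade breakdown.

```python
# Hypothetical Gertzbein grade counts for a navigated-screw cohort
# (illustrative only; the text reports no per-grade breakdown)
grades = {"A": 78, "B": 8, "C": 2, "D": 0, "E": 0}

total = sum(grades.values())
accurate = grades["A"] + grades["B"]  # grades A and B (breach < 2 mm) count as accurate
print(f"clinical accuracy: {accurate / total:.1%} ({accurate}/{total} screws)")
```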
8. The sample size for the training set:
- The text does not provide any information about the sample size used for the training set of the xvision Spine system's algorithms. It focuses entirely on verification and validation testing.
9. How the ground truth for the training set was established:
- Since no information about a training set is provided, there is no information on how its ground truth was established.
(168 days)
The VIPER PRIME navigated inserter is a navigated instrument for insertion of VIPER PRIME screws in open or percutaneous procedures. The VIPER PRIME navigated inserter is indicated for use in spinal surgical procedures, in which:
- use of the VIPER System is indicated,
- use of stereotactic surgery may be appropriate, and
- where reference to a rigid anatomical structure, such as the pelvis or a vertebra, can be identified relative to the acquired image (CT, MR, 2D fluoroscopic image or 3D fluoroscopic image reconstruction) and/or an image data based model of the anatomy using a navigation system which includes universal tracking arrays supplied by the navigation manufacturer.
These procedures include but are not limited to spinal fusion. The VIPER PRIME navigated inserter requires manual calibration.
The VIPER PRIME™ navigated inserter is a reusable manual screwdriver for insertion of the VIPER PRIME screws of the VIPER System in open and percutaneous procedures. The VIPER PRIME navigated inserter also features attachment sites for universal tracking arrays supplied by the navigation manufacturer to enable use with the respective spine navigation system. The VIPER PRIME navigated inserter must be manually calibrated with the third-party navigation system.
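The text does not describe the calibration procedure itself. A common approach for recovering a navigated instrument's tip offset is pivot calibration: the tracked instrument is pivoted about a fixed point while poses are recorded, and the offset is solved by least squares. The sketch below is a generic illustration under that assumption, not DePuy Synthes' or the navigation vendor's actual method.

```python
import numpy as np

def pivot_calibration(rotations, translations):
    """Recover the instrument tip offset (tool frame) and the fixed pivot point
    (tracker frame) from recorded poses. Each pose satisfies
        R_i @ p_tip + t_i = p_pivot,
    so stacking poses gives an overdetermined linear system in six unknowns.
    """
    n = len(rotations)
    A = np.zeros((3 * n, 6))
    b = np.zeros(3 * n)
    for i, (R, t) in enumerate(zip(rotations, translations)):
        A[3*i:3*i+3, :3] = R
        A[3*i:3*i+3, 3:] = -np.eye(3)
        b[3*i:3*i+3] = -t
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]

# Synthetic demo: a tool pivoting about a fixed point
rng = np.random.default_rng(1)
true_tip = np.array([0.0, 0.0, 180.0])   # tip offset in the tool frame, mm
pivot = np.array([50.0, -20.0, 300.0])   # fixed pivot point in the tracker frame

Rs, ts = [], []
for _ in range(50):
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random orthonormal matrix
    Q *= np.sign(np.linalg.det(Q))                # force a proper rotation (det = +1)
    Rs.append(Q)
    ts.append(pivot - Q @ true_tip)               # pose consistent with the pivot

tip_est, pivot_est = pivot_calibration(Rs, ts)
print(np.round(tip_est, 3), np.round(pivot_est, 3))  # recovers true_tip and pivot
```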
This document is not about an AI/ML-powered device, but rather a navigated inserter for spinal surgery. Therefore, the questions related to AI/ML-specific concepts like training sets, ground truth establishment for training, MRMC studies, and effect size of human reader improvement with AI assistance are not applicable.
However, I can extract information related to the device's acceptance criteria and the study proving it meets these criteria based on the provided text.
Based on the provided text for the VIPER PRIME navigated inserter, the primary method for demonstrating acceptable performance is through non-clinical sawbones testing. The document does not provide a formal table of acceptance criteria with specific numerical thresholds, nor does it detail a comparative study with a "reported device performance" against explicit criteria beyond general confirmation of function.
Here's an attempt to answer the questions based on the available information:
1. A table of acceptance criteria and the reported device performance
The document does not provide a formal table with quantitative acceptance criteria and corresponding reported performance metrics. Instead, the performance evaluation is described qualitatively as "confirm[ing] device performance for the intended use."
The evaluation can be summarized as follows:
- Acceptance Criteria (Implicit): The device should successfully allow for:
- Assembly with third-party universal tracking arrays.
- Manual calibration with the third-party navigation system.
- Navigated insertion of VIPER PRIME screws in a sawbones model.
- Final screw position in the software should be verifiable by a second imaging modality.
- Reported Device Performance (Qualitative): The non-clinical sawbones testing "confirmed device performance for the intended use" by demonstrating successful assembly, manual calibration, and navigated insertion of screws, with verification of screw position using a second imaging modality.
2. Sample size used for the test set and the data provenance
- Sample Size: The document only states "non-clinical sawbones testing" and "insertion of VIPER PRIME screws in a sawbones model." It does not specify the number of sawbones models used, the number of screws inserted, or the number of trials performed.
- Data Provenance:
- Country of Origin: Not specified, but given the submission is to the FDA in the USA, the testing would likely adhere to US regulatory standards.
- Retrospective or Prospective: This was likely a prospective study designed to demonstrate performance for regulatory submission.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- The document does not mention the use of human experts to establish "ground truth" for the test set in the way one might for an AI/ML diagnostic device (e.g., radiologist reads).
- The ground truth in this context appears to be the physical confirmation of the screw's final position via a second imaging modality. It is implied that the test was performed by qualified individuals, but their specific roles or qualifications (e.g., orthopedic surgeons, engineers) are not detailed.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Not applicable in the context of this device. There is no mention of consensus reading or multi-reader adjudication for establishing ground truth, as the "ground truth" is the physical location of the screw confirmed by imaging.
5. If a multi reader multi case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs without AI assistance
- Not applicable. This device is a surgical instrument, not an AI-powered diagnostic or assistive tool. Therefore, an MRMC study related to human reader improvement with/without AI assistance was not performed.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Not applicable. This is a hardware device requiring human interaction and navigation system input. There is no standalone algorithm to evaluate.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- The ground truth was established by physical verification of the final screw position using a second imaging modality after insertion in a sawbones model. This is a form of objective measurement/outcomes data within the controlled test environment.
8. The sample size for the training set
- Not applicable. This device does not involve a "training set" in the context of machine learning.
9. How the ground truth for the training set was established
- Not applicable. There is no training set for this device.
(198 days)
The AIS S4 Navigation Instruments are intended to assist the surgeon in precisely locating anatomical structures in either open, minimally invasive, or percutaneous procedures. They are indicated for use in surgical spinal procedures, in which the use of stereotactic surgery may be appropriate, and where reference to a rigid anatomical structure, such as the pelvis or a vertebra, can be identified relative to the acquired image (CT, MR, 2D fluoroscopic image or 3D fluoroscopic image reconstruction) and/or an image data based model of the anatomy. These procedures include but are not limited to spinal fusion during the navigation of polyaxial screws (T1-T3).
The AIS S4 Navigation Instruments are manual surgical instruments which are designed to interface with BrainLAB's already cleared surgical navigation systems. Instruments in this system may be pre-calibrated or manually calibrated to already cleared systems using manufacturers' instructions. These instruments are intended to be used in spine applications to perform general or manual functions within the orthopedic surgical environment.
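The text does not say how a rigid anatomical structure is identified relative to the acquired image, but the standard technique for this step in navigation systems is paired-point rigid registration of corresponding landmarks (the Arun/Kabsch SVD method). The sketch below is generic, with made-up fiducials; it is not BrainLAB's algorithm.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst
    (Arun/Kabsch SVD method); points are rows."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

# Illustrative fiducials: anatomical landmarks in tracker vs. image coordinates
tracker_pts = np.array([[0, 0, 0], [40, 0, 0], [0, 40, 0], [0, 0, 40]], float)
true_R = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)  # 90 deg about z
image_pts = tracker_pts @ true_R.T + np.array([5.0, -3.0, 12.0])

R, t = rigid_register(tracker_pts, image_pts)
fre = np.linalg.norm(tracker_pts @ R.T + t - image_pts, axis=1)
print(f"mean fiducial registration error: {fre.mean():.2e} mm")  # ~0, noise-free
```

The mean fiducial registration error printed at the end is the usual quick sanity check that the landmark pairing and the solved transform are consistent.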
- Table of Acceptance Criteria and Reported Device Performance:
The document explicitly states: "The AIS S4 Navigation Instruments met the performance requirements. No safety or effectiveness issues were raised by the performance testing." However, specific numerical acceptance criteria (e.g., accuracy thresholds, precision values) are not provided in this submission. The nature of the device (surgical navigation instruments designed to interface with other cleared systems) suggests that the performance requirements likely relate to the accuracy and reliability of tracking and spatial localization when used with the BrainLAB navigation systems.
| Acceptance Criteria (e.g., accuracy, precision) | Reported Device Performance |
| --- | --- |
| Not explicitly stated in the document | Met all performance requirements; no safety or effectiveness issues raised. |
| (Likely related to accurate tracking and spatial localization in conjunction with BrainLAB navigation systems) | The instruments functioned as intended during validation activities. |
- Sample Size Used for the Test Set and Data Provenance:
The document states "BrainLAB conducted validation activities including usability testing with the AIS S4 Navigation Instruments." However, no information regarding the sample size used for the test set or the data provenance (e.g., country of origin, retrospective/prospective) is provided.
- Number of Experts Used to Establish Ground Truth and Their Qualifications:
The document does not describe the specific ground truth establishment process for the performance data. Therefore, the number of experts and their qualifications are not mentioned. Given that the performance data appears to be from "validation activities including usability testing," it's plausible that healthcare professionals were involved in assessing the usability and functionality, but their specific roles in establishing a quantifiable ground truth are not detailed.
- Adjudication Method:
No adjudication method is described in the provided text.
- Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:
No MRMC comparative effectiveness study is mentioned. The submission focuses on the standalone performance and equivalence of the AIS S4 Navigation Instruments when used with existing BrainLAB navigation systems, rather than comparing human readers with and without AI assistance.
- Standalone Performance (Algorithm Only without Human-in-the-Loop Performance):
- This submission is about surgical navigation instruments, which are physical tools that assist a surgeon; they are not an AI algorithm in the typical sense that would have "algorithm-only" performance without human interaction. The "performance data" described refers to "validation activities including usability testing" of the instruments themselves. Their performance is inherently tied to human use and to their interface with the BrainLAB navigation system, so the study focuses on functional performance in that context rather than on a quantifiable, algorithm-only output.
- Type of Ground Truth Used:
The document mentions "validation activities including usability testing," and states that the instruments "met the performance requirements." This suggests the ground truth was likely based on functional assessment and verification against predefined specifications for accuracy, precision, and usability when integrated with the BrainLAB navigation systems. It is not explicitly stated to be based on expert consensus, pathology, or outcomes data in the traditional sense, but rather on the technical performance and usability of the instruments.
- Sample Size for the Training Set:
This device is a set of physical surgical instruments, not an AI or machine learning algorithm that requires a "training set" of data. Therefore, this concept is not applicable, and no training set sample size is provided.
- How Ground Truth for the Training Set Was Established:
As the device is a set of physical surgical instruments and not an AI algorithm, there is no training set and therefore no ground truth establishment for a training set.
(137 days)
The AIS S4 Cervical Navigation Instruments are intended to assist the surgeon in precisely locating anatomical structures in either open, minimally invasive, or percutaneous procedures. They are indicated for use in surgical spinal procedures, in which the use of stereotactic surgery may be appropriate, and where reference to a rigid anatomical structure, such as the pelvis or a vertebra, can be identified relative to the acquired image (CT, MR, 2D fluoroscopic image or 3D fluoroscopic image reconstruction) and/or an image data based model of the anatomy. These procedures include but are not limited to spinal fusion during the navigation of pedicle screws (T1-T3).
The AIS S4 Cervical Navigation Instruments are manual surgical instruments which are designed to interface with BrainLAB's already cleared surgical navigation systems. Instruments in this system may be pre-calibrated or manually calibrated to already cleared systems using manufacturers' instructions. These instruments are intended to be used in spine applications to perform general or manual functions within the orthopedic surgical environment.
The provided text describes the Aesculap S4 Cervical Navigation Instrumentation, which is a set of manual surgical instruments designed to interface with BrainLAB's surgical navigation systems. The submission is a Traditional 510(k) Premarket Notification.
Here's the breakdown of the acceptance criteria and study information:
1. Table of Acceptance Criteria and Reported Device Performance
The provided document does not explicitly list specific quantitative acceptance criteria for the device's performance (e.g., a certain level of accuracy in millimeters). Instead, it describes a more qualitative assessment.
| Acceptance Criteria (Implied) | Reported Device Performance |
| --- | --- |
| Device functions as intended for surgical navigation. | AIS Navigation Instruments met the performance requirements. |
| No safety issues are raised by performance testing. | No safety issues were raised by the performance testing. |
| No effectiveness issues are raised by performance testing. | No effectiveness issues were raised by the performance testing. |
| Substantially equivalent to predicate devices for intended use. | Found substantially equivalent. |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: The document states that "BrainLAB conducted validation activities including usability testing with the AIS Navigation Instruments." However, it does not specify the sample size (e.g., number of users, number of cases tested) for this usability testing or any other performance testing.
- Data Provenance: The document does not specify the country of origin of the data. The testing appears to be conducted by BrainLAB, a company with international operations, but the specific location of the testing is not mentioned. It is also not explicitly stated whether the data was retrospective or prospective, though usability testing typically involves prospective data collection.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- The document mentions "usability testing with the AIS Navigation Instruments." Usability testing typically involves end users (surgeons), but the document does not specify the number or qualifications of these participants for establishing ground truth related to navigational accuracy or effectiveness. The study relies on the outcome of the usability and performance testing rather than on expert-established ground truth in a clinical or imaging sense.
4. Adjudication Method for the Test Set
- The document does not mention any adjudication method for the test set. Given the nature of the testing described (usability and performance requirements), it's unlikely a formal adjudication process (like 2+1 or 3+1 consensus) would be used as it would be in an imaging diagnostic study. The assessment would likely be based on whether the instruments appropriately facilitated the surgical steps and met performance specifications.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
- No MRMC comparative effectiveness study was done. The document states, "Clinical data was not needed for the AIS Navigation Instruments." The submission focuses on substantial equivalence based on technological characteristics and performance testing.
6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done
- This question is not applicable as the device (AIS S4 Cervical Navigation Instruments) is a set of manual surgical instruments designed to interface with surgical navigation systems. It is not an AI algorithm or a standalone software. The performance testing would inherently involve human interaction with the instruments and the navigation system.
7. The Type of Ground Truth Used
- The document implies that the ground truth for "performance requirements" would be established by the functional specifications and design requirements of the instruments when used with the BrainLAB navigation systems. For usability testing, the "ground truth" would be whether the instruments are usable and meet the functional needs of the surgeons. There is no mention of expert consensus, pathology, or outcomes data being used as ground truth for this submission, as clinical data was not required.
8. The Sample Size for the Training Set
- This question is not applicable. The device is a set of manual surgical instruments; it is not an AI algorithm that requires a training set.
9. How the Ground Truth for the Training Set Was Established
- This question is not applicable as there is no AI algorithm or training set involved.
(145 days)
The Synthes Navigable Pedicle Preparation Instruments are intended to assist the surgeon in precisely locating anatomical structures in either open, minimally invasive, or percutaneous procedures. These are indicated for use in surgical spinal procedures, in which the use of stereotactic surgery may be appropriate, and where reference to a rigid anatomical structure, such as the pelvis or a vertebra can be identified relative to the acquired image (CT, MR, 2D fluoroscopic image or 3D fluoroscopic image reconstruction) and/or an image data based model of the anatomy. These procedures include but are not limited to spinal fusion.
The Navigable Pedicle Preparation Instruments are manual surgical instruments which are designed to interface with already-cleared surgical navigation systems. Instruments in this system may be pre-calibrated to already-cleared surgical navigation systems, or may be manually calibrated to already-cleared surgical navigation systems using manufacturers' instructions. These instruments are intended to be used in spine applications to perform general manual functions within the orthopaedic surgical environment.
This is a 510(k) summary for a set of surgical instruments, not an AI/ML device. Therefore, the requested information regarding acceptance criteria, study data, expert involvement, and ground truth for an AI device is not applicable and cannot be extracted from the provided text.
The document describes the Synthes Navigable Pedicle Preparation Instruments, which are manual surgical instruments designed to interface with existing surgical navigation systems.
Here's a breakdown of the relevant information provided:
1. Acceptance Criteria and Reported Device Performance (Non-AI/ML):
| Acceptance Criteria (Implied) | Reported Device Performance |
| --- | --- |
| Meet performance requirements for intended use | "The Navigable Pedicle Preparation Instruments met the performance requirements, providing assurance of device performance for their intended use." |
| No safety or effectiveness issues | "No safety or effectiveness issues were raised by the performance testing." |
2. Sample size used for the test set and data provenance (Not applicable for AI/ML):
The document mentions "usability testing" as a validation activity in the "Performance Data" section. However, it does not specify sample sizes for test sets, data provenance (e.g., country of origin), or whether the data was retrospective or prospective, as these details are typically associated with clinical studies or AI/ML performance evaluations, not characterization of manual surgical instruments.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (Not applicable for AI/ML):
This information is not provided because the validation involved usability testing of manual instruments, not the establishment of ground truth for an AI/ML algorithm.
4. Adjudication method (Not applicable for AI/ML):
This concept is not relevant to the validation described.
5. If a multi reader multi case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs without AI assistance (Not applicable for AI/ML):
No MRMC study was conducted as this device is not an AI/ML system or a reading assistance tool.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done (Not applicable for AI/ML):
This is not applicable as the device is a set of manual surgical instruments.
7. The type of ground truth used (Not applicable for AI/ML):
Ground truth as it relates to AI/ML performance is not relevant here. The validation focused on the functional performance and usability of the physical instruments.
8. The sample size for the training set (Not applicable for AI/ML):
There is no "training set" in the context of manual surgical instrument validation, as this is a concept associated with AI/ML model development.
9. How the ground truth for the training set was established (Not applicable for AI/ML):
This is not applicable for the reasons stated above.
In summary: The provided 510(k) summary focuses on demonstrating the substantial equivalence of manual surgical instruments through non-clinical performance testing (usability testing). It does not contain the detailed information related to AI/ML device validation, such as specific acceptance criteria for algorithm performance, sample sizes for test/training sets, expert involvement in ground truth establishment, or comparative effectiveness studies of AI assistance. The document explicitly states: "Clinical data was not needed for the Navigable Pedicle Preparation Instruments."