Search Results
Found 2 results
510(k) Data Aggregation
(243 days)
ClarifEye is intended to be an intra-operative image-guidance tool used during surgical and interventional therapy. It provides assistance to the performing physician to align a device with a virtual path that is planned on a 3D volume of the anatomy. This alignment is provided in the following ways:
-The virtual path is superimposed with a live video image of the area of interest.
-The position of the ClarifEye Needle is superimposed with the video images of the area of interest and/or the 3D images of the anatomy.
ClarifEye is intended to be used on patients who have been elected for procedures where a straight, rigid device is placed in the spine, such as sacral, lumbar and thoracic pedicle screw placement. ClarifEye is indicated for procedures where a reference to bony anatomical structures can be established using skin markers as a reference.
The ClarifEye Needle is a manual, surgical instrument intended to be used during spine surgery to facilitate placement of guidewires. The needle may be used as part of a planning and intraoperative guidance system (i.e. Philips intra-operative image guidance tool) to enable open or percutaneous image guided therapy. The ClarifEye Needle is indicated for use during posterior pedicle screw procedures, such as in the sacral, lumbar and thoracic spinal regions, in which the use of image guided surgery may be appropriate.
The ClarifEye Needle is EtO sterilized and for single use only; it must be disposed of after use according to local waste disposal methods for potentially biohazardous material.
ClarifEye is a software medical device that is intended to be an intra-operative image-guidance tool used during surgical and interventional therapy.
It will be offered as an optional accessory to the Philips interventional fluoroscopic X-ray system, from which it receives 2D X-ray and video images. ClarifEye implements an automatic reconstruction (algorithm) to create 3D CBCT images from a rotational scan acquired on the X-ray system.
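The 510(k) summary does not describe the reconstruction algorithm itself. Purely as background, the snippet below is a minimal 2D filtered back-projection demo using scikit-image, assuming a simulated phantom and parallel-beam geometry; it illustrates the general principle of reconstructing an image from a rotational scan, not ClarifEye's actual CBCT implementation.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

# Simulate a rotational acquisition of a 2D test phantom, then reconstruct it
# with filtered back-projection -- the textbook principle behind CBCT.
phantom = shepp_logan_phantom()                        # 400x400 synthetic image
angles = np.linspace(0.0, 180.0, 180, endpoint=False)  # projection angles (degrees)
sinogram = radon(phantom, theta=angles)                # forward projections
reconstruction = iradon(sinogram, theta=angles, filter_name="ramp")

print("mean absolute reconstruction error:",
      np.abs(reconstruction - phantom).mean())
```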
ClarifEye integrates the live video images of the surgical view and the live 2D X-ray image with the planned path defined on the reconstructed 3D CBCT image to provide navigational assistance in real time. ClarifEye provides assistance to the performing physician to align a device, such as a needle, with a virtual path that is planned on a 3D image of the anatomy. The created 3D planning can be overlaid on live video images ("Augmented View") or live 2D fluoroscopy images to monitor device deployment during the procedure.
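The summary does not disclose how ClarifEye computes this overlay. As a minimal, hypothetical sketch of the general idea, the fragment below projects a planned 3D path onto a 2D image with a pinhole-style 3x4 projection matrix; the matrix and coordinates are illustrative placeholders, not device parameters.

```python
import numpy as np

def project_points(points_3d, projection):
    """Project Nx3 points (mm, 3D image frame) to 2D pixel coordinates
    using a 3x4 pinhole projection matrix (illustrative values only)."""
    homogeneous = np.hstack([points_3d, np.ones((points_3d.shape[0], 1))])
    pixels = (projection @ homogeneous.T).T            # Nx3 homogeneous pixels
    return pixels[:, :2] / pixels[:, 2:3]              # perspective divide

# Hypothetical planned path: skin entry point and pedicle target in the 3D frame (mm).
planned_path = np.array([[10.0, -5.0, 120.0],
                         [14.0,  2.0,  85.0]])

# Hypothetical projection matrix (camera intrinsics @ [R|t]) for the current view.
P = np.array([[1000.0,    0.0, 512.0, 0.0],
              [   0.0, 1000.0, 512.0, 0.0],
              [   0.0,    0.0,   1.0, 0.0]])

print("planned path endpoints in image pixels:\n", project_points(planned_path, P))
```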
ClarifEye is intended to be used in combination with the compatible ClarifEye Needle and ClarifEye Markers.
The ClarifEye Needle is for optional use only, when needle tip tracking is desired.
Here's an analysis of the acceptance criteria and the study proving the device meets them, based on the provided text:
Acceptance Criteria and Device Performance
Criteria | Reported Device Performance |
---|---|
Phantom Tests (Navigational Accuracy - Device Positional Displacement) | ≤ 2 mm |
Phantom Tests (Navigational Accuracy - Trajectory Angular Displacement) | ≤ 2° |
Pig Cadaver Study (Pedicle Screw Placement - Screw Tip Accuracy) | 2.0 ± 1.1 mm |
Pig Cadaver Study (Pedicle Screw Placement - Screw Head Accuracy) | 1.6 ± 0.8 mm |
Pig Cadaver Study (Pedicle Screw Placement - Angular Accuracy (Axial)) | 1.7 ± 1.7° |
Pig Cadaver Study (Pedicle Screw Placement - Angular Accuracy (Sagittal)) | 1.6 ± 1.2° |
Human Cadaver Study (Needle Placement - Entry Point/Needle Tip Accuracy, without device tracking) | 2.2 ± 1.3 mm |
Human Cadaver Study (Needle Placement - Angular Accuracy (Axial and Sagittal), without device tracking) | 0.9 ± 0.8° |
Clinical Study (Accuracy of pedicle screw placement using ClarifEye, according to Gertzbein classification, grades 0 and 1 considered accurate) | 94.1% (238/253 accurately placed screws) |
Clinical Study (Distance between planned path and device position - Screw Tip) | 2.2 ± 1.56 mm |
Clinical Study (Distance between planned path and device position - Screw Head) | 2.0 ± 1.31 mm |
Clinical Study (Angular Accuracy - Axial) | 2.0 ± 2.0° |
Clinical Study (Angular Accuracy - Sagittal) | 1.7 ± 1.5° |
Software Verification Testing | All executed verification tests passed. |
Usability Validation (ClarifEye) | Found to be safe and effective for intended use, users, and environment. |
Usability Validation (ClarifEye Needle) | Found to be safe and effective for intended use, users, and environment. |
In-house Simulated Use Design Validation | All executed validation protocols passed; ClarifEye conforms to intended use. |
Service User Needs Validation | All executed validation protocols passed. |
FDA Recognized Consensus Standards and Guidance Documents | Complies with listed standards and guidance documents. |
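The positional and angular figures in the table above compare the planned path with the achieved device position. As an illustration only (hypothetical coordinates, not the studies' actual measurement procedure), the sketch below computes a tip distance, head distance, and the overall 3D angle between planned and achieved trajectories; the studies additionally break the angle into axial and sagittal components, which is not shown here.

```python
import numpy as np

def trajectory_accuracy(planned_head, planned_tip, actual_head, actual_tip):
    """Return (tip distance mm, head distance mm, angle deg) between a planned
    and an achieved straight trajectory. Illustrative geometry only."""
    tip_dist = np.linalg.norm(actual_tip - planned_tip)
    head_dist = np.linalg.norm(actual_head - planned_head)
    v_planned = planned_tip - planned_head
    v_actual = actual_tip - actual_head
    cos_angle = np.dot(v_planned, v_actual) / (
        np.linalg.norm(v_planned) * np.linalg.norm(v_actual))
    angle_deg = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return tip_dist, head_dist, angle_deg

# Hypothetical planned vs. achieved screw positions in a common 3D frame (mm).
planned_head, planned_tip = np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 40.0])
actual_head, actual_tip = np.array([1.2, 0.8, 0.0]), np.array([1.5, 1.0, 39.5])

print(trajectory_accuracy(planned_head, planned_tip, actual_head, actual_tip))
```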
Study Details:
The document describes several non-clinical studies (phantom, pig cadaver, human cadaver) and one clinical study to demonstrate the device's performance and meet acceptance criteria.
1. Non-Clinical Performance Data (K9, K10):
- Test Set Sample Size:
- Phantom Tests: Not specifically enumerated; the summary refers only to plural "phantom tests."
- Pig Cadaver Study: Not explicitly stated, but measurements were made on thoracolumbar vertebrae.
- Human Cadaver Study: Not explicitly stated, but measurements were made in the thoracolumbar region.
- Data Provenance:
- Phantom Tests: In-house (implied by context of non-clinical testing for compliance with ASTM F2554-10).
- Pig Cadaver Study: In-house (implied by "a pig cadaver study demonstrated...").
- Human Cadaver Study: In-house (implied by "a human cadaver study demonstrated...").
- Number of Experts & Qualifications for Ground Truth: Not specified for these non-clinical studies. The ground truth would likely be established by precise measurements and engineering methods.
- Adjudication Method: Not specified.
- MRMC Comparative Effectiveness Study: No, this section describes standalone performance of the device in controlled environments.
- Standalone Performance: Yes, these are standalone (algorithm only or device-only) performance evaluations.
- Type of Ground Truth:
- Phantom Tests: Positional and angular accuracy measurements against known targets.
- Cadaver Studies: Accuracy of screw/needle placement relative to planned paths, measured with imaging or physical means.
- Training Set Sample Size: Not applicable. These are performance evaluations of the device, not descriptions of algorithm training.
- Ground Truth for Training Set: Not applicable.
2. Usability Validation (K9, K10):
- Test Set Sample Size:
- ClarifEye: "both orthopedic/neuro spine surgeons and monitoring nurse/technicians." Not a specific number, but mentions representative user groups.
- ClarifEye Needle: "representative users." Not a specific number.
- Data Provenance: Simulated use environment, implying in-house or controlled testing facility.
- Number of Experts & Qualifications for Ground Truth: Orthopedic/neuro spine surgeons and monitoring nurse/technicians as test users, evaluating usability and effectiveness. The "ground truth" here is the user's experience and assessment against predetermined usability criteria.
- Adjudication Method: Not specified, but likely based on user feedback and recorded observations against objective usability criteria.
- MRMC Comparative Effectiveness Study: No.
- Standalone Performance: Yes, this is a standalone usability evaluation of the device.
- Type of Ground Truth: User feedback and expert assessment within a simulated clinical workflow.
- Training Set Sample Size: Not applicable.
- Ground Truth for Training Set: Not applicable.
3. In-house Simulated Use Design Validation (K9):
- Test Set Sample Size: "Clinical Scientists/Marketing specialists that fulfill the intended user profile." Not a specific number.
- Data Provenance: In-house simulated use environment.
- Number of Experts & Qualifications for Ground Truth: Clinical Scientists/Marketing specialists, acting as surrogate users to validate workflow and user needs.
- Adjudication Method: Not specified, but likely based on successful execution of predefined validation protocols.
- MRMC Comparative Effectiveness Study: No.
- Standalone Performance: Yes.
- Type of Ground Truth: Successful execution of predefined device workflows in a phantom model.
- Training Set Sample Size: Not applicable.
- Ground Truth for Training Set: Not applicable.
4. Clinical Study (Summary of Clinical Performance Data) (K10, K11):
- Test Set Sample Size: Twenty (20) subjects.
- Data Provenance: Prospective, single-arm, single-center observational study with patients outside the United States.
- Number of Experts & Qualifications for Ground Truth: An unspecified number of experts (implied to be clinical personnel, likely radiologists or surgeons) evaluated the screw placement based on post-procedural CBCT. The grading criteria were the "recognized Gertzbein classification" (and its adaptation for cervical screws).
- Adjudication Method: "Grading of pedicle screw placement was done according to the recognized Gertzbein classification for the lumbar and thoracic region and slightly adapted for the cervical screw placements." The details of how agreement was reached among multiple graders, if any, are not provided.
- MRMC Comparative Effectiveness Study: No. This was an observational study evaluating the accuracy of the device in clinical use, not comparing human readers with and without AI assistance.
- Standalone Performance: While used by surgeons, the study primarily assesses the accuracy of the "navigation software" (ClarifEye) in guiding pedicle screw placement, making it a standalone assessment of the device's accuracy capabilities in a clinical context. The outcome measure (accuracy of screw placement) directly measures the system's performance.
- Type of Ground Truth: Post-procedural CBCT images, evaluated against the Gertzbein classification by clinical experts. This is considered expert consensus/imaging-based ground truth (radiographic outcome).
- Training Set Sample Size: Not applicable. This is a clinical validation study, not algorithm training.
- Ground Truth for Training Set: Not applicable.
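For reference, the 94.1% accuracy figure from this clinical study follows directly from the reported counts (238 of 253 screws graded Gertzbein grade 0 or 1). The sketch below redoes that arithmetic and adds a Wilson score interval purely as an illustration; the source reports no confidence interval.

```python
import math

def wilson_interval(successes, total, z=1.96):
    """Approximate 95% Wilson score interval for a proportion (illustrative only)."""
    p = successes / total
    denom = 1 + z**2 / total
    center = (p + z**2 / (2 * total)) / denom
    half = z * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2)) / denom
    return center - half, center + half

# Screws graded Gertzbein grade 0 or 1 ("accurate") out of all graded screws.
accurate, total = 238, 253
print(f"accuracy: {accurate / total:.1%}")              # -> 94.1%
print("illustrative 95% Wilson CI:", wilson_interval(accurate, total))
```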
(66 days)
VesselNavigator provides image guidance by superimposing live fluoroscopic images on a 3D volume of the vessel anatomy to assist in catheter maneuvering and device placement.
VesselNavigator is intended to assist in the treatment of endovascular diseases during procedures such as (but not limited to) AAA, TAA, carotid stenting, iliac interventions.
VesselNavigator is a software product (Interventional Tool) intended to assist in the treatment of endovascular diseases during an endovascular intervention procedure. VesselNavigator is intended to be used in combination with a Philips Interventional X-ray system. VesselNavigator can be used during any endovascular intervention and covers all vascular anatomy except coronaries and intracranial vessels.
It provides live 3D image guidance for navigating endovascular devices through intended vascular structures in the body, reusing previously acquired diagnostic 3D images. After registration, the 3D volume can be used as a 3D roadmap for navigation; live 2D fluoroscopic images will be overlaid on the 3D volume. In addition, VesselNavigator provides tools to segment the relevant vasculature in the 3D volume (where the end-user is able to edit the segmentation results), place landmarks for easy recognition of key anatomical points of interests, and store and recall of preferred view angles.
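The summary states only that the pre-acquired 3D volume is registered before use as a roadmap, without describing the method. As a generic illustration of one way such a registration can be computed (a least-squares rigid fit over paired landmarks, the classic Kabsch/SVD approach, with hypothetical points), consider:

```python
import numpy as np

def point_based_rigid_registration(source, target):
    """Estimate rotation R and translation t aligning paired Nx3 landmark sets
    in the least-squares sense (SVD-based Kabsch fit). Illustrative only."""
    src_mean, tgt_mean = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_mean).T @ (target - tgt_mean)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))               # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_mean - R @ src_mean
    return R, t

# Hypothetical paired landmarks: points picked in the pre-acquired 3D volume
# (source) and the matching points in the X-ray system frame (target).
source = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0],
                   [0.0, 10.0, 0.0], [0.0, 0.0, 10.0]])
true_R = np.array([[0.0, -1.0, 0.0],
                   [1.0,  0.0, 0.0],
                   [0.0,  0.0, 1.0]])
target = source @ true_R.T + np.array([5.0, -3.0, 2.0])

R, t = point_based_rigid_registration(source, target)
residual = np.linalg.norm(source @ R.T + t - target, axis=1).max()
print("max registration residual (mm):", residual)       # ~0 for this synthetic fit
```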
The provided text does not contain specific acceptance criteria for the device (VesselNavigator) or a detailed study proving it meets such criteria in terms of performance metrics (e.g., accuracy, sensitivity, specificity).
The document is a 510(k) premarket notification summary, which focuses on demonstrating substantial equivalence to a predicate device, rather than detailed performance studies with quantitative acceptance criteria typically found in, for example, AI/ML device submissions.
However, based on the information provided, here's a breakdown of what can be extracted or inferred regarding performance and validation:
1. A table of acceptance criteria and the reported device performance
No explicit quantitative acceptance criteria or detailed performance metrics (accuracy, precision, etc.) are provided in this document. The "performance" described is largely functional and safety-related, aimed at demonstrating substantial equivalence.
Acceptance Criteria (Inferred from "Nonclinical Performance Data") | Reported Device Performance |
---|---|
Compliance with IEC 62304 (Medical device software) | Complies |
Compliance with IEC 62366 (Usability engineering) | Complies |
Compliance with ISO 14971 (Risk management) | Complies |
Compliance with NEMA PS 3.1-3.20 (DICOM) | Complies |
Compliance with FDA Guidance for "Software Contained in Medical Devices" | Complies |
Software verification for system level requirements | Tests performed successfully |
Software verification for identified hazard mitigations | Tests performed successfully |
Vessel segmentation tool validation | Tested and validated |
Intended use and commercial claims validation | Tested and validated |
Usability testing with representative intended users | Tested and validated |
Does not raise new questions on safety or effectiveness | No new questions raised |
As safe and effective as its predicate device | Demonstrated |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document primarily describes software verification and validation, and usability testing. It does not mention a "test set" in the context of a dataset of patient cases used to evaluate algorithmic performance (e.g., a test set for an AI model). Therefore, information on sample size and data provenance for such a test set is not available in this document.
The validation included "usability testing with representative intended users," but the number of users or specific test cases is not provided.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
This information is not available. The document focuses on software engineering and usability validation, not on evaluating diagnostic or analytical performance against expert-established ground truth on a clinical dataset.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
This information is not available. There is no mention of a clinical "test set" requiring expert adjudication.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI assistance versus without it
A Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not performed and is not mentioned. The device is a "VesselNavigator" and is presented as an "Interventional Tool" providing "image guidance by superimposing live fluoroscopic images on a 3D volume." The submission indicates that "clinical studies to support substantial equivalence" were not required. The focus is on the device's functional equivalence and safety, not on improving human reader performance.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done
A standalone performance evaluation (algorithm only) as typically understood for an AI/ML diagnostic device is not described in this document. The "VesselNavigator" is explicitly an "Interventional Tool" intended to "assist in catheter maneuvering and device placement" by superimposing images, implying a human-in-the-loop workflow. While there is a "vessel segmentation tool" as part of the functionality, its standalone segmentation performance (e.g., accuracy against ground truth) is not detailed. The validation mentioned "Software verification testing... as well as the identified hazard mitigations" and "Software validation testing included testing of the vessel segmentation tool, the intended use and commercial claims, and usability testing," but without specific performance metrics for the segmentation algorithm itself.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
The document primarily discusses validation against software requirements, hazard mitigations, intended use, and commercial claims. For functional aspects like the "vessel segmentation tool," the "ground truth" for validation would likely involve comparing the software's output to an expected or manually derived segmentation, but the specifics (e.g., expert manual segmentation, established anatomical models) are not detailed. Clinical outcomes data or pathology as ground truth are not mentioned in relation to performance validation.
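If one wanted to quantify the segmentation tool against a manually derived reference (something this summary does not report), a common choice is the Dice overlap coefficient. The sketch below uses synthetic masks and is purely illustrative; it is not drawn from the submission.

```python
import numpy as np

def dice_coefficient(prediction, reference):
    """Dice overlap between two boolean segmentation masks (1.0 = perfect overlap)."""
    prediction, reference = prediction.astype(bool), reference.astype(bool)
    intersection = np.logical_and(prediction, reference).sum()
    denom = prediction.sum() + reference.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Synthetic 3D volume with a tube-like "vessel" reference mask and a slightly
# shifted predicted mask, standing in for manual vs. automatic segmentations.
reference = np.zeros((32, 32, 32), dtype=bool)
reference[10:22, 14:18, :] = True
prediction = np.zeros_like(reference)
prediction[11:23, 14:18, :] = True

print(f"Dice = {dice_coefficient(prediction, reference):.3f}")   # ~0.917 here
```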
8. The sample size for the training set
This information is not available. The document does not describe the device as a machine learning or AI algorithm that undergoes "training" on a dataset in the modern sense. It's described as a "software product" with "functionality to segment the relevant vasculature," which could imply rule-based or traditional image processing algorithms rather than a trained neural network. Therefore, a "training set" in the context of machine learning is not applicable based on the information provided.
9. How the ground truth for the training set was established
As described in point 8, the concept of a "training set" as used for AI/ML models is not mentioned or applicable based on the information provided in this 510(k) summary.