Search Results
Found 5 results
510(k) Data Aggregation
(192 days)
Abys® Medical Cysware® 4H is intended for use as a software interface and image segmentation system for the transfer of medical imaging information to an output file. Abys® Medical Cysware® 4H is also intended as pre-operative software for surgical planning assistance. Abys® Medical Cysware® 4H is intended to be used by a clinician with appropriate clinical judgement.
Abys® Medical Cysart® 4H is a medical display intended for 3D image visualization and image interaction. The stereoscopic 3D images are generated from 3D volumetric data acquired from a CT scan source. The device is intended to provide visual information to be used by clinicians with appropriate clinical judgement for analysis of surgical options and the intraoperative display of the images. Abys® Medical Cysart® 4H is intended to be used as an adjunct to the interpretation of images performed using diagnostic imaging systems and is not intended for primary diagnosis. Abys® Medical Cysart® 4H is intended to be used as a reference display for consultation to assist the clinician with appropriate clinical judgement who is responsible for making all final patient management decisions.
Abys® Medical Cysware® 4H web platform is a web-based medical device designed and intended for use prior to surgery to gather in one place the information the surgeon needs to prepare a surgical plan. As a result, a planning assistance file is created that contains medical imaging, 3D models, documents, and notes. The Abys® Medical Cysware® 4H web platform is used to export the planning assistance file to the Abys® Medical Cysart® 4H mixed reality application, a separate medical software application.
The Abys® Medical Cysart® 4H mixed reality application is a medical device designed and intended for use in the office and in the operating room to display and manipulate all documents in the planning assistance file generated by the Abys® Medical Cysware® 4H web platform.
Here's an analysis of the acceptance criteria and study information for Abys Medical's Cysware 4H and Cysart 4H devices, based on the provided text:
Acceptance Criteria and Device Performance Study
The FDA 510(k) summary provides details on the performance testing conducted for the Cysware 4H and Cysart 4H devices. The testing was non-clinical.
1. Table of Acceptance Criteria and Reported Device Performance
For Cysware 4H:
Acceptance Criteria | Reported Device Performance |
---|---|
Global time needed to open a planning assistance file is below 40 seconds (excluding credentials entry). | Global time needed to open a planning assistance file is below 40 seconds. (Note: The text clarifies that "Global time with credentials entering is user dependent and may reach 1-2 minutes, as showed by summative tests.") |
Features are usable when fifteen users are simultaneously connected to Cysware 4H. | Features are usable when fifteen users are simultaneously connected to Cysware 4H. |
Features are usable when three users are simultaneously connected to the same folder. | Features are usable when three users are simultaneously connected to the same folder. |
Accuracy of measures (distances and angles) meets specified thresholds. | Accuracy of measures showed an error lower than 1.6 mm for distances and 2.9° for the angles. |
Accuracy of Cysware 4H segmentation algorithm and Mesh generation for Cysart 4H export allows segmenting DICOM from CT scan sources with an error lower than 1.25mm. | Accuracy of Cysware 4H segmentation algorithm and Mesh generation for Cysart 4H export allows segmenting DICOM from CT scan sources with an error lower than 1.25mm. |
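For context, the accuracy figures in this table can be checked with fairly simple error metrics. Below is a minimal Python sketch of plausible ones: absolute error for distance and angle measurements, and a symmetric mean nearest-neighbor surface distance for the segmentation claim. The metric choices, function names, and test values are illustrative assumptions; the 510(k) summary does not describe the actual test method.

```python
import numpy as np
from scipy.spatial import cKDTree

def distance_error_mm(measured_mm: float, reference_mm: float) -> float:
    """Absolute error of a point-to-point distance measurement."""
    return abs(measured_mm - reference_mm)

def angle_error_deg(measured_deg: float, reference_deg: float) -> float:
    """Absolute error of an angle measurement."""
    return abs(measured_deg - reference_deg)

def mean_surface_distance_mm(segmented: np.ndarray, reference: np.ndarray) -> float:
    """Symmetric mean nearest-neighbor distance between two surface
    point clouds (N x 3, in mm) -- one plausible reading of the
    'error lower than 1.25 mm' segmentation claim."""
    d_ab = cKDTree(reference).query(segmented)[0]
    d_ba = cKDTree(segmented).query(reference)[0]
    return float((d_ab.mean() + d_ba.mean()) / 2.0)

# Hypothetical acceptance checks mirroring the thresholds above.
assert distance_error_mm(52.1, 51.0) < 1.6   # distance threshold, mm
assert angle_error_deg(31.5, 30.0) < 2.9     # angle threshold, degrees

# Illustrative point clouds: a reference surface and a copy shifted 0.8 mm.
ref = np.random.default_rng(1).uniform(0, 50, size=(2000, 3))
seg = ref + np.array([0.8, 0.0, 0.0])
assert mean_surface_distance_mm(seg, ref) < 1.25  # segmentation threshold
```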
For Cysart 4H:
Acceptance Criteria | Reported Device Performance |
---|---|
Images displayed have a refresh rate always higher than 30 frames per second. | The images displayed have a refresh rate always higher than 30 frames per second, ensuring the smooth movement of the 3D objects. |
Autonomy of the HoloLens 2 allows for the entirety of a surgery (specifically 1h30 without video stream sharing and 45 minutes with video stream sharing). | Autonomy of the HoloLens 2 when the application is open allows for the entirety of a surgery. More specifically 1h30 without sharing the video stream and 45 minutes while sharing the video stream to a workstation connected to the same network. |
The Cysart 4H device reproduces 3D objects at a scale of 1:1. | The Cysart 4H device reproduces the 3D objects at a scale of 1:1 and thus ensures that the 3D medical images displayed are representative of the medical images acquired from the CT scan. |
Global time to connect to a Cysart 4H session is no longer than 3 minutes. | The global time to connect to a Cysart 4H session is no longer than 3 minutes. |
Quality of display is sufficient for intended use and no degradation occurs when adding objects/documents. | The quality of display is sufficient for the intended use and no degradation of display occurs when adding objects or documents to an opened session. |
Voice commands can be used in the operating room as long as ambient noise does not exceed 60dB. | The voice commands can be used in operating room as long as the ambient noise does not exceed 60dB. |
Performance of the Microsoft HoloLens 2 display used with Cysart 4H is adequate (verified for luminance, distortion, contrast, motion-to-photon latency). | The performance of the Microsoft® HoloLens 2 display used with Cysart® 4H is adequate and has been demonstrated by verifying luminance, distortion, contrast, and motion-to-photon latency. |
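The 1:1 scale criterion above reduces to mapping CT voxel indices into physical millimeters before rendering, so that world units in the mixed-reality scene equal millimeters of anatomy. A minimal sketch, assuming an axis-aligned volume with known voxel spacing; the names and values are illustrative, not Cysart 4H's implementation.

```python
import numpy as np

def voxel_to_physical_mm(ijk: np.ndarray, spacing_mm: np.ndarray,
                         origin_mm: np.ndarray) -> np.ndarray:
    """Map voxel indices (N x 3) to physical coordinates in mm.

    For an axis-aligned volume: physical = origin + index * spacing.
    Rendering these coordinates unscaled in a scene whose world units
    are millimeters is what makes the displayed 3D model 1:1 with the
    anatomy acquired from the CT scan.
    """
    return origin_mm + ijk * spacing_mm

spacing = np.array([0.7, 0.7, 1.0])              # mm per voxel, illustrative
origin = np.zeros(3)
corners = np.array([[0, 0, 0], [511, 511, 199]])
print(voxel_to_physical_mm(corners, spacing, origin))
# Volume extents of ~357.7 x 357.7 x 199 mm: the physical size to render.
```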
2. Sample Size Used for the Test Set and Data Provenance
The provided document does not explicitly state the sample size used for the non-clinical performance test set. It mentions "fifteen users" and "three users" for the simultaneous connection tests for Cysware 4H, but gives no sample size for the accuracy of measurements or segmentation, where image data would be the primary "sample."
The data provenance is not explicitly mentioned (e.g., country of origin of data, retrospective or prospective). However, the general context is about software testing and validation against technical specifications rather than a clinical study on patient data from specific sources. The segmentation and mesh generation accuracy for Cysware 4H specifically mentions using "DICOM from CT scan source," but the origin of these CT scans is not provided.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
This information is not explicitly provided in the non-clinical performance data section. The testing described focuses on technical specifications and usability, rather than expert-derived ground truth on clinical diagnostic images. For measures like accuracy of segmentation, there would have been a "ground truth" for comparison, but the method of establishing it and the experts involved are not detailed.
4. Adjudication Method for the Test Set
An adjudication method (e.g., 2+1, 3+1) is not mentioned as the study described is non-clinical performance testing rather than a clinical study requiring adjudication of findings (like a diagnostic accuracy study).
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done
No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done. The document states, "Clinical testing was not required to demonstrate substantial equivalence." Therefore, no effect size of how much human readers improve with AI vs. without AI assistance is provided.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done
The performance tests for Cysware 4H's "Accuracy of Cysware 4H segmentation algorithm and Mesh generation" can be considered a standalone performance assessment of the algorithm's capability. The reported error of "lower than 1.25mm" against a ground truth (though not fully described) indicates a standalone evaluation.
7. The Type of Ground Truth Used
For the accuracy of measures (distance, angle) and segmentation accuracy of Cysware 4H, the ground truth would typically be reference measurements or segmentation derived from the input CT scan data. While the method of establishing this ground truth (e.g., expert consensus, manual annotation by a highly qualified individual, comparison to a gold standard software) is not explicitly detailed, it would inherently be a technical ground truth rather than pathological or outcomes data, as these are non-clinical hardware/software performance tests.
For Cysart 4H, the ground truth for parameters like refresh rate, autonomy, scale, connection time, display quality, and voice command efficacy is based on technical specifications and measurable operational performance criteria rather than clinical ground truth from patient data.
8. The Sample Size for the Training Set
The document does not provide information on the sample size used for the training set for any algorithms within Cysware 4H or Cysart 4H. It is stated that the software was developed, verified, and validated, implying standard software development and QA practices, but details on machine learning model training data are absent.
9. How the Ground Truth for the Training Set was Established
As no information on a training set or specific machine learning models requiring labeled training data is provided, how the ground truth for such a training set was established is not detailed.
(422 days)
VSI HoloMedicine® is a software device for displaying digital medical images acquired from CT, Angio CT, MRI, CBCT, PET, and SPECT sources. It is intended to visualize 3D imaging holograms of the patient for pre-operative planning outside and/or inside the surgical room.
When accessing VSI HoloMedicine® from a wireless head-mounted display (HMD) or PC monitor, images viewed are for informational purposes only and not intended for diagnostic use. VSI HoloMedicine® is indicated for use by qualified healthcare professionals including surgeons, radiologists, physicians, and technologists.
The provided text does not contain detailed performance data or acceptance criteria for the VSI HoloMedicine® device beyond a general statement that "Visual quality testing on software using the Microsoft Hololens Headset has been performed." and that "software verification demonstrate that the VSI Holomedicine should perform as intended in the specified use conditions."
The document focuses on establishing substantial equivalence to a predicate device (Medivis-SurgicalAR K190764) and a reference device (Novarad-OpenSight K172418) based on their design, indications for use, and technology. It lists applicable standards, but does not provide the results of specific performance tests against measurable acceptance criteria.
Therefore, I cannot fulfill your request for:
- A table of acceptance criteria and the reported device performance: This information is not present in the provided text.
- Sample sizes used for the test set and the data provenance: This information is not present.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts: This information is not present.
- Adjudication method for the test set: This information is not present.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and its effect size: This information is not present.
- If a standalone (i.e., algorithm only without human-in-the-loop performance) was done: This information is not present.
- The type of ground truth used: This information is not present.
- The sample size for the training set: This information is not present.
- How the ground truth for the training set was established: This information is not present.
The document primarily provides regulatory information for a 510(k) submission, confirming the device's substantial equivalence and general safety/effectiveness, rather than detailed performance study results.
(263 days)
The ARAI™ System is intended as an aid for precisely locating anatomical structures in either open or percutaneous orthopedic procedures in the lumbosacral spine region. Their use is indicated for any medical condition of the lumbosacral spine in which the use of stereotactic surgery may be appropriate, and where reference to a rigid anatomical structure, such as the iliac crest, can be identified relative to intraoperative CT images of the anatomy.
The ARAI System simultaneously displays 2D stereotaxic data along with a 3D virtual anatomy model over the patient during surgery. The stereotaxic display is indicated for continuously tracking instrument position and orientation to the registered patient anatomy while the 3D display is indicated for localizing the virtual instrument to the virtual anatomy model over the patient during surgery. The 3D display should not be relied upon solely for absolute positional information and should always be used in conjunction with the displayed 2D stereotaxic information.
The ARAI™ System is a combination of hardware and software that provides visualization of the patient's internal bony anatomy and surgical guidance to the surgeon based on registered patient-specific digital imaging.
ARAI™ is a navigation system for surgical planning and/or intraoperative guidance during stereotactic surgical procedures. The ARAI™ system consists of two mobile devices: 1) the surgeon workstation, which includes the display unit and the augmented reality visor (optional), and 2) the control workstation, which houses the optical navigation tracker and the computer. The optical navigation tracker uses infrared cameras and active infrared lights to triangulate the 3D location of passive markers attached to each system component, determining their 3D positions and orientations in real time. The 3D scanned data is displayed as both 2D images and 3D virtual models, along with tracking information, on computers at workstations near the patient bed and on a dedicated projection display mounted over the patient. Augmented reality is accomplished by viewing the 3D virtual models through dedicated headset(s).
Software algorithms combine tracking information and high-resolution 3D anatomical models to display representations of patient anatomy.
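The tracking pipeline described above — triangulating passive markers and resolving each component's position and orientation — is commonly implemented by fitting a rigid transform between a tool's known marker layout and the triangulated camera-frame positions. Below is a minimal sketch of that fitting step using the standard Kabsch (SVD) algorithm; this is a generic technique offered for context, not ARAI's disclosed method.

```python
import numpy as np

def rigid_transform(model_pts: np.ndarray, observed_pts: np.ndarray):
    """Kabsch algorithm: best-fit rotation R and translation t mapping
    model_pts (N x 3, the tool's known marker layout) onto observed_pts
    (N x 3, triangulated positions in the camera frame)."""
    cm, co = model_pts.mean(axis=0), observed_pts.mean(axis=0)
    H = (model_pts - cm).T @ (observed_pts - co)    # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = co - R @ cm
    return R, t

# Illustrative check: recover a known pose from noiseless marker data.
model = np.array([[0, 0, 0], [50, 0, 0], [0, 50, 0], [0, 0, 50]], float)
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
observed = model @ R_true.T + np.array([10.0, 20.0, 30.0])
R, t = rigid_transform(model, observed)
assert np.allclose(R, R_true) and np.allclose(t, [10, 20, 30])
```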
Here's an analysis of the acceptance criteria and study details for the ARAI™ Surgical Navigation System based on the provided FDA 510(k) summary:
The document does not explicitly present a table of acceptance criteria. Instead, it presents the results of performance validation for positional and angular errors. Therefore, the reported device performance is used directly to infer the implied acceptance criteria.
1. Table of Acceptance Criteria and Reported Device Performance
Performance Validation Metric | Implied Acceptance Criteria (Upper Bound) | Reported Device Performance |
---|---|---|
Positional Error [mm] | ≤ 2.49 mm (99% CI Upper Bound); ≤ 2.41 mm (95% CI Upper Bound) | 2.16 mm (Mean); 1.00 mm (Standard deviation) |
Angular Error [degrees] | ≤ 1.74 degrees (99% CI Upper Bound); ≤ 1.68 degrees (95% CI Upper Bound) | 1.49 degrees (Mean); 0.73 degrees (Standard deviation) |
Display Luminance | Met requirements | Demonstrated via testing |
Image Contrast | Met requirements | Demonstrated via testing |
Latency and Framerate | Met requirements | Demonstrated via testing |
Stereoscopic Crosstalk and Contrast | Met requirements | Demonstrated via testing |
AR Shutter Frequency | Met requirements | Demonstrated via testing |
Spatial Accuracy (AR) | Met requirements | Demonstrated via testing |
User Interface and System Display Usability | Met requirements | Evaluated via Human Factors and Usability Testing |
Software Segmentation Quality | Compared favorably to manual segmentation | Determined by comparing with manual segmentations (mean Sørensen-Dice coefficient - DSC) |
Biocompatibility | Met requirements | Evaluation confirms compliance |
Electrical Safety | Compliant with IEC 60601-1:2012 | Testing assures compliance |
Electromagnetic Compatibility | Compliant with IEC 60601-1-2:2014 | Testing assures compliance |
Software Verification and Validation | Compliant with FDA Guidance | Performed |
2. Sample Sizes Used for the Test Set and Data Provenance
- Positional and Angular Error Validation (Surgical Simulations):
- Sample Size: Not explicitly stated in the provided text. The terms "overall 3D positional error" and "overall 3D angular error" are used, but they do not reveal the number of screws measured or the number of cadavers.
- Data Provenance: Prospective, real-world simulation using cadavers ("Surgical simulations conducted on cadavers were performed for system validation."). The country of origin is not specified.
- Software Segmentation Quality:
- Sample Size: A "set of test samples presenting lumbosacral spine, extracted from stationary and intraoperative Computed Tomography scans" was used. The exact number of samples is not provided.
- Data Provenance: CT scans (both stationary and intraoperative) of the lumbosacral spine. It is unclear if these were retrospective or prospective, or their country of origin.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
- Positional and Angular Error Validation: The document describes the ground truth as the "real implants." It does not mention experts establishing the ground truth for this measurement directly, as it's a direct comparison between the virtual and physically placed surgical artifacts.
- Software Segmentation Quality: The ground truth was established by "manual segmentations prepared by trained analysts." The number of analysts and their specific qualifications (e.g., years of experience, specific medical specialty) are not provided.
4. Adjudication Method for the Test Set
- Positional and Angular Error Validation: Not applicable, as the ground truth derivation is not a subjective consensus process. It's a measurement against a physical reference.
- Software Segmentation Quality: The ground truth was established by "manual segmentations prepared by trained analysts." The document does not specify an adjudication method (like 2+1 or 3+1) if multiple analysts were involved or if a single analyst's segmentation was considered the ground truth.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size
- The provided document does not describe a Multi-Reader Multi-Case (MRMC) comparative effectiveness study and therefore does not report an effect size for human readers improving with AI vs. without AI assistance. The performance testing focuses on the device's accuracy in tracking and displaying anatomical structures and instruments.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Study Was Done
- Yes, a standalone performance assessment of the algorithm appears to have been conducted, particularly for:
- Positional and Angular Error Validation: This directly quantifies the system's accuracy in representing physical instrument and screw positions relative to the anatomical model, which is an algorithm-driven output.
- Software Segmentation Quality: The "autonomous spine segmentation process" was compared against manual segmentations, indicating a standalone evaluation of the algorithm's performance in this task.
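For reference, the Sørensen-Dice coefficient (DSC) used in the segmentation comparison is a standard overlap score between a candidate and a reference mask: DSC = 2|A ∩ B| / (|A| + |B|). A minimal sketch on binary voxel masks follows; the masks and function name are illustrative, not the submission's evaluation harness.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Sørensen-Dice coefficient between two boolean masks; 1.0 means
    perfect overlap, 0.0 means no overlap."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * intersection / total if total else 1.0

# Illustrative: algorithm output vs. a trained analyst's manual mask.
auto = np.zeros((64, 64, 64), dtype=bool)
auto[20:40, 20:40, 20:40] = True
manual = np.zeros((64, 64, 64), dtype=bool)
manual[22:42, 20:40, 20:40] = True
print(f"DSC = {dice_coefficient(auto, manual):.3f}")  # 0.900 for this overlap
```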
7. The Type of Ground Truth Used
- Positional and Angular Error Validation: The ground truth was the "real implants" positioned in cadavers. This is a form of direct physical measurement/outcome data.
- Software Segmentation Quality: The ground truth was expert manual segmentation ("manual segmentations prepared by trained analysts").
8. The Sample Size for the Training Set
- The document does not specify the sample size used for the training set for any of the algorithms (e.g., for spine segmentation or tracking). It only mentions test samples.
9. How the Ground Truth for the Training Set Was Established
- The document does not provide information on how the ground truth for the training set was established, as it does not describe the training process or the dataset used for training. It only details the establishment of ground truth for certain test sets.
(281 days)
K172418 OpenSight
Viewer is a software device for display of medical images and other healthcare data. It includes functions for image review, image manipulation, basic measurements and 3D visualization (Multiplanar reconstructions and 3D volume rendering). It is not intended for primary image diagnosis or the review of mammographic images.
Viewer is software for viewing DICOM data. The device provides basic measurement functionality for distances and angles.
These are the operating principles:
- On desktop PCs the interaction with the software is mainly performed with mouse and/or keyboard.
- On touch screen PCs and on mobile devices the software is mainly used with a touch screen interface.
- On Mixed Reality glasses the interaction is performed with a dedicated pointing device.
The subject device provides or integrates the following frequently used functions:
- Select medical images and other healthcare data to be displayed
- Select views (e.g. axial, coronal & sagittal reconstruction views and 3D volume rendering views)
- Change view layout (e.g. maximize / minimize views, close / open / reorder views)
- Manipulate views (e.g. scroll, zoom, pan, change windowing)
- Perform measurements (e.g. distance or angle measurements)
- Place annotations at points of interest
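As background on the measurement functions listed above: in-plane DICOM measurements are typically computed by scaling pixel offsets by the slice's PixelSpacing (row spacing, column spacing, in mm) before applying ordinary geometry. A minimal generic sketch, not Brainlab's implementation; the spacing value is illustrative.

```python
import numpy as np

def distance_mm(p1_px, p2_px, spacing_mm):
    """Euclidean distance between two in-plane pixel positions, after
    scaling the (row, column) pixel offset by the per-axis spacing."""
    d = (np.asarray(p2_px, float) - np.asarray(p1_px, float)) * np.asarray(spacing_mm, float)
    return float(np.linalg.norm(d))

def angle_deg(vertex, p1, p2):
    """Angle at `vertex` formed by the rays toward p1 and p2, in degrees."""
    v1 = np.asarray(p1, float) - np.asarray(vertex, float)
    v2 = np.asarray(p2, float) - np.asarray(vertex, float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# With a real slice, spacing would come from the DICOM header, e.g.
#   spacing = pydicom.dcmread("slice001.dcm").PixelSpacing  # hypothetical file
spacing = (0.5, 0.5)                                 # mm per pixel, illustrative
print(distance_mm((100, 100), (160, 100), spacing))  # 30.0 mm
print(angle_deg((0, 0), (1, 0), (0, 1)))             # 90.0 degrees
```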
The provided document is a 510(k) summary for the "Viewer" device from Brainlab AG. It describes the device, its intended use, and its comparison to a predicate device and a reference device to demonstrate substantial equivalence. However, it does not contain the detailed information required to fill out the table of acceptance criteria and the study that proves the device meets those criteria, specifically regarding device performance metrics (e.g., sensitivity, specificity, accuracy), sample sizes, ground truth establishment, or multi-reader multi-case studies for AI components.
The document primarily focuses on verifying the software's functionality, user interface, DICOM compatibility, and integration, rather than clinical performance metrics of an AI algorithm. The device is a "Picture Archiving And Communications System" (PACS) that displays medical images and other healthcare data and is not intended for primary image diagnosis. This indicates that the regulatory requirements for performance metrics such as sensitivity and specificity, which are common for AI algorithms involved in diagnosis, would not apply to this specific device.
Therefore, most of the information requested in your prompt cannot be extracted from this document because the device described is not an AI diagnostic algorithm, and the provided text focuses on software functionality verification rather than clinical performance studies.
Here's what can be extracted and what cannot:
1. A table of acceptance criteria and the reported device performance
Acceptance Criteria Category | Test Method Summary | Reported Device Performance |
---|---|---|
User interface | Interactive testing of user interface | All tests passed |
DICOM compatibility | Interactive testing with companywide test data, which are identical for consecutive versions of the SW | All tests passed |
Views | Interactive testing of user interface | All tests passed |
Unit test /Automatic tests | Automated or semi-automated cucumber tests or unit tests are written on the applicable level for new functionalities of the Viewer in respect to previous versions. Existing tests have to pass. | All tests passed |
Integration test | Interactive testing on various platforms and in combination with other products following test protocols, combined with explorative testing. The software is developed with daily builds, which are exploratively tested. | All tests passed |
Usability | Usability tests (ensure user interface can be used safely and effectively) | All tests passed |
Communication & Cybersecurity | Verification of communication and cybersecurity between Viewer and Magic Leap Mixed Reality glasses | Successfully passed |
Missing Information/Not Applicable: The document does not provide acceptance criteria or performance metrics related to diagnostic accuracy (e.g., sensitivity, specificity, AUC) because the device is explicitly stated as not intended for primary image diagnosis.
2. Sample sizes used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Sample Size for Test Set: Not specified. The verification tests mention "companywide test data" and "various platforms and combination with other products" but do not provide specific numbers of cases or images.
- Data Provenance: Not specified. The document mentions "companywide test data" but does not detail the country of origin or whether it was retrospective or prospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Not applicable/Not specified. Since the device is not for primary diagnosis and the tests focus on software functionality, there is no mention of experts establishing ground truth for diagnostic purposes. The "ground truth" for the software functionality tests would be the expected behavior of the software.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Not specified. The testing methods described are interactive testing, automated/semi-automated tests, and usability tests. There is no mention of an adjudication method typical for diagnostic performance studies.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance
- Not applicable. The document does not describe an AI algorithm intended to assist human readers in diagnosis. It's a DICOM viewer. Therefore, an MRMC study comparing human readers with and without AI assistance was not performed or reported.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Not applicable. This device is a viewer, not a standalone diagnostic algorithm.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- The concept of "ground truth" in the context of diagnostic accuracy (e.g., pathology, expert consensus) does not apply here as the device is not for primary diagnosis. For its stated functions, the "ground truth" would be the expected, correct functioning of the software features (e.g., correct display of DICOM data, accurate measurements of distance/angle).
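Because the ground truth here is specified behavior, verification of such features looks like ordinary software testing: assertions that the output matches the specification exactly. A minimal pytest-style sketch, reusing a hypothetical `distance_mm` helper like the one sketched earlier; these are illustrative tests, not Brainlab's test protocols.

```python
import numpy as np

def distance_mm(p1_px, p2_px, spacing_mm):
    d = (np.asarray(p2_px, float) - np.asarray(p1_px, float)) * np.asarray(spacing_mm, float)
    return float(np.linalg.norm(d))

def test_distance_measurement_matches_specification():
    # Specified behavior: 60 pixels at 0.5 mm/pixel measures 30.0 mm.
    assert abs(distance_mm((100, 100), (160, 100), (0.5, 0.5)) - 30.0) < 1e-9

def test_isotropic_measurement_is_axis_invariant():
    # The same physical offset along rows or columns must measure the same.
    s = (0.5, 0.5)
    assert distance_mm((0, 0), (0, 60), s) == distance_mm((0, 0), (60, 0), s)
```

Run under pytest, each assertion either matches the expected software behavior ("All tests passed" in the table above) or flags a defect.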
8. The sample size for the training set
- Not applicable/Not specified. The device is a viewer, not an AI model that undergoes a "training" phase with a dataset.
9. How the ground truth for the training set was established
- Not applicable. (See point 8).
(255 days)
OpenSight, K172418
The xvision Spine System, with xvision System Software, is intended as an aid for precisely locating anatomical structures in either open or percutaneous spine procedures. Their use is indicated for any medical condition in which the use of stereotactic surgery may be appropriate, and where reference to a rigid anatomical structure, such as the spine, can be identified relative to CT imagery of the anatomy. This can include the spinal implant procedures, such as Posterior Pedicle Screw Placement in the thoracic and sacro-lumbar region.
The Headset of the xvision Spine System displays 2D stereotaxic screens and a virtual anatomy screen. The stereotaxic screen is indicated for correlating the tracked instrument location to the registered patient imagery. The virtual screen is indicated for displaying the virtual instrument location to the virtual anatomy to assist in percutaneous visualization and trajectory planning.
The virtual display should not be relied upon solely for absolute positional information and should always be used in conjunction with the displayed stereotaxic information.
The xvision Spine (XVS) system is an image-guided navigation system that is designed to assist surgeons in placing pedicle screws accurately, during open or percutaneous computer-assisted spinal surgery. The system consists of a dedicated software, Headset, single use passive reflective markers and reusable components. It uses wireless optical tracking technology and displays to the surgeon the location of the tracked surgical instruments relative to the acquired intraoperative patient's scan, onto the surgical field. The 2D scanned data and 3D reconstructed model, along with tracking information, are projected to the surgeons' retina using a transparent near-eye-display Headset, allowing the surgeon to both look at the patient and the navigation data at the same time.
The provided text describes the performance data and testing conducted for the xvision Spine system, particularly focusing on its accuracy in guiding pedicle screw placement.
Here's an analysis of the acceptance criteria and the study proving the device meets them, based on the provided document:
1. Acceptance Criteria and Reported Device Performance
The core acceptance criteria for the xvision Spine system relate to its positional and trajectory angle accuracy. The document implicitly sets these criteria by comparing the device's performance to the predicate device and by reporting the mean errors and 99% Upper Bound Limits (UBLs).
Metric | Acceptance Criteria (Implied) | Reported Device Performance (Phantom Study) | Reported Device Performance (Cadaver Study) |
---|---|---|---|
Overall Positional Error | ≤ 2.0 mm (Mean) | 0.63 - 0.954 mm (Mean); ≤ 1.12 mm (99% UBL) | 1.98 mm (Mean); 2.22 mm (99% UBL) |
Overall Trajectory Angle Error | ≤ 2° (Mean) | 0.468 - 0.683° (Mean); ≤ 1.08° (99% UBL) | 1.3° (Mean); 1.47° (99% UBL) |
Note: The document explicitly states: "Thus, the system has demonstrated performance in 3D positional accuracy with a mean error statistically significantly lower than 3mm and in trajectory angle accuracy with a mean error statistically significantly lower than 3 degrees, in phantom and cadaver studies." However, the "System Accuracy Requirement" for the device, as listed in the comparison table with the predicate, is 2.0mm positional error and 2° trajectory error. The reported performance is compared to this requirement rather than a broader 3mm/3degree standard. Therefore, the "Acceptance Criteria" column above reflects the stricter "System Accuracy Requirement" from the comparison table.
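As context for the 99% UBL figures: one common reading of an upper bound limit is a one-sided confidence bound on the mean error, mean + t·s/√n. The sketch below shows that computation on invented data; the formula choice is an assumption, since the summary does not state how the UBLs were derived.

```python
import numpy as np
from scipy import stats

def mean_and_ubl(errors_mm: np.ndarray, confidence: float = 0.99):
    """Sample mean and a one-sided upper confidence bound on the mean:
    UBL = mean + t_{confidence, n-1} * s / sqrt(n)."""
    n = errors_mm.size
    mean = errors_mm.mean()
    s = errors_mm.std(ddof=1)
    t = stats.t.ppf(confidence, df=n - 1)
    return float(mean), float(mean + t * s / np.sqrt(n))

# Hypothetical screw-placement errors in mm; NOT the study's data.
errors = np.abs(np.random.default_rng(0).normal(1.9, 0.4, size=30))
mean, ubl = mean_and_ubl(errors)
print(f"mean = {mean:.2f} mm, 99% UBL = {ubl:.2f} mm")
```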
2. Sample Size Used for the Test Set and Data Provenance
- Phantom Study: The sample size for the phantom study is not explicitly stated in terms of the number of measurements or trials. However, it involved testing under "different conditions simulating clinical conditions such as: Headset mounted statically and Headset moving above the markers, different distances between the Headset and the markers, and different angles" and using two Z-link markers (Z1 and Z2).
- Cadaver Study: The sample size is not explicitly stated for the cadaver study either, but it involved positioning pedicle screws percutaneously in "thoracic and sacro-lumbar vertebrae." The number of cadavers or screws tested is not provided.
- Data Provenance:
- Phantom Study: The data provenance is laboratory bench testing. The country of origin is not specified, but the applicant company is located in Israel (Augmedics Ltd.).
- Cadaver Study: The data provenance is from a cadaver study. The country of origin is not specified. This would be considered a prospective study as it involves active experimentation.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- The document does not specify the number of experts or their qualifications for establishing ground truth in either the phantom or cadaver studies.
- For the cadaver study, the ground truth for positional error was derived from "the post-op scan," and for trajectory error, it was a "recorded planned/virtual trajectory." It implies an objective measurement rather than expert consensus on anatomical landmarks.
4. Adjudication Method for the Test Set
- The document does not describe any adjudication method (e.g., 2+1, 3+1, none) for the test sets. The ground truth appears to be based on direct measurements and pre-defined plans rather than subjective assessments requiring adjudication.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- A Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not conducted. The studies focused on the accuracy of the device itself (standalone performance and cadaver-assisted performance), not on the improvement in human reader performance with or without AI assistance. The device is a navigation system, assisting surgeons during procedures, not an AI-assisted diagnostic tool for human readers.
6. Standalone (Algorithm Only) Performance
- Yes, the performance data presented primarily focuses on the standalone performance of the xvision Spine system, particularly its accuracy. The "Bench testing" results demonstrate the algorithm's accuracy in a controlled environment, and the "cadaver study" validates this accuracy in a more realistic anatomical setting, demonstrating the system's ability to guide screw placement. The focus is on the precision of the stereotaxic instrument, not on human interpretation or analysis.
7. Type of Ground Truth Used
- Phantom Study: The ground truth was established through known mechanical positions and precisely defined settings within the phantom, allowing for objective measurement of error from a pre-defined ideal.
- Cadaver Study: The ground truth for positional error was derived from the post-operative scan (objective imaging data), and for trajectory error, it was compared to the recorded planned/virtual trajectory (pre-defined objective plan).
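To make the trajectory comparison concrete: once the achieved screw axis is extracted from the post-op scan, the trajectory angle error is simply the angle between the achieved and planned axes as unit vectors. A minimal sketch with invented vectors, not study data.

```python
import numpy as np

def trajectory_angle_error_deg(planned_axis, achieved_axis) -> float:
    """Angle in degrees between a planned and an achieved screw axis."""
    a = np.asarray(planned_axis, float)
    b = np.asarray(achieved_axis, float)
    a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
    return float(np.degrees(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))))

planned = np.array([0.0, 0.2, 1.0])     # hypothetical planned trajectory
achieved = np.array([0.02, 0.22, 1.0])  # hypothetical axis from post-op scan
print(f"{trajectory_angle_error_deg(planned, achieved):.2f} degrees")  # ~1.5
```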
8. Sample Size for the Training Set
- The document does not provide any information regarding a training set or its sample size. This is a medical device for surgical guidance, not a machine learning model that typically requires a separate training set. The descriptions focus on the validation of the system's accuracy and performance.
9. How the Ground Truth for the Training Set Was Established
- Since no training set is mentioned or implied for this type of medical device validation, there is no information on how ground truth for a training set was established.