510(k) Data Aggregation
(389 days)
IntraOpVSP is a software device that is indicated for use with an augmented display which allows for visualization and orientation of 3D digital models of selected structures of a patient's anatomy.
IntraOpVSP is intended to supplement conventional Virtual Surgical Planning (VSP) by facilitating perception of the shape and scale of a patient's anatomical targets for use in preoperative planning and heads-up 3D visualization during surgery.
IntraOpVSP is not intended to provide diagnosis or to guide surgical instrumentation. It is not to be used for stereotactic procedures or surgical navigation.
IntraOpVSP is intended for use by surgeons who have been trained to operate IntraOpVSP. The software is designed for use with performance-tested hardware specified in the User Manual.
IntraOpVSP software displays 3D objects as holograms to inform the user on operative planning. It includes functions for 3D object spatial manipulation and orientation. IntraOpVSP software provides additional information to the surgeon by displaying holograms of the surgical plan, anatomical structures, and guides.
The provided text from the 510(k) summary for the IntraOpVSP software device does not contain a detailed study proving the device meets specific acceptance criteria with reported device performance metrics in a tabular format. The summary states that "design validation was successfully completed, and testing met all predetermined acceptance criteria." However, it does not enumerate these criteria or provide quantitative results from performance testing related to the device's specific functions (e.g., accuracy of 3D object spatial manipulation, accuracy of orientation, or the effectiveness of facilitating perception of shape and scale).
Instead, the summary focuses on:
- Substantial equivalence to a predicate device (OpenSight K172418) based on intended use and technological characteristics.
- Mentioning adherence to various standards and guidance documents for performance testing, human factors, hazard analysis, and software verification/validation.
- A general conclusion that testing demonstrates the device is "at least as safe and effective as the predicate device."
Therefore, based solely on the provided text, the following information cannot be fully extracted or is not explicitly detailed:
- A table of acceptance criteria and the reported device performance: Not provided. The document states "testing met all predetermined acceptance criteria" but does not list the criteria or the specific performance results.
- Sample size used for the test set and the data provenance: Not provided. The document mentions "simulated use, and actual use in simulated surgery" but does not specify the number of cases or the origin of the data used for validation.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not provided.
- Adjudication method for the test set: Not provided.
- Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of human reader improvement with versus without AI assistance: Not explicitly stated. The validation mentions "validation of trained users' operation of the device in a simulated surgical environment," but it does not describe a comparative effectiveness study with and without AI assistance or report an effect size for human reader improvement.
- Whether standalone (i.e., algorithm-only, without human-in-the-loop) performance testing was done: Not explicitly detailed. The device is described as "software that is indicated for use with an augmented display" and "intended to supplement conventional Virtual Surgical Planning," implying human interaction is integral to its intended use and evaluation.
- The type of ground truth used: Not explicitly stated. For a device intended for visualization and planning, ground truth might involve measurements compared to anatomical models or clinical outcomes, but this is not specified.
- The sample size for the training set: Not applicable/provided. The document does not describe a machine learning model that would require a distinct training set. The descriptions focus on software functions and visualization of pre-segmented 3D models.
- How the ground truth for the training set was established: Not applicable/provided.
Summary of what can be extracted or inferred regarding the study based on the provided text:
- Study Type: Design verification and design validation, including simulated use and actual use in simulated surgery. Validation with intended users was performed.
- Performance Metrics Mentioned (for augmented reality display): Gamma response, contrast and contrast ratio, resolution, luminance and luminance variation, virtual field of view, display obstructions, and distortion. However, the acceptance criteria for these and the reported device performance against those criteria are not provided.
- Ground Truth (inferred for display performance): Testing was done to IEC 63145-20-10:2019 and IEC 63145-20-20:2019. This suggests objective measurements against these standards would serve as the ground truth for display characteristics.
- Human Factors/Usability Testing: Performed following "Guidance for Industry and Food and Drug Administration Staff, February 3, 2011" and IEC 62366-1:2015. This indicates an evaluation of the user interface and user interaction but does not specify performance metrics for the device's functional output.
- Software Verification and Validation: Performed according to IEC 62304:2015 and FDA's guidance on Software as a Medical Device (SaMD). This confirms adherence to software development lifecycle standards.