
510(k) Data Aggregation

    K Number: K173274
    Manufacturer:
    Date Cleared: 2018-07-10 (271 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices:
    Device Name: Ceevra Reveal 2.0

    Intended Use

    Ceevra Reveal 2.0 is intended as a medical imaging system that allows the processing, review, analysis, communication and media interchange of multi-dimensional digital images acquired from CT or MR imaging devices. It is also intended as software for preoperative surgical planning, and as software for the intraoperative display of the aforementioned multi-dimensional digital images. Ceevra Reveal 2.0 is designed for use by health care professionals and is intended to assist the clinician who is responsible for making all final patient management decisions.

    Device Description

    Ceevra Reveal 2.0 is a software-only device that allows clinicians to review CT and MR image data in three-dimensional (3D) format and/or stereoscopic 3D format (commonly known as virtual reality, or VR). The 3D and VR images are accessible through the Ceevra Reveal 2.0 mobile application which is used by clinicians for preoperative surgical planning and for the intraoperative display of the aforementioned 3D and VR images.

    Ceevra Reveal 2.0 includes two main software-based user interface components, the Processing Interface and the Viewer Interface. The Processing Interface is hosted on a cloud-based, virtual workstation and is accessed only by authorized personnel, such as an imaging technician. The Processing Interface contains a graphical user interface where an imaging technician can select DICOM-compatible medical images, segment such images, and initiate processing into a 3D format. The Viewer Interface is a mobile application that is accessible via a compatible, touchscreen-enabled, off-the-shelf mobile device and allows clinicians to review the medical images in 3D and/or VR formats. Only when the compatible mobile device is used in conjunction with a compatible off-the-shelf VR headset can the surgeon view medical images in the VR format.
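
    For orientation, the sketch below shows the kind of DICOM-to-3D pipeline that a processing interface like this implies. It is a minimal illustration, assuming pydicom and scikit-image; the function names, Hounsfield-unit threshold, and overall structure are the author's assumptions, not details taken from the 510(k) submission or from Ceevra's actual implementation.

```python
# Illustrative only: a minimal DICOM-series-to-surface-mesh pipeline of the
# kind the Processing Interface description implies. Library choices and the
# fixed HU threshold are assumptions, not details from the 510(k).
from pathlib import Path

import numpy as np
import pydicom
from skimage import measure


def load_ct_series(series_dir: str) -> tuple[np.ndarray, tuple[float, float, float]]:
    """Read one CT series into a 3D volume ordered by slice position."""
    slices = [pydicom.dcmread(p) for p in Path(series_dir).glob("*.dcm")]
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    volume = np.stack([s.pixel_array.astype(np.int16) for s in slices])
    # Convert stored pixel values to Hounsfield units via the rescale tags.
    volume = volume * float(slices[0].RescaleSlope) + float(slices[0].RescaleIntercept)
    dz = abs(float(slices[1].ImagePositionPatient[2]) - float(slices[0].ImagePositionPatient[2]))
    dy, dx = (float(v) for v in slices[0].PixelSpacing)
    return volume, (dz, dy, dx)


def surface_mesh(volume: np.ndarray, spacing, hu_threshold: float = 300.0):
    """Extract a triangle mesh of voxels above a (hypothetical) HU threshold."""
    verts, faces, _, _ = measure.marching_cubes(volume, level=hu_threshold, spacing=spacing)
    return verts, faces
```

    A real product would segment individual structures (vessels, tumors, organs) rather than apply a single global threshold; the point here is only to show where DICOM parsing, geometry handling, and 3D reconstruction sit relative to each other.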

    The product is intended to be used by trained medical professionals, including imaging technicians and clinicians/surgeons, and is used to assist in clinical decision making.

    The 3D images generated using Ceevra Reveal 2.0 are intended to be used in connection with surgical operations in which CT or MR images are used for preoperative planning and/or reviewed intraoperatively.

    The manner in which the 3D images are viewed and used does not vary between surgery types. The 3D images are viewed solely from the clinicians' compatible mobile devices, and are not viewed through or otherwise integrated with surgical navigation systems.

    AI/ML Overview

    The provided document, a 510(k) premarket notification for Ceevra Reveal 2.0, does not contain the detailed information necessary to fully address all aspects of the request regarding acceptance criteria and the study that proves the device meets them. The document focuses on establishing substantial equivalence to a predicate device (Clarity Reveal 1.0, K171356), rather than presenting a performance study with detailed acceptance criteria and validation results against ground truth.

    Here's an attempt to extract and infer information based on the provided text, and to explain why certain sections cannot be fully completed:

    Missing Information: It's important to note that this 510(k) summary primarily focuses on demonstrating substantial equivalence to a predicate device. For devices seeking substantial equivalence as a "picture archiving and communication system" (PACS) with features like 3D visualization, the FDA often emphasizes software verification and validation, and occasionally clinical validation to demonstrate that the device performs as intended and is as safe and effective as the predicate. However, detailed studies with specific performance metrics against a defined ground truth, like those required for diagnostic AI algorithms, are typically not a mandatory part of a 510(k) for such a device unless it introduces a new intended use or technology that raises new questions of safety or effectiveness.

    The document indicates: "Safety and performance of Ceevra Reveal 2.0 has been evaluated and verified in accordance with software specifications and applicable performance standards through software verification and validation testing." And it refers to IEC 62304 and FDA Guidance documents for software. This suggests that the "study" proving the device meets acceptance criteria was primarily software V&V, not a clinical performance study with human readers or standalone AI performance metrics against a specific ground truth.
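    To make the distinction concrete, the following is a hypothetical example of the kind of automated check that software verification and validation of an image-processing device might include: confirming that a reconstructed surface remains consistent with the voxel spacing declared in the source data. It is a self-contained, pytest-style sketch assuming scikit-image; none of the names or tolerances come from the 510(k) itself.

```python
# Hypothetical V&V-style check: the physical extent of a reconstructed surface
# should match the known size of a synthetic object, within a voxel tolerance.
import numpy as np
from skimage import measure


def test_mesh_bounding_box_matches_physical_extent():
    spacing = (2.5, 0.7, 0.7)                 # (z, y, x) voxel size in mm
    volume = np.zeros((12, 20, 20), dtype=np.float32)
    volume[4:8, 5:15, 5:15] = 1.0             # synthetic cuboid "organ"

    verts, _, _, _ = measure.marching_cubes(volume, level=0.5, spacing=spacing)
    extent_mm = verts.max(axis=0) - verts.min(axis=0)

    # Expected physical size of the cuboid: 4 x 10 x 10 voxels, scaled by spacing.
    expected = np.array([4 * 2.5, 10 * 0.7, 10 * 0.7])
    assert np.all(np.abs(extent_mm - expected) <= np.array(spacing))


if __name__ == "__main__":
    test_mesh_bounding_box_matches_physical_extent()
    print("geometry check passed")
```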


    1. A table of acceptance criteria and the reported device performance

    Based on the provided document, there are no explicit quantitative acceptance criteria or reported device performance metrics in the format of a clinical study or diagnostic accuracy study. The primary "performance" is implicitly tied to its function as a medical imaging system for processing, review, analysis, communication, and media interchange, as well as for surgical planning and intraoperative display. The acceptance is based on demonstrating that it performs these functions adequately and is substantially equivalent to the predicate.

    Acceptance Criteria (Inferred from functionality and SE claims) and Reported Device Performance (Inferred from documentation):

    • Functional Equivalence: Ability to process, review, analyze, communicate, and interchange multi-dimensional digital images from CT/MR.
      Reported performance: Stated to perform these functions, comparable to the predicate device.

    • Image Quality / Fidelity: Produce 3D and VR images suitable for preoperative surgical planning and intraoperative display.
      Reported performance: Images are accessible through the mobile application and viewable in 3D/VR, suggesting visual fidelity is acceptable for the intended use.

    • Software Reliability & Safety: Software operates without critical errors and adheres to medical device software standards (IEC 62304).
      Reported performance: "Safety and performance of Ceevra Reveal 2.0 has been evaluated and verified in accordance with software specifications and applicable performance standards through software verification and validation testing." Compliance with IEC 62304 and FDA guidance on software and cybersecurity is noted.

    • User Interface & Experience: Intuitive and effective interfaces for imaging technicians (Processing Interface) and clinicians/surgeons (Viewer Interface).
      Reported performance: Implied through the description of the interfaces and the intended use by medical professionals.

    • Intraoperative Use Capability (Delta from Predicate): Ability to display images intraoperatively.
      Reported performance: Explicitly stated as a new feature for Ceevra Reveal 2.0 and compared against the predicate (which does not have this feature), indicating it was tested for this capability.

    2. Sample size used for the test set and the data provenance

    The document does not detail a "test set" in the context of a clinical performance study with patient data and ground truth labels. The "testing" referred to is primarily software verification and validation. Therefore, there is no information on:

    • Sample size for a test set (e.g., number of cases or patients).
    • Data provenance (e.g., country of origin, retrospective or prospective).

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    Again, given that the document does not describe a clinical performance study against a specific ground truth for diagnostic accuracy, this information is not available. The ground truth for this type of device, which is a display and processing system, typically relates to the accuracy of the image reconstruction, segmentation, and visualization, rather than a diagnostic outcome. These are validated through engineering and software testing.


    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    Not applicable, as no external clinical test set requiring adjudication is described.


    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance

    No MRMC study is mentioned. This device is described as a "medical imaging system" for processing and display, and "software for preoperative surgical planning" and "intraoperative display." It is not described as an AI/CAD (Computer-Aided Detection/Diagnosis) device, and therefore comparative effectiveness studies demonstrating human improvement with AI assistance are not applicable or described in this document.


    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    The device's function is described as providing 3D/VR visualizations for human clinicians. It is explicitly stated that it "is intended to assist the clinician who is responsible for making all final patient management decisions." As such, standalone diagnostic performance in the sense of an algorithm making a decision is not the device's intended function or claimed capability. The "standalone" performance would be related to the accuracy of its 3D reconstruction and segmentation algorithms, which would be validated through internal software testing, not typically reported in detail in the 510(k) summary.


    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    For a device primarily focused on image processing and visualization (like a PACS), the "ground truth" for validation typically refers to the accuracy and fidelity of the 3D reconstructions to the original DICOM data and the anatomical structures within them. This would be established through:

    • Reference standard imaging (DICOM): The input CT/MR scans are the "ground truth" regarding the anatomy captured.
    • Known segmentation accuracy: If segmentation is performed, its accuracy against manually segmented or expert-reviewed ground truth models.
    • Visual inspection and clinical utility assessment: Review by qualified clinicians to ensure the 3D/VR representations are accurate, useful, and do not introduce artifacts or distortions that could mislead.

    The document does not specify the exact methods for establishing this ground truth for the "software verification and validation testing."
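    As one illustration of the segmentation-accuracy point above, overlap metrics such as the Dice coefficient are a common way an automated segmentation could be scored against an expert-reviewed reference mask. The sketch below is a generic example of that metric; the 510(k) summary does not say whether such a measure was actually used.

```python
# Generic Dice similarity between a predicted and a reference binary mask.
# Purely illustrative; not taken from the Ceevra submission.
import numpy as np


def dice_coefficient(predicted: np.ndarray, reference: np.ndarray) -> float:
    """Dice similarity between two binary masks (1.0 = perfect overlap)."""
    predicted = predicted.astype(bool)
    reference = reference.astype(bool)
    intersection = np.logical_and(predicted, reference).sum()
    total = predicted.sum() + reference.sum()
    return 1.0 if total == 0 else 2.0 * intersection / total


# Example: compare a hypothetical automated mask to a manually drawn one.
auto_mask = np.zeros((64, 64, 64), dtype=bool)
auto_mask[20:40, 20:40, 20:40] = True
manual_mask = np.zeros_like(auto_mask)
manual_mask[22:42, 20:40, 20:40] = True
print(f"Dice = {dice_coefficient(auto_mask, manual_mask):.3f}")  # ~0.900
```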


    8. The sample size for the training set

    The document does not mention "training sets" as would be relevant for a machine learning or AI-based device. Since it seems to be a rules-based or traditional image processing software, a "training set" in the AI sense is unlikely to have been used or described.


    9. How the ground truth for the training set was established

    Not applicable, as no training set is indicated.
