
510(k) Data Aggregation

    K Number: K240839
    Device Name: OpenRad Cloud
    Date Cleared: 2024-04-24 (28 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices
    Predicate For: N/A
    Intended Use

    OpenRad Cloud PACS and Viewer is intended for use in the medical image analysis and diagnosis workflows of hospitals, imaging centres, radiologists, and reading practices, including Peer Review or Teleradiology providers, that are granted access to patient images and associated demographic and report information. OpenRad Cloud PACS Viewer displays, modifies, and manages diagnostic-quality DICOM images, including 3D visualisation and reporting functionality.

    Lossy compressed mammographic images and digitized film screen images must not be reviewed for primary diagnosis or image interpretation. Mammographic images may only be viewed using cleared monitors intended for mammography display.

    The device is not intended for use with mobile displays. Image review requires optimal display conditions; only use cleared monitors and printers for diagnostic review of images. Follow the maintenance and care instructions in the manufacturer's documentation.

    Device Description

    OpenRad Cloud is a software device for the display, 2D/3D visualization, and evaluation of DICOM-compliant medical image data, such as CT, MRI, mammography, and ultrasound scans.

    OpenRad Cloud's principles of operation and mode of action are:

    1. Receive/Send DICOM Studies;
    2. Pre-process and archive in the correct folder (steps 1–2 are sketched after this list);
    3. Display study in the correct worklists for intended users;
    4. Open the study for visualization together with any existing previous studies of the patient;
    5. Produce, communicate and archive reports.
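    As an illustration of steps 1–2, the following is a minimal sketch of a DICOM receive-and-archive service built with the open-source pynetdicom/pydicom libraries. This is not OpenRad Cloud's code; the AE title, port, and folder layout are assumptions for demonstration only.

```python
from pathlib import Path

from pynetdicom import AE, evt, AllStoragePresentationContexts

# Hypothetical archive root; OpenRad Cloud's real storage layout is not public.
ARCHIVE_ROOT = Path("archive")

def handle_store(event):
    """Archive each received instance under its StudyInstanceUID folder."""
    ds = event.dataset
    ds.file_meta = event.file_meta  # preserve transfer-syntax metadata
    study_dir = ARCHIVE_ROOT / ds.StudyInstanceUID
    study_dir.mkdir(parents=True, exist_ok=True)
    ds.save_as(study_dir / f"{ds.SOPInstanceUID}.dcm", write_like_original=False)
    return 0x0000  # DICOM Success status

# Accept C-STORE requests for all storage SOP classes on an assumed port.
ae = AE(ae_title="ARCHIVE_SCP")
ae.supported_contexts = AllStoragePresentationContexts
ae.start_server(("0.0.0.0", 11112), evt_handlers=[(evt.EVT_C_STORE, handle_store)])
```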

    The system is based on a client-server architecture.

    The OpenRad Cloud viewer requires a medical-grade monitor for diagnostic reading of images, favouring the use of full radiology workstations.

    The OpenRad Cloud Software provides several levels of functionality to the user:

    • Basic analysis tools used on a daily basis, such as 2D review, orthogonal multiplanar reconstruction (MPR), oblique MPR, curved MPR, slab MPR (AvgIP, MIP, MinIP), measurements, annotations, reporting, and distribution (see the sketch after this list).
    • Tools for in-depth analysis, such as segmentation, endoscopic view, color volume rendered slabs, grayscale volume rendered slabs, 3D volume review, path definition and boundary detection.
    • Specialist tools and workflow enhancements for specific clinical applications which provide targeted workflows, custom UI, measurement and visualisation, including Virtual Colonoscopy, Vessel Analysis, Calcium Scoring, PET/CT, CT Lung Analysis, CT Dental and DCE-MRI Breast and Prostate.
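    To make the MPR and slab-projection terms above concrete, here is a minimal NumPy sketch, not OpenRad Cloud's implementation, of how orthogonal MPR planes and slab MIP/MinIP/AvgIP projections can be extracted from a (z, y, x) CT volume:

```python
import numpy as np

def orthogonal_mpr(volume: np.ndarray, z: int, y: int, x: int):
    """Return the axial, coronal, and sagittal planes through voxel (z, y, x)."""
    return volume[z, :, :], volume[:, y, :], volume[:, :, x]

def slab_projection(volume: np.ndarray, z: int, thickness: int = 5, mode: str = "mip"):
    """Project a slab of slices centred on z: max (MIP), min (MinIP), or mean (AvgIP)."""
    lo = max(0, z - thickness // 2)
    hi = min(volume.shape[0], z + thickness // 2 + 1)
    slab = volume[lo:hi]
    reducer = {"mip": np.max, "minip": np.min, "avgip": np.mean}[mode]
    return reducer(slab, axis=0)

# Example on synthetic data:
vol = np.random.rand(64, 256, 256).astype(np.float32)
axial, coronal, sagittal = orthogonal_mpr(vol, 32, 128, 128)
mip = slab_projection(vol, 32, thickness=9, mode="mip")
```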
    AI/ML Overview

    It appears the provided text is a 510(k) Premarket Notification submission for a medical image management and processing system (OpenRad Cloud). While it describes the device's functionality, comparison to predicate devices, and adherence to various standards and regulations, it largely lacks the specific details required to answer your questions about acceptance criteria and the study proving the device meets those criteria, particularly regarding AI performance or human-in-the-loop studies.

    The document states: "The device performance as documented in the verification report affirms design specifications are met and with a profile similar to the predicate device by clinical experts using the same methodology and datasets as the ones used for in-house tests. The evaluation outcome was based on their clinical experience and verifying the outputs against the known results of the datasets and the proposed transfer functions and parameters when used in other similar devices." This suggests some form of performance testing with clinical experts, but the specifics are not provided in this document.

    Many of your questions pertain to the rigorous testing methodologies often required for AI/ML-enabled medical devices, such as multi-reader multi-case studies, detailed ground truth establishment, and specific AI performance metrics. This submission describes a Medical Image Management and Processing System (PACS/Viewer), not an AI diagnostic algorithm. Therefore, the type of performance data typically provided for an AI-CAD or AI-Dx device (like sensitivity, specificity, FROC curves, etc.) is not present because it's not the primary function being regulated here.
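    For context only, the standalone metrics named above are simple ratios computed over a labelled test set. The following is an illustrative sketch; the confusion-matrix counts are hypothetical, and no such data appears in this submission:

```python
def sensitivity_specificity(tp: int, fp: int, fn: int, tn: int) -> tuple[float, float]:
    """Standalone classification metrics of the kind AI-CAD submissions report."""
    sensitivity = tp / (tp + fn)  # true positive rate (recall)
    specificity = tn / (tn + fp)  # true negative rate
    return sensitivity, specificity

# Hypothetical counts, for illustration only.
sens, spec = sensitivity_specificity(tp=90, fp=20, fn=10, tn=180)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")  # sensitivity=0.90, specificity=0.90
```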

    Here's an attempt to answer your questions based only on the provided text, highlighting where information is absent:


    1. A table of acceptance criteria and the reported device performance

    The document does not provide a specific table of quantitative acceptance criteria for performance metrics (e.g., accuracy, sensitivity, specificity) with reported device performance values. The performance claims are qualitative, stating that "The device performance as documented in the verification report affirms design specifications are met" and that it has a "profile similar to the predicate device."

    2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

    This information is not provided in the document. It generally mentions "datasets" and "in-house tests" but gives no specifics on sample size or data provenance.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)

    The document mentions "clinical experts" but does not specify the number of experts or their qualifications. It also states their evaluation was based on "clinical experience" and "verifying the outputs against the known results of the datasets." This implies the experts used an existing ground truth rather than establishing it from scratch for this particular test set.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set

    This information is not provided in the document.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, the effect size of how much human readers improve with AI vs. without AI assistance

    Given this is a PACS/Viewer system and not explicitly an AI diagnostic algorithm or AI-assisted reading tool, a formal MRMC comparative effectiveness study to measure human reader improvement with AI assistance is not described and would not typically be a requirement for this type of device. The document focuses on demonstrating that the new device (OpenRad Cloud) has "similar indications for use and technological characteristics as the 3DNET SUITE and Ambra PACS including ProViewer" and "does not raise any new questions of safety or effectiveness."

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    This information is not explicitly provided. Since the device is a PACS/Viewer, its "performance" is more related to its ability to display, modify, and manage images as intended, rather than to provide a diagnostic output from an algorithm in a standalone manner. The performance assessment mentioned uses clinical experts to verify outputs.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    The document states that clinical experts verified "the outputs against the known results of the datasets." This implies that the "datasets" already had "known results," which serve as the ground truth. However, the type of ground truth (e.g., derived from expert consensus, pathology, follow-up, etc.) is not specified.

    8. The sample size for the training set

    The document describes the device as the "next version of the primary predicate device" (the Biotronics3D 3Dnet product) and states: "The OpenRad Cloud Software product is based on the Biotronics3D 3Dnet product, developed and marketed by Biotronics3D." This indicates that it is an evolution of an existing software product, not necessarily a new AI model that undergoes a separate training phase. Therefore, information about a "training set" in the context of machine learning model development is not applicable and is not provided in this submission.

    9. How the ground truth for the training set was established

    As explained above, the concept of a "training set" for a machine learning model isn't directly applicable to the description of this device as a PACS/Viewer system. Therefore, this information is not provided.


    Summary of Device Performance Study (Implied by the Text):

    The submission indicates that the OpenRad Cloud Software's performance was evaluated through:

    • Unit and System Integration Testing: Following established company protocols for design specifications and requirements, subsequent risk management analysis, and final design review.
    • Verification and Validation Activities: Designated individuals performed these activities, and the results "demonstrated that the predetermined acceptance criteria were met."
    • Clinical Expert Review: Clinical experts evaluated the device's performance, comparing its output "against the known results of the datasets" and assessing its "profile similar to the predicate device." Their evaluation was based on their "clinical experience."

    However, the specific quantitative details of these tests, such as sample sizes, explicit acceptance criteria values, a detailed breakdown of methodology, or the credentials of the experts, are not included in this 510(k) summary. The submission's focus is on demonstrating substantial equivalence to predicate devices, particularly given its function as a medical image management and processing system, rather than a novel AI diagnostic tool.
