
510(k) Data Aggregation

    K Number: K200546
    Device Name: ZeeroMED View
    Manufacturer: O3 Enterprise SRL
    Date Cleared: 2020-05-05 (63 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices: MedDream (predicate)

    Intended Use

    ZeeroMED View software is intended for use as a diagnostic and analysis tool for diagnostic images for hospitals, imaging centers, radiologists, reading practices and any user who requires and is granted access to patient image, demographic and report information. ZeeroMED View displays and manages diagnostic quality DICOM images. ZeeroMED View is not intended for diagnostic use with mammography images. Usage for mammography is for reference and referral only. ZeeroMED View is not intended for diagnostic use on mobile devices.

    Device Description

    The ZeeroMED View Software, or ZeeroMED View, is a web-based DICOM medical image viewer that allows downloading, reviewing, manipulating, visualizing, and printing multi-modality medical image data in DICOM format from a client machine. ZeeroMED View is a server-based solution that connects to any PACS and displays DICOM images within the hospital, securely from remote locations, or as an integrated part of an EHR or portal. ZeeroMED View enables health professionals to access, manipulate, and measure DICOM images, and to collaborate in real time over full-quality medical images using any web browser without installing client software.
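
    As an illustration only (not part of the submission), the core operation such a viewer performs, reading a DICOM object and preparing its pixel data for on-screen display, can be sketched with the open-source pydicom library; the file name and the window/level values below are hypothetical and not taken from the device.

        # Minimal sketch of DICOM loading and display preparation (illustrative only).
        # Assumes the open-source pydicom and numpy packages; the file path and the
        # window/level values are hypothetical, not taken from the device.
        import numpy as np
        import pydicom

        ds = pydicom.dcmread("example_ct_slice.dcm")
        print(ds.PatientID, ds.Modality, ds.StudyDate)  # metadata a viewer would surface

        # Convert stored pixel values to modality units using the DICOM rescale tags.
        pixels = ds.pixel_array.astype(np.float32)
        slope = float(getattr(ds, "RescaleSlope", 1.0))
        intercept = float(getattr(ds, "RescaleIntercept", 0.0))
        values = pixels * slope + intercept

        # Apply a simple window/level to map values into a displayable 0..1 range.
        center, width = 40.0, 400.0
        low, high = center - width / 2, center + width / 2
        display = np.clip((values - low) / (high - low), 0.0, 1.0)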

    AI/ML Overview

    Here's an analysis of the provided text regarding the acceptance criteria and the study demonstrating that the device meets them:

    The provided document (K200546) is a 510(k) summary for the ZeeroMED View software, establishing substantial equivalence to a predicate device, MedDream.

    Crucially, this document does not describe a study that proves the device meets specific acceptance criteria for diagnostic performance (e.g., sensitivity, specificity, accuracy). Instead, it demonstrates substantial equivalence to a legally marketed predicate device based on technical characteristics and functionality.

    Therefore, most of the requested information regarding "acceptance criteria" for diagnostic performance and a "study that proves the device meets the acceptance criteria" (in the sense of a clinical diagnostic performance study) is not present in the provided text.

    The "acceptance criteria" here are implicitly tied to the demonstration of substantial equivalence, meaning the device must perform similarly and be as safe and effective as the predicate. The "study" proving this is primarily the non-clinical product evaluation, including software verification and validation, and performance testing for measurement accuracy, rather than a clinical trial assessing diagnostic performance against a ground truth.

    Here's an explanation based on the available information:


    1. A table of acceptance criteria and the reported device performance

    The document does not specify quantitative diagnostic performance acceptance criteria (e.g., sensitivity, specificity thresholds) or report such performance metrics. The "performance" being assessed and demonstrated is the similarity in technical characteristics and functionality compared to the predicate device.

    Acceptance Criteria (Implicit for Substantial Equivalence) and Reported Device Performance (Demonstrated Similarity to Predicate):

    • Safety and Effectiveness: No new questions regarding safety or effectiveness compared to the predicate.
      Reported performance: "There are no differences between the devices that affect the usage, safety and effectiveness, thus no new question is raised regarding the safety and effectiveness."

    • Measurement Accuracy: Ability to accurately perform various distance and area measurements.
      Reported performance: "Performance Testing (Measurement Accuracy) was conducted on the ZeeroMED View system to determine measurement accuracy when performing the various distance and area measurements." (Specific results are not provided in this summary, but presumably demonstrated sufficient accuracy for the intended use.)

    • Software Reliability/Robustness: Software functions as intended with a "moderate" level of concern.
      Reported performance: "Software verification and validation testing were conducted on the ZeeroMED View system... Documentation includes level of concern [moderate], software requirements and specifications, design architecture, risk analysis and software validation and verification."

    • Functional Equivalence: Possesses similar features and functionality to the predicate.
      Reported performance: A detailed "Feature Comparison" table (on page 5) shows near-identical functionality: DICOM image loading/visualization, patient study search, user authentication, image display operations (flip, rotate, zoom, scroll, layout, PET fusion, volumetric rendering), measurement functions (line, angle, polyline, area), annotations, report generation, etc.

    • Technical Equivalence: Basic technical features are the same as the predicate.
      Reported performance: "The basic and main technical features of the subject device are the same as the predicated device."

    • Intended Use Equivalence: Shares the same intended use as the predicate (with specific contraindications/limitations).
      Reported performance: Both devices are "intended for use as a diagnostic and analysis tool for diagnostic images..." with specific exclusions for mammography and mobile devices. The comparison table on page 5 details this.

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    The document does not describe a clinical test set with a "sample size" in the context of diagnostic performance evaluation. The "test set" for the software verification and validation would refer to the internal software testing data, not a patient image dataset for diagnostic performance assessment. No information on data provenance (country of origin, retrospective/prospective) is provided.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    Not applicable, as no external "test set" requiring expert-established ground truth for diagnostic performance is described in this 510(k) summary.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    Not applicable, as no clinical test set requiring adjudication for ground truth is described.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI assistance versus without

    No MRMC study was done or described. This device is a PACS viewer, not an AI-assisted diagnostic tool in the sense of a CADe/CADx system that would typically undergo MRMC studies to assess AI's impact on human reader performance.

    6. If standalone (i.e., algorithm-only, without human-in-the-loop) performance testing was done

    This is not an AI diagnostic algorithm; it's a medical image viewer. Standalone performance as commonly understood for AI algorithms is not relevant to this device's regulatory pathway as presented. The "standalone performance" here relates to its software functionality and measurement accuracy as a display and analysis tool, which was tested during software V&V.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    For the software verification and validation, the "ground truth" would be the expected performance of the software functions (e.g., a measurement tool should calculate distances correctly based on known image properties, display functions should work as per specifications). This is established through internal engineering testing and validation against defined software requirements. It's not a clinical ground truth like pathology or expert consensus on disease presence.
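
    As a concrete, purely illustrative example of that kind of engineering ground truth (not the manufacturer's actual test), a distance measurement on a DICOM image reduces to pixel coordinates scaled by the PixelSpacing attribute, and its correctness can be checked against a known physical length; the function name and values below are hypothetical.

        import math

        def measure_distance_mm(p1, p2, pixel_spacing):
            """Physical distance between two pixel coordinates (row, col) in mm.

            pixel_spacing is the DICOM PixelSpacing value: (row spacing, column
            spacing) in millimetres per pixel.
            """
            dr = (p2[0] - p1[0]) * pixel_spacing[0]
            dc = (p2[1] - p1[1]) * pixel_spacing[1]
            return math.hypot(dr, dc)

        # Synthetic check with a known answer: two points 30 pixels apart along a
        # row at 0.5 mm/pixel column spacing should measure 15.0 mm.
        assert abs(measure_distance_mm((10, 20), (10, 50), (0.5, 0.5)) - 15.0) < 1e-9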

    8. The sample size for the training set

    Not applicable. This device is not an AI/machine learning product that requires a "training set" of medical images in the common sense.

    9. How the ground truth for the training set was established

    Not applicable, as there is no "training set" described for this non-AI device.


    Summary of what the document does convey regarding validation:

    The validation for ZeeroMED View primarily consists of:

    • Software Verification and Validation: This assesses the software's functionality, adherence to specifications, and reliability according to FDA guidance (specifically, for a "moderate" level of concern). This includes risk analysis.
    • Performance Testing (Measurement Accuracy): This specific non-clinical test confirms that the device's measurement tools (distance, area) provide accurate results; a toy sketch of what such a check might look like follows this list.
    • Comparison to Predicate Device: The core of the 510(k) submission relies on demonstrating that ZeeroMED View shares the same intended use, technical characteristics, and functionality as a legally marketed predicate device (MedDream), and that any differences do not raise new questions of safety or effectiveness. This comparison serves as the "proof" for substantial equivalence.
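
    For the area measurements mentioned in the second bullet, a purely illustrative sketch of the analogous check (again assuming pixel coordinates scaled by the DICOM PixelSpacing value; the function and numbers are hypothetical, not drawn from the actual test protocol):

        def measure_area_mm2(points, pixel_spacing):
            """Area of a polygon drawn in pixel coordinates (row, col), in mm^2.

            Pixel coordinates are scaled to millimetres with the DICOM PixelSpacing
            value (row spacing, column spacing) before applying the shoelace formula.
            """
            pts = [(r * pixel_spacing[0], c * pixel_spacing[1]) for r, c in points]
            area = 0.0
            for i in range(len(pts)):
                x1, y1 = pts[i]
                x2, y2 = pts[(i + 1) % len(pts)]
                area += x1 * y2 - x2 * y1
            return abs(area) / 2.0

        # A 20 x 40 pixel rectangle at 0.5 mm/pixel in both directions spans
        # 10 mm x 20 mm, so the measured area should be 200 mm^2.
        rectangle = [(0, 0), (0, 40), (20, 40), (20, 0)]
        assert abs(measure_area_mm2(rectangle, (0.5, 0.5)) - 200.0) < 1e-9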