
510(k) Data Aggregation

    K Number: K190866
    Manufacturer:
    Date Cleared: 2019-04-30 (27 days)
    Product Code:
    Regulation Number: 892.2050
    Device Name: XmaruView V1 (Xmaru Chiroview, Xmaru Podview)

    Intended Use

    XmaruView V1 (Xmaru Chiroview or Xmaru Podview) software carries out the image processing and administration of medical X-ray data, including adjustment of window leveling, rotation, zoom, and measurements. XmaruView V1 (Xmaru Chiroview or Xmaru Podview) is not approved for mammography and is intended to be used by qualified medical personnel only. XmaruView V1 (Xmaru Chiroview or Xmaru Podview) complies with DICOM standards to assure optimum communication between network systems.

    Device Description

    XmaruView V1 is a software program designed to provide image acquisition, processing, and operational management functions for digital radiography. XmaruView V1 connects with flat-panel detectors and a generator to acquire digital images. The software also manages information on patients, tests, and images through an internal database. It supports DICOM, which allows compatibility with other radiography equipment and network programs. XmaruView V1 provides a streamlined process across multiple workflows, optimizing the hospital environment for digital radiography.

    AI/ML Overview

    The provided document is a 510(k) summary for the XmaruView V1 software, including its variants Xmaru Chiroview and Xmaru Podview. This document focuses on demonstrating substantial equivalence to a predicate device and details the software's functionalities and validation rather than presenting a performance study with specific acceptance criteria and detailed results from a clinical trial or large-scale evaluation.

    Therefore, many of the requested details regarding acceptance criteria, study performance, sample sizes, expert involvement for ground truth, and MRMC studies are not explicitly stated or applicable in the context of this 510(k) submission, which is primarily a declaration of equivalence and software validation against internal testing.

    However, based on the information provided, here's what can be extracted and inferred:

    1. A table of acceptance criteria and the reported device performance

    The document states that the software validation test was "designed to evaluate all input functions, output functions, and actions performed by XmaruView V1." It also mentions that "the risk analysis and individual performance results were within the predetermined acceptance criteria." However, the specific acceptance criteria (e.g., quantitative metrics like accuracy, sensitivity, specificity, or specific error rates) and the reported device performance against these criteria are not detailed in this 510(k) summary. These would typically be found in the manufacturer's internal validation reports, which are summarized but not fully presented here.

    The main functional acceptance criteria implied are:

    • Ability to perform image acquisition and processing (window leveling, rotation, zoom, measurements).
    • Compliance with DICOM standards for communication.
    • Reliable management of patient, test, and image information.
    • Proper functioning of the "Grid ON" feature (for the upgraded version).
    Acceptance criteria (implied from functions and safety) and reported device performance:

    Criterion: Image acquisition and processing functions work as intended (window leveling, rotation, zoom, measurements, contrast, invert, flip, ROI).
    Reported performance: "passed all testing acceptance criteria." "The software validation test was designed to evaluate all input functions, output functions, and actions performed by XmaruView V1."

    Criterion: Compliance with DICOM standards (Worklist, Store, Print).
    Reported performance: "complying with DICOM standards to assure optimum communications between network systems." "Supports DICOM 3.0 and image transmission to the PACS server, print and Worklist jobs."

    Criterion: Management of patient, test, and image information.
    Reported performance: "manages information on patients, tests and images through an internal database." "Image management functions: test creation, modify and delete of information, move and delete of image, and image storage management."

    Criterion: "Grid ON" function performs as designed to enhance contrast and reduce scatter effects.
    Reported performance: "XmaruView V1 SW is updated with Grid ON function to enhance contrast for image, Grid On function is related to Virtual grid where physical grid is not used." Performance details are not quantified.

    Criterion: Software safety and risk mitigation.
    Reported performance: "The SW validation and risk analysis based on FMEA were conducted. The risks identified have been mitigated and any residual risks were evaluated and accepted." Compliance with IEC 62304 and ISO 14971 is cited.
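    To illustrate the kind of deterministic behavior these functional criteria describe, consider window leveling: it linearly maps a band of raw detector values into the display range. The sketch below is hypothetical and is not the vendor's implementation; it assumes a simplified linear window/level transform over a NumPy array.

    ```python
    import numpy as np

    def apply_window_level(image: np.ndarray, center: float, width: float) -> np.ndarray:
        """Linearly map pixel values in [center - width/2, center + width/2]
        to the 8-bit display range [0, 255], clipping values outside the window.
        (Illustrative sketch; a simplified form of the DICOM linear VOI transform.)"""
        lo = center - width / 2.0
        hi = center + width / 2.0
        out = (image.astype(np.float64) - lo) / (hi - lo)  # normalize to [0, 1]
        return (np.clip(out, 0.0, 1.0) * 255.0).round().astype(np.uint8)

    # Example: a 12-bit image windowed around center=2048 with width=1024
    img = np.array([[1024, 2048, 3072]], dtype=np.uint16)
    print(apply_window_level(img, center=2048, width=1024))  # [[  0 128 255]]
    ```

    A functional acceptance test for this operation would simply compare the output against such precomputed expected values.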

    2. Sample sizes used for the test set and the data provenance

    The document does not specify a "test set" in the sense of a distinct set of clinical images used for a performance study. The validation described is primarily a software validation and risk analysis (IEC 62304, ISO 14971), which involves testing the software's functionality and safety internally. This is not a clinical performance study using patient data with a defined sample size for generalization.

    • Sample Size for Test Set: Not specified. The validation described is internal software testing, not a clinical study on a dataset of patient images.
    • Data Provenance: Not specified. Given it's internal software validation, it's likely using test data generated by the manufacturer or potentially anonymized internal clinical data, but this is not detailed. It is not specified as retrospective or prospective clinical data.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    This information is not applicable and not provided as the submission describes software functional and safety validation, not a diagnostic performance study requiring expert-established ground truth on a clinical image set.

    4. Adjudication method for the test set

    This information is not applicable and not provided as there is no described clinical "test set" requiring adjudication.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and the effect size of how much human readers improve with AI vs without AI assistance

    An MRMC study was not done and is not described in this 510(k) summary. The device is image processing software (a PACS component with additional features), not an AI-assisted diagnostic tool that helps human readers. Its primary function is image display, manipulation (zoom, rotation, etc.), and management, including a "Grid ON" feature for image enhancement. It does not provide diagnostic insights or AI assistance to human readers for interpretation.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    The device is image processing software; it doesn't perform diagnostic functions as a standalone algorithm. Its "performance" is in its ability to correctly acquire, process, and display images and manage data. The validation described is focused on the correct functioning of the software itself ("evaluate all input functions, output functions, and actions performed by XmaruView V1").

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    Not applicable in the context of a diagnostic ground truth, as this is software validation. The "ground truth" for the software validation would be based on predefined specifications for how each function should operate and the expected output for given inputs. For example, applying a "rotate 90 degrees" function would be validated by checking if the image is indeed rotated by 90 degrees.
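    The rotate-by-90-degrees example can be expressed as a functional test against a predefined specification. The sketch below is illustrative only (the function names are not from the submission) and assumes the image is represented as a NumPy array.

    ```python
    import numpy as np

    def rotate_90_cw(image: np.ndarray) -> np.ndarray:
        """Rotate an image 90 degrees clockwise (hypothetical processing function)."""
        return np.rot90(image, k=-1)

    def test_rotate_90_cw():
        # Specification ("ground truth" for software validation): the pixel at
        # (row r, col c) of an H x W image moves to (row c, col H-1-r).
        img = np.arange(6).reshape(2, 3)   # 2 x 3 test image
        out = rotate_90_cw(img)
        assert out.shape == (3, 2)         # dimensions swap
        h = img.shape[0]
        for r in range(img.shape[0]):
            for c in range(img.shape[1]):
                assert out[c, h - 1 - r] == img[r, c]

    test_rotate_90_cw()
    print("rotate_90_cw meets its predefined acceptance criterion")
    ```

    Each function under validation (zoom, flip, invert, and so on) gets an analogous expected-output check, which is what "individual performance results were within the predetermined acceptance criteria" refers to here.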

    8. The sample size for the training set

    Not applicable and not specified. This is not an AI/machine learning device that requires a training set. The "Grid ON" function might involve an algorithm, but it's not described as a deep learning model requiring a large training dataset in the context of this submission.

    9. How the ground truth for the training set was established

    Not applicable and not specified, as this is not an AI/machine learning device with a training set.
