Xmaru View V1 (Xmaru Chiroview or Xmaru Podview) software carries out image processing and administration of medical X-ray data, including adjustment of window leveling, rotation, zoom, and measurements. Xmaru View V1 (Xmaru Chiroview or Xmaru Podview) is not approved for mammography and is meant to be used by qualified medical personnel only. Xmaru View V1 (Xmaru Chiroview or Xmaru Podview) complies with DICOM standards to ensure optimum communication between network systems.
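Window leveling, one of the processing functions listed above, linearly maps a sub-range of raw pixel values (the "window") to the display range. The sketch below is a simplified illustration, not code from the submission; the DICOM VOI LUT definition (PS3.14) adds half-pixel offsets that are omitted here:

```python
def window_level(pixel, center, width, out_max=255):
    """Map a raw pixel value to a display value via window center/width.

    Simplified linear form of the DICOM VOI LUT function: values below the
    window clamp to 0, values above clamp to out_max, and values inside the
    window are scaled linearly.
    """
    lo = center - width / 2.0
    hi = center + width / 2.0
    if pixel <= lo:
        return 0
    if pixel >= hi:
        return out_max
    return round((pixel - lo) / (hi - lo) * out_max)

# A narrow window raises contrast within the window at the cost of
# clipping everything outside it.
print(window_level(40, 40, 400))    # pixel at the window center
print(window_level(-500, 40, 400))  # below the window: clipped to black
print(window_level(1000, 40, 400))  # above the window: clipped to white
```

Viewers typically expose `center` and `width` as the interactive "level" and "window" controls the operator drags to re-map contrast without altering the stored pixel data.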
Xmaru PACS receives, stores, searches, and displays diagnostic image data from imaging modalities in a DICOM-compliant manner. Xmaru PACS is capable of communicating with electronic medical records systems, hospital information systems, and radiology information systems via the DICOM standard.
Xmaru View V1 (Xmaru Chiroview or Xmaru Podview) and Xmaru PACS can be packaged together or offered as standalone imaging solutions installed on a PC for trained medical professionals.
XmaruView V1 (Xmaru Chiroview or Xmaru Podview) is a software program designed to provide image acquisition, processing, and operational management functions for digital radiography.
XmaruView V1 (Xmaru Chiroview or Xmaru Podview) controls a flat-panel detector and an X-ray generator to acquire digital images. The software also manages patient information and captures and stores diagnostic images in an internal database. It supports DICOM, which allows compatibility with other radiography equipment and network programs. XmaruView V1 (Xmaru Chiroview or Xmaru Podview) provides a streamlined and optimized process for multiple digital radiography workflows in a hospital environment.
Xmaru PACS is in charge of receiving images from multiple modalities and storing the data on the server. The software manages, searches, and views the stored images on the server. Xmaru PACS is capable of communicating with electronic medical records systems, hospital information systems, and radiology information systems via the DICOM standard.
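The DICOM interoperability described above rests on a binary stream of tagged data elements: each element carries a (group, element) tag, a value representation (VR), a length, and an even-length value. As an illustrative sketch (not code from the submission), the following stdlib-only function encodes one such element in the Explicit VR Little Endian transfer syntax:

```python
import struct

def encode_element(group, element, vr, value):
    """Encode one DICOM data element (Explicit VR Little Endian, short form).

    Sketch only: covers text-like VRs with a 2-byte length field (e.g. PN,
    LO, CS, DA); long-form VRs such as OB/OW use a different header layout.
    """
    data = value.encode("ascii")
    if len(data) % 2:        # DICOM values must have even length;
        data += b" "         # text VRs are padded with a trailing space
    return (struct.pack("<HH", group, element)   # tag: group, element (LE)
            + vr.encode("ascii")                 # 2-character VR code
            + struct.pack("<H", len(data))       # 2-byte value length
            + data)

# Patient's Name attribute, tag (0010,0010), VR "PN"
elem = encode_element(0x0010, 0x0010, "PN", "DOE^JOHN")
print(elem.hex())  # tag + "PN" + length + value, all little-endian
```

Real implementations rely on a DICOM toolkit rather than hand-rolled encoding, but the element layout above is what a PACS parses when it receives, stores, and indexes an image object.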
The provided text describes a 510(k) premarket notification for medical image processing software (Xmaru View V1, Xmaru Chiroview, Xmaru Podview, and Xmaru PACS). However, it does not contain the detailed information required to answer many of the specific questions about acceptance criteria and the study that proves the device meets them, especially regarding AI/algorithm performance, ground truth establishment, or multi-reader studies.
The document primarily focuses on establishing substantial equivalence to a predicate device (VATECH Co., Ltd. XmaruView V1, K102078) based on functional similarities and software validation. It mentions general software validation, risk analysis, and functional tests but lacks specific performance metrics or clinical study details.
Based only on the provided text, here's what can be extracted and what cannot:
1. A table of acceptance criteria and the reported device performance
The document states: "The validation testing verified and validated the risk analysis and individual performance results were within the predetermined acceptance criteria." However, it does not provide a table of specific acceptance criteria or the reported numeric performance of the device against those criteria. It implies that performance was assessed against functional requirements and risk mitigation, but no quantifiable metrics are given.
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
The document mentions "The complete system configuration has been assessed and tested by the manufacturer and passed all in-house testing criteria." This refers to nonclinical testing and software validation. No information about a "test set" in the context of clinical images, sample size, data provenance (country of origin), or retrospective/prospective nature is provided.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)
Not applicable/Not provided. The document describes software for image processing and PACS functionalities, not an AI or diagnostic algorithm that would require expert-established ground truth on a clinical test set. The validation described is for software functionality and risk, not diagnostic accuracy.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
Not applicable/Not provided. As there's no mention of a clinical test set requiring expert interpretation for ground truth, there's no adjudication method described.
5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance
No. The document does not describe an MRMC comparative effectiveness study or any study comparing human reader performance with or without AI assistance. The device is image processing and PACS software, not an AI-driven diagnostic aid.
6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
Not explicitly described as a "standalone performance" study in the typical sense of diagnostic metrics. The document mentions "The software validation test was designed to evaluate all input functions, output functions, and actions performed by XmaruView V1... and Xmaru PACS." This implies internal functional testing of the software itself. However, it's not a study measuring diagnostic accuracy or classification performance in a clinical context.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
Not applicable in the clinical diagnostic sense. The "ground truth" for this software's validation would be its adherence to specified functional requirements, data integrity (e.g., DICOM compliance), and risk mitigation. There's no mention of clinical ground truth like pathology or expert consensus on disease presence.
8. The sample size for the training set
Not applicable. This document describes software validation for image processing and PACS systems, not a machine learning model that would have a "training set."
9. How the ground truth for the training set was established
Not applicable. As there's no training set, there's no ground truth established for it.
§ 892.2050 Medical image management and processing system.
(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).