
510(k) Data Aggregation

    K Number
    K171977
    Date Cleared
    2018-08-02

    (398 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    miPlatform medical imaging suite v3.0 (miPlatform v3.0) is an upgrade of miPlatform medical imaging suite v2.0, previously cleared under K131424. miPlatform v3.0 is an internet-based image management system intended to be used by trained professionals, including but not limited to physicians, nurses, and medical technicians. The system is a software package that is used with general-purpose computing hardware to acquire, store, distribute, process, and display images and associated patient data. The software supports and performs reviewing, communication, and storage from the following modalities through the DICOM 3.0 standard: CT, MR, NM, US, XA, PET, DX, CR/DR, RF, RT, MG, SC, VL, ES, OP, XC, PT, OT, as well as hospital/radiology information systems and any other information systems that support DICOM 3.0 standards. Non-radiology modalities are not for diagnostic use. For radiology modalities, only FDA-cleared monitors shall be used to review images for diagnostic use.
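    To make the modality restriction above concrete, here is a minimal illustrative sketch (not part of the submission; the function name and the split into diagnostic vs. non-diagnostic groups are assumptions for illustration) of how a viewer might gate diagnostic review by DICOM modality code:

    ```python
    # Modality codes taken from the intended-use statement above.
    SUPPORTED_MODALITIES = {
        "CT", "MR", "NM", "US", "XA", "PET", "DX", "CR", "DR", "RF",
        "RT", "MG", "SC", "VL", "ES", "OP", "XC", "PT", "OT",
    }

    # Assumed non-radiology modalities excluded from diagnostic review
    # per the intended use (this grouping is illustrative only).
    NON_DIAGNOSTIC = {"SC", "VL", "ES", "OP", "XC", "OT"}

    def can_review(modality: str, diagnostic: bool) -> bool:
        """Return True if a study of this DICOM modality may be opened
        for the requested kind of review (diagnostic or not)."""
        m = modality.upper()
        if m not in SUPPORTED_MODALITIES:
            return False
        if diagnostic and m in NON_DIAGNOSTIC:
            return False
        return True
    ```

    Under this sketch, a CT study passes for diagnostic review, while a visible-light (VL) study would open only in non-diagnostic mode.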

    miPlatform ZFP Viewer is offered as an extension application to the miPlatform medical imaging suite system. The software uses HTML5, which allows any browser-enabled device to run the application and thus requires no installation (zero footprint). The user is able to access patient images and study reports from a mobile device, such as an iPad 3, as well as a personal computer running Microsoft Windows, anywhere over wireless, 3G, or 4G networks. miPlatform ZFP Viewer has a simple GUI for viewing and includes tools such as zoom, pan, windowing, basic measurement, and 3D visualization functions, including volume rendering and multi-planar reconstruction. Only FDA-cleared monitors shall be used to review images for diagnostic use.
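    The windowing tool mentioned above maps stored pixel values to display grayscale using a window center and width. A minimal sketch of the standard DICOM linear VOI LUT mapping (illustrative only, not the vendor's implementation; the function name is hypothetical):

    ```python
    def apply_window(pixel: float, center: float, width: float, bits: int = 8) -> int:
        """Map a stored pixel value to a display value using the DICOM
        linear windowing (VOI LUT) formula. `bits` sets display depth."""
        y_max = (1 << bits) - 1
        lower = center - 0.5 - (width - 1) / 2
        upper = center - 0.5 + (width - 1) / 2
        if pixel <= lower:          # below the window: clamp to black
            return 0
        if pixel > upper:           # above the window: clamp to white
            return y_max
        # Linear ramp inside the window.
        return round(((pixel - (center - 0.5)) / (width - 1) + 0.5) * y_max)
    ```

    For example, with a typical soft-tissue CT window (center 40, width 400), values far below the window clamp to 0 and values far above clamp to 255.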

    miPlatform ZFP Viewer provides wireless and portable access to medical images, in addition to standard intranet or internet access. This device is not intended to replace full workstations and should be used only when there is no access to a workstation. When used on a mobile device, the miPlatform ZFP Viewer is not for diagnostic use.

    For primary interpretation and review of mammography images, only use display hardware that is specifically designed for and cleared by the FDA for mammography. MIP/MPR tools are not supported for mammography images for diagnostic use.

    Device Description

    miPlatform v3.0 and miPlatform ZFP Viewer form the extension application software of the miPlatform medical imaging information system. Users can work from mobile settings through the software, which can be installed on PCs, smartphones, and other mobile terminals. The software supports:

    • Understanding and analyzing patient information and medical images in real time.
    • Processing, diagnosing, and sharing images in real time. Note that when used on a mobile device, the miPlatform ZFP Viewer is not for diagnostic use.
    • Three-dimensional image viewing and processing.
    • Image analysis and real-time data synchronization and exchange during real-time conferences.
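    The multi-planar reconstruction capability listed among the 3D tools resamples orthogonal planes from a stack of 2D slices. A toy sketch using nested lists (illustrative only; it assumes an isotropic volume and is not the vendor's implementation):

    ```python
    def mpr_slice(volume, axis: int, index: int):
        """Extract an orthogonal slice from a 3D volume represented as a
        list of 2D slices (each a list of rows) for multi-planar
        reconstruction. axis 0 = axial, 1 = coronal, 2 = sagittal."""
        if axis == 0:
            # Axial: return the original slice unchanged.
            return volume[index]
        if axis == 1:
            # Coronal: fix one row index across every slice.
            return [sl[index] for sl in volume]
        # Sagittal: fix one column index across every slice and row.
        return [[row[index] for row in sl] for sl in volume]
    ```

    For a 2×2×2 volume, the coronal and sagittal slices are simply the corresponding row or column gathered across the slice stack.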
    AI/ML Overview

    The provided text describes the miPlatform medical imaging suite v3.0 and miPlatform ZFP Viewer, an image management system intended for use by trained professionals to acquire, store, distribute, process, and display medical images. The submission aims to demonstrate substantial equivalence to predicate devices, namely miPlatform Medical Imaging Suite (K131424) and CARESTREAM Vue PACS v11.4 Vue Motion (K132824).

    The document details the device's indications for use, technological characteristics, and performance based on non-clinical testing. It explicitly states that performance comparison testing on retrospective images was conducted to demonstrate substantial equivalence. However, it does not provide specific acceptance criteria or detailed results in a quantitative manner. The document asserts that "all tests successfully passed" and "designated individuals performed all verification and validation activities and results demonstrated that the predetermined acceptance criteria were met. The system passed all testing criteria." This indicates a qualitative assessment against internal criteria rather than a formally presented analytical study with specific metrics.

    Here's an analysis based on the given text, addressing your questions:


    1. Table of Acceptance Criteria and Reported Device Performance:

      The document does not explicitly state quantitative acceptance criteria in the form of specific thresholds (e.g., sensitivity, specificity, accuracy percentages, or error margins). Instead, it broadly claims that the device "passed all testing criteria" and "all tests successfully passed" based on its software requirements specification, design verification, and validation documents.

      The "Performance" section generally states: "Support of the substantial equivalence of the miPlatform v3.0 and miPlatform ZFP Viewer device was provided as a result of software validation, which confirms all features of the miPlatform v3.0 and miPlatform ZFP Viewer device were compliant with the software requirements."

      Without specific metrics or thresholds, a direct table of acceptance criteria vs. reported performance cannot be constructed from the provided text. The document focuses on demonstrating feature parity and compliance with established standards (DICOM, HL7) and internal quality control processes.

    2. Sample Size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective):

      • Sample Size: The document states that "performance comparison testing on retrospective images" was conducted, but it does not specify the sample size (number of cases or images) used for this test set.
      • Data Provenance: The data used was retrospective. The country of origin of the data is not specified.
    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience):

      The document does not provide information on the number of experts used or their qualifications for establishing ground truth. The nature of the device (an image management and viewing system) suggests that "ground truth" might pertain to accurate display, processing, and measurement capabilities rather than diagnostic accuracy against a clinical reference standard.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

      The document does not specify any adjudication method for the test set.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of how much human readers improve with AI versus without AI assistance:

      • A multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly mentioned or detailed.
      • The device being reviewed (miPlatform medical imaging suite v3.0, miPlatform ZFP Viewer) is described as an image management and viewing system, not an AI-assisted diagnostic tool. Its functionality includes displaying, processing, and measuring images. Therefore, the concept of "human readers improve with AI vs without AI assistance" does not directly apply to the described function of this particular device. The study described is "performance comparison testing on retrospective images to help demonstrate that the proposed device is substantially equivalent to the predicate devices" in terms of its ability to perform its specified functions.
    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:

      The term "standalone" in the context of an algorithm's diagnostic performance is also not directly applicable here, as the device is an image management system designed for use by trained professionals, implying human-in-the-loop interaction. The non-clinical testing focused on software validation, functionality, and compliance, not on an algorithm's independent diagnostic interpretation.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

      The document does not explicitly state the type of ground truth used. Given the device's function (image management, display, basic processing, and measurement), the "ground truth" likely refers to the accuracy of image display, correct implementation of tools (zoom, pan, windowing, measurement), and data synchronization as per DICOM standards, rather than diagnostic outcomes from pathology or long-term clinical follow-up. This "ground truth" would be established through verification against known valid image properties and expected software behavior.

    8. The sample size for the training set:

      The document does not mention a training set or any machine learning/AI components that would require a distinct training set. The device is described as a software package for image management and viewing, not a predictive or diagnostic AI algorithm.

    9. How the ground truth for the training set was established:

      Since no training set is mentioned in the context of machine learning, this question is not applicable. The ground truth for the verification and validation of the software's functionality would have been established through a combination of engineering specifications, DICOM standards, and expected behavior documented in the software requirements.
