
510(k) Data Aggregation

    K Number
    K162285
    Date Cleared
    2017-01-27

    (165 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    EBM iDO Viewer 1.2.1 software is intended to display images from CT, MR, CR, US, XA and SC for the trained physician's diagnosis or referring purpose. EBM iDO Viewer 1.2.1 provides wireless and portable access to medical images. It is not intended to be used as, or to replace, a full diagnostic workstation or system and should be used only when there is no access to a workstation. This device is not to be used for mammography diagnosis.

    Device Description

    EBM iDO Viewer 1.2.1 is a software device that can be installed on the Apple iPad Pro. Through a wireless network, users can log in, query, and display images stored in their existing EBM PACS server. The device can be installed on platforms running iOS 5.0 or later, such as the iPad, but cannot be installed on platforms other than iOS 5.0 or later. CT, MR, US, XA, and SC images display at almost the same image quality on the iPad Pro when it is used for diagnostic purposes; however, for CR diagnosis, the manufacturer strongly suggests that users adopt the iPad Pro.
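
    The submission gives no implementation details, but the architecture described above (a mobile client that logs in to, queries, and displays studies from an existing PACS over the network) corresponds to standard DICOM query/retrieve. The sketch below is purely illustrative and is not EBM's code: it shows a minimal study-level C-FIND query using the open-source pynetdicom library, and the host name, port, and AE titles are hypothetical.

        # Illustrative only: a minimal DICOM C-FIND (study query), the kind of
        # operation a PACS viewer client performs. Host, port, and AE titles are
        # made up for this example.
        from pydicom.dataset import Dataset
        from pynetdicom import AE
        from pynetdicom.sop_class import StudyRootQueryRetrieveInformationModelFind

        ae = AE(ae_title="IDO_VIEWER")  # hypothetical client AE title
        ae.add_requested_context(StudyRootQueryRetrieveInformationModelFind)

        query = Dataset()
        query.QueryRetrieveLevel = "STUDY"
        query.PatientID = "12345"        # placeholder patient ID
        query.StudyDate = ""             # ask the PACS to return the study date
        query.ModalitiesInStudy = ""     # and the modalities in each study

        assoc = ae.associate("pacs.example.local", 104, ae_title="EBM_PACS")  # hypothetical server
        if assoc.is_established:
            responses = assoc.send_c_find(query, StudyRootQueryRetrieveInformationModelFind)
            for status, identifier in responses:
                # 0xFF00 / 0xFF01 are "pending" statuses, each carrying one match.
                if status and status.Status in (0xFF00, 0xFF01):
                    print(identifier.PatientID, identifier.StudyDate, identifier.ModalitiesInStudy)
            assoc.release()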

    AI/ML Overview

    The provided text describes a 510(k) submission for the EBM iDO Viewer 1.2.1, a software device for displaying medical images. However, it does not contain detailed acceptance criteria or a comprehensive study design that proves the device meets specific performance criteria in the way a clinical trial for an AI/CADe device would.

    The document discusses substantial equivalence to a predicate device and reports on non-clinical and "clinical" testing, but this is presented at a very high level and lacks the granular detail requested below.

    Therefore, I cannot populate all the requested fields with specific, quantifiable data from the provided text. I will, however, extract what information is available and highlight where information is missing.

    Here's an attempt to answer your request based on the provided text, with many fields marked as "Not provided in text":


    Acceptance Criteria and Device Performance (Based on provided text)

    1. A table of acceptance criteria and the reported device performance

    Acceptance Criteria (Stated or Implied) → Reported Device Performance

    Non-clinical:
    • Software Verification & Validation (per IEC 62304 workflows) → "All met and passed the acceptance criteria referred to medical image software quality request."
    • Display Performance (per AAPM Assessment of Display Performance for Medical Imaging Devices, 2005) → "All tests had passed successfully."

    Clinical/Usability (implied):
    • Acceptable image quality for diagnostic or remote reviewing use under intended conditions → "All three radiologists agree that the software and devices provide acceptable quality for diagnostic or remote reviewing use if the device is operated within the intended use."
    • Comfort with diagnostic mode → "They were comfortable with the diagnostic mode of EBM iDO Viewer 1.2.1 as a device."
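
    The AAPM display performance reference above (the TG18 report) is cited without any numeric results. As a rough illustration only, and not anything taken from the submission, the sketch below shows the kind of luminance-response check such an assessment involves: luminance measured at a series of gray levels is compared against target values derived from the DICOM Grayscale Standard Display Function (PS3.14), and the per-step contrast deviation is checked against a tolerance. All numbers below are placeholders.

        # Illustrative sketch only (not from the submission): a simplified
        # TG18-style luminance response check. GSDF target luminances would
        # normally be computed from DICOM PS3.14; here they are placeholders.
        import numpy as np

        def contrast_deviation(measured, target):
            """Per-step relative deviation of measured contrast (dL / L_mean)
            from the target contrast."""
            measured = np.asarray(measured, dtype=float)
            target = np.asarray(target, dtype=float)
            meas_c = np.diff(measured) / ((measured[:-1] + measured[1:]) / 2.0)
            tgt_c = np.diff(target) / ((target[:-1] + target[1:]) / 2.0)
            return (meas_c - tgt_c) / tgt_c

        # Hypothetical luminance measurements (cd/m^2) at 18 gray levels,
        # e.g. from TG18-LN test patterns, plus placeholder GSDF targets.
        measured = [0.8, 1.2, 1.8, 2.7, 4.0, 5.9, 8.6, 12.4, 17.8,
                    25.2, 35.4, 49.3, 68.0, 93.0, 126.0, 169.0, 224.0, 295.0]
        targets  = [0.8, 1.2, 1.8, 2.7, 4.0, 6.0, 8.8, 12.8, 18.4,
                    26.2, 36.9, 51.5, 71.2, 97.5, 132.3, 177.9, 237.0, 313.0]

        dev = contrast_deviation(measured, targets)
        print("max |contrast deviation|: %.1f%%" % (100 * np.abs(dev).max()))
        # Diagnostic displays are often expected to stay within roughly +/-10%.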

    2. Sample size used for the test set and the data provenance

    • Test Set Sample Size: "the same images" were read, but the number of images is Not provided in text.
    • Data Provenance:
      • Country of origin of the data: Not provided in text (implied to be internal or from a partner, but no specific location beyond the company's Taiwan address for general operations).
      • Retrospective or Prospective: Not provided in text.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • Number of experts: Three (3)
    • Qualifications of experts: "three board-certified radiologists"

    4. Adjudication method for the test set

    • Adjudication method: The text states, "All three radiologists agree that the software and devices provide acceptable quality..." This suggests consensus was sought or achieved regarding the qualitative assessment, but a formal adjudication method (e.g., 2+1, 3+1) for establishing a "ground truth" for quantitative performance metrics (like accuracy) is Not provided in text. This "clinical testing" appears to be more of a usability or qualitative assessment rather than a rigorous performance study.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

    • MRMC Study: No. The device is a viewer, not an AI/CADe system. The "clinical testing" involved radiologists reading images on the device, but it was not a comparative effectiveness study comparing human readers with AI assistance vs. without AI assistance.
    • Effect size: Not applicable / Not provided in text.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    • Standalone Performance: Not applicable for a viewer device. The device's primary function is to display images for human interpretation, not to provide an output/diagnosis independently.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    • The "clinical testing" described seems to revolve around the radiologists' subjective agreement on the "acceptable quality" and their "comfort" with the device for diagnostic/reviewing purposes, rather than a quantifiable diagnostic ground truth for specific pathologies. Therefore, the ground truth was effectively expert subjective assessment of image quality and usability, not a clinical ground truth for disease presence/absence.

    8. The sample size for the training set

    • This is a medical image viewer software, not an AI/ML algorithm that requires a training set in the typical sense for image interpretation. Therefore, a training set in this context is Not applicable / Not provided in text.

    9. How the ground truth for the training set was established

    • Not applicable (as per point 8).


    K Number
    K140399
    Device Name
    EBM IDO VIEWER
    Date Cleared
    2014-09-18

    (212 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    N/A
    Predicate For
    Intended Use

    EBM iDO Viewer software is intended to display images from CT/MR for the trained physician's diagnosis or referring purpose. EBM iDO Viewer provides wireless and portable access to medical images. It is not intended to be used as, or to replace, a full diagnostic workstation or system and should be used only when there is no access to a workstation. This device is not to be used for mammography.

    Device Description

    Not Found

    AI/ML Overview

    The provided text is a 510(k) premarket notification decision letter from the FDA to EBM Technologies, Inc. for their "EBM iDO Viewer" device.

    This document primarily states the FDA's "substantial equivalence" determination and provides regulatory information. It does NOT contain the detailed acceptance criteria or the study results that would prove the device meets those criteria.

    Therefore, I cannot fulfill your request for the following information based solely on the provided text:

    1. A table of acceptance criteria and the reported device performance: This information is not present. The letter only states that the device is "substantially equivalent" to a predicate, meaning its performance characteristics are similar enough, but it doesn't detail the metrics or the measured performance.
    2. Sample size used for the test set and the data provenance: Not mentioned.
    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not mentioned.
    4. Adjudication method for the test set: Not mentioned.
    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and the effect size: Not mentioned. The "Indications for Use" section explicitly states it's "not intended to be used as, or to replace, a full diagnostic workstation or system," which suggests it might not have undergone a rigorous comparative effectiveness study for diagnostic accuracy improvement, but rather for its display and accessibility capabilities.
    6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done: Not mentioned. Given it's a viewer, its primary function is display, not independent algorithmic interpretation.
    7. The type of ground truth used: Not mentioned.
    8. The sample size for the training set: Not mentioned.
    9. How the ground truth for the training set was established: Not mentioned.

    In summary, the provided document is a regulatory approval letter and does not contain the technical study details you are asking for. These details would typically be found in the manufacturer's 510(k) submission itself, which is a much larger technical dossier.
