510(k) Data Aggregation

    K Number: K070683
    Manufacturer:
    Date Cleared: 2007-04-12 (31 days)
    Product Code: LLZ
    Regulation Number: 892.2050
    Reference & Predicate Devices: N/A
    Predicate For: N/A

    Intended Use

    The Mirage system is indicated for the acquisition, processing, review and archiving of scintigraphy camera output data and related diagnostic images. It is capable of processing and displaying the acquired information in traditional formats, as well as in pseudo three-dimensional renderings, and in various forms of animated sequences, showing kinetic attributes of the imaged organs.

    Device Description

    Not Found

    AI/ML Overview

    The provided document is a 510(k) clearance letter for the "Mirage Cedars Option" device, which is a Picture Archiving and Communications System (PACS) for scintigraphy camera output data. It does not contain a detailed study report or specific acceptance criteria with reported device performance. Therefore, I cannot provide the requested information based solely on the text provided.

    However, based on the nature of a 510(k) submission for a PACS device, I can infer some general aspects and typical requirements. For a complete answer, one would need to access the full 510(k) summary or the original study data submitted to the FDA.

    From the given document, I can only extract these pieces of information:

    • Trade/Device Name: Mirage Cedars Option
    • Regulation Number: 21 CFR 892.2050
    • Regulation Name: Picture archiving and communications system
    • Regulatory Class: II
    • Product Code: LLZ
    • Indications for Use: The Mirage system is indicated for the acquisition, processing, review and archiving of scintigraphy camera output data and related diagnostic images. It is capable of processing and displaying the acquired information in traditional formats, as well as in pseudo three-dimensional renderings, and in various forms of animated sequences, showing kinetic attributes of the imaged organs.
    • 510(k) Number: K070683 (also K07106 is mentioned, but K070683 is the primary one in the letter header).

    Regarding the specific questions you asked, here's an explanation of why the information is not present in this document and what would typically be expected for a PACS system:

    1. A table of acceptance criteria and the reported device performance:

      • Information in document: Not present.
      • Typical for PACS: For a PACS, acceptance criteria usually relate to image display accuracy (e.g., adherence to the DICOM Part 14 Grayscale Standard Display Function), image retrieval speed, data integrity, security, and the functionality described in the indications for use; a minimal illustration of a display check appears after this list. Performance would be assessed against these functional and technical specifications rather than diagnostic accuracy, since the device is a display/archiving tool, not a diagnostic AI algorithm.
    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective):

      • Information in document: Not present.
      • Typical for PACS: A "test set" in the context of an algorithm's diagnostic performance isn't directly applicable here because the device is a PACS, not an AI diagnostic algorithm. Testing would involve system validation and verification with various types of scintigraphy images (DICOM compliance, correct display, retrieval) rather than a diagnostic evaluation.
    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience):

      • Information in document: Not present.
      • Typical for PACS: Ground truth in the diagnostic sense is not relevant for a PACS. Instead, validation would involve technical experts (e.g., medical physicists, IT professionals) to verify system functionality and adherence to standards, and potentially clinical users (e.g., nuclear medicine physicians) to confirm display quality and functional usability.
    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

      • Information in document: Not present.
      • Typical for PACS: Adjudication is not typically used for PACS validation.
    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI assistance versus without it:

      • Information in document: Not present.
      • Typical for PACS: An MRMC study is not relevant for a PACS, as it's not a diagnostic AI tool intended to assist readers with interpretation.
    6. If a standalone performance study (i.e., algorithm only, without a human in the loop) was done:

      • Information in document: Not present.
      • Typical for PACS: Not applicable. A PACS is a system, not a standalone algorithm.
    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

      • Information in document: Not present.
      • Typical for PACS: Not applicable in the diagnostic sense. The "ground truth" would be technical specifications and clinical expectations for image display and archiving.
    8. The sample size for the training set:

      • Information in document: Not present.
      • Typical for PACS: Not applicable. PACS systems are not typically "trained" in the machine learning sense. They are engineered software systems.
    9. How the ground truth for the training set was established:

      • Information in document: Not present.
      • Typical for PACS: Not applicable.
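
    To make the display-accuracy point in item 1 concrete, below is a minimal sketch of what a GSDF conformance check could look like. It is not drawn from the Mirage submission: the gsdf_luminance and check_display helpers, the photometer readings, and the flat 10% tolerance are illustrative assumptions, and a real acceptance protocol such as AAPM TG18 evaluates the contrast response between neighboring luminance steps rather than absolute luminance. The coefficients are the rational-polynomial constants tabulated in DICOM PS3.14.

    import math

    # GSDF coefficients tabulated in DICOM PS3.14: log10(luminance) is a ratio of
    # polynomials in ln(j), where j is the just-noticeable-difference (JND) index.
    A, B, C, D = -1.3011877, -2.5840191e-2, 8.0242636e-2, -1.0320229e-1
    E, F, G, H = 1.3646699e-1, 2.8745620e-2, -2.5468404e-2, -3.1978977e-3
    K, M = 1.2992634e-4, 1.3635334e-3

    def gsdf_luminance(j: float) -> float:
        """Target luminance in cd/m^2 for JND index j (1 <= j <= 1023)."""
        x = math.log(j)
        num = A + C * x + E * x**2 + G * x**3 + M * x**4
        den = 1 + B * x + D * x**2 + F * x**3 + H * x**4 + K * x**5
        return 10 ** (num / den)

    def check_display(measured, tolerance=0.10):
        """Compare measured (jnd_index, luminance) pairs against the GSDF target.

        The flat 10% tolerance is a placeholder; protocols such as AAPM TG18
        judge the contrast response between neighboring steps instead.
        """
        ok = True
        for j, lum in measured:
            target = gsdf_luminance(j)
            err = abs(lum - target) / target
            print(f"JND {j:4d}: measured {lum:9.2f}, target {target:9.2f} cd/m^2, error {err:.1%}")
            ok = ok and err <= tolerance
        return ok

    if __name__ == "__main__":
        # Hypothetical photometer readings at a few JND indices.
        readings = [(1, 0.05), (256, 15.0), (512, 128.0), (1023, 3900.0)]
        print("PASS" if check_display(readings) else "FAIL")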

    In summary, the provided document is a regulatory clearance letter and does not contain the detailed technical and clinical study results that would typically be found in a 510(k) summary or the full submission. For a PACS device like the "Mirage Cedars Option," the focus of testing and acceptance criteria would be on system functionality, compliance with standards (e.g., DICOM), data integrity, and display accuracy, rather than diagnostic performance metrics of an AI algorithm.
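
    The DICOM-compliance and data-integrity themes above can likewise be illustrated with a short sketch. It assumes the third-party pydicom package; the check_nm_header and verify_round_trip helpers, the required-field list, and the test_images folder are hypothetical stand-ins for whatever test plan the original submission actually used, and the "retrieved" file is assumed to have been fetched back from the archive under test.

    import hashlib
    from pathlib import Path

    import pydicom  # third-party DICOM parser

    REQUIRED = ("PatientID", "StudyInstanceUID", "SeriesInstanceUID", "SOPInstanceUID")

    def check_nm_header(path: Path) -> list:
        """Return a list of problems found in one scintigraphy DICOM file."""
        problems = []
        ds = pydicom.dcmread(path)
        if ds.get("Modality", "") != "NM":  # nuclear medicine modality expected
            problems.append(f"unexpected Modality {ds.get('Modality')!r}")
        if "PixelData" not in ds:
            problems.append("missing PixelData")
        for keyword in REQUIRED:
            if not ds.get(keyword):
                problems.append(f"missing {keyword}")
        return problems

    def sha256(path: Path) -> str:
        """Hash the whole file so an archive round trip can be compared bit for bit."""
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def verify_round_trip(original: Path, retrieved: Path) -> bool:
        """Data-integrity check: the object fetched back from the archive must match what was stored."""
        return sha256(original) == sha256(retrieved)

    if __name__ == "__main__":
        for f in sorted(Path("test_images").glob("*.dcm")):  # hypothetical test folder
            issues = check_nm_header(f)
            print(f, "OK" if not issues else "; ".join(issues))

    File-level hash comparison is only the simplest integrity check; a fuller PACS test plan would also exercise network query/retrieve and any compression round trips.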
