510(k) Data Aggregation

    K Number: K061538
    Manufacturer:
    Date Cleared: 2006-06-30 (28 days)
    Product Code:
    Regulation Number: 892.1680
    Reference & Predicate Devices:
    Predicate For:
    Intended Use

    CRystalView R200 is indicated for use in generating radiographic images of human anatomy. It is intended to replace radiographic film/screen systems in all general-purpose diagnostic procedures.

    Device Description

    The Alara CRystalView® R200 is a desktop Computed Radiography (CR) system designed to generate digital x-ray images by reading photostimulable phosphor image plates exposed using standard X-ray systems and techniques. The system consists of a CR Reader, a QC Workstation with software, cassettes, and image plates. Image data is sent via a dedicated connection from the Reader to the CRystalView R200 QC Workstation, where it is processed and displayed for review. The system outputs images and patient information to a PACS using the standard DICOM 3.0 protocol. The fully configured CRystalView R200 System includes acquisition console software and postprocessing image enhancement software. A reseller may alternatively provide these two software components or appropriately cleared equivalents, as well as the QC Workstation hardware.
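    The summary names DICOM 3.0 as the output protocol but does not describe the transfer mechanism further. Purely as an illustration of what a DICOM 3.0 push of a Computed Radiography image to a PACS involves, the sketch below uses the open-source pydicom and pynetdicom libraries to issue a C-STORE request; the host, port, AE titles, and file name are hypothetical placeholders, not values from the submission, and this is not the vendor's implementation.

```python
# Minimal sketch of a DICOM 3.0 C-STORE push to a PACS, using the open-source
# pydicom/pynetdicom libraries. Host, port, AE titles, and file name are
# hypothetical placeholders, not values from the 510(k) summary.
from pydicom import dcmread
from pynetdicom import AE
from pynetdicom.sop_class import ComputedRadiographyImageStorage

ae = AE(ae_title="QC_WORKSTATION")  # calling AE title (hypothetical)
ae.add_requested_context(ComputedRadiographyImageStorage)

# Associate with the PACS (address, port, and called AE title are placeholders).
assoc = ae.associate("pacs.example.org", 11112, ae_title="PACS")
if assoc.is_established:
    ds = dcmread("cr_image.dcm")       # a CR image with patient metadata
    status = assoc.send_c_store(ds)    # push the image to the PACS
    if status:
        print(f"C-STORE completed with status 0x{status.Status:04X}")
    else:
        print("Connection timed out, was aborted, or returned an invalid response")
    assoc.release()
else:
    print("Association with the PACS could not be established")
```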

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study that proves the Alara CRystalView® R200 Computed Radiography System meets them, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided document does not explicitly state quantitative acceptance criteria in terms of specific performance metrics (e.g., sensitivity, specificity, spatial resolution, contrast-to-noise ratio requirements). Instead, it relies on a qualitative assessment of "substantial equivalence" to predicate devices. The primary performance criterion appears to be:

    | Acceptance Criterion (Implicit) | Reported Device Performance |
    | --- | --- |
    | Diagnostic capabilities and image quality equivalent to predicate devices (specifically the Agfa ADC Compact). | "The results of these studies show that CRystal View R200 performance characteristics are comparable with those of the predicate devices. Clinically, no statistically significant difference was found in image quality ratings of CRystalView R200 and the Agfa ADC Compact when images were judged by a radiologist." The device also demonstrated compliance with electrical, mechanical, EMC, and laser safety standards (though not directly image quality performance). Images are generated for human anatomy, replacing film/screen systems in general diagnostic procedures. |
    | Compliance with applicable FDA and international safety standards. | "CRystalView R200 complies with applicable FDA and international standards pertaining to electrical, mechanical, EMC, and laser safety of medical and/or laser devices." |
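    The summary does not identify the statistical test behind the "no statistically significant difference" finding. As a minimal sketch of the kind of paired comparison this implies, assuming the radiologist scored matched exams from both systems on an ordinal quality scale, a Wilcoxon signed-rank test could be applied to the paired ratings; the scores and the choice of test below are illustrative assumptions, not details from the submission.

```python
# Illustrative paired comparison of image quality ratings from two CR systems.
# The scores are invented example data; the submission does not report the
# actual ratings or the statistical test that was used.
from scipy.stats import wilcoxon

# Ordinal quality ratings (1-5) assigned by one radiologist to matched exams
# acquired on the candidate device and on the predicate device.
crystalview_scores = [4, 5, 3, 4, 4, 5, 3, 4, 5, 4, 2, 5]
predicate_scores   = [3, 5, 4, 3, 3, 4, 4, 4, 4, 5, 3, 4]

# Wilcoxon signed-rank test on the paired differences; a large p-value is
# consistent with "no statistically significant difference" in ratings.
stat, p_value = wilcoxon(crystalview_scores, predicate_scores)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p_value:.3f}")
```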

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: The document states that clinical studies were performed, including a "confirmatory clinical concurrence study." However, it does not specify the sample size (number of images or patients) used in this clinical study.
    • Data Provenance: The document does not explicitly state the country of origin of the data or whether it was retrospective or prospective. Given that it was a clinical concurrence study comparing the device with a predicate, it would typically be prospective, but this is not explicitly confirmed.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • Number of Experts: The document states "images were judged by a radiologist." This implies a single radiologist was involved in the clinical concurrence study.
    • Qualifications of Experts: The document explicitly states the expert was a "radiologist." However, it does not provide any further details on their qualifications (e.g., years of experience, subspecialty).

    4. Adjudication Method for the Test Set

    • The document indicates that images were "judged by a radiologist." Because only a single radiologist is mentioned, a formal adjudication method (such as 2+1 or 3+1) was presumably not used: there were no multiple independent readers whose disagreements would need to be resolved. In the usual sense, then, the adjudication method was "none," and the single radiologist's judgment served as the benchmark (a generic 2+1 rule is sketched below for context).
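    As context for the terminology, in a "2+1" scheme two primary readers score each case independently and a third reader is consulted only when they disagree. The function and labels below are a hypothetical illustration, not a procedure described in the submission.

```python
# Generic illustration of a "2+1" adjudication rule: two primary readers label
# each case independently; a third reader adjudicates only on disagreement.
# This is a hypothetical sketch, not a procedure described in the submission.
from typing import Callable

def adjudicate_2_plus_1(reader1: str, reader2: str,
                        adjudicator: Callable[[], str]) -> str:
    """Return the consensus label for one case under a 2+1 scheme."""
    if reader1 == reader2:
        return reader1        # primary readers agree: no adjudication needed
    return adjudicator()      # disagreement: the third reader decides

# Example: the primary readers disagree, so the adjudicator's call is used.
print(adjudicate_2_plus_1("acceptable", "unacceptable", lambda: "acceptable"))
```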

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size

    • Yes, a comparative study was done, though it involved only a single reader rather than multiple readers; it compared the CRystalView R200 to the predicate Agfa ADC Compact.
    • Effect Size: The document states, "Clinically, no statistically significant difference was found in image quality ratings of CRystalView R200 and the Agfa ADC Compact when images were judged by a radiologist." No effect size is reported; the comparable image quality ratings indicate that the goal was equivalence to the predicate rather than improved human reader performance with the new device (a hypothetical effect-size calculation is sketched below).
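    If paired quality ratings were available, one common way to express the magnitude of the difference would be Cohen's dz on the paired differences; the sketch below uses invented numbers purely for illustration, since the submission reports no ratings and no effect size.

```python
# Illustrative effect-size calculation (Cohen's dz for paired data): the mean
# of the paired rating differences divided by their standard deviation.
# The ratings are invented; the submission reports no effect size.
import statistics

crystalview_scores = [4, 5, 3, 4, 4, 5, 3, 4, 5, 4, 2, 5]
predicate_scores   = [3, 5, 4, 3, 3, 4, 4, 4, 4, 5, 3, 4]

diffs = [a - b for a, b in zip(crystalview_scores, predicate_scores)]
dz = statistics.mean(diffs) / statistics.stdev(diffs)
print(f"Cohen's dz = {dz:.2f}")  # values close to 0 suggest comparable ratings
```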

    6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done

    • The device being evaluated is a Computed Radiography System that generates digital X-ray images, which are then reviewed by a human. The "image quality ratings" mentioned are judgments by a radiologist, implying human-in-the-loop performance evaluation.
    • While the system has acquisition console software and postprocessing image enhancement software, the document does not describe any standalone algorithm-only performance evaluation independent of human interpretation for diagnostic purposes. The focus is on the human interpretation of the images produced by the system.

    7. The Type of Ground Truth Used

    • The ground truth in this study appears to be expert consensus (from a single radiologist, acting as the expert) on "image quality ratings." It is not based on pathology, outcomes data, or a gold standard that is independent of image interpretation. The comparison is between images of the new device and the predicate, as interpreted by an expert.

    8. The Sample Size for the Training Set

    • The document does not provide any information regarding a training set or its sample size. This is a medical device clearance submission (510(k)), which typically focuses on demonstrating equivalence to a predicate, not necessarily on detailing the developmental process, including machine learning model training sets.

    9. How the Ground Truth for the Training Set Was Established

    • As no information about a training set is provided, there is no information on how its ground truth was established.

    K Number: K032210
    Manufacturer:
    Date Cleared: 2003-10-02 (73 days)
    Product Code:
    Regulation Number: 892.1680
    Reference & Predicate Devices:
    Predicate For:
    Intended Use

    CRystalView is indicated for use in generating radiographic images of human anatomy. It is intended to replace radiographic film/screen systems in all general-purpose diagnostic procedures.

    Device Description

    The Alara CRystalView™ is a Desktop Computed Radiography (CR) System designed to generate digital x-ray images by scanning photo-stimulable storage phosphor imaging plates exposed using standard X-ray systems and techniques. The system consists of a CR Reader, a QC Workstation with software, cassettes, and imaging plates. Image data is sent via a dedicated connection from the Reader to the CRystalView QC Workstation, where it is processed and displayed for review. The system outputs images and patient information to a PACS using the standard DICOM 3.0 protocol. The fully configured CRystalView System includes acquisition console software and post-processing image enhancement software. A reseller may alternatively provide these two software components or appropriately cleared equivalents, as well as the QC Workstation hardware.

    AI/ML Overview

    Here's an analysis of the provided text to extract the acceptance criteria and study details:

    1. A table of acceptance criteria and the reported device performance

    The provided text does not explicitly state quantitative acceptance criteria in a table format. However, it implicitly defines the acceptance criteria as demonstrating substantial equivalence to predicate devices. The performance is reported in terms of comparability.

    | Acceptance Criteria Category | Reported Device Performance |
    | --- | --- |
    | Overall Equivalence | "demonstrate that CRystalView is substantially equivalent to the predicate devices." |
    | Image Quality | "Clinically, no statistically significant difference was found in image quality ratings of CRystalView and the Agfa ADC Compact when images were judged by a radiologist." |
    | Functional Characteristics | "CRystalView performance characteristics are comparable with those of the predicate devices." |
    | Indications for Use | "CRystalView has the same or similar indications for use as the predicate devices." |
    | Technological Characteristics | "CRystalView shares the same technological characteristics as the predicate devices." |
    | Safety and Standards | "CRystalView complies with applicable FDA and international standards pertaining to electrical, mechanical, EMC, and laser safety of medical and/or laser devices." (This is a design requirement, but it also implies performance in meeting safety standards.) |

    Notes on Acceptance Criteria:

    • The primary acceptance criterion is substantial equivalence to the predicate devices (PhorMax Eagle Scanner (K001499) and Agfa ADC Compact (K974597)).
    • For clinical performance, the key metric for image quality was "no statistically significant difference" compared to a predicate device.

    2. Sample size used for the test set and the data provenance

    • Sample Size for Test Set: The document does not specify the exact number of images or cases used in the clinical concurrence study. It only states that "a clinical concurrence study" was carried out, where "images were judged by a radiologist."
    • Data Provenance: Not specified (e.g., country of origin, retrospective or prospective).

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • Number of Experts: "a radiologist" (singular, implying one radiologist).
    • Qualifications of Experts: The specific qualifications (e.g., years of experience, subspecialty) of the radiologist are not provided.

    4. Adjudication method for the test set

    • Adjudication Method: Not explicitly stated. Given that only "a radiologist" is mentioned, the study appears to have been a single-reader assessment rather than one using a multi-reader adjudication scheme (such as 2+1 or 3+1).

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance

    • MRMC Study Conducted: No, an MRMC study was not conducted. The study described is a "clinical concurrence study" where images from the CRystalView and a predicate device (Agfa ADC Compact) were judged by a single radiologist for image quality.
    • Effect Size: Not applicable, as this was not an MRMC study nor an AI-assisted study. The device itself (CRystalView) is a Computed Radiography (CR) system, not an AI-based diagnostic tool for assisting human readers.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    • Standalone Performance: Yes, implicitly. The performance of the CRystalView system itself (the algorithm and hardware) was evaluated through laboratory and clinical studies. The "image quality ratings" of the CRystalView were compared to those of the predicate. This is a standalone evaluation of the device's output quality.
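    The submission does not detail the laboratory measurements behind this standalone evaluation. As a loose illustration of a bench-style image quality metric that requires no human reader, a contrast-to-noise ratio (CNR) can be computed from two regions of a phantom image, as sketched below with synthetic pixel values that are not taken from the submission.

```python
# Rough illustration of a reader-independent image quality metric: the
# contrast-to-noise ratio (CNR) between a phantom insert and the background.
# The pixel values are synthetic; the submission's actual laboratory
# measurements are not described in the available text.
import numpy as np

rng = np.random.default_rng(seed=0)
signal_roi = rng.normal(loc=120.0, scale=5.0, size=(50, 50))      # phantom insert
background_roi = rng.normal(loc=100.0, scale=5.0, size=(50, 50))  # uniform background

cnr = abs(signal_roi.mean() - background_roi.mean()) / background_roi.std()
print(f"CNR = {cnr:.1f}")
```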

    7. The type of ground truth used

    • Type of Ground Truth: "Image quality ratings... judged by a radiologist." This implies expert consensus (or in this case, expert judgment by a single radiologist) was used to establish the "ground truth" for image quality comparison. It's not pathology or outcomes data.

    8. The sample size for the training set

    • Sample Size for Training Set: Not mentioned or applicable. This documentation focuses on the validation of the device and does not describe the development or training of any machine learning component. The CRystalView system described here is a Computed Radiography system for image generation, not an AI diagnostic algorithm that requires a "training set" in the conventional machine learning sense.

    9. How the ground truth for the training set was established

    • Ground Truth for Training Set: Not applicable, as this device does not describe an AI/ML algorithm requiring a training set with established ground truth.