
510(k) Data Aggregation

    K Number: K092915
    Date Cleared: 2010-06-23 (274 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices:
    Device Name: CEDARA WEBACCESS, MODEL: 2.4

    Intended Use

    Cedara WebAccess 2.4 is a software application that provides internet access to multi-modality softcopy medical images, reports and other patient related information for conducting diagnostic review, planning, and reporting through the interactive display and manipulation of medical data.

    Cedara WebAccess 2.4 is capable of being configured to provide either lossless or lossy compressed images for display. The medical professional user must determine the appropriate level of image data compression that is suitable for their purpose.
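
    As an illustration only (not drawn from the 510(k) itself), the lossless/lossy choice described above is essentially a compression-format decision. A minimal Python sketch using the Pillow library, with hypothetical file names:

        # Minimal sketch of a lossless vs. lossy export choice using Pillow.
        # Illustrative only; this is not the Cedara implementation, and the
        # file names and defaults are hypothetical.
        from PIL import Image

        def export_for_display(src_path: str, lossless: bool, jpeg_quality: int = 80) -> str:
            """Save a display copy either losslessly (PNG) or lossily (JPEG)."""
            img = Image.open(src_path).convert("L")  # assume an 8-bit grayscale image
            if lossless:
                out_path = src_path + ".png"
                img.save(out_path, format="PNG")  # PNG (ISO/IEC 15948) is always lossless
            else:
                out_path = src_path + ".jpg"
                img.save(out_path, format="JPEG", quality=jpeg_quality)  # lossy baseline JPEG (ISO/IEC 10918-1)
            return out_path

        # The reading professional would choose the level appropriate to the task,
        # e.g. export_for_display("chest_xray.tif", lossless=True) for primary reads.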

    Display monitors used for reading medical images for diagnostic purposes must comply with applicable regulatory approvals and with quality control requirements for their use and maintenance.

    Lossy compressed mammographic images and digitized film screen images must not be reviewed for primary image interpretations. Mammographic images may only be interpreted using an FDA approved monitor that offers at least 5 MP resolution and meets other technical specifications reviewed and accepted by FDA.

    Device Description

    Cedara WebAccess 2.4 provides medical specialists with access to diagnostic quality images, reports, and various types of patient data over conventional TCP/IP (e.g., internet) networks.

    With no application-specific installation required on the user's computer, qualified medical professionals can use Cedara WebAccess 2.4 with a standard internet browser to view studies and patient information including but not limited to the following content: Diagnostic Reports, Key Images, Presentation Series, Imaging Series and file attachments.
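
    To make the browser-based access model concrete, here is a purely hypothetical sketch of a client fetching a study list over HTTP; the URL, parameters, and response fields are invented for illustration and do not describe the actual WebAccess interface:

        # Hypothetical sketch: listing a patient's studies from a web viewer backend.
        # The endpoint, parameters, and JSON fields are invented for illustration;
        # they are not the real Cedara WebAccess API.
        import requests

        BASE_URL = "https://pacs.example.org/webaccess"  # hypothetical server

        def list_studies(patient_id: str, token: str) -> list:
            """Return study metadata for one patient from a hypothetical HTTP endpoint."""
            resp = requests.get(
                f"{BASE_URL}/studies",
                params={"patient_id": patient_id},
                headers={"Authorization": f"Bearer {token}"},
                timeout=10,
            )
            resp.raise_for_status()
            return resp.json()  # e.g. [{"study_uid": "...", "description": "..."}, ...]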

    Cedara WebAccess 2.4 was designed around a simple, convenient workflow and provides image viewing tools including zoom, pan, contrast, series/layout change, toggling image text on/off, MPR, CINE, reset, and measurement.
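
    The contrast tool in viewers of this kind is conventionally a window/level operation on the pixel data; a generic NumPy sketch of that operation (not Cedara-specific) follows:

        # Generic window/level ("contrast") adjustment as commonly implemented in
        # image viewers; shown only to illustrate the concept, not the Cedara code.
        import numpy as np

        def window_level(pixels: np.ndarray, center: float, width: float) -> np.ndarray:
            """Map raw pixel values to 8-bit display values using a window center/width."""
            low = center - width / 2.0
            high = center + width / 2.0
            scaled = (pixels.astype(np.float64) - low) / (high - low)  # 0..1 inside the window
            return (np.clip(scaled, 0.0, 1.0) * 255).astype(np.uint8)

        # Example: a typical soft-tissue CT window (center 40 HU, width 400 HU).
        # display = window_level(ct_slice, center=40.0, width=400.0)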

    The software displays patient studies and other patient data but does not interpret or provide a diagnosis. Medical diagnosis is the responsibility of the user.

    AI/ML Overview

    The provided document is a 510(k) summary for Cedara WebAccess 2.4. It discusses the device description, indications for use, comparison to predicate devices, and a summary of testing. However, it does not contain the detailed acceptance criteria or a specific study proving the device meets those criteria, as typically found in a clinical study report.

    The "Summary of Testing" section (page 3) generally states:
    "Nonclinical verification and validation test results established that the device meets its design requirements and intended use, and that it is as safe, as effective, and performs as well as the predicate devices and that no new issues of safety and effectiveness were raised. The results also demonstrated that the device complies with industry standards for medical data: the NEMA DICOM 3.0 standard for Digital Imaging and Communications in Medicine, the JPEG standard ISO/IEC 10918-1 for the Digital Compression and Coding of Continuous-Tone Still Images, and the ISO/IEC 15948 standard for Portable Networks Graphics (PNG)."

    The quoted statement indicates that verification and validation testing was performed to demonstrate compliance with design requirements, intended use, and industry standards, and to establish substantial equivalence to predicate devices. However, it does not provide specific quantitative acceptance criteria or detailed results of a study that would typically be used to "prove" that those criteria were met in terms of diagnostic performance (e.g., sensitivity, specificity, accuracy).

    Therefore, based only on the provided document, I cannot complete the requested table and details about acceptance criteria and a specific performance study in the way you've outlined for a typical diagnostic AI device. The document describes general medical image viewing software (PACS-like) rather than a diagnostic AI algorithm; its "performance" relates to functionality, interoperability, and the safe display of images rather than to diagnostic accuracy.

    Here's what I can extract and what is missing, based on your request and the provided text:

    Acceptance Criteria and Reported Device Performance

    Acceptance Criteria | Reported Device Performance
    Functional/Technical Requirements | Device "meets its design requirements and intended use"
    Compliance with industry standards | Device "complies with industry standards for medical data: the NEMA DICOM 3.0 standard... the JPEG standard ISO/IEC 10918-1... and the ISO/IEC 15948 standard for Portable Networks Graphics (PNG)."
    Substantial Equivalence to Predicate Devices | Device "is as safe, as effective, and performs as well as the predicate devices and that no new issues of safety and effectiveness were raised."
    Lossless/Lossy Compression Capability | "capable of being configured to provide either lossless or lossy compressed images for display."
    Display Monitor Requirements (for diagnostic use) | "Display monitors used for reading medical images for diagnostic purposes must comply with applicable regulatory approvals and with quality control requirements..."
    Mammographic Image Interpretation | "Lossy compressed mammographic images and digitized film screen images must not be reviewed for primary image interpretations. Mammographic images may only be interpreted using an FDA approved monitor that offers at least 5 MP resolution..."
    No Diagnostic Interpretation | "does not interpret or provide a diagnosis. Medical diagnosis is the responsibility of the user."
    Safety & Effectiveness (general) | "no new issues of safety and effectiveness were raised."

    Missing Information based on the prompt (as it pertains to a diagnostic AI algorithm):

    1. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective): Not mentioned. The testing focused on functional verification and validation, not diagnostic accuracy on a clinical test set.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not applicable, as there's no diagnostic test set with ground truth in this context.
    3. Adjudication method (e.g. 2+1, 3+1, none) for the test set: Not applicable.
    4. Whether a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance: Not applicable. This device is an image viewer, not an AI diagnostic aid.
    5. Whether standalone performance (i.e., algorithm only, without a human in the loop) was evaluated: Not applicable. This device is a viewing interface, not a standalone algorithm with diagnostic performance.
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc): Not applicable, as there's no diagnostic ground truth established for performance evaluation of the device itself. The device facilitates viewing images where a human establishes the ground truth (diagnosis).
    7. The sample size for the training set: Not applicable. This is not an AI/ML algorithm that requires a training set.
    8. How the ground truth for the training set was established: Not applicable.

    In conclusion, the document describes the functional and technical compliance of an image viewing software (PACS) with industry standards and its substantial equivalence to predicate devices, rather than presenting a performance study for a diagnostic AI algorithm with specific clinical acceptance criteria like sensitivity or specificity.
