
510(k) Data Aggregation

    K Number: K232208
    Manufacturer: Sectra
    Date Cleared: 2024-04-16 (265 days)
    Product Code: (not listed)
    Regulation Number: 864.3700
    Reference & Predicate Devices: (not listed)
    Predicate For: N/A
    Intended Use

    For In Vitro Diagnostic Use

    Sectra Digital Pathology Module (3.3) is a software device intended for viewing and management of digital images of scanned surgical pathology slides prepared from formalin-fixed paraffin embedded (FFPE) tissue. It is an aid to the pathologist to review and interpret these digital images for the purposes of primary diagnosis.

    Sectra Digital Pathology Module (3.3) is not intended for use with frozen section, cytology, or non-FFPE hematopathology specimens. It is the responsibility of the pathologist to employ appropriate procedures and safeguards to assure the validity of the interpretation of images using Sectra Digital Pathology Module (3.3).

    Sectra Digital Pathology Module (3.3) is intended for use with Leica's Aperio GT 450 DX scanner and Dell U3223QE display, for viewing and management of the ScanScope Virtual Slide (SVS) and Digital Imaging and Communications in Medicine (DICOM) image formats.

    Device Description

    The Sectra Digital Pathology Module (3.3) [henceforth referred to as DPAT (3.3)] is a digital slide viewing system. The DPAT (3.3) is intended for use together with the FDA-cleared Aperio GT 450 DX whole-slide image scanner and the Dell U3223QE display.

    The DPAT (3.3) can only be used as an add-on module to Sectra PACS. Sectra PACS consists of Sectra Workstation IDS7 (K081469) and Sectra Core (identified by the FDA as Class I exempt in 2000). Sectra PACS is not part of the subject device. Sectra Workstation is the viewing workstation in which the Pathology Image Window runs; the Pathology Image Window is the client component of the subject device.

    The system capabilities include:

    • retrieving and displaying digital slides,
    • support for remote intranet access over computer networks,
    • tools for annotating digital slides and entering and editing metadata associated with digital slides, and
    • displaying the scanned slide images for primary diagnosis by pathologists.

    The subject device is designed to accurately display colors. The monitor is not part of the subject device.

    Digital pathology images originating from WSI scanners other than those listed in the Indications for Use will be marked with the disclaimer "For Non-clinical Use Only" in the Pathology Image Window.

    Image acquisition will be managed by the scanner, which is not part of the subject device:

    • The scanner delivers images with a tag in the file header that identifies the originating scanner (see the sketch after this list).
    • The scanner includes applications for controlling the scanning process and performing related quality control (e.g., ensuring that images are sharp and cover all tissue on the slide).
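
    To make the scanner-identity check concrete, here is a minimal sketch, assuming a pydicom-based reader, of how a viewer could inspect the originating-scanner fields in a DICOM file header and attach the "For Non-clinical Use Only" disclaimer to images from scanners outside the cleared configuration. The SUPPORTED_SCANNERS identifiers and the clinical_use_label helper are illustrative assumptions, not Sectra's implementation.

```python
# Illustrative sketch only (not Sectra's implementation). Assumes the scanner
# writes the standard DICOM equipment attributes Manufacturer and
# ManufacturerModelName into the file header.
import pydicom

# Assumed identifier pair for the cleared scanner; real header values may differ.
SUPPORTED_SCANNERS = {("Leica Biosystems", "Aperio GT 450 DX")}

def clinical_use_label(path: str) -> str:
    """Return the disclaimer a viewer would overlay on this image, if any."""
    ds = pydicom.dcmread(path, stop_before_pixels=True)  # header only, skip pixels
    scanner = (str(ds.get("Manufacturer", "")),
               str(ds.get("ManufacturerModelName", "")))
    if scanner in SUPPORTED_SCANNERS:
        return ""  # cleared scanner: no disclaimer
    return "For Non-clinical Use Only"
```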

    The DPAT (3.3) supports reading digital slides on a Dell U3223QE display monitor, enabling pathologists to make clinically relevant decisions analogous to those they make using a conventional microscope. Specifically, the system supports the pathologist in performing a primary diagnosis based on viewing the digital slide on a computer monitor. These capabilities are provided by the Pathology Image Window.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device meets those criteria, based on the provided text:

    Acceptance Criteria and Device Performance

    1. Table of Acceptance Criteria and Reported Device Performance

    • Primary Endpoint: The upper bound of the 2-sided 95% CI of the difference between the overall major discrepancy rates of the WSIR diagnosis and the MSR diagnosis, each compared to the reference diagnosis, shall be ≤4%.
      Reported performance: the upper bound of the 95% CI of the estimated difference in major discrepancy rates was 1.69%. Met: Yes.

    • Secondary Endpoint: The upper bound of the 2-sided 95% CI of the major discrepancy rate between the WSIR diagnosis and the reference diagnosis shall be <7%.
      Reported performance: the upper bound of the 95% CI for the overall estimated major discrepancy rate for the WSIR diagnosis was 5.25%. Met: Yes.

    • Pixel-Wise Comparison: All configurations shall be identical to the reference configuration SVS/UniView/Chrome, i.e., <3 ΔE00.
      Reported performance: the 4 specified configurations (DICOM/IDS7, DICOM/UniView/Chrome, SVS/IDS7, SVS/UniView/Edge) were identical to the reference configuration (<3 ΔE00), and specifically ΔE00 = 0 for the subject device displaying the same scanned image in both DICOM and SVS formats, in IDS7, Edge, or Chrome. Met: Yes.

    • Turnaround Time: When selecting a slide image, it shall take no longer than 3 seconds until the image is fully loaded; when panning the image (one quarter of the monitor), no longer than 0.5 seconds.
      Reported performance: reported to be "adequate for the intended use"; specific values were not provided but were "similar to or better than those of the predicate device." Met: Yes.

    • Measurements: Measurement accuracy shall be verified using a test image containing objects of known sizes.
      Reported performance: measurement accuracy "has been verified," with "almost identical results" to the predicate. Met: Yes.

    Note: While specific numerical results for turnaround time and measurement accuracy are not provided, the document states they were found to be "adequate" and "accurate" respectively, meeting the implicit acceptance criteria for these performance aspects.
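
    As a worked illustration of the two endpoint checks, the sketch below applies a simple normal-approximation (Wald) 95% interval to hypothetical counts. The submission's actual statistical method (for example, any adjustment for reader or case clustering) is not described in this text, so both the counts and the interval construction are assumptions for illustration only.

```python
# Illustrative endpoint checks on hypothetical counts; not the study's actual
# statistics, which may have accounted for reader/case clustering.
import math

Z95 = 1.959964  # two-sided 95% normal quantile

def rate_ci(events: int, n: int) -> tuple[float, float]:
    """Wald 95% CI for a single major-discrepancy rate."""
    p = events / n
    se = math.sqrt(p * (1 - p) / n)
    return p - Z95 * se, p + Z95 * se

def diff_ci(e1: int, n1: int, e2: int, n2: int) -> tuple[float, float]:
    """Wald 95% CI for the difference of two independent rates."""
    p1, p2 = e1 / n1, e2 / n2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return (p1 - p2) - Z95 * se, (p1 - p2) + Z95 * se

# Hypothetical totals: 258 cases read by 3 pathologists = 774 reads per modality.
n = 774
wsir_major, msr_major = 28, 28  # illustrative major-discrepancy counts

_, hi_diff = diff_ci(wsir_major, n, msr_major, n)
print(f"Primary endpoint met (upper bound <= 4%): {hi_diff <= 0.04}")

_, hi_rate = rate_ci(wsir_major, n)
print(f"Secondary endpoint met (upper bound < 7%): {hi_rate < 0.07}")
```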
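
    Similarly, for the pixel-wise comparison, the following is a minimal sketch of one way to compute the per-pixel CIEDE2000 (ΔE00) difference between captures of two configurations using scikit-image; the file names and capture workflow are assumed, not taken from the actual test protocol.

```python
# Illustrative pixel-wise ΔE00 comparison; file names are assumed placeholders.
import numpy as np
from skimage import io
from skimage.color import rgb2lab, deltaE_ciede2000

def load_lab(path: str) -> np.ndarray:
    """Load a screen capture, drop any alpha channel, convert to CIELAB."""
    rgb = io.imread(path)[..., :3] / 255.0
    return rgb2lab(rgb)

ref = load_lab("svs_uniview_chrome.png")   # reference configuration
test = load_lab("dicom_ids7.png")          # configuration under test

de00 = deltaE_ciede2000(ref, test)         # per-pixel ΔE00 map
print(f"max ΔE00 = {de00.max():.3f}")
print(f"passes (<3 ΔE00 at every pixel): {bool((de00 < 3).all())}")
```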

    Study Details

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size: 258 cases.
    • Data Provenance: The document does not explicitly state the country of origin. The study was conducted at a "single site." It was a retrospective study, as the MSR diagnoses were "original sign-out diagnoses." The WSIR diagnoses were prospectively obtained using the device for the study.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    • Number of Experts for Ground Truth: Not directly identified as ground-truth experts; 3 reading pathologists determined the MSR diagnoses (which formed one basis for comparison with the reference diagnosis) and 3 reading pathologists created the WSIR diagnoses. A minimum of two adjudicators independently assessed the concordance of each WSIR diagnosis against the reference diagnosis.
    • Qualifications of Experts: All were "pathologists." Further specific qualifications (e.g., years of experience, board certification) are not detailed in the provided text.

    4. Adjudication Method for the Test Set

    • Adjudication Method: Minimum of two adjudicators independently assessed concordance (concordant, minor discrepancy, major discrepancy) of the WSIR diagnosis against the reference diagnosis using predefined rules. Their concordance scores for the same case were compared to determine a consensus score for major discrepancy status. This represents a form of 2-reader adjudication with consensus. The document does not explicitly mention "adjudication of ground truth" but rather adjudication of the concordance between the device's output and the reference diagnosis.
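
    As a concrete reading of that consensus step, here is a small sketch of how two adjudicators' per-case scores could be combined into a major-discrepancy status. The source does not say how disagreements were resolved, so the escalation fallback below is an assumption, not the study's rule.

```python
# Sketch of an assumed two-adjudicator consensus rule; the study's actual
# disagreement-resolution procedure is not described in the source text.
from typing import Optional

def consensus_major(score_a: str, score_b: str) -> Optional[bool]:
    """Combine two adjudicator scores ("concordant", "minor", or "major").

    Returns True/False when both agree on major-discrepancy status, and
    None when they disagree and the case would need further review (assumed).
    """
    major_a, major_b = score_a == "major", score_b == "major"
    if major_a == major_b:
        return major_a
    return None  # disagreement: resolution method not stated in the source
```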

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was an MRMC study done? Yes, effectively. The study design involved 3 reading pathologists (multi-reader) reviewing 258 cases (multi-case) using the device (WSIR diagnosis) and comparing it to their traditional light microscopy review (MSR diagnosis), both against a reference diagnosis. While not explicitly termed an "MRMC study" in the classic sense of comparing AI-assisted vs. unassisted human performance, it acts as a comparative effectiveness study demonstrating non-inferiority of the digital pathology system to traditional microscopy.
    • Effect Size of Human Readers Improvement with AI vs. without AI Assistance: The study's primary endpoint was non-inferiority of the digital system (DPAT (3.3)-UniView) compared to traditional light microscopy. The estimated difference in major discrepancy rates between the two modalities (digital vs. microscope) when compared to the reference diagnosis was -0.01% (95% CI: -1.71% to 1.69%). This indicates that the digital system performed comparably to, or negligibly better than, light microscopy in terms of major discrepancy rates against a reference. It doesn't quantify improvement with AI assistance per se, but rather the non-inferiority of the digital viewing system (which is the device being cleared) to the traditional method.

    6. Standalone (Algorithm Only Without Human-in-the-Loop) Performance

    • Was a standalone performance study done? No, not in the sense of an AI algorithm making a diagnosis without human intervention. The device is a "software device intended for viewing and management of digital images... as an aid to the pathologist to review and interpret these digital images for the purposes of primary diagnosis." The study specifically evaluates "interpreting digital images... for the purposes of primary diagnosis" by a human pathologist using the device, not an automated diagnostic output.

    7. Type of Ground Truth Used

    • Type of Ground Truth: The study used "original sign-out diagnoses" made using light microscopy as the "reference diagnoses." A major discrepancy was defined as a "difference in diagnosis that resulted in a clinically important difference in patient management." Although the text never uses the phrase "pathology ground truth," the reference standard is effectively a definitive diagnosis drawn from clinical practice.

    8. Sample Size for the Training Set

    • Training Set Sample Size: The document does not mention the training set size for the device. The study described focuses on the performance evaluation of the final device (Sectra Digital Pathology Module 3.3) for clinical validation, not the development or training of any underlying AI or image processing models within the device.

    9. How Ground Truth for the Training Set Was Established

    • Ground Truth for Training Set: As no training set information is provided, the method for establishing its ground truth is also not mentioned.