
510(k) Data Aggregation

    K Number: K190332
    Date Cleared: 2019-05-20 (95 days)
    Product Code:
    Regulation Number: 864.3700
    Reference & Predicate Devices:
    Intended Use

    The Aperio AT2 DX System is an automated digital slide creation and viewing system. The Aperio AT2 DX System is intended for in vitro diagnostic use as an aid to the pathologist to review and interpret digital pathology slides prepared from formalin-fixed, paraffin-embedded (FFPE) tissue. The Aperio AT2 DX System is not for use with frozen section, cytology, or non-FFPE hematopathology specimens.

    The Aperio AT2 DX System is composed of the Aperio AT2 DX scanner, the ImageScope DX review application and Display. The Aperio AT2 DX System is for creation and viewing of digital images of scanned glass slides that would otherwise be appropriate for manual visualization by conventional light microscopy. It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the interpretation of images obtained using the Aperio AT2 DX System.

    Device Description

    The Aperio AT2 DX System is an automated digital slide creation and viewing system. The system comprises an Aperio AT2 DX scanner instrument and a Viewing Workstation with a computer and a calibrated monitor running the ImageScope DX viewer software. The system's capabilities include digitizing microscope slides at diagnostic resolution; retrieving and displaying digital slides, including remote intranet access over computer networks; providing tools for annotating digital slides; entering data associated with digital slides; and displaying the scanned slide images for primary diagnosis by pathologists.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the Aperio AT2 DX System, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    | Acceptance Criteria | Reported Device Performance | Met? |
    |---|---|---|
    | Upper bound of the two-sided 95% CI of the difference between overall major discrepancy rates of WSIR and MSR diagnoses is < 4%. | Upper bound of the two-sided 95% CI of the difference was 1.03% (for a 0.44% difference). | Yes |
    | Upper bound of the two-sided 95% CI of the overall major discrepancy rate of the WSIR diagnoses is < 7%. | Upper bound of the two-sided 95% CI of the overall major discrepancy rate for WSIR diagnoses was 4.12% (for a 3.64% rate). | Yes |
    | Lower bounds of the two-sided 95% CIs of the overall agreements for each precision component are ≥ 85%. | Intra-System: 95.9%; Inter-System/Site: 93.6%; Within-Pathologist: 92.9%; Between-Pathologist: 91.7% (lower bounds of 95% CIs). | Yes |
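The first acceptance criterion can be sketched numerically. The document does not state the CI method used; as a minimal illustration, a simple unpaired Wald interval over the reported read counts (7,509 WSIR reads at a 3.64% major discrepancy rate; the MSR rate inferred from the 0.44% difference) lands close to the reported interval of (-0.15%, 1.03%). The actual study design was paired and likely used a clustered or paired method, so this is an approximation, not the study's analysis.

```python
import math

def wald_ci_diff(p1, n1, p2, n2, z=1.96):
    """Two-sided 95% Wald CI for the difference of two independent proportions."""
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    d = p1 - p2
    return d - z * se, d + z * se

# Illustrative numbers from the document: 7,509 WSIR reads at 3.64%;
# MSR rate inferred as 3.64% - 0.44% = 3.20% over 7,522 reads.
lo, hi = wald_ci_diff(0.0364, 7509, 0.0320, 7522)
print(f"95% CI for the difference: ({lo:.2%}, {hi:.2%})")
print("non-inferiority criterion met (upper bound < 4%):", hi < 0.04)
```

Even this rough interval reproduces the headline conclusion: the upper bound is far below the 4% margin.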

    2. Sample Size for Test Set and Data Provenance

    The text doesn't explicitly state the total number of unique cases used in the clinical study. However, we can infer some details from the precision studies:

    • Intra-System Study: "Pairwise Agreements" and "Comparison Pairs" numbers are provided for 3 systems, ranging from 193-201 agreements out of 201-204 pairs per system. This suggests hundreds of slides were used.
    • Inter-System/Site Study: "Pairwise Agreements" and "Comparison Pairs" numbers are provided for 3 system comparisons, ranging from 193-195 agreements out of 202 pairs per comparison.
    • Within- and Between-Pathologist Study: "Pairwise Agreements" and "Comparison Pairs" numbers are provided for 3 pathologists and 3 pathologist comparisons, ranging from 561-579 agreements out of 606-1818 pairs.
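The precision acceptance criterion compares the lower bound of a 95% CI on each agreement rate against 85%. The document does not say which interval method was used; a common choice for proportions is the Wilson score interval, sketched below using one of the Intra-System figures (193 pairwise agreements out of 201 comparison pairs) purely as an illustration.

```python
import math

def wilson_lower(successes, n, z=1.96):
    """Lower bound of the two-sided 95% Wilson score interval for a proportion."""
    p = successes / n
    denom = 1 + z * z / n
    center = p + z * z / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (center - margin) / denom

# e.g. 193 pairwise agreements out of 201 comparison pairs (one Intra-System run)
lb = wilson_lower(193, 201)
print(f"agreement lower bound: {lb:.1%}, passes >= 85% criterion: {lb >= 0.85}")
```

A 96.0% observed agreement over ~200 pairs gives a lower bound around 92%, consistent with the overall lower bounds reported for the precision components.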

    The total number of reads for the major discrepancy analysis was 7,509 for WSIR diagnoses and 7,522 for MSR diagnoses. This suggests either a large number of cases or multiple reads per case.

    • Data Provenance: Not explicitly stated (e.g., country of origin, retrospective/prospective). However, the study involved "three independent systems at three different sites" for the Inter-System/Site Precision study, implying a multi-site prospective clinical study for data collection. The use of "original sign-out pathologic diagnosis" as the gold standard implies retrospective access to patient records for comparison.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    • Ground Truth Experts: The text states that "the original sign-out pathologic diagnosis" was used as the gold standard reference diagnosis. This implies that the ground truth was established by multiple qualified pathologists in a routine clinical diagnostic setting as part of their standard practice.
    • Qualifications of Experts: The experts are described as "qualified pathologists" in the context of their ability to make clinically relevant decisions using conventional microscopes. Specific details like years of experience are not provided.

    4. Adjudication Method for the Test Set

    The text refers to the "original sign-out pathologic diagnosis" as the gold standard. This suggests that the ground truth was based on the final, adjudicated diagnosis made in a clinical setting, which typically involves consensus or final sign-out by a senior pathologist. There is no mention of a specific adjudication process (e.g., 2+1, 3+1) specifically for the test set ground truth if it differed from the original sign-out. The comparison was against this pre-existing ground truth.

    5. Multi Reader Multi Case (MRMC) Comparative Effectiveness Study

    Yes, a form of comparative effectiveness study was done. The study directly compared "Whole Slide Image Review (WSIR) diagnoses" (made using the device's digital slide creation and viewing workflow) against "traditional light microscope slide review (MSR) diagnoses."

    • Effect Size / Improvement: The study focused on non-inferiority rather than explicit improvement of human readers with AI assistance compared to without AI assistance. The primary acceptance criteria were about ensuring digital review was not significantly worse than traditional microscope review.
      • The difference in overall major discrepancy rates between WSIR and MSR was 0.44%, with a 95% CI of (-0.15%, 1.03%). This indicates that the WSIR diagnoses were statistically non-inferior to MSR diagnoses regarding major discrepancies. The upper bound of 1.03% implies that the WSIR method was not substantially worse than MSR (as it was well below the 4% acceptance criterion).
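The non-inferiority decision reduces to comparing the reported CI upper bounds against the pre-specified margins, which can be stated as a trivial check over the figures quoted in the document:

```python
# Reported upper bounds of the two-sided 95% CIs (from the document)
diff_upper = 0.0103   # difference in major discrepancy rates, WSIR - MSR
wsir_upper = 0.0412   # overall WSIR major discrepancy rate

# Pre-specified acceptance margins
print("difference criterion met (< 4%):", diff_upper < 0.04)   # True
print("rate criterion met (< 7%):", wsir_upper < 0.07)         # True
```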

    6. Standalone Performance Study

    No, a standalone (algorithm only without human-in-the-loop performance) study was not explicitly mentioned or performed as part of the primary clinical evaluation. The device is described as an "aid to the pathologist to review and interpret digital pathology slides," meaning it's intended for human-in-the-loop use. The performance evaluation directly involved pathologists using the system (WSIR diagnoses).

    7. Type of Ground Truth Used (Test Set)

    The primary ground truth used for evaluating accuracy was the "original sign-out pathologic diagnosis," which represents expert consensus or final clinical pathology diagnosis.

    8. Sample Size for the Training Set

    The document does not specify the sample size for the training set. The descriptions focus on the performance evaluation (test set) for regulatory clearance.

    9. How the Ground Truth for the Training Set Was Established

    As the training set information is not provided, how its ground truth was established is not detailed in this document.
