
510(k) Data Aggregation

    K Number: K071128
    Date Cleared: 2007-10-10 (170-day review)
    Regulation Number: 864.1860
    Reference Devices: K032113
    Intended Use

    The ScanScope System is an automated digital slide creation, management, viewing and analysis system. It is intended for in vitro diagnostic use as an aid to the pathologist in the display, detection, counting and classification of tissues and cells of clinical interest based on particular color, intensity, size, pattern and shape.

    The IHC HER2 Image Analysis application is intended for use as an aid to the pathologist in the detection and semi-quantitative measurement of HER2/neu (c-erbB-2) in formalin-fixed, paraffin-embedded normal and neoplastic tissue.

    The IHC HER2 Image Analysis application is intended for use as an accessory to the Dako HercepTest™ to aid in the detection and semi-quantitative measurement of HER2/neu (c-erbB-2) in formalin-fixed, paraffin-embedded normal and neoplastic tissue. When used with the Dako HercepTest™, it is indicated for use as an aid in the assessment of breast cancer patients for whom HERCEPTIN® (Trastuzumab) treatment is being considered. Note: The IHC HER2 Image Analysis application is an adjunctive computer-assisted methodology to assist the reproducibility of a qualified pathologist in the acquisition and measurement of images from microscope slides of breast cancer specimens stained for the presence of HER-2 receptor protein. The accuracy of the test result depends upon the quality of the immunohistochemical staining. It is the responsibility of a qualified pathologist to employ appropriate morphological studies and controls as specified in the instructions for the Dako HercepTest™ to assure the validity of the IHC HER2 Image Analysis application assisted HER-2/neu score. The actual correlation of the Dako HercepTest™ to Herceptin® clinical outcome has not been established.

    Device Description

    The ScanScope® XT System is an automated digital slide creation, management, viewing and analysis system. Its components consist of an automated digital microscope slide scanner, computer, color monitor, keyboard and digital pathology information management software. System capabilities include digitizing microscope slides at high resolution; storing and managing the resulting digital slide images; retrieving and displaying digital slides, including support for remote access over wide-area networks; annotating digital slides and entering and editing associated metadata; and image analysis of digital slides. Image analysis capabilities include quantifying characteristics useful to pathologists, such as measuring and scoring immunohistochemical stains (e.g., the Dako HercepTest™) applied to histology specimens, which reveal the presence of proteins such as Human Epidermal growth factor Receptor 2 (HER2) and may be used to determine patient treatment for breast cancer.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study details for the Aperio Technologies ScanScope® XT System, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state pre-defined acceptance criteria in terms of numerical thresholds for comparison between the manual microscopy and the image analysis system. Instead, it aims to demonstrate substantial equivalence by showing that the "agreements between the pathologists' manual microscopy and performed (blinded) image analysis were comparable to the inter-pathologists agreements for manual microscopy." The study design itself serves as the framework for proving this comparability.

    | Acceptance Criteria (Implied) | Reported Device Performance |
    | --- | --- |
    | Agreements between human readers (manual microscopy) and the device (image analysis) are comparable to inter-reader agreements among human readers (manual microscopy). | Clinical Site 1: manual microscopy inter-pathologist agreements (PA): 76.3%–91.3%; image analysis inter-pathologist agreements (PA): 86.3%–93.8%; manual microscopy vs. image analysis (same pathologist) agreements (PA): 77.5%–92.5%. Clinical Site 2: manual microscopy inter-pathologist agreements (PA): 84.0%–90.0%; image analysis inter-pathologist agreements (PA): 87.0%–92.0%; manual microscopy vs. image analysis (same pathologist) agreements (PA): 79.0%–90.0%. Conclusion: inter-pathologist agreements for image analysis (86.3%–93.8%) were comparable to manual microscopy (76.3%–91.3%), and agreements between manual microscopy and image analysis (77.5%–92.5%) were likewise comparable to inter-pathologist agreements for manual microscopy (76.3%–91.3%). |
    | Precision (intra-run, inter-run, inter-system) | Intra-run/intra-system: 100% agreement for calculated HER2 scores across all runs. Inter-run/intra-system: 100% agreement across all runs. Inter-system: 100% agreement across all systems and all runs. |
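The percent agreement (PA) figures above are pairwise match rates between two readers' HER2 scores. A minimal sketch of how such a metric is computed follows; the scores and reader names are hypothetical, not taken from the submission:

```python
from itertools import combinations

def percent_agreement(scores_a, scores_b):
    """Percentage of cases where two readers assign the same HER2 score (0, 1+, 2+, 3+)."""
    assert len(scores_a) == len(scores_b)
    matches = sum(a == b for a, b in zip(scores_a, scores_b))
    return 100.0 * matches / len(scores_a)

# Hypothetical HER2 scores (0-3) from three pathologists reading the same slides
reads = {
    "pathologist_1": [0, 1, 2, 3, 2, 1, 0, 3],
    "pathologist_2": [0, 1, 2, 3, 3, 1, 0, 3],
    "pathologist_3": [0, 2, 2, 3, 2, 1, 0, 3],
}

# All pairwise inter-reader agreements
for (name_a, a), (name_b, b) in combinations(reads.items(), 2):
    print(f"{name_a} vs {name_b}: PA = {percent_agreement(a, b):.1f}%")
```

The reported ranges (e.g., 76.3%–91.3%) would then be the minimum and maximum over all such reader pairs at a site.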

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size: 180 formalin-fixed, paraffin-embedded breast tissue specimens.
      • Site 1: 80 specimens (with approximately equal HER2 score distribution)
      • Site 2: 100 routine specimens
    • Data Provenance: Retrospective; the specimens were already stained and presumably collected before the study. The study was conducted at two clinical sites, implying a multi-center design. The country of origin is not explicitly stated, though clinical sites in a US FDA submission are typically US healthcare facilities.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • Number of Experts: Three pathologists at each of the two clinical sites, for a total of six pathologists involved in establishing the reference reads.
    • Qualifications of Experts: The document refers to them as "pathologists," implying they are qualified medical professionals specializing in pathology. No specific years of experience or sub-specialty certification are provided.

    4. Adjudication Method for the Test Set

    The document describes a comparative study where three pathologists at each site independently performed a blinded read of the glass slides for manual microscopy. For the image analysis, the same three pathologists remotely viewed and outlined tumor regions. The algorithm then reported the HER2 score for each pathologist's outlined regions.

    There is no explicit adjudication method (like 2+1 or 3+1 consensus) described for establishing a single "ground truth" for each slide based on expert opinion before comparison. Instead, the study compares inter-pathologist agreement for manual reads, inter-pathologist agreement for image analysis results, and agreement between individual pathologist's manual reads and their corresponding image analysis results. The image analysis algorithm's output serves as a separate measure to be compared against each pathologist's manual assessment.
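The three agreement families described above (inter-pathologist manual, inter-pathologist image analysis, and same-pathologist manual vs. image analysis) can be sketched as follows; the scores and identifiers are hypothetical, not taken from the submission:

```python
from itertools import combinations

def pa(a, b):
    """Percent agreement between two equal-length lists of HER2 scores."""
    return 100.0 * sum(x == y for x, y in zip(a, b)) / len(a)

# Hypothetical HER2 scores per pathologist: manual glass-slide reads vs.
# image-analysis (IA) results for the regions that same pathologist outlined.
manual = {"p1": [0, 1, 2, 3, 2], "p2": [0, 1, 3, 3, 2], "p3": [0, 1, 2, 3, 1]}
ia     = {"p1": [0, 1, 2, 3, 2], "p2": [0, 1, 2, 3, 2], "p3": [0, 1, 2, 3, 2]}

# 1) Inter-pathologist agreement for manual microscopy
manual_pairs = {(a, b): pa(manual[a], manual[b]) for a, b in combinations(manual, 2)}
# 2) Inter-pathologist agreement for image analysis
ia_pairs = {(a, b): pa(ia[a], ia[b]) for a, b in combinations(ia, 2)}
# 3) Same-pathologist agreement: manual read vs. image analysis
same_reader = {p: pa(manual[p], ia[p]) for p in manual}
```

Substantial equivalence then rests on the ranges in (2) and (3) being comparable to the range in (1), rather than on agreement with a single adjudicated truth per slide.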

    5. Whether a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of Human Reader Improvement with vs. Without AI Assistance

    • MRMC Comparative Effectiveness Study: Yes, an MRMC-like study was conducted. It involved multiple readers (pathologists) and multiple cases (180 breast tissue specimens). The comparison was between manual microscopy and image analysis, with pathologists themselves interacting with the image analysis system by outlining regions.
    • Effect Size of Improvement with AI Assistance: The document does not quantify the improvement of human readers with AI assistance in terms of an effect size. It focuses on the "comparability" of agreements:
      • Inter-pathologist agreements for the blinded image analysis (PA: 86.3-93.8%) were comparable to inter-pathologist agreements for manual microscopy (PA: 76.3-91.3%).
      • Agreements between the pathologists' manual microscopy and performed (blinded) image analysis (PA: 77.5-92.5%) were comparable to inter-pathologist agreements for manual microscopy (PA: 76.3-91.3%).

    This indicates the system performed similarly to human agreement without necessarily making a claim of "improvement" in diagnostic accuracy or efficiency for the human reader while using the AI. The study's goal was to demonstrate substantial equivalence, not superior performance or augmentation.

    6. Whether a Standalone (Algorithm-Only, Without Human-in-the-Loop) Performance Study Was Done

    Yes, a standalone component of the algorithm's performance was evaluated. The pathologists outlined representative tumor regions, and then the algorithm was run in "batch mode, blinded from the pathologists" and used "out of the box" to report the HER2 score for those outlined regions. This means the algorithm's output for a defined region was generated independently of further human intervention in the scoring process for that specific region.

    7. The Type of Ground Truth Used

    No single, definitive "gold standard" ground truth (such as confirmatory pathology or patient outcomes data) was established for each slide beforehand. Instead, each pathologist's manual read served as the expert-opinion reference for that pathologist's comparison, and the study evaluated agreement between the different forms of assessment (manual vs. image analysis) and among the experts. The Dako HercepTest™ staining is mentioned as the method used for preparing the specimens; it is a standardized immunohistochemical stain, but the interpretation of that stain (the HER2 score) is what was compared.

    8. The Sample Size for the Training Set

    The document does not provide any information regarding the sample size used for the training set of the IHC HER2 Image Analysis application. It only describes the test set used for validating the device.

    9. How the Ground Truth for the Training Set Was Established

    The document does not provide any information on how the ground truth for the training set was established. This information is typically proprietary to the developer and not always disclosed in 510(k) summaries, which focus on the validation study.
