
510(k) Data Aggregation

    K Number: K080254
    Date Cleared: 2008-10-31 (274 days)
    Product Code:
    Regulation Number: 864.1860
    Reference & Predicate Devices
    Reference Devices: K020023

    Intended Use

    The ScanScope® System is an automated digital slide creation, management, viewing and analysis system. It is intended for in vitro diagnostic use as an aid to the pathologist in the display, detection, counting and classification of tissues and cells of clinical interest based on particular color, intensity, size, pattern and shape.

    The ScanScope® system is intended for use as an aid to the pathologist in the detection and quantitative measurement of PR (Progesterone Receptor) by manual examination of the digital slide of formalin-fixed, paraffin-embedded normal and neoplastic tissue immunohistochemically stained for PR on a computer monitor.

    It is indicated for use as an aid in the management, prognosis, and prediction of therapy outcomes of breast cancer.

    Device Description

    The system comprises a ScanScope® XT digital slide scanner instrument and a computer system executing Spectrum software. The system's capabilities include digitizing microscope slides at diagnostic resolution; storing and managing the resulting digital slide images; retrieving and displaying digital slides, including support for remote access over wide-area networks; annotating digital slides and entering and editing associated metadata; and performing image analysis of digital slides, including the ability to quantify characteristics useful to pathologists, such as measuring and scoring immunohistochemical stains applied to histology specimens. One example is Dako PR, which reveals the presence of PR (Progesterone Receptor) protein expression and may be used to help determine patient treatment for breast cancer.
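    For orientation only, the sketch below shows in schematic form how a "% positive nuclei" value and an average intensity score could be derived from per-nucleus stain measurements. The thresholds, field names, and 0-3 intensity bins are illustrative assumptions, not Aperio's published algorithm.

```python
# Illustrative sketch only: deriving a "% positive nuclei" value and an average
# intensity score from per-nucleus stain measurements. The thresholds, field
# names, and 0-3 intensity bins are assumptions, not the device's actual algorithm.
from dataclasses import dataclass

@dataclass
class Nucleus:
    mean_dab_od: float  # mean optical density of the brown (DAB) stain in this nucleus

# Hypothetical cut-offs separating negative, weak (1+), moderate (2+), strong (3+)
THRESHOLDS = (0.10, 0.25, 0.45)

def score_region(nuclei: list[Nucleus]) -> tuple[float, float]:
    """Return (% positive nuclei, average intensity score on a 0-3 scale)."""
    if not nuclei:
        return 0.0, 0.0
    intensities = [sum(n.mean_dab_od >= t for t in THRESHOLDS) for n in nuclei]
    pct_positive = 100.0 * sum(i > 0 for i in intensities) / len(nuclei)
    return pct_positive, sum(intensities) / len(nuclei)

print(score_region([Nucleus(0.05), Nucleus(0.30), Nucleus(0.50)]))  # ≈ (66.7, 1.67)
```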

    AI/ML Overview

    The provided document, K080254, describes the Aperio ScanScope® XT System, intended for in vitro diagnostic use as an aid to pathologists in the display, detection, counting, and classification of tissues and cells of clinical interest. Specifically, the document focuses on its use for the detection and quantitative measurement of Progesterone Receptor (PR) by manual examination of digital slides.

    Here's an analysis based on the requested information:

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria are implicitly based on the agreement between manual microscopy and manual reading of digital slides, and the reproducibility of the digital slide system. The document does not explicitly state pre-defined numerical acceptance criteria (e.g., "agreement must be >X%"). Instead, it reports the range of observed agreements and precision metrics.

    | Metric | Acceptance Criteria (Implicit) | Reported Device Performance (ScanScope® XT System) |
    | --- | --- | --- |
    | Clinical Performance (Pathologist Agreement) |  |  |
    | Inter-pathologist agreement (digital slides), % positive nuclei | N/A (comparison study to conventional microscopy for substantial equivalence) | 76.3% - 98.0% |
    | Inter-pathologist agreement (manual microscopy), % positive nuclei | N/A | 83.8% - 99.0% |
    | Agreement between manual microscopy and digital slides, % positive nuclei | N/A (demonstrate substantial equivalence) | 78.8% - 100.0% |
    | Inter-pathologist agreement (digital slides), intensity score | N/A | 58.8% - 78.0% |
    | Inter-pathologist agreement (manual microscopy), intensity score | N/A | 58.8% - 88.0% |
    | Agreement between manual microscopy and digital slides, intensity score | N/A | 62.5% - 96.0% |
    | Analytical Performance (Reproducibility via Image Analysis Algorithm) |  |  |
    | Intra-system (10 scans), % positive nuclei standard deviation | N/A (demonstrate precision/reproducibility) | Overall SD: 0.54% (max 1.47%); average range: 1.06% (max 4.78%) |
    | Intra-system (10 scans), intensity value standard deviation | N/A | Overall SD: 0.9 (max 1.60); average range: 2.48 (max 4.27) |
    | Inter-day/intra-system (20 scans), % positive nuclei standard deviation | N/A | Overall SD: 0.54% (max 1.09%); average range: 1.52% (max 3.90%) |
    | Inter-day/intra-system (20 scans), intensity value standard deviation | N/A | Overall SD: 1.44 (max 2.43); average range: 5.29 (max 11.39) |
    | Inter-system (3 systems, 10 scans each), % positive nuclei standard deviation | N/A | Individual-system average SD: 0.54%, 0.53%, 0.75% (max 1.47%, 1.23%, 2.05%); combined overall average SD: 0.87% (max 1.57%) |
    | Inter-system (3 systems, 10 scans each), intensity value standard deviation | N/A | Individual-system average SD: 0.9%, 1.01%, 0.93% (max 1.60%, 1.48%); combined overall average SD: 1.35% (max 2.03%) |
    | Intra-pathologist (5 reads), % positive nuclei standard deviation (manual microscopy) | N/A | Overall average SD: 6.73% (max 16.73%); average range: 9.8% (max 40%) |
    | Intra-pathologist (5 reads), % positive nuclei standard deviation (digital slides) | N/A | Overall average SD: 11.81% (max 28.72%); average range: 16.2% (max 75%) |
    | Intra-pathologist (5 reads), intensity score outliers (manual microscopy) | N/A | 8 outliers out of 50 scores (16%) |
    | Intra-pathologist (5 reads), intensity score outliers (digital slides) | N/A | 9 outliers out of 50 scores (18%) |
    | Inter-pathologist, % positive nuclei standard deviation (manual microscopy) | N/A | Overall average SD: 13.30% (max 32.15%); average range: 17.2% (max 60%) |
    | Inter-pathologist, % positive nuclei standard deviation (digital slides) | N/A | Overall average SD: 11.3% (max 20.82%); average range: 16.0% (max 40%) |
    | Inter-pathologist, intensity score outliers (manual microscopy) | N/A | 7 outliers out of 30 scores (23%) |
    | Inter-pathologist, intensity score outliers (digital slides) | N/A | 7 outliers out of 30 scores (23%) |
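    The following is a minimal sketch of how the reproducibility summaries in the table above (a standard deviation and range per slide, then the average and maximum over slides) can be computed from repeated scans. Treating "Overall SD" as the mean of per-slide SDs, and the example values, are assumptions; the submission's exact statistical procedure is not reproduced here.

```python
# Minimal sketch of the reproducibility summaries reported above: per-slide
# standard deviation and range across repeated scans, then the average and
# maximum over slides. Treating "Overall SD" as the mean of per-slide SDs is
# an assumption; the submission's exact statistics may differ.
import statistics

# scores[slide_id] = % positive nuclei from repeated scans of the same slide
scores = {
    "slide_01": [62.1, 62.8, 61.9, 62.4, 63.0, 62.2, 62.6, 61.8, 62.5, 62.3],
    "slide_02": [14.9, 15.4, 15.1, 14.7, 15.6, 15.0, 15.2, 14.8, 15.3, 15.1],
}

per_slide_sd = {k: statistics.stdev(v) for k, v in scores.items()}
per_slide_range = {k: max(v) - min(v) for k, v in scores.items()}

overall_sd = statistics.mean(per_slide_sd.values())      # "Overall SD"
max_sd = max(per_slide_sd.values())                       # "(max ...)"
avg_range = statistics.mean(per_slide_range.values())     # "Average Range"
max_range = max(per_slide_range.values())

print(f"Overall SD: {overall_sd:.2f}% (max {max_sd:.2f}%), "
      f"Average Range: {avg_range:.2f}% (max {max_range:.2f}%)")
```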

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size: 180 formalin-fixed, paraffin-embedded breast tissue specimens.
      • Clinical Site 1: 80 slides.
      • Clinical Site 2: 100 slides.
    • Data Provenance: The study was conducted at two Clinical Laboratory Improvement Amendments (CLIA) qualified clinical sites. The specimens were immunohistochemically stained at these clinical sites using Dako IVD FDA cleared reagents. Glass slides were prepared in the sites' clinical laboratories.
      • Clinical Site 1: Specimens selected based on existing clinical scores to provide an equal distribution of PR slides across different percentage positive nuclei ranges (0%, 1-9%, 10-49%, 50-100%).
      • Clinical Site 2: Routine specimens taken from clinical operations, representing a typical clinical setting.
    • Retrospective/Prospective: Not explicitly stated as strictly retrospective or prospective. The specimens were "from both clinical sites" and for Site 1, "selected based on their clinical scores on file," suggesting a retrospective selection of cases. For Site 2, "routine specimens taken from their clinical operation" could imply concurrent collection or a recent retrospective selection. The reading of these slides by pathologists for the study itself was a prospective activity within the study design.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • Number of Experts: Three board-certified pathologists at each of the two clinical sites performed the manual reads, for a total of six pathologists (three at Site 1, three at Site 2). The document states that the "same three Pathologists at each clinical site" later performed the digital reads, so for ground-truth generation three pathologists per site were used.
    • Qualifications of Experts: "Board-certified staff pathologists."

    4. Adjudication Method for the Test Set

    The "ground truth" for the comparison study was established by the consensus or average of the three pathologists' manual microscopy reads. The statistical analyses were presented for each of the scores (percentage of positive nuclei and intensity scores) and "comparatively between the two methods for the clinical sites with their different three pathologists." This implies a form of expert consensus, where the average/distribution of their reads served as the reference for comparison, rather than a formal adjudication to a single "correct" answer by an independent panel.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was an MRMC study done? Yes, a comparative study was conducted where three pathologists at each of two clinical sites read 180 cases by traditional manual microscopy and then later (after a wash-out period and randomization) read the digital slides of the same cases on a computer monitor. This fits the description of a multi-reader, multi-case comparison study.
    • Effect size of human reader improvement with AI vs. without AI assistance: This study does not describe AI assistance for human readers. The ScanScope® XT System is described as a digital slide creation, management, viewing, and analysis system intended as an aid to the pathologist through manual examination of digital slides. The analytical performance section mentions an "image analysis algorithm," but this algorithm was used for precision/reproducibility studies of the system itself, not to assist pathologists in their interpretation of diagnostic cases. The clinical comparison study directly compares manual microscopy to human pathologists visually reading digital slides. Therefore, an effect size for readers with vs. without AI assistance is not reported, because the clinical study did not involve AI assistance for the pathologists.

    6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study

    • Was a standalone study done? No, a standalone study demonstrating the algorithm's diagnostic performance without human-in-the-loop was not performed or reported for its intended diagnostic use.
      • An "image analysis algorithm" was used in the analytical performance (precision/reproducibility) section to quantify cell features and scoring schemes objectively. This algorithm reported percentage of positive nuclei and intensity scores for system assessment, not for diagnostic claims for the algorithm itself. It was used to understand scanning variability, not to replace or assist a pathologist's diagnosis.

    7. Type of Ground Truth Used

    The ground truth for the comparison study (clinical performance) was established by expert consensus based on the average/distribution of manual microscopy readings from three board-certified pathologists for each slide. The Dako IVD FDA cleared Monoclonal Mouse Anti-Human Progesterone Receptor (Clone PgR 636) (K020023) was used for immunohistochemical staining, providing a standardized basis for the pathologists' assessment.

    8. Sample Size for the Training Set

    The document does not mention a training set for the ScanScope® XT System itself, as it is a digital slide scanner and management system, with the focus of the clinical study being on the equivalence of manual pathologist interpretation of digital slides compared to glass slides. The "image analysis algorithm" used in the analytical performance section is not presented as a component needing a separate training set for diagnostic purposes described here.

    9. How the Ground Truth for the Training Set Was Established

    As no training set is described for a diagnostic algorithm, the method for establishing its ground truth is not applicable in this context. The clinical study focuses on establishing the equivalence of the digital viewing method to conventional microscopy, with the human pathologist remaining the primary interpreter.


    K Number: K073677
    Date Cleared: 2008-08-01 (217 days)
    Product Code:
    Regulation Number: 864.1860
    Reference & Predicate Devices
    Reference Devices: K993957, K020023

    Intended Use

    The ScanScope® XT System is an automated digital slide creation, management, viewing and analysis system. It is intended for in vitro diagnostic use as an aid to the pathologist in the display, detection, counting and classification of tissues and cells of clinical interest based on particular color, intensity, size, pattern and shape.

    The IHC ER Image Analysis application is intended for use as an aid to the pathologist in the detection and quantitative measurement of ER (Estrogen Receptor) in formalin-fixed, paraffin-embedded normal and neoplastic tissue.

    The IHC PR Image Analysis application is intended for use as an aid to the pathologist in the detection and quantitative measurement of PR (Progesterone Receptor) in formalin-fixed, paraffin-embedded normal and neoplastic tissue.

    It is indicated for use as an aid in the management, prognosis, and prediction of therapy outcomes of breast cancer.

    Note: The IHC ER and PR Image Analysis applications are an adjunctive computer-assisted methodology to assist the reproducibility of a qualified pathologist in the acquisition and measurement of images from microscope slides of breast cancer specimens stained for the presence of estrogen and progesterone receptor proteins. The accuracy of the test result depends upon the quality of the immunohistochemical staining. It is the responsibility of a qualified pathologist to employ appropriate morphological studies and controls as specified in the instructions for the ER and PR reagent/kit used to assure the validity of the IHC ER and PR Image Analysis application assisted scores.

    Device Description

    The system comprises a ScanScope® XT digital slide scanner instrument and a computer system executing Spectrum software. The system's capabilities include digitizing microscope slides at diagnostic resolution; storing and managing the resulting digital slide images; retrieving and displaying digital slides, including support for remote access over wide-area networks; annotating digital slides and entering and editing associated metadata; and performing image analysis of digital slides, including the ability to quantify characteristics useful to pathologists, such as measuring and scoring immunohistochemical stains applied to histology specimens. One example is Dako ER/PR, which reveals the presence of ER (Estrogen Receptor) and PR (Progesterone Receptor) protein expression and may be used to help determine patient treatment for breast cancer.

    Hardware Operation: The ScanScope XT digital slide scanner creates seamless, true-color digital slide images of entire glass slides in a matter of minutes. A high numerical aperture 20x objective, as found on conventional microscopes, is used to produce high-quality images. (When the 2X magnification changer is inserted, the effective magnification of the images is 40X.) The ScanScope XT employs a linear-array scanning technique that generates images free from optical aberrations along the scanning axis. The result is digital slide images that are seamless and have no tiling artifacts.

    Software Operation: The Spectrum software is a full-featured digital pathology management system. The software runs on a server computer called a Digital Slide Repository (DSR), which stores digital slide images on disk storage such as a RAID array, and which hosts an SQL database that contains digital slide metadata. Spectrum includes a web application and services which encapsulate database and digital slide image access for other computers. The Spectrum server supports the capability of running a variety of image analysis algorithms on digital slides, and storing the results of analysis into the database. Spectrum also includes support for locally or remotely connected image workstation computers, which run digital slide viewing and analysis software provided as part of Spectrum.
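    As a rough illustration of the kind of records such a repository manages, the sketch below defines hypothetical digital-slide and analysis-result structures; the field names and values are assumptions, not Spectrum's actual database schema.

```python
# Illustrative sketch only: the kind of metadata and analysis-result records a
# digital slide repository like the one described above might keep. Field names
# are assumptions, not Spectrum's actual schema.
from dataclasses import dataclass, field

@dataclass
class DigitalSlide:
    slide_id: str
    image_path: str                        # location on the repository's disk storage
    scanner: str = "ScanScope XT"
    annotations: list[str] = field(default_factory=list)

@dataclass
class AnalysisResult:
    slide_id: str
    algorithm: str                         # e.g. an IHC PR image-analysis algorithm
    pct_positive_nuclei: float
    avg_intensity_score: float

slide = DigitalSlide("S-0001", "/dsr/images/S-0001.tif")   # hypothetical path
result = AnalysisResult(slide.slide_id, "IHC PR", 72.4, 1.8)
```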

    Overview of System Operation: The laboratory technician or operator loads glass microscope slides into a specially designed slide carrier with a capacity of up to 120 slides. The scanning process begins when the operator starts the ScanScope scanner and finishes when the scanner has completed scanning of all loaded slides. As each glass slide is processed, the system automatically stores individual "striped" images of the tissue contained on the glass slide and integrates the striped images into a single digital slide image, which represents a histological reconstruction of the entire tissue section. After scanning is completed, the operator is able to view and perform certain analytical tests on the digital slides.
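    A minimal NumPy sketch of the reconstruction idea described above: individually captured image stripes are concatenated along the scan axis into a single slide image. The stripe dimensions and concatenation axis are assumptions; real scanning also involves focus control, stripe alignment, and compression, which are omitted here.

```python
# Minimal sketch (illustrative dimensions): concatenating scanned image stripes
# into a single digital slide image. Real systems also handle focus, stripe
# alignment, and compression, which are omitted here.
import numpy as np

stripe_height, slide_width = 1000, 4000          # pixels, illustrative
stripes = [np.zeros((stripe_height, slide_width, 3), dtype=np.uint8)
           for _ in range(5)]                    # five captured stripes

digital_slide = np.concatenate(stripes, axis=0)  # stack stripes along the scan axis
print(digital_slide.shape)                       # (5000, 4000, 3)
```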

    AI/ML Overview

    Here's a summary of the acceptance criteria and the study that proves the device meets them, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria are implied through the results presented, which aim to demonstrate substantial equivalence to manual microscopy. The study primarily focuses on inter-pathologist agreement for both manual microscopy and the device's image analysis, as well as agreement between manual microscopy and the device's image analysis. Precision studies also demonstrate the device's consistency.

    Given the document structure, the "acceptance criteria" appear to be defined not as specific numerical thresholds prior to the study, but rather by demonstrating that the device performs comparably to manual microscopy and shows acceptable levels of precision. The reported performance shows the ranges of agreement found.

    | Metric | Acceptance Criteria (Implied by Comparison) | Reported Device Performance (Range) |
    | --- | --- | --- |
    | ER percentage of positive nuclei | Inter-pathologist agreement comparable to manual microscopy; agreement between manual microscopy and AI comparable to inter-pathologist manual agreement | Inter-pathologist (AI): 93.8%-98.8%; inter-pathologist (manual): 91.3%-98.8%; manual vs. AI: 92.5%-97.5% |
    | ER intensity score | Same as above | Inter-pathologist (AI): 88.8%-90.0%; inter-pathologist (manual): 55.0%-86.3%; manual vs. AI: 63.8%-86.3% |
    | PR percentage of positive nuclei | Same as above | Inter-pathologist (AI): 85.0%-99.0%; inter-pathologist (manual): 83.8%-99.0%; manual vs. AI: 81.3%-99.0% |
    | PR intensity score | Same as above | Inter-pathologist (AI): 68.8%-88.0%; inter-pathologist (manual): 58.8%-88.0%; manual vs. AI: 58.8%-84% |
    | ER percentage of positive nuclei precision (intra-system) | Low standard deviation and range across runs | Overall SD: 0.31% (max 0.74%); average range: 0.71% (max 2.25%) |
    | ER intensity score precision (intra-system) | Low standard deviation and range across runs | Overall SD: 0.67 (max 1.45); average range: 1.18 (max 4.88) |
    | PR percentage of positive nuclei precision (intra-system) | Low standard deviation and range across runs | Overall SD: 0.54% (max 1.47%); average range: 1.06% (max 4.78%) |
    | PR intensity score precision (intra-system) | Low standard deviation and range across runs | Overall SD: 0.9 (max 1.60); average range: 2.48 (max 4.27) |
    | ER percentage of positive nuclei precision (inter-system) | Minimal variation across different ScanScope systems | Overall average SD: 0.55% (max 1.05%); average range: 1.44% (max 4.02%) |
    | ER intensity score precision (inter-system) | Minimal variation across different ScanScope systems | Overall average SD: 1.22% (max 3.07%); average range: 2.37% (max 8.91%) |
    | PR percentage of positive nuclei precision (inter-system) | Minimal variation across different ScanScope systems | Overall average SD: 0.87% (max 1.57%); average range: 2.54% (max 8.13%) |
    | PR intensity score precision (inter-system) | Minimal variation across different ScanScope systems | Overall average SD: 1.35% (max 2.03%); average range: 4.55% (max 6.86%) |
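    To illustrate how agreement percentages like those in the table above can be tallied, the sketch below counts two reads of the same slide as agreeing when they fall into the same scoring category. The category boundaries and the pairing of reads are assumptions for illustration, not the submission's actual statistical method.

```python
# Sketch of one plausible agreement calculation: two reads of the same slide
# "agree" when they fall in the same scoring category. Category boundaries
# and example scores are assumptions for illustration.
def category(pct_positive: float) -> str:
    if pct_positive == 0:
        return "0%"
    if pct_positive < 10:
        return "1-9%"
    if pct_positive < 50:
        return "10-49%"
    return "50-100%"

manual   = [0.0, 4.0, 22.0, 65.0, 90.0]   # one pathologist's manual scores
assisted = [0.0, 7.0, 52.0, 55.0, 88.0]   # image-analysis scores, same slides

agree = sum(category(m) == category(a) for m, a in zip(manual, assisted))
print(f"Agreement: {100.0 * agree / len(manual):.1f}%")   # 80.0% in this toy example
```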

    2. Sample Size for the Test Set and Data Provenance

    • ER Study Test Set: 80 formalin-fixed, paraffin-embedded breast tissue specimens.
      • Data Provenance: Retrospective, from a single CLIA-qualified clinical site in the US (implied by CLIA qualification, as it's a US regulatory standard). Specimens were "selected based on their clinical scores on file."
    • PR Study Test Set: 180 formalin-fixed, paraffin-embedded breast tissue specimens.
      • Data Provenance: Retrospective, from two CLIA-qualified clinical sites in the US. 80 slides from the first site (selected based on clinical scores) and 100 slides from the second site (routine clinical specimens, representing the target population).

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    • Number of Experts: Three board-certified pathologists at each clinical site.
    • Qualifications: "Board-certified staff pathologists" at CLIA-qualified clinical sites. (No specific years of experience are mentioned).

    4. Adjudication Method for the Test Set

    The document describes a form of expert consensus and comparison rather than a strict adjudication to arrive at a single "ground truth" value for the test set.

    • For Manual Microscopy: Three different board-certified pathologists at each clinical site performed a blinded manual review of each glass slide, reporting the percentage of positive nuclei and the average intensity score. The study then uses the "manual microscopy average percentages of positive nuclei from the three pathologists" and "manual microscopy average intensity scores from the three pathologists" for comparisons. This suggests an averaging approach rather than a specific adjudication rule (e.g., a 2-out-of-3 majority, or a 3+1 design in which a fourth expert adjudicates disagreements).
    • For Image Analysis: Each of the three pathologists outlined tumor regions on digital slides (blinded from each other and from image analysis results). Image analysis was then performed on each set of outlined regions, resulting in a separate image analysis score for each of the three pathologists. No formal adjudication is described to combine these three algorithm scores into a single "ground truth" for the algorithm; rather, the agreement between the pathologists' manual scores and their respective image analysis scores is evaluated, as well as inter-pathologist agreement for both methods.
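    To make that pairing concrete, the sketch below lines up each pathologist's manual score with the image-analysis score computed on that same pathologist's outlined regions; the reader names and values are invented for illustration.

```python
# Illustrative pairing only: each pathologist's manual % positive nuclei score
# alongside the image-analysis score computed on the regions that pathologist
# outlined. Names and values are invented.
pairs = {
    "pathologist_A": {"manual": 70.0, "image_analysis": 68.5},
    "pathologist_B": {"manual": 65.0, "image_analysis": 67.0},
    "pathologist_C": {"manual": 75.0, "image_analysis": 71.2},
}

for reader, scores in pairs.items():
    diff = scores["image_analysis"] - scores["manual"]
    print(f"{reader}: manual={scores['manual']:.1f}%, "
          f"image analysis={scores['image_analysis']:.1f}%, diff={diff:+.1f}%")
```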

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, and its effect size:

    • This was a type of MRMC study: multiple readers (pathologists) evaluated multiple cases (slides) both manually and with the device's image analysis, which was run in batch after the pathologists outlined tumor regions so that its results could not influence their region selection.
    • Effect Size: The document does not report an effect size for how much human readers improve with AI vs. without AI assistance. Instead, it reports agreement percentages between pathologists' manual scores, between pathologists' AI-assisted scores, and between a pathologist's manual score and their corresponding AI-assisted score. The study's focus was on demonstrating substantial equivalence and agreement, not on measuring reader improvement with assistance.

    6. If a standalone (i.e., algorithm-only without human-in-the-loop performance) was done:

    • Yes, in part. The image analysis algorithm reported the percentage of positive nuclei and average intensity score for each digital slide. However, the input to the algorithm (the tumor regions) was still defined by human pathologists. The critical step of selecting the region of interest was human-in-the-loop, even if the quantitative analysis within that region was standalone.
    • The document states: "Image analysis was run in batch processing mode completely separated from the pathologists outlining the tumor regions to avoid influencing the pathologists in their choice of tumor regions." This clarifies that the numerical output of the algorithm was standalone for a given region, but the selection of that region itself was pathologist-driven.

    7. The Type of Ground Truth Used

    • Expert Consensus (Averaged): For comparing the device performance, the "ground truth" for manual microscopy was established by taking the average percentage of positive nuclei and average intensity scores from three board-certified pathologists. This serves as the reference against which the digital system's performance (also tied to pathologist-defined regions) is compared.
    • Not an independent pathology report, but based on pathology scores: While pathology slides were used, the ground truth was not an independent pathology report in the traditional sense; rather, it was the statistically combined scores of the evaluating pathologists who were part of the study.
    • Not Outcomes Data: The study did not use patient outcomes data.

    8. The Sample Size for the Training Set

    • The document does not specify the sample size used for the training set for the image analysis algorithm. The provided information focuses entirely on the clinical validation study (test set).

    9. How the Ground Truth for the Training Set was Established

    • The document does not provide information on how the ground truth for the training set was established, as the training set details are not described.