The ScanScope System is an automated digital slide creation, management, viewing and analysis system. It is intended for in vitro diagnostic use as an aid to the pathologist in the display, detection, counting and classification of tissues and cells of clinical interest based on particular color, intensity, size, pattern and shape.
The IHC HER2 Breast Tissue Tunable Image Analysis application is intended for use as an aid to the pathologist in the detection and semi-quantitative measurement of HER2/neu (c-erbB-2) in formalin-fixed, paraffin-embedded normal and neoplastic tissue.
The IHC HER2 Breast Tissue Tunable Image Analysis application is intended for use as an accessory to the Dako HercepTest™ to aid in the detection and semi-quantitative measurement of HER2/neu (c-erbB-2) in formalin-fixed, paraffin-embedded normal and neoplastic tissue. It is indicated for use as an aid in the assessment of breast cancer patients for whom HERCEPTIN® (Trastuzumab) treatment is being considered.

Note: The IHC HER2 Breast Tissue Tunable Image Analysis application is an adjunctive computer-assisted methodology to assist the reproducibility of a qualified pathologist in the acquisition and measurement of images from microscope slides of breast cancer specimens stained for the presence of HER2 receptor protein. The accuracy of the test result depends upon the quality of the immunohistochemical staining. It is the responsibility of a qualified pathologist to employ appropriate morphological studies and controls, as specified in the instructions for the HER2 reagent/kit used, to assure the validity of the application-assisted HER2/neu score. The actual correlation of the HER2 reagents/kits to Herceptin® clinical outcome has not been established.
The ScanScope® XT System is an automated digital slide creation, management, viewing and analysis system. The system comprises a slide scanner instrument and a computer executing Spectrum™ software. The system's capabilities include:
- digitizing microscope slides at diagnostic resolution;
- storing and managing the resulting digital slide images;
- retrieving and displaying digital slides, including support for remote access over wide-area networks;
- annotating digital slides and entering and editing metadata associated with digital slides;
- performing image analysis of digital slides, including the ability to quantify characteristics useful to pathologists, such as measuring and scoring immunohistochemical stains applied to histology specimens (e.g., the Dako HercepTest™, which reveals the presence of proteins such as Human Epidermal growth factor Receptor 2 (HER2)) that may be used to determine patient treatment for breast cancer.
Here's an analysis of the acceptance criteria and study detailed in the provided K080564 submission for the Aperio ScanScope® XT System:
1. Table of Acceptance Criteria and Reported Device Performance
The submission focuses on demonstrating substantial equivalence rather than on predefined acceptance criteria in the traditional sense of specific performance targets for accuracy or sensitivity. Instead, the "acceptance criteria" are implicitly met by demonstrating performance comparable to manual microscopy and superior inter-pathologist agreement. The primary performance metric presented is Percent Agreement (PA).
| Performance Metric | Acceptance Criteria (Implicit) | Reported Device Performance (IHC HER2 Breast Tissue Tunable Image Analysis application) |
|---|---|---|
| Inter-Pathologist Agreement (Manual Microscopy) | To be comparable to what is expected for manual microscopy. | Ranged from 65.0% to 91.3% (with 95% CI from 53.5% to 96.4%) |
| Inter-Pathologist Agreement (Image Analysis) | To be comparable to manual microscopy and ideally show improvement. | Ranged from 85.0% to 94.0% (with 95% CI from 76.5% to 97.8%) |
| Manual Microscopy vs. Image Analysis Agreement (Same Pathologist) | To demonstrate good agreement between the two methods when performed by the same pathologist. | Ranged from 75.0% to 90.0% (with 95% CI from 65.1% to 95.1%) |
| Precision/Reproducibility | To demonstrate high reproducibility across testing conditions. | 100% agreement for calculated HER2 scores across intra-run, inter-run, and inter-system studies. |
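The submission reports only the agreement percentages and their intervals. As a minimal sketch of how such figures can be computed (all numbers invented; the Wilson score interval is an assumption, since the excerpt does not name the CI method used):

```python
# Percent agreement between two reads, with a Wilson score 95% CI.
# The CI method is an assumption; the submission does not specify one.
from math import sqrt

def percent_agreement(scores_a, scores_b):
    """Fraction of slides where two reads assigned the same HER2 class."""
    matches = sum(a == b for a, b in zip(scores_a, scores_b))
    return matches / len(scores_a)

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (z=1.96 for 95%)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Hypothetical example: 68 of 80 slides scored identically.
lo, hi = wilson_ci(68, 80)
print(f"PA = {68 / 80:.1%}, 95% CI ({lo:.1%}, {hi:.1%})")
```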
2. Sample Size and Data Provenance
- Test Set Sample Size: 180 formalin-fixed, paraffin-embedded breast tissue specimens.
- Clinical Site 1: 80 specimens.
- Clinical Site 2: 100 specimens.
- Data Provenance: Retrospective; specimens came from two unnamed clinical sites. The country of origin is not explicitly stated in the submission.
3. Number of Experts and Qualifications for Ground Truth for the Test Set
- Number of Experts: Three (3) board-certified pathologists at each clinical site, i.e., three per site and presumably six unique pathologists across the two sites.
- Qualifications of Experts: "Board-certified pathologists." No further details on years of experience are provided.
4. Adjudication Method for the Test Set
The reference HER2 scores for the image-analysis comparison were established by manual microscopic review by three pathologists. The algorithm's score was then compared against each pathologist's individual score and, implicitly, against the pathologists' consensus (e.g., the "average HER2 scores from the three pathologists" were used to stratify slides for the algorithm training set).
For the inter-pathologist agreement, each pathologist's manual score was compared against the others (Pathologist 1 vs 2, 1 vs 3, 2 vs 3). Similarly, for the Image Analysis inter-pathologist agreement, the image analysis scores derived from each pathologist's outlined tumor regions were compared.
The initial manual microscopy average HER2 scores from the three pathologists were used to define the HER2 score distribution for the study.
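To make the pairwise scheme concrete, here is a short Python sketch with invented HER2 scores (not data from the submission):

```python
# Pairwise percent agreement among three pathologists: 1 vs 2, 1 vs 3, 2 vs 3.
from itertools import combinations

scores = {  # pathologist -> HER2 class (0, 1+, 2+, 3+) per slide; invented
    "Pathologist 1": [0, 1, 2, 3, 2, 1],
    "Pathologist 2": [0, 1, 2, 3, 3, 1],
    "Pathologist 3": [0, 2, 2, 3, 2, 1],
}

for (name_a, a), (name_b, b) in combinations(scores.items(), 2):
    pa = sum(x == y for x, y in zip(a, b)) / len(a)
    print(f"{name_a} vs {name_b}: {pa:.1%} agreement")
```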
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Status and Effect Size
No MRMC comparative effectiveness study, in the usual sense of measuring how human readers perform with versus without AI assistance, was performed. The study instead evaluates the agreement between manual microscopy and the image analysis system, and the inter-pathologist agreement for each method separately; it does not directly measure the improvement in human reader performance when AI assistance is used in a diagnostic workflow.
However, the study does compare inter-pathologist agreement between manual microscopy and image analysis.
- Manual Microscopy Inter-Pathologist Agreement: Ranged from 65.0% to 91.3%
- Image Analysis Inter-Pathologist Agreement: Ranged from 85.0% to 94.0%
This suggests that the image analysis system yields higher inter-pathologist agreement than manual microscopy performed by independent pathologists. The submission also notes, "This study shows a good example how image analysis can help Pathologists with the standardization of the scoring," and that "The variations introduced by a single pathologist by outlining different tumor regions from one read to another is 3x to 3.7x smaller than the variations introduced by different pathologists outlining different tumor regions," which supports the idea that the system could improve scoring consistency.
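To illustrate the quoted variation comparison, here is a sketch with invented image-analysis scores, chosen only so the ratio lands in the reported 3x to 3.7x range:

```python
# Compare the spread of image-analysis scores when one pathologist
# re-outlines tumor regions across reads (intra) against the spread across
# different pathologists (inter). All values are hypothetical.
from statistics import stdev

repeat_reads_one_pathologist = [2.05, 2.10, 2.15]  # same reader, re-outlined regions
reads_three_pathologists = [1.95, 2.10, 2.30]      # three different readers

ratio = stdev(reads_three_pathologists) / stdev(repeat_reads_one_pathologist)
print(f"inter/intra variation ratio ≈ {ratio:.1f}x")  # ~3.5x with these numbers
```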
6. Standalone (Algorithm Only) Performance
Yes, a standalone analysis was performed, with the caveat that the algorithm operates on pathologist-outlined tumor regions. The Image Analysis (algorithm) was run in "batch processing mode completely separated from the pathologists outlining the tumor regions to avoid influencing the pathologists in their choice of tumor regions." The agreement percentages for "Manual Microscopy vs Image Analysis - same Pathologist - Agreements" and "Image Analysis - Inter-Pathologists - Agreements" describe this algorithm-only performance relative to the human-supplied region input.
7. Type of Ground Truth Used
Expert Consensus (modified): The ground truth was based on the independent scoring of three board-certified pathologists for each slide, using manual microscopy. For the purpose of stratifying the training set, the "average HER2 score provided by three pathologists using manual microscopy" was used. For comparison studies, the algorithm's output was compared pathologist-by-pathologist to their respective manual reads and to the image analysis scores derived from their own outlined regions.
8. Sample Size for the Training Set
- Algorithm Training Set (for the comparison study): 20 HER2 slides (5 slides for each 0, 1+, 2+, and 3+ HER2 class), randomly selected from the available slides.
- Algorithm Training Set (for the separate analytical performance "Algorithm Training Set" section): 20 slides (again, 5 slides from each 0, 1+, 2+, and 3+ HER2 class, chosen via stratified-random selection) from a set of 100 HER2 slides. The remaining 80 slides formed the evaluation dataset for this separate analysis.
9. How Ground Truth for the Training Set Was Established
The ground truth for the training set was established based on the "average HER2 score from the three pathologists" using manual microscopy. These average scores were used to stratify the slides into 0, 1+, 2+, and 3+ classes from which the training slides were then selected. The algorithm was "tuned" using these selected training slides and the procedure outlined later in the submission (though the specific tuning procedure isn't fully detailed in the provided text).
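A minimal Python sketch of this ground-truth and selection process follows, with invented data; the exact rounding/binning rule is an assumption, as the submission does not spell it out:

```python
# Average three manual HER2 reads per slide, bin to a class, then draw
# 5 training slides per class (stratified-random), as described above.
import random
from collections import defaultdict

random.seed(0)

# Hypothetical data: 100 slides, three pathologists' scores (0-3) each,
# spread across the four classes for illustration.
manual_scores = {}
for i in range(100):
    base = i % 4
    manual_scores[f"slide_{i:03d}"] = [
        max(0, min(3, base + random.choice([-1, 0, 0, 1]))) for _ in range(3)
    ]

# Ground truth: average of the three reads, rounded to a HER2 class
# (assumed binning rule).
by_class = defaultdict(list)
for slide, reads in manual_scores.items():
    by_class[round(sum(reads) / len(reads))].append(slide)

# Stratified-random selection: 5 slides per class for training.
training_set = []
for her2_class in (0, 1, 2, 3):
    training_set += random.sample(by_class[her2_class], 5)

evaluation_set = sorted(set(manual_scores) - set(training_set))
print(f"{len(training_set)} training slides, {len(evaluation_set)} evaluation slides")
```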