
510(k) Data Aggregation

    K Number: K241232
    Date Cleared: 2025-01-24 (267 days)
    Product Code: (not listed)
    Regulation Number: 864.3750
    Reference & Predicate Devices: (not listed)
    Predicate For: N/A
    Intended Use

    Galen™ Second Read™ is a software only device intended to analyze scanned histopathology whole slide images (WSIs) from prostate core needle biopsies (PCNB) prepared from hematoxylin & eosin (H&E) stained formalin-fixed paraffin embedded (FFPE) tissue. The device is intended to identify cases initially diagnosed as benign for further review by a pathologist. If Galen™ Second Read™ detects tissue morphology suspicious for prostate adenocarcinoma (AdC), it provides case- and slide-level alerts (flags), which include a heatmap of tissue areas in the WSI that are likely to contain cancer.

    Galen™ Second Read™ is intended to be used with slide images digitized with Philips Ultra Fast Scanner and visualized using the Galen™ Second Read™ user interface.

    Galen™ Second Read™ outputs are not intended to be used on a standalone basis for diagnosis, to rule out prostatic AdC or to preclude pathological assessment of WSIs according to the standard of care.

    Device Description

    The Galen Second Read is an in vitro diagnostic medical device software, derived from a deterministic deep convolutional network developed with digitized WSIs of H&E-stained prostate core needle biopsy (PCNB) slides originating from formalin-fixed paraffin-embedded (FFPE) tissue sections that were initially diagnosed as benign by the pathologist.

    The Galen Second Read is cloud-hosted and utilizes external accessories [e.g., scanner and image management systems (IMS)] for automatic ingestion of the input. The device identifies WSIs that are more likely to contain prostatic adenocarcinoma (AdC). For each input WSI, the Galen Second Read automatically analyzes the WSI and outputs the following:

    • Binary classification of the likelihood (high/low) to contain AdC based on a predetermined threshold of the neural network output.
    • For slides classified with high likelihood to contain AdC, slide-level findings are flagged and visualized (AdC score and heatmap) for additional review by a pathologist alongside the WSI.
    • For slides classified as low likelihood to contain AdC, no additional output is available. (A minimal sketch of this output logic follows the list.)
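    The output logic described above reduces to thresholding a per-slide network score. Below is a minimal Python sketch of that logic under assumed names; the real model, score scale, threshold value, and case-level rule are not disclosed in the summary.

```python
# Illustrative sketch only: the actual model, score scale, and threshold
# used by Galen Second Read are not disclosed in the 510(k) summary.
THRESHOLD = 0.5  # hypothetical predetermined operating point

def classify_slide(adc_score: float) -> dict:
    """Map a neural-network AdC score to the described slide-level output."""
    if adc_score >= THRESHOLD:
        # High likelihood: flag the slide; AdC score and heatmap are surfaced.
        return {"flagged": True, "adc_score": adc_score}
    # Low likelihood: no additional output is produced.
    return {"flagged": False}

def case_alert(slide_scores: list[float]) -> bool:
    """Case-level alert; assumed here to fire if any slide in the case is
    flagged (the summary does not spell out the case-level rule)."""
    return any(classify_slide(s)["flagged"] for s in slide_scores)
```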

    Galen Second Read key functionalities include image upload and analysis, flagging of slides with a high likelihood to contain AdC, and display of all WSIs uploaded to the system alongside their analysis results. Flagged findings constitute a recommendation for additional review by a pathologist.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study information for the Galen™ Second Read™ device, based on the provided text:

    Acceptance Criteria and Device Performance

    The document does not explicitly state pre-defined acceptance criteria with specific numerical targets. Instead, it presents the device's performance metrics from clinical studies. The implied acceptance criteria are that the device should improve the detection of prostatic adenocarcinoma (AdC) in initially benign cases when assisting pathologists.

    Here are the reported device performance metrics from the provided studies:

    Table 1: Device Performance (Clinical Study 1 - Standalone Performance)

    | Level       | Parameter   | Estimate | 95% CI         | Context                                          |
    |-------------|-------------|----------|----------------|--------------------------------------------------|
    | Slide-level | Sensitivity | 81.0%    | (69.2%; 92.9%) | Ability to correctly identify GT-positive slides |
    | Slide-level | Specificity | 91.6%    | (90.9%; 92.3%) | Ability to correctly identify GT-negative slides |
    | Case-level  | Sensitivity | 80.8%    | (74.1%; 87.6%) | Ability to correctly identify GT-positive cases  |
    | Case-level  | Specificity | 46.9%    | (39.5%; 54.3%) | Ability to correctly identify GT-negative cases  |
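    The summary does not say how these 95% CIs were computed. For orientation, the sketch below derives intervals of this shape from raw counts using the Wilson score method; the counts are hypothetical, since the summary reports only the resulting rates.

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion (z = 1.96 for 95%)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Hypothetical counts for illustration only; not the study's actual data.
tp, fn = 80, 20    # ground-truth positive slides: called positive / negative
tn, fp = 900, 100  # ground-truth negative slides: called negative / positive

print(f"Sensitivity {tp / (tp + fn):.1%}, 95% CI {wilson_ci(tp, tp + fn)}")
print(f"Specificity {tn / (tn + fp):.1%}, 95% CI {wilson_ci(tn, tn + fp)}")
```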

    Table 2: Device Performance (Clinical Study 2 - Human-in-the-Loop Performance)

    | Population | Parameter | With Galen Second Read AI Assistance | With Standard of Care (SoC) | Difference | 95% CI (Difference) |
    |---|---|---|---|---|---|
    | Combined pathologists (overall) | Sensitivity | 93.9% | 90.5% | 3.5% | (2.3%; 4.5%) |
    | Combined pathologists (overall) | Specificity | 87.9% | 91.1% | -3.2% | (-4.3%; -1.9%) |
    | Slides initially assessed as benign by pathologists | Sensitivity | 36.3% | 0% | 36.3% | (28.0%; 45.5%) |
    | Slides initially assessed as benign by pathologists | Specificity | 96.5% (95% CI: 95.2%; 97.5%) | 100% | approx. -3.5% | not reported |

    Note: the source pairs the approximate -3.5% specificity difference with the interval (95.2%; 97.5%); that interval can only be the CI of the 96.5% assisted-read specificity, not of the difference, and the table reflects that reading.
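    The CI method for the difference column is likewise unnamed. A generic nonparametric case-resampling bootstrap, sketched below with an assumed data layout, is one standard way to obtain such paired-difference intervals.

```python
import random

def bootstrap_sens_diff_ci(gt, soc, ai, iters=10_000, seed=0):
    """Bootstrap 95% CI for the difference in sensitivity (AI-assisted - SoC).

    gt, soc, ai: parallel per-case lists; ground truth and the two reads
    of the same cases, coded 1 = positive, 0 = negative (assumed layout).
    """
    rng = random.Random(seed)
    pos = [i for i, g in enumerate(gt) if g == 1]  # GT-positive cases only
    diffs = []
    for _ in range(iters):
        # Resample GT-positive cases with replacement.
        sample = [pos[rng.randrange(len(pos))] for _ in pos]
        sens_ai = sum(ai[i] for i in sample) / len(sample)
        sens_soc = sum(soc[i] for i in sample) / len(sample)
        diffs.append(sens_ai - sens_soc)
    diffs.sort()
    return diffs[int(0.025 * iters)], diffs[int(0.975 * iters)]
```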

    Study Information:

    1. Sample Size and Data Provenance

    Analytical Performance Studies (Precision and Localization):

    • Sample Size: Not explicitly stated as a single number for these studies. The tables show "n/N" values for positive and negative slides. For repeatability, there were 39 positive slides and 38 negative slides in each run (total for repeatability: 3 runs * 39 positive + 3 runs * 38 negative = 231 slide-reads). For reproducibility, it was also based on "39" and "38" slides for each scanner/operator combination.
    • Data Provenance: Retrospectively collected, de-identified slides.
    • Country of Origin: Not specified for these analytical studies.

    Clinical Performance Study 1 (Standalone Performance):

    • Sample Size: 347 cases (initially diagnosed as benign) with associated whole slide images (WSIs).
    • Data Provenance: Retrospectively collected samples.
    • Country of Origin: Three sites, including 2 US sites and 1 Outside the US (OUS) site.

    Clinical Performance Study 2 (Human-in-the-Loop Performance):

    • Sample Size: 772 cases/slides (376 negative cases and 396 positive cases).
    • Data Provenance: Retrospectively collected slides.
    • Country of Origin: Four sites, including 3 US sites and 1 OUS site.

    2. Number of Experts and Qualifications for Test Set Ground Truth

    Analytical Performance Studies:

    • Number of Experts: Not explicitly stated, but "GT determined as 'positive', or 'benign' by the GT pathologists" implies multiple pathologists.
    • Qualifications: "GT pathologists" - no specific experience level mentioned.

    Clinical Performance Study 1 (Standalone Performance):

    • Number of Experts: Two independent expert pathologists for initial review, with a third independent expert pathologist for tie-breaking.
    • Qualifications: "Independent expert pathologists" - no specific experience level mentioned.

    Clinical Performance Study 2 (Human-in-the-Loop Performance):

    • Number of Experts: Not explicitly detailed for the GT determination for this specific study, but it is likely consistent with Study 1's method, as it shares similar retrospective data characteristics.
    • Qualifications: Not explicitly detailed for the GT determination for this specific study.

    3. Adjudication Method for the Test Set

    Clinical Performance Study 1 (Standalone Performance):

    • Adjudication Method: 2+1 (two independent expert pathologists, with a third independent expert pathologist reviewing disagreements; the majority determined the final ground truth).
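    As described, the 2+1 rule is a majority vote in which the third expert reads only the discordant slides; a minimal sketch:

```python
from typing import Callable

def two_plus_one(read1: str, read2: str, third_read: Callable[[], str]) -> str:
    """2+1 adjudication: two primary expert reads; a third read on disagreement.

    `third_read` is called lazily so the tie-breaking pathologist reviews
    only discordant slides; the majority label becomes the ground truth.
    """
    if read1 == read2:
        return read1
    return third_read()  # matches exactly one primary read, forming a majority

gt = two_plus_one("positive", "benign", lambda: "benign")  # -> "benign"
```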

    Analytical Performance Studies & Clinical Performance Study 2:

    • Adjudication Method: Not explicitly detailed, but implied to be expert consensus.

    4. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Yes, an MRMC comparative effectiveness study was done (Clinical Performance Study 2).
    • Effect Size of Human Readers Improvement with AI vs. without AI Assistance:
      • Sensitivity: Combined pathologist sensitivity improved by 3.5 percentage points (95% CI: 2.3%; 4.5%) with Galen Second Read assistance compared to SoC.
      • Specificity: Combined pathologist specificity decreased by 3.2 percentage points (95% CI: -4.3%; -1.9%) with Galen Second Read assistance compared to SoC.
      • For slides initially assessed as benign by pathologists (the intended-use population), sensitivity increased by 36.3 percentage points (from 0% under SoC to 36.3% with Galen Second Read), while specificity decreased by about 3.5 percentage points (from 100% to 96.5%). (A toy sketch of the per-reader aggregation follows.)
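    For concreteness, headline MRMC numbers like these are averages of paired per-reader differences; a toy sketch of that aggregation (the data layout is an assumption, not the study's analysis code):

```python
def sensitivity(calls: list[int], gt: list[int]) -> float:
    """Fraction of ground-truth-positive cases the reader called positive."""
    pos = [(c, g) for c, g in zip(calls, gt) if g == 1]
    return sum(c for c, _ in pos) / len(pos)

def mean_sensitivity_gain(ai_reads, soc_reads, gt) -> float:
    """Average over readers of (AI-assisted sensitivity - SoC sensitivity)."""
    gains = [sensitivity(a, gt) - sensitivity(s, gt)
             for a, s in zip(ai_reads, soc_reads)]
    return sum(gains) / len(gains)
```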

    5. Standalone Performance Study

    • Yes, a standalone study (algorithm only, without human-in-the-loop performance) was done (Clinical Performance Study 1).
    • The results are shown in "Table 1: Device Performance (Clinical Study 1 - Standalone Performance)" above.

    6. Type of Ground Truth Used

    • Expert Consensus: For both clinical performance studies, the ground truth for slides was established by expert pathologists via a consensus process (two independent experts, with a third for adjudication in cases of disagreement). The ground truth for cases was derived from the slide-level ground truth.
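    The exact case-level derivation rule is not spelled out; the natural reading, assumed in this sketch, is that a case is positive if any of its slides is positive:

```python
def case_ground_truth(slide_gt: dict[str, list[bool]]) -> dict[str, bool]:
    """Derive case-level GT from slide-level GT.

    Assumption: a case is GT-positive iff at least one of its slides is
    GT-positive; the summary says only that case GT was "derived from the
    slide-level ground truth".
    """
    return {case: any(slides) for case, slides in slide_gt.items()}
```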

    7. Sample Size for the Training Set

    • Not provided in the document. The document describes the device as a "deterministic deep convolutional network that has been developed with digitized WSIs...". However, it does not state the specific sample size, origin, or characteristics of the training dataset.

    8. How Ground Truth for the Training Set Was Established

    • Not provided in the document. While it mentions the network was "developed with digitized WSIs," details on how the ground truth for these training images was established are not included in the provided text.

    K Number: DEN200080
    Device Name: Paige Prostate
    Manufacturer: (not listed)
    Date Cleared: 2021-09-21 (264 days)
    Product Code: (not listed)
    Regulation Number: 864.3750
    Type: Direct
    Reference & Predicate Devices: N/A
    Predicate For: N/A
    Intended Use

    Paige Prostate is a software only device intended to assist pathologists in the detection of foci that are suspicious for cancer during the review of scanned whole slide images (WSI) from prostate needle biopsies prepared from hematoxylin & eosin (H&E) stained formalin-fixed paraffin embedded (FFPE) tissue. After initial diagnostic review of the WSI by the pathologist, if Paige Prostate detects tissue morphology suspicious for cancer, it provides the coordinates (X,Y) of a single location on the image with the highest likelihood of having cancer for further review by the pathologist.

    Paige Prostate is intended to be used with slide images digitized with Philips Ultra Fast Scanner and visualized with Paige FullFocus WSI viewing software.

    Paige Prostate is an adjunctive computer-assisted methodology and its output should not be used as the primary diagnosis. Pathologists should only use Paige Prostate in conjunction with their complete standard of care evaluation of the slide image.

    Device Description

    Paige Prostate is an in vitro diagnostic medical device software, derived from a deterministic deep learning system that has been developed with digitized WSIs of H&E stained prostate needle biopsy slides.

    Paige Prostate utilizes several accessory devices for automated ingestion of the input. The device identifies areas suspicious for cancer on the input WSIs. For each input WSI, Paige Prostate automatically analyzes the WSI and outputs the following:

    • Binary classification of suspicious or not suspicious for cancer, based on a pre-defined threshold on the neural network output.
    • If the slide is classified as suspicious for cancer, a single coordinate (X,Y) of the location with the highest probability of cancer on that image (a minimal sketch of this selection follows the list).
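    Reporting a single coordinate (X,Y) of the highest-probability location is, in effect, an argmax over a spatial probability map; a minimal numpy sketch (the map and its resolution are assumptions, not the vendor's internals):

```python
import numpy as np

def most_suspicious_xy(prob_map: np.ndarray) -> tuple[int, int]:
    """Return (x, y) of the highest-probability location.

    prob_map: 2-D array of per-location cancer probabilities
    (rows indexed by y, columns by x).
    """
    y, x = np.unravel_index(np.argmax(prob_map), prob_map.shape)
    return int(x), int(y)
```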
    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study details for Paige Prostate, based on the provided text:


    1. Acceptance Criteria and Reported Device Performance

    | Acceptance Criteria (Study) | Reported Device Performance | Comments |
    |---|---|---|
    | Algorithm Localization (X,Y Coordinate) and Accuracy Study | Sensitivity: 94.5% (95% CI: 91.4%; 96.6%); Specificity: 94.0% (95% CI: 91.3%; 95.9%) | Evaluated the standalone performance of the algorithm in identifying and localizing suspicious foci. |
    | Precision Study (Within-scanner) | Probability of a "Cancer" result on cancer slides with the same scanner/operator: 99.0% (95% CI: 94.8%; 99.8%); probability of a "Benign" result on benign slides: 94.4% (95% CI: 88.4%; 97.4%) | Assessed output consistency under repeated scans by the same operator on the same scanner. |
    | Precision Study (Reproducibility: between-scanner and between-operator) | Probability of a "Cancer" result on cancer slides with different scanners/operators: 100% (95% CI: 96.5%; 100%); probability of a "Benign" result on benign slides: 93.5% (95% CI: 87.2%; 96.8%) | Assessed output consistency across different scanners and operators. |
    | Localization Precision Study | Location correct (within-scanner, Op1/Sc1): 98.2% (56/57) (95% CI: 90.7%; 99.7%); location correct (3 scanners, 3 operators): 96.4% (53/55) (95% CI: 87.7%; 99.0%) | Focused specifically on the precision of the (X,Y) coordinate localization. |
    | Clinical Study (Pathologist Performance with AI Assistance) | Average improvement in sensitivity: 7.3 percentage points (95% CI: 3.9%; 11.4%), statistically significant; average difference in specificity: 1.1 percentage points (95% CI: -0.7%; 3.4%), not statistically significant | Measured the efficacy of Paige Prostate as an adjunctive tool for pathologists. "Positive" was defined as 'deferred' or 'cancer'; 'negative' as 'benign'. |
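    The precision figures above are pooled proportions of repeated reads returning the expected label; a sketch of that computation under an assumed data layout:

```python
def repeatability(repeated_reads: dict[str, list[str]], expected: str) -> float:
    """Pooled probability that repeated reads of the same slides return `expected`.

    repeated_reads: slide_id -> device labels from repeated scans (e.g., the
    same scanner/operator for within-scanner precision, or different
    scanner/operator pairs for reproducibility).
    """
    reads = [r for labels in repeated_reads.values() for r in labels]
    return sum(r == expected for r in reads) / len(reads)
```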

    Study Information

    2. Sample size used for the test set and the data provenance:

    • Algorithm Localization and Accuracy Study:

      • Test Set Size: 728 WSIs (311 cancer, 417 benign) from unique patients.
      • Provenance: De-identified WSIs from:
        • Consecutive prostate cancer slides from an internal site (located in US).
        • Challenging cancer slides (≤0.5mm tumor) from an internal site.
        • Consecutive cancer slides submitted from external sites.
        • Challenging cancer slides submitted from external sites.
        • Benign slides from consecutive prostate biopsy cases from an internal site.
        • Consecutive benign slides submitted from external sites (submitted to internal site for expert consultation).
      • External Sites: Included 217 different sites located throughout the world (including US).
      • Retrospective/Prospective: Retrospective.
    • Precision Study:

      • Test Set Size: 35 cancer WSIs and 36 benign WSIs from unique patients.
      • Provenance: Slides from an internal site and external sites (217 different sites).
      • Retrospective/Prospective: Retrospective.
    • Clinical Study:

      • Test Set Size: 527 WSIs (171 prostate cancer, 356 benign) from unique patients.
      • Provenance: 44.15% from cases prepared, reviewed, diagnosed, and digitized at the internal site (US). 55.85% from cases prepared at 156 different external sites but reviewed, diagnosed, and digitized at the internal site.
      • Retrospective/Prospective: Retrospective.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Algorithm Localization and Accuracy Study (Localization Ground Truth):

      • Number of Experts: 3 study pathologists.
      • Qualifications: US board-certified pathologists (two had completed an anatomic pathology fellowship and one was a sub-specialized genitourinary pathologist). They were blinded to Paige Prostate results.
    • Clinical Study (Ground Truth for slide-level cancer/benign):

      • Number of Experts: Not explicitly stated for ground-truth creation; the text implies the original pathologists who generated the synoptic diagnostic reports.
      • Qualifications: Pathologists at the internal site generating synoptic diagnostic reports.

    4. Adjudication method for the test set:

    • Algorithm Localization and Accuracy Study (Localization Ground Truth):

      • Adjudication Method: The union of annotations between at least 2 of the 3 annotating pathologists was used as the localization ground truth (see the sketch after this list).
    • Clinical Study (Slide-Level Cancer/Benign Ground Truth):

      • Adjudication Method: "Synoptic diagnostic reports from the internal site were used to generate the ground truth for each slide as either cancer or no cancer." This implies a single, established diagnostic report rather than a consensus process for the study's ground truth.
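    The union of annotations between at least 2 of the 3 annotating pathologists amounts to keeping every pixel marked by two or more annotators; a numpy sketch:

```python
import numpy as np

def localization_gt(m1: np.ndarray, m2: np.ndarray, m3: np.ndarray) -> np.ndarray:
    """Boolean mask of pixels annotated by at least 2 of 3 pathologists."""
    votes = m1.astype(np.uint8) + m2.astype(np.uint8) + m3.astype(np.uint8)
    return votes >= 2
```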

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:

    • Yes, an MRMC comparative effectiveness study was done (the "Clinical Study").
    • Effect Size of Improvement:
      • Average improvement in sensitivity: 7.3 percentage points (95% CI: 3.9%; 11.4%)
      • Average difference in specificity: 1.1 percentage points (95% CI: -0.7%; 3.4%)
      • The document clarifies that this is an average across 16 pathologists.

    6. If a standalone study (i.e., algorithm only, without human-in-the-loop performance) was done:

    • Yes, a standalone performance study was done. This is detailed in the "Analytical Performance" section, specifically the "Algorithm Localization (X,Y Coordinate) and Accuracy Study."
      • Sensitivity (Standalone): 94.5%
      • Specificity (Standalone): 94.0%

    7. The type of ground truth used:

    • Algorithm Localization and Accuracy Study (Slide-Level Cancer Ground Truth): Synoptic pathology diagnostic reports from the internal site.
    • Algorithm Localization and Accuracy Study (Localization Ground Truth): Consensus of 3 US board-certified pathologists who manually annotated image patches.
    • Precision Study (Slide-Level Cancer Ground Truth): Synoptic diagnostic reports from the internal site.
    • Clinical Study (Slide-Level Cancer/Benign Ground Truth): Original diagnostic synoptic reports.

    8. The sample size for the training set:

    • Training Dataset: 33,543 slide images.

    9. How the ground truth for the training set was established:

    • "De-identified slides were labeled as benign or cancer based on the synoptic diagnostic pathology report."
