K Number
DEN160056

Date Cleared
2017-04-12 (132 days)

Product Code
PSY

Regulation Number
864.3700

Type
Direct

Reference & Predicate Devices
N/A

Predicate For
N/A
Intended Use

The Philips IntelliSite Pathology Solution (PIPS) is an automated digital slide creation, viewing, and management system. The PIPS is intended for in vitro diagnostic use as an aid to the pathologist to review and interpret digital images of surgical pathology slides prepared from formalin-fixed paraffin embedded (FFPE) tissue. The PIPS is not intended for use with frozen section, cytology, or non-FFPE hematopathology specimens.

The PIPS comprises the Image Management System (IMS), the Ultra Fast Scanner (UFS) and Display. The PIPS is for creation and viewing of digital images of scanned glass slides that would otherwise be appropriate for manual visualization by conventional light microscopy. It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the validity of the interpretation of images obtained using PIPS.

Device Description

The Philips IntelliSite Pathology Solution (PIPS) is an automated digital slide creation, management, viewing and analysis system.

The PIPS consists of two subsystems and a display:

  • Ultra Fast Scanner (UFS) (for software UFS1.7.1.1);
  • Image Management System (IMS) (for software IMS2.5.1.1);
  • Display (PS27QHDCR).

The UFS consists of optical, mechanical, electronic and software elements to scan FFPE tissue mounted on glass slides at a resolution of 0.25 µm per pixel, which is equivalent to a 40x objective, to create digital Whole Slide Images (WSI). The UFS has a capacity of 300 slides (15 glass slide racks with up to 20 slides per rack). After the slide racks are loaded into the UFS, the UFS automatically detects and starts scanning the slides. CCD cameras are used to capture color images from the back-lit tissue specimen. An LED light source employs top-lit illumination to capture the barcode and back-lit illumination for tissue scanning. The stage (STG) and Image Capturing Unit (ICU) are fixed to each other and to the base frame to ensure correct positioning of the slide and to suppress external disturbances. Proprietary software is used for image processing during acquisition. Philips' proprietary format, iSyntax, is used to store and transmit the images between the UFS and the IMS.
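To make the stated scan resolution concrete, the sketch below estimates the pixel dimensions of a WSI. Only the 0.25 µm-per-pixel figure comes from the text; the tissue dimensions are hypothetical, chosen for illustration.

```python
# Back-of-the-envelope WSI size estimate from the stated UFS resolution.
# Only the 0.25 um/pixel figure is from the document; the 15 mm x 15 mm
# tissue area below is a hypothetical example.

RESOLUTION_UM_PER_PX = 0.25  # stated UFS resolution (~40x objective)

def wsi_pixel_dimensions(width_mm: float, height_mm: float) -> tuple[int, int]:
    """Pixel dimensions of a region scanned at the stated resolution."""
    px_per_mm = 1000 / RESOLUTION_UM_PER_PX  # 4000 pixels per millimetre
    return round(width_mm * px_per_mm), round(height_mm * px_per_mm)

w, h = wsi_pixel_dimensions(15, 15)  # hypothetical 15 mm x 15 mm tissue region
print(f"{w} x {h} px (~{w * h / 1e9:.1f} gigapixels)")  # 60000 x 60000 px (~3.6 GP)
```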

The IMS is a software-only subsystem to be used with the Display. Functionality of the IMS includes the ability to view images, organize workload, and annotate and bookmark scanned images. The user manual for PIPS specifies compatible computer environment hardware and software that is not included as part of the system.

The different subsystems of the PIPS are connected over an IT network at the user site. The IT hardware/software that supports the IMS Application Server & Storage software is not provided as part of the PIPS, but may be located in a central server room separate from the workstation with the IMS viewing software and Display. The communication of data between UFS and IMS is via a customer provided wired network or a direct connected cable between these subsystems. PIPS includes a display that has been validated as part of the pivotal clinical study.

The PIPS allows pathologists to view and evaluate digital images of formalin-fixed, paraffin-embedded (FFPE) tissue slides that would otherwise be appropriate for manual visualization by conventional brightfield (light) microscopy. The PIPS does not include any automated Image Analysis Applications that would constitute computer aided detection or diagnosis.

AI/ML Overview

The provided text describes the regulatory acceptance criteria and supporting studies for the Philips IntelliSite Pathology Solution (PIPS), a digital whole slide imaging (WSI) system for pathology.

Here's a breakdown of the requested information:

1. Table of Acceptance Criteria and Reported Device Performance

The regulatory document outlines specific performance characteristics that the PIPS device must meet, primarily focusing on showing non-inferiority to traditional optical microscopy for diagnostic purposes and demonstrating precision and reproducibility.

| Acceptance Criteria Category | Specific Criteria (Implicitly Derived from Studies) | Reported Device Performance |
|---|---|---|
| Clinical Performance | Non-inferiority of the Manual Digital (MD) major discordance rate to the Manual Optical (MO) major discordance rate, with the upper bound of the 95% CI for the MD-MO difference < 4%. | Overall observed major discordance rate: MD = 4.9%, MO = 4.6%. MMRM-modelled major discordance rate: MD = 4.7%, MO = 4.4%. MD-MO difference: 0.4% (MMRM-modelled). 95% CI for MD-MO difference: [-0.30%, 1.01%]. Met: the upper limit (1.01%) is < 4%. |
| Intra-system Precision | Lower limit of the 95% CI for the overall agreement rate ≥ 85.0%. | Overall agreement rate: 92.0%. 95% CI: [90.57%, 93.29%]. Met: the lower limit (90.57%) is ≥ 85.0%. |
| Inter-system Precision | Lower limit of the 95% CI for the overall agreement rate ≥ 85.0%. | Overall agreement rate: 93.8%. 95% CI: [92.6%, 95.0%]. Met: the lower limit (92.6%) is ≥ 85.0%. |
| Inter-site Reproducibility | Reporting of the overall agreement rate and 95% CI (no specific threshold stated, but high agreement is implied). | Overall agreement rate: 90.2%. 95% CI: [87.9%, 92.4%]. |
| Human Factors | No critical task failures observed. | No critical task failures observed. Learnability and ease of use reported as very high. |
| Technical Performance | Numerous detailed technical performance requirements at the component and system level (e.g., slide feeder, light source, imaging optics, digital imaging sensor, image processing software, display, color reproducibility, spatial resolution, focusing, whole slide tissue coverage, stitching error, turnaround time). | Information, specifications, and test data were provided to verify performance against these criteria. Quantitative acceptance criteria are not given in the summary for every individual technical aspect, but FDA concluded the information was sufficient. |

2. Sample Sizes and Data Provenance

  • Test Set Sample Size (Clinical Study): 1992 cases, comprising a total of 3390 slides. This resulted in 15,925 readings adjudicated (7,964 MD and 7,961 MO).
  • Test Set Sample Size (Analytical Performance - Precision/Reproducibility):
    • Intra-system & Inter-system: 399 glass slides / FOVs (containing 420 selected features). Additionally, 210 wild card FOVs were used to minimize bias, but not for primary analysis. Total 609 FOVs for reading sessions.
    • Inter-site: 399 FOVs (containing 420 selected features). No wild card FOVs used in primary analysis.
  • Data Provenance (Clinical Study): Slides obtained from consecutive cases at least one year old from pathology laboratories. Data was collected across four sites (implicitly in the US, as this is an FDA submission). The study was retrospective in nature, as it used archived cases with existing sign-out diagnoses.
  • Data Provenance (Analytical Performance - Precision/Reproducibility): Consecutive cases from pathology laboratories. Data was collected across three systems at one site for intra- and inter-system studies, and across three different sites, each with its own PIPS system, for the inter-site study.

3. Number of Experts and Qualifications for Ground Truth

  • Clinical Study Ground Truth (Main/Reference Diagnosis): The ground truth was based on the original sign-out diagnosis rendered at the institution by a qualified pathologist using an optical (light) microscope. The study does not explicitly state the number of pathologists involved in these original diagnoses or their specific qualifications, but it can be inferred that they were qualified pathologists in clinical practice.
  • Clinical Study Adjudication Panel: Two adjudication pathologists independently reviewed eCRFs. In case of disagreement, a third adjudication pathologist reviewed to achieve majority vote. For cases where all three had different opinions, a consensus was reached in an adjudication panel meeting of the same three pathologists.
  • Analytical Performance Ground Truth:
    • Enrollment Pathologist (EP): Selected consecutive cases from the LIS and identified the main diagnosis.
    • Validating Enrollment Pathologist (VEP): Confirmed the presence of the pre-specified histopathologic "features" on the glass slide and then on the Field of View (FOV) after scanning. This established the "ground truth" for the presence of these features for the precision and reproducibility studies.
    • Specific qualifications (e.g., years of experience) for EP and VEP are not explicitly stated, but they are referred to as "pathologists."
    • Reading Pathologists (for evaluation): Three reading pathologists for precision/reproducibility studies. 16 reading pathologists (four per site) for the clinical study. No specific qualifications beyond "qualified pathologist" are given.

4. Adjudication Method for the Test Set

  • Clinical Study: 2+1 adjudication method. Two adjudication pathologists independently reviewed diagnoses. If there was a disagreement, a third pathologist reviewed to achieve a majority vote. If all three disagreed, a consensus panel (comprising the same three adjudicators) was convened.
  • Analytical Performance (Precision/Reproducibility): No explicit adjudication method listed for establishing the ground truth of feature presence. The VEP confirmed feature presence as the gold standard. The study measured agreement between readings and systems, rather than against an adjudicated ground truth for each specific reading.

5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

  • Yes, a comparative effectiveness study was done. The clinical study compared the Philips IntelliSite Pathology Solution (MD modality) to conventional optical microscopy (MO modality). This was a multi-reader (16 pathologists) multi-case (1992 cases) study.
  • Effect Size of Human Readers Improvement with AI vs. Without AI Assistance:
    • This study does not describe an AI-assisted workflow. The PIPS device is a digital imaging system for viewing slides, not an AI diagnostic aid. It enables digital viewing instead of glass slide viewing.
    • Therefore, the study focuses on the non-inferiority of the digital viewing experience compared to traditional optical microscopy, not on how an AI algorithm improves a human reader's performance. The "human readers improve with AI vs without AI assistance" aspect is not applicable here because the device is not an AI diagnostic tool.
    • The primary outcome was the difference in major discordance rates between MD and MO, relative to the original sign-out diagnosis. The goal was to prove the digital modality was not worse than the optical modality.

6. Standalone (Algorithm Only) Performance

  • No, a standalone (algorithm only) performance study was not done in the context of diagnostic accuracy.
  • The PIPS is a digital slide creation, viewing, and management system. It "does not include any automated Image Analysis Applications that would constitute computer aided detection or diagnosis." Therefore, there is no discrete "algorithm" in a diagnostic sense to test in a standalone manner.
  • The system itself has integrated algorithms for image acquisition (e.g., tissue detection, focusing, stitching), and these technical aspects were evaluated (e.g., "Whole Slide Tissue Coverage," "Stitching Error," "Focusing Test"). However, these are performance metrics for the system's image generation capabilities, not for a diagnostic algorithm.

7. Type of Ground Truth Used

  • Clinical Study: Expert consensus / Original Sign-Out Diagnosis. The primary ground truth for comparing the MD and MO modalities was the original sign-out diagnosis as documented by the institution's pathologist. Adjudication by an independent panel of pathologists was then used to determine major/minor discordance of reader diagnoses against this original sign-out.
  • Analytical Performance (Precision/Reproducibility): Expert Consensus. The presence of specific histopathologic "features" was confirmed by a Validating Enrollment Pathologist (VEP) on the glass slide and then on the scanned FOV. This VEP confirmation served as the ground truth for feature presence in these studies.

8. Sample Size for the Training Set

  • Not Applicable. The provided text describes studies for validating a digital whole slide imaging system (PIPS) for primary diagnosis, not an AI algorithm.
  • The PIPS does not contain a machine learning/AI component that requires a training set for diagnostic classification in the sense of a CADe/CADx device. Its internal operations (e.g., image processing, focus algorithms) are likely based on classical image processing and control algorithms, not deep learning requiring large labeled training datasets as commonly understood in AI.

9. How the Ground Truth for the Training Set was Established

  • Not Applicable, as there was no explicit AI training set.


EVALUATION OF AUTOMATIC CLASS III DESIGNATION FOR Philips IntelliSite Pathology Solution (PIPS)

DECISION SUMMARY

Correction Date: October 13, 2017. This Decision Summary contains corrections to the April 13, 2017 Decision Summary.

A. DEN Number:

DEN160056

B. Purpose for Submission:

De Novo request for evaluation of automatic class III designation for the Philips IntelliSite Pathology Solution (PIPS)

C. Measurand:

Not applicable.

D. Type of Test:

Digital pathology whole slide imaging system

E. Applicant:

Philips Medical Systems Nederland B.V.

F. Proprietary and Established Names:

Philips IntelliSite Pathology Solution (PIPS)

G. Regulatory Information:

    1. Regulation section:
      21 CFR 864.3700
    2. Classification:
      Class II (special controls)
    3. Product code: PSY
    4. Panel: 88 - Pathology


H. Indications for use:

1. Indications for use:

The Philips IntelliSite Pathology Solution (PIPS) is an automated digital slide creation, viewing, and management system. The PIPS is intended for in vitro diagnostic use as an aid to the pathologist to review and interpret digital images of surgical pathology slides prepared from formalin-fixed paraffin embedded (FFPE) tissue. The PIPS is not intended for use with frozen section, cytology, or non-FFPE hematopathology specimens.

The PIPS comprises the Image Management System (IMS), the Ultra Fast Scanner (UFS) and Display. The PIPS is for creation and viewing of digital images of scanned glass slides that would otherwise be appropriate for manual visualization by conventional light microscopy. It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the validity of the interpretation of images obtained using PIPS.

    2. Special conditions for use statement(s):
      For in vitro diagnostic (IVD) use only
      For prescription use only
    3. Special instrument requirements:
      Image Management System (IMS) (for software IMS2.5.1.1)
      Ultra Fast Scanner (UFS) (for software UFS1.7.1.1)
      Display (PS27QHDCR)

I. Device Description:

The Philips IntelliSite Pathology Solution (PIPS) is an automated digital slide creation, management, viewing and analysis system.

The PIPS consists of two subsystems and a display:

  • Ultra Fast Scanner (UFS) (for software UFS1.7.1.1);
  • Image Management System (IMS) (for software IMS2.5.1.1);
  • Display (PS27QHDCR).

The UFS consists of optical, mechanical, electronic and software elements to scan FFPE tissue mounted on glass slides at a resolution of 0.25 µm per pixel, which is equivalent to a 40x objective, to create digital Whole Slide Images (WSI). The UFS has a capacity of 300 slides (15 glass slide racks with up to 20 slides per rack). After the slide racks are loaded into the UFS, the UFS automatically detects and starts scanning the slides. CCD cameras are used to capture color images from the back-lit tissue specimen. An LED light source employs top-lit illumination to capture the barcode and back-lit illumination for tissue scanning. The stage (STG) and Image Capturing Unit (ICU) are fixed to each other and to the base frame to ensure correct positioning of the slide and to suppress external disturbances. Proprietary software is used for image processing during acquisition. Philips' proprietary format, iSyntax, is used to store and transmit the images between the UFS and the IMS.

The IMS is a software-only subsystem to be used with the Display. Functionality of the IMS includes the ability to view images, organize workload, and annotate and bookmark scanned images. The user manual for PIPS specifies compatible computer environment hardware and software that is not included as part of the system.

The different subsystems of the PIPS are connected over an IT network at the user site. The IT hardware/software that supports the IMS Application Server & Storage software is not provided as part of the PIPS, but may be located in a central server room separate from the workstation with the IMS viewing software and Display. The communication of data between UFS and IMS is via a customer provided wired network or a direct connected cable between these subsystems. PIPS includes a display that has been validated as part of the pivotal clinical study.

The PIPS allows pathologists to view and evaluate digital images of formalin-fixed, paraffinembedded (FFPE) tissue slides that would otherwise be appropriate for manual visualization by conventional brightfield (light) microscopy. The PIPS does not include any automated Image Analysis Applications that would constitute computer aided detection or diagnosis.

J. Standard/Guidance Document Referenced (if applicable):

Technical Performance Assessment Digital Pathology Whole Slide Imaging Devices; Guidance for Industry and Food and Drug Administration Staff (April 20, 2016).

K. Test Principle:

The PIPS device is an automated system designed for scanning and digitizing surgical pathology slides prepared from FFPE tissue. These digitized images can then be reviewed and interpreted by pathologists for clinical (patient care) purposes.

Prior to scanning the slide on the UFS, the technician conducts quality control of the slides per the laboratory's standards. The technician then places the slides into racks, which are loaded into the UFS. The handler in the UFS automatically moves a slide from the storage area to the scanning area. A macro image is generated that includes the slide label and a low power image of the entire slide. The system then determines regions of interest in the tissue to scan, which are subsequently scanned at high resolution (0.25 um per pixel). After the slide is scanned, it is returned to the same slot of the same rack from which it was originally obtained.

The images scanned in the UFS are compressed using Philips' proprietary iSyntax format and are transmitted to the IMS subsystem. The images can be reviewed through the IMS only.


The IMS allows the user to identify, organize and execute the worklist. The pathologist selects the first slide, navigates around the slide and views the images at the desired magnification. The pathologist is responsible for ensuring the validity of the interpretation of the digital images obtained from the PIPS.

L. Interpretation of Results:

The PIPS is an aid to the pathologist to review and interpret digital images of surgical pathology slides prepared from FFPE tissue that would otherwise be appropriate for manual visualization by conventional brightfield (light) microscopy. It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the validity of the interpretation of images obtained using the PIPS. Additionally, it is the responsibility of the pathologist to use professional judgment when determining whether to directly examine the glass slides by light microscopy if there is uncertainty about the interpretation of the digital image(s).

M. Performance Characteristics:

1. Analytical performance:

  • a. Precision/Reproducibility:
The objective of this study was to evaluate both intra-system precision and inter-system precision for the PIPS. Inter-site reproducibility was also evaluated for the PIPS.

The precision of the device was based on three reading pathologists' assessments and identification of specific histopathologic "features" that are observed in formalin-fixed, paraffin-embedded (FFPE) H&E slides. Twenty-one (21) features were selected for the analytical studies. The 21 features were evaluated at their relevant magnification. The levels of magnification were 10x, 20x and 40x, and each level of magnification included 7 features.

For each feature, three organs were selected. For each organ, six FOVs were selected, each containing one study feature. (21 features * 3 organs/feature * 6 FOV/organ = 378 single selected feature FOVs). Additionally, there were 21 FOVs each containing two selected features, for a total of 378 + 21 = 399 FOVs and 378 + (21*2) = 420 selected features. An example is shown in Table 1 below.

| Magnification | Feature | Organ | FOVs with 1 selected feature | FOVs with 2 selected features |
|---|---|---|---|---|
| 40x | Mitosis | Lung | 6 | 0 |
| 40x | Mitosis | Rectum | 6 | 1 |
| 40x | Mitosis | Uterus | 6 | 1 |
| 40x | Mitosis | Overall | 18 | 2 |

Table 1: Study feature example
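As a quick consistency check of the enrollment arithmetic quoted above (not part of the study protocol, just verifying the stated counts):

```python
# Verify the FOV/feature enrollment counts stated in the text.
features, organs_per_feature, fovs_per_organ = 21, 3, 6

single_feature_fovs = features * organs_per_feature * fovs_per_organ  # 378
dual_feature_fovs = 21  # one dual-feature FOV per feature, per the text

total_fovs = single_feature_fovs + dual_feature_fovs          # 399
total_features = single_feature_fovs + 2 * dual_feature_fovs  # 420

assert (total_fovs, total_features) == (399, 420)
```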


For each pre-specified feature, consecutive cases were selected from the pathology laboratory using the laboratory information system (LIS) by the enrollment pathologist (EP). The validating enrollment pathologist (VEP) confirmed whether the feature was present on the glass slide. Once the slides were scanned, the EP reviewed the WSI and defined an area (bookmark) containing the selected feature(s) at the appropriate magnification. Then a static full resolution extraction image of the bookmark was created and defined as the field of view (FOV). The VEP confirmed whether the feature(s) was present on the FOV. After confirmation, the FOV was considered enrolled.

For the intra-system and the inter-system studies, the same set of glass slides (n=399) was used. From this slide set, 399 FOVs were extracted, which included 420 selected features. In addition, 210 wild card FOVs were selected from other glass slides following the same procedure. Wild card FOVs were used to minimize or avoid bias by the reading pathologist, but were not analyzed or used for the primary analysis. The total FOV set for the intra- and inter-system study was 609 FOVs. Each of the three reading pathologists evaluated each enrolled FOV three times, once during each of three reading sessions.

For the inter-site study, the 210 wild card FOVs from the intra- and inter-system studies were enrolled as study FOVs. In addition, 189 slides were selected as described above, resulting in a total FOV set of 399 FOVs with 420 selected features. There were no wild card FOVs included in the inter-site study, as each reading pathologist evaluated each FOV only once. The study included three different reading pathologists located at different sites, each with its own PIPS system.

For each FOV, the reading pathologist recorded the presence of each observed feature on a checklist. For each magnification, a separate checklist containing ten features (seven study features and three non-study features) was developed. Only the selected features were used for the primary analysis. For secondary analyses, all observed features were analyzed. Each study was designed such that there were three readings for each selected feature on an FOV:

  • For the intra-system study, the three readings by the same pathologist were of three scans from the same system.
  • For the inter-system study, the three readings by the same pathologist were of scans from three different systems (at the same site).
  • For inter-site reproducibility, the three readings were by three different pathologists and from three different systems, each at a different site.

Intra-system Precision Study:

The study slide set was divided equally (n=133 slides per system) and randomly over three systems at one site. On each system the slides were scanned three times with at least six hours downtime (ensuring full cool down) of the system between scanning iterations. The 210 wild card slides were all scanned once on System 1.


Three separate reading sessions were performed by each of three reading pathologists, with a washout period of at least two weeks in between reading sessions. The 399 FOVs that were read during a reading session were randomly selected from the FOVs originating from three different systems and three different iterations. Per reading session, 70 different wild card FOVs were added such that all 210 wild card FOVs were read by each reading pathologist.

The overall intra-system agreement rate was calculated by averaging all available pairwise comparison results over all 420 enrolled features and all three pathologists. While each system scanned 133 slides 3 times each, some of the slides contained multiple features as explained previously. This results in different numbers of comparison pairs on a per-system basis, but the overall number is consistent (420 selected features * 3 pathologists * 3 pairwise comparisons per feature = 3780). To preserve the correlation structure of multiple readings of the same feature and multiple features on an FOV, the bootstrap method was used to derive a two-sided 95% confidence interval (CI) for the overall agreement rate. An FOV was the bootstrap re-sampling unit. The study acceptance criterion that the lower limit of the 95% CI for the overall agreement rate be 85.0% or above was met (Table 2).

| System | Number of Pairwise Agreements | Number of Comparison Pairs | Agreement Rate % | 95% CI |
|---|---|---|---|---|
| System 1 | 1146 | 1278 | 89.7 | (87.1, 92.1) |
| System 2 | 1149 | 1233 | 93.2 | (90.9, 95.3) |
| System 3 | 1181 | 1269 | 93.1 | (90.7, 95.2) |
| Overall | 3476 | 3780 | 92.0 | (90.57, 93.29) |

Table 2: Intra-system study results
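A minimal sketch of the FOV-level bootstrap described above is given below. It is illustrative only (the submission does not describe the actual analysis code): each FOV carries its own list of agree/disagree pairwise outcomes, so resampling whole FOVs preserves the within-FOV correlation structure.

```python
import random

def bootstrap_agreement_ci(fovs, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the overall agreement rate.

    `fovs`: list of FOVs, each a list of 0/1 pairwise comparison
    outcomes (1 = agreement). The FOV is the resampling unit, so
    correlated comparisons on the same FOV stay together.
    """
    rng = random.Random(seed)
    rates = []
    for _ in range(n_boot):
        sample = [rng.choice(fovs) for _ in fovs]   # resample FOVs with replacement
        pairs = [o for fov in sample for o in fov]  # pool their pairwise outcomes
        rates.append(sum(pairs) / len(pairs))
    rates.sort()
    return rates[int(alpha / 2 * n_boot)], rates[int((1 - alpha / 2) * n_boot) - 1]

# Hypothetical usage: each of 399 FOVs contributes its pairwise outcomes;
# the acceptance criterion is a lower CI limit of at least 0.85.
# lo, hi = bootstrap_agreement_ci(fovs); assert lo >= 0.85
```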

Inter-system Precision Study:

The complete study slide set (n=399) was scanned once on each of the three systems at one site. The same 210 wild card FOVs from the intra-system study were used. Three separate reading sessions were performed by each pathologist with a washout period of at least two weeks between sessions. The 399 FOVs that were read during a reading session were randomly selected from the FOVs originating from three different systems. Per reading session, 70 different wild card FOVs were added such that all 210 wild card FOVs were read by each reading pathologist.

The overall inter-system agreement rate was calculated by averaging all available pairwise comparison results over all 420 selected features and all three pathologists. To preserve the correlation structure of multiple readings of the same feature and multiple features on an FOV, the bootstrap method was used to derive a two-sided 95% CI for the overall agreement rate. An FOV was the bootstrap re-sampling unit. The acceptance criterion that the lower limit of the 95% CI for the overall agreement rate be 85.0% or above was met (Table 3).


| Systems Compared | Number of Pairwise Agreements | Number of Comparison Pairs | Agreement Rate % | 95% CI |
|---|---|---|---|---|
| Sys 1 v Sys 2 | 1173 | 1260 | 93.1 | (91.5, 94.6) |
| Sys 1 v Sys 3 | 1181 | 1260 | 93.7 | (92.2, 95.2) |
| Sys 2 v Sys 3 | 1192 | 1260 | 94.6 | (93.3, 95.9) |
| Overall | 3546 | 3780 | 93.8 | (92.6, 95.0) |

Table 3: Inter-system study results

Inter-site Reproducibility Study:

For the inter-site study, the slide set (n=399) was scanned once at each of three sites, resulting in three WSI sets. There was a different reading pathologist at each site, and each of the three reading pathologists had only one reading session in which all FOVs scanned at their site were read. The order in which the FOVs were read at each site was randomized.

The overall inter-site agreement rate was calculated by averaging all available pairwise comparison results over all 420 enrolled features. A bootstrap 95% CI was also calculated for the overall inter-site agreement rate for review purposes (Table 4).

| Sites Compared | Number of Pairwise Agreements | Number of Comparison Pairs | Agreement Rate % | 95% CI |
|---|---|---|---|---|
| Site 1 v Site 2 | 370 | 420 | 88.1 | (84.9, 91.2) |
| Site 1 v Site 3 | 379 | 420 | 90.2 | (87.4, 92.9) |
| Site 2 v Site 3 | 387 | 420 | 92.1 | (89.4, 94.7) |
| Overall | 1136 | 1260 | 90.2 | (87.9, 92.4) |

Table 4: Inter-site study results

  • b. Linearity/assay reportable range:
    Not applicable

  • c. Traceability, Stability, Expected values (controls, calibrators, or methods):
    Not applicable

  • d. Detection limit:
    Not applicable

  • e. Analytical Reactivity:
    Not applicable

  • f. Interferences (Robustness):
    Not applicable

  • g. Assay cut-off (Interpretation of Results):
    Not applicable

    2. Technical studies:
      Multiple studies were conducted to evaluate the performance assessment data associated with the technical evaluation of the PIPS.
  • a. Slide Feeder
    Information was provided on the configuration of the slide feed mechanism, including a physical description of the slide, the number of slides in queue (carrier), and the class of automation. Information was provided on the user interaction with the slide feeder, including hardware, software, feedback mechanisms, and Failure Mode and Effects Analysis (FMEA).

  • b. Light Source
    Descriptive information associated with the lamp and the condenser was provided. Testing information was provided to verify the spectral distribution of the light source as part of the color reproduction capability of the UFS subsystem.

  • c. Imaging Optics
    An optical schematic with all optical elements identified from slide (object plane) to digital image sensor (image plane) was provided. Descriptive information regarding the microscope objective, the auxiliary lenses, and the magnification of imaging optics was provided. Testing information regarding the relative irradiance, optical distortions, and lateral chromatic aberrations was provided.

  • d. Mechanical Scanner Movement
    Information and specifications on the configuration of the stage, method of movement, control of movement of the stage, and FMEA was provided. Test data to verify the repeatability of the stage movement and to verify that the stage movement stays within limits during operation was provided.

  • e. Digital Imaging Sensor
    Information and specifications on the sensor type, pixel information, responsivity specifications, noise specifications, readout rate, and digital output format was provided. Test data to determine the correct functioning of the digital image sensor, which converts optical signals of the slide to digital signals consisting of a set of numerical values corresponding to the brightness and color at each point in the optical image, was provided.

  • f. Image Processing Software
    Information and specifications on the exposure control, white balance, color correction, sub-sampling, pixel-offset correction, pixel-gain or flat-field correction, and pixel-defect correction was provided.

  • g. Image Composition
    Information and specifications on the scanning method, the scanning speed, and the number of planes at the Z-axis to be digitized was provided. Test data to analyze the image composition performance was provided.

  • h. Image File Formats
    Information and specifications on the compression method, compression ratio, file format, and file organization was provided.

  • i. Image Review Manipulation Software
    Information and specifications on continuous panning and pre-fetching, continuous zooming, discrete Z-axis displacement, the ability to compare multiple slides simultaneously on multiple windows, image enhancement and sharpening functions, color manipulation, annotation tools, digital bookmarks, and virtual multihead microscope was provided.

  • j. Computer Environment
    Information and specifications on the computer hardware, operating system, graphics card, graphics card driver, color management settings, color profile, and display interface was provided.

  • k. Display
    Information and specifications on the technological characteristics of the display device; physical size of the viewable area and aspect ratio; backlight type and properties; frame rate and refresh rate; pixel array, pitch, pixel aperture ratio and subpixel matrix scheme; subpixel driving to improve grayscale resolution; supported color spaces; display interface; user controls of brightness, contrast, gamma, color space, power-saving options, etc., via the on-screen display menu; ambient light adaptation; touchscreen technology; color calibration tools; and the frequency and nature of quality-control tests was provided. Test data to verify the performance of the display was provided.

  • l. Color Reproducibility
    Test data to evaluate the color reproducibility of the system was provided.

  • m. Spatial Resolution
    Test data to evaluate the composite optical performance of all components in the image acquisition phase was provided.

  • n. Focusing Test
    Test data to evaluate the technical focus quality of the system was provided.

  • o. Whole Slide Tissue Coverage
    Test data to demonstrate that the entire tissue specimen on the glass slide is detected by the tissue detection algorithms, and that all of the tissue specimens are included in the digital image file, was provided.

  • p. Stitching Error
    Test data to evaluate the stitching errors and artifacts in the reconstructed image was provided.

  • q. Turnaround Time
    Test data to evaluate the turnaround time of the system was provided.

3. Human factors studies:

Human factors studies designed around critical user tasks and use scenarios performed by users were conducted. Information included a list of all critical user tasks and a description of the process that was followed to identify them.

A systematic evaluation involving simulated use by representative users performing all critical tasks required for operation of the device was conducted, and subjective assessments of failures were collected. No critical task failures were observed. There were occasional difficulties, as is to be expected with any piece of new software, but learnability and ease of use seemed very high. All difficulties observed had little influence on the perception of usability, and no difficulties or failures were observed on tasks that could lead to patient harm.

In all instances both pathologists and lab technicians were able to easily identify cases and ensure that everything was complete.

4. Clinical studies:

A study was conducted to demonstrate that viewing, reviewing and diagnosing digital images of surgical pathology FFPE tissue slides using the PIPS is non-inferior to using optical (light) microscopy. The primary endpoint was the difference in major discordance rates between manual digital (MD) and manual optical (MO) modalities when compared to the reference (main) diagnosis, which is based on the original sign-out diagnosis rendered at the institution using an optical (light) microscope. By the study protocol, a total of 2000 cases consisting of multiple organ and tissue types were to be enrolled. Cases were divided over four sites. At each site, four pathologists read all the cases assigned to the site using both the MO and the MD modalities in an alternating fashion and randomized order, with a washout period of four weeks in between, resulting in a total of 8000 planned digital reads and 8000 planned optical reads. Three adjudicators reviewed the reader diagnosis against the sign-out diagnosis and determined whether the diagnosis was concordant, minor discordant or major discordant.

The study was based on the reading of slides obtained from consecutive cases at least one year old and for which a sign-out diagnosis was available. Slides were selected by a study EP from the original slides used for the sign-out diagnosis. The EP at each site reviewed the pathology report for each case and determined the main diagnosis for the case. The EP subsequently matched the case to the clinical study design list of types of cases to be evaluated and selected the representative slide(s) that reflected the main diagnosis. The selected slides could include H&E, IHC and special stains. In the case of IHCs and special stains, the inclusion of control slides was required to fulfill the quality checks according to general clinical practice. The VEP confirmed that the selected slide(s) reflected the main diagnosis for the case, as well as required ancillary information for cancer cases, and then the case was enrolled.

All 16 reading pathologists, four per site, read the slides for all cases at their site, approximately 500 cases per site, using both the MO modality and the MD modality in an alternating order. There was a washout of at least four weeks between the first and second reading of the same case. The reading pathologists were provided with all representative slide(s) of a case at once. An electronic case report form (eCRF) was completed to document the reading pathologist's diagnosis. Then two adjudication pathologists independently reviewed the eCRFs to determine whether or not the diagnosis was consistent with the main diagnosis. A major discordance was defined as a difference in diagnosis that would be associated with a clinically important difference in patient management. A minor discordance was defined as a difference in diagnosis that would not be associated with a clinically important difference in patient management. In the event that there was a disagreement between the two adjudicators, a third adjudication pathologist reviewed the case to achieve majority vote. In cases where all three adjudicators had a different opinion, consensus was arrived at in an adjudication panel meeting consisting of the same three adjudication pathologists. The 2+1 flow is sketched below.
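The 2+1 adjudication flow described above can be expressed as a small decision function (labels and signature are hypothetical; the document describes the process, not an implementation):

```python
from collections import Counter

def adjudicate(call1: str, call2: str, call3: str | None = None) -> str:
    """2+1 adjudication: two independent adjudicator calls; a third
    breaks disagreements by majority vote; a three-way split goes to
    a consensus panel of the same three adjudicators.

    Calls are assumed to be one of: 'concordant', 'minor', 'major'.
    """
    if call1 == call2:
        return call1                      # the two adjudicators agree
    if call3 is None:
        return "escalate to third adjudicator"
    counts = Counter([call1, call2, call3])
    winner, votes = counts.most_common(1)[0]
    return winner if votes >= 2 else "panel consensus required"

# e.g. adjudicate('major', 'minor', 'major') -> 'major'
```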

Inclusion criteria:

  • All glass slides, with human tissue obtained via surgical pathology of the original case, were available.
  • Original sign-out diagnosis was available.
  • The selected slide or slides for the main diagnosis and the control slide(s) fulfilled the quality checks according to general clinical practice.
  • Cases were at least one year since accessioning.

Exclusion criteria:

  • Cases, including cases previously referred to another institution, for which any H&E, IHC or special stains slide used for the original sign-out diagnosis was no longer available at the site.
  • Cases for which the control slides for IHC and special stains were not available.
  • The selected slide or slides for the main diagnosis did not match any subtype of the organ for which the case was selected.
  • Relevant clinical information that was available to the sign-out pathologist in the pathology request form could not be obtained.
  • Selected slides contained indelible markings.
  • Selected slides had damage that could not be easily repaired.
  • More than one case was selected for a patient (only one case may be enrolled per patient).
  • Case consisted of frozen section(s) only.
  • Case consisted of gross specimens only.

For the primary objective of demonstrating the MD major discordance rate to be noninferior to the MO major discordance rate, a Mixed Model Repeated Measures (MMRM) logistic regression was conducted. For each reading result the dependent variable was the major discordance status and the independent variables included modality as a fixed effect (MD vs. MO) and site, reader, and case as random effects. A two-sided 95% CI for the modality effect, i.e., the overall MD-MO major discordance rate difference, was constructed from this analysis. If the upper bound of the 95% CI was less than the noninferiority margin of 4%, MD would be considered non-inferior to MO.

A total of 1992 cases were included in the Full Analysis Set with 3390 slides. In total, 15,925 readings were adjudicated (7,964 MD and 7,961 MO). The observed overall major discordance rate, i.e., over all sites, reading pathologists and organs, was 4.9% for MD and 4.6% for MO. The MD-MO difference in major discordance rate was 0.4%. The major discordance rates as estimated by the MMRM logistic model ("modelled") resulted in similar, slightly lower, proportions, i.e., 4.7% for MD and 4.4% for MO. The MD-MO difference in major discordance rate was 0.4%, with a derived two-sided 95% CI of [-0.30%, 1.01%]. The upper limit of this confidence interval was less than the pre-specified non-inferiority margin of 4%, and therefore, the MD modality using the PIPS was demonstrated to be non-inferior to the MO modality with respect to major discordance rate when comparing to the main diagnosis. Thus, the study met the primary objective, as shown in Table 5.

Table 5: Clinical study results based on major discordance rates

| | MD total reads | MD % discordant | MD 95% CI | MO total reads | MO % discordant | MO 95% CI | MD-MO % discordant | MD-MO 95% CI |
|---|---|---|---|---|---|---|---|---|
| Observed | 7964 | 4.9 | N/A | 7961 | 4.6 | N/A | 0.4\* | |
| MMRM | 7964 | 4.7 | (3.27, 6.82) | 7961 | 4.4 | (3.2, 6.33) | 0.4\* | (-0.30, 1.01) |

\* Difference does not equal 0.3 due to rounding error.
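The pre-specified decision rule is simple once the CI is in hand; the sketch below applies it to the MMRM results reported in Table 5 (the mixed-model CI construction itself is not reproduced here):

```python
# Non-inferiority decision rule applied to the reported MMRM results.
NI_MARGIN_PCT = 4.0  # pre-specified non-inferiority margin, percentage points

def non_inferior(ci_upper_pct: float, margin: float = NI_MARGIN_PCT) -> bool:
    """MD is non-inferior to MO if the upper bound of the two-sided 95% CI
    for the MD - MO major discordance difference is below the margin."""
    return ci_upper_pct < margin

ci_lower, ci_upper = -0.30, 1.01  # reported 95% CI for MD - MO difference (%)
print(non_inferior(ci_upper))     # True: 1.01 < 4.0
```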

The difference in major discordance rates between MD and MO is shown in Table 6.

| Organ | Manual digital (MD) | Manual optical (MO) | Difference (MD - MO) |
|---|---|---|---|
| Breast | 4.2% | 4.3% | -0.2% |
| Prostate | 12.0% | 11.3% | 0.8% |
| Respiratory | 3.5% | 4.2% | -0.7% |
| Colorectal | 1.7% | 1.0% | 0.7% |
| GE junction | 2.0% | 1.3% | 0.7% |
| Stomach | 0.8% | 0.5% | 0.3% |
| Skin | 4.9% | 4.7% | 0.3% |
| Lymph node | 0.3% | 0.8% | -0.5% |
| Bladder | 7.3% | 6.1% | 1.3% |
| Gynecological | 6.3% | 5.2% | 1.2% |
| Liver/BD | 4.6% | 5.6% | -1.0% |
| Endocrine | 6.5% | 4.7% | 1.8% |
| Brain / neuro | 6.2% | 5.8% | 0.4% |
| Kidney / neoplastic | 2.5% | 1.0% | 1.5% |
| Salivary gland | 2.0% | 3.0% | -1.0% |
| Peritoneal | 0.0% | 0.0% | 0.0% |
| Gallbladder | 0.0% | 0.0% | 0.0% |
| Appendix | 0.0% | 0.0% | 0.0% |
| Soft tissue | 0.0% | 0.0% | 0.0% |
| Perianal | 1.0% | 2.0% | -1.0% |

Table 6: Major discordance rates by organ and modality

The differences in major discordance rates between MD and MO were < 2% in absolute value for all organs. When examining the major discordance rates, 'prostate' had the highest major discordance rates of 12.0% MD and 11.3% MO. The difference in minor discordance rates between MO and MD was 0.8%.

Secondary analysis showed that there was also strong agreement between the two modalities. There were 7959 pairs of readings with both adjudication outcomes for MO and MD. Results showed 96.5% of the paired readings resulted in no major discordance for both modalities (93.5%) or major discordance for both modalities (3%).

The clinical study was not powered to analyze the results by organ site or diagnosis. Nonetheless, the major types of discordances were reviewed, with particular attention to discordances where the other modality showed concordance. This was done to isolate the possible effect of either the microscope or the whole slide imaging device on such discordances. Cases for which the MO diagnosis was concordant with the reference diagnosis and the MD diagnosis was a major discordance for the same observer were analyzed, together with the converse, where MD was concordant and MO a major discordance.

The most common types of such discordances in MD were missed thyroid carcinomas, overcalled melanocytic lesions, missed bladder carcinoma in situ, ductal carcinoma in situ of the breast overcalled as invasive, and overcalled endometrial carcinoma/atypical hyperplasia. The most common types of such discordances in MO were undercalled dysplasia in respiratory specimens, misdiagnosed carcinoma in liver core biopsies, missed ductal carcinoma in situ / lobular carcinoma in situ of the breast, and overcalled CIN/SIL in the cervix. In every case the observed rates were within the known and established rate of inter-pathologist variation in diagnosis as reported in the literature.

    5. Clinical cut-off:
      Not applicable
    6. Expected values/Reference range:
      Not applicable

N. System Description:

    1. Modes of Operation:
      The Philips IntelliSite Pathology Solution (PIPS) is an automated digital slide creation, management, viewing and analysis system. The PIPS consists of two subsystems and a display:
  • Ultra Fast Scanner (UFS) (for software UFS1.7.1.1)
  • Image Management System (IMS) (for software IMS2.5.1.1)
  • Display (PS27QHDCR)

    2. Software:

FDA has reviewed the applicant's Hazard Analysis and software development processes for this line of product types:

Yes __X__ or No ____

    3. Calibration and Quality Controls:
      The UFS performs a series of automatic calibrations. Each whole slide image (WSI) displays a flag that indicates whether the scanner was in a calibrated or un-calibrated state, thereby providing a visual indicator to the viewer of the WSI. Users also may manually initiate a calibration, if desired. By default, calibrations are triggered every 4 hours (or 200 slides). Depending on circumstances, this frequency can result in a calibration in the middle of a run. To prevent such an event, the user may manually initiate the calibration process prior to scanning a batch (e.g., during slide processing).
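The stated default trigger policy lends itself to a one-line check. The sketch below assumes the two thresholds combine as "whichever is reached first," which the text implies but does not state outright:

```python
def calibration_due(hours_since_cal: float, slides_since_cal: int,
                    max_hours: float = 4.0, max_slides: int = 200) -> bool:
    """Default UFS policy per the text: calibrate every 4 hours or
    200 slides (sketch only; 'whichever comes first' is an assumption)."""
    return hours_since_cal >= max_hours or slides_since_cal >= max_slides

# An operator can also trigger calibration manually before a batch
# so an automatic calibration does not land mid-run.
assert calibration_due(3.5, 200) and not calibration_due(1.0, 50)
```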

Manual calibration may also be initiated if the UFS detects that the instrument is not calibrated during a slide scan. The system will display a message to the operator requesting whether the user wishes to accept the slide as is or re-calibrate and rescan the slide. As a precautionary measure, the system displays a warning message on images that were scanned using an un-calibrated scanner.

In addition to the automated calibration within the UFS, routine field service visits are planned to calibrate the UFS. The Display is calibrated by the user as described in the instructions for use. The IMS subsystem does not require calibration.

It is the responsibility of the laboratory staff to conduct and maintain quality control of the slides per their laboratory standards (e.g., staining, cover-slipping, barcode placement) prior to loading the slides into the UFS. After completing a scan, the operator is instructed by the instructions for use to check image data and image quality using the IMS Viewer. To perform a display quality control test, the user places the LCD sensor manually on the center of the screen and initiates the QC procedure described in the IMS instructions for use.

O. Labeling:

The labeling is sufficient and it satisfies the requirements of 21 CFR Parts 801 and 809, as applicable, and the special controls for this device type.

P. Patient Perspectives:

This submission did not include specific information on patient perspectives for this device.

Q. Identified Risks to Health and Identified Mitigations:

| Identified Risks to Health | Mitigation Measures |
|---|---|
| Inaccurate or missing results leading to, for example, incorrect diagnosis | General controls and special controls (1) and (2) |
| Delayed results | General controls and special controls (1) and (2) |


R. Benefit/Risk Analysis:

| Summary | |
|---|---|
| Summary of the Benefit(s) | With respect to public health in the US, there is an important indirect benefit to both the individual patient and the efficient and effective functioning of the health care system as a whole. Within an institution using this device, improved efficiency, speed and accuracy in patient care is anticipated, particularly with respect to care provided by pathologists, oncologists and surgeons, and to a lesser extent, other subspecialties. An improved level of pathology, oncology and surgical care is expected based on the ability to efficiently store and rapidly retrieve current and previous biopsies and other pathologic material. The demonstration of non-inferiority of the PIPS device to standard of care light microscopy supports absence of significant harm to individual patients when compared to standard of care light microscopy. |
| Summary of the Risk(s) | False positive and false negative diagnostic interpretations by pathologists can have a wide range of consequences, ranging from no effect to life-altering and life-threatening effects of the use of or withholding of various medical and surgical treatments. However, based on the analytical and clinical studies, and the application of mitigating measures (general controls and special controls established for this device type), such erroneous results are no more or less likely to occur with the use of the PIPS device as compared to standard of care light microscopy of glass slides. |
| Summary of Other Factors | The pathologist should directly examine the glass slides by light microscopy whenever there is uncertainty about the ability to render a diagnostic interpretation using the PIPS device alone. |
| Conclusions: Do the probable benefits outweigh the probable risks? | Yes, the probable benefits of this device outweigh the probable risks, given the combination of required general controls and special controls established for this device. |


S. Conclusion:

The information provided in this de novo submission is sufficient to classify this device into class II under regulation 21 CFR 864.3700.

FDA believes that the stated special controls and the applicable general controls, including design controls, provide a reasonable assurance of the safety and effectiveness of the device type. The device is classified under the following:

Product Code: PSY
Device type: Whole Slide Imaging System
Class: II (special controls)
Regulation: 21 CFR 864.3700

  • (a) Identification.
    The whole slide imaging system is an automated digital slide creation, viewing, and management system intended as an aid to the pathologist to review and interpret digital images of surgical pathology slides. The system generates digital images that would otherwise be appropriate for manual visualization by conventional light microscopy.

  • (b) Classification. Class II (special controls). A whole slide imaging system must comply with the following special controls:

    • (1) Premarket notification submissions must include the following information:
      • (i) The indications for use must specify the tissue specimen that is intended to be used with the whole slide imaging system and the components of the system.
      • (ii) A detailed description of the device and bench testing results at the component level, including for the following, as appropriate:
        • (A) Slide feeder;
        • (B) Light source;
        • (C) Imaging optics;
        • (D) Mechanical scanner movement;
        • (E) Digital imaging sensor;
        • (F) Image processing software;
        • (G) Image composition techniques;
        • (H) Image file formats;
        • (I) Image review manipulation software;
        • (J) Computer environment;
        • (K) Display system.
  • (iii) Detailed bench testing and results at the system level, including for the following, as appropriate:
    • (A) Color reproducibility;
    • (B) Spatial resolution;
    • (C) Focusing test;
    • (D) Whole slide tissue coverage;
    • (E) Stitching error;
    • (F) Turnaround time.
  • (iv) Detailed information demonstrating the performance characteristics of the device, including, as appropriate:
    • (A) Precision to evaluate intra-system and inter-system precision using a comprehensive set of clinical specimens with defined, clinically relevant histologic features from various organ systems and diseases. Multiple whole slide imaging systems, multiple sites, and multiple readers must be included.
    • (B) Reproducibility data to evaluate inter-site variability using a comprehensive set of clinical specimens with defined, clinically relevant histologic features from various organ systems and diseases. Multiple whole slide imaging systems, multiple sites, and multiple readers must be included.
    • (C) Data from a clinical study to demonstrate that viewing, reviewing, and diagnosing digital images of surgical pathology slides prepared from tissue slides using the whole slide imaging system is non-inferior to using an optical microscope. The study should evaluate the difference in major discordance rates between manual digital (MD) and manual optical (MO) modalities when compared to the reference (e.g., main sign-out diagnosis).


  • (D) A detailed human factors engineering process must be used to evaluate the whole slide imaging system user interface(s).
  • (2) Labeling compliant with 21 CFR 809.10(b) must include the following:
    • (i) The intended use statement must include the information described in paragraph (1)(i) of this section, as applicable, and a statement that reads, "It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the validity of the interpretation of images obtained using this device."
    • (ii) A description of the technical studies and the summary of results, including those that relate to paragraph (1)(ii) and (1)(iii) of this section, as appropriate.
    • (iii) A description of the performance studies and the summary of results, including those that relate to paragraph (1)(iv) of this section, as appropriate.
    • (iv) A limiting statement that specifies that pathologists should exercise professional judgment in each clinical situation and examine the glass slides by conventional microscopy if there is doubt about the ability to accurately render an interpretation using this device alone.

§ 864.3700 Whole slide imaging system.

(a) Identification. The whole slide imaging system is an automated digital slide creation, viewing, and management system intended as an aid to the pathologist to review and interpret digital images of surgical pathology slides. The system generates digital images that would otherwise be appropriate for manual visualization by conventional light microscopy.
(b) Classification. Class II (special controls). The special controls for this device are:
(1) Premarket notification submissions must include the following information:
(i) The indications for use must specify the tissue specimen that is intended to be used with the whole slide imaging system and the components of the system.
(ii) A detailed description of the device and bench testing results at the component level, including for the following, as appropriate:
(A) Slide feeder;
(B) Light source;
(C) Imaging optics;
(D) Mechanical scanner movement;
(E) Digital imaging sensor;
(F) Image processing software;
(G) Image composition techniques;
(H) Image file formats;
(I) Image review manipulation software;
(J) Computer environment; and
(K) Display system.
(iii) Detailed bench testing and results at the system level, including for the following, as appropriate:
(A) Color reproducibility;
(B) Spatial resolution;
(C) Focusing test;
(D) Whole slide tissue coverage;
(E) Stitching error; and
(F) Turnaround time.
(iv) Detailed information demonstrating the performance characteristics of the device, including, as appropriate:
(A) Precision to evaluate intra-system and inter-system precision using a comprehensive set of clinical specimens with defined, clinically relevant histologic features from various organ systems and diseases. Multiple whole slide imaging systems, multiple sites, and multiple readers must be included.
(B) Reproducibility data to evaluate inter-site variability using a comprehensive set of clinical specimens with defined, clinically relevant histologic features from various organ systems and diseases. Multiple whole slide imaging systems, multiple sites, and multiple readers must be included.
(C) Data from a clinical study to demonstrate that viewing, reviewing, and diagnosing digital images of surgical pathology slides prepared from tissue slides using the whole slide imaging system is non-inferior to using an optical microscope. The study should evaluate the difference in major discordance rates between manual digital (MD) and manual optical (MO) modalities when compared to the reference (e.g., main sign-out diagnosis).
(D) A detailed human factor engineering process must be used to evaluate the whole slide imaging system user interface(s).
(2) Labeling compliant with 21 CFR 809.10(b) must include the following:
(i) The intended use statement must include the information described in paragraph (b)(1)(i) of this section, as applicable, and a statement that reads, “It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the validity of the interpretation of images obtained using this device.”
(ii) A description of the technical studies and the summary of results, including those that relate to paragraphs (b)(1)(ii) and (iii) of this section, as appropriate.
(iii) A description of the performance studies and the summary of results, including those that relate to paragraph (b)(1)(iv) of this section, as appropriate.
(iv) A limiting statement that specifies that pathologists should exercise professional judgment in each clinical situation and examine the glass slides by conventional microscopy if there is doubt about the ability to accurately render an interpretation using this device alone.