K Number
DEN160056
Device Name
Philips IntelliSite Pathology Solution
Date Cleared
2017-04-12

(132 days)

Product Code
PSY
Regulation Number
864.3700
AI/ML
SaMD
IVD (In Vitro Diagnostic)
Therapeutic
Diagnostic
is PCCP Authorized
Intended Use
The Philips IntelliSite Pathology Solution (PIPS) is an automated digital slide creation, viewing, and management system. The PIPS is intended for in vitro diagnostic use as an aid to the pathologist to review and interpret digital images of surgical pathology slides prepared from formalin-fixed paraffin embedded (FFPE) tissue. The PIPS is not intended for use with frozen section, cytology, or non-FFPE hematopathology specimens. The PIPS comprises the Image Management System (IMS), the Ultra Fast Scanner (UFS) and Display. The PIPS is for creation and viewing of digital images of scanned glass slides that would otherwise be appropriate for manual visualization by conventional light microscopy. It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the validity of the interpretation of images obtained using PIPS.
Device Description
The Philips IntelliSite Pathology Solution (PIPS) is an automated digital slide creation, management, viewing and analysis system. The PIPS consists of two subsystems and a display:

  • Ultra Fast Scanner (UFS) (for software UFS1.7.1.1);
  • Image Management System (IMS) (for software IMS2.5.1.1);
  • Display (PS27QHDCR).

The UFS consists of optical, mechanical, electronic and software elements to scan FFPE tissue mounted on glass slides at a resolution of 0.25 um per pixel, which is equivalent to a 40x objective, to create digital Whole Slide Images (WSI). The UFS has a capacity of 300 slides (15 glass slide racks with up to 20 slides per rack). After the slide racks are loaded into the UFS, the UFS automatically detects and starts scanning the slides. CCD cameras are used to capture color images from the back-lit tissue specimen. An LED light source employs top-lit illumination to capture the barcode and back-lit illumination for tissue scanning. The stage (STG) and Image Capturing Unit (ICU) are fixed to each other and to the base frame to ensure correct positioning of the slide and to suppress external disturbances. Proprietary software is used for image processing during acquisition. Philips' proprietary format, iSyntax, is used to store and transmit the images between the UFS and the IMS.

The IMS is a software-only subsystem to be used with the Display. Functionality of the IMS includes the ability to view images, organize workload, and annotate and bookmark scanned images. The user manual for PIPS specifies compatible computer environment hardware and software that is not included as part of the system.

The different subsystems of the PIPS are connected over an IT network at the user site. The IT hardware/software that supports the IMS Application Server & Storage software is not provided as part of the PIPS, but may be located in a central server room separate from the workstation with the IMS viewing software and Display. The communication of data between the UFS and IMS is via a customer-provided wired network or a directly connected cable between these subsystems. The PIPS includes a display that has been validated as part of the pivotal clinical study.

The PIPS allows pathologists to view and evaluate digital images of formalin-fixed, paraffin-embedded (FFPE) tissue slides that would otherwise be appropriate for manual visualization by conventional brightfield (light) microscopy. The PIPS does not include any automated Image Analysis Applications that would constitute computer aided detection or diagnosis.
More Information

Not Found

Not Found

No
The document explicitly states, "The PIPS does not include any automated Image Analysis Applications that would constitute computer aided detection or diagnosis." While it mentions "image processing," this is described in terms of basic image corrections and acquisition, not advanced analytical capabilities typically associated with AI/ML for diagnosis or detection.

No.
The device is used for in vitro diagnostic purposes as an aid to pathologists to review and interpret digital images, not to treat or cure a disease or condition.

Yes.

The "Intended Use / Indications for Use" section explicitly states, "The PIPS is intended for in vitro diagnostic use as an aid to the pathologist to review and interpret digital images of surgical pathology slides prepared from formalin-fixed paraffin embedded (FFPE) tissue."

No

The device description explicitly states that the system consists of two subsystems and a display, including an "Ultra Fast Scanner (UFS)" which is described as having "optical, mechanical, electronic and software elements". This indicates the presence of significant hardware components beyond just software. While the "Image Management System (IMS)" is described as "software only", the overall "Philips IntelliSite Pathology Solution (PIPS)" is a system that includes hardware.

Yes, this device is an IVD (In Vitro Diagnostic).

Here's why:

  • Intended Use: The "Intended Use / Indications for Use" section explicitly states: "The Philips IntelliSite Pathology Solution (PIPS) is intended for in vitro diagnostic use as an aid to the pathologist to review and interpret digital images of surgical pathology slides..."
  • Purpose: The device is used to create, view, and manage digital images of biological specimens (surgical pathology slides) for the purpose of aiding a pathologist in making a diagnosis. This falls squarely under the definition of an in vitro diagnostic device, which is used to examine specimens taken from the human body to provide information for diagnosis, monitoring, or screening.
  • User and Setting: The intended user is a pathologist, and the setting is a pathology laboratory, which is consistent with the use of an IVD.

N/A

Intended Use / Indications for Use

The Philips IntelliSite Pathology Solution (PIPS) is an automated digital slide creation, viewing, and management system. The PIPS is intended for in vitro diagnostic use as an aid to the pathologist to review and interpret digital images of surgical pathology slides prepared from formalin-fixed paraffin embedded (FFPE) tissue. The PIPS is not intended for use with frozen section, cytology, or non-FFPE hematopathology specimens.

The PIPS comprises the Image Management System (IMS), the Ultra Fast Scanner (UFS) and Display. The PIPS is for creation and viewing of digital images of scanned glass slides that would otherwise be appropriate for manual visualization by conventional light microscopy. It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the validity of the interpretation of images obtained using PIPS.

Product codes

PSY

Device Description

The Philips IntelliSite Pathology Solution (PIPS) is an automated digital slide creation, management, viewing and analysis system.

The PIPS consists of two subsystems and a display:

  • Ultra Fast Scanner (UFS) (for software UFS1.7.1.1);
  • Image Management System (IMS) (for software IMS2.5.1.1);
  • Display (PS27QHDCR).

The UFS consists of optical, mechanical, electronic and software elements to scan FFPE tissue mounted on glass slides at a resolution of 0.25 um per pixel, which is equivalent to a 40x objective, to create digital Whole Slide Images (WSI). The UFS has a capacity of 300 slides (15 glass slide racks with up to 20 slides per rack). After the slide racks are loaded into the UFS, the UFS automatically detects and starts scanning the slides. CCD cameras are used to capture color images from the back-lit tissue specimen. An LED light source employs toplit illumination to capture the barcode and back-lit illumination for tissue scanning. The stage (STG) and Image Capturing Unit (ICU) are fixed to each other and to the base frame to ensure correct positioning of the slide and to suppress external disturbances. Proprietary software is used for image processing during acquisition. Philips' proprietary format, iSyntax, is used to store and transmit the images between the UFS and the IMS.

The IMS is a software-only subsystem to be used with the Display. Functionality of the IMS includes the ability to view images, organize workload, and annotate and bookmark scanned images. The user manual for PIPS specifies compatible computer environment hardware and software that is not included as part of the system.

The different subsystems of the PIPS are connected over an IT network at the user site. The IT hardware/software that supports the IMS Application Server & Storage software is not provided as part of the PIPS, but may be located in a central server room separate from the workstation with the IMS viewing software and Display. The communication of data between the UFS and IMS is via a customer-provided wired network or a directly connected cable between these subsystems. The PIPS includes a display that has been validated as part of the pivotal clinical study.

The PIPS allows pathologists to view and evaluate digital images of formalin-fixed, paraffin-embedded (FFPE) tissue slides that would otherwise be appropriate for manual visualization by conventional brightfield (light) microscopy. The PIPS does not include any automated Image Analysis Applications that would constitute computer aided detection or diagnosis.

Mentions image processing

Yes

Mentions AI, DNN, or ML

Not Found

Input Imaging Modality

Whole Slide Images (WSI) from scanned glass slides.

Anatomical Site

Not specified. The clinical study mentions a wide range of organs/tissues including Breast, Prostate, Respiratory, Colorectal, GE junction, Stomach, Skin, Lymph node, Bladder, Gynecological, Liver/BD, Endocrine, Brain/neuro, Kidney/neoplastic, Salivary gland, Peritoneal, Gallbladder, Appendix, Soft tissue, Perianal.

Indicated Patient Age Range

Not Found

Intended User / Care Setting

Pathologist / healthcare system

Description of the training set, sample size, data source, and annotation protocol

Not Found

Description of the test set, sample size, data source, and annotation protocol

Precision/Reproducibility Studies:

  • Slide set for intra-system and inter-system studies: n=399 glass slides used. From this set, 399 FOVs were extracted, including 420 selected features. Additionally, 210 wild card FOVs were selected from other glass slides following the same procedure to minimize bias by the reading pathologist. Total FOV set for intra- and inter-system study was 609 FOVs.
  • Slide set for inter-site study: The 210 wild card FOVs from the intra- and inter-system studies were enrolled as study FOVs. In addition, 189 slides were selected as described above, resulting in a total FOV set of 399 FOVs with 420 selected features. No wild card FOVs were included in the inter-site study.
  • Data Source: Prescribed features in formalin-fixed, paraffin-embedded (FFPE) H&E slides. Consecutive cases were selected from the pathology laboratory using the laboratory information system (LIS) by an enrollment pathologist (EP).
  • Annotation Protocol: The EP reviewed the WSI and defined an area (bookmark) containing the selected feature(s) at the appropriate magnification. A static full resolution extraction image of the bookmark was created and defined as the field of view (FOV). A validating enrollment pathologist (VEP) confirmed whether the feature(s) was present on the FOV. After confirmation, the FOV was considered enrolled.

Clinical Study:

  • Sample Size: A total of 1992 cases were included in the Full Analyses Set with 3390 slides. In total, 15,925 readings were adjudicated (7,964 MD and 7,961 MO).
  • Data Source: Slides obtained from consecutive cases at least one year old and for which a sign-out diagnosis was available from institutions.
  • Annotation Protocol: Slides were selected by a study EP from the original slides used for the sign-out diagnosis. The EP at each site reviewed the pathology report for each case and determined the main diagnosis for the case. The EP subsequently matched the case to the clinical study design list of types of cases to be evaluated and selected the representative slide(s) that reflected the main diagnosis. The selected slides could include H&E, IHC and special stains. In the case of IHCs and special stains, the inclusion of control slides was required to fulfill the quality checks according to general clinical practice. The VEP confirmed that the selected slide(s) reflected the main diagnosis for the case, as well as required ancillary information for cancer cases, and then the case was enrolled. Three adjudicators reviewed the reader diagnosis against the sign-out diagnosis and determined whether the diagnosis was concordant, minor discordant or major discordant. In the event of disagreement between two adjudicators, a third adjudicator reviewed.

Summary of Performance Studies (study type, sample size, AUC, MRMC, standalone performance, key results)

1. Analytical Performance:

  • Precision/Reproducibility:
    • Study Type: Intra-system precision, Inter-system precision, Inter-site reproducibility. Pathologist assessment of specific histopathologic "features" on FFPE H&E slides.
    • Sample Size:
      • Intra-system and Inter-system: 609 FOVs (399 study FOVs with 420 selected features + 210 wild card FOVs) read by 3 pathologists, 3 times each.
      • Inter-site: 399 FOVs (420 selected features) read by 3 different pathologists, once each.
    • Key Results: Agreement rates were calculated based on pathologists recording the presence of observed features. The study acceptance criterion for precision studies was that the lower limit of the 95% CI for the overall agreement rate be 85.0% or above.
      • Intra-system Precision:
        • Overall Agreement Rate: 92.0% (95% CI: 90.57, 93.29). Met acceptance criterion.
      • Inter-system Precision:
        • Overall Agreement Rate: 93.8% (95% CI: 92.6, 95.0). Met acceptance criterion.
      • Inter-site Reproducibility:
        • Overall Agreement Rate: 90.2% (95% CI: 87.9, 92.4).

2. Technical Studies:
Multiple studies were conducted to evaluate the performance assessment data associated with the technical evaluation of the PIPS components (Slide Feeder, Light Source, Imaging Optics, Mechanical Scanner Movement, Digital Imaging Sensor, Image Processing Software, Image Composition, Image Files Format, Image Review Manipulation Software, Computer Environment, Display, Color Reproducibility, Spatial Resolution, Focusing Test, Whole Slide Tissue Coverage, Stitching Error, Turnaround Time). Information and test data were provided to demonstrate their performance and specifications.

3. Human Factors Studies:

  • Study Type: Systematic evaluation involving simulated use by representative users performing all critical tasks.
  • Key Results: No critical task failures were observed. Occasional difficulties were noted but were of little influence on usability perception, and no difficulties or failures were observed on tasks that could lead to patient harm. Pathologists and lab technicians were able to easily identify cases and ensure completeness.

4. Clinical Studies:

  • Study Type: Non-inferiority study comparing digital image review (MD) to optical microscopy (MO) for surgical pathology FFPE tissue slides.
  • Sample Size: 1992 cases (3390 slides), resulting in 15,925 adjudications (7,964 MD and 7,961 MO). Four sites, four pathologists per site.
  • Endpoint: Primary endpoint was the difference in major discordance rates between MD and MO compared to a reference (original sign-out diagnosis by optical microscope).
  • Key Results:
    • Observed overall major discordance rate: 4.9% for MD, 4.6% for MO.
    • Modelled major discordance rate (MMRM logistic model): 4.7% for MD, 4.4% for MO.
    • MD-MO difference in major discordance rate: 0.4% (95% CI: -0.30%, 1.01%).
    • The upper limit of the 95% CI was less than the pre-specified non-inferiority margin of 4%.
    • Conclusion: The MD modality using the PIPS was demonstrated to be non-inferior to the MO modality with respect to major discordance rate when comparing to the main diagnosis.
    • Secondary analysis showed strong agreement between modalities: 96.5% of paired readings resulted in no major discordance for both (93.5%) or major discordance for both (3%).
    • Major discordance rates by organ: Differences between MD and MO were
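The primary-endpoint logic above can be restated as a small numeric check. This is an illustrative sketch using the figures reported in the study summary; the variable names are mine, not from the submission:

```python
# Non-inferiority check for the MD vs. MO major discordance rates.
md_mo_difference = 0.004            # reported MD-MO difference: 0.4%
ci_low, ci_high = -0.0030, 0.0101   # reported 95% CI: (-0.30%, 1.01%)
margin = 0.04                       # pre-specified non-inferiority margin: 4%

# MD is non-inferior to MO if the upper limit of the 95% CI for the
# difference in major discordance rates falls below the margin.
non_inferior = ci_high < margin
print(non_inferior)  # True: 1.01% < 4%
```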

§ 864.3700 Whole slide imaging system.

(a) Identification. The whole slide imaging system is an automated digital slide creation, viewing, and management system intended as an aid to the pathologist to review and interpret digital images of surgical pathology slides. The system generates digital images that would otherwise be appropriate for manual visualization by conventional light microscopy.
(b) Classification. Class II (special controls). The special controls for this device are:
(1) Premarket notification submissions must include the following information:
(i) The indications for use must specify the tissue specimen that is intended to be used with the whole slide imaging system and the components of the system.
(ii) A detailed description of the device and bench testing results at the component level, including for the following, as appropriate:
(A) Slide feeder;
(B) Light source;
(C) Imaging optics;
(D) Mechanical scanner movement;
(E) Digital imaging sensor;
(F) Image processing software;
(G) Image composition techniques;
(H) Image file formats;
(I) Image review manipulation software;
(J) Computer environment; and
(K) Display system.
(iii) Detailed bench testing and results at the system level, including for the following, as appropriate:
(A) Color reproducibility;
(B) Spatial resolution;
(C) Focusing test;
(D) Whole slide tissue coverage;
(E) Stitching error; and
(F) Turnaround time.
(iv) Detailed information demonstrating the performance characteristics of the device, including, as appropriate:
(A) Precision to evaluate intra-system and inter-system precision using a comprehensive set of clinical specimens with defined, clinically relevant histologic features from various organ systems and diseases. Multiple whole slide imaging systems, multiple sites, and multiple readers must be included.
(B) Reproducibility data to evaluate inter-site variability using a comprehensive set of clinical specimens with defined, clinically relevant histologic features from various organ systems and diseases. Multiple whole slide imaging systems, multiple sites, and multiple readers must be included.
(C) Data from a clinical study to demonstrate that viewing, reviewing, and diagnosing digital images of surgical pathology slides prepared from tissue slides using the whole slide imaging system is non-inferior to using an optical microscope. The study should evaluate the difference in major discordance rates between manual digital (MD) and manual optical (MO) modalities when compared to the reference (e.g., main sign-out diagnosis).
(D) A detailed human factor engineering process must be used to evaluate the whole slide imaging system user interface(s).
(2) Labeling compliant with 21 CFR 809.10(b) must include the following:
(i) The intended use statement must include the information described in paragraph (b)(1)(i) of this section, as applicable, and a statement that reads, “It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the validity of the interpretation of images obtained using this device.”
(ii) A description of the technical studies and the summary of results, including those that relate to paragraphs (b)(1)(ii) and (iii) of this section, as appropriate.
(iii) A description of the performance studies and the summary of results, including those that relate to paragraph (b)(1)(iv) of this section, as appropriate.
(iv) A limiting statement that specifies that pathologists should exercise professional judgment in each clinical situation and examine the glass slides by conventional microscopy if there is doubt about the ability to accurately render an interpretation using this device alone.


EVALUATION OF AUTOMATIC CLASS III DESIGNATION FOR Philips IntelliSite Pathology Solution (PIPS)

DECISION SUMMARY

Correction Date: October 13, 2017

This Decision Summary contains corrections to the April 13, 2017 Decision Summary.

A. DEN Number:

DEN160056

B. Purpose for Submission:

De Novo request for evaluation of automatic class III designation for the Philips IntelliSite Pathology Solution (PIPS)

C. Measurand:

Not applicable.

D. Type of Test:

Digital pathology whole slide imaging system

E. Applicant:

Philips Medical Systems Nederland B.V.

F. Proprietary and Established Names:

Philips IntelliSite Pathology Solution (PIPS)

G. Regulatory Information:

    1. Regulation section:
      21 CFR 864.3700
    2. Classification:
      Class II (special controls)
    3. Product code: PSY
    4. Panel: 88 - Pathology


H. Indications for use:

1. Indications for use:

The Philips IntelliSite Pathology Solution (PIPS) is an automated digital slide creation, viewing, and management system. The PIPS is intended for in vitro diagnostic use as an aid to the pathologist to review and interpret digital images of surgical pathology slides prepared from formalin-fixed paraffin embedded (FFPE) tissue. The PIPS is not intended for use with frozen section, cytology, or non-FFPE hematopathology specimens.

The PIPS comprises the Image Management System (IMS), the Ultra Fast Scanner (UFS) and Display. The PIPS is for creation and viewing of digital images of scanned glass slides that would otherwise be appropriate for manual visualization by conventional light microscopy. It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the validity of the interpretation of images obtained using PIPS.

    2. Special conditions for use statement(s):
      For in vitro diagnostic (IVD) use only
      For prescription use only

    3. Special instrument requirements:
      Image Management System (IMS) (for software IMS2.5.1.1)
      Ultra Fast Scanner (UFS) (for software UFS1.7.1.1)
      Display (PS27QHDCR)

I. Device Description:

The Philips IntelliSite Pathology Solution (PIPS) is an automated digital slide creation, management, viewing and analysis system.

The PIPS consists of two subsystems and a display:

  • Ultra Fast Scanner (UFS) (for software UFS1.7.1.1);
  • Image Management System (IMS) (for software IMS2.5.1.1);
  • Display (PS27QHDCR).

The UFS consists of optical, mechanical, electronic and software elements to scan FFPE tissue mounted on glass slides at a resolution of 0.25 um per pixel, which is equivalent to a 40x objective, to create digital Whole Slide Images (WSI). The UFS has a capacity of 300 slides (15 glass slide racks with up to 20 slides per rack). After the slide racks are loaded into the UFS, the UFS automatically detects and starts scanning the slides. CCD cameras are used to capture color images from the back-lit tissue specimen. An LED light source employs top-lit illumination to capture the barcode and back-lit illumination for tissue scanning. The stage (STG) and Image Capturing Unit (ICU) are fixed to each other and to the base frame to ensure correct positioning of the slide and to suppress external disturbances. Proprietary software is used for image processing during acquisition. Philips' proprietary format, iSyntax, is used to store and transmit the images between the UFS and the IMS.

The IMS is a software-only subsystem to be used with the Display. Functionality of the IMS includes the ability to view images, organize workload, and annotate and bookmark scanned images. The user manual for PIPS specifies compatible computer environment hardware and software that is not included as part of the system.

The different subsystems of the PIPS are connected over an IT network at the user site. The IT hardware/software that supports the IMS Application Server & Storage software is not provided as part of the PIPS, but may be located in a central server room separate from the workstation with the IMS viewing software and Display. The communication of data between the UFS and IMS is via a customer-provided wired network or a directly connected cable between these subsystems. The PIPS includes a display that has been validated as part of the pivotal clinical study.

The PIPS allows pathologists to view and evaluate digital images of formalin-fixed, paraffin-embedded (FFPE) tissue slides that would otherwise be appropriate for manual visualization by conventional brightfield (light) microscopy. The PIPS does not include any automated Image Analysis Applications that would constitute computer aided detection or diagnosis.

J. Standard/Guidance Document Referenced (if applicable):

Technical Performance Assessment Digital Pathology Whole Slide Imaging Devices; Guidance for Industry and Food and Drug Administration Staff (April 20, 2016).

K. Test Principle:

The PIPS device is an automated system designed for scanning and digitizing surgical pathology slides prepared from FFPE tissue. These digitized images can then be reviewed and interpreted by pathologists for clinical (patient care) purposes.

Prior to scanning the slide on the UFS, the technician conducts quality control of the slides per the laboratory's standards. The technician then places the slides into racks, which are loaded into the UFS. The handler in the UFS automatically moves a slide from the storage area to the scanning area. A macro image is generated that includes the slide label and a low power image of the entire slide. The system then determines regions of interest in the tissue to scan, which are subsequently scanned at high resolution (0.25 um per pixel). After the slide is scanned, it is returned to the same slot of the same rack from which it was originally obtained.

The images scanned in the UFS are compressed using Philips' proprietary iSyntax format and are transmitted to the IMS subsystem. The images can be reviewed through the IMS only.


The IMS allows the user to identify, organize and execute the worklist. The pathologist selects the first slide, navigates around the slide and views the images at the desired magnification. The pathologist is responsible for ensuring the validity of the interpretation of the digital images obtained from the PIPS.

L. Interpretation of Results:

The PIPS is an aid to the pathologist to review and interpret digital images of surgical pathology slides prepared from FFPE tissue that would otherwise be appropriate for manual visualization by conventional brightfield (light) microscopy. It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the validity of the interpretation of images obtained using the PIPS. Additionally, it is the responsibility of the pathologist to use professional judgment when determining whether to directly examine the glass slides by light microscopy if there is uncertainty about the interpretation of the digital image(s).

M. Performance Characteristics:

1. Analytical performance:

  a. Precision/Reproducibility:

The objective of this study was to evaluate both intra-system precision and inter-system precision for the PIPS. Inter-site reproducibility was also evaluated for the PIPS.

The precision of the device was based on three reading pathologists' assessments and identification of specific histopathologic "features" that are observed in formalin-fixed, paraffin-embedded (FFPE) H&E slides. Twenty-one (21) features were selected for the analytical studies. The 21 features were evaluated at their relevant magnification. The levels of magnification were 10x, 20x, and 40x, and each level of magnification included 7 features.

For each feature, three organs were selected. For each organ, six FOVs were selected, each containing one study feature. (21 features * 3 organs/feature * 6 FOV/organ = 378 single selected feature FOVs). Additionally, there were 21 FOVs each containing two selected features, for a total of 378 + 21 = 399 FOVs and 378 + (21*2) = 420 selected features. An example is shown in Table 1 below.

| Magnification | Feature | Organ   | FOVs with 1 Selected Feature | FOVs with 2 Selected Features |
|---------------|---------|---------|------------------------------|-------------------------------|
| 40x           | Mitosis | Lung    | 6                            | 0                             |
| 40x           | Mitosis | Rectum  | 6                            | 1                             |
| 40x           | Mitosis | Uterus  | 6                            | 1                             |
| 40x           | Mitosis | Overall | 18                           | 2                             |

Table 1: Study feature example
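The enrollment arithmetic above can be checked with a short sketch (illustrative only; the variable names are mine, not from the study protocol):

```python
# Verify the FOV and feature counts described in the precision study design.
features = 21           # histopathologic study features
organs_per_feature = 3  # three organs selected per feature
fovs_per_organ = 6      # six single-feature FOVs per organ

single_feature_fovs = features * organs_per_feature * fovs_per_organ
dual_feature_fovs = 21  # additional FOVs each containing two selected features

total_fovs = single_feature_fovs + dual_feature_fovs
total_features = single_feature_fovs + 2 * dual_feature_fovs

print(single_feature_fovs, total_fovs, total_features)  # 378 399 420
```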


For each pre-specified feature, consecutive cases were selected from the pathology laboratory using the laboratory information system (LIS) by the enrollment pathologist (EP). The validating enrollment pathologist (VEP) confirmed whether the feature was present on the glass slide. Once the slides were scanned, the EP reviewed the WSI and defined an area (bookmark) containing the selected feature(s) at the appropriate magnification. Then a static full resolution extraction image of the bookmark was created and defined as the field of view (FOV). The VEP confirmed whether the feature(s) was present on the FOV. After confirmation, the FOV was considered enrolled.

For the intra-system and the inter-system studies, the same set of glass slides (n=399) was used. From this slide set, 399 FOVs were extracted, which included 420 selected features. In addition, 210 wild card FOVs were selected from other glass slides following the same procedure. Wild card FOVs were used to minimize or avoid bias by the reading pathologist, but were not analyzed or used for the primary analysis. The total FOV set for the intra- and inter-system study was 609 FOVs. Each of the three reading pathologists evaluated each enrolled FOV three times, once during each of three reading sessions.

For the inter-site study, the 210 wild card FOVs from the intra- and inter-system studies were enrolled as study FOVs. In addition, 189 slides were selected as described above, resulting in a total FOV set of 399 FOVs with 420 selected features. There were no wild card FOVs included in the inter-site study, as each reading pathologist evaluated each FOV only once. The study included three different reading pathologists located at different sites, each site with its own PIPS system.

For each FOV, the reading pathologist recorded the presence of each observed feature on a checklist. For each magnification, a separate checklist containing ten features (seven study features and three non-study features) was developed. Only the selected features were used for the primary analysis. For secondary analyses, all observed features were analyzed. Each study was designed such that there were three readings for each selected feature on an FOV:

  • For the intra-system study, the three readings by the same pathologist were from three scans from the same system.
  • For the inter-system study, the three readings by the same pathologist were scans from three different systems (at the same site).
  • For inter-site reproducibility, the three readings were by three different pathologists and from three different systems, each at a different site.

Intra-system Precision Study:

The study slide set was divided equally (n=133 slides per system) and randomly over three systems at one site. On each system the slides were scanned three times with at least six hours downtime (ensuring full cool down) of the system between scanning iterations. The 210 wild card slides were all scanned once on System 1.


Three separate reading sessions were performed by each of three reading pathologists, with a washout period of at least two weeks in between reading sessions. The 399 FOVs that were read during a reading session were randomly selected from the FOVs originating from three different systems and three different iterations. Per reading session, 70 different wild card FOVs were added such that all 210 wild card FOVs were read by each reading pathologist.

The overall intra-system agreement rate was calculated by averaging all available pairwise comparison results over all 420 enrolled features and all three pathologists. While each system scanned 133 slides 3 times each, some of the slides contained multiple features as explained previously. This results in different numbers of comparison pairs on a per system basis, but the overall number is consistent (420 selected features * 3 systems * 3 reads = 3780). To preserve the correlation structure of multiple readings of the same feature and multiple features on an FOV, the bootstrap method was used to derive a two-sided 95% confidence interval (CI) for the overall agreement rate. An FOV was the bootstrap re-sampling unit. The study acceptance criterion that the lower limit of the 95% CI for the overall agreement rate be 85.0% or above was met (Table 2).

| System | Number of Pairwise Agreements | Number of Comparison Pairs | Agreement Rate (%) | 95% CI |
|----------|------|------|------|----------------|
| System 1 | 1146 | 1278 | 89.7 | (87.1, 92.1) |
| System 2 | 1149 | 1233 | 93.2 | (90.9, 95.3) |
| System 3 | 1181 | 1269 | 93.1 | (90.7, 95.2) |
| Overall  | 3476 | 3780 | 92.0 | (90.57, 93.29) |

Table 2: Intra-system study results
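The FOV-level bootstrap used for the agreement-rate confidence intervals can be sketched as follows. This is an illustrative reconstruction under an assumed data layout (a list of FOVs, each carrying per-feature `(agreements, pairs)` counts), not the study's actual analysis code.

```python
import random

def bootstrap_agreement_ci(fovs, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap CI for an overall pairwise agreement rate.

    `fovs` is a list of FOVs; each FOV is a list of (agreements, pairs)
    tuples, one per enrolled feature on that FOV. Resampling whole FOVs
    (rather than individual comparisons) preserves the correlation among
    repeated reads of one feature and among multiple features on one FOV.
    """
    rng = random.Random(seed)
    rates = []
    for _ in range(n_boot):
        # Draw FOVs with replacement, then pool agreements and pairs.
        sample = [fovs[rng.randrange(len(fovs))] for _ in fovs]
        agree = sum(a for fov in sample for a, _ in fov)
        pairs = sum(p for fov in sample for _, p in fov)
        rates.append(agree / pairs)
    rates.sort()
    lo = rates[int((alpha / 2) * n_boot)]
    hi = rates[int((1 - alpha / 2) * n_boot) - 1]
    point = (sum(a for fov in fovs for a, _ in fov)
             / sum(p for fov in fovs for _, p in fov))
    return point, (lo, hi)
```

The acceptance criterion then reduces to checking that the returned lower limit is at least 0.85.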

Inter-system Precision Study:

The complete study slide set (n=399) was scanned once on each of the three systems at one site. The same 210 wild card FOVs from the intra-system study were used. Three separate reading sessions were performed by each pathologist, with a washout period of at least two weeks between sessions. The 399 FOVs read during a reading session were randomly selected from the FOVs originating from the three different systems. Per reading session, 70 different wild card FOVs were added, such that all 210 wild card FOVs were read by each reading pathologist.

The overall inter-system agreement rate was calculated by averaging all available pairwise comparison results over all 420 selected features and all three pathologists. To preserve the correlation structure of multiple readings of the same feature and of multiple features on an FOV, the bootstrap method was used to derive a two-sided 95% CI for the overall agreement rate, with the FOV as the bootstrap re-sampling unit. The acceptance criterion that the lower limit of the 95% CI for the overall agreement rate be 85.0% or above was met (Table 3).


| Systems Compared | Number of Pairwise Agreements | Number of Comparison Pairs | Agreement Rate (%) | 95% CI |
|------------------|------|------|------|--------------|
| Sys 1 v Sys 2 | 1173 | 1260 | 93.1 | (91.5, 94.6) |
| Sys 1 v Sys 3 | 1181 | 1260 | 93.7 | (92.2, 95.2) |
| Sys 2 v Sys 3 | 1192 | 1260 | 94.6 | (93.3, 95.9) |
| Overall | 3546 | 3780 | 93.8 | (92.6, 95.0) |

Table 3: Inter-system study results
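The comparison-pair counts above follow from each feature being read three times: three reads yield C(3,2) = 3 pairwise comparisons per feature, so 420 features give 1260 pairs per system comparison. A minimal counting sketch, with hypothetical feature labels:

```python
from itertools import combinations

def pairwise_agreements(reads_per_feature):
    """Count pairwise agreements among repeated categorical reads.

    `reads_per_feature` maps a feature id to the list of reads of that
    feature (e.g., one read per system). k reads of a feature contribute
    C(k, 2) comparison pairs.
    """
    agree = pairs = 0
    for reads in reads_per_feature.values():
        for a, b in combinations(reads, 2):
            pairs += 1
            agree += int(a == b)
    return agree, pairs
```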

Inter-site Reproducibility Study:

For the inter-site study, the slide set (n=399) was scanned once at each of three sites, resulting in three WSI sets. There was a different reading pathologist at each site, and each of the three reading pathologists had only one reading session, in which all FOVs scanned at their site were read. The order in which the FOVs were read at each site was randomized.

The overall inter-site agreement rate was calculated by averaging all available pairwise comparison results over all 420 enrolled features. A bootstrap 95% CI was also calculated for the overall inter-site agreement rate for review purposes (Table 4).

| Sites Compared | Number of Pairwise Agreements | Number of Comparison Pairs | Agreement Rate (%) | 95% CI |
|----------------|------|------|------|--------------|
| Site 1 v Site 2 | 370 | 420 | 88.1 | (84.9, 91.2) |
| Site 1 v Site 3 | 379 | 420 | 90.2 | (87.4, 92.9) |
| Site 2 v Site 3 | 387 | 420 | 92.1 | (89.4, 94.7) |
| Overall | 1136 | 1260 | 90.2 | (87.9, 92.4) |

Table 4: Inter-site study results

  • b. Linearity/assay reportable range:
    Not applicable

  • c. Traceability, Stability, Expected values (controls, calibrators, or methods):
    Not applicable

  • d. Detection limit:
    Not applicable

  • e. Analytical Reactivity:
    Not applicable

  • f. Interferences (Robustness):
    Not applicable

  • g. Assay cut-off (Interpretation of Results):
    Not applicable

2. Technical studies:

Multiple studies were conducted to evaluate the performance assessment data associated with the technical evaluation of the PIPS.
  • a. Slide Feeder
    Information was provided on the configuration of the slide feed mechanism, including a physical description of the slide, the number of slides in queue (carrier), and the class of automation. Information was provided on the user interaction with the slide feeder, including hardware, software, feedback mechanisms, and Failure Mode and Effects Analysis (FMEA).

  • b. Light Source
    Descriptive information associated with the lamp and the condenser was provided. Testing information was provided to verify the spectral distribution of the light source as part of the color reproduction capability of the UFS subsystem.

  • c. Imaging Optics

An optical schematic with all optical elements identified from the slide (object plane) to the digital image sensor (image plane) was provided. Descriptive information regarding the microscope objective, the auxiliary lenses, and the magnification of the imaging optics was provided. Testing information regarding the relative irradiance, optical distortions, and lateral chromatic aberrations was provided.

  • d. Mechanical Scanner Movement

Information and specifications on the configuration of the stage, the method and control of stage movement, and FMEA was provided. Test data to verify the repeatability of the stage movement and to verify the mechanism that keeps the stage movement within limits during operation was provided.

  • e. Digital Imaging Sensor

Information and specifications on the sensor type, pixel information, responsivity specifications, noise specifications, readout rate, and digital output format was provided. Test data to determine the correct functioning of the digital image sensor, which converts optical signals from the slide into digital signals consisting of a set of numerical values corresponding to the brightness and color at each point in the optical image, was provided.

  • f. Image Processing Software

Information and specifications on the exposure control, white balance, color correction, sub-sampling, pixel-offset correction, pixel-gain or flat-field correction, and pixel-defect correction was provided.

  • g. Image Composition

Information and specifications on the scanning method, the scanning speed, and the number of planes at the Z-axis to be digitized was provided. Test data to analyze the image composition performance was provided.

  • h. Image File Format

Information and specifications on the compression method, compression ratio, file format, and file organization was provided.

  • i. Image Review Manipulation Software

Information and specifications on continuous panning and pre-fetching, continuous zooming, discrete Z-axis displacement, the ability to compare multiple slides simultaneously in multiple windows, image enhancement and sharpening functions, color manipulation, annotation tools, digital bookmarks, and a virtual multihead microscope was provided.

  • j. Computer Environment

Information and specifications on the computer hardware, operating system, graphics card, graphics card driver, color management settings, color profile, and display interface was provided.

  • k. Display

Information and specifications on the technological characteristics of the display device was provided, including: the physical size of the viewable area and aspect ratio; backlight type and properties; frame rate and refresh rate; pixel array, pitch, pixel aperture ratio and subpixel matrix scheme; subpixel driving to improve grayscale resolution; supported color spaces; display interface; user controls of brightness, contrast, gamma, color space, power-saving options, etc., via the on-screen display menu; ambient light adaptation; touchscreen technology; color calibration tools; and the frequency and nature of quality-control tests. Test data to verify the performance of the display was provided.

  • l. Color Reproducibility
    Test data to evaluate the color reproducibility of the system was provided.

  • m. Spatial Resolution
    Test data to evaluate the composite optical performance of all components in the image acquisition phase was provided.

  • n. Focusing Test
    Test data to evaluate the technical focus quality of the system was provided.

  • o. Whole Slide Tissue Coverage
    Test data to demonstrate that the entire tissue specimen on the glass slide is detected by the tissue detection algorithms, and that all of the tissue specimens are included in the digital image file, was provided.

  • p. Stitching Error
    Test data to evaluate the stitching errors and artifacts in the reconstructed image was provided.

  • q. Turnaround Time
    Test data to evaluate the turnaround time of the system was provided.

3. Human factors studies:

Human factors studies designed around critical user tasks and use scenarios performed by users were conducted. Information included a list of all critical user tasks and a description of the process that was followed to identify them.

A systematic evaluation was provided involving simulated use by representative users performing all critical tasks required for operation of the device, with subjective assessments of failure collected. No critical task failures were observed. Occasional difficulties of the kind expected with any new software occurred, but learnability and ease of use were rated very high. All observed difficulties had little influence on the perception of usability, and no difficulties or failures were observed on tasks that could lead to patient harm.

In all instances both pathologists and lab technicians were able to easily identify cases and ensure that everything was complete.

4. Clinical studies:

A study was conducted to demonstrate that viewing, reviewing and diagnosing digital images of surgical pathology FFPE tissue slides using the PIPS is non-inferior to using optical (light) microscopy. The primary endpoint was the difference in major discordance rates between the manual digital (MD) and manual optical (MO) modalities when compared to the reference (main) diagnosis, which is based on the original sign-out diagnosis rendered at the institution using an optical (light) microscope. By the study protocol, a total of 2000 cases consisting of multiple organ and tissue types were to be enrolled. Cases were divided over four sites. At each site, four pathologists read all the cases assigned to the site using both the MO and the MD modalities, in alternating fashion and randomized order, with a washout period of four weeks in between, resulting in a total of 8000 planned digital reads and 8000 planned optical reads. Three adjudicators reviewed each reader diagnosis against the sign-out diagnosis and determined whether the diagnosis was concordant, minor discordant or major discordant.

The study was based on the reading of slides obtained from consecutive cases at least one year old and for which a sign-out diagnosis was available. Slides were selected by a study EP from the original slides used for the sign-out diagnosis. The EP at each site reviewed the pathology report for each case and determined the main diagnosis for the case. The EP subsequently matched the case to the clinical study design list of types of cases to be evaluated and selected the representative slide(s) that reflected the main diagnosis. The selected slides could include H&E, IHC and special stains. In the case of IHCs and special stains, the inclusion of control slides was required to fulfill the quality checks according to general clinical practice. The VEP confirmed that the selected slide(s) reflected the main diagnosis for the case, as well as required ancillary information for cancer cases, and then the case was enrolled.

All 16 reading pathologists, four per site, read the slides for all cases at their site (approximately 500 cases per site), using both the MO modality and the MD modality in alternating order. There was a washout of at least four weeks between the first and second reading of the same case. The reading pathologists were provided with all representative slide(s) of a case at once. An electronic case report form (eCRF) was completed to document the reading pathologist's diagnosis. Two adjudication pathologists then independently reviewed the eCRFs to determine whether or not the diagnosis was consistent with the main diagnosis. A major discordance was defined as a difference in diagnosis that would be associated with a clinically important difference in patient management; a minor discordance was defined as a difference in diagnosis that would not be associated with a clinically important difference in patient management. If the two adjudicators disagreed, a third adjudication pathologist reviewed the case to reach a majority vote. In cases where all three adjudicators had different opinions, consensus was reached in an adjudication panel meeting consisting of the same three adjudication pathologists.
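The adjudication flow described above can be written down as a simple decision rule; the function and label names here are illustrative, not taken from the study protocol.

```python
def adjudicate(first, second, third=None):
    """Adjudication rule: two independent adjudicators review first; a
    third breaks any disagreement by majority; if all three disagree,
    the case goes to a panel meeting. Labels used here ('concordant',
    'minor', 'major') are illustrative.
    """
    if first == second:
        return first
    if third is None:
        return "needs_third"  # disagreement: escalate to a third adjudicator
    for vote in (first, second, third):
        if [first, second, third].count(vote) >= 2:
            return vote  # majority of the three adjudicators
    return "panel"  # all three differ: consensus panel meeting
```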

Inclusion criteria:

  • All glass slides with human tissue obtained via surgical pathology of the original case were available.
  • Original sign-out diagnosis was available.
  • The selected slide or slides for the main diagnosis and the control slide(s) fulfilled the quality checks according to general clinical practice.
  • Cases were at least one year since accessioning.

Exclusion criteria:

  • Cases, including cases previously referred to another institution, for which any H&E, IHC or special stains slide used for the original sign-out diagnosis was no longer available at the site.
  • Cases for which the control slides for IHC and special stains were not available.
  • The selected slide or slides for the main diagnosis did not match any subtype of the organ for which the case was selected.
  • Relevant clinical information that was available to the sign-out pathologist in the pathology request form could not be obtained.
  • Selected slides contained indelible markings.
  • Selected slides had damage that could not be easily repaired.
  • More than one case was selected for a patient (only one case may be enrolled per patient).
  • Case consisted of frozen section(s) only.
  • Case consisted of gross specimens only.

For the primary objective of demonstrating the MD major discordance rate to be noninferior to the MO major discordance rate, a Mixed Model Repeated Measures (MMRM) logistic regression was conducted. For each reading result the dependent variable was the major discordance status and the independent variables included modality as a fixed effect (MD vs. MO) and site, reader, and case as random effects. A two-sided 95% CI for the modality effect, i.e., the overall MD-MO major discordance rate difference, was constructed from this analysis. If the upper bound of the 95% CI was less than the noninferiority margin of 4%, MD would be considered non-inferior to MO.
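For intuition only, the non-inferiority decision can be sketched with a simple Wald interval on the difference in discordance proportions. This deliberately ignores the within-case pairing and the site, reader, and case random effects that the study's MMRM logistic regression accounts for, so it is not the study's method.

```python
import math

def noninferiority_wald(p_test, p_ref, n, margin=0.04, z=1.96):
    """Two-sided ~95% Wald CI for (p_test - p_ref) with n reads per arm;
    non-inferiority is declared if the CI's upper bound is below `margin`.
    Simplified unpaired approximation, for illustration only.
    """
    diff = p_test - p_ref
    se = math.sqrt(p_test * (1 - p_test) / n + p_ref * (1 - p_ref) / n)
    ci = (diff - z * se, diff + z * se)
    return ci, ci[1] < margin
```

With the observed rates (4.9% MD vs. 4.6% MO over roughly 7964 reads per arm), this crude interval also places the upper bound well below the 4% margin, consistent in direction with the reported MMRM result.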

A total of 1992 cases were included in the Full Analysis Set, with 3390 slides. In total, 15,925 readings were adjudicated (7,964 MD and 7,961 MO). The observed overall major discordance rate, i.e., over all sites, reading pathologists and organs, was 4.9% for MD and 4.6% for MO, an MD-MO difference of 0.4% (after rounding). The major discordance rates as estimated by the MMRM logistic model ("modelled") were similar and slightly lower: 4.7% for MD and 4.4% for MO, an MD-MO difference of 0.4% with a derived two-sided 95% CI of [-0.30%, 1.01%]. The upper limit of this confidence interval was less than the pre-specified non-inferiority margin of 4%; therefore, the MD modality using the PIPS was demonstrated to be non-inferior to the MO modality with respect to the major discordance rate when compared to the main diagnosis. Thus, the study met the primary objective, as shown in Table 5.

Table 5: Clinical study results based on major discordance rates

|          | Manual Digital (MD) | | | Manual Optical (MO) | | | Difference (MD - MO) | |
|----------|-------------|--------------|--------------|-------------|--------------|-------------|--------------|---------------|
|          | Total reads | % discordant | 95% CI       | Total reads | % discordant | 95% CI      | % discordant | 95% CI        |
| Observed | 7964        | 4.9          | N/A          | 7961        | 4.6          | N/A         | 0.4*         | N/A           |
| MMRM     | 7964        | 4.7          | (3.27, 6.82) | 7961        | 4.4          | (3.2, 6.33) | 0.4*         | (-0.30, 1.01) |

\* Difference does not equal 0.3 due to rounding error.

The major discordance rates by organ and modality, and their differences, are shown in Table 6.

| Organ | MD major discordance rate | MO major discordance rate | Difference (MD - MO) |
|-------|------|------|-------|
| Breast | 4.2% | 4.3% | -0.2% |
| Prostate | 12.0% | 11.3% | 0.8% |
| Respiratory | 3.5% | 4.2% | -0.7% |
| Colorectal | 1.7% | 1.0% | 0.7% |
| GE junction | 2.0% | 1.3% | 0.7% |
| Stomach | 0.8% | 0.5% | 0.3% |
| Skin | 4.9% | 4.7% | 0.3% |
| Lymph node | 0.3% | 0.8% | -0.5% |
| Bladder | 7.3% | 6.1% | 1.3% |
| Gynecological | 6.3% | 5.2% | 1.2% |
| Liver/BD | 4.6% | 5.6% | -1.0% |
| Endocrine | 6.5% | 4.7% | 1.8% |
| Brain / neuro | 6.2% | 5.8% | 0.4% |
| Kidney / neoplastic | 2.5% | 1.0% | 1.5% |
| Salivary gland | 2.0% | 3.0% | -1.0% |
| Peritoneal | 0.0% | 0.0% | 0.0% |
| Gallbladder | 0.0% | 0.0% | 0.0% |
| Appendix | 0.0% | 0.0% | 0.0% |
| Soft tissue | 0.0% | 0.0% | 0.0% |
| Perianal | 1.0% | 2.0% | -1.0% |

Table 6: Major discordance rates by organ and modality

The differences in major discordance rates between MD and MO were