The Philips IntelliSite Pathology Solution (PIPS) 5.1 is an automated digital slide creation, viewing, and management system. The PIPS 5.1 is intended for in vitro diagnostic use as an aid to the pathologist to review and interpret digital images of surgical pathology slides prepared from formalin-fixed paraffin embedded (FFPE) tissue. The PIPS 5.1 is not intended for use with frozen section, cytology, or non-FFPE hematopathology specimens.
The PIPS 5.1 comprises the Image Management System (IMS) 4.2, Ultra Fast Scanner (UFS), Pathology Scanner SG20, Pathology Scanner SG60, Pathology Scanner SG300, and the PP27QHD Display. The PIPS 5.1 is for creation and viewing of digital images of scanned glass slides that would otherwise be appropriate for manual visualization by conventional light microscopy. It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the validity of the interpretation of images obtained using PIPS 5.1.
The Philips IntelliSite Pathology Solution (PIPS) 5.1 is an automated digital slide creation, viewing, and management system. PIPS 5.1 consists of two subsystems and a display component:
- Subsystems:
  a. A scanner, in any combination of the following scanner models:
     i. Ultra Fast Scanner (UFS)
     ii. Pathology Scanner SG, offered in versions with varying slide capacity: Pathology Scanner SG20, Pathology Scanner SG60, and Pathology Scanner SG300
  b. Image Management System (IMS) 4.2
- Display: PP27QHD
Here's a breakdown of the acceptance criteria and study details for the Philips IntelliSite Pathology Solution 5.1, based on the provided FDA 510(k) summary:
Acceptance Criteria and Reported Device Performance
Acceptance Criteria Category | Acceptance Criteria | Reported Device Performance (Summary) |
---|---|---|
Technical Performance (Non-Clinical) | All technical studies (e.g., Light Source, Imaging optics, Mechanical scanner Movement, Digital Imaging sensor, Image Processing Software, Image composition, Image Review Manipulation Software, Color Reproducibility, Spatial Resolution, Focusing Test, Whole Slide Tissue Coverage, Stitching Error, Turnaround Time) must pass their predefined acceptance criteria. | All technical studies passed their acceptance criteria. Pixelwise comparison showed identical image reproduction with zero ΔE between subject and predicate device. |
Electrical Safety | Compliance with IEC61010-1. | Passed. |
Electromagnetic Compatibility (EMC) | Compliance with IEC 61326-2-6 (for laboratory use of in vitro diagnostic equipment) and IEC 60601-1-2. | Passed for both emissions and immunity. |
Human Factors | User tasks and use scenarios successfully completed by all user groups. | Successfully completed for all user groups. |
Precision Study (Intra-system) | Lower limit of the 95% Confidence Interval (CI) of the Average Positive Agreement exceeding 85%. | Overall Agreement Rate: 88.3% (95% CI: 86.7%; 89.9%). All individual scanner CIs also exceeded 85%. |
Precision Study (Inter-system) | Lower limit of the 95% CI of the Average Positive Agreement exceeding 85%. | Overall Agreement Rate: 95.4% (95% CI: 94.4%; 96.5%). All individual scanner comparison CIs also exceeded 85%. |
Precision Study (Inter-site) | Lower limit of the 95% CI of the Average Positive Agreement exceeding 85%. | Overall Agreement Rate: 90.7% (95% CI: 88.4%; 92.9%). All individual site comparison CIs also exceeded 85%. |
Clinical Study (Non-Inferiority) | The upper bound of the 95% two-sided confidence interval for the manual digital – manual optical difference in major discordance rate is less than 4%. | Difference in major discordance rate (digital-optical) was 0.1% with a 95% CI of (-1.01%; 1.18%). The upper limit (1.18%) was less than the non-inferiority margin of 4%. |
Study Details
2. Sample sizes used for the test set and the data provenance:
- Non-Clinical (Pixelwise Comparison):
- Sample Size: 42 FFPE tissue glass slides from different anatomic locations. Three regions of interest (ROI) were selected from each scanned image.
- Data Provenance: Not explicitly stated, but likely retrospective from existing archives given the nature of image comparison. The country of origin is not specified.
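As a rough illustration of the pixelwise comparison described above: with the two scans co-registered, a per-pixel color difference can be computed, and identical image reproduction yields ΔE = 0 everywhere. The sketch below uses a simple CIE76 ΔE in CIELAB via scikit-image; the submission does not disclose its color-difference formula or registration method, so both are assumptions here.

```python
import numpy as np
from skimage.color import rgb2lab  # assumed tooling; any sRGB-to-CIELAB converter works

def pixelwise_delta_e(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """Per-pixel CIE76 Delta-E between two co-registered sRGB images
    with values in [0, 1]. The submission's actual formula is not stated."""
    return np.linalg.norm(rgb2lab(img_a) - rgb2lab(img_b), axis=-1)

roi = np.random.rand(512, 512, 3)        # stand-in for one scanned ROI
delta_e = pixelwise_delta_e(roi, roi.copy())
assert delta_e.max() == 0.0              # "identical image reproduction": zero Delta-E
```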
- Precision Study:
- Sample Size: Not explicitly stated as a single number but implied by the "Number of Comparison Pairs" in the tables:
- Intra-system: 3600 comparison pairs (likely 3 scanners with multiple reads/slides contributing).
- Inter-system: 3610 comparison pairs.
- Inter-site: 1228 comparison pairs.
- Data Provenance: Not explicitly stated, but the inter-site component suggests data from multiple locations. Retrospective or prospective is not specified.
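To make the comparison-pair counts concrete: each pair is two reads of the same slide under two conditions (two scans on one scanner, two scanners, or two sites), and the overall agreement rate is the fraction of pairs whose diagnoses match. The toy sketch below is a simplification; the study's Average Positive Agreement statistic may be defined per diagnostic category rather than as a raw pairwise fraction.

```python
# Hypothetical comparison pairs: two pathologist reads of the same slide
# under two conditions (e.g., two scans of one slide on the same scanner).
pairs = [
    ("adenocarcinoma", "adenocarcinoma"),
    ("benign", "benign"),
    ("adenocarcinoma", "benign"),  # one disagreement
]
rate = sum(a == b for a, b in pairs) / len(pairs)
print(f"Overall agreement rate: {rate:.1%}")  # 66.7% for this toy set
```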
- Clinical Study:
- Sample Size: 952 cases consisting of multiple organ and tissue types.
- Data Provenance: Cases were divided across three sites. Whether the data collection was retrospective or prospective is not specified, but the reading-phase design (randomized order, washout period) suggests a prospective setup, while the "original sign-out diagnosis rendered at the institution" implies a retrospective component for establishing the initial ground truth.
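The clinical non-inferiority criterion then reduces to an arithmetic check: the upper limit of the 95% CI for the digital-minus-optical difference in major discordance rates must fall below the 4% margin. The sketch below uses a naive Wald interval for a difference of two independent proportions with hypothetical counts; the actual study was paired (the same cases were read in both modalities), so the real analysis would use a variance estimate that accounts for the pairing.

```python
import math

def wald_diff_ci(x1: int, n1: int, x2: int, n2: int, z: float = 1.96):
    """Naive 95% Wald CI for p1 - p2 assuming independent samples; illustrative
    only, since the paired design calls for a correlated-proportions method."""
    p1, p2 = x1 / n1, x2 / n2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    diff = p1 - p2
    return diff - z * se, diff + z * se

# Hypothetical counts of major discordances per modality (digital vs. optical).
lo, hi = wald_diff_ci(x1=70, n1=1900, x2=68, n2=1900)
print(f"upper CI limit: {hi:.2%};", "non-inferior" if hi < 0.04 else "fails the 4% margin")
```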
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Non-Clinical (Pixelwise Comparison): No experts were explicitly mentioned for ground truth establishment; the comparison was purely technical (pixel-to-pixel).
- Precision Study: The ground truth for agreement was based on the comparison of diagnoses by pathologists, but the initial "ground truth" for the slides themselves (e.g., what they actually represented) isn't detailed in terms of expert consensus.
- Clinical Study:
- Initial Ground Truth: The "original sign-out diagnosis rendered at the institution, using an optical (light) microscope" served as the primary reference diagnosis. The qualifications of the original sign-out pathologists are implied to be standard for their role but are not explicitly stated (e.g., there is no statement such as "pathologist with 10 years of experience").
- Adjudication: Three adjudicators reviewed the reader diagnoses against the sign-out diagnosis to determine concordance, minor discordance, or major discordance. Their qualifications are not specified beyond being "adjudicators."
4. Adjudication method (for the test set):
- Clinical Study: Three adjudicators reviewed the reader diagnoses (from both manual digital and manual optical modalities) against the original sign-out diagnosis. The method for resolving disagreements among the three adjudicators (e.g., 2+1 majority, consensus) is not specified.
5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI versus without AI assistance:
- Yes, an MRMC study was done, but it was a comparative non-inferiority study of digital versus optical reading by human pathologists, not a study of AI assistance. The study compared human pathologists reading slides using the digital system (PIPS 5.1) versus human pathologists reading slides using a traditional optical microscope.
- Effect Size of AI: This study does not involve AI assistance for human readers. The device (PIPS 5.1) is a whole slide imaging system, not an AI diagnostic tool. Therefore, there is no reported effect size regarding human reader improvement with AI assistance from this study.
6. If a standalone study (i.e., algorithm-only performance without a human in the loop) was done:
- No. The Philips IntelliSite Pathology Solution 5.1 is described as "an aid to the pathologist to review and interpret digital images." The clinical study clearly focuses on the performance of human pathologists using the system, demonstrating its non-inferiority to optical microscopy for human interpretation. There is no mention of a standalone algorithm performance.
7. The type of ground truth used:
- Non-Clinical (Pixelwise Comparison): The "ground truth" was the direct pixel data from the predicate device, against which the subject device's reproduced pixels were compared for identity.
- Precision Study: The ground truth for evaluating agreement rates was the diagnoses made by pathologists on different scans of the same slides. The ultimate truth of the disease state was implicitly tied to the original diagnostic process.
- Clinical Study: The primary ground truth was "the original sign-out diagnosis rendered at the institution, using an optical (light) microscope." This represents a form of expert consensus/established diagnosis within a clinical setting.
8. The sample size for the training set:
- Not Applicable / Not Provided. The provided document describes a 510(k) submission for a Whole Slide Imaging (WSI) system, which is a medical device for generating, viewing, and managing digital images of pathology slides. It acts as a digital microscope. It is not an AI algorithm or a diagnostic tool that requires a training set in the typical machine learning sense to learn a particular diagnostic task. Therefore, no training set data is relevant or provided here.
9. How the ground truth for the training set was established:
- Not Applicable / Not Provided. As explained above, this device does not utilize a training set in the AI/ML context.
§ 864.3700 Whole slide imaging system.
(a) Identification. The whole slide imaging system is an automated digital slide creation, viewing, and management system intended as an aid to the pathologist to review and interpret digital images of surgical pathology slides. The system generates digital images that would otherwise be appropriate for manual visualization by conventional light microscopy.
(b) Classification. Class II (special controls). The special controls for this device are:
(1) Premarket notification submissions must include the following information:
(i) The indications for use must specify the tissue specimen that is intended to be used with the whole slide imaging system and the components of the system.
(ii) A detailed description of the device and bench testing results at the component level, including for the following, as appropriate:
(A) Slide feeder;
(B) Light source;
(C) Imaging optics;
(D) Mechanical scanner movement;
(E) Digital imaging sensor;
(F) Image processing software;
(G) Image composition techniques;
(H) Image file formats;
(I) Image review manipulation software;
(J) Computer environment; and
(K) Display system.
(iii) Detailed bench testing and results at the system level, including for the following, as appropriate:
(A) Color reproducibility;
(B) Spatial resolution;
(C) Focusing test;
(D) Whole slide tissue coverage;
(E) Stitching error; and
(F) Turnaround time.
(iv) Detailed information demonstrating the performance characteristics of the device, including, as appropriate:
(A) Precision to evaluate intra-system and inter-system precision using a comprehensive set of clinical specimens with defined, clinically relevant histologic features from various organ systems and diseases. Multiple whole slide imaging systems, multiple sites, and multiple readers must be included.
(B) Reproducibility data to evaluate inter-site variability using a comprehensive set of clinical specimens with defined, clinically relevant histologic features from various organ systems and diseases. Multiple whole slide imaging systems, multiple sites, and multiple readers must be included.
(C) Data from a clinical study to demonstrate that viewing, reviewing, and diagnosing digital images of surgical pathology slides prepared from tissue slides using the whole slide imaging system is non-inferior to using an optical microscope. The study should evaluate the difference in major discordance rates between manual digital (MD) and manual optical (MO) modalities when compared to the reference (e.g., main sign-out diagnosis).
(D) A detailed human factor engineering process must be used to evaluate the whole slide imaging system user interface(s).
(2) Labeling compliant with 21 CFR 809.10(b) must include the following:
(i) The intended use statement must include the information described in paragraph (b)(1)(i) of this section, as applicable, and a statement that reads, “It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the validity of the interpretation of images obtained using this device.”
(ii) A description of the technical studies and the summary of results, including those that relate to paragraphs (b)(1)(ii) and (iii) of this section, as appropriate.
(iii) A description of the performance studies and the summary of results, including those that relate to paragraph (b)(1)(iv) of this section, as appropriate.
(iv) A limiting statement that specifies that pathologists should exercise professional judgment in each clinical situation and examine the glass slides by conventional microscopy if there is doubt about the ability to accurately render an interpretation using this device alone.