Search Results
Found 3 results
510(k) Data Aggregation
(79 days)
Philips IntelliSite Pathology Solution 5.1
The Philips IntelliSite Pathology Solution (PIPS) 5.1 is an automated digital slide creation, viewing, and management system. The PIPS 5.1 is intended for in vitro diagnostic use as an aid to the pathologist to review and interpret digital images of surgical pathology slides prepared from formalin-fixed paraffin embedded (FFPE) tissue. The PIPS 5.1 is not intended for use with frozen section, cytology, or non-FFPE hematopathology specimens.
The PIPS 5.1 comprises the Image Management System (IMS) 4.2, the Ultra Fast Scanner (UFS), Pathology Scanner SG20, Pathology Scanner SG60, Pathology Scanner SG300, and a Philips PP27QHD display, a Beacon C411W display, or a Barco MDCC-4430 display. The PIPS 5.1 is for creation and viewing of digital images of scanned glass slides that would otherwise be appropriate for manual visualization by conventional light microscopy. It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the validity of the interpretation of images obtained using PIPS 5.1.
The Philips IntelliSite Pathology Solution (PIPS) 5.1 is an automated digital slide creation, viewing, and management system. PIPS 5.1 consists of two subsystems and a display component:
- A scanner, in any combination of the following scanner models:
  - Ultra Fast Scanner (UFS)
  - Pathology Scanner SG, available in versions with varying slide capacity: Pathology Scanner SG20, Pathology Scanner SG60, Pathology Scanner SG300
- Image Management System (IMS) 4.2
- Clinical display: PP27QHD, C411W, or MDCC-4430
PIPS 5.1 is for creation and viewing of digital images of scanned glass slides that would otherwise be appropriate for manual visualization by conventional light microscopy. The PIPS does not include any automated image analysis applications that would constitute computer aided detection or diagnosis. The pathologists only view the scanned images and utilize the image review manipulation software in the PIPS 5.1.
This document is a 510(k) summary for the Philips IntelliSite Pathology Solution (PIPS) 5.1. It describes the device, its intended use, and compares it to a legally marketed predicate device (also PIPS 5.1, K242848). The key change in the subject device is the introduction of a new clinical display, Barco MDCC-4430.
Here's the breakdown of the acceptance criteria and study information:
1. Table of Acceptance Criteria and Reported Device Performance
The submission focuses on demonstrating substantial equivalence of the new display (Barco MDCC-4430) to the predicate's display (Philips PP27QHD). The acceptance criteria are largely derived from the FDA's "Technical Performance Assessment of Digital Pathology Whole Slide Imaging Devices" (TPA Guidance) and compliance with international consensus standards. The performance is reported as successful verification showing equivalence.
Acceptance Criteria (TPA Guidance Item) | Reported Device Performance (Subject Device with Barco MDCC-4430) | Conclusion on Substantial Equivalence |
---|---|---|
Display type | Color LCD | Substantially equivalent: Minor difference in physical display size is a minor change and does not raise any questions of safety or effectiveness. |
Manufacturer | Barco N.V. | Same as above. |
Technology | IPS technology with a-Si Thin Film Transistor (unchanged from predicate) | Substantially equivalent: Proposed and predicate device are considered substantially equivalent. |
Physical display size | 714 mm x 478 mm x 74 mm | Substantially equivalent: Minor change, does not raise safety/effectiveness questions. |
Active display area | 655 mm x 410 mm (30.4 inch diagonal) | Substantially equivalent: Slightly higher viewable area is a minor change. Verification testing confirms image quality is equivalent to the predicate device. |
Aspect ratio | 16:10 | Substantially equivalent: This change does not raise any new concerns on safety and effectiveness. Proposed and predicate device are considered substantially equivalent. |
Resolution | 2560 x 1600 pixels | Substantially equivalent: Slightly higher resolution and pixel size is a minor change. Verification testing confirms image quality is equivalent to the predicate device. Conclusion: This change does not raise any new concerns on safety and effectiveness. Proposed and predicate device are considered substantially equivalent. |
Pixel Pitch | 0.256 mm x 0.256 mm | Same as above. |
Color calibration tools (software) | QAWeb Enterprise version 2.14.0 installed on the workstation | Substantially equivalent: New display uses different calibration software, but calibration method (built-in front sensor), calibration targets, and frequency of quality control tests remain unchanged. Conclusion: This change does not raise new safety/effectiveness concerns. |
Color calibration tools (hardware) | Built-in front sensor (same as predicate) | Same as above. |
Additional Non-clinical Performance Tests (TPA Guidance) | Verification that technological characteristics of the display were not affected by the new panel, including: Spatial resolution, Pixel defects, Artifacts, Temporal response, Maximum and minimum luminance, Grayscale, Luminance uniformity, Stability of luminance and chromaticity, Bidirectional reflection distribution function, Gray tracking, Color scale response, Color gamut volume. | Conclusion: Verification for the new display showed that the proposed device has similar technological characteristics compared to the predicate device following the TPA guidance. In compliance with international/FDA-recognized consensus standards (IEC 60601-1, IEC 60601-1-6, IEC 62471, ISO 14971). Safe and effective, conforms to intended use. |
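As a quick arithmetic cross-check of the display geometry quoted above, the pixel pitch, diagonal, and aspect ratio follow directly from the active display area and resolution. This is an illustrative sketch only; the 25.4 mm-per-inch conversion and the rounding are the only assumptions beyond the table values.

```python
import math

# Display geometry quoted in the table above for the Barco MDCC-4430.
active_w_mm, active_h_mm = 655.0, 410.0   # active display area
res_w, res_h = 2560, 1600                 # resolution in pixels

# Pixel pitch = active dimension / pixel count (table states 0.256 mm x 0.256 mm).
pitch_w_mm = active_w_mm / res_w
pitch_h_mm = active_h_mm / res_h

# Diagonal in inches (table states 30.4 inch).
diagonal_in = math.hypot(active_w_mm, active_h_mm) / 25.4

print(f"pixel pitch:  {pitch_w_mm:.3f} mm x {pitch_h_mm:.3f} mm")
print(f"diagonal:     {diagonal_in:.1f} inch")
print(f"aspect ratio: {res_w / res_h:.2f} (16:10 = 1.60)")
```

Running this reproduces the 0.256 mm pitch, 30.4 inch diagonal, and 16:10 aspect ratio reported in the comparison table, confirming the figures are internally consistent.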
2. Sample Size Used for the Test Set and Data Provenance
The document does not explicitly state a "sample size" in terms of cases or images for the non-clinical performance tests. The tests were performed on "the display of the proposed device" to verify its technological characteristics. This implies testing on representative units of the Barco MDCC-4430 display.
The data provenance is not specified in terms of country of origin or retrospective/prospective, as the tests were bench testing (laboratory-based performance evaluation of the display hardware) rather than clinical studies with patient data.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and their Qualifications
This information is not applicable to this submission. The tests performed were technical performance evaluations of hardware (the display), not clinical evaluations requiring expert interpretation of medical images. Ground truth for these technical tests would be established by objective measurements against specified technical standards and parameters.
4. Adjudication Method for the Test Set
This information is not applicable to this submission. As the tests were technical performance evaluations of hardware, there would not be an adjudication process involving multiple human observers interpreting results in the same way there would be for a clinical trial.
5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study was done
No, a Multi Reader Multi Case (MRMC) comparative effectiveness study was not done.
The submission explicitly states: "The proposed device with the new display did not require clinical performance data since substantial equivalence to the currently marketed predicate device was demonstrated with the following attributes: Intended Use / Indications for Use, Technological characteristics, Non-clinical performance testing, and Safety and effectiveness."
Therefore, there is no effect size reported for human readers with and without AI assistance, as AI functionality for diagnostic interpretation is not the subject of this 510(k) (the PIPS 5.1 "does not include any automated image analysis applications that would constitute computer aided detection or diagnosis").
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done
This information is not applicable. The PIPS 5.1 is a digital slide creation, viewing, and management system, not an AI algorithm for diagnostic interpretation. The focus of this 510(k) is the display component. The device itself is designed for human-in-the-loop use by a pathologist.
7. The Type of Ground Truth Used
For the non-clinical performance data, the "ground truth" was based on:
- International and FDA-recognized consensus standards: This includes IEC 60601-1, IEC 60601-1-6, IEC 62471, and ISO 14971.
- TPA Guidance: The "Technical Performance Assessment of Digital Pathology Whole Slide Imaging Devices" guidance document, which specifies technical parameters for displays.
- Predicate device characteristics: Demonstrating that the new display's performance matches or is equivalent to the legally marketed predicate device's display across various technical parameters.
In essence, the ground truth was established by engineering specifications, technical performance targets, and regulatory standards for display devices.
8. The Sample Size for the Training Set
This information is not applicable. The PIPS 5.1, as described, is a system for digital pathology, not an AI algorithm that requires a training set of data. The 510(k) specifically mentions: "The PIPS does not include any automated image analysis applications that would constitute computer aided detection or diagnosis." Therefore, there is no AI training set.
9. How the Ground Truth for the Training Set Was Established
This information is not applicable, as there is no AI training set.
(81 days)
Philips IntelliSite Pathology Solution 5.1
The Philips IntelliSite Pathology Solution (PIPS) 5.1 is an automated digital slide creation, viewing, and management system. The PIPS 5.1 is intended for in vitro diagnostic use as an aid to the pathologist to review and interpret digital images of surgical pathology slides prepared from formalin-fixed paraffin embedded (FFPE) tissue. The PIPS 5.1 is not intended for use with frozen section, cytology, or non-FFPE hematopathology specimens.
The PIPS 5.1 comprises the Image Management System (IMS) 4.2, the Ultra Fast Scanner (UFS), Pathology Scanner SG20, Pathology Scanner SG60, Pathology Scanner SG300, and a Philips PP27QHD display or a Beacon C411W display. The PIPS 5.1 is for creation and viewing of digital images of scanned glass slides that would otherwise be appropriate for manual visualization by conventional light microscopy. It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the validity of the interpretation of images obtained using PIPS 5.1.
The Philips IntelliSite Pathology Solution (PIPS) 5.1 is an automated digital slide creation, viewing, and management system. PIPS 5.1 consists of two subsystems and a display component:
- A scanner, in any combination of the following scanner models:
  - Ultra Fast Scanner (UFS)
  - Pathology Scanner SG, available in versions with varying slide capacity: Pathology Scanner SG20, Pathology Scanner SG60, Pathology Scanner SG300
- Image Management System (IMS) 4.2
- Clinical display: PP27QHD or C411W
PIPS is for creation and viewing of digital images of scanned glass slides that would otherwise be appropriate for manual visualization by conventional light microscopy. The PIPS does not include any automated image analysis applications that would constitute computer aided detection or diagnosis. The pathologists only view the scanned images and utilize the image review manipulation software in the PIPS.
This document focuses on the Philips IntelliSite Pathology Solution 5.1 (PIPS 5.1) and its substantial equivalence to a predicate device, primarily due to the introduction of a new clinical display. This is a 510(k) submission, meaning it aims to demonstrate that the new device is as safe and effective as a legally marketed predicate device, rather than proving de novo effectiveness. Therefore, the study described is a non-clinical performance study to demonstrate equivalence of the new display, not a clinical effectiveness study.
Based on the provided text, a detailed breakdown of acceptance criteria and the proving study is as follows:
1. Table of Acceptance Criteria and Reported Device Performance
The document states that the evaluation was performed following the FDA's Guidance for Industry and FDA Staff entitled, "Technical Performance Assessment of Digital Pathology Whole Slide Imaging Devices" (TPA Guidance), dated April 20, 2016. The acceptance criteria are essentially defined by compliance with the tests outlined in this guidance and relevant international standards.
Acceptance Criteria (Measured Performance Aspect) | Performance Standard/Acceptance Limit (Implicitly based on TPA Guidance & Predicate Equivalence) | Reported Device Performance (Summary from "Conclusion") |
---|---|---|
TPA Guidance Items related to Display: | ||
Spatial resolution | As per predicate device and TPA Guidance | Verified to be similar to predicate device |
Pixel defects | As per predicate device and TPA Guidance | Verified to be similar to predicate device |
Artifacts | As per predicate device and TPA Guidance | Verified to be similar to predicate device |
Temporal response | As per predicate device and TPA Guidance | Verified to be similar to predicate device |
Maximum and minimum luminance | As per predicate device and TPA Guidance | Verified to be similar to predicate device |
Grayscale | As per predicate device and TPA Guidance | Verified to be similar to predicate device |
Luminance uniformity | As per predicate device and TPA Guidance | Verified to be similar to predicate device |
Stability of luminance and chromaticity | As per predicate device and TPA Guidance | Verified to be similar to predicate device |
Bidirectional reflection distribution function | As per predicate device and TPA Guidance | Verified to be similar to predicate device |
Gray tracking | As per predicate device and TPA Guidance | Verified to be similar to predicate device |
Color scale response | As per predicate device and TPA Guidance | Verified to be similar to predicate device |
Color gamut volume | As per predicate device and TPA Guidance | Verified to be similar to predicate device |
International & FDA-recognized Consensus Standards: | Compliance Required | Compliance Achieved |
IEC 60601-1 Ed. 3.2 (Medical electrical equipment - General requirements for basic safety and essential performance) | Compliance | Compliant |
IEC 60601-1-6 (4th Ed) (Usability) | Compliance | Compliant |
IEC 62471:2006 (Photobiological safety) | Compliance | Compliant |
ISO 14971:2019 (Risk management) | Compliance | Compliant |
Other: | Compliance Required | Compliance Achieved |
Existing functional, safety, and system integration requirements related to the display | Verified to function as intended without adverse impact from new display | Verified to be safe and effective |
Reported Device Performance Summary: The non-clinical performance testing of the new display (Beacon C411W) showed that the proposed device has similar technological characteristics compared to the predicate device (using the PP27QHD display) following the TPA Guidance. It is also in compliance with the aforementioned international and FDA-recognized consensus standards. The verification and validation of existing safety, user, and system integration requirements showed that the proposed PIPS 5.1 with the new clinical display is safe and effective.
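To illustrate one of the TPA parameters listed above, luminance uniformity is commonly assessed by measuring luminance at several points across the panel and computing the maximum relative deviation. The sketch below assumes hypothetical photometer readings, a commonly used deviation formula, and an illustrative 30% limit; the measurement pattern and acceptance limits actually applied by Philips are not disclosed in this summary.

```python
# Minimal sketch of a luminance-uniformity check. The five readings (cd/m^2),
# the deviation formula 200*(Lmax - Lmin)/(Lmax + Lmin), and the 30% limit are
# illustrative assumptions, not values taken from the 510(k) summary.
readings_cd_m2 = [182.0, 175.5, 178.2, 173.9, 180.1]  # hypothetical center + corner points

l_max, l_min = max(readings_cd_m2), min(readings_cd_m2)
max_deviation_pct = 200.0 * (l_max - l_min) / (l_max + l_min)

ASSUMED_LIMIT_PCT = 30.0
print(f"max luminance deviation: {max_deviation_pct:.1f}%")
print("PASS" if max_deviation_pct <= ASSUMED_LIMIT_PCT else "FAIL")
```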
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: The document does not specify a "sample size" in terms of patient cases or images for testing the display. The testing performed was bench testing ("Verification for the new display," "non-clinical performance data"). This implies that the tests were conducted on the display unit itself, measuring its physical and optical properties, and its integration with the system components, rather than on a dataset of patient images reviewed by observers.
- Data Provenance: Not applicable in the context of a display characteristic validation study. The study focused on the performance of the hardware (the new display).
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
Not applicable. This was a technical, non-clinical validation of a display unit's characteristics against engineering specifications and regulatory guidance, not a study requiring expert clinical read-outs or ground truth establishment from patient data.
4. Adjudication Method for the Test Set
Not applicable. This was a technical, non-clinical validation.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done
- No, an MRMC comparative effectiveness study was NOT done. The document explicitly states: "The proposed device with the new display did not require clinical performance data since substantial equivalence to the currently marketed predicate device was demonstrated with the following attributes: Intended Use / Indications for Use, Technological characteristics, Non-clinical performance testing, and Safety and effectiveness."
- The purpose of this submission was to demonstrate substantial equivalence for a minor hardware change (new display), not to show an improvement in human reader performance with AI assistance. The PIPS system itself does not include "any automated image analysis applications that would constitute computer aided detection or diagnosis." It is a whole slide imaging system for viewing and managing digital slides.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done
- Not applicable. The PIPS 5.1 is a system for creating, viewing, and managing digital slides for human pathologist review. It is not an AI algorithm that produces a diagnostic output on its own. The "standalone" performance here refers to the display's technical specifications.
7. The Type of Ground Truth Used
- For the non-clinical performance data, the "ground truth" was established by engineering specifications, international consensus standards (e.g., IEC, ISO), and the FDA's TPA Guidance. The aim was to ensure the new display performed equivalently to the predicate's approved display and met relevant technical requirements.
8. The Sample Size for the Training Set
Not applicable. This was a non-clinical validation of hardware (a display), not a machine learning model requiring a training set.
9. How the Ground Truth for the Training Set Was Established
Not applicable. (See #8)
(270 days)
Philips IntelliSite Pathology Solution 5.1
The Philips IntelliSite Pathology Solution (PIPS) 5.1 is an automated digital slide creation, viewing, and management system. The PIPS 5.1 is intended for in vitro diagnostic use as an aid to the pathologist to review and interpret digital images of surgical pathology slides prepared from formalin-fixed paraffin embedded (FFPE) tissue. The PIPS 5.1 is not intended for use with frozen section, cytology, or non-FFPE hematopathology specimens.
The PIPS 5.1 comprises the Image Management System (IMS) 4.2, Ultra Fast Scanner (UFS), Pathology Scanner SG20, Pathology Scanner SG60, Pathology Scanner SG300 and PP27QHD Display. The PIPS 5.1 is for creation and viewing of digital images of scanned glass slides that would otherwise be appropriate for manual visualization by conventional light microscopy. It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the validity of the interpretation of images obtained using PIPS 5.1.
The Philips IntelliSite Pathology Solution (PIPS) 5.1 is an automated digital slide creation, viewing, and management system. PIPS 5.1 consists of two subsystems and a display component:
- Subsystems:
  a. A scanner, in any combination of the following scanner models:
     i. Ultra Fast Scanner (UFS)
     ii. Pathology Scanner SG, available in versions with varying slide capacity: Pathology Scanner SG20, Pathology Scanner SG60, Pathology Scanner SG300
  b. Image Management System (IMS) 4.2
- Display: PP27QHD
Here's a breakdown of the acceptance criteria and study details for the Philips IntelliSite Pathology Solution 5.1, based on the provided FDA 510(k) summary:
Acceptance Criteria and Reported Device Performance
Acceptance Criteria Category | Acceptance Criteria | Reported Device Performance (Summary) |
---|---|---|
Technical Performance (Non-Clinical) | All technical studies (e.g., Light Source, Imaging optics, Mechanical scanner Movement, Digital Imaging sensor, Image Processing Software, Image composition, Image Review Manipulation Software, Color Reproducibility, Spatial Resolution, Focusing Test, Whole Slide Tissue Coverage, Stitching Error, Turnaround Time) must pass their predefined acceptance criteria. | All technical studies passed their acceptance criteria. Pixelwise comparison showed identical image reproduction with zero ΔE between subject and predicate device. |
Electrical Safety | Compliance with IEC61010-1. | Passed. |
Electromagnetic Compatibility (EMC) | Compliance with IEC 61326-2-6 (for laboratory use of in vitro diagnostic equipment) and IEC 60601-1-2. | Passed for both emissions and immunity. |
Human Factors | User tasks and use scenarios successfully completed by all user groups. | Successfully completed for all user groups. |
Precision Study (Intra-system) | Lower limit of the 95% Confidence Interval (CI) of the Average Positive Agreement exceeding 85%. | Overall Agreement Rate: 88.3% (95% CI: 86.7%; 89.9%). All individual scanner CIs also exceeded 85%. |
Precision Study (Inter-system) | Lower limit of the 95% CI of the Average Positive Agreement exceeding 85%. | Overall Agreement Rate: 95.4% (95% CI: 94.4%; 96.5%). All individual scanner comparison CIs also exceeded 85%. |
Precision Study (Inter-site) | Lower limit of the 95% CI of the Average Positive Agreement exceeding 85%. | Overall Agreement Rate: 90.7% (95% CI: 88.4%; 92.9%). All individual site comparison CIs also exceeded 85%. |
Clinical Study (Non-Inferiority) | The upper bound of the 95% two-sided confidence interval for the manual digital – manual optical difference in major discordance rate is less than 4%. | Difference in major discordance rate (digital-optical) was 0.1% with a 95% CI of (-1.01%; 1.18%). The upper limit (1.18%) was less than the non-inferiority margin of 4%. |
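The non-inferiority criterion in the last row can be expressed as a small numerical check: form a two-sided 95% confidence interval for the digital-minus-optical difference in major discordance rates and require its upper bound to stay below the 4% margin. The sketch below uses a simple Wald interval on hypothetical counts; the summary reports only the resulting 0.1% difference and (-1.01%, 1.18%) interval, and the submitted analysis for paired reads would likely use a variance estimator that accounts for within-case correlation.

```python
import math

def noninferiority_check(x_digital, n_digital, x_optical, n_optical,
                         margin=0.04, z=1.96):
    """Wald 95% CI for the difference in major discordance rates (a sketch;
    the counts and the independence assumption are illustrative only)."""
    p1, p2 = x_digital / n_digital, x_optical / n_optical
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n_digital + p2 * (1 - p2) / n_optical)
    lower, upper = diff - z * se, diff + z * se
    return diff, lower, upper, upper < margin

# Hypothetical read counts chosen only to show the mechanics of the check.
diff, lower, upper, non_inferior = noninferiority_check(
    x_digital=38, n_digital=3808, x_optical=34, n_optical=3808)
print(f"difference = {diff:.2%}, 95% CI = ({lower:.2%}, {upper:.2%}), "
      f"non-inferior at 4% margin: {non_inferior}")
```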
Study Details
2. Sample size used for the test set and the data provenance:

- Non-Clinical (Pixelwise Comparison):
  - Sample Size: 42 FFPE tissue glass slides from different anatomic locations. Three regions of interest (ROI) were selected from each scanned image.
  - Data Provenance: Not explicitly stated, but likely retrospective from existing archives given the nature of image comparison. The country of origin is not specified.
- Precision Study:
  - Sample Size: Not explicitly stated as a single number, but implied by the "Number of Comparison Pairs" in the tables (the agreement-rate confidence-interval check is sketched just after this list):
    - Intra-system: 3600 comparison pairs (likely 3 scanners with multiple reads/slides contributing).
    - Inter-system: 3610 comparison pairs.
    - Inter-site: 1228 comparison pairs.
  - Data Provenance: Not explicitly stated, but the inter-site component suggests data from multiple locations. Retrospective or prospective is not specified.
- Clinical Study:
  - Sample Size: 952 cases consisting of multiple organ and tissue types.
  - Data Provenance: Cases were divided over three sites. Retrospective or prospective is not specified, but the design (randomized order, washout period) suggests a prospective setup for the reading phase. The "original sign-out diagnosis rendered at the institution" implies a retrospective component for establishing the initial ground truth.
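The precision acceptance criterion (lower limit of the 95% CI of the Average Positive Agreement above 85%) can be illustrated with a small sketch. Assuming agreement is summarized as the number of concordant comparison pairs out of the total, a Wilson score interval gives a lower bound to compare against the 85% threshold. The summary does not describe the actual statistical method used in the submission, and the comparison pairs are unlikely to be fully independent, so this is an assumption-laden illustration rather than a reproduction of the reported intervals.

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Two-sided 95% Wilson score interval for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Illustrative: intra-system precision with 3600 comparison pairs and an
# 88.3% agreement rate, as reported above (pairs treated as independent
# for this sketch, which the submitted analysis may not have assumed).
agreeing = round(0.883 * 3600)
lower, upper = wilson_interval(agreeing, 3600)
print(f"APA = {agreeing / 3600:.1%}, 95% CI = ({lower:.1%}, {upper:.1%})")
print("meets criterion (lower limit > 85%):", lower > 0.85)
```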
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Non-Clinical (Pixelwise Comparison): No experts were explicitly mentioned for ground truth establishment; the comparison was purely technical (pixel-to-pixel).
- Precision Study: The ground truth for agreement was based on the comparison of diagnoses by pathologists, but the initial "ground truth" for the slides themselves (e.g., what they actually represented) isn't detailed in terms of expert consensus.
- Clinical Study:
- Initial Ground Truth: The "original sign-out diagnosis rendered at the institution, using an optical (light) microscope" served as the primary reference diagnosis. The qualifications of the original sign-out pathologists are implied to be standard for their role but are not explicitly stated (no detail such as "board-certified pathologist with 10 years of experience" is provided).
- Adjudication: Three adjudicators reviewed the reader diagnoses against the sign-out diagnosis to determine concordance, minor discordance, or major discordance. Their qualifications are not specified beyond being "adjudicators."
4. Adjudication method (for the test set):
- Clinical Study: Three adjudicators reviewed the reader diagnoses (from both manual digital and manual optical modalities) against the original sign-out diagnosis. The method for resolving disagreements among the three adjudicators (e.g., 2+1 majority, consensus) is not specified.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance:
- Yes, an MRMC study was done, but it was a comparative non-inferiority study of digital versus optical reading by human pathologists, not a study of AI assistance. The study compared human pathologists reading slides using the digital system (PIPS 5.1) versus human pathologists reading slides using a traditional optical microscope.
- Effect Size of AI: This study does not involve AI assistance for human readers. The device (PIPS 5.1) is a whole slide imaging system, not an AI diagnostic tool. Therefore, there is no reported effect size regarding human reader improvement with AI assistance from this study.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:
- No. The Philips IntelliSite Pathology Solution 5.1 is described as "an aid to the pathologist to review and interpret digital images." The clinical study clearly focuses on the performance of human pathologists using the system, demonstrating its non-inferiority to optical microscopy for human interpretation. There is no mention of a standalone algorithm performance.
7. The type of ground truth used:
- Non-Clinical (Pixelwise Comparison): The "ground truth" was the direct pixel data from the predicate device, against which the subject device's reproduced pixels were compared for identity (a minimal ΔE sketch follows this list).
- Precision Study: The ground truth for evaluating agreement rates was the diagnoses made by pathologists on different scans of the same slides. The ultimate truth of the disease state was implicitly tied to the original diagnostic process.
- Clinical Study: The primary ground truth was "the original sign-out diagnosis rendered at the institution, using an optical (light) microscope." This represents a form of expert consensus/established diagnosis within a clinical setting.
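The pixelwise comparison referenced above (zero ΔE between subject and predicate images) can be sketched as follows: convert matched regions of interest to CIELAB and compute a per-pixel color difference. The sketch uses the simple CIE76 ΔE (Euclidean distance in Lab) on arrays assumed to already be in Lab coordinates; the summary does not state which ΔE formula (CIE76, CIEDE2000, etc.) or which color conversion pipeline Philips used, so both are assumptions here.

```python
import numpy as np

def delta_e_cie76(lab_a: np.ndarray, lab_b: np.ndarray) -> np.ndarray:
    """Per-pixel CIE76 color difference between two Lab images of shape (H, W, 3)."""
    return np.sqrt(np.sum((lab_a.astype(float) - lab_b.astype(float)) ** 2, axis=-1))

# Illustrative only: identical ROIs from the subject and predicate scans
# (already converted to CIELAB) should give a delta-E of zero everywhere.
roi_predicate = np.random.default_rng(0).uniform(0, 100, size=(64, 64, 3))
roi_subject = roi_predicate.copy()

de = delta_e_cie76(roi_subject, roi_predicate)
print(f"max delta-E: {de.max():.4f}, mean delta-E: {de.mean():.4f}")
```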
8. The sample size for the training set:
- Not Applicable / Not Provided. The provided document describes a 510(k) submission for a Whole Slide Imaging (WSI) system, which is a medical device for generating, viewing, and managing digital images of pathology slides. It acts as a digital microscope. It is not an AI algorithm or a diagnostic tool that requires a training set in the typical machine learning sense to learn a particular diagnostic task. Therefore, no training set data is relevant or provided here.
9. How the ground truth for the training set was established:
- Not Applicable / Not Provided. As explained above, this device does not utilize a training set in the AI/ML context.