Search Results
Found 19 results
510(k) Data Aggregation
(79 days)
The Philips IntelliSite Pathology Solution (PIPS) 5.1 is an automated digital slide creation, viewing, and management system. The PIPS 5.1 is intended for in vitro diagnostic use as an aid to the pathologist to review and interpret digital images of surgical pathology slides prepared from formalin-fixed paraffin embedded (FFPE) tissue. The PIPS 5.1 is not intended for use with frozen section, cytology, or non-FFPE hematopathology specimens.
The PIPS 5.1 comprises the Image Management System (IMS) 4.2, the Ultra Fast Scanner (UFS), the Pathology Scanner SG20, Pathology Scanner SG60, and Pathology Scanner SG300, and a Philips PP27QHD display, a Beacon C411W display, or a Barco MDCC-4430 display. The PIPS 5.1 is for creation and viewing of digital images of scanned glass slides that would otherwise be appropriate for manual visualization by conventional light microscopy. It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the validity of the interpretation of images obtained using PIPS 5.1.
The Philips IntelliSite Pathology Solution (PIPS) 5.1 is an automated digital slide creation, viewing, and management system. PIPS 5.1 consists of two subsystems and a display component:
- A scanner, in any combination of the following scanner models:
  - Ultra Fast Scanner (UFS)
  - Pathology Scanner SG, with different versions for varying slide capacity: Pathology Scanner SG20, Pathology Scanner SG60, Pathology Scanner SG300
- Image Management System (IMS) 4.2
- Clinical display: PP27QHD, C411W, or MDCC-4430
PIPS 5.1 is for creation and viewing of digital images of scanned glass slides that would otherwise be appropriate for manual visualization by conventional light microscopy. The PIPS does not include any automated image analysis applications that would constitute computer aided detection or diagnosis. The pathologists only view the scanned images and utilize the image review manipulation software in the PIPS 5.1.
This document is a 510(k) summary for the Philips IntelliSite Pathology Solution (PIPS) 5.1. It describes the device, its intended use, and compares it to a legally marketed predicate device (also PIPS 5.1, K242848). The key change in the subject device is the introduction of a new clinical display, Barco MDCC-4430.
Here's the breakdown of the acceptance criteria and study information:
1. Table of Acceptance Criteria and Reported Device Performance
The submission focuses on demonstrating substantial equivalence of the new display (Barco MDCC-4430) to the predicate's display (Philips PP27QHD). The acceptance criteria are largely derived from the FDA's "Technical Performance Assessment of Digital Pathology Whole Slide Imaging Devices" (TPA Guidance) and compliance with international consensus standards. The performance is reported as successful verification showing equivalence.
| Acceptance Criteria (TPA Guidance Item) | Reported Device Performance (Subject Device with Barco MDCC-4430) | Conclusion on Substantial Equivalence |
|---|---|---|
| Display type | Color LCD | Substantially equivalent: Minor difference in physical display size is a minor change and does not raise any questions of safety or effectiveness. |
| Manufacturer | Barco N.V. | Same as above. |
| Technology | IPS technology with a-Si Thin Film Transistor (unchanged from predicate) | Substantially equivalent: Proposed and predicate device are considered substantially equivalent. |
| Physical display size | 714 mm x 478 mm x 74 mm | Substantially equivalent: Minor change, does not raise safety/effectiveness questions. |
| Active display area | 655 mm x 410 mm (30.4 inch diagonal) | Substantially equivalent: Slightly higher viewable area is a minor change. Verification testing confirms image quality is equivalent to the predicate device. |
| Aspect ratio | 16:10 | Substantially equivalent: This change does not raise any new concerns on safety and effectiveness. Proposed and predicate device are considered substantially equivalent. |
| Resolution | 2560 x 1600 pixels | Substantially equivalent: Slightly higher resolution and pixel size is a minor change. Verification testing confirms image quality is equivalent to the predicate device. Conclusion: This change does not raise any new concerns on safety and effectiveness. Proposed and predicate device are considered substantially equivalent. |
| Pixel Pitch | 0.256 mm x 0.256 mm | Same as above. |
| Color calibration tools (software) | QAWeb Enterprise version 2.14.0 installed on the workstation | Substantially equivalent: New display uses different calibration software, but calibration method (built-in front sensor), calibration targets, and frequency of quality control tests remain unchanged. Conclusion: This change does not raise new safety/effectiveness concerns. |
| Color calibration tools (hardware) | Built-in front sensor (same as predicate) | Same as above. |
| Additional Non-clinical Performance Tests (TPA Guidance) | Verification that technological characteristics of the display were not affected by the new panel, including: Spatial resolution, Pixel defects, Artifacts, Temporal response, Maximum and minimum luminance, Grayscale, Luminance uniformity, Stability of luminance and chromaticity, Bidirectional reflection distribution function, Gray tracking, Color scale response, Color gamut volume. | Conclusion: Verification for the new display showed that the proposed device has similar technological characteristics compared to the predicate device following the TPA guidance. In compliance with international/FDA-recognized consensus standards (IEC 60601-1, IEC 60601-1-6, IEC 62471, ISO 14971). Safe and effective, conforms to intended use. |
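As a quick consistency check on the display specifications above, the pixel pitch and the 30.4-inch diagonal follow directly from the listed active display area and resolution (simple arithmetic, not additional data from the submission):

$$
\frac{655\ \text{mm}}{2560\ \text{px}} \approx 0.256\ \text{mm/px}, \qquad \frac{410\ \text{mm}}{1600\ \text{px}} \approx 0.256\ \text{mm/px}
$$

$$
\sqrt{655^2 + 410^2}\ \text{mm} \approx 772.7\ \text{mm} \approx 30.4\ \text{in}
$$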
2. Sample Size Used for the Test Set and Data Provenance
The document does not explicitly state a "sample size" in terms of cases or images for the non-clinical performance tests. The tests were performed on "the display of the proposed device" to verify its technological characteristics. This implies testing on representative units of the Barco MDCC-4430 display.
The data provenance is not specified in terms of country of origin or retrospective/prospective, as the tests were bench testing (laboratory-based performance evaluation of the display hardware) rather than clinical studies with patient data.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and their Qualifications
This information is not applicable to this submission. The tests performed were technical performance evaluations of hardware (the display), not clinical evaluations requiring expert interpretation of medical images. Ground truth for these technical tests would be established by objective measurements against specified technical standards and parameters.
4. Adjudication Method for the Test Set
This information is not applicable to this submission. As the tests were technical performance evaluations of hardware, there would not be an adjudication process involving multiple human observers interpreting results in the same way there would be for a clinical trial.
5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study was done
No, a Multi Reader Multi Case (MRMC) comparative effectiveness study was not done.
The submission explicitly states: "The proposed device with the new display did not require clinical performance data since substantial equivalence to the currently marketed predicate device was demonstrated with the following attributes: Intended Use / Indications for Use, Technological characteristics, Non-clinical performance testing, and Safety and effectiveness."
Therefore, there is no effect size reported for human readers with and without AI assistance, as AI functionality for diagnostic interpretation is not the subject of this 510(k) (the PIPS 5.1 "does not include any automated image analysis applications that would constitute computer aided detection or diagnosis").
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done
This information is not applicable. The PIPS 5.1 is a digital slide creation, viewing, and management system, not an AI algorithm for diagnostic interpretation. The focus of this 510(k) is the display component. The device itself is designed for human-in-the-loop use by a pathologist.
7. The Type of Ground Truth Used
For the non-clinical performance data, the "ground truth" was based on:
- International and FDA-recognized consensus standards: This includes IEC 60601-1, IEC 60601-1-6, IEC 62471, and ISO 14971.
- TPA Guidance: The "Technical Performance Assessment of Digital Pathology Whole Slide Imaging Devices" guidance document, which specifies technical parameters for displays.
- Predicate device characteristics: Demonstrating that the new display's performance matches or is equivalent to the legally marketed predicate device's display across various technical parameters.
In essence, the ground truth was established by engineering specifications, technical performance targets, and regulatory standards for display devices.
8. The Sample Size for the Training Set
This information is not applicable. The PIPS 5.1, as described, is a system for digital pathology, not an AI algorithm that requires a training set of data. The 510(k) specifically mentions: "The PIPS does not include any automated image analysis applications that would constitute computer aided detection or diagnosis." Therefore, there is no AI training set.
9. How the Ground Truth for the Training Set Was Established
This information is not applicable, as there is no AI training set.
(259 days)
The Epredia E1000 Dx Digital Pathology Solution is an automated digital slide creation, viewing, and management system. The Epredia E1000 Dx Digital Pathology Solution is intended for in vitro diagnostic use as an aid to the pathologist to review and interpret digital images of surgical pathology slides prepared from formalin-fixed paraffin embedded (FFPE) tissue. The Epredia E1000 Dx Digital Pathology Solution is not intended for use with frozen section, cytology, or non-FFPE hematopathology specimens.
The Epredia E1000 Dx Digital Pathology Solution consists of a Scanner (E1000 Dx Digital Pathology Scanner), which generates images in the MRXS image file format, E1000 Dx Scanner Software, an Image Management System (E1000 Dx IMS), E1000 Dx Viewer Software, and a Display (Barco MDPC-8127). The Epredia E1000 Dx Digital Pathology Solution is for creation and viewing of digital images of scanned glass slides that would otherwise be appropriate for manual visualization by conventional light microscopy. It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the validity of the interpretation of images obtained using the Epredia E1000 Dx Digital Pathology Solution.
The E1000 Dx Digital Pathology Solution is a high-capacity, automated whole slide imaging system for the creation, viewing, and management of digital images of surgical pathology slides. It allows whole slide digital images to be viewed on a display monitor that would otherwise be appropriate for manual visualization by conventional brightfield microscopy.
The E1000 Dx Digital Pathology Solution consists of the following three components:

Scanner component:
- E1000 Dx Digital Pathology Scanner with E1000 firmware version 2.0.3
- E1000 Dx Scanner Software version 2.0.3

Viewer component:
- E1000 Dx Image Management System (IMS) Server version 2.3.2
- E1000 Dx Viewer Software version 2.7.2

Display component:
- Barco MDPC-8127
The E1000 Dx Digital Pathology Solution automatically creates digital whole slide images by scanning formalin-fixed, paraffin-embedded (FFPE) tissue slides, with a capacity to process up to 1,000 slides. The E1000 Dx Scanner Software (EDSS), which runs on the scanner workstation, controls the operation of the E1000 Dx Digital Pathology Scanner. The scanner workstation, provided with the E1000 Dx Digital Pathology Solution, includes a PC, monitor, keyboard, and mouse. The solution uses a proprietary MRXS format to store and transmit images between the E1000 Dx Digital Pathology Scanner and the E1000 Dx Image Management System (IMS).
The E1000 Dx IMS is a software component intended for use with the Barco MDPC-8127 display monitor and runs on a separate, customer-provided pathologist viewing workstation PC. The E1000 Dx Viewer, an application managed through the E1000 Dx IMS, allows the obtained digital whole slide images to be annotated, stored, accessed, and examined on Barco MDPC-8127 video display monitor. This functionality aids pathologists in interpreting digital images as an alternative to conventional brightfield microscopy.
Here's a breakdown of the acceptance criteria and study proving the device meets them, based on the provided text:
Important Note: The provided text describes a Whole Slide Imaging System for digital pathology, which aids pathologists in reviewing and interpreting digital images of traditional glass slides. It does not describe an AI device for automated diagnosis or detection. Therefore, concepts like "effect size of how much human readers improve with AI vs without AI assistance" or "standalone (algorithm only without human-in-the-loop performance)" are not directly applicable to this device's proven capabilities as per the provided information.
Acceptance Criteria and Reported Device Performance
The core acceptance criterion for this device appears to be non-inferiority to optical microscopy in terms of major discordance rates when comparing digital review to a main sign-out diagnosis. Additionally, precision (intra-system, inter-system repeatability, and inter-site reproducibility) is a key performance metric.
Table 1: Overall Major Discordance Rate for MD and MO
| Metric | Acceptance Criteria (Implied Non-inferiority) | Reported Device Performance (Epredia E1000 Dx) |
|---|---|---|
| MD Major Discordance Rate | N/A (Compared to MO's performance) | 2.51% (95% CI: 2.26%; 2.79%) |
| MO Major Discordance Rate | N/A (Baseline for comparison) | 2.59% (95% CI: 2.29%; 2.82%) |
| Difference MD - MO | Within an acceptable non-inferiority margin | -0.15% (95% CI: -0.40%, 0.41%) |
| Study Met Acceptance Criteria | Yes, as defined in the protocol | Met |
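For readers who want to see how such a comparison is computed, the sketch below derives a Wald-style 95% CI for the difference of two proportions. This is a deliberate simplification: the study compared paired reads of the same cases, so the submission's actual analysis presumably accounts for that correlation, and the counts used here are hypothetical round numbers chosen only to approximate the reported rates.

```python
import math

def diff_proportions_ci(x1, n1, x2, n2, z=1.96):
    """Wald 95% CI for the difference p1 - p2 of two independent proportions.

    Simplified illustration only: the E1000 Dx study compared paired reads
    of the same cases, so its actual CI method likely accounts for that
    correlation. Counts are hypothetical, not taken from the submission.
    """
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, (diff - z * se, diff + z * se)

# Hypothetical counts approximating the reported rates (~2.51% of 3897 MD
# reviews, ~2.59% of 3881 MO reviews).
diff, (lo, hi) = diff_proportions_ci(98, 3897, 100, 3881)
print(f"MD - MO = {diff:+.4%}, 95% CI ({lo:+.4%}, {hi:+.4%})")
```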
Precision Study Acceptance Criteria and Reported Performance
| Metric | Acceptance Criteria (Lower limit of 95% CI) | Reported Device Performance (Epredia E1000 Dx) |
|---|---|---|
| Intra-System Repeatability (Average Positive Agreement) | > 85% | 96.9% (Lower limit of 96.1%) |
| Inter-System Repeatability (Average Positive Agreement) | > 85% | 95.1% (Lower limit of 94.1%) |
| Inter-Site Reproducibility (Average Positive Agreement) | > 85% | 95.4% (Lower limit of 93.6%) |
| All Precision Studies Met Acceptance Criteria | Yes | Met |
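The precision endpoints are average positive agreement (APA) rates over comparison pairs. The submission does not spell out its formula; the sketch below assumes the common definition APA = 2a / (2a + b + c), where a is the count of concordant-positive pairs and b and c are the two discordant counts, with toy data for illustration.

```python
def average_positive_agreement(pairs):
    """Average positive agreement (APA) over paired feature reads.

    `pairs` is an iterable of (read_1, read_2) booleans indicating whether
    each read of the pair recorded the feature as present. Assumes the
    common definition APA = 2a / (2a + b + c); the submission does not
    state its exact bookkeeping.
    """
    a = sum(1 for r1, r2 in pairs if r1 and r2)           # both positive
    b = sum(1 for r1, r2 in pairs if r1 and not r2)       # discordant
    c = sum(1 for r1, r2 in pairs if not r1 and r2)       # discordant
    return 2 * a / (2 * a + b + c)

# Toy data: 90 concordant-positive pairs and 6 discordant pairs.
pairs = [(True, True)] * 90 + [(True, False)] * 3 + [(False, True)] * 3
print(f"APA = {average_positive_agreement(pairs):.1%}")  # 96.8%
```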
Study Details
2. Sample Size and Data Provenance:
- Clinical Accuracy Study (Non-inferiority):
  - Test Set Sample Size: 3897 digital image reviews (MD) and 3881 optical microscope reviews (MO). The dataset comprises surgical pathology slides prepared from formalin-fixed paraffin embedded (FFPE) tissue.
  - Data Provenance: Not explicitly stated, but clinical studies for FDA clearance typically involve multiple institutions, often within the US or compliant with international standards, and are prospective in nature for device validation. The "multi-centered" description suggests multiple sites, implying diverse data. It is a "blinded, and randomized study," which are characteristics of a prospective study.
- Precision Studies (Intra-system, Inter-system, Inter-site):
  - Test Set Sample Size: A "comprehensive set of clinical specimens with defined, clinically relevant histologic features from various organ systems" was used. Specific slide numbers or FOV counts are mentioned as pairwise agreements (e.g., 2,511 comparison pairs for Intra-system and Inter-system; 837 comparison pairs for Inter-site) rather than raw slide counts.
  - Data Provenance: Clinical specimens. Not specified directly, but likely from multiple sites for the reproducibility studies, suggesting a diverse, possibly prospective, collection.
3. Number of Experts and Qualifications:
- Clinical Accuracy Study: The study involved multiple pathologists who performed both digital and optical reviews. The exact number of pathologists is not specified beyond "pathologist" and "qualified pathologist." Their qualifications are generally implied by "qualified pathologist" and the context of a clinical study for an FDA-cleared device.
- Precision Studies:
- Intra-System Repeatability: "three different reading pathologists (RPs)."
- Inter-System Repeatability: "Three reading pathologists."
- Inter-Site Reproducibility: "three different reading pathologists, each located at one of three different sites."
- Qualifications: Referred to as "reading pathologists," implying trained and qualified professionals experienced in interpreting pathology slides.
4. Adjudication Method for the Test Set:
- Clinical Accuracy Study: The ground truth was established by a "main sign-out diagnosis (SD)." This implies a definitive diagnosis made by a primary pathologist, which served as the reference standard. It's not specified if this "main sign-out diagnosis" itself involved an adjudication process, but it is presented as the final reference.
- Precision Studies: For the precision studies, agreement rates were calculated based on the pathologists' readings of predetermined features on "fields of view (FOVs)." The pathologists' original assessments appear to serve as the baseline for agreement in the intra-system study, but the document does not detail how a single ground truth was established for all FOVs before the study (beyond the initial "defined, clinically relevant histologic features") or whether any adjudication occurred during the study. The agreement rates are pairwise comparisons between observers or system readings, not comparisons against a single adjudicated ground truth for each FOV.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:
- A comparative effectiveness study was indeed done, comparing human performance with the E1000 Dx Digital Pathology Solution (MD) to human performance with an optical microscope (MO).
- Effect Size: The study demonstrated non-inferiority of digital review to optical microscopy. The "effect size" is captured by the difference in major discordance rates:
- The estimated difference (MD - MO) was -0.15% (95% CI: -0.40%, 0.41%). This narrow confidence interval, which includes zero and is centered near zero, supports the non-inferiority claim, indicating no practically significant difference in major discordance rates between the two modalities when used by human readers.
6. Standalone (Algorithm Only) Performance:
- No, a standalone (algorithm only) performance study was not conducted or described. This device is a Whole Slide Imaging System intended as an aid to the pathologist for human review and interpretation, not an AI for automated diagnosis.
7. Type of Ground Truth Used:
- Clinical Accuracy Study: The ground truth used was the "main sign-out diagnosis (SD)." This is a form of expert consensus or definitive clinical diagnosis, widely accepted as the reference standard in pathology.
- Precision Study: For the precision studies, "defined, clinically relevant histologic features" were used, and pathologists recorded the presence of these features. While not explicitly stated as a "ground truth" in the same way as the sign-out diagnosis, the 'original assessment' or 'presumed correct' feature presence often serves as a practical ground truth for repeatability and reproducibility calculations.
8. Sample Size for the Training Set:
- The document does not mention a training set as this device is not an AI/ML algorithm that learns from data. It's a hardware and software system designed to digitize and display images for human review. The "development processes" mentioned are for the hardware and software functionality, not for training a model.
9. How the Ground Truth for the Training Set Was Established:
- This question is not applicable as there is no training set for this device as described. Ground truth establishment mentioned in the document relates to clinical validation and precision, not AI model training.
(92 days)
Roche Digital Pathology Dx is an automated digital slide creation, viewing and management system. Roche Digital Pathology Dx is intended for in vitro diagnostic use as an aid to the pathologist to review and interpret digital images of scanned pathology slides prepared from formalin-fixed paraffin-embedded (FFPE) tissue. Roche Digital Pathology Dx is not intended for use with frozen section, cytology, or non-FFPE hematopathology specimens. Roche Digital Pathology Dx is for creation and viewing of digital images of scanned glass slides that would otherwise be appropriate for manual visualization by conventional light microscopy.
Roche Digital Pathology Dx is composed of VENTANA DP 200 slide scanner, VENTANA DP 600 slide scanner, Roche uPath enterprise software, and ASUS PA248QV display. It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the validity of the interpretation of images obtained using Roche Digital Pathology Dx.
Roche Digital Pathology Dx (hereinafter referred to as RDPD), is a whole slide imaging (WSI) system. It is an automated digital slide creation, viewing, and management system intended to aid pathologists in generating, reviewing, and interpreting digital images of surgical pathology slides that would otherwise be appropriate for manual visualization by conventional light microscopy. RDPD system is composed of the following components:
- VENTANA DP 200 slide scanner
- VENTANA DP 600 slide scanner
- Roche uPath enterprise software
- ASUS PA248QV display
VENTANA DP 600 slide scanner has a total capacity of 240 slides through 40 trays with 6 slides each. The VENTANA DP 600 slide scanner and VENTANA DP 200 slide scanner use the same Image Acquisition Unit.
Both VENTANA DP 200 and DP 600 slide scanners are bright-field digital pathology scanners that accommodate loading and scanning of 6 and 240 standard glass microscope slides, respectively. The scanners each have a high-numerical aperture Plan Apochromat 20x objective and are capable of scanning at both 20x and 40x magnifications. The scanners feature automatic detection of the tissue specimen on the glass slide, automated 1D and 2D barcode reading, and selectable volume scanning (3 to 15 focus layers). The International Color Consortium (ICC) color profile is embedded in each scanned slide image for color management. The scanned slide images are generated in a proprietary file format, BioImagene Image File (BIF), that can be uploaded to the uPath Image Management System (IMS), provided with the Roche uPath enterprise software.
Roche uPath enterprise software (uPath), a component of Roche Digital Pathology Dx system, is a web-based image management and workflow software application. uPath enterprise software can be accessed on a Windows workstation using the Google Chrome or Microsoft Edge web browser. The user interface of uPath software enables laboratories to manage their workflow from the time the whole slide image is produced and acquired by VENTANA DP 200 and/or DP 600 slide scanners through the subsequent processes, such as review of the digital image on the monitor screen and reporting of results. The uPath software incorporates specific functions for pathologists, laboratory histology staff, workflow coordinators, and laboratory administrators.
The provided document is a 510(k) summary for the "Roche Digital Pathology Dx" system (K242783), which is a modification of a previously cleared device (K232879). This modification primarily involves adding a new slide scanner model, VENTANA DP 600, to the existing system. The document asserts that due to the identical Image Acquisition Unit (IAU) between the new DP 600 scanner and the previously cleared DP 200 scanner, the technical performance assessment from the predicate device is fully applicable. Therefore, the information provided below will primarily refer to the studies and acceptance criteria from the predicate device that are deemed applicable to the current submission due to substantial equivalence.
Here's an analysis based on the provided text, focusing on the acceptance criteria and the study that proves the device meets them:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly present a discrete "acceptance criteria" table with corresponding numerical performance metrics for the current submission (K242783). Instead, it states that the performance characteristics data collected on the VENTANA DP 200 slide scanner (the predicate device) are representative of the VENTANA DP 600 slide scanner performance because both scanners use the same Image Acquisition Unit (IAU). The table below lists the "Technical Performance Assessment (TPA) Sections" as presented in the document (Table 3), which serve as categories of performance criteria that were evaluated for the predicate device and are considered applicable to the current device. The reported performance for these sections is summarized as "Information was provided in K232879" for the DP 600, indicating that the past performance data is considered valid.
| TPA Section (Acceptance Criteria Category) | Reported Device Performance (for DP 600 scanner) |
|---|---|
| Components: Slide Feeder | No double wide slide tray compatibility, new FMEA provided in K242783. (Predicate: Information on configuration, user interaction, FMEA) |
| Components: Light Source | Information was provided in K232879. (Predicate: Descriptive info on lamp/condenser, spectral distribution verified) |
| Components: Imaging Optics | Information was provided in K232879. (Predicate: Optical schematic, descriptive info, testing for irradiance, distortions, aberrations) |
| Components: Focusing System | Information was provided in K232879. (Predicate: Schematic, description, optical system, cameras, algorithm) |
| Components: Mechanical Scanner Movement | Same except no double wide slide tray compatibility, replaced references & new FMEA items provided in K242783. (Predicate: Information/specs on stage, movement, FMEA, repeatability) |
| Components: Digital Imaging Sensor | Information was provided in K232879. (Predicate: Information/specs on sensor type, pixels, responsivity, noise, data, testing) |
| Components: Image Processing Software | Information was provided in K232879. (Predicate: Information/specs on exposure, white balance, color correction, subsampling, pixel correction) |
| Components: Image Composition | Information was provided in K232879. (Predicate: Information/specs on scanning method, speed, Z-axis planes, analysis of image composition) |
| Components: Image File Formats | Information was provided in K232879. (Predicate: Information/specs on compression, ratio, file format, organization) |
| Image Review Manipulation Software | Information was provided in K232879. (Predicate: Information/specs on panning, zooming, Z-axis displacement, comparison, image enhancement, annotation, bookmarks) |
| Computer Environment | Select upgrades of sub-components & specifications. (Predicate: Information/specs on hardware, OS, graphics, color management, display interface) |
| Display | Information was provided in K232879. (Predicate: Information/specs on pixel density, aspect ratio, display surface, and other display characteristics; performance testing for user controls, spatial resolution, pixel defects, artifacts, temporal response, luminance, uniformity, gray tracking, color scale, color gamut) |
| System-level Assessments: Color Reproducibility | Information was provided in K232879. (Predicate: Test data for color reproducibility) |
| System-level Assessments: Spatial Resolution | Information was provided in K232879. (Predicate: Test data for composite optical performance) |
| System-level Assessments: Focusing Test | Information was provided in K232879. (Predicate: Test data for technical focus quality) |
| System-level Assessments: Whole Slide Tissue Coverage | Information was provided in K232879. (Predicate: Test data for tissue detection algorithms and inclusion of tissue in digital image file) |
| System-level Assessments: Stitching Error | Information was provided in K232879. (Predicate: Test data for stitching errors and artifacts) |
| System-level Assessments: Turnaround Time | Information was provided in K232879. (Predicate: Test data for turnaround time) |
| User Interface | Identical workflow, replacement of new scanner component depiction. (Predicate: Information on user interaction, human factors/usability validation) |
| Labeling | Same content, replaced references. (Predicate: Compliance with 21 CFR Parts 801 and 809, special controls) |
| Quality Control | Same content, replaced references. (Predicate: QC activities by user, lab technician, pathologist prior to/after scanning) |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document explicitly states that the technical performance assessment data was "collected on the VENTANA DP 200 slide scanner" (the predicate device). However, the specific sample sizes for these technical studies (e.g., number of slides used for focusing tests, stitching error analysis, etc.) are not detailed in this summary. The data provenance (country of origin, retrospective/prospective) for these underlying technical studies from K232879 is also not provided in this document.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
This section is not applicable or not provided by the document. The studies mentioned are primarily technical performance assessments related to image quality, system components, and usability, rather than diagnostic accuracy studies requiring expert pathologist interpretation for ground truth. For the predicate device, it mentions "a qualified pathologist" is responsible for interpretation, but this refers to the end-user clinical use, not the establishment of ground truth for device validation.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
This section is not applicable or not provided by the document. As noted above, the document details technical performance studies rather than diagnostic performance studies that would typically involve multiple readers and adjudication methods for diagnostic discrepancies.
5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
No MRMC comparative effectiveness study is mentioned in the provided text. The device, "Roche Digital Pathology Dx," is described as a "digital slide creation, viewing and management system" intended "as an aid to the pathologist to review and interpret digital images." It is a Whole Slide Imaging (WSI) system, and the submission is focused on demonstrating the technical equivalence of a new scanner component, not on evaluating AI assistance or its impact on human reader performance.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
No standalone algorithm performance study is mentioned. The device is a WSI system for "aid to the pathologist to review and interpret digital images," implying a human-in-the-loop system. The document does not describe any specific algorithms intended for automated diagnostic interpretation or analysis in a standalone capacity.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
For the technical performance studies referenced from the predicate device (K232879), the "ground truth" would be established by physical measurements and engineering specifications for aspects like spatial resolution, color reproducibility, focusing quality, tissue coverage, and stitching errors, rather than clinical outcomes or diagnostic pathology. For instance, color reproducibility would be assessed against a known color standard, and spatial resolution against resolution targets. The document does not explicitly state the exact types of ground truth used for each technical assessment but refers to "test data to evaluate" these characteristics.
8. The sample size for the training set
Not applicable/not provided. The document describes a WSI system, not an AI/ML-based diagnostic algorithm that would typically require a training set. The specific "Image Acquisition Unit" components (hardware and software for pixel pipeline) are stated to be "functionally identical" to the predicate, implying established design rather than iterative machine learning.
9. How the ground truth for the training set was established
Not applicable/not provided. As there is no mention of a training set for an AI/ML algorithm, the method for establishing its ground truth is not discussed.
(81 days)
The Philips IntelliSite Pathology Solution (PIPS) 5.1 is an automated digital slide creation, viewing, and management system. The PIPS 5.1 is intended for in vitro diagnostic use as an aid to the pathologist to review and interpret digital images of surgical pathology slides prepared from formalin-fixed paraffin embedded (FFPE) tissue. The PIPS 5.1 is not intended for use with frozen section, cytology, or non-FFPE hematopathology specimens.
The PIPS 5.1 comprises the Image Management System (IMS) 4.2, the Ultra Fast Scanner (UFS), the Pathology Scanner SG20, Pathology Scanner SG60, and Pathology Scanner SG300, and a Philips PP27QHD display or a Beacon C411W display. The PIPS 5.1 is for creation and viewing of digital images of scanned glass slides that would otherwise be appropriate for manual visualization by conventional light microscopy. It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the validity of the interpretation of images obtained using PIPS 5.1.
The Philips IntelliSite Pathology Solution (PIPS) 5.1 is an automated digital slide creation, viewing, and management system. PIPS 5.1 consists of two subsystems and a display component:
- A scanner, in any combination of the following scanner models:
  - Ultra Fast Scanner (UFS)
  - Pathology Scanner SG, with different versions for varying slide capacity: Pathology Scanner SG20, Pathology Scanner SG60, Pathology Scanner SG300
- Image Management System (IMS) 4.2
- Clinical display: PP27QHD or C411W
PIPS is for creation and viewing of digital images of scanned glass slides that would otherwise be appropriate for manual visualization by conventional light microscopy. The PIPS does not include any automated image analysis applications that would constitute computer aided detection or diagnosis. The pathologists only view the scanned images and utilize the image review manipulation software in the PIPS.
This document focuses on the Philips IntelliSite Pathology Solution 5.1 (PIPS 5.1) and its substantial equivalence to a predicate device, primarily due to the introduction of a new clinical display. This is a 510(k) submission, meaning it aims to demonstrate that the new device is as safe and effective as a legally marketed predicate device, rather than proving de novo effectiveness. Therefore, the study described is a non-clinical performance study to demonstrate equivalence of the new display, not a clinical effectiveness study.
Based on the provided text, a detailed breakdown of acceptance criteria and the proving study is as follows:
1. Table of Acceptance Criteria and Reported Device Performance
The document states that the evaluation was performed following the FDA's Guidance for Industry and FDA Staff entitled, "Technical Performance Assessment of Digital Pathology Whole Slide Imaging Devices" (TPA Guidance), dated April 20, 2016. The acceptance criteria are essentially defined by compliance with the tests outlined in this guidance and relevant international standards.
| Acceptance Criteria (Measured Performance Aspect) | Performance Standard/Acceptance Limit (Implicitly based on TPA Guidance & Predicate Equivalence) | Reported Device Performance (Summary from "Conclusion") |
|---|---|---|
| TPA Guidance Items related to Display: | ||
| Spatial resolution | As per predicate device and TPA Guidance | Verified to be similar to predicate device |
| Pixel defects | As per predicate device and TPA Guidance | Verified to be similar to predicate device |
| Artifacts | As per predicate device and TPA Guidance | Verified to be similar to predicate device |
| Temporal response | As per predicate device and TPA Guidance | Verified to be similar to predicate device |
| Maximum and minimum luminance | As per predicate device and TPA Guidance | Verified to be similar to predicate device |
| Grayscale | As per predicate device and TPA Guidance | Verified to be similar to predicate device |
| Luminance uniformity | As per predicate device and TPA Guidance | Verified to be similar to predicate device |
| Stability of luminance and chromaticity | As per predicate device and TPA Guidance | Verified to be similar to predicate device |
| Bidirectional reflection distribution function | As per predicate device and TPA Guidance | Verified to be similar to predicate device |
| Gray tracking | As per predicate device and TPA Guidance | Verified to be similar to predicate device |
| Color scale response | As per predicate device and TPA Guidance | Verified to be similar to predicate device |
| Color gamut volume | As per predicate device and TPA Guidance | Verified to be similar to predicate device |
| International & FDA-recognized Consensus Standards: | Compliance Required | Compliance Achieved |
| IEC 60601-1 Ed. 3.2 (Medical electrical equipment - General requirements for basic safety and essential performance) | Compliance | Compliant |
| IEC 60601-1-6 (4th Ed) (Usability) | Compliance | Compliant |
| IEC 62471:2006 (Photobiological safety) | Compliance | Compliant |
| ISO 14971:2019 (Risk management) | Compliance | Compliant |
| Other: | Compliance Required | Compliance Achieved |
| Existing functional, safety, and system integration requirements related to the display | Verified to function as intended without adverse impact from new display | Verified to be safe and effective |
Reported Device Performance Summary: The non-clinical performance testing of the new display (Beacon C411W) showed that the proposed device has similar technological characteristics compared to the predicate device (using the PP27QHD display) following the TPA Guidance. It is also in compliance with the aforementioned international and FDA-recognized consensus standards. The verification and validation of existing safety, user, and system integration requirements showed that the proposed PIPS 5.1 with the new clinical display is safe and effective.
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: The document does not specify a "sample size" in terms of patient cases or images for testing the display. The testing performed was bench testing ("Verification for the new display," "non-clinical performance data"). This implies that the tests were conducted on the display unit itself, measuring its physical and optical properties, and its integration with the system components, rather than on a dataset of patient images reviewed by observers.
- Data Provenance: Not applicable in the context of a display characteristic validation study. The study focused on the performance of the hardware (the new display).
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
Not applicable. This was a technical, non-clinical validation of a display unit's characteristics against engineering specifications and regulatory guidance, not a study requiring expert clinical read-outs or ground truth establishment from patient data.
4. Adjudication Method for the Test Set
Not applicable. This was a technical, non-clinical validation.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done
- No, an MRMC comparative effectiveness study was NOT done. The document explicitly states: "The proposed device with the new display did not require clinical performance data since substantial equivalence to the currently marketed predicate device was demonstrated with the following attributes: Intended Use / Indications for Use, Technological characteristics, Non-clinical performance testing, and Safety and effectiveness."
- The purpose of this submission was to demonstrate substantial equivalence for a minor hardware change (new display), not to show an improvement in human reader performance with AI assistance. The PIPS system itself does not include "any automated image analysis applications that would constitute computer aided detection or diagnosis." It is a whole slide imaging system for viewing and managing digital slides.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done
- Not applicable. The PIPS 5.1 is a system for creating, viewing, and managing digital slides for human pathologist review. It is not an AI algorithm that produces a diagnostic output on its own. The "standalone" performance here refers to the display's technical specifications.
7. The Type of Ground Truth Used
- For the non-clinical performance data, the "ground truth" was established by engineering specifications, international consensus standards (e.g., IEC, ISO), and the FDA's TPA Guidance. The aim was to ensure the new display performed equivalently to the predicate's approved display and met relevant technical requirements.
8. The Sample Size for the Training Set
Not applicable. This was a non-clinical validation of hardware (a display), not a machine learning model requiring a training set.
9. How the Ground Truth for the Training Set Was Established
Not applicable. (See #8)
(158 days)
The Philips IntelliSite Pathology Solution (PIPS) is an automated digital slide creation, viewing, and management system. The PIPS is intended for in vitro diagnostic use as an aid to the pathologist to review and interpret digital images of surgical pathology slides prepared from formalin-fixed paraffin embedded (FFPE) tissue. The PIPS is not intended for use with frozen section, cytology, or non-FFPE hematopathology specimens.
The PIPS comprises the Image Management System (IMS), the Ultra Fast Scanner (UFS), and a Philips PP27QHD display or a Beacon C411W display. The PIPS is for creation and viewing of digital images of scanned glass slides that would otherwise be appropriate for manual visualization by conventional light microscopy. It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the validity of the interpretation of images obtained using PIPS.
The Philips IntelliSite Pathology Solution (PIPS) is an automated digital slide creation, viewing, and management system. The PIPS comprises the Image Management System (IMS), the Ultra Fast Scanner (UFS), and a Philips PP27QHD display or a Beacon C411W display.
N/A
(270 days)
The Philips IntelliSite Pathology Solution (PIPS) 5.1 is an automated digital slide creation, viewing, and management system. The PIPS 5.1 is intended for in vitro diagnostic use as an aid to the pathologist to review and interpret digital images of surgical pathology slides prepared from formalin-fixed paraffin embedded (FFPE) tissue. The PIPS 5.1 is not intended for use with frozen section, cytology, or non-FFPE hematopathology specimens.
The PIPS 5.1 comprises the Image Management System (IMS) 4.2, the Ultra Fast Scanner (UFS), the Pathology Scanner SG20, Pathology Scanner SG60, and Pathology Scanner SG300, and the PP27QHD display. The PIPS 5.1 is for creation and viewing of digital images of scanned glass slides that would otherwise be appropriate for manual visualization by conventional light microscopy. It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the validity of the interpretation of images obtained using PIPS 5.1.
The Philips IntelliSite Pathology Solution (PIPS) 5.1 is an automated digital slide creation, viewing, and management system. PIPS 5.1 consists of two subsystems and a display component:
- Subsystems:
  a. A scanner in any combination of the following scanner models:
     i. Ultra Fast Scanner (UFS)
     ii. Pathology Scanner SG with different versions for varying slide capacity: Pathology Scanner SG20, Pathology Scanner SG60, Pathology Scanner SG300
  b. Image Management System (IMS) 4.2
- Display: PP27QHD
Here's a breakdown of the acceptance criteria and study details for the Philips IntelliSite Pathology Solution 5.1, based on the provided FDA 510(k) summary:
Acceptance Criteria and Reported Device Performance
| Acceptance Criteria Category | Acceptance Criteria | Reported Device Performance (Summary) |
|---|---|---|
| Technical Performance (Non-Clinical) | All technical studies (e.g., Light Source, Imaging optics, Mechanical scanner Movement, Digital Imaging sensor, Image Processing Software, Image composition, Image Review Manipulation Software, Color Reproducibility, Spatial Resolution, Focusing Test, Whole Slide Tissue Coverage, Stitching Error, Turnaround Time) must pass their predefined acceptance criteria. | All technical studies passed their acceptance criteria. Pixelwise comparison showed identical image reproduction with zero ΔE between subject and predicate device. |
| Electrical Safety | Compliance with IEC61010-1. | Passed. |
| Electromagnetic Compatibility (EMC) | Compliance with IEC 61326-2-6 (for laboratory use of in vitro diagnostic equipment) and IEC 60601-1-2. | Passed for both emissions and immunity. |
| Human Factors | User tasks and use scenarios successfully completed by all user groups. | Successfully completed for all user groups. |
| Precision Study (Intra-system) | Lower limit of the 95% Confidence Interval (CI) of the Average Positive Agreement exceeding 85%. | Overall Agreement Rate: 88.3% (95% CI: 86.7%; 89.9%). All individual scanner CIs also exceeded 85%. |
| Precision Study (Inter-system) | Lower limit of the 95% CI of the Average Positive Agreement exceeding 85%. | Overall Agreement Rate: 95.4% (95% CI: 94.4%; 96.5%). All individual scanner comparison CIs also exceeded 85%. |
| Precision Study (Inter-site) | Lower limit of the 95% CI of the Average Positive Agreement exceeding 85%. | Overall Agreement Rate: 90.7% (95% CI: 88.4%; 92.9%). All individual site comparison CIs also exceeded 85%. |
| Clinical Study (Non-Inferiority) | The upper bound of the 95% two-sided confidence interval for the manual digital – manual optical difference in major discordance rate is less than 4%. | Difference in major discordance rate (digital-optical) was 0.1% with a 95% CI of (-1.01%; 1.18%). The upper limit (1.18%) was less than the non-inferiority margin of 4%. |
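The pixelwise comparison row above reports identical image reproduction with zero ΔE between subject and predicate device. As a hedged sketch of what such a check can look like, the snippet below computes per-pixel CIE76 ΔE for two CIELAB images; the submission does not state which ΔE formula or conversion pipeline was actually used.

```python
import numpy as np

def delta_e_cie76(lab1: np.ndarray, lab2: np.ndarray) -> np.ndarray:
    """Per-pixel CIE76 color difference between two images in CIELAB space.

    Arrays have shape (H, W, 3) holding (L*, a*, b*). A ΔE of zero at every
    pixel means the two renderings are numerically identical. Assumes the
    RGB-to-Lab conversion has already been done upstream; CIE76 is an
    assumption, as the submission does not name the formula.
    """
    return np.sqrt(np.sum((lab1.astype(float) - lab2.astype(float)) ** 2, axis=-1))

# Identical ROIs give ΔE == 0 everywhere.
roi = np.random.rand(64, 64, 3) * 100.0
assert np.all(delta_e_cie76(roi, roi.copy()) == 0.0)
```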
Study Details
2. Sample size used for the test set and the data provenance:
- Non-Clinical (Pixelwise Comparison):
  - Sample Size: 42 FFPE tissue glass slides from different anatomic locations. Three regions of interest (ROI) were selected from each scanned image.
  - Data Provenance: Not explicitly stated, but likely retrospective from existing archives given the nature of image comparison. The country of origin is not specified.
- Precision Study:
  - Sample Size: Not explicitly stated as a single number but implied by the "Number of Comparison Pairs" in the tables:
    - Intra-system: 3600 comparison pairs (likely 3 scanners with multiple reads/slides contributing).
    - Inter-system: 3610 comparison pairs.
    - Inter-site: 1228 comparison pairs.
  - Data Provenance: Not explicitly stated, but the inter-site component suggests data from multiple locations. Retrospective or prospective is not specified.
- Clinical Study:
  - Sample Size: 952 cases consisting of multiple organ and tissue types.
  - Data Provenance: Cases were divided over three sites. Retrospective or prospective is not specified, but the design (randomized order, washout period) suggests a prospective setup for the reading phase. The "original sign-out diagnosis rendered at the institution" implies a retrospective component for establishing the initial ground truth.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Non-Clinical (Pixelwise Comparison): No experts were explicitly mentioned for ground truth establishment; the comparison was purely technical (pixel-to-pixel).
- Precision Study: The ground truth for agreement was based on the comparison of diagnoses by pathologists, but the initial "ground truth" for the slides themselves (e.g., what they actually represented) isn't detailed in terms of expert consensus.
- Clinical Study:
- Initial Ground Truth: The "original sign-out diagnosis rendered at the institution, using an optical (light) microscope" served as the primary reference diagnosis. The qualifications of these original pathologists are implied to be standard for their role but not explicitly stated (e.g., "radiologist with 10 years of experience").
- Adjudication: Three adjudicators reviewed the reader diagnoses against the sign-out diagnosis to determine concordance, minor discordance, or major discordance. Their qualifications are not specified beyond being "adjudicators."
4. Adjudication method (for the test set):
- Clinical Study: Three adjudicators reviewed the reader diagnoses (from both manual digital and manual optical modalities) against the original sign-out diagnosis. The method for resolving disagreements among the three adjudicators (e.g., 2+1 majority, consensus) is not specified.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance:
- Yes, a MRMC study was done, but it was for comparative non-inferiority between digital and optical methods by human readers, not explicitly for AI assistance. The study compared human pathologists reading slides using the digital system (PIPS 5.1) versus human pathologists reading slides using a traditional optical microscope.
- Effect Size of AI: This study does not involve AI assistance for human readers. The device (PIPS 5.1) is a whole slide imaging system, not an AI diagnostic tool. Therefore, there is no reported effect size regarding human reader improvement with AI assistance from this study.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:
- No. The Philips IntelliSite Pathology Solution 5.1 is described as "an aid to the pathologist to review and interpret digital images." The clinical study clearly focuses on the performance of human pathologists using the system, demonstrating its non-inferiority to optical microscopy for human interpretation. There is no mention of a standalone algorithm performance.
7. The type of ground truth used:
- Non-Clinical (Pixelwise Comparison): The "ground truth" was the direct pixel data from the predicate device, against which the subject device's reproduced pixels were compared for identity.
- Precision Study: The ground truth for evaluating agreement rates was the diagnoses made by pathologists on different scans of the same slides. The ultimate truth of the disease state was implicitly tied to the original diagnostic process.
- Clinical Study: The primary ground truth was "the original sign-out diagnosis rendered at the institution, using an optical (light) microscope." This represents a form of expert consensus/established diagnosis within a clinical setting.
8. The sample size for the training set:
- Not Applicable / Not Provided. The provided document describes a 510(k) submission for a Whole Slide Imaging (WSI) system, which is a medical device for generating, viewing, and managing digital images of pathology slides. It acts as a digital microscope. It is not an AI algorithm or a diagnostic tool that requires a training set in the typical machine learning sense to learn a particular diagnostic task. Therefore, no training set data is relevant or provided here.
9. How the ground truth for the training set was established:
- Not Applicable / Not Provided. As explained above, this device does not utilize a training set in the AI/ML context.
(270 days)
Roche Digital Pathology Dx (VENTANA DP 200) is an automated digital slide creation, viewing and management system. Roche Digital Pathology Dx (VENTANA DP 200) is intended for in vitro diagnostic use as an aid to the pathologist to review and interpret digital images of scanned pathology slides prepared from formalin-fixed paraffin-embedded (FFPE) tissue. Roche Digital Pathology Dx (VENTANA DP 200) is not intended for use with frozen section, cytology, or non-FFPE hematopathology specimens. Roche Digital Pathology Dx (VENTANA DP 200) is for creation and viewing of digital images of scanned glass slides that would otherwise be appropriate for manual visualization by conventional light microscopy.
Roche Digital Pathology Dx (VENTANA DP 200), hereinafter referred to as Roche Digital Pathology Dx, is a whole slide imaging (WSI) system. It is an automated digital slide creation, viewing, and management system intended to aid pathologists in generating, reviewing, and interpreting digital images of surgical pathology slides that would otherwise be appropriate for manual visualization by conventional light microscopy. Roche Digital Pathology Dx system is composed of the following components:
- VENTANA DP 200 slide scanner
- Roche uPath enterprise software 1.1.1 (hereinafter, "uPath")
- ASUS PA248QV display
VENTANA DP 200 slide scanner is a bright-field digital pathology scanner that accommodates loading and scanning of up to 6 standard slides. The scanner comprises a high-resolution 20x objective with the ability to scan at both 20x and 40x. With its uniquely designed optics and scanning methods, VENTANA DP 200 scanner enables users to capture sharp, high-resolution digital images of stained tissue specimens on glass slides. The scanner features automatic detection of the tissue specimen on the slide, automated 1D and 2D barcode reading, and selectable volume scanning (3 to 15 focus layers). It also integrates color profiling to ensure that images produced from scanned slides are generated with a color-managed International Color Consortium (ICC) profile. VENTANA DP 200 image files are generated in a proprietary format (BIF) and can be uploaded to an Image Management System (IMS), such as the one provided with Roche uPath enterprise software.
Roche uPath enterprise software (uPath), a component of Roche Digital Pathology system, is a web-based image management and workflow software application. uPath enterprise software can be accessed on a Windows workstation using Google Chrome or Microsoft Edge. The interface of uPath software enables laboratories to manage their workflow from the time the digital slide image is produced and acquired by a VENTANA slide scanner through the subsequent processes including, but not limited to, review of the digital image on the monitor screen, analysis, and reporting of results. The software incorporates specific functions for pathologists, laboratory histology staff, workflow coordinators, and laboratory administrators.
The provided text describes the acceptance criteria and the study that proves the Roche Digital Pathology Dx (VENTANA DP 200) device meets these criteria for FDA 510(k) clearance.
Here's a breakdown of the requested information:
1. Table of Acceptance Criteria and Reported Device Performance
The core clinical acceptance criterion for the Roche Digital Pathology Dx system was non-inferiority of digital read (DR) accuracy compared to manual read (MR) accuracy.
Acceptance Criteria for Clinical Accuracy:
| Acceptance Criterion (Primary Objective) | Reported Device Performance |
|---|---|
| The lower bound of a 2-sided 95% confidence interval for the difference in accuracy (DR - MR) had to be greater than or equal to -4%. | Observed: overall agreement rate DR = 92.00%, MR = 92.61%; DR - MR difference in agreement rate = -0.61% (95% CI: -1.59%, 0.35%). Model (generalized linear mixed model): estimated agreement rates DR = 91.54%, MR = 92.16%; DR - MR difference = -0.62% (95% CI: -1.50%, 0.26%). Result: the lower limit of the 95% CI for DR - MR (-1.59% observed, -1.50% model) was greater than the pre-specified non-inferiority margin of -4%, so the DR modality was demonstrated to be non-inferior to the MR modality. |
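To make the primary criterion concrete, here is a minimal sketch of the arithmetic: a Wald-style 95% CI for the difference in agreement rates, checked against the -4% margin. It uses the observed rates from the table, but the sample-size and independence assumptions are ours; the submission's actual analysis used a generalized linear mixed model accounting for the paired, clustered design, so these numbers will not match the reported CI exactly.

```python
from math import sqrt

def noninferiority_check(p_dr, p_mr, n_dr, n_mr, margin=-0.04, z=1.96):
    """Wald 95% CI for the difference in agreement rates (DR - MR).

    Simplified: treats the two arms as independent proportions, ignoring
    the paired, clustered design that the study's mixed model handled.
    """
    diff = p_dr - p_mr
    se = sqrt(p_dr * (1 - p_dr) / n_dr + p_mr * (1 - p_mr) / n_mr)
    lower, upper = diff - z * se, diff + z * se
    return diff, lower, upper, lower >= margin

# Rates from the table above; n = 2047 cases per modality is an assumption,
# since each case was actually read by several pathologists.
diff, lo, hi, ok = noninferiority_check(0.9200, 0.9261, 2047, 2047)
print(f"DR-MR = {diff:+.2%}, 95% CI ({lo:+.2%}, {hi:+.2%}), non-inferior: {ok}")
```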
Acceptance Criteria for Analytical Performance (Precision):
| Acceptance Criterion | Reported Device Performance |
|---|---|
| Lower bounds of the 2-sided 95% CIs for all co-primary endpoints (Overall Percent Agreement [OPA] point estimates for between-site/system, between-day/within-system, and between-reader agreement) were at least 85%. | Between-site/system OPA: 89.3% (95% CI: 85.8%, 92.4%). Between-day/within-system OPA: 90.3% (95% CI: 87.1%, 93.2%). Between-reader OPA: 90.1% (95% CI: 86.6%, 93.0%). Result: for all co-primary analyses, the lower bounds of the 95% CIs were >85%, demonstrating acceptable precision. |
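The precision criterion reduces to checking the lower bound of a 95% CI for each OPA against 85%. A minimal sketch using a Wilson score interval as a stand-in (the summary does not state which interval method was used, and the comparison counts below are assumed purely for illustration):

```python
from math import sqrt

def wilson_lower_bound(successes, n, z=1.96):
    """Lower limit of the 2-sided 95% Wilson score interval for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = p + z**2 / (2 * n)
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (center - half) / denom

# Illustrative counts only: n is assumed, chosen so the point estimate lands
# near the reported 89.3% between-site/system OPA.
n, agreements = 600, 536
lb = wilson_lower_bound(agreements, n)
print(f"OPA = {agreements / n:.1%}, 95% lower bound = {lb:.1%}, "
      f"passes >=85%: {lb >= 0.85}")
```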
2. Sample Size Used for the Test Set and Data Provenance
- Clinical Accuracy Study (Test Set):
- Sample Size: 2047 cases (total of 3259 slides) consisting of multiple organ and tissue types.
- Data Provenance: Multi-center study conducted at four sites. The text does not explicitly state the country of origin, but the submitter is based in Tucson, Arizona, suggesting the data originated in the United States. The cases were retrospective, pre-screened from archived specimens in the clinical databases of the study sites, with a minimum of one year between the date of sign-out diagnosis and the beginning of the study.
- Precision Study (Test Set):
- Sample Size: 69 study cases (slides), each with 3 ROIs, totaling 207 "study" ROIs. An additional 12 "wild card" cases (36 wild card ROIs) were included to reduce recall bias but excluded from statistical analysis.
- Data Provenance: Study conducted at 3 external pathology laboratories (study sites). The text doesn't explicitly state the country of origin. The cases contained H&E-stained archival slides of FFPE human tissue.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Clinical Accuracy Study (Establishing Reference Diagnosis / Sign-Out Truth):
- Initial Verification: Two Screening Pathologists at each study site pre-screened cases and determined inclusion/exclusion criteria. The first Screening Pathologist reviewed H&E and ancillary stained slides using manual microscopy to identify representative slides and confirmed the diagnosis against the sign-out report. A second Screening Pathologist then verified the sign-out diagnosis data.
- Qualifications: "Qualified pathologist" is mentioned. The study design implies these are experienced professionals involved in routine diagnostic pathology.
- Precision Study (Establishing Reference Feature):
- "Primary feature for that case" acted as the reference. The mechanism for establishing this ultimate ground truth for features in the ROIs is not explicitly detailed beyond being "protocol-specified." However, Screening Pathologists were involved in selecting ROIs for each slide. It's implied that the reference for the presence of the 23 specific histopathologic features was expert-derived based on consensus or previous established pathology.
4. Adjudication Method for the Test Set (Clinical Accuracy Study)
- Method: A (2+1) or (2+1+panel) adjudication method was used (sketched in code after this list).
- For each Reading Pathologist's diagnosis, two Adjudication Pathologists (blinded to site, Reading Pathologist, and reading modality) separately assessed agreement with the original sign-out diagnosis (reference diagnosis).
- If the two adjudicators disagreed, a third Adjudication Pathologist reviewed the case to achieve a majority consensus.
- In cases where all three adjudicators had different opinions, consensus was reached in an adjudication panel meeting consisting of the same three Adjudication Pathologists.
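The flow above maps directly onto simple decision logic. A minimal sketch with hypothetical verdict labels; the protocol's actual concordance categories are not reproduced in this summary:

```python
from collections import Counter

def adjudicate(assessments):
    """Resolve agreement status from sequential adjudicator assessments.

    assessments: ordered verdicts such as "concordant" or "discordant".
    Two adjudicators review first; a third breaks ties; if all three
    differ, a panel of the same three adjudicators decides.
    """
    first, second = assessments[0], assessments[1]
    if first == second:                       # two adjudicators agree
        return first
    third = assessments[2]                    # tie-breaking review
    verdict, votes = Counter([first, second, third]).most_common(1)[0]
    if votes >= 2:                            # majority consensus
        return verdict
    return "panel decision required"          # all three differ

print(adjudicate(["concordant", "discordant", "concordant"]))  # -> concordant
```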
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
- Yes, an MRMC comparative effectiveness study was done. This was the Clinical Accuracy Study.
- Effect Size of Human Readers Improvement with AI vs. Without AI Assistance:
- The study design was a non-inferiority study showing that viewing digital images (with the WSI system as the "aid"; no AI algorithm is involved) is not worse than manual microscopy. It did not directly quantify improvement of human readers with AI versus without AI assistance.
- Instead, it compared the diagnostic accuracy of pathologists using the digital system (DR) versus traditional microscopy (MR) directly against the reference sign-out diagnosis.
- The observed overall agreement rate was 92.00% for DR and 92.61% for MR. The difference (DR - MR) was -0.61%. This suggests a slight decrease in agreement rate when using DR compared to MR, but it was statistically non-inferior (i.e., not significantly worse than MR, within the defined margin). The study states, "These model results failed to show any statistically significant difference between the 2 reading modalities."
- Therefore, the effect size is that there was no statistically significant difference in diagnostic agreement rates between digital review and manual review, demonstrating non-inferiority rather than an improvement.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
- No, a standalone (algorithm-only) performance study was not done for diagnostic accuracy.
- The device is a "Whole slide imaging system" intended "as an aid to the pathologist to review and interpret digital images." Its performance evaluation (clinical accuracy) was explicitly designed as a human-in-the-loop study (pathologists using the system for diagnosis).
- However, technical studies (e.g., color reproducibility, spatial resolution, focusing, whole slide tissue coverage, stitching error) assessed specific algorithm/system components in a standalone or technical performance manner. For instance, the "Image Processing Software" section describes various algorithms (exposure control, white balance, color correction, etc.), and "Image Composition" discusses scanning methods. These are technical assessments of the system's output quality rather than diagnostic accuracy.
7. The Type of Ground Truth Used
- Clinical Accuracy Study: The ground truth for diagnostic accuracy was the original sign-out pathologic diagnosis rendered at the study sites using an optical (light) microscope, verified by two screening pathologists. This represents expert consensus/established clinical diagnosis.
- Precision Study: The ground truth for feature detection was the "reference primary feature for that case." This was established by "Screening Pathologists" who selected the ROIs containing these features, implying expert-identified features.
8. The Sample Size for the Training Set
The provided document describes studies for device validation and clearance, not for the development and training of a machine learning model. Therefore, no information on the sample size for a training set is provided. The Roche Digital Pathology Dx system is described as a "whole slide imaging (WSI) system" and its components (scanner, software, display), without mention of AI/ML components for automated diagnosis or feature detection that would require a separate training set. The "AI" mentioned in question 5 refers to the digital WSI system as an "aid" to the human reader, not necessarily an AI algorithm performing diagnosis independently.
9. How the Ground Truth for the Training Set Was Established
As no training set is described (since this is primarily a WSI system for human review, not an autonomous AI diagnostic algorithm), this information is not applicable.
(237 days)
For In Vitro Diagnostic Use
HALO AP Dx is a software only device intended as an aid to the pathologist to review, interpret and manage digital images of scanned surgical pathology slides prepared from formalin-fixed paraffin embedded (FFPE) tissue for the purposes of pathology primary diagnosis. HALO AP Dx is not intended for use with frozen section, cytology, or non-FFPE hematopathology specimens.
It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the quality of the images obtained and, where necessary, use conventional light microscopy review when making a diagnostic decision. HALO AP Dx is intended for use with the Hamamatsu NanoZoomer S360MD Slide scanner and the JVC Kenwood JD-C240BN01A display.
HALO AP Dx, version 2.1, is a browser-based, software-only device intended to aid pathology professionals in viewing, manipulating, managing, and interpreting digital pathology whole slide images (WSI) of glass slides obtained from the Hamamatsu Photonics K.K. NanoZoomer S360MD scanner and viewed on the JVC Kenwood JD-C240BN01A display.
HALO AP Dx is typically operated as follows:
- Image acquisition is performed using the predicate device, NanoZoomer S360MD Slide scanner according to its Instructions for Use. The operator performs quality control of the digital slides per the instructions of the NanoZoomer and lab specifications to determine if re-scans are necessary.
- Once image acquisition is complete, the unaltered image is saved by the scanner's software to an image storage location. HALO AP Dx ingests the image, and a copy of image metadata is stored in the subject device's database to improve viewing response times.
- Scanned images are reviewed by scanning personnel, such as histotechnicians, to confirm image quality and initiate any re-scans before the images are made available to the pathologist.
- The reading pathologist selects a patient case from a selected worklist within HALO AP Dx whereby the subject device fetches the associated images from external image storage.
- The reading pathologist uses the subject device to view the images and can perform the following actions, as needed:
a. Zoom and pan the image.
b. Measure distances and areas in the image.
c. Annotate images.
d. View multiple images side by side in a synchronized fashion.
The above steps are repeated as necessary.
After viewing all images belonging to a particular case (patient), the pathologist will make a diagnosis which is documented in another system, such as a Laboratory Information System (LIS).
The interoperable components of HALO AP Dx are provided in table 1 below:
Table 1. Interoperable Components for Use with HALO AP Dx
| Components | Manufacturer | Model |
|---|---|---|
| Scanner | Hamamatsu | NanoZoomer S360MD Slide scanner |
| Display | JVC | JD-C240BN01A |
This FDA 510(k) clearance letter pertains to HALO AP Dx, a software-only device for digital pathology image review. The documentation indicates that the device has been deemed substantially equivalent to a predicate device, the Hamamatsu NanoZoomer S360MD Slide scanner system (K213883).
Here's an analysis of the acceptance criteria and the study that proves the device meets them, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The provided document defines the performance data points primarily through its "Performance Data" and "Summary of Studies" sections, focusing on comparisons to the predicate device and on usability. Explicit quantitative thresholds are stated only for some tests (e.g., turnaround times, color difference); elsewhere the criteria are statements of adequacy and similarity to the predicate.
| Acceptance Criterion (Implicit) | Reported Device Performance |
|---|---|
| Image Reproduction Quality (Color Accuracy) | Criteria: identical image reproduction compared to the predicate device (NZViewMD viewer), specifically regarding pixel-wise color accuracy. Performance: pixel-level comparisons demonstrated that the 95th percentile CIEDE2000 values across all Regions of Interest (ROIs) from varied tissue types and diagnoses were less than 3 ΔE00. This was determined to be "identical image reproduction." |
| Turnaround Time (Image Loading - Case Selection) | Criteria: "When selecting a case, it should not take longer than 4 seconds until the image is fully loaded." Performance: determined to be "adequate for the intended use of the subject device." (No specific value reported, but this implies ≤4 seconds.) |
| Turnaround Time (Image Loading - Panning) | Criteria: "When panning the image, it should not take longer than 3 seconds until the image is fully loaded." Performance: determined to be "adequate for the intended use of the subject device." (No specific value reported, but this implies ≤3 seconds.) |
| Measurement Accuracy | Criteria: ability to perform accurate measurements (distance and area). Performance: "The subject device has been found to perform accurate measurements with respect to its intended use." (Verified using a test image with known sizes.) |
| System Responsiveness under Load | Criteria: maintain responsiveness under constant utilization. Performance: "Concurrent multi-user load testing confirms HALO AP Dx performance remains responsive under constant utilization over a long time period." |
| Human Factors/Usability (Safety and Effectiveness for Users) | Criteria: user interface is intuitive, safe, and effective for intended users. Performance: "Task-based usability tests verified the HALO AP Dx user interface to be intuitive, safe, and effective for the range of intended users." (Conducted per FDA's Guidance on Applying Human Factors and Usability Engineering to Medical Devices, 2016.) |
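The color-accuracy criterion relies on the CIEDE2000 (ΔE00) color-difference formula. A minimal sketch of the 95th-percentile check using scikit-image, assuming the two viewers' renderings of the same ROI have been exported as RGB images (the file names are hypothetical; the actual test harness is not described in the summary):

```python
import numpy as np
from skimage import io
from skimage.color import rgb2lab, deltaE_ciede2000

# Hypothetical renderings of the same ROI by subject and predicate viewers.
subject = io.imread("roi_haloapdx.png")[..., :3] / 255.0
predicate = io.imread("roi_nzviewmd.png")[..., :3] / 255.0

# CIEDE2000 is defined in CIELAB space, so convert from sRGB first.
delta_e = deltaE_ciede2000(rgb2lab(subject), rgb2lab(predicate))

p95 = np.percentile(delta_e, 95)
print(f"95th percentile dE00 = {p95:.2f}, passes <3: {p95 < 3}")
```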
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: The document does not explicitly state the numerical sample size for the test set (e.g., number of whole slide images). It mentions "multiple tiles at multiple magnification levels" and "all ROIs taken from images with varied tissue types and diagnoses" for image reproduction testing, and "a test image containing objects with known sizes" for measurement accuracy. This suggests a varied, though unspecified, set of images or data points were used for testing.
- Data Provenance: Not explicitly stated regarding country of origin. The study appears to be an internal non-clinical performance evaluation. The type of tissue used for image reproduction testing is "varied tissue types and diagnoses," implying real FFPE tissue samples, but it doesn't specify if they were retrospective or prospectively collected for the study.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- The document describes non-clinical performance testing (e.g., pixel-wise comparison, turnaround times, measurement accuracy, human factors testing). These tests typically rely on predefined objective metrics or reference standards rather than expert consensus on diagnostic interpretation.
- For the human factors/usability testing, the study "verified the HALO AP Dx user interface to be intuitive, safe, and effective for the range of intended users." This implies involvement of intended users (pathologists or similar professionals), but neither the specific number nor their qualifications are detailed.
4. Adjudication Method for the Test Set
- Adjudication methods (e.g., 2+1, 3+1) are typically relevant for studies where a "ground truth" is established by multiple human readers for diagnostic accuracy.
- Since this document focuses on technical performance and usability, rather than diagnostic accuracy (which would involve human pathologists making diagnoses with and without AI assistance), traditional adjudication methods were not applicable or described. The "ground truth" in these tests consists of objective technical parameters or usability feedback.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- No MRMC comparative effectiveness study was described. The study primarily focuses on showing the technical performance of HALO AP Dx is substantially equivalent to the viewing software component of the predicate device (NZViewMD), not on improving human reader performance or diagnostic accuracy with AI assistance.
- The device is a "viewer" intended to "aid the pathologist," not an AI-powered diagnostic algorithm. Therefore, an MRMC study comparing human readers with and without AI assistance to measure effect size is not relevant to this submission, which focuses on the viewing platform itself.
6. Standalone (Algorithm Only) Performance
- This device, HALO AP Dx, is a standalone software-only device in the sense that it functions as a digital pathology image viewer.
- However, it does not involve a diagnostic AI algorithm where "standalone performance" (e.g., sensitivity/specificity of the AI itself) would be measured. Its "performance" is about accurate image reproduction, speed, and usability of the viewing functions.
7. Type of Ground Truth Used
The ground truth used for the technical performance evaluations was objective and predefined:
- For Image Reproduction: A pixel-wise comparison to the predicate device's viewer (NZViewMD) was performed. The "ground truth" was essentially the image rendered by the predicate's software.
- For Turnaround Times: Time taken for specific actions (loading, panning) against predefined numerical thresholds (4 seconds, 3 seconds).
- For Measurement Accuracy: a "test image containing objects with known sizes." The known sizes were the ground truth (see the sketch after this list).
- For Human Factors: User feedback and task completion during usability tests against predefined criteria for intuitiveness, safety, and effectiveness.
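The measurement-accuracy check amounts to converting a pixel distance into physical units via the scan resolution and comparing against the known object size. A minimal sketch; the 0.23 µm/pixel resolution is our assumption for illustration, not a stated specification of the scanner:

```python
from math import hypot

MICRONS_PER_PIXEL = 0.23   # assumed scan resolution, for illustration only

def measured_length_um(p1, p2, um_per_px=MICRONS_PER_PIXEL):
    """Physical length of a line annotation drawn between two pixel points."""
    return hypot(p2[0] - p1[0], p2[1] - p1[1]) * um_per_px

# Hypothetical test object: a bar of known 100 um length in the test image.
known_um = 100.0
px = round(known_um / MICRONS_PER_PIXEL)
measured = measured_length_um((150, 200), (150 + px, 200))
error_pct = abs(measured - known_um) / known_um * 100
print(f"measured = {measured:.2f} um, error = {error_pct:.2f}%")
```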
8. Sample Size for the Training Set
- The document does not mention a "training set" in the context of machine learning. This is because HALO AP Dx is described as a "software only device intended as an aid to the pathologist to review, interpret and manage digital images," not an AI/ML-based diagnostic algorithm that would require a training set.
- Its core functionality is image display and manipulation, not pattern recognition or classification that would necessitate machine learning.
9. How the Ground Truth for the Training Set Was Established
- As no machine learning training set is mentioned or implied for this device's functionality, this question is not applicable.
(266 days)
The Aperio GT 450 DX is an automated digital slide creation and viewing system. The Aperio GT 450 DX is intended for in vitro diagnostic use as an aid to the pathologist to review and interpret digital images of surgical pathology slides prepared from formalin-fixed paraffin embedded (FFPE) tissue. The Aperio GT 450 DX is for creation and viewing of digital images of scanned glass slides that would otherwise be appropriate for manual visualization by conventional light microscopy.
Aperio GT 450 DX is comprised of the Aperio GT 450 DX scanner, which generates images in the Digital Imaging and Communications in Medicine (DICOM) and ScanScope Virtual Slide (SVS) file formats, the Aperio WebViewer DX viewer, and the displays. The Aperio GT 450 DX is intended to be used with the interoperable components specified in Table 1.
The Aperio GT 450 DX is not intended for use with frozen section, cytology, or non-FFPE hematopathology specimens. It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the validity of the interpretation of images obtained using the Aperio GT 450 DX.
The Aperio GT 450 DX is a Whole Slide Imaging (WSI) system, including image acquisition and image viewing components.
Aperio GT 450 DX is a WSI system comprised of an image acquisition subsystem known as the Aperio GT 450 DX scanner and Aperio WebViewer DX image viewing software which is accessed from a workstation and a display.
Image Acquisition Subsystem: The image acquisition subsystem of the Aperio GT 450 DX captures the information from surgical pathology glass slides prepared from FFPE tissue and saves it as a high-resolution digital image file. This subsystem is comprised of the Aperio GT 450 DX scanner and corresponding scanner configuration software, Aperio GT 450 Scanner Administration Manager DX (SAM DX).
The Aperio GT 450 DX scanner is a semi-automated benchtop brightfield WSI scanner that can achieve a scan speed of 32 seconds at the 40x scanning magnification for a 15 mm x 15 mm area. The scanner supports continuous glass-slide loading (up to 15 racks with a total 450-slide capacity), priority rack scanning, and automated image quality checks during image acquisition. The Aperio GT 450 DX scanner can be used with slide racks manufactured by Leica Biosystems Imaging, Inc. (Product No. 23RACKGT450) and other supported slide racks (e.g., the Prisma® 20-slide basket from Sakura Finetek USA, Inc.). The Aperio GT 450 DX scanner detects the racks once loaded in the scanner and scans the slides automatically. Users operate the scanner via a touchscreen interface.
The Aperio GT 450 DX scanner can save digital images in a unique Aperio ScanScope Virtual Slide (SVS) image format or Digital Imaging and Communications in Medicine (DICOM) image format. The digital images are sent to end-user-provided image storage attached to the scanner's local network, where they can be cataloged in image storage software (a non-medical device, external to the WSI), including an Image Management System (IMS), such as Aperio eSlide Manager, or a Picture Archiving and Communication System (PACS), such as Sectra PACS software.
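As context for the SVS format: it is readable outside Aperio's own software. A minimal sketch using the open-source OpenSlide Python bindings, a third-party tool that is not a component of the cleared device (the file name is hypothetical):

```python
import openslide

slide = openslide.OpenSlide("example.svs")          # hypothetical file
print("dimensions:", slide.dimensions)              # level-0 (full-res) size
print("pyramid levels:", slide.level_count)

# Read a 1024x1024 region at full resolution from the top-left corner.
region = slide.read_region(location=(0, 0), level=0, size=(1024, 1024))
region.convert("RGB").save("roi.png")               # read_region returns RGBA
slide.close()
```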
Aperio GT 450 SAM DX is centralized scanner management software external to the connected scanner(s). This software application enables IT implementation, including configuration, monitoring, and service access of multiple scanners from a single desktop client location. Aperio GT 450 SAM DX is installed on a customer-provided server that resides on the same network as the scanner(s) for image management.
Image Viewing Subsystem: The image viewing subsystem of the WSI device displays the digital images to the human reader. This subsystem comprises Aperio WebViewer DX image viewing software, a workstation PC, and monitor(s). Both the workstation and display are procured by the customer from commercial distributors and qualified for in vitro diagnostic use by Leica Biosystems Imaging, Inc. The Aperio WebViewer DX software is a web-based image viewer that enables users to perform Quality Control of images and to review and annotate digital images for routine diagnosis. The Aperio WebViewer DX also incorporates monitor display image validation checks, which provide the user with the ability to ensure the digital slide images are displayed as intended on their monitor, and that browser updates have not inadvertently affected the image display quality. Aperio WebViewer DX is installed on a server and accessed from an IMS (e.g., Aperio eSlide Manager) or a customer's Laboratory Information System (LIS) using compatible browsers.
Here's a summary of the acceptance criteria and the study proving the device meets them, based on the provided text:
Acceptance Criteria and Device Performance for Aperio GT 450 DX
1. Table of Acceptance Criteria and Reported Device Performance:
| Criterion Category | Specific Criterion | Acceptance Criteria | Reported Device Performance | Met? |
|---|---|---|---|---|
| Clinical Accuracy | | | | |
| Primary Endpoint | Difference in overall major discrepancy rates (WSIR diagnosis minus MSR diagnosis) for the full cohort (local and remote combined) compared to the reference diagnosis. | The upper bound of the 2-sided 95% confidence interval (CI) of the difference between the overall major discrepancy rates of WSIR diagnosis and MSR diagnosis when compared to the reference diagnosis shall be ≤4%. | Model 95% CI for Difference: (1.40%, 3.39%) The upper bound (3.39%) is ≤4%. | Yes |
| Secondary Endpoint | Major discrepancy rate of WSIR diagnosis relative to the reference diagnosis (full cohort). | The upper bound of the 2-sided 95% CI of the major discrepancy rate between the WSIR diagnosis and the reference diagnosis shall be ≤7%. | Model 95% CI for WSIRD Major Discrepancy Rate: (5.01%, 6.80%) The upper bound (6.80%) is ≤7%. | Yes |
| Secondary Endpoint | Difference in major discrepancy rates (WSIR diagnosis minus MSR diagnosis) for local WSI access compared to the reference diagnosis. | The upper bound of the 2-sided 95% CI of the difference between the major discrepancy rates of the WSIR diagnosis and MSR diagnosis when compared to the reference diagnosis shall be <4% for local WSI access. | Model 95% CI for Local Difference: (1.23%, 3.99%) The upper bound (3.99%) is <4%. | Yes |
| Secondary Endpoint | Difference in major discrepancy rates (WSIR diagnosis minus MSR diagnosis) for remote WSI access compared to the reference diagnosis. | The upper bound of the 2-sided 95% CI of the difference between the major discrepancy rates of the WSIR diagnosis and MSR diagnosis when compared to the reference diagnosis shall be <4% for remote WSI access. | Model 95% CI for Remote Difference: (0.78%, 3.57%) The upper bound (3.57%) is <4%. | Yes |
| Precision | | | | |
| Intra-system precision (overall) | Overall agreement within individual systems and across all systems. | The lower bounds of the 2-sided 95% CI of the overall agreements for each precision component (intra-system/site, intra-pathologist, and inter-pathologist) were ≥ 85%. | Overall Agreement Rate (Intra-System): 97.1% (95% CI: 95.8%, 98.3%) The lower bound (95.8%) is ≥ 85%. | Yes |
| Inter-system/site precision (overall) | Overall agreement between different systems/sites. | The lower bounds of the 2-sided 95% CI of the overall agreements for each precision component (intra-system/site, intra-pathologist, and inter-pathologist) were ≥ 85%. | Overall Agreement Rate (Inter-System/Site): 96.3% (95% CI: 94.9%, 97.6%) The lower bound (94.9%) is ≥ 85%. | Yes |
| Intra-pathologist precision (overall) | Overall agreement within individual pathologists. | The lower bounds of the 2-sided 95% CI of the overall agreements for each precision component (intra-system/site, intra-pathologist, and inter-pathologist) were ≥ 85%. | Overall Agreement Rate (Intra-Pathologist) for System 1 (single system data used for this test): 93.5% (95% CI: 92.4%, 94.5%) The lower bound (92.4%) is ≥ 85%. | Yes |
| Inter-pathologist precision (overall) | Overall agreement between different pathologists. | The lower bounds of the 2-sided 95% CI of the overall agreements for each precision component (intra-system/site, intra-pathologist, and inter-pathologist) were ≥ 85%. | Overall Agreement Rate (Inter-Pathologist) for System 1 (single system data used for this test): 91.7% (95% CI: 90.6%, 92.8%) The lower bound (90.6%) is ≥ 85%. | Yes |
Note: WSIR = Whole Slide Image Review; MSR = Light Microscope Slide Review
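Since all four clinical endpoints are one-sided bound checks on reported CIs, they can be verified mechanically from the table. A small sketch doing exactly that, using the model-based upper bounds reported above:

```python
# Reported model-based 95% CI upper bounds from the table, as fractions,
# paired with their acceptance bounds.
endpoints = {
    "full-cohort difference (WSIR - MSR)": (0.0339, 0.04),  # bound: <= 4%
    "WSIR major discrepancy rate":         (0.0680, 0.07),  # bound: <= 7%
    "local difference (WSIR - MSR)":       (0.0399, 0.04),  # bound: < 4%
    "remote difference (WSIR - MSR)":      (0.0357, 0.04),  # bound: < 4%
}

for name, (upper_ci, bound) in endpoints.items():
    # The local/remote criteria are strict (<); the others are inclusive (<=).
    strict = "local" in name or "remote" in name
    ok = upper_ci < bound if strict else upper_ci <= bound
    print(f"{name}: upper 95% CI {upper_ci:.2%} vs {bound:.0%} -> "
          f"{'PASS' if ok else 'FAIL'}")
```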
2. Sample Size for Test Set and Data Provenance:
- Clinical Accuracy Study (Full Cohort):
- WSIR Diagnosis: 3549 cases.
- MSR Diagnosis: 3631 cases.
- The "Full Cohort" combines local and remote cohorts, suggesting data collection may have occurred across multiple sites or access methods. The country of origin is not explicitly stated, but clinical studies for FDA submissions typically involve well-regulated clinical environments, often in the US. The study appears to be retrospective as it compares diagnoses from WSIR and MSR against "reference diagnoses" (original sign-out pathologic diagnoses), suggesting these reference diagnoses were pre-existing.
- Precision Study: The sample sizes vary by substudy:
- Intra-System Precision: 759 comparison pairs (737 pairwise agreements).
- Inter-System/Site Precision: 759 comparison pairs (731 pairwise agreements).
- Intra-Pathologist Precision: 2277 comparison pairs (2129 pairwise agreements).
- Inter-Pathologist Precision: 2277 comparison pairs (2089 pairwise agreements).
- The origin of the data for the precision study is not specified but involves "three independent systems at three different sites" for inter-system/site precision. The nature of the "cases" for precision implies pathologists reviewing slides/images, potentially from similar sources as the accuracy study.
3. Number of Experts and Qualifications for Ground Truth:
- The text refers to "the pathologists" in the context of the device's intended use and the "original sign-out pathologic diagnosis" as the reference diagnosis (ground truth) for the clinical accuracy study.
- The number of pathologists establishing the original sign-out diagnoses for the entire cohort of slides is not explicitly stated.
- The qualifications implied are "qualified pathologist" to ensure the validity of interpretation. Specific years of experience are not provided.
- For the precision study, it refers to "3 pathologists" in the intra- and inter-pathologist precision substudy, but their specific qualifications (e.g., years of experience) are not listed.
4. Adjudication Method for the Test Set:
- The clinical accuracy study established "reference diagnoses" as the "original sign-out pathologic diagnosis." This suggests that the ground truth was based on a primary diagnostic process, which may or may not inherently involve an adjudication method like 2+1 or 3+1 during its initial establishment. The document does not explicitly describe an adjudication method used to establish this "reference diagnosis" or the ground truth for the test set. It implies a single, established diagnostic truth.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:
- Yes, a comparative effectiveness study was done. The clinical accuracy study compared diagnoses made using the Aperio GT 450 DX (WSIR) against traditional light microscope slide review (MSR) diagnoses, both relative to a reference diagnosis. This involved multiple readers (pathologists) examining multiple cases.
- Effect Size of Human Readers Improvement with AI vs. without AI Assistance: This study does not describe an AI assistance scenario. The Aperio GT 450 DX is a Whole Slide Imaging (WSI) system for digital review of slides, not an AI diagnostic aid. It enables pathologists to interpret digital images, which is compared to their interpretation using a physical microscope. Therefore, there is no "AI assistance" component described for improving human reader performance. The study focuses on the equivalence of digital review to traditional microscopy.
6. Standalone (Algorithm Only) Performance:
- No, a standalone (algorithm only) performance study was not described. The Aperio GT 450 DX is a system intended for in vitro diagnostic use as an aid to the pathologist to review and interpret digital images. Its performance is evaluated with a human in the loop, comparing the pathologist's interpretation of digital images to their interpretation of glass slides. The device itself does not provide an automated diagnosis.
7. Type of Ground Truth Used:
- The ground truth for the clinical accuracy study was the "original sign-out pathologic diagnosis." This implies an expert consensus or established diagnostic conclusion based on standard pathological procedures. Other types like pathology reports, outcomes data, or a specific gold standard adjudication were not explicitly detailed beyond being the "original sign-out."
8. Sample Size for the Training Set:
- The document does not provide information regarding a specific training set or its sample size. This is a WSI system, not a device with a machine learning component that requires a labeled training set for its core function of image acquisition and display. The "tissue detection algorithms" are mentioned as part of the technical studies (whole slide tissue coverage), but details on how these were trained are not provided.
9. How the Ground Truth for the Training Set was Established:
- Since a training set and its sample size are not mentioned for the core function of the device, the method for establishing its ground truth is not provided in this document.
(265 days)
For In Vitro Diagnostic Use
Sectra Digital Pathology Module (3.3) is a software device intended for viewing and management of digital images of scanned surgical pathology slides prepared from formalin-fixed paraffin embedded (FFPE) tissue. It is an aid to the pathologist to review and interpret these digital images for the purposes of primary diagnosis.
Sectra Digital Pathology Module (3.3) is not intended for use with frozen section, cytology, or non-FFPE hematopathology specimens. It is the responsibility of the pathologist to employ appropriate procedures and safeguards to assure the validity of the interpretation of images using Sectra Digital Pathology Module (3.3).
Sectra Digital Pathology Module (3.3) is intended for use with Leica's Aperio GT 450 DX scanner and Dell U3223QE display, for viewing and management of the ScanScope Virtual Slide (SVS) and Digital Imaging and Communications in Medicine (DICOM) image formats.
The Sectra Digital Pathology Module (3.3) [henceforth referred to as DPAT (3.3)] is a digital slide viewing system. The DPAT (3.3) is intended for use together with the FDA-cleared Aperio GT 450 DX whole-slide imaging scanner and the Dell U3223QE display.
The DPAT (3.3) can only be used as an add-on module to Sectra PACS. Sectra PACS consists of Sectra Workstation IDS7 (K081469) and Sectra Core (identified as Class I exempt by the FDA in 2000). Sectra PACS is not part of the subject device. Sectra Workstation is the viewing workstation in which the Pathology Image Window runs; the Pathology Image Window is the client component of the subject device.
The system capabilities include:
- retrieving and displaying digital slides,
- support for remote intranet access over computer networks,
- tools for annotating digital slides and entering and editing metadata associated with digital slides, and
- displaying the scanned slide images for primary diagnosis by pathologists.
The subject device is designed to accurately display colors. The monitor is not part of the subject device.
Digital pathology images originating from WSI scanners other than those listed in the Indications for Use will be marked with the disclaimer "For Non-clinical Use Only" in the Pathology Image Window.
Image acquisition will be managed by the scanner which is not part of the subject device:
- The scanner delivers images with a tag in the file header that identifies the originating scanner.
- The scanner includes applications for controlling the scanning process and performing related quality control (e.g., ensuring that images are sharp and cover all tissue on the slide).
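The "For Non-clinical Use Only" behavior described above reduces to an allowlist check on the originating-scanner tag. A minimal sketch of that logic; the metadata key and function name are hypothetical, as Sectra's internal schema is not published in this summary:

```python
# Scanners cleared for clinical use with DPAT (3.3), per the Indications for Use.
CLEARED_SCANNERS = {"Aperio GT 450 DX"}

def viewer_banner(image_metadata: dict) -> str | None:
    """Return the disclaimer to overlay, or None for a cleared scanner."""
    scanner = image_metadata.get("originating_scanner", "unknown")
    if scanner in CLEARED_SCANNERS:
        return None
    return "For Non-clinical Use Only"

print(viewer_banner({"originating_scanner": "Aperio GT 450 DX"}))  # None
print(viewer_banner({"originating_scanner": "OtherScanner X1"}))   # disclaimer
```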
The DPAT (3.3) supports reading digital slides on a Dell U3223QE display monitor, enabling pathologists to make clinically relevant decisions analogous to those they make using a conventional microscope. Specifically, the system supports the pathologist in performing a primary diagnosis based on viewing the digital slide on a computer monitor. These capabilities are provided by the Pathology Image Window.
Here's a breakdown of the acceptance criteria and the study proving the device meets those criteria, based on the provided text:
Acceptance Criteria and Device Performance
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria (Endpoint) | Reported Device Performance | Met? |
|---|---|---|
| Primary Endpoint: The upper bound of the 2-sided 95% CI of the difference between the overall major discrepancy rates of WSIR diagnosis and MSR diagnosis when compared to the reference diagnosis shall be ≤4%. | The upper bound of the 95% CI of the estimated difference in major discrepancy rates was 1.69%. | Yes |
| Secondary Endpoint: The upper bound of the 2-sided 95% CI of the major discrepancy rate between WSIR diagnosis and the reference diagnosis shall be <7%. | The upper bound of the 95% CI for the overall estimated major discrepancy rate for WSIR diagnosis was 5.25%. | Yes |
| Pixel-Wise Comparison: All configurations were identical, i.e., <3 ΔE00 relative to the reference configuration SVS/UniView/Chrome. | Based on analysis of the testing data, the 4 specified configurations (DICOM/IDS7, DICOM/UniView/Chrome, SVS/IDS7, SVS/UniView/Edge) were identical, i.e., <3 ΔE00 relative to the reference configuration SVS/UniView/Chrome, and specifically ΔE = 0 for the subject device displaying the same scanned image in both DICOM and SVS formats, in IDS7, Edge, or Chrome. | Yes |
| Turnaround Time: | | |
| - When selecting a slide image, it should not take longer than 3 seconds until the image is fully loaded. | Reported to be "adequate for the intended use". Specific values not provided but "similar to or better than those of the predicate device." | Yes |
| - When panning the image (one quarter of the monitor) it should not take longer than 0.5 seconds until the image is fully loaded. | Reported to be "adequate for the intended use". Specific values not provided but "similar to or better than those of the predicate device." | Yes |
| Measurements: Measurement accuracy has been verified using a test image containing objects with known sizes. | Reported that measurement accuracy "has been verified," showing "almost identical results" to the predicate. | Yes |
Note: While specific numerical results for turnaround time and measurement accuracy are not provided, the document states they were found to be "adequate" and "accurate" respectively, meeting the implicit acceptance criteria for these performance aspects.
Study Details
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: 258 cases.
- Data Provenance: The document does not explicitly state the country of origin. The study was conducted at a "single site." It was a retrospective study, as the MSR diagnoses were "original sign-out diagnoses." The WSIR diagnoses were prospectively obtained using the device for the study.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- Number of Experts for Ground Truth: Not directly stated as "experts for ground truth," but 3 reading pathologists determined the MSR diagnoses (which formed a basis for comparison with the reference diagnosis) and 3 reading pathologists created the WSIR diagnoses. A minimum of two adjudicators independently assessed concordance for the WSIR diagnosis against the reference diagnosis.
- Qualifications of Experts: All were "pathologists." Further specific qualifications (e.g., years of experience, board certification) are not detailed in the provided text.
4. Adjudication Method for the Test Set
- Adjudication Method: Minimum of two adjudicators independently assessed concordance (concordant, minor discrepancy, major discrepancy) of the WSIR diagnosis against the reference diagnosis using predefined rules. Their concordance scores for the same case were compared to determine a consensus score for major discrepancy status. This represents a form of 2-reader adjudication with consensus. The document does not explicitly mention "adjudication of ground truth" but rather adjudication of the concordance between the device's output and the reference diagnosis.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Was an MRMC study done? Yes, effectively. The study design involved 3 reading pathologists (multi-reader) reviewing 258 cases (multi-case) using the device (WSIR diagnosis) and comparing it to their traditional light microscopy review (MSR diagnosis), both against a reference diagnosis. While not explicitly termed an "MRMC study" in the classic sense of comparing AI-assisted vs. unassisted human performance, it acts as a comparative effectiveness study demonstrating non-inferiority of the digital pathology system to traditional microscopy.
- Effect Size of Human Readers Improvement with AI vs. without AI Assistance: The study's primary endpoint was non-inferiority of the digital system (DPAT (3.3)-UniView) compared to traditional light microscopy. The estimated difference in major discrepancy rates between the two modalities (digital vs. microscope) when compared to the reference diagnosis was -0.01% (95% CI: -1.71% to 1.69%). This indicates that the digital system performed comparably to, or negligibly better than, light microscopy in terms of major discrepancy rates against a reference. It doesn't quantify improvement with AI assistance per se, but rather the non-inferiority of the digital viewing system (which is the device being cleared) to the traditional method.
6. Standalone (Algorithm Only Without Human-in-the-Loop) Performance
- Was a standalone performance study done? No, not in the sense of an AI algorithm making a diagnosis without human intervention. The device is a "software device intended for viewing and management of digital images... as an aid to the pathologist to review and interpret these digital images for the purposes of primary diagnosis." The study specifically evaluates "interpreting digital images... for the purposes of primary diagnosis" by a human pathologist using the device, not an automated diagnostic output.
7. Type of Ground Truth Used
- Type of Ground Truth: The study used "original sign-out diagnoses" made using light microscopy as the "reference diagnoses." The document clarifies that major discrepancy was defined as a "difference in diagnosis that resulted in a clinically important difference in patient management." While not explicitly stated as "pathology ground truth" established post-hoc, it strongly implies a consensus or definitive diagnosis used as the gold standard derived from clinical practice.
8. Sample Size for the Training Set
- Training Set Sample Size: The document does not mention the training set size for the device. The study described focuses on the performance evaluation of the final device (Sectra Digital Pathology Module 3.3) for clinical validation, not the development or training of any underlying AI or image processing models within the device.
9. How Ground Truth for the Training Set Was Established
- Ground Truth for Training Set: As no training set information is provided, the method for establishing its ground truth is also not mentioned.