Roche Digital Pathology Dx (VENTANA DP 200) is an automated digital slide creation, viewing and management system. Roche Digital Pathology Dx (VENTANA DP 200) is intended for in vitro diagnostic use as an aid to the pathologist to review and interpret digital images of scanned pathology slides prepared from formalin-fixed paraffin-embedded (FFPE) tissue. Roche Digital Pathology Dx (VENTANA DP 200) is not intended for use with frozen section, cytology, or non-FFPE hematopathology specimens. Roche Digital Pathology Dx (VENTANA DP 200) is for creation and viewing of digital images of scanned glass slides that would otherwise be appropriate for manual visualization by conventional light microscopy.
Roche Digital Pathology Dx (VENTANA DP 200), hereinafter referred to as Roche Digital Pathology Dx, is a whole slide imaging (WSI) system. It is an automated digital slide creation, viewing, and management system intended to aid pathologists in generating, reviewing, and interpreting digital images of surgical pathology slides that would otherwise be appropriate for manual visualization by conventional light microscopy. The Roche Digital Pathology Dx system comprises the following components:
- VENTANA DP 200 slide scanner
- Roche uPath enterprise software 1.1.1 (hereinafter, "uPath")
- ASUS PA248QV display
The VENTANA DP 200 slide scanner is a bright-field digital pathology scanner that accommodates loading and scanning of up to 6 standard slides. The scanner uses a high-resolution 20x objective and can scan at both 20x and 40x magnification. With its uniquely designed optics and scanning methods, the VENTANA DP 200 scanner enables users to capture sharp, high-resolution digital images of stained tissue specimens on glass slides. The scanner features automatic detection of the tissue specimen on the slide, automated 1D and 2D barcode reading, and selectable volume scanning (3 to 15 focus layers). It also integrates color profiling so that images produced from scanned slides carry a color-managed International Color Consortium (ICC) profile. VENTANA DP 200 image files are generated in a proprietary format (BIF) and can be uploaded to an image management system (IMS), such as the one provided with Roche uPath enterprise software.
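The paragraph above notes that each scanned image carries an embedded ICC profile. As a hypothetical illustration of what a downstream viewer can do with such a profile, the sketch below uses Pillow to convert an exported image tile from its embedded profile to sRGB for display. The file names, and the assumption that an exported tile carries an embedded profile, are illustrative only, since the BIF format itself is proprietary.

```python
import io

from PIL import Image, ImageCms

# Hypothetical tile exported from a scanned slide; the embedded ICC profile
# (if present) travels in the image metadata.
img = Image.open("slide_tile.tif")
icc_bytes = img.info.get("icc_profile")

if icc_bytes:
    # Convert from the scanner's color space to sRGB for on-screen review.
    src_profile = ImageCms.ImageCmsProfile(io.BytesIO(icc_bytes))
    srgb_profile = ImageCms.createProfile("sRGB")
    img = ImageCms.profileToProfile(img, src_profile, srgb_profile,
                                    outputMode="RGB")

img.save("slide_tile_srgb.png")
```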
Roche uPath enterprise software (uPath), a component of the Roche Digital Pathology Dx system, is a web-based image management and workflow application. It can be accessed on a Windows workstation using Google Chrome or Microsoft Edge. uPath enables laboratories to manage their workflow from the time a digital slide image is produced and acquired by a VENTANA slide scanner through subsequent processes including, but not limited to, review of the digital image on the monitor, analysis, and reporting of results. The software incorporates specific functions for pathologists, laboratory histology staff, workflow coordinators, and laboratory administrators.
The following sections summarize the acceptance criteria and the studies demonstrating that the Roche Digital Pathology Dx (VENTANA DP 200) meets them for FDA 510(k) clearance.
1. Table of Acceptance Criteria and Reported Device Performance
The core clinical acceptance criterion for the Roche Digital Pathology Dx system was non-inferiority of digital read (DR) accuracy compared to manual read (MR) accuracy.
Acceptance Criteria for Clinical Accuracy:
| Acceptance Criterion (Primary Objective) | Reported Device Performance |
|---|---|
| The lower bound of a 2-sided 95% confidence interval for the difference in accuracy (DR - MR) had to be greater than or equal to -4%. | Observed: overall agreement rate DR = 92.00%, MR = 92.61%; DR - MR difference = -0.61% (95% CI: -1.59%, 0.35%). Model (generalized linear mixed model): estimated agreement rates DR = 91.54%, MR = 92.16%; DR - MR difference = -0.62% (95% CI: -1.50%, 0.26%). |

Result: The lower limit of the 95% confidence interval for DR - MR (-1.59% observed, -1.50% model-based) was greater than the pre-specified non-inferiority margin of -4%. The DR modality was therefore demonstrated to be non-inferior to the MR modality.
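To make the non-inferiority check concrete, here is a minimal sketch (not part of the submission) that recomputes a confidence interval for the difference in agreement rates from the reported point estimates and compares its lower bound to the -4% margin. The simple Wald formula and the treatment of the total slide count as independent reads per modality are illustrative assumptions; the submission's model-based CI came from a generalized linear mixed model that accounts for paired reads, readers, and sites, which is why the reported interval is tighter than this naive one.

```python
import math

# Reported overall agreement rates from the clinical accuracy study.
p_dr = 0.9200    # digital read (DR)
p_mr = 0.9261    # manual read (MR)
n = 3259         # total slides (treated here as independent reads per modality)

margin = -0.04   # pre-specified non-inferiority margin (-4%)

# Naive Wald 95% CI for the difference of two proportions.
diff = p_dr - p_mr
se = math.sqrt(p_dr * (1 - p_dr) / n + p_mr * (1 - p_mr) / n)
lower, upper = diff - 1.96 * se, diff + 1.96 * se

print(f"DR - MR = {diff:+.2%}, 95% CI: ({lower:+.2%}, {upper:+.2%})")
print("non-inferior" if lower >= margin else "non-inferiority not demonstrated")
```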
Acceptance Criteria for Analytical Performance (Precision):
| Acceptance Criterion | Reported Device Performance |
|---|---|
| The lower bounds of the 2-sided 95% CIs for all co-primary endpoints (overall percent agreement [OPA] point estimates for between-site/system, between-day/within-system, and between-reader agreement) had to be at least 85%. | Between-site/system OPA: 89.3% (95% CI: 85.8%, 92.4%). Between-days/within-system OPA: 90.3% (95% CI: 87.1%, 93.2%). Between-readers OPA: 90.1% (95% CI: 86.6%, 93.0%). |

Result: For all co-primary analyses, the lower bounds of the 95% CIs were >85%, demonstrating acceptable precision.
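As a sketch of how the >=85% lower-bound criterion can be verified, the snippet below computes an exact (Clopper-Pearson) 95% CI for an OPA point estimate over the 207 study ROIs. Treating the per-ROI agreements as independent binomial trials is a simplifying assumption; the study's CIs were computed with methods suited to its nested site/system/reader design, so the numbers will not match exactly.

```python
from scipy.stats import beta

def clopper_pearson(successes: int, trials: int, alpha: float = 0.05):
    """Exact two-sided (1 - alpha) binomial confidence interval."""
    if successes == 0:
        lo = 0.0
    else:
        lo = beta.ppf(alpha / 2, successes, trials - successes + 1)
    if successes == trials:
        hi = 1.0
    else:
        hi = beta.ppf(1 - alpha / 2, successes + 1, trials - successes)
    return lo, hi

n_rois = 207                    # study ROIs in the precision analysis
opa_point = 0.903               # e.g., between-days/within-system OPA
agreements = round(opa_point * n_rois)

lo, hi = clopper_pearson(agreements, n_rois)
print(f"OPA = {agreements}/{n_rois} = {agreements / n_rois:.1%}, "
      f"95% CI: ({lo:.1%}, {hi:.1%})")
print("meets >=85% criterion" if lo >= 0.85 else "fails criterion")
```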
2. Sample Size Used for the Test Set and Data Provenance
- Clinical Accuracy Study (Test Set):
  - Sample Size: 2,047 cases (3,259 slides in total), spanning multiple organ and tissue types.
  - Data Provenance: Multi-center study conducted at four sites. The document does not explicitly state the country of origin, but as an FDA submission from a sponsor based in Tucson, Arizona, the data are presumably from the United States. Cases were retrospective, pre-screened from archived specimens in the study sites' clinical databases, with at least one year between the sign-out diagnosis date and the start of the study.
- Precision Study (Test Set):
  - Sample Size: 69 study cases (slides), each with 3 regions of interest (ROIs), for a total of 207 study ROIs. An additional 12 "wild card" cases (36 wild-card ROIs) were included to reduce recall bias but were excluded from the statistical analysis.
  - Data Provenance: Conducted at 3 external pathology laboratories (study sites); the country of origin is not explicitly stated. Cases consisted of H&E-stained archival slides of FFPE human tissue.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Clinical Accuracy Study (establishing the reference / sign-out diagnosis):
  - Initial Verification: Two Screening Pathologists at each study site pre-screened cases against the inclusion/exclusion criteria. The first Screening Pathologist reviewed the H&E and ancillary stained slides by manual microscopy to identify representative slides and confirmed the diagnosis against the sign-out report; a second Screening Pathologist then verified the sign-out diagnosis data.
  - Qualifications: The document refers only to "qualified pathologists"; the study design implies experienced professionals engaged in routine diagnostic pathology.
- Precision Study (establishing the reference feature):
  - The "primary feature for that case" served as the reference. How this ground truth was established is not detailed beyond being "protocol-specified," although Screening Pathologists selected the ROIs for each slide. The reference for the presence of the 23 specified histopathologic features was therefore implicitly expert-derived, based on consensus or previously established pathology.
4. Adjudication Method for the Test Set (Clinical Accuracy Study)
- Method: A 2+1 (escalating to 2+1+panel) adjudication scheme was used:
  - For each Reading Pathologist's diagnosis, two Adjudication Pathologists (blinded to site, Reading Pathologist, and reading modality) independently assessed agreement with the original sign-out diagnosis (the reference diagnosis).
  - If the two adjudicators disagreed, a third Adjudication Pathologist reviewed the case to reach a majority consensus.
  - If all three adjudicators differed, consensus was reached in an adjudication panel meeting of the same three Adjudication Pathologists.
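A minimal sketch of the 2+1+panel escalation logic described above; the verdict labels and callable stubs are hypothetical, since the submission does not specify the adjudicators' response categories:

```python
from collections import Counter
from typing import Callable

Verdict = str  # e.g., "concordant", "minor discordance", "major discordance"

def adjudicate(first: Verdict, second: Verdict,
               third: Callable[[], Verdict],
               panel: Callable[[], Verdict]) -> Verdict:
    """Resolve a case by 2+1+panel escalation: two blinded adjudicators
    decide if they agree; a third breaks ties by majority; a panel of the
    same three decides when all three differ."""
    if first == second:            # the two initial adjudicators agree
        return first
    tiebreak = third()             # escalate to a third adjudicator
    verdict, count = Counter([first, second, tiebreak]).most_common(1)[0]
    if count >= 2:                 # majority of the three
        return verdict
    return panel()                 # all three differ: panel consensus

# Illustrative usage with stubbed adjudicators:
print(adjudicate("concordant", "major discordance",
                 third=lambda: "concordant",
                 panel=lambda: "minor discordance"))  # -> "concordant"
```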
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
- Yes, an MRMC comparative effectiveness study was done: the Clinical Accuracy Study.
- Effect Size of Human Reader Improvement with AI vs. Without AI Assistance:
  - The study was designed as a non-inferiority comparison showing that viewing digital images is not worse than manual microscopy. The "aid" here is the WSI system itself, not an AI algorithm, so the study did not quantify human reader improvement with versus without AI assistance.
  - Instead, it compared the diagnostic accuracy of pathologists using the digital system (DR) versus traditional microscopy (MR), each against the reference sign-out diagnosis.
  - The observed overall agreement rate was 92.00% for DR and 92.61% for MR, a difference (DR - MR) of -0.61%. This suggests a slight decrease in agreement when using DR, but the result was statistically non-inferior (i.e., not worse than MR within the pre-defined margin). The submission states, "These model results failed to show any statistically significant difference between the 2 reading modalities."
  - The effect size, therefore, is no statistically significant difference in diagnostic agreement between digital and manual review: non-inferiority rather than improvement.
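Formally, the comparison in this study follows the standard non-inferiority formulation below, with margin $\delta = 4\%$; this restates the CI-based criterion reported in the document rather than adding anything new to it:

$$
H_0:\ \pi_{DR} - \pi_{MR} \le -\delta
\qquad\text{vs.}\qquad
H_1:\ \pi_{DR} - \pi_{MR} > -\delta,
$$

with non-inferiority concluded when the lower bound of the two-sided 95% CI for $\pi_{DR} - \pi_{MR}$ exceeds $-\delta = -4\%$.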
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
- No, a standalone (algorithm-only) performance study was not done for diagnostic accuracy.
- The device is a "Whole slide imaging system" intended "as an aid to the pathologist to review and interpret digital images." Its performance evaluation (clinical accuracy) was explicitly designed as a human-in-the-loop study (pathologists using the system for diagnosis).
- However, technical studies (e.g., color reproducibility, spatial resolution, focusing, whole-slide tissue coverage, stitching error) assessed specific algorithm and system components as standalone technical-performance evaluations. For instance, the "Image Processing Software" section describes algorithms for exposure control, white balance, color correction, and the like, and "Image Composition" discusses scanning methods. These are technical assessments of the system's output quality rather than of diagnostic accuracy.
7. The Type of Ground Truth Used
- Clinical Accuracy Study: The ground truth for diagnostic accuracy was the original sign-out pathologic diagnosis rendered at the study sites using an optical (light) microscope, verified by two screening pathologists. This represents expert consensus/established clinical diagnosis.
- Precision Study: The ground truth for feature detection was the "reference primary feature for that case." This was established by "Screening Pathologists" who selected the ROIs containing these features, implying expert-identified features.
8. The Sample Size for the Training Set
The provided document describes studies for device validation and clearance, not the development and training of a machine learning model, so no training-set sample size is given. The Roche Digital Pathology Dx system is described as a whole slide imaging (WSI) system composed of a scanner, software, and display, with no mention of AI/ML components for automated diagnosis or feature detection that would require a training set. The "aid" discussed in section 5 is the digital WSI system itself, not an AI algorithm performing diagnosis independently.
9. How the Ground Truth for the Training Set Was Established
As no training set is described (the device is a WSI system for human review, not an autonomous AI diagnostic algorithm), this information is not applicable.
§ 864.3700 Whole slide imaging system.
(a) Identification. The whole slide imaging system is an automated digital slide creation, viewing, and management system intended as an aid to the pathologist to review and interpret digital images of surgical pathology slides. The system generates digital images that would otherwise be appropriate for manual visualization by conventional light microscopy.

(b) Classification. Class II (special controls). The special controls for this device are:

(1) Premarket notification submissions must include the following information:
(i) The indications for use must specify the tissue specimen that is intended to be used with the whole slide imaging system and the components of the system.
(ii) A detailed description of the device and bench testing results at the component level, including for the following, as appropriate:
(A) Slide feeder;
(B) Light source;
(C) Imaging optics;
(D) Mechanical scanner movement;
(E) Digital imaging sensor;
(F) Image processing software;
(G) Image composition techniques;
(H) Image file formats;
(I) Image review manipulation software;
(J) Computer environment; and
(K) Display system.
(iii) Detailed bench testing and results at the system level, including for the following, as appropriate:
(A) Color reproducibility;
(B) Spatial resolution;
(C) Focusing test;
(D) Whole slide tissue coverage;
(E) Stitching error; and
(F) Turnaround time.
(iv) Detailed information demonstrating the performance characteristics of the device, including, as appropriate:
(A) Precision to evaluate intra-system and inter-system precision using a comprehensive set of clinical specimens with defined, clinically relevant histologic features from various organ systems and diseases. Multiple whole slide imaging systems, multiple sites, and multiple readers must be included.
(B) Reproducibility data to evaluate inter-site variability using a comprehensive set of clinical specimens with defined, clinically relevant histologic features from various organ systems and diseases. Multiple whole slide imaging systems, multiple sites, and multiple readers must be included.
(C) Data from a clinical study to demonstrate that viewing, reviewing, and diagnosing digital images of surgical pathology slides prepared from tissue slides using the whole slide imaging system is non-inferior to using an optical microscope. The study should evaluate the difference in major discordance rates between manual digital (MD) and manual optical (MO) modalities when compared to the reference (e.g., main sign-out diagnosis).

(D) A detailed human factor engineering process must be used to evaluate the whole slide imaging system user interface(s).
(2) Labeling compliant with 21 CFR 809.10(b) must include the following:
(i) The intended use statement must include the information described in paragraph (b)(1)(i) of this section, as applicable, and a statement that reads, “It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the validity of the interpretation of images obtained using this device.”
(ii) A description of the technical studies and the summary of results, including those that relate to paragraphs (b)(1)(ii) and (iii) of this section, as appropriate.
(iii) A description of the performance studies and the summary of results, including those that relate to paragraph (b)(1)(iv) of this section, as appropriate.
(iv) A limiting statement that specifies that pathologists should exercise professional judgment in each clinical situation and examine the glass slides by conventional microscopy if there is doubt about the ability to accurately render an interpretation using this device alone.