Search Results
Found 2 results
510(k) Data Aggregation
(92 days)
Roche Digital Pathology Dx
Roche Digital Pathology Dx is an automated digital slide creation, viewing and management system. Roche Digital Pathology Dx is intended for in vitro diagnostic use as an aid to the pathologist to review and interpret digital images of scanned pathology slides prepared from formalin-fixed paraffin-embedded (FFPE) tissue. Roche Digital Pathology Dx is not intended for use with frozen section, cytology, or non-FFPE hematopathology specimens. Roche Digital Pathology Dx is for creation and viewing of digital images of scanned glass slides that would otherwise be appropriate for manual visualization by conventional light microscopy.
Roche Digital Pathology Dx is composed of VENTANA DP 200 slide scanner, VENTANA DP 600 slide scanner, Roche uPath enterprise software, and ASUS PA248QV display. It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the validity of the interpretation of images obtained using Roche Digital Pathology Dx.
Roche Digital Pathology Dx (hereinafter referred to as RDPD) is a whole slide imaging (WSI) system. It is an automated digital slide creation, viewing, and management system intended to aid pathologists in generating, reviewing, and interpreting digital images of surgical pathology slides that would otherwise be appropriate for manual visualization by conventional light microscopy. The RDPD system is composed of the following components:
- VENTANA DP 200 slide scanner,
- VENTANA DP 600 slide scanner,
- Roche uPath enterprise software, and
- ASUS PA248QV display.
The VENTANA DP 600 slide scanner has a total capacity of 240 slides, loaded via 40 trays of 6 slides each. The VENTANA DP 600 and VENTANA DP 200 slide scanners use the same Image Acquisition Unit.
Both the VENTANA DP 200 and DP 600 slide scanners are bright-field digital pathology scanners that accommodate loading and scanning of 6 and 240 standard glass microscope slides, respectively. The scanners each have a high-numerical-aperture Plan Apochromat 20x objective and are capable of scanning at both 20x and 40x magnifications. The scanners feature automatic detection of the tissue specimen on the glass slide, automated 1D and 2D barcode reading, and selectable volume scanning (3 to 15 focus layers). An International Color Consortium (ICC) color profile is embedded in each scanned slide image for color management. The scanned slide images are generated in a proprietary file format, BioImagene Image File (BIF), that can be uploaded to the uPath Image Management System (IMS) provided with the Roche uPath enterprise software.
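For orientation, an embedded ICC profile can be inspected once a scanned image has been exported to a standard format. Below is a minimal Python sketch using Pillow, assuming a hypothetical TIFF export (the proprietary BIF format is not directly readable here):

```python
# Minimal sketch: check for an embedded ICC color profile in an exported
# slide image. The file name is hypothetical; BIF itself is proprietary.
from io import BytesIO
from PIL import Image, ImageCms

img = Image.open("scanned_slide.tiff")    # hypothetical standard-format export
icc_bytes = img.info.get("icc_profile")   # raw embedded ICC profile, if any

if icc_bytes:
    profile = ImageCms.ImageCmsProfile(BytesIO(icc_bytes))
    print("Embedded profile:", ImageCms.getProfileDescription(profile))
else:
    print("No ICC profile embedded in this image.")
```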
Roche uPath enterprise software (uPath), a component of Roche Digital Pathology Dx system, is a web-based image management and workflow software application. uPath enterprise software can be accessed on a Windows workstation using the Google Chrome or Microsoft Edge web browser. The user interface of uPath software enables laboratories to manage their workflow from the time the whole slide image is produced and acquired by VENTANA DP 200 and/or DP 600 slide scanners through the subsequent processes, such as review of the digital image on the monitor screen and reporting of results. The uPath software incorporates specific functions for pathologists, laboratory histology staff, workflow coordinators, and laboratory administrators.
The provided document is a 510(k) summary for the "Roche Digital Pathology Dx" system (K242783), which is a modification of a previously cleared device (K232879). This modification primarily involves adding a new slide scanner model, VENTANA DP 600, to the existing system. The document asserts that due to the identical Image Acquisition Unit (IAU) between the new DP 600 scanner and the previously cleared DP 200 scanner, the technical performance assessment from the predicate device is fully applicable. Therefore, the information provided below will primarily refer to the studies and acceptance criteria from the predicate device that are deemed applicable to the current submission due to substantial equivalence.
Here's an analysis based on the provided text, focusing on the acceptance criteria and the study that proves the device meets them:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly present a discrete "acceptance criteria" table with corresponding numerical performance metrics for the current submission (K242783). Instead, it states that the performance characteristics data collected on the VENTANA DP 200 slide scanner (the predicate device) are representative of the VENTANA DP 600 slide scanner performance because both scanners use the same Image Acquisition Unit (IAU). The table below lists the "Technical Performance Assessment (TPA) Sections" as presented in the document (Table 3), which serve as categories of performance criteria that were evaluated for the predicate device and are considered applicable to the current device. The reported performance for these sections is summarized as "Information was provided in K232879" for the DP 600, indicating that the past performance data is considered valid.
TPA Section (Acceptance Criteria Category) | Reported Device Performance (for DP 600 scanner) |
---|---|
Components: Slide Feeder | No double-wide slide tray compatibility; new FMEA provided in K242783. (Predicate: Information on configuration, user interaction, FMEA) |
Components: Light Source | Information was provided in K232879. (Predicate: Descriptive info on lamp/condenser, spectral distribution verified) |
Components: Imaging Optics | Information was provided in K232879. (Predicate: Optical schematic, descriptive info, testing for irradiance, distortions, aberrations) |
Components: Focusing System | Information was provided in K232879. (Predicate: Schematic, description, optical system, cameras, algorithm) |
Components: Mechanical Scanner Movement | Same, except no double-wide slide tray compatibility; replaced references and new FMEA items provided in K242783. (Predicate: Information/specs on stage, movement, FMEA, repeatability) |
Components: Digital Imaging Sensor | Information was provided in K232879. (Predicate: Information/specs on sensor type, pixels, responsivity, noise, data, testing) |
Components: Image Processing Software | Information was provided in K232879. (Predicate: Information/specs on exposure, white balance, color correction, subsampling, pixel correction) |
Components: Image Composition | Information was provided in K232879. (Predicate: Information/specs on scanning method, speed, Z-axis planes, analysis of image composition) |
Components: Image File Formats | Information was provided in K232879. (Predicate: Information/specs on compression, ratio, file format, organization) |
Image Review Manipulation Software | Information was provided in K232879. (Predicate: Information/specs on panning, zooming, Z-axis displacement, comparison, image enhancement, annotation, bookmarks) |
Computer Environment | Select upgrades of sub-components & specifications. (Predicate: Information/specs on hardware, OS, graphics, color management, display interface) |
Display | Information was provided in K232879. (Predicate: Information/specs on pixel density, aspect ratio, display surface, and other display characteristics; performance testing for user controls, spatial resolution, pixel defects, artifacts, temporal response, luminance, uniformity, gray tracking, color scale, color gamut) |
System-level Assessments: Color Reproducibility | Information was provided in K232879. (Predicate: Test data for color reproducibility) |
System-level Assessments: Spatial Resolution | Information was provided in K232879. (Predicate: Test data for composite optical performance) |
System-level Assessments: Focusing Test | Information was provided in K232879. (Predicate: Test data for technical focus quality) |
System-level Assessments: Whole Slide Tissue Coverage | Information was provided in K232879. (Predicate: Test data for tissue detection algorithms and inclusion of tissue in digital image file) |
System-level Assessments: Stitching Error | Information was provided in K232879. (Predicate: Test data for stitching errors and artifacts) |
System-level Assessments: Turnaround Time | Information was provided in K232879. (Predicate: Test data for turnaround time) |
User Interface | Identical workflow, replacement of new scanner component depiction. (Predicate: Information on user interaction, human factors/usability validation) |
Labeling | Same content, replaced references. (Predicate: Compliance with 21 CFR Parts 801 and 809, special controls) |
Quality Control | Same content, replaced references. (Predicate: QC activities by user, lab technician, pathologist prior to/after scanning) |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document explicitly states that the technical performance assessment data was "collected on the VENTANA DP 200 slide scanner" (the predicate device). However, the specific sample sizes for these technical studies (e.g., number of slides used for focusing tests, stitching error analysis, etc.) are not detailed in this summary. The data provenance (country of origin, retrospective/prospective) for these underlying technical studies from K232879 is also not provided in this document.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
This section is not applicable or not provided by the document. The studies mentioned are primarily technical performance assessments related to image quality, system components, and usability, rather than diagnostic accuracy studies requiring expert pathologist interpretation for ground truth. For the predicate device, it mentions "a qualified pathologist" is responsible for interpretation, but this refers to the end-user clinical use, not the establishment of ground truth for device validation.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
This section is not applicable or not provided by the document. As noted above, the document details technical performance studies rather than diagnostic performance studies that would typically involve multiple readers and adjudication methods for diagnostic discrepancies.
5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI versus without AI assistance
No MRMC comparative effectiveness study is mentioned in the provided text. The device, "Roche Digital Pathology Dx," is described as a "digital slide creation, viewing and management system" intended "as an aid to the pathologist to review and interpret digital images." It is a Whole Slide Imaging (WSI) system, and the submission is focused on demonstrating the technical equivalence of a new scanner component, not on evaluating AI assistance or its impact on human reader performance.
6. If a standalone study (i.e., algorithm-only, without human-in-the-loop performance) was done
No standalone algorithm performance study is mentioned. The device is a WSI system for "aid to the pathologist to review and interpret digital images," implying a human-in-the-loop system. The document does not describe any specific algorithms intended for automated diagnostic interpretation or analysis in a standalone capacity.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
For the technical performance studies referenced from the predicate device (K232879), the "ground truth" would be established by physical measurements and engineering specifications for aspects like spatial resolution, color reproducibility, focusing quality, tissue coverage, and stitching errors, rather than clinical outcomes or diagnostic pathology. For instance, color reproducibility would be assessed against a known color standard, and spatial resolution against resolution targets. The document does not explicitly state the exact types of ground truth used for each technical assessment but refers to "test data to evaluate" these characteristics.
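As a concrete illustration of what "assessed against a known color standard" typically means, the sketch below computes a CIE76 color difference (Delta E, the Euclidean distance in CIELAB space) between hypothetical measured and reference patch values; this is a generic method, not the specific metric or tolerance used in K232879:

```python
# Illustrative only: CIE76 color difference between a measured patch and its
# reference value. Values and any pass/fail threshold are hypothetical.
import math

def delta_e_76(lab_ref, lab_meas):
    """CIE76 color difference: Euclidean distance between two CIELAB triples."""
    return math.sqrt(sum((r - m) ** 2 for r, m in zip(lab_ref, lab_meas)))

reference = (53.2, 80.1, 67.2)   # hypothetical L*a*b* for a red target patch
measured  = (52.8, 79.0, 66.5)   # hypothetical scanner-rendered value

print(f"Delta E (CIE76) = {delta_e_76(reference, measured):.2f}")
# Smaller Delta E means closer color reproduction to the standard.
```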
8. The sample size for the training set
Not applicable/not provided. The document describes a WSI system, not an AI/ML-based diagnostic algorithm that would typically require a training set. The specific "Image Acquisition Unit" components (hardware and software for pixel pipeline) are stated to be "functionally identical" to the predicate, implying established design rather than iterative machine learning.
9. How the ground truth for the training set was established
Not applicable/not provided. As there is no mention of a training set for an AI/ML algorithm, the method for establishing its ground truth is not discussed.
(270 days)
Roche Digital Pathology Dx (VENTANA DP 200)
Roche Digital Pathology Dx (VENTANA DP 200) is an automated digital slide creation, viewing and management system. Roche Digital Pathology Dx (VENTANA DP 200) is intended for in vitro diagnostic use as an aid to the pathologist to review and interpret digital images of scanned pathology slides prepared from formalin-fixed paraffin-embedded (FFPE) tissue. Roche Digital Pathology Dx (VENTANA DP 200) is not intended for use with frozen section, cytology, or non-FFPE hematopathology specimens. Roche Digital Pathology Dx (VENTANA DP 200) is for creation and viewing of digital images of scanned glass slides that would otherwise be appropriate for manual visualization by conventional light microscopy.
Roche Digital Pathology Dx (VENTANA DP 200), hereinafter referred to as Roche Digital Pathology Dx, is a whole slide imaging (WSI) system. It is an automated digital slide creation, viewing, and management system intended to aid pathologists in generating, reviewing, and interpreting digital images of surgical pathology slides that would otherwise be appropriate for manual visualization by conventional light microscopy. The Roche Digital Pathology Dx system is composed of the following components:
- VENTANA DP 200 slide scanner
- Roche uPath enterprise software 1.1.1 (hereinafter, "uPath")
- ASUS PA248QV display
The VENTANA DP 200 slide scanner is a bright-field digital pathology scanner that accommodates loading and scanning of up to 6 standard slides. The scanner features a high-resolution 20x objective and can scan at both 20x and 40x magnification. With its uniquely designed optics and scanning methods, the VENTANA DP 200 scanner enables users to capture sharp, high-resolution digital images of stained tissue specimens on glass slides. The scanner features automatic detection of the tissue specimen on the slide, automated 1D and 2D barcode reading, and selectable volume scanning (3 to 15 focus layers). It also integrates color profiling so that images produced from scanned slides carry a color-managed International Color Consortium (ICC) profile. VENTANA DP 200 image files are generated in a proprietary format (BIF) and can be uploaded to an Image Management System (IMS), such as the one provided with Roche uPath enterprise software.
Roche uPath enterprise software (uPath), a component of Roche Digital Pathology system, is a web-based image management and workflow software application. uPath enterprise software can be accessed on a Windows workstation using Google Chrome or Microsoft Edge. The interface of uPath software enables laboratories to manage their workflow from the time the digital slide image is produced and acquired by a VENTANA slide scanner through the subsequent processes including, but not limited to, review of the digital image on the monitor screen, analysis, and reporting of results. The software incorporates specific functions for pathologists, laboratory histology staff, workflow coordinators, and laboratory administrators.
The provided text describes the acceptance criteria and the study that proves the Roche Digital Pathology Dx (VENTANA DP 200) device meets these criteria for FDA 510(k) clearance.
Here's a breakdown of the requested information:
1. Table of Acceptance Criteria and Reported Device Performance
The core clinical acceptance criterion for the Roche Digital Pathology Dx system was non-inferiority of digital read (DR) accuracy compared to manual read (MR) accuracy.
Acceptance Criteria for Clinical Accuracy:
Acceptance Criterion (Primary Objective) | Reported Device Performance |
---|---|
Lower bound of a 2-sided 95% confidence interval for the difference in accuracy (DR - MR) had to be greater than or equal to -4%. | Observed: overall agreement rate DR = 92.00%, MR = 92.61%; DR-MR difference in agreement rate -0.61% (95% CI: -1.59%, 0.35%). Model (generalized linear mixed model): estimated agreement rates DR = 91.54%, MR = 92.16%; DR-MR difference in agreement rate -0.62% (95% CI: -1.50%, 0.26%). |

Result: The lower limit of the 95% confidence interval for DR - MR (-1.59% observed, -1.50% model) was greater than the pre-specified non-inferiority margin of -4%. Therefore, the DR modality was demonstrated to be non-inferior to the MR modality.
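For orientation only, here is a minimal Python sketch of this non-inferiority check, using a simple unpaired Wald interval and hypothetical read counts (the actual study used a generalized linear mixed model on paired reads, so the reported interval is not reproduced exactly):

```python
# Simplified sketch: the lower bound of the 95% CI for the (DR - MR)
# agreement-rate difference must be >= -4%. Counts are hypothetical;
# the study itself used a GLMM on paired reads.
import math

def wald_diff_ci(x1, n1, x2, n2, z=1.96):
    """Unpaired Wald 95% CI for a difference of two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

N = 10_000                      # hypothetical number of reads per modality
dr_agree = round(0.9200 * N)    # digital-read agreements (rate from summary)
mr_agree = round(0.9261 * N)    # manual-read agreements (rate from summary)

diff, lo, hi = wald_diff_ci(dr_agree, N, mr_agree, N)
print(f"DR - MR = {diff:+.4f}, 95% CI ({lo:+.4f}, {hi:+.4f})")
print("Non-inferior (lower bound >= -4%):", lo >= -0.04)
```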
Acceptance Criteria for Analytical Performance (Precision):
Acceptance Criterion | Reported Device Performance |
---|---|
Lower bounds of the 2-sided 95% CIs for all co-primary endpoints (Overall Percent Agreement [OPA] point estimates for between-site/system, between-day/within-system, and between-reader agreement) had to be at least 85%. | Between-Site/System OPA: 89.3% (95% CI: 85.8%, 92.4%); Between-Days/Within-System OPA: 90.3% (95% CI: 87.1%, 93.2%); Between-Readers OPA: 90.1% (95% CI: 86.6%, 93.0%) |

Result: For all co-primary analyses, the lower bounds of the 95% CIs were >85%, demonstrating acceptable precision.
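Similarly, each precision endpoint is a single agreement proportion whose 95% CI lower bound must reach 85%. A minimal sketch using the exact Clopper-Pearson interval with hypothetical tallies (the summary reports only rates and intervals, not the underlying counts):

```python
# Sketch of the precision acceptance rule: lower bound of a two-sided 95% CI
# for Overall Percent Agreement (OPA) must be >= 85%. Counts are hypothetical.
from scipy.stats import beta

def clopper_pearson(successes, n, alpha=0.05):
    """Exact (Clopper-Pearson) two-sided CI for a binomial proportion."""
    lo = beta.ppf(alpha / 2, successes, n - successes + 1) if successes > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, successes + 1, n - successes) if successes < n else 1.0
    return lo, hi

agreements, total = 554, 620          # hypothetical between-site/system tallies
opa = agreements / total              # ~89.3%, matching the reported rate
lo, hi = clopper_pearson(agreements, total)
print(f"OPA = {opa:.1%}, 95% CI ({lo:.1%}, {hi:.1%})")
print("Meets criterion (lower bound >= 85%):", lo >= 0.85)
```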
2. Sample Size Used for the Test Set and Data Provenance
- Clinical Accuracy Study (Test Set):
  - Sample Size: 2047 cases (3259 slides in total) spanning multiple organ and tissue types.
  - Data Provenance: Multi-center study conducted at four sites. The text does not explicitly state the country of origin, but the submitter is based in Tucson, Arizona, suggesting United States data. The cases were retrospective, pre-screened from archived specimens in the clinical databases of the study sites, with a minimum of one year between the date of sign-out diagnosis and the beginning of the study.
- Precision Study (Test Set):
  - Sample Size: 69 study cases (slides), each with 3 ROIs, for a total of 207 "study" ROIs. An additional 12 "wild card" cases (36 wild-card ROIs) were included to reduce recall bias but were excluded from statistical analysis.
  - Data Provenance: Study conducted at 3 external pathology laboratories (study sites). The text does not explicitly state the country of origin. The cases comprised H&E-stained archival slides of FFPE human tissue.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Clinical Accuracy Study (Establishing Reference Diagnosis / Sign-Out Truth):
  - Initial Verification: Two Screening Pathologists at each study site pre-screened cases against the inclusion/exclusion criteria. The first Screening Pathologist reviewed H&E and ancillary stained slides by manual microscopy to identify representative slides and confirmed the diagnosis against the sign-out report. A second Screening Pathologist then verified the sign-out diagnosis data.
  - Qualifications: Only "qualified pathologist" is stated. The study design implies these were experienced professionals engaged in routine diagnostic pathology.
- Precision Study (Establishing Reference Feature):
  - The "primary feature for that case" acted as the reference. How this ground truth was established for the features in the ROIs is not detailed beyond being "protocol-specified." Screening Pathologists selected the ROIs for each slide, implying that the reference for the presence of the 23 specific histopathologic features was expert-derived, based on consensus or previously established pathology.
4. Adjudication Method for the Test Set (Clinical Accuracy Study)
- Method: A (2+1) or (2+1+panel) adjudication method was used.
- For each Reading Pathologist's diagnosis, two Adjudication Pathologists (blinded to site, Reading Pathologist, and reading modality) separately assessed agreement with the original sign-out diagnosis (reference diagnosis).
- If the two adjudicators disagreed, a third Adjudication Pathologist reviewed the case to achieve a majority consensus.
- In cases where all three adjudicators had different opinions, consensus was reached in an adjudication panel meeting consisting of the same three Adjudication Pathologists.
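The cascade above amounts to simple decision logic; here is a minimal sketch (verdict labels, function names, and the panel callable are illustrative, not protocol-specified):

```python
# Minimal sketch of the (2+1, then panel) adjudication cascade described above.
from collections import Counter

def adjudicate(verdict_1, verdict_2, tiebreaker, panel):
    """Return the consensus verdict for one case.

    verdict_1, verdict_2: the two primary adjudicators' assessments.
    tiebreaker: callable returning the third adjudicator's verdict.
    panel: callable returning the adjudication panel's consensus.
    """
    if verdict_1 == verdict_2:                 # primary adjudicators agree
        return verdict_1
    verdict_3 = tiebreaker()                   # third adjudicator reviews
    majority, count = Counter([verdict_1, verdict_2, verdict_3]).most_common(1)[0]
    if count >= 2:                             # majority consensus reached
        return majority
    return panel()                             # all three differ: panel decides

# Example: primary adjudicators split; the third sides with the first.
print(adjudicate("concordant", "discordant",
                 tiebreaker=lambda: "concordant",
                 panel=lambda: "concordant"))  # -> concordant
```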
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
- Yes, a MRMC comparative effectiveness study was done. This was the Clinical Accuracy Study.
- Effect Size of Human Readers Improvement with AI vs. Without AI Assistance:
- The study design was a non-inferiority study to show that viewing digital images is not worse than manual microscopy; here the WSI system itself is the "aid," not an AI algorithm. The study did not directly quantify improvement of human readers with AI vs. without AI.
- Instead, it compared the diagnostic accuracy of pathologists using the digital system (DR) versus traditional microscopy (MR) directly against the reference sign-out diagnosis.
- The observed overall agreement rate was 92.00% for DR and 92.61% for MR. The difference (DR - MR) was -0.61%. This suggests a slight decrease in agreement rate when using DR compared to MR, but it was statistically non-inferior (i.e., not significantly worse than MR, within the defined margin). The study states, "These model results failed to show any statistically significant difference between the 2 reading modalities."
- Therefore, the effect size is that there was no statistically significant difference in diagnostic agreement rates between digital review and manual review, demonstrating non-inferiority rather than an improvement.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
- No, a standalone (algorithm-only) performance study was not done for diagnostic accuracy.
- The device is a "Whole slide imaging system" intended "as an aid to the pathologist to review and interpret digital images." Its performance evaluation (clinical accuracy) was explicitly designed as a human-in-the-loop study (pathologists using the system for diagnosis).
- However, technical studies (e.g., color reproducibility, spatial resolution, focusing, whole slide tissue coverage, stitching error) assessed specific algorithm/system components in a standalone or technical performance manner. For instance, the "Image Processing Software" section describes various algorithms (exposure control, white balance, color correction, etc.), and "Image Composition" discusses scanning methods. These are technical assessments of the system's output quality rather than diagnostic accuracy.
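As a generic illustration of one such pipeline step (a textbook method, not Roche's proprietary algorithm), a gray-world white-balance correction scales each color channel so that the image's overall average is neutral:

```python
# Generic gray-world white balance, illustrating the kind of image-pipeline
# step assessed in the technical studies. NOT Roche's proprietary algorithm.
import numpy as np

def gray_world_white_balance(rgb):
    """rgb: float array of shape (H, W, 3) in [0, 1]; returns a balanced copy."""
    channel_means = rgb.reshape(-1, 3).mean(axis=0)   # per-channel averages
    gray = channel_means.mean()                       # target neutral level
    gains = gray / channel_means                      # per-channel gain factors
    return np.clip(rgb * gains, 0.0, 1.0)

img = np.random.default_rng(0).random((4, 4, 3))      # stand-in image data
balanced = gray_world_white_balance(img)
print(balanced.reshape(-1, 3).mean(axis=0))           # channel means now roughly equal
```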
7. The Type of Ground Truth Used
- Clinical Accuracy Study: The ground truth for diagnostic accuracy was the original sign-out pathologic diagnosis rendered at the study sites using an optical (light) microscope, verified by two screening pathologists. This represents expert consensus/established clinical diagnosis.
- Precision Study: The ground truth for feature detection was the "reference primary feature for that case." This was established by "Screening Pathologists" who selected the ROIs containing these features, implying expert-identified features.
8. The Sample Size for the Training Set
The provided document describes studies for device validation and clearance, not for the development and training of a machine learning model. Therefore, no information on the sample size for a training set is provided. The Roche Digital Pathology Dx system is described as a "whole slide imaging (WSI) system" and its components (scanner, software, display), without mention of AI/ML components for automated diagnosis or feature detection that would require a separate training set. The "AI" mentioned in question 5 refers to the digital WSI system as an "aid" to the human reader, not necessarily an AI algorithm performing diagnosis independently.
9. How the Ground Truth for the Training Set Was Established
As no training set is described (since this is primarily a WSI system for human review, not an autonomous AI diagnostic algorithm), this information is not applicable.