510(k) Data Aggregation (92 days)
Roche Digital Pathology Dx is an automated digital slide creation, viewing and management system. Roche Digital Pathology Dx is intended for in vitro diagnostic use as an aid to the pathologist to review and interpret digital images of scanned pathology slides prepared from formalin-fixed paraffin-embedded (FFPE) tissue. Roche Digital Pathology Dx is not intended for use with frozen section, cytology, or non-FFPE hematopathology specimens. Roche Digital Pathology Dx is for creation and viewing of digital images of scanned glass slides that would otherwise be appropriate for manual visualization by conventional light microscopy.
Roche Digital Pathology Dx is composed of VENTANA DP 200 slide scanner, VENTANA DP 600 slide scanner, Roche uPath enterprise software, and ASUS PA248QV display. It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the validity of the interpretation of images obtained using Roche Digital Pathology Dx.
Roche Digital Pathology Dx (hereinafter referred to as RDPD) is a whole slide imaging (WSI) system. It is an automated digital slide creation, viewing, and management system intended to aid pathologists in generating, reviewing, and interpreting digital images of surgical pathology slides that would otherwise be appropriate for manual visualization by conventional light microscopy. The RDPD system is composed of the following components:
- VENTANA DP 200 slide scanner,
- VENTANA DP 600 slide scanner,
- Roche uPath enterprise software, and
- ASUS PA248QV display.
The VENTANA DP 600 slide scanner has a total capacity of 240 slides, loaded as 40 trays of 6 slides each. The VENTANA DP 600 and VENTANA DP 200 slide scanners use the same Image Acquisition Unit.
Both the VENTANA DP 200 and DP 600 slide scanners are bright-field digital pathology scanners that accommodate loading and scanning of 6 and 240 standard glass microscope slides, respectively. Each scanner has a high-numerical-aperture Plan Apochromat 20x objective and is capable of scanning at both 20x and 40x magnification. The scanners feature automatic detection of the tissue specimen on the glass slide, automated 1D and 2D barcode reading, and selectable volume scanning (3 to 15 focus layers). An International Color Consortium (ICC) color profile is embedded in each scanned slide image for color management. The scanned slide images are generated in a proprietary file format, the BioImagene Image File (BIF), which can be uploaded to the uPath Image Management System (IMS) provided with the Roche uPath enterprise software.
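The embedded ICC profile mentioned above follows the standard ICC container layout: a 128-byte header whose first four bytes give the profile size (big-endian) and whose bytes 36-39 hold the mandatory `acsp` signature. As an illustrative sketch only (not part of the RDPD software, whose internals are not described here), a minimal Python sanity check of that layout might look like:

```python
import struct

ICC_SIGNATURE = b"acsp"  # mandatory signature at byte offset 36 of every ICC profile


def looks_like_icc_profile(blob: bytes) -> bool:
    """Heuristic sanity check that a byte blob is an ICC color profile.

    Per the ICC specification, bytes 0-3 hold the big-endian profile size
    and bytes 36-39 hold the literal signature 'acsp'.
    """
    if len(blob) < 128:  # the ICC header alone is 128 bytes
        return False
    (declared_size,) = struct.unpack(">I", blob[0:4])
    return declared_size == len(blob) and blob[36:40] == ICC_SIGNATURE
```

A viewer could run a check like this before handing the profile to a color-management engine; a production system would presumably rely on a full ICC/CMM library rather than a bare header check.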
Roche uPath enterprise software (uPath), a component of the Roche Digital Pathology Dx system, is a web-based image management and workflow software application. uPath can be accessed on a Windows workstation using the Google Chrome or Microsoft Edge web browser. Its user interface enables laboratories to manage their workflow from the time the whole slide image is produced and acquired by the VENTANA DP 200 and/or DP 600 slide scanners through subsequent processes, such as review of the digital image on the monitor and reporting of results. The uPath software incorporates specific functions for pathologists, laboratory histology staff, workflow coordinators, and laboratory administrators.
The provided document is a 510(k) summary for the "Roche Digital Pathology Dx" system (K242783), which is a modification of a previously cleared device (K232879). This modification primarily involves adding a new slide scanner model, VENTANA DP 600, to the existing system. The document asserts that due to the identical Image Acquisition Unit (IAU) between the new DP 600 scanner and the previously cleared DP 200 scanner, the technical performance assessment from the predicate device is fully applicable. Therefore, the information provided below will primarily refer to the studies and acceptance criteria from the predicate device that are deemed applicable to the current submission due to substantial equivalence.
Here's an analysis based on the provided text, focusing on the acceptance criteria and the study that proves the device meets them:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly present a discrete "acceptance criteria" table with corresponding numerical performance metrics for the current submission (K242783). Instead, it states that the performance characteristics data collected on the VENTANA DP 200 slide scanner (the predicate device) are representative of the VENTANA DP 600 slide scanner performance because both scanners use the same Image Acquisition Unit (IAU). The table below lists the "Technical Performance Assessment (TPA) Sections" as presented in the document (Table 3), which serve as categories of performance criteria that were evaluated for the predicate device and are considered applicable to the current device. The reported performance for these sections is summarized as "Information was provided in K232879" for the DP 600, indicating that the past performance data is considered valid.
| TPA Section (Acceptance Criteria Category) | Reported Device Performance (for DP 600 scanner) |
|---|---|
| Components: Slide Feeder | No double wide slide tray compatibility, new FMEA provided in K242783. (Predicate: Information on configuration, user interaction, FMEA) |
| Components: Light Source | Information was provided in K232879. (Predicate: Descriptive info on lamp/condenser, spectral distribution verified) |
| Components: Imaging Optics | Information was provided in K232879. (Predicate: Optical schematic, descriptive info, testing for irradiance, distortions, aberrations) |
| Components: Focusing System | Information was provided in K232879. (Predicate: Schematic, description, optical system, cameras, algorithm) |
| Components: Mechanical Scanner Movement | Same except no double wide slide tray compatibility, replaced references & new FMEA items provided in K242783. (Predicate: Information/specs on stage, movement, FMEA, repeatability) |
| Components: Digital Imaging Sensor | Information was provided in K232879. (Predicate: Information/specs on sensor type, pixels, responsivity, noise, data, testing) |
| Components: Image Processing Software | Information was provided in K232879. (Predicate: Information/specs on exposure, white balance, color correction, subsampling, pixel correction) |
| Components: Image Composition | Information was provided in K232879. (Predicate: Information/specs on scanning method, speed, Z-axis planes, analysis of image composition) |
| Components: Image File Formats | Information was provided in K232879. (Predicate: Information/specs on compression, ratio, file format, organization) |
| Image Review Manipulation Software | Information was provided in K232879. (Predicate: Information/specs on panning, zooming, Z-axis displacement, comparison, image enhancement, annotation, bookmarks) |
| Computer Environment | Select upgrades of sub-components & specifications. (Predicate: Information/specs on hardware, OS, graphics, color management, display interface) |
| Display | Information was provided in K232879. (Predicate: Information/specs on pixel density, aspect ratio, display surface, and other display characteristics; performance testing for user controls, spatial resolution, pixel defects, artifacts, temporal response, luminance, uniformity, gray tracking, color scale, color gamut) |
| System-level Assessments: Color Reproducibility | Information was provided in K232879. (Predicate: Test data for color reproducibility) |
| System-level Assessments: Spatial Resolution | Information was provided in K232879. (Predicate: Test data for composite optical performance) |
| System-level Assessments: Focusing Test | Information was provided in K232879. (Predicate: Test data for technical focus quality) |
| System-level Assessments: Whole Slide Tissue Coverage | Information was provided in K232879. (Predicate: Test data for tissue detection algorithms and inclusion of tissue in digital image file) |
| System-level Assessments: Stitching Error | Information was provided in K232879. (Predicate: Test data for stitching errors and artifacts) |
| System-level Assessments: Turnaround Time | Information was provided in K232879. (Predicate: Test data for turnaround time) |
| User Interface | Identical workflow, replacement of new scanner component depiction. (Predicate: Information on user interaction, human factors/usability validation) |
| Labeling | Same content, replaced references. (Predicate: Compliance with 21 CFR Parts 801 and 809, special controls) |
| Quality Control | Same content, replaced references. (Predicate: QC activities by user, lab technician, pathologist prior to/after scanning) |
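To make the "Stitching Error" row above concrete: stitching error is, in essence, how far the measured alignment between adjacent image tiles deviates from the offset the scanner intended when composing the whole slide image. A toy 1-D sketch of that measurement in Python follows; this is purely hypothetical, as the summary does not describe Roche's actual method:

```python
def best_shift(ref, mov, max_shift=5):
    """Find the integer shift of `mov` relative to `ref` that minimizes the
    mean squared difference over their overlap.

    In a stitching check, the difference between this measured shift and the
    scanner's expected tile offset would be the stitching error (in pixels).
    """
    best, best_err = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        # Pair up samples that overlap at this candidate shift
        pairs = [(ref[i], mov[i - s]) for i in range(len(ref))
                 if 0 <= i - s < len(mov)]
        if not pairs:
            continue
        err = sum((a - b) ** 2 for a, b in pairs) / len(pairs)
        if err < best_err:
            best, best_err = s, err
    return best
```

A reported stitching-error figure would then be the deviation of the returned shift from the expected tile offset, accumulated over many seams across many slides.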
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document explicitly states that the technical performance assessment data was "collected on the VENTANA DP 200 slide scanner" (the predicate device). However, the specific sample sizes for these technical studies (e.g., number of slides used for focusing tests, stitching error analysis, etc.) are not detailed in this summary. The data provenance (country of origin, retrospective/prospective) for these underlying technical studies from K232879 is also not provided in this document.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
This section is not applicable or not provided by the document. The studies mentioned are primarily technical performance assessments related to image quality, system components, and usability, rather than diagnostic accuracy studies requiring expert pathologist interpretation for ground truth. For the predicate device, it mentions "a qualified pathologist" is responsible for interpretation, but this refers to the end-user clinical use, not the establishment of ground truth for device validation.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
This section is not applicable or not provided by the document. As noted above, the document details technical performance studies rather than diagnostic performance studies that would typically involve multiple readers and adjudication methods for diagnostic discrepancies.
5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
No MRMC comparative effectiveness study is mentioned in the provided text. The device, "Roche Digital Pathology Dx," is described as a "digital slide creation, viewing and management system" intended "as an aid to the pathologist to review and interpret digital images." It is a Whole Slide Imaging (WSI) system, and the submission is focused on demonstrating the technical equivalence of a new scanner component, not on evaluating AI assistance or its impact on human reader performance.
6. If a standalone study (i.e. algorithm-only performance, without a human in the loop) was done
No standalone algorithm performance study is mentioned. The device is a WSI system for "aid to the pathologist to review and interpret digital images," implying a human-in-the-loop system. The document does not describe any specific algorithms intended for automated diagnostic interpretation or analysis in a standalone capacity.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
For the technical performance studies referenced from the predicate device (K232879), the "ground truth" would be established by physical measurements and engineering specifications for aspects like spatial resolution, color reproducibility, focusing quality, tissue coverage, and stitching errors, rather than clinical outcomes or diagnostic pathology. For instance, color reproducibility would be assessed against a known color standard, and spatial resolution against resolution targets. The document does not explicitly state the exact types of ground truth used for each technical assessment but refers to "test data to evaluate" these characteristics.
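For instance, the color-reproducibility assessment described above typically reduces to a color-difference metric between measured and reference patch values. A minimal sketch follows, assuming the simple CIE76 ΔE*ab metric (the summary does not state which metric or thresholds were actually used for K232879):

```python
import math


def delta_e_ab(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two CIELAB values.

    Smaller is better; deltas around 1-2 are commonly cited as near the
    threshold of perceptibility for typical observers.
    """
    return math.dist(lab1, lab2)


# Hypothetical patch from a scanned color target vs. its reference value
reference = (53.2, 80.1, 67.2)  # L*, a*, b* of the target patch (illustrative)
measured = (52.8, 79.5, 68.0)   # L*, a*, b* recovered from the scanned image
```

An acceptance criterion would then cap the mean and maximum ΔE over all patches of the color target; spatial resolution would analogously be scored against a resolution test target.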
8. The sample size for the training set
Not applicable/not provided. The document describes a WSI system, not an AI/ML-based diagnostic algorithm that would typically require a training set. The specific "Image Acquisition Unit" components (hardware and software for pixel pipeline) are stated to be "functionally identical" to the predicate, implying established design rather than iterative machine learning.
9. How the ground truth for the training set was established
Not applicable/not provided. As there is no mention of a training set for an AI/ML algorithm, the method for establishing its ground truth is not discussed.