Search Results
Found 33 results
510(k) Data Aggregation
(238 days)
June 26, 2025
Re: K243391
Trade/Device Name: AISight Dx
Regulation Number: 21 CFR 864.3700
Version: V 2.9
Classification Name: Whole Slide Imaging System
| | Slide scanner system | Aperio GT 450 DX | AISight Dx |
|---|---|---|---|
| Product Code | PSY | PSY | QKQ |
| Regulation | 21 CFR 864.3700 | 21 CFR 864.3700 | 21 CFR 864.3700 |
| Regulation Name | Whole Slide Imaging System | Whole Slide Imaging System | Whole Slide Imaging System |
AISight Dx is a software only device intended for viewing and management of digital images of scanned surgical pathology slides prepared from formalin-fixed paraffin embedded (FFPE) tissue. It is an aid to the pathologist to review, interpret, and manage digital images of these slides for primary diagnosis. AISight Dx is not intended for use with frozen sections, cytology, or non-FFPE hematopathology specimens.
It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the quality of the images obtained and, where necessary, to use conventional light microscopy review when making a diagnostic decision. AISight Dx is intended to be used with interoperable displays, scanners and file formats, and web browsers that have been 510(k)-cleared for use with AISight Dx, or with 510(k)-cleared displays, scanners and file formats, and web browsers that have been assessed in accordance with the Predetermined Change Control Plan (PCCP) for qualifying interoperable devices.
AISight Dx is a web-based, software-only device that is intended to aid pathology professionals in viewing, interpretation, and management of digital whole slide images (WSI) of scanned surgical pathology slides prepared from formalin-fixed, paraffin-embedded (FFPE) tissue obtained from Hamamatsu NanoZoomer S360MD Slide scanner or Leica Aperio GT 450 DX scanner (Table 1). It aids the pathologist in the review, interpretation, and management of pathology slide digital images used to generate a primary diagnosis.
Here's a breakdown of the acceptance criteria and the study details for the AISight Dx device, based on the provided FDA 510(k) Clearance Letter:
Acceptance Criteria and Reported Device Performance
Acceptance Criteria Category | Specific Acceptance Criteria | Reported Device Performance |
---|---|---|
Pixel-wise Comparison | Identical image reproduction (max pixel-wise difference) | |
(81 days)
Re: K250968
Trade/Device Name: PathPresenter Clinical Viewer
Regulation Number: 21 CFR 864.3700
Version: V1.0.1
Classification Name: Whole Slide Imaging System
Slide scanner system
Submission Number: K233027
Device Class: Class II
CFR Section: 864.3700
For In Vitro Diagnostic Use
The PathPresenter Clinical Viewer is a software intended for viewing and managing whole slide images of scanned glass slides derived from formalin-fixed paraffin embedded (FFPE) tissue. It is an aid to pathologists to review and render a diagnosis using the digital images for the purposes of primary diagnosis. PathPresenter Clinical is not intended for use with frozen sections, cytology specimens, or non-FFPE specimens. It is the responsibility of the pathologist to employ appropriate procedures and safeguards to assure the validity of the interpretation of images using PathPresenter Clinical software. PathPresenter Clinical Viewer is intended for use with Hamamatsu NanoZoomer S360MD Slide scanner NDPI image formats viewed on the Barco NV MDPC-8127 display device.
The PathPresenter Clinical Viewer (version V1.0.1) is a web-based software application designed for viewing and managing whole slide images generated from scanned glass slides of formalin-fixed, paraffin-embedded (FFPE) surgical pathology tissue. It serves as a diagnostic aid, enabling pathologists to review digital images and render a primary pathology diagnosis. Functions of the viewer include zooming and panning the image, annotating the image, measuring distances and areas in the image and retrieving multiple images from the slide tray including prior cases and deprecated slides.
Here's a breakdown of the acceptance criteria and study information for the PathPresenter Clinical Viewer based on the provided FDA 510(k) clearance letter:
Acceptance Criteria and Device Performance for PathPresenter Clinical Viewer
1. Table of Acceptance Criteria and Reported Device Performance
Test | Acceptance Criteria | Reported Device Performance |
---|---|---|
Pixelwise Comparison | The 95th percentile of the pixel-wise color difference in any image pair is less than 3 CIEDE2000 (ΔE00 < 3). | |
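The 95th-percentile color-difference rule above can be sketched in a few lines of Python. For brevity this sketch substitutes the simpler Euclidean ΔE (CIE76) in Lab space for the full CIEDE2000 formula, and the function names are ours, not the submitter's:

```python
import numpy as np

def srgb_to_lab(img):
    """Convert an sRGB image (floats in [0, 1], shape HxWx3) to CIELAB (D65)."""
    # sRGB gamma decoding -> linear RGB
    lin = np.where(img <= 0.04045, img / 12.92, ((img + 0.055) / 1.055) ** 2.4)
    # linear RGB -> XYZ (sRGB primaries, D65 white point)
    m = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = lin @ m.T
    xyz /= np.array([0.95047, 1.0, 1.08883])  # normalize by D65 reference white
    eps = (6 / 29) ** 3
    f = np.where(xyz > eps, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def passes_pixelwise_criterion(img_a, img_b, threshold=3.0, percentile=95):
    """Apply the 95th-percentile color-difference rule to an image pair.

    Uses Euclidean Delta E (CIE76) as a stand-in for CIEDE2000; a real
    verification would use a CIEDE2000 routine from a color-science library.
    """
    de = np.linalg.norm(srgb_to_lab(img_a) - srgb_to_lab(img_b), axis=-1)
    return float(np.percentile(de, percentile)) < threshold
```

An identical image pair yields ΔE = 0 everywhere and passes trivially; a white/black pair yields ΔE = 100 and fails, which is the sanity check one would run first.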
(269 days)
Re: K242545
Trade/Device Name: RadiForce MX317W-PA
Regulation Number: 21 CFR 864.3700
Name: 30.5 inch (77.5 cm) Color LCD Monitor
Common or Usual Name: Digital Pathology Display
Classification Name: Whole Slide Imaging System (21 CFR 864.3700)
RadiForce MX317W-PA is intended for in vitro diagnostic use to display digital images of histopathology slides acquired from IVD-labeled whole-slide imaging scanners and viewed using IVD-labeled digital pathology image viewing software that have been validated for use with this device.
RadiForce MX317W-PA is an aid to the pathologist and is used for review and interpretation of histopathology slides for the purposes of primary diagnosis. It is the responsibility of the pathologist to employ appropriate procedures and safeguards to assure the validity of the interpretation of images using this product. The display is not intended for use with digital images from frozen section, cytology, or non- formalin-fixed, paraffin embedded (non-FFPE) hematopathology specimens.
RadiForce MX317W-PA is a color LCD monitor for viewing digital images of histopathology slides. The color LCD panel employs in-plane switching (IPS) technology allowing wide viewing angles and the matrix size is 4,096 x 2,160 pixels (8MP) with a pixel pitch of 0.1674 mm.
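The stated physical specifications are mutually consistent: the active size can be cross-checked from the pixel matrix and pixel pitch alone. A small Python sketch (the helper name is ours):

```python
import math

def display_dimensions_mm(cols, rows, pixel_pitch_mm):
    """Derive a panel's active width, height, and diagonal (all in mm)
    from its pixel matrix and pixel pitch."""
    width = cols * pixel_pitch_mm
    height = rows * pixel_pitch_mm
    return width, height, math.hypot(width, height)
```

Running it with 4,096 x 2,160 pixels at a 0.1674 mm pitch gives a diagonal of about 775 mm, matching the stated 30.5 inch (77.5 cm).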
Factory-calibrated display modes, each characterized by a specific tone curve, luminance range, and color temperature, are stored in lookup tables within the monitor. This helps ensure consistent tone curves even if a display controller or workstation must be replaced or serviced.
The "Patho" mode is the factory mode intended for digital pathology use.
The provided FDA 510(k) clearance letter for the RadiForce MX317W-PA describes a display device for digital histopathology. It does not contain information about an AI/ML medical device. Therefore, a study proving the device meets acceptance criteria related to AI/ML performance (such as accuracy, sensitivity, specificity, MRMC studies, and ground truth establishment methods for large datasets) is not present in this document.
The document primarily focuses on the technical performance and equivalence of a display monitor to a predicate device. The "performance testing" section refers to bench tests validating display characteristics like spatial resolution, luminance, and color, not the clinical performance of an AI algorithm interpreting medical images.
Given the information provided, here's an analysis based on the actual content:
Based on the provided document, the RadiForce MX317W-PA is a display monitor, not an AI/ML medical device designed for image interpretation. Therefore, the acceptance criteria and study detailed below pertain to the display's technical performance and its equivalence to a predicate display, not to an AI algorithm's diagnostic accuracy.
1. Table of Acceptance Criteria and Reported Device Performance
The document states that "the display characteristics of the RadiForce MX317W-PA meet the pre-defined criteria when criteria are set." However, the exact numerical acceptance criteria for each bench test (e.g., minimum luminance, pixel defect limits) are not explicitly listed in the provided text. The document only lists the types of tests performed and states that the device "has display characteristics equivalent to those of the predicate device" and "meet the pre-defined criteria."
Acceptance Criteria Category | Reported Device Performance Summary (as per document) |
---|---|
User controls (Modes & settings) | Performed, assumed met |
Spatial resolution | Performed, assumed met, equivalent to predicate |
Pixel defects | Performed, assumed met, equivalent to predicate |
Artifacts | Performed, assumed met, equivalent to predicate |
Temporal response | Performed, assumed met, equivalent to predicate |
Maximum and minimum luminance | Performed, assumed met, equivalent to predicate |
Grayscale | Performed, assumed met, equivalent to predicate |
Luminance uniformity and Mura test | Performed, assumed met, equivalent to predicate |
Stability of luminance and chromaticity response | Performed, assumed met, equivalent to predicate |
Bidirectional reflection distribution function | Performed, assumed met, equivalent to predicate |
Gray Tracking | Performed, assumed met, equivalent to predicate |
Color scale | Performed, assumed met, equivalent to predicate |
Color gamut volume | Performed, assumed met, equivalent to predicate |
Note: The document only states that these tests were performed and that the results show equivalence to the predicate device and that the device meets pre-defined criteria. It does not provide the specific numerical results or the exact numerical acceptance criteria for each test.
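To make the pass/fail idea concrete, here is a minimal sketch of one such bench check, luminance uniformity, with an illustrative threshold; the real numeric criteria are part of the test protocol and are not given in the letter:

```python
def luminance_uniformity_ok(measurements, max_deviation_pct=25.0):
    """Check luminance uniformity across measurement points on a white field.

    `measurements` is a list of luminance readings (cd/m2) taken at standard
    positions (e.g. center plus corners). The acceptance threshold here is
    illustrative, not the one used in the actual bench tests.
    """
    lo, hi = min(measurements), max(measurements)
    deviation_pct = 100.0 * (hi - lo) / hi
    return deviation_pct <= max_deviation_pct
```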
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: The document describes bench tests performed on a single device, the RadiForce MX317W-PA (it's a physical monitor, not a software algorithm processing a dataset). There is no mention of a "test set" in the context of a dataset of medical images.
- Data Provenance: Not applicable. The "data" here refers to the measured performance characteristics of the physical display device itself during bench testing, not patient data.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- Not applicable. The ground truth for a display monitor's technical performance is established by standardized measurement equipment and protocols, not by expert interpretation of images. The device itself is the object under test for its physical characteristics.
4. Adjudication Method for the Test Set
- Not applicable. This concept applies to human or AI interpretation of medical images, where discrepancies among readers or algorithms might need resolution. For physical device performance, measurements are generally objective.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Not performed/Applicable. An MRMC study is designed to assess the performance of a diagnostic aid (like AI) on image interpretation by human readers. This device is a display monitor, not an AI algorithm. Its function is to display images, not to interpret them or assist human interpreters in a diagnostic decision-making process that would warrant an MRMC study.
6. Standalone (i.e., Algorithm Only Without Human-in-the-Loop Performance) Study
- Not applicable. As stated, this is a display monitor, not an algorithm.
7. Type of Ground Truth Used:
- The ground truth for the display's performance tests would be metrology-based standards and calibration references (e.g., standard luminance values, colorimetry standards) against which the display's output is measured. It is not expert consensus, pathology, or outcomes data, as these relate to diagnostic accuracy studies.
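A metrology-based check of this kind can be as simple as verifying that measured luminance rises strictly with the gray driving level, so that distinct gray levels remain distinguishable on screen. A sketch with illustrative data:

```python
def grayscale_response_ok(luminances):
    """Check that measured luminance increases strictly with driving level.

    `luminances` holds photometer readings (cd/m2) for an ascending series
    of gray driving levels. The measurement protocol and any tolerance are
    illustrative, not those of the actual bench tests.
    """
    return all(b > a for a, b in zip(luminances, luminances[1:]))
```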
8. The Sample Size for the Training Set
- Not applicable. This device is hardware; it does not involve training data or machine learning algorithms.
9. How the Ground Truth for the Training Set Was Established
- Not applicable. No training set exists for this device.
(90 days)
Re: K250414
Trade/Device Name: CaloPix
Regulation Number: 21 CFR 864.3700
Subject Device
Trade/Device name: CaloPix
Version: 6.1.0 IVDUS
Regulation number: 21 CFR 864.3700
| 510(k) number | K232202 | |
|---|---|---|
| Clearance date | September 27, 2022 | April 16, 2024 |
| Regulation number | 21 CFR 864.3700 | 21 CFR 864.3700 |

| | CaloPix | Slide scanner system | Aperio GT 450 DX |
|---|---|---|---|
| Product code | QKQ | PSY | PSY |
| Regulation | 21 CFR 864.3700 | 21 CFR 864.3700 | 21 CFR 864.3700 |
| Regulation name | Whole Slide Imaging System | Whole Slide Imaging System | Whole Slide Imaging System |
For In Vitro Diagnostic Use Only
CaloPix is a software only device for viewing and management of digital images of scanned surgical pathology slides prepared from Formalin-Fixed Paraffin Embedded (FFPE) tissue.
CaloPix is intended for in vitro diagnostic use as an aid to the pathologist to review, interpret and manage these digital slide images for the purpose of primary diagnosis.
CaloPix is not intended for use with frozen sections, cytology, or non-FFPE hematopathology specimens.
It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the quality of the images obtained and the validity of the interpretation of images using CaloPix.
CaloPix is intended to be used with the interoperable components specified in the below Table:
Scanner Hardware | Scanner Output File Format | Interoperable Displays |
---|---|---|
Leica Aperio GT 450 DX scanner | SVS | Dell U3223QE |
Hamamatsu NanoZoomer S360MD Slide scanner | NDPI | JVC Kenwood JD-C240BN01A |
CaloPix, version 6.1.0 IVDUS, is a web-based software-only device that is intended to aid pathology professionals in viewing, interpreting and managing digital Whole Slide Images (WSI) of glass slides obtained from the Hamamatsu NanoZoomer S360MD slide scanner (NDPI file format) and viewed on the JVC Kenwood JD-C240BN01A display, as well as those obtained from the Leica Aperio GT 450 DX scanner (SVS file format) and viewed on the Dell U3223QE display.
CaloPix does not include any automated Image Analysis Applications that would constitute computer aided detection or diagnosis.
CaloPix is for viewing digital images of scanned glass slides that would otherwise be appropriate for manual visualization by conventional light microscopy.
As a whole, CaloPix is a pathology Image Management System (IMS) which brings case-centric digital pathology image management, collaboration, and image processing. CaloPix consists of:
- Integration with Laboratory Information Systems (LIS): Automatically obtains from the LIS the patient data associated with cases, the scanned whole slide images, and other related medical images to be analyzed. The data stored in the database is automatically updated according to the interface protocol with the LIS.
- Database: After ingestion, scanned WSI can be organized in the CaloPix database, which consists of folders (cases) containing patient identification data and examination results from a LIS. Ingestion of the slides is performed through an integrated module that automatically indexes them based on patient data retrieved from the LIS. After ingestion, image files are stored in a CaloPix-specific file storage environment, which can be on premises or in the cloud.
- The CaloPix viewer component to process scanned whole slide images, that includes functions for panning, zooming, screen capture, annotations, distance and surface measurement, and image registration. This viewer relies on image servers (IMGSRV) which extract image tiles from the whole slide image file and send these tiles to the CaloPix viewer for smooth and fast viewing.
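The tile-based access pattern described above can be illustrated with the index arithmetic an image server performs for a viewport request. The function and the 256-pixel tile size are illustrative, not CaloPix's actual implementation:

```python
def tiles_for_viewport(x, y, width, height, tile_size=256):
    """Return the (column, row) tile indices covering a viewport.

    Coordinates are in pixels at the requested zoom level. This mirrors the
    tile-pyramid access pattern a WSI image server uses to stream only the
    visible region of a multi-gigapixel slide.
    """
    first_col = x // tile_size
    first_row = y // tile_size
    last_col = (x + width - 1) // tile_size
    last_row = (y + height - 1) // tile_size
    return [(c, r)
            for r in range(first_row, last_row + 1)
            for c in range(first_col, last_col + 1)]
```

Because only the tiles intersecting the viewport are fetched, panning and zooming stay smooth even though the underlying SVS/NDPI file may be tens of gigapixels.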
The FDA 510(k) clearance letter for CaloPix indicates that the device's performance was evaluated through a series of tests to demonstrate its safety and effectiveness. The primary study described in the provided document focuses on technical performance testing rather than a clinical multi-reader multi-case (MRMC) study.
Here's a breakdown of the requested information based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
Test | Acceptance Criteria | Reported Device Performance |
---|---|---|
Pixel-wise comparison (Image Reproduction Accuracy) | The 95th percentile of the pixel-wise color differences (CIEDE2000, ΔE00) in any image pair between CaloPix and the predicate device's IRMS must be less than 3 (ΔE00 < 3). | |
(226 days)
Re: K242244
Trade/Device Name: Viewer+
Regulation Number: 21 CFR 864.3700
Regulation Name: Whole Slide Imaging System
For In Vitro Diagnostic Use
Viewer+ is a software only device intended for viewing and management of digital images of scanned surgical pathology slides prepared from formalin-fixed paraffin embedded (FFPE) tissue. It is an aid to the pathologist to review, interpret and manage digital images of pathology slides for primary diagnosis. Viewer+ is not intended for use with frozen sections, cytology, or non-FFPE hematopathology specimens.
It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the quality of the images obtained and, where necessary, use conventional light microscopy review when making a diagnostic decision. Viewer+ is intended for use with Hamamatsu NanoZoomer S360MD Slide scanner and BARCO MDPC-8127 display.
Viewer+, version 1.0.1, is a web-based software device that facilitates the viewing and navigating of digitized pathology images of slides prepared from FFPE-tissue specimens acquired from Hamamatsu NanoZoomer S360MD Slide scanner and viewed on BARCO MDPC-8127 display. Viewer+ renders these digitized pathology images for review, management, and navigation for pathology primary diagnosis.
Viewer+ is operated as follows:

- Image acquisition is performed using the NanoZoomer S360MD Slide scanner according to its Instructions for Use. The operator performs quality control of the digital slides per the instructions of the NanoZoomer and lab specifications to determine if re-scans are necessary.
- Once image acquisition is complete and the image becomes available in the scanner's database file system, a separate medical image communications software (not part of the device) automatically uploads the image and its corresponding metadata to persistent cloud storage. Image and data integrity checks are performed during the upload to ensure data accuracy.
- The subject device enables the reading pathologist to open a patient case, view the images, and perform actions such as zooming, panning, measuring distances and areas, and annotating images as needed. After reviewing all images for a case, the pathologist will render a diagnosis.
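The upload-time integrity check mentioned in the second step is typically a digest comparison between the source file and the stored copy. A generic sketch using SHA-256 (not Viewer+'s actual mechanism, which the letter does not describe):

```python
import hashlib

def file_digest(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a file, streaming in chunks so that
    multi-gigabyte slide images never need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def upload_intact(local_digest, remote_digest):
    """Verify the digest recorded after upload matches the source file."""
    return local_digest == remote_digest
```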
Here's a breakdown of the acceptance criteria and the study details for the Viewer+ device, based on the provided FDA 510(k) summary:
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criterion | Reported Device Performance |
---|---|
Pixel-wise comparison (of images reproduced by Viewer+ and NZViewMD for the same file generated from NanoZoomer S360MD Slide Scanner) | The 95th percentile of pixel-wise differences between Viewer+ and NZViewMD was less than 3 CIEDE2000, indicating their output images are pixel-wise identical and visually adequate. |
Turnaround time (for opening, panning, and zooming an image) | Found to be adequate for the intended use of the device. |
Measurement accuracy (using scanned images of biological slides) | Viewer+ was found to perform accurate measurements with respect to its intended use. |
Usability testing | Demonstrated that the subject device is safe and effective for the intended users, uses, and use environments. |
2. Sample Size Used for the Test Set and Data Provenance
The document does not explicitly state the specific sample size of images or cases used for the "Test Set" in the performance studies. It mentions "scanned images of the biological slides" for measurement accuracy and "images reproduced by Viewer+ and NZViewMD for the same file" for pixel-wise comparison.
The data provenance (country of origin, retrospective/prospective) is also not specified in the provided text.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
The document does not specify the number of experts or their qualifications used to establish ground truth for the test set. It mentions that the device is "an aid to the pathologist" and that "It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the quality of the images obtained and, where necessary, use conventional light microscopy review when making a diagnostic decision." However, this relates to the intended use and not a specific part of the performance testing described.
4. Adjudication Method for the Test Set
The document does not describe any specific adjudication method (e.g., 2+1, 3+1) used for establishing ground truth or evaluating the test set results. The pixel-wise comparison relies on quantitative color differences, and usability is assessed according to FDA guidance.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No multi-reader multi-case (MRMC) comparative effectiveness study comparing human readers with and without AI assistance is mentioned or implied in the provided text. The device is a "viewer" and not an AI-assisted diagnostic tool that would typically involve such a study.
6. Standalone Performance (Algorithm Only without Human-in-the-Loop)
The performance tests described (pixel-wise comparison, turnaround time, measurements) primarily relate to the technical functionality of the Viewer+ software itself, which is a viewing and management tool. These tests can be interpreted as standalone assessments of the software's performance in rendering images and providing basic functions like measurements. However, it's crucial to note that Viewer+ is an "aid to the pathologist" and not intended to provide automated diagnoses without human intervention. The "standalone" performance here refers to its core functionalities as a viewer, not as an autonomous diagnostic algorithm.
7. Type of Ground Truth Used
- Pixel-wise comparison: The ground truth for this test was the image reproduced by the predicate device's software (NZViewMD) for the same scanned file. The comparison was quantitative (CIEDE2000).
- Measurements: The ground truth would likely be established by known physical dimensions on the biological slides, verified by other means, or through precise calibration. The document states "Measurement accuracy has been verified using scanned images of the biological slides."
- Usability testing: The ground truth here is the fulfillment of usability requirements and user satisfaction/safety criteria, as assessed against FDA guidance.
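Measurement accuracy in a WSI viewer ultimately reduces to scaling pixel distances by the scan resolution (microns per pixel). A minimal sketch; the resolution value in the example is illustrative, not the NanoZoomer's actual specification:

```python
import math

def physical_distance_um(p1, p2, microns_per_pixel):
    """Convert a distance between two annotation points (in pixels at full
    scan resolution) into microns, given the slide's scan resolution.

    `microns_per_pixel` would come from the slide's scan metadata.
    """
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    return math.hypot(dx, dy) * microns_per_pixel
```

For example, points 5 pixels apart at an (illustrative) 0.25 µm/pixel scan correspond to a 1.25 µm physical distance, which is the kind of conversion verified against slides with known dimensions.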
8. Sample Size for the Training Set
The document does not mention the existence of a "training set" in the context of the Viewer+ device. This is a software-only device for viewing and managing images, not an AI/ML algorithm that typically requires a training set for model development.
9. How the Ground Truth for the Training Set Was Established
As no training set is mentioned for this device, information on how its ground truth was established is not applicable.
(79 days)
Re: K243871
Trade/Device Name: Philips IntelliSite Pathology Solution 5.1
Regulation Number: 21 CFR 864.3700
Subject device: Philips IntelliSite Pathology Solution 5.1 | 510(k) number: K243871 | Device Class: Class II | Classification regulation: 864.3700
Predicate device: Philips IntelliSite Pathology Solution 5.1 | 510(k) number: K242848 | Device Class: Class II | CFR section: 864.3700
The Philips IntelliSite Pathology Solution (PIPS) 5.1 is an automated digital slide creation, viewing, and management system. The PIPS 5.1 is intended for in vitro diagnostic use as an aid to the pathologist to review and interpret digital images of surgical pathology slides prepared from formalin-fixed paraffin embedded (FFPE) tissue. The PIPS 5.1 is not intended for use with frozen section, cytology, or non-FFPE hematopathology specimens.
The PIPS 5.1 comprises the Image Management System (IMS) 4.2; the Ultra Fast Scanner (UFS), Pathology Scanner SG20, Pathology Scanner SG60, and Pathology Scanner SG300; and a Philips PP27QHD display, a Beacon C411W display, or a Barco MDCC-4430 display. The PIPS 5.1 is for creation and viewing of digital images of scanned glass slides that would otherwise be appropriate for manual visualization by conventional light microscopy. It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the validity of the interpretation of images obtained using PIPS 5.1.
The Philips IntelliSite Pathology Solution (PIPS) 5.1 is an automated digital slide creation, viewing, and management system. PIPS 5.1 consists of two subsystems and a display component:
- A scanner, in any combination of the following scanner models:
  - Ultra Fast Scanner (UFS)
  - Pathology Scanner SG, with different versions for varying slide capacity: Pathology Scanner SG20, Pathology Scanner SG60, Pathology Scanner SG300
- Image Management System (IMS) 4.2
- Clinical display: PP27QHD, C411W, or MDCC-4430
PIPS 5.1 is for creation and viewing of digital images of scanned glass slides that would otherwise be appropriate for manual visualization by conventional light microscopy. The PIPS does not include any automated image analysis applications that would constitute computer aided detection or diagnosis. The pathologists only view the scanned images and utilize the image review manipulation software in the PIPS 5.1.
This document is a 510(k) summary for the Philips IntelliSite Pathology Solution (PIPS) 5.1. It describes the device, its intended use, and compares it to a legally marketed predicate device (also PIPS 5.1, K242848). The key change in the subject device is the introduction of a new clinical display, Barco MDCC-4430.
Here's the breakdown of the acceptance criteria and study information:
1. Table of Acceptance Criteria and Reported Device Performance
The submission focuses on demonstrating substantial equivalence of the new display (Barco MDCC-4430) to the predicate's display (Philips PP27QHD). The acceptance criteria are largely derived from the FDA's "Technical Performance Assessment of Digital Pathology Whole Slide Imaging Devices" (TPA Guidance) and compliance with international consensus standards. The performance is reported as successful verification showing equivalence.
Acceptance Criteria (TPA Guidance item) | Reported Device Performance (Subject Device with Barco MDCC-4430) | Conclusion on Substantial Equivalence |
---|---|---|
Display type | Color LCD | Substantially equivalent: Minor difference in physical display size is a minor change and does not raise any questions of safety or effectiveness. |
Manufacturer | Barco N.V. | Same as above. |
Technology | IPS technology with a-Si Thin Film Transistor (unchanged from predicate) | Substantially equivalent: Proposed and predicate device are considered substantially equivalent. |
Physical display size | 714 mm x 478 mm x 74 mm | Substantially equivalent: Minor change, does not raise safety/effectiveness questions. |
Active display area | 655 mm x 410 mm (30.4 inch diagonal) | Substantially equivalent: Slightly higher viewable area is a minor change. Verification testing confirms image quality is equivalent to the predicate device. |
Aspect ratio | 16:10 | Substantially equivalent: This change does not raise any new concerns on safety and effectiveness. Proposed and predicate device are considered substantially equivalent. |
Resolution | 2560 x 1600 pixels | Substantially equivalent: Slightly higher resolution and pixel size is a minor change. Verification testing confirms image quality is equivalent to the predicate device. Conclusion: This change does not raise any new concerns on safety and effectiveness. Proposed and predicate device are considered substantially equivalent. |
Pixel Pitch | 0.256 mm x 0.256 mm | Same as above. |
Color calibration tools (software) | QAWeb Enterprise version 2.14.0 installed on the workstation | Substantially equivalent: New display uses different calibration software, but calibration method (built-in front sensor), calibration targets, and frequency of quality control tests remain unchanged. Conclusion: This change does not raise new safety/effectiveness concerns. |
Color calibration tools (hardware) | Built-in front sensor (same as predicate) | Same as above. |
Additional Non-clinical Performance Tests (TPA Guidance) | Verification that technological characteristics of the display were not affected by the new panel, including: Spatial resolution, Pixel defects, Artifacts, Temporal response, Maximum and minimum luminance, Grayscale, Luminance uniformity, Stability of luminance and chromaticity, Bidirectional reflection distribution function, Gray tracking, Color scale response, Color gamut volume. | Conclusion: Verification for the new display showed that the proposed device has similar technological characteristics compared to the predicate device following the TPA guidance. In compliance with international/FDA-recognized consensus standards (IEC 60601-1, IEC 60601-1-6, IEC 62471, ISO 14971). Safe and effective, conforms to intended use. |
2. Sample Size Used for the Test Set and Data Provenance
The document does not explicitly state a "sample size" in terms of cases or images for the non-clinical performance tests. The tests were performed on "the display of the proposed device" to verify its technological characteristics. This implies testing on representative units of the Barco MDCC-4430 display.
The data provenance is not specified in terms of country of origin or retrospective/prospective, as the tests were bench testing (laboratory-based performance evaluation of the display hardware) rather than clinical studies with patient data.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and their Qualifications
This information is not applicable to this submission. The tests performed were technical performance evaluations of hardware (the display), not clinical evaluations requiring expert interpretation of medical images. Ground truth for these technical tests would be established by objective measurements against specified technical standards and parameters.
4. Adjudication Method for the Test Set
This information is not applicable to this submission. As the tests were technical performance evaluations of hardware, there would not be an adjudication process involving multiple human observers interpreting results in the same way there would be for a clinical trial.
5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study was done
No, a Multi Reader Multi Case (MRMC) comparative effectiveness study was not done.
The submission explicitly states: "The proposed device with the new display did not require clinical performance data since substantial equivalence to the currently marketed predicate device was demonstrated with the following attributes: Intended Use / Indications for Use, Technological characteristics, Non-clinical performance testing, and Safety and effectiveness."
Therefore, there is no effect size reported for human readers with and without AI assistance, as AI functionality for diagnostic interpretation is not the subject of this 510(k) (the PIPS 5.1 "does not include any automated image analysis applications that would constitute computer aided detection or diagnosis").
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done
This information is not applicable. The PIPS 5.1 is a digital slide creation, viewing, and management system, not an AI algorithm for diagnostic interpretation. The focus of this 510(k) is the display component. The device itself is designed for human-in-the-loop use by a pathologist.
7. The Type of Ground Truth Used
For the non-clinical performance data, the "ground truth" was based on:
- International and FDA-recognized consensus standards: This includes IEC 60601-1, IEC 60601-1-6, IEC 62471, and ISO 14971.
- TPA Guidance: The "Technical Performance Assessment of Digital Pathology Whole Slide Imaging Devices" guidance document, which specifies technical parameters for displays.
- Predicate device characteristics: Demonstrating that the new display's performance matches or is equivalent to the legally marketed predicate device's display across various technical parameters.
In essence, the ground truth was established by engineering specifications, technical performance targets, and regulatory standards for display devices.
8. The Sample Size for the Training Set
This information is not applicable. The PIPS 5.1, as described, is a system for digital pathology, not an AI algorithm that requires a training set of data. The 510(k) specifically mentions: "The PIPS does not include any automated image analysis applications that would constitute computer aided detection or diagnosis." Therefore, there is no AI training set.
9. How the Ground Truth for the Training Set Was Established
This information is not applicable, as there is no AI training set.
(259 days)
Re: K241717
Trade/Device Name: Epredia E1000 Dx Digital Pathology Solution
Regulation Number: 21 CFR 864.3700
Classification Name: Whole Slide Imaging System
The Epredia E1000 Dx Digital Pathology Solution is an automated digital slide creation, viewing, and management system. The Epredia E1000 Dx Digital Pathology Solution is intended for in vitro diagnostic use as an aid to the pathologist to review and interpret digital images of surgical pathology slides prepared from formalin-fixed paraffin embedded (FFPE) tissue. The Epredia E1000 Dx Digital Pathology Solution is not intended for use with frozen section, cytology, or non-FFPE hematopathology specimens.
The Epredia E1000 Dx Digital Pathology Solution consists of a Scanner (E1000 Dx Digital Pathology Scanner), which generates images in the MRXS file format, E1000 Dx Scanner Software, an Image Management System (E1000 Dx IMS), E1000 Dx Viewer Software, and a Display (Barco MDPC-8127). The Epredia E1000 Dx Digital Pathology Solution is for the creation and viewing of digital images of scanned glass slides that would otherwise be appropriate for manual visualization by conventional light microscopy. It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the validity of the interpretation of images obtained using the Epredia E1000 Dx Digital Pathology Solution.
The E1000 Dx Digital Pathology Solution is a high-capacity, automated whole slide imaging system for the creation, viewing, and management of digital images of surgical pathology slides. It allows whole slide digital images to be viewed on a display monitor that would otherwise be appropriate for manual visualization by conventional brightfield microscopy.
The E1000 Dx Digital Pathology Solution consists of the following three components:

Scanner component:
- E1000 Dx Digital Pathology Scanner with E1000 firmware version 2.0.3
- E1000 Dx Scanner Software version 2.0.3

Viewer component:
- E1000 Dx Image Management System (IMS) Server version 2.3.2
- E1000 Dx Viewer Software version 2.7.2

Display component:
- Barco MDPC-8127
The E1000 Dx Digital Pathology Solution automatically creates digital whole slide images by scanning formalin-fixed, paraffin-embedded (FFPE) tissue slides, with a capacity to process up to 1,000 slides. The E1000 Dx Scanner Software (EDSS), which runs on the scanner workstation, controls the operation of the E1000 Dx Digital Pathology Scanner. The scanner workstation, provided with the E1000 Dx Digital Pathology Solution, includes a PC, monitor, keyboard, and mouse. The solution uses a proprietary MRXS format to store and transmit images between the E1000 Dx Digital Pathology Scanner and the E1000 Dx Image Management System (IMS).
The E1000 Dx IMS is a software component intended for use with the Barco MDPC-8127 display monitor and runs on a separate, customer-provided pathologist viewing workstation PC. The E1000 Dx Viewer, an application managed through the E1000 Dx IMS, allows the obtained digital whole slide images to be annotated, stored, accessed, and examined on Barco MDPC-8127 video display monitor. This functionality aids pathologists in interpreting digital images as an alternative to conventional brightfield microscopy.
Here's a breakdown of the acceptance criteria and study proving the device meets them, based on the provided text:
Important Note: The provided text describes a Whole Slide Imaging System for digital pathology, which aids pathologists in reviewing and interpreting digital images of traditional glass slides. It does not describe an AI device for automated diagnosis or detection. Therefore, concepts like "effect size of how much human readers improve with AI vs without AI assistance" or "standalone (algorithm only without human-in-the-loop performance)" are not directly applicable to this device's proven capabilities as per the provided information.
Acceptance Criteria and Reported Device Performance
The core acceptance criterion for this device appears to be non-inferiority to optical microscopy in terms of major discordance rates when comparing digital review to a main sign-out diagnosis. Additionally, precision (intra-system, inter-system repeatability, and inter-site reproducibility) is a key performance metric.
Table 1: Overall Major Discordance Rate for MD and MO
Metric | Acceptance Criteria (Implied Non-inferiority) | Reported Device Performance (Epredia E1000 Dx) |
---|---|---|
MD Major Discordance Rate | N/A (Compared to MO's performance) | 2.51% (95% CI: 2.26%; 2.79%) |
MO Major Discordance Rate | N/A (Baseline for comparison) | 2.59% (95% CI: 2.29%; 2.82%) |
Difference MD - MO | Within an acceptable non-inferiority margin | -0.15% (95% CI: -0.40%, 0.41%) |
Study Met Acceptance Criteria | Yes, as defined in the protocol | Met |
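The non-inferiority comparison summarized above can be sketched numerically. In the sketch below, the discordance counts are hypothetical, back-calculated from the reported rates (~2.51% of 3897 MD reviews, ~2.59% of 3881 MO reviews), and the 4% non-inferiority margin is an assumption; the study's actual variance estimation and margin are not stated in the summary.

```python
# Hedged sketch of a non-inferiority check on major discordance rates.
# Counts are hypothetical, back-calculated from the reported rates.
import math

def discordance_diff_ci(x1, n1, x2, n2, z=1.96):
    """Difference of two discordance proportions with a Wald 95% CI."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, (diff - z * se, diff + z * se)

# ~98 major discordances in 3897 digital reviews vs. ~100 in 3881 optical
diff, (lo, hi) = discordance_diff_ci(98, 3897, 100, 3881)

margin = 0.04  # assumed non-inferiority margin; not stated in the summary
print(f"MD - MO = {diff:+.4f}, 95% CI [{lo:+.4f}, {hi:+.4f}]")
print("non-inferior" if hi < margin else "inconclusive")
```

Non-inferiority is concluded when the upper bound of the confidence interval for (MD − MO) stays below the prespecified margin, which is the logic behind the "Met" row above.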
Precision Study Acceptance Criteria and Reported Performance
Metric | Acceptance Criteria (Lower limit of 95% CI) | Reported Device Performance (Epredia E1000 Dx) |
---|---|---|
Intra-System Repeatability (Average Positive Agreement) | > 85% | 96.9% (Lower limit of 96.1%) |
Inter-System Repeatability (Average Positive Agreement) | > 85% | 95.1% (Lower limit of 94.1%) |
Inter-Site Reproducibility (Average Positive Agreement) | > 85% | 95.4% (Lower limit of 93.6%) |
All Precision Studies Met Acceptance Criteria | Yes | Met |
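Average positive agreement (APA), the statistic in the table above, can be illustrated with a small sketch. The reads and the exact form of the statistic below are illustrative assumptions; the summary does not describe the scoring or the confidence-interval method.

```python
# Hedged sketch: average positive agreement (APA) for paired binary
# feature reads. The reads are hypothetical.
def average_positive_agreement(reads_a, reads_b):
    """APA = 2 * (both positive) / (positives by A + positives by B)."""
    both = sum(1 for a, b in zip(reads_a, reads_b) if a and b)
    return 2 * both / (sum(reads_a) + sum(reads_b))

# Reader B misses one of reader A's 20 positive feature calls (hypothetical)
reads_a = [1] * 20
reads_b = [1] * 19 + [0]
apa = average_positive_agreement(reads_a, reads_b)
print(f"APA = {apa:.3f}")
```

The acceptance criterion would then be applied to the lower limit of a 95% CI on this statistic, typically obtained by resampling, which is why the table reports both a point estimate and a lower limit.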
Study Details
2. Sample Size and Data Provenance:
- Clinical Accuracy Study (Non-inferiority):
  - Test Set Sample Size: 3897 digital image reviews (MD) and 3881 optical microscope reviews (MO). The dataset comprises surgical pathology slides prepared from formalin-fixed paraffin embedded (FFPE) tissue.
  - Data Provenance: Not explicitly stated, but clinical studies for FDA clearance typically involve multiple institutions, often within the US or compliant with international standards, and are prospective in nature for device validation. The "multi-centered" description suggests multiple sites, implying diverse data. It is a "blinded, and randomized study," which are characteristics of a prospective study.
- Precision Studies (Intra-system, Inter-system, Inter-site):
  - Test Set Sample Size: A "comprehensive set of clinical specimens with defined, clinically relevant histologic features from various organ systems" was used. Sample sizes are reported as pairwise agreement counts (e.g., 2,511 comparison pairs for the intra-system and inter-system studies; 837 comparison pairs for the inter-site study) rather than raw slide or FOV counts.
  - Data Provenance: Clinical specimens. Not specified directly, but likely from multiple sites for the reproducibility studies, suggesting a diverse, possibly prospective, collection.
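The reported comparison-pair counts are consistent with all-pairs comparisons among three readings over a fixed set of units; this back-calculation is an inference for illustration, not something stated in the summary.

```python
# Inference sketch: 2,511 intra-/inter-system pairs could arise from
# 837 units each read three times, compared in all unordered pairs.
from math import comb

units = 837          # hypothetical number of units per condition
readings = 3         # three reading pathologists / repeated reads
pairs_per_unit = comb(readings, 2)  # 3 unordered pairs among 3 readings
print(pairs_per_unit * units)  # 2511
```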
3. Number of Experts and Qualifications:
- Clinical Accuracy Study: The study involved multiple pathologists who performed both digital and optical reviews. The exact number of pathologists is not specified beyond "pathologist" and "qualified pathologist." Their qualifications are generally implied by "qualified pathologist" and the context of a clinical study for an FDA-cleared device.
- Precision Studies:
- Intra-System Repeatability: "three different reading pathologists (RPs)."
- Inter-System Repeatability: "Three reading pathologists."
- Inter-Site Reproducibility: "three different reading pathologists, each located at one of three different sites."
- Qualifications: Referred to as "reading pathologists," implying trained and qualified professionals experienced in interpreting pathology slides.
4. Adjudication Method for the Test Set:
- Clinical Accuracy Study: The ground truth was established by a "main sign-out diagnosis (SD)." This implies a definitive diagnosis made by a primary pathologist, which served as the reference standard. It's not specified if this "main sign-out diagnosis" itself involved an adjudication process, but it is presented as the final reference.
- Precision Studies: For the precision studies, agreement rates were calculated based on the pathologists' readings of predetermined features on "fields of view (FOVs)." While individual "original assessment" seems to be the baseline for agreement in the intra-system study, the method to establish a single ground truth for all FOVs prior to the study (if any, beyond the initial "defined, clinically relevant histologic features") or an adjudication process during the study is not explicitly detailed. The agreement rates are pairwise comparisons between observers or system readings, not necessarily against a single adjudicated ground truth for each FOV.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:
- A comparative effectiveness study was indeed done, comparing human performance with the E1000 Dx Digital Pathology Solution (MD) to human performance with an optical microscope (MO).
- Effect Size: The study demonstrated non-inferiority of digital review to optical microscopy. The "effect size" is captured by the difference in major discordance rates:
- The estimated difference (MD - MO) was -0.15% (95% CI: -0.40%, 0.41%). This narrow confidence interval, inclusive of zero and generally close to zero, supports the non-inferiority claim, indicating no significant practical difference in major discordance rates between the two modalities when used by human readers.
6. Standalone (Algorithm Only) Performance:
- No, a standalone (algorithm only) performance study was not conducted or described. This device is a Whole Slide Imaging System intended as an aid to the pathologist for human review and interpretation, not an AI for automated diagnosis.
7. Type of Ground Truth Used:
- Clinical Accuracy Study: The ground truth used was the "main sign-out diagnosis (SD)." This is a form of expert consensus or definitive clinical diagnosis, widely accepted as the reference standard in pathology.
- Precision Study: For the precision studies, "defined, clinically relevant histologic features" were used, and pathologists recorded the presence of these features. While not explicitly stated as a "ground truth" in the same way as the sign-out diagnosis, the 'original assessment' or 'presumed correct' feature presence often serves as a practical ground truth for repeatability and reproducibility calculations.
8. Sample Size for the Training Set:
- The document does not mention a training set as this device is not an AI/ML algorithm that learns from data. It's a hardware and software system designed to digitize and display images for human review. The "development processes" mentioned are for the hardware and software functionality, not for training a model.
9. How the Ground Truth for the Training Set Was Established:
- This question is not applicable as there is no training set for this device as described. Ground truth establishment mentioned in the document relates to clinical validation and precision, not AI model training.
(248 days)
Sq, Floor 37 New York, NY 10036
Re: K241273
Trade/Device Name: FullFocus
Regulation Number: 21 CFR 864.3700
Classification Name: Whole Slide Imaging System
Device Class: Class II; CFR section: 864.3700; Product code: PSY
For In Vitro Diagnostic Use
FullFocus is a software intended for viewing and management of digital images of scanned surgical pathology slides prepared from formalin-fixed paraffin embedded (FFPE) tissue. It is an aid to the pathologist to review, interpret and manage digital images of pathology slides for primary diagnosis. FullFocus is not intended for use with frozen sections, cytology, or non-FFPE hematopathology specimens.
It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the quality of the images obtained and, where necessary, use conventional light microscopy review when making a diagnostic decision. FullFocus is intended to be used with the interoperable components specified in the below Table.
Table: Interoperable components of FullFocus

Scanner Hardware | Scanner Output file format | Interoperable Displays |
---|---|---|
Leica Aperio GT 450 DX scanner | DICOM, SVS | Dell UP3017, Dell U3023E |
Hamamatsu NanoZoomer S360MD Slide Scanner | NDPI | Dell U3223QE, JVC-Kenwood JD-C240BN01A |
FullFocus, version 2.29, is a web-based software-only device that facilitates the viewing and navigating of digitized pathology images of slides prepared from FFPE-tissue specimens acquired from FDA cleared digital pathology scanners on FDA cleared displays. FullFocus renders these digitized pathology images for review, management and navigation for pathology primary diagnosis.
Image acquisition is performed using the intended scanner(s), with the operator conducting quality control on the digital WSI images according to the scanner's instructions for use and lab specifications to determine whether re-scans are needed. Please see the Intended Use section and the tables below for specifics on scanners and respective displays for clinical use.
Once a whole slide image is acquired using the intended scanner and becomes available in the scanner's database file system, a separate medical image communications software (not part of the device), automatically uploads the image and corresponding metadata to persistent cloud storage. Integrity checks are performed during the upload to ensure data accuracy.
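The "integrity checks" performed during upload are not specified in the summary; one common implementation is a checksum comparison, sketched below. The SHA-256 choice, the streaming read, and the function names are all assumptions for illustration.

```python
# Hedged sketch: checksum-based integrity check for an uploaded image.
import hashlib
import io

def sha256_of(stream, chunk=1 << 20):
    """Digest a binary stream in chunks (large WSI files won't fit in RAM)."""
    h = hashlib.sha256()
    for block in iter(lambda: stream.read(chunk), b""):
        h.update(block)
    return h.hexdigest()

def upload_is_intact(local_stream, remote_digest):
    """Compare the local digest against the digest reported after upload."""
    return sha256_of(local_stream) == remote_digest

payload = b"fake WSI bytes"  # stand-in for an image file's contents
expected = hashlib.sha256(payload).hexdigest()
ok = upload_is_intact(io.BytesIO(payload), expected)
print("integrity ok" if ok else "re-upload needed")
```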
The subject device enables the reading pathologist to open a patient case, view the images, and perform actions such as zooming, panning, measuring distances and annotating images as needed. After reviewing all images for a case, the pathologist will render a diagnosis.
FullFocus operates with and is validated for use with the FDA cleared components specified in the tables below:
Table 1: Interoperable Components Intended for Use with FullFocus

Scanner Hardware | Scanner Output file format | Interoperable Displays |
---|---|---|
Leica Aperio GT 450 DX scanner | DICOM, SVS | Dell UP3017, Dell U3023E |
Hamamatsu NanoZoomer S360MD Slide Scanner | NDPI | Dell U3223QE, JVC-Kenwood JD-C240BN01A |
FullFocus version 2.29 was not validated for the use with images generated with Philips Ultra Fast Scanner.
Table 2: Computer Environment/System Requirements for the Use of FullFocus

Environment | Component | Minimum Requirements |
---|---|---|
Hardware | Processor | 1 CPU, 2 cores, 1.6 GHz |
Hardware | Memory | 4 GB RAM |
Hardware | Network | Bandwidth of 10 Mbps |
Software | Operating System | Windows, macOS |
Software | Browser | Google Chrome (129.0.6668.90 or higher), Microsoft Edge (129.0.2792.79 or higher) |
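A deployment might gate access on the minimum browser builds listed in Table 2. The sketch below compares dotted build numbers as integer tuples; whether and how the real software enforces these minimums is not described, so this is purely illustrative.

```python
# Hedged sketch: checking a browser build against the Table 2 minimums.
MINIMUM_BUILDS = {
    "chrome": (129, 0, 6668, 90),
    "edge": (129, 0, 2792, 79),
}

def meets_minimum(browser, version_string):
    """Dotted build numbers compare correctly as integer tuples."""
    version = tuple(int(part) for part in version_string.split("."))
    return version >= MINIMUM_BUILDS[browser]

print(meets_minimum("chrome", "129.0.6668.100"))  # True
print(meets_minimum("edge", "128.0.2792.79"))     # False
```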
Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criterion | Reported Device Performance |
---|---|
Pixel-wise comparison: The 95th percentile of pixel-wise color differences in any image pair across all required screenshots must be less than 3.0 ΔE00 when compared to comparator (predicate device's Image Review Manipulation Software - IRMS) for identical image reproduction. This indicates visual adequacy for human readers. | The 95th percentile of pixel-wise differences between FullFocus and the comparators were less than 3 CIEDE2000, indicating that their output images can be considered to be pixel-wise identical. FullFocus has been found to visually adequately reproduce digital pathology images to human readers with respect to its intended use. |
Turnaround time (Case selection): It should not take longer than 10 seconds until the image is fully loaded when selecting a case. | System requirements fulfilled: Not longer than 10 seconds until the image is fully loaded. |
Turnaround time (Panning/Zooming): It shall not take longer than 7 seconds until the image is fully loaded when panning and zooming the image. | System requirements fulfilled: Not longer than 7 seconds until the image is fully loaded. |
Measurement Accuracy (Straight Line): The 1mm measured line should match the reference value exactly 1mm ± 0mm. | All straight-line measurements compared to the reference were exactly 1mm, with no error. |
Measurement Accuracy (Area): The measured area must match the reference area exactly 0.2 x 0.2 mm for a total of 0.04 mm² ± 0 mm². | All area measurements compared to the reference value were exactly 0.04mm², with no error. |
Measurement Accuracy (Scalebar): 2mm scalebar is accurate. | All Tests Passed. |
Human Factors Testing: (Implied from previous clearance) Safe and effective use by representative users for critical user tasks and use scenarios. | Human factors study designed around critical user tasks and use scenarios performed by representative users were conducted for previously cleared FullFocus, version 1.2.1, in K201005, per FDA guidance “Applying Human Factors and Usability Engineering to Medical Devices (2016)". Human factors validation testing is not necessary as the user interface hasn't changed. |
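The pixel-wise criterion above (95th percentile of per-pixel color differences below 3.0 ΔE00) can be sketched as follows. For brevity the sketch uses the simple Euclidean CIE76 ΔE in L\*a\*b\* space as a stand-in for the CIEDE2000 formula the study used (CIEDE2000 adds lightness, chroma, and hue weighting), and the Lab values are made up.

```python
# Hedged sketch of the 95th-percentile color-difference check.
# delta_e76 is a simplified stand-in for the study's CIEDE2000.
import math

def delta_e76(lab1, lab2):
    """Euclidean distance in L*a*b* (CIE76)."""
    return math.dist(lab1, lab2)

def percentile_95(values):
    """Nearest-rank 95th percentile."""
    ordered = sorted(values)
    rank = max(0, math.ceil(0.95 * len(ordered)) - 1)
    return ordered[rank]

# Hypothetical per-pixel Lab pairs (viewer rendering vs. comparator)
pairs = [
    ((50.0, 10.0, 10.0), (50.5, 10.2, 9.8)),
    ((60.0, -5.0, 3.0), (60.1, -5.1, 3.2)),
    ((70.0, 0.0, 0.0), (70.0, 0.4, -0.3)),
]
diffs = [delta_e76(a, b) for a, b in pairs]
print("pass" if percentile_95(diffs) < 3.0 else "fail")
```

The real test would apply this over every pixel of every required screenshot pair, passing only if all 95th-percentile values stay under the 3.0 threshold.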
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Pixel-wise Comparison: 30 formalin-fixed paraffin-embedded (FFPE) tissue glass slides, representing a range of human anatomical sites.
- Sample Size for Turnaround Time & Measurements: Not explicitly stated as a number of distinct cases or images beyond the 30 slides used for pixel-wise comparison. For measurements, a "1 Calibration Slide" was used per test.
- Data Provenance: The text does not explicitly state the country of origin. The slides are described as "representing a range of human anatomical sites," implying a diverse set of real-world pathology samples. It is a retrospective study as it states "30 formalin-fixed paraffin-embedded (FFPE) tissue glass slides... were scanned".
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Pixel-wise Comparison: "For each WSI, three regions of interest (ROIs) were identified to highlight relevant pathological features, as verified by a pathologist."
- Number of Experts: At least one pathologist.
- Qualifications: "A pathologist" (specific qualifications like years of experience are not provided).
- Measurements: No expert was explicitly mentioned for establishing ground truth for measurements; it relies on a "test image containing objects with known sizes" (calibration slide) and "reference value."
4. Adjudication Method for the Test Set
- The text does not mention an explicit adjudication method (like 2+1 or 3+1 consensus) for the pixel-wise comparison or measurement accuracy. For the pixel-wise comparison, ROIs were "verified by a pathologist," suggesting a single-expert verification rather than a consensus process.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
- No, an MRMC comparative effectiveness study was not done in this context. The study focused on demonstrating identical image reproduction (pixel-wise comparison) and technical performance (turnaround time, measurement accuracy) of the FullFocus viewer against predicate devices' viewing components. It did not directly assess the improvement in human reader performance (e.g., diagnostic accuracy or efficiency) with or without AI assistance. The device is a "viewer and management software," not an AI diagnostic aid in the sense of providing specific findings or interpretations.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
- Yes, a standalone "algorithm only" performance was effectively done for the technical aspects. The pixel-wise comparison directly compares the image rendering of FullFocus with the predicate viewer's rendering without human intervention in the comparison process itself (though a pathologist verified ROIs). Similarly, turnaround times and measurement accuracy are intrinsic technical performances of the software.
7. The Type of Ground Truth Used
- Pixel-wise Comparison: The ground truth for this test was the digital image data as rendered by the predicate device's IRMS. The goal was to show that FullFocus reproduces the same image data. The "relevant pathological features" within ROIs were "verified by a pathologist" which served as a reference for what areas to test, not necessarily a diagnostic ground truth for the device's output.
- Measurements: The ground truth was based on known physical dimensions within a calibration slide and corresponding "reference values."
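Measurement accuracy of this kind rests on converting pixel distances to physical units via the scan's microns-per-pixel (MPP) metadata. The 0.25 µm/px in the sketch below is a typical 40x figure assumed for illustration; the submission does not state the actual value.

```python
# Hedged sketch: pixel-to-millimeter conversion using assumed MPP metadata.
MPP = 0.25  # microns per pixel; hypothetical, not from the submission

def pixels_to_mm(n_pixels, mpp=MPP):
    """Physical length of a pixel span: pixels * (um/px) / (um/mm)."""
    return n_pixels * mpp / 1000.0

line_mm = pixels_to_mm(4000)  # a 1 mm calibration line spans 4000 px here
side_mm = pixels_to_mm(800)   # 0.2 mm per side of the 0.04 mm^2 area target
print(line_mm)  # 1.0
```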
8. The Sample Size for the Training Set
- The provided text does not mention a training set. This is expected because FullFocus is a viewer and management software for digital pathology images, not an AI or machine learning algorithm that is "trained" on data to make predictions or assist in diagnosis directly. Its core function is to display existing image data accurately and efficiently.
9. How the Ground Truth for the Training Set Was Established
- As no training set is mentioned (FullFocus is viewer software, not a trained model), this question is not applicable based on the provided text.
(458 days)
Trade/Device Name: 8MP Color LCD Displays C811W, C811WT, PA27, PA27T
Regulation Number: 21 CFR 864.3700
Classification Name: Digital Pathology Display
8MP Color LCD Displays C811W, C811WT, PA27 and PA27T are intended for in vitro diagnostic use to display digital images of histopathology slides acquired from IVD-labeled whole-slide imaging scanners and viewed using IVD-labeled digital pathology image viewing software that have been validated for use with this device. They are an aid to the pathologist to review and interpret digital images of histopathology slides for primary diagnosis. It is the responsibility of the pathologist to employ appropriate procedures and safeguards to assure the validity of the interpretation of images using 8MP Color LCD Displays C811W, C811WT, PA27 and PA27T. The displays are not intended for use with digital images from frozen section, cytology, or non-formalin-fixed, paraffin embedded (non-FFPE) hematopathology specimens.
The C811WT, PA27T, C811W, and PA27 are 8MP Color LCD Displays, specifically intended for review and interpretation of surgical pathology slides from IVD-labeled whole-slide imaging scanners that have been validated for use with the display.
The displays are equipped with a 27-inch color LCD panel with a fine pixel pitch and use the latest generation of LED backlight panels. Built-in brightness stabilization control circuits keep the brightness of these displays stable over their service life, so the products meet the demands of high-precision medical imaging.
Among the C811WT, PA27T, C811W, and PA27, the only functional difference is the capacitive touch screen: the C811WT and PA27T share the same capacitive touch screen, while the C811W and PA27 do not have one. The C811WT and PA27T, and likewise the C811W and PA27, differ from each other only in that they are supplied to different customers, which results in the two model names.
The provided text describes the acceptance criteria and a study proving the device meets these criteria for the Shenzhen Beacon Display Technology Co., Ltd.'s 8MP Color LCD Displays (C811W, C811WT, PA27, PA27T).
Here's the breakdown of the information requested:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are implicitly derived from the "Test" column, and the reported device performance is presented in the "C811W", "C811WT", "PA27", and "PA27T" columns. The predicate device (Barco MDPC-8127) is also listed for comparison of technological characteristics.
Acceptance Criteria (Derived from "Test") | Reported Device Performance (Example for C811W) |
---|---|
User Controls (Luminance, White Point, Color Space, Warm-up time) | Luminance target, Maximum: 500 cd/m2; Display function: sRGB; White point: D65 (6500K); Color space: sRGB; 30 minutes of warm-up time |
Spatial resolution (MTF at Nyquist frequency) | Vertical and Horizontal MTFs are 0.861 and 0.862 at Nyquist frequency |
Pixel defects (count and map) | Total number of bright and dark pixels |
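The Nyquist frequency at which the MTF figures above are quoted is fixed by the panel's pixel pitch. The resolution and diagonal in the sketch (3840×2160 on 27 inches, a common "8MP"-class configuration) are assumptions; the summary does not state the actual pitch.

```python
# Hedged sketch: Nyquist frequency from an assumed panel geometry.
import math

def nyquist_cycles_per_mm(diag_inch, h_px, v_px):
    """Pixel pitch from the diagonal, then f_Nyquist = 1 / (2 * pitch)."""
    pitch_mm = diag_inch * 25.4 / math.hypot(h_px, v_px)
    return 1.0 / (2.0 * pitch_mm)

freq = nyquist_cycles_per_mm(27, 3840, 2160)  # assumed geometry
print(round(freq, 2))  # roughly 3.2 cycles/mm under these assumptions
```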
(92 days)
Arizona 85755
Re: K242783
Trade/Device Name: Roche Digital Pathology Dx
Regulation Number: 21 CFR 864.3700
Classification Name: Whole Slide Imaging System
The device must comply with 21 CFR Parts 801 and 809, as applicable, and the special controls for this device type under 21 CFR 864.3700.
Roche Digital Pathology Dx is an automated digital slide creation, viewing and management system. Roche Digital Pathology Dx is intended for in vitro diagnostic use as an aid to the pathologist to review and interpret digital images of scanned pathology slides prepared from formalin-fixed paraffin-embedded (FFPE) tissue. Roche Digital Pathology Dx is not intended for use with frozen section, cytology, or non-FFPE hematopathology specimens. Roche Digital Pathology Dx is for creation and viewing of digital images of scanned glass slides that would otherwise be appropriate for manual visualization by conventional light microscopy.
Roche Digital Pathology Dx is composed of VENTANA DP 200 slide scanner, VENTANA DP 600 slide scanner, Roche uPath enterprise software, and ASUS PA248QV display. It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the validity of the interpretation of images obtained using Roche Digital Pathology Dx.
Roche Digital Pathology Dx (hereinafter referred to as RDPD), is a whole slide imaging (WSI) system. It is an automated digital slide creation, viewing, and management system intended to aid pathologists in generating, reviewing, and interpreting digital images of surgical pathology slides that would otherwise be appropriate for manual visualization by conventional light microscopy. RDPD system is composed of the following components:
- VENTANA DP 200 slide scanner
- VENTANA DP 600 slide scanner
- Roche uPath enterprise software
- ASUS PA248QV display
VENTANA DP 600 slide scanner has a total capacity of 240 slides through 40 trays with 6 slides each. The VENTANA DP 600 slide scanner and VENTANA DP 200 slide scanner use the same Image Acquisition Unit.
Both VENTANA DP 200 and DP 600 slide scanners are bright-field digital pathology scanners that accommodate loading and scanning of 6 and 240 standard glass microscope slides, respectively. The scanners each have a high-numerical-aperture Plan Apochromat 20x objective and are capable of scanning at both 20x and 40x magnifications. The scanners feature automatic detection of the tissue specimen on the glass slide, automated 1D and 2D barcode reading, and selectable volume scanning (3 to 15 focus layers). The International Color Consortium (ICC) color profile is embedded in each scanned slide image for color management. The scanned slide images are generated in a proprietary file format, BioImagene Image File (BIF), that can be uploaded to the uPath Image Management System (IMS), provided with the Roche uPath enterprise software.
Roche uPath enterprise software (uPath), a component of Roche Digital Pathology Dx system, is a web-based image management and workflow software application. uPath enterprise software can be accessed on a Windows workstation using the Google Chrome or Microsoft Edge web browser. The user interface of uPath software enables laboratories to manage their workflow from the time the whole slide image is produced and acquired by VENTANA DP 200 and/or DP 600 slide scanners through the subsequent processes, such as review of the digital image on the monitor screen and reporting of results. The uPath software incorporates specific functions for pathologists, laboratory histology staff, workflow coordinators, and laboratory administrators.
The provided document is a 510(k) summary for the "Roche Digital Pathology Dx" system (K242783), which is a modification of a previously cleared device (K232879). This modification primarily involves adding a new slide scanner model, VENTANA DP 600, to the existing system. The document asserts that due to the identical Image Acquisition Unit (IAU) between the new DP 600 scanner and the previously cleared DP 200 scanner, the technical performance assessment from the predicate device is fully applicable. Therefore, the information provided below will primarily refer to the studies and acceptance criteria from the predicate device that are deemed applicable to the current submission due to substantial equivalence.
Here's an analysis based on the provided text, focusing on the acceptance criteria and the study that proves the device meets them:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly present a discrete "acceptance criteria" table with corresponding numerical performance metrics for the current submission (K242783). Instead, it states that the performance characteristics data collected on the VENTANA DP 200 slide scanner (the predicate device) are representative of the VENTANA DP 600 slide scanner performance because both scanners use the same Image Acquisition Unit (IAU). The table below lists the "Technical Performance Assessment (TPA) Sections" as presented in the document (Table 3), which serve as categories of performance criteria that were evaluated for the predicate device and are considered applicable to the current device. The reported performance for these sections is summarized as "Information was provided in K232879" for the DP 600, indicating that the past performance data is considered valid.
TPA Section (Acceptance Criteria Category) | Reported Device Performance (for DP 600 scanner) |
---|---|
Components: Slide Feeder | No double wide slide tray compatibility, new FMEA provided in K242783. (Predicate: Information on configuration, user interaction, FMEA) |
Components: Light Source | Information was provided in K232879. (Predicate: Descriptive info on lamp/condenser, spectral distribution verified) |
Components: Imaging Optics | Information was provided in K232879. (Predicate: Optical schematic, descriptive info, testing for irradiance, distortions, aberrations) |
Components: Focusing System | Information was provided in K232879. (Predicate: Schematic, description, optical system, cameras, algorithm) |
Components: Mechanical Scanner Movement | Same except no double wide slide tray compatibility, replaced references & new FMEA items provided in K242783. (Predicate: Information/specs on stage, movement, FMEA, repeatability) |
Components: Digital Imaging Sensor | Information was provided in K232879. (Predicate: Information/specs on sensor type, pixels, responsivity, noise, data, testing) |
Components: Image Processing Software | Information was provided in K232879. (Predicate: Information/specs on exposure, white balance, color correction, subsampling, pixel correction) |
Components: Image Composition | Information was provided in K232879. (Predicate: Information/specs on scanning method, speed, Z-axis planes, analysis of image composition) |
Components: Image File Formats | Information was provided in K232879. (Predicate: Information/specs on compression, ratio, file format, organization) |
Image Review Manipulation Software | Information was provided in K232879. (Predicate: Information/specs on panning, zooming, Z-axis displacement, comparison, image enhancement, annotation, bookmarks) |
Computer Environment | Select upgrades of sub-components & specifications. (Predicate: Information/specs on hardware, OS, graphics, color management, display interface) |
Display | Information was provided in K232879. (Predicate: Information/specs on pixel density, aspect ratio, display surface, and other display characteristics; performance testing for user controls, spatial resolution, pixel defects, artifacts, temporal response, luminance, uniformity, gray tracking, color scale, color gamut) |
System-level Assessments: Color Reproducibility | Information was provided in K232879. (Predicate: Test data for color reproducibility) |
System-level Assessments: Spatial Resolution | Information was provided in K232879. (Predicate: Test data for composite optical performance) |
System-level Assessments: Focusing Test | Information was provided in K232879. (Predicate: Test data for technical focus quality) |
System-level Assessments: Whole Slide Tissue Coverage | Information was provided in K232879. (Predicate: Test data for tissue detection algorithms and inclusion of tissue in digital image file) |
System-level Assessments: Stitching Error | Information was provided in K232879. (Predicate: Test data for stitching errors and artifacts) |
System-level Assessments: Turnaround Time | Information was provided in K232879. (Predicate: Test data for turnaround time) |
User Interface | Identical workflow, replacement of new scanner component depiction. (Predicate: Information on user interaction, human factors/usability validation) |
Labeling | Same content, replaced references. (Predicate: Compliance with 21 CFR Parts 801 and 809, special controls) |
Quality Control | Same content, replaced references. (Predicate: QC activities by user, lab technician, pathologist prior to/after scanning) |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document explicitly states that the technical performance assessment data was "collected on the VENTANA DP 200 slide scanner" (the predicate device). However, the specific sample sizes for these technical studies (e.g., number of slides used for focusing tests, stitching error analysis, etc.) are not detailed in this summary. The data provenance (country of origin, retrospective/prospective) for these underlying technical studies from K232879 is also not provided in this document.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
This section is not applicable or not provided by the document. The studies mentioned are primarily technical performance assessments related to image quality, system components, and usability, rather than diagnostic accuracy studies requiring expert pathologist interpretation for ground truth. For the predicate device, it mentions "a qualified pathologist" is responsible for interpretation, but this refers to the end-user clinical use, not the establishment of ground truth for device validation.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
This section is not applicable or not provided by the document. As noted above, the document details technical performance studies rather than diagnostic performance studies that would typically involve multiple readers and adjudication methods for diagnostic discrepancies.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of how much human readers improve with AI versus without AI assistance?
No MRMC comparative effectiveness study is mentioned in the provided text. The device, "Roche Digital Pathology Dx," is described as a "digital slide creation, viewing and management system" intended "as an aid to the pathologist to review and interpret digital images." It is a Whole Slide Imaging (WSI) system, and the submission is focused on demonstrating the technical equivalence of a new scanner component, not on evaluating AI assistance or its impact on human reader performance.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
No standalone algorithm performance study is mentioned. The device is a WSI system for "aid to the pathologist to review and interpret digital images," implying a human-in-the-loop system. The document does not describe any specific algorithms intended for automated diagnostic interpretation or analysis in a standalone capacity.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
For the technical performance studies referenced from the predicate device (K232879), the "ground truth" would be established by physical measurements and engineering specifications for aspects like spatial resolution, color reproducibility, focusing quality, tissue coverage, and stitching errors, rather than clinical outcomes or diagnostic pathology. For instance, color reproducibility would be assessed against a known color standard, and spatial resolution against resolution targets. The document does not explicitly state the exact types of ground truth used for each technical assessment but refers to "test data to evaluate" these characteristics.
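The document does not name the metric used for the color reproducibility assessment, but such tests are commonly quantified as a CIE color difference (ΔE) between a known reference patch and the scanner's rendering of it. The sketch below illustrates the idea with the simple CIE76 formula; the patch values and the acceptance threshold are hypothetical, not taken from the submission.

```python
import math

def delta_e_cie76(lab_ref, lab_measured):
    """CIE76 color difference between a reference patch and a scanned patch,
    both given as (L*, a*, b*) tuples. Smaller is better; 0 means identical."""
    return math.sqrt(sum((r - m) ** 2 for r, m in zip(lab_ref, lab_measured)))

# Hypothetical values: a nominal patch from a calibration target vs. the
# same patch as measured in the scanner's output image.
reference = (53.2, 80.1, 67.2)
measured = (52.8, 79.5, 68.0)

de = delta_e_cie76(reference, measured)
print(round(de, 2))  # → 1.08

# A (hypothetical) pass/fail check against an engineering specification:
THRESHOLD = 3.0
print("PASS" if de <= THRESHOLD else "FAIL")  # → PASS
```

This mirrors the general pattern described above: the "ground truth" is a physical reference standard, and performance is a measured deviation from it rather than a diagnostic label.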
8. The sample size for the training set
Not applicable/not provided. The document describes a WSI system, not an AI/ML-based diagnostic algorithm that would typically require a training set. The specific "Image Acquisition Unit" components (hardware and software for pixel pipeline) are stated to be "functionally identical" to the predicate, implying established design rather than iterative machine learning.
9. How the ground truth for the training set was established
Not applicable/not provided. As there is no mention of a training set for an AI/ML algorithm, the method for establishing its ground truth is not discussed.