For In Vitro Diagnostic Use
The PathPresenter Clinical Viewer is software intended for viewing and managing whole slide images of scanned glass slides derived from formalin-fixed, paraffin-embedded (FFPE) tissue. It is an aid to pathologists in reviewing and rendering a diagnosis using the digital images for the purposes of primary diagnosis. PathPresenter Clinical is not intended for use with frozen sections, cytology specimens, or non-FFPE specimens. It is the responsibility of the pathologist to employ appropriate procedures and safeguards to assure the validity of the interpretation of images using PathPresenter Clinical software. PathPresenter Clinical Viewer is intended for use with the Hamamatsu NanoZoomer S360MD Slide scanner NDPI image format viewed on the Barco NV MDPC-8127 display device.
The PathPresenter Clinical Viewer (version V1.0.1) is a web-based software application designed for viewing and managing whole slide images generated from scanned glass slides of formalin-fixed, paraffin-embedded (FFPE) surgical pathology tissue. It serves as a diagnostic aid, enabling pathologists to review digital images and render a primary pathology diagnosis. Viewer functions include zooming and panning the image, annotating the image, measuring distances and areas in the image, and retrieving multiple images from the slide tray, including prior cases and deprecated slides.
Here's a breakdown of the acceptance criteria and study information for the PathPresenter Clinical Viewer based on the provided FDA 510(k) clearance letter:
Acceptance Criteria and Device Performance for PathPresenter Clinical Viewer
1. Table of Acceptance Criteria and Reported Device Performance
| Test | Acceptance Criteria | Reported Device Performance |
|---|---|---|
| Pixelwise Comparison | The 95th percentile of the pixel-wise color difference in any image pair is less than 3 CIEDE2000 (< 3 ΔE00) when comparing the PathPresenter Clinical Viewer (Subject Viewer) with the Hamamatsu NanoZoomer S360MD Slide scanner with NZViewMD viewer (Predicate Viewer). | The device demonstrates substantial equivalence, with the 95th percentile of the pixel-wise color difference being less than 3 CIEDE2000 (< 3 ΔE00) for all comparisons (PathPresenter Clinical Viewer on Microsoft Edge vs. Predicate, and PathPresenter Clinical Viewer on Google Chrome vs. Predicate). |
| Turnaround Time (TAT) Study - Image Loading | Loading of the first image visible to the user: ≤ 8 seconds | Actual: 2.72 seconds |
| Turnaround Time (TAT) Study - Panning | Loading of the whole field of view after panning: ≤ 2 seconds | Actual: 1.22 seconds |
| Turnaround Time (TAT) Study - Zooming | Loading of the whole field of view after zooming: ≤ 3 seconds | Actual: 0.60 seconds |
| Measurement Accuracy | All measured values match predetermined measurements relevant to the zoom level of the viewer, with no allowable deviation. | All measured values matched the reference values exactly, with zero observed error across multiple magnification settings for both length and area measurements. |
| Human Factors Study | Critical tasks required for operation are performed accurately and without any use-related errors that could result in patient harm or diagnostic inaccuracies. Device meets HFE/UE requirements and is acceptable for deployment use in clinical settings. | Validation results confirmed that critical tasks were performed accurately and without any use-related errors that could result in patient harm or diagnostic inaccuracies. Observed usability issues were considered easily mitigable through design improvements, training, and labeling, posing no unacceptable risk. |
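For context, the pixel-wise criterion in the table above can be expressed in a few lines of code. Below is a minimal sketch, not the submitted test code, assuming two RGB screen captures of the same ROI at the same zoom level; the file names are hypothetical placeholders.

```python
import numpy as np
from skimage import io
from skimage.color import rgb2lab, deltaE_ciede2000

# Hypothetical captures of the same ROI: subject viewer vs. predicate viewer.
subject = io.imread("roi_subject_viewer.png")[..., :3]
predicate = io.imread("roi_predicate_viewer.png")[..., :3]

# Convert both renders to CIELAB and compute the per-pixel CIEDE2000 map.
delta_e = deltaE_ciede2000(rgb2lab(subject), rgb2lab(predicate))

# Acceptance criterion: 95th percentile of the per-pixel difference < 3.
p95 = np.percentile(delta_e, 95)
print(f"95th percentile dE00: {p95:.2f}")
assert p95 < 3.0, "Image pair fails the < 3 dE00 acceptance criterion"
```

In the actual study this check would be repeated for every image pair (each ROI, zoom level, and browser), with every pair required to pass.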
2. Sample Size Used for the Test Set and Data Provenance
- Pixelwise Comparison:
- Sample Size: Images from 30 FFPE tissue glass slides, with 3 Regions of Interest (ROIs) selected per slide and 2 zoom levels (20x and 40x) per ROI, yielding 180 image pairs per browser comparison (30 slides × 3 ROIs × 2 zoom levels).
- Data Provenance: Not specified. The summary does not state whether the slides were retrospective or prospective, nor their country of origin.
- Turnaround Time Study:
- Sample Size: Minimum of 20 slides.
- Data Provenance: Not specified; see the timing sketch after this list.
- Measurement Accuracy:
- Sample Size: An image of a calibration scale slide with objects of known sizes.
- Data Provenance: Not specified.
- Human Factors Study:
- Sample Size: "Representative users" (board-certified pathologists). No specific number provided.
- Data Provenance: Not specified.
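As referenced above, here is a minimal sketch of how the first-image-load threshold from the turnaround time study could be sampled, assuming a Playwright-driven browser session. The viewer URL and tile selector are hypothetical placeholders, not documented PathPresenter values, and the submission does not describe its actual timing harness.

```python
import time
from playwright.sync_api import sync_playwright

SLIDE_URL = "https://viewer.example.com/slide/123"  # hypothetical URL
TILE_SELECTOR = "canvas.slide-tile"                 # hypothetical selector

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()

    start = time.perf_counter()
    page.goto(SLIDE_URL)
    # Wait until the first slide image is visible to the user.
    page.wait_for_selector(TILE_SELECTOR, state="visible")
    elapsed = time.perf_counter() - start

    browser.close()

print(f"First image visible after {elapsed:.2f} s")
assert elapsed <= 8.0, "Exceeds the <= 8 s first-image-load criterion"
```

The panning (≤ 2 s) and zooming (≤ 3 s) thresholds could be checked the same way, timing from the interaction to the full field of view being rendered.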
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
- Pixelwise Comparison: One board-certified pathologist pre-identified the Regions of Interest (ROIs); no qualifications beyond board certification (e.g., years of experience) are stated. This pathologist was blinded to the rest of the pixel-wise comparison study.
- Measurement Accuracy: The ground truth was established using an image of a calibration scale slide with objects of known sizes; no human experts were involved in ground truth establishment. (See the unit-conversion sketch after this list.)
- Human Factors Study: "Board-certified pathologists" served as representative users, performing critical tasks. They were the subjects of the study, not necessarily establishing ground truth for device performance but rather testing usability and safety in a clinical context.
4. Adjudication Method for the Test Set
- The document does not explicitly describe an adjudication method for establishing ground truth for the pixelwise comparison or measurement accuracy tests.
- For the pixelwise comparison, the individual pathologist selected ROIs, but the automated comparison of pixel differences is a technical measurement against a predicate, not requiring multi-expert adjudication.
- For measurement accuracy, it was a direct comparison against known values from a calibration slide.
- The Human Factors study involved observations of user performance, not adjudication of diagnostic interpretations.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of Human Reader Improvement with AI vs. Without AI Assistance
- No, an MRMC comparative effectiveness study was not performed, or at least not described in this summary. The PathPresenter Clinical Viewer is described as a "software intended for viewing and managing whole slide images" and an "aid to pathologists." It is a viewer, not an AI diagnostic algorithm that provides an independent reading or an assistive output that would typically be evaluated in an MRMC study comparing human performance with and without AI assistance. The studies focused on technical equivalence to a predicate viewer and usability.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
- The document describes standalone performance tests for certain aspects of the software, particularly the pixelwise comparison and measurement accuracy. These tests evaluated the software's output (pixel rendering, measurement values) directly against a known reference (predicate viewer, calibration slide) without human interpretation as part of the core evaluation criteria.
- However, the overall device is an "aid to pathologists" implying human-in-the-loop use. There is no mention of the software making diagnostic recommendations or interpretations by itself.
7. The Type of Ground Truth Used
- Pixelwise Comparison: The ground truth for comparison was the image output from the predicate device (Hamamatsu NanoZoomer S360MD Slide scanner with NZViewMD viewer). The "ground truth" here is the established rendering quality of the predicate viewer.
- Turnaround Time Study: The ground truth involved established time targets for specific operations.
- Measurement Accuracy: The ground truth was the known measurements of objects on a calibration scale slide.
- Human Factors Study: The "ground truth" for success was the accurate and error-free performance of critical tasks by representative users, consistent with safety.
8. The Sample Size for the Training Set
- The provided document does not mention a training set for the PathPresenter Clinical Viewer. This is expected as the viewer is described as image management and viewing software, not an AI algorithm that learns from data.
9. How the Ground Truth for the Training Set Was Established
- As no training set is mentioned or implied for this device, information on how its ground truth was established is not applicable here.