Search Results
Found 3 results
510(k) Data Aggregation
(248 days)
For In Vitro Diagnostic Use
FullFocus is a software intended for viewing and management of digital images of scanned surgical pathology slides prepared from formalin-fixed paraffin embedded (FFPE) tissue. It is an aid to the pathologist to review, interpret and manage digital images of pathology slides for primary diagnosis. FullFocus is not intended for use with frozen sections, cytology, or non-FFPE hematopathology specimens.
It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the quality of the images obtained and, where necessary, use conventional light microscopy review when making a diagnostic decision. FullFocus is intended to be used with the interoperable components specified in the table below.
Table: Interoperable components of FullFocus

Scanner Hardware | Scanner Output File Format | Interoperable Displays |
---|---|---|
Leica Aperio GT 450 DX scanner | DICOM, SVS | Dell UP3017, Dell U3023E |
Hamamatsu NanoZoomer S360MD Slide Scanner | NDPI | Dell U3223QE, JVC-Kenwood JD-C240BN01A |
FullFocus, version 2.29, is a web-based, software-only device that facilitates viewing and navigation of digitized pathology images of slides prepared from FFPE tissue specimens, acquired from FDA cleared digital pathology scanners and viewed on FDA cleared displays. FullFocus renders these digitized pathology images for review, management and navigation for pathology primary diagnosis.
Image acquisition is performed using the intended scanner(s), with the operator conducting quality control on the digital whole slide images (WSI) according to the scanner's instructions for use and lab specifications to determine if re-scans are needed. Please see the Intended Use section and the tables below for specifics on scanners and respective displays for clinical use.
Once a whole slide image is acquired using the intended scanner and becomes available in the scanner's database file system, a separate medical image communications software (not part of the device), automatically uploads the image and corresponding metadata to persistent cloud storage. Integrity checks are performed during the upload to ensure data accuracy.
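The summary does not describe how these integrity checks are implemented. A minimal sketch of one common approach — comparing a cryptographic checksum computed locally against one reported by the storage service — follows; all names here are hypothetical illustrations, not part of the device:

```python
# Hypothetical sketch of an upload integrity check via SHA-256 checksums.
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash the file in 1 MiB chunks so large WSI files don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_upload(local_path: Path, checksum_reported_by_storage: str) -> bool:
    """True if the uploaded copy matches the local original byte-for-byte."""
    return sha256_of(local_path) == checksum_reported_by_storage
```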
The subject device enables the reading pathologist to open a patient case, view the images, and perform actions such as zooming, panning, measuring distances and annotating images as needed. After reviewing all images for a case, the pathologist will render a diagnosis.
FullFocus operates with and is validated for use with the FDA cleared components specified in the tables below:
Scanner Hardware | Scanner Output File Format | Interoperable Displays |
---|---|---|
Leica Aperio GT 450 DX scanner | DICOM, SVS | Dell UP3017, Dell U3023E |
Hamamatsu NanoZoomer S360MD Slide Scanner | NDPI | Dell U3223QE, JVC-Kenwood JD-C240BN01A |
Table 1: Interoperable Components Intended for Use with FullFocus
FullFocus version 2.29 was not validated for use with images generated by the Philips Ultra Fast Scanner.
Table 2: Computer Environment/System Requirements for Use of FullFocus
Environment | Component | Minimum Requirements |
---|---|---|
Hardware | Processor | 1 CPU, 2 cores, 1.6 GHz |
Hardware | Memory | 4 GB RAM |
Hardware | Network | Bandwidth of 10 Mbps |
Software | Operating System | Windows or macOS |
Software | Browser | Google Chrome (129.0.6668.90 or higher) or Microsoft Edge (129.0.2792.79 or higher) |
Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criterion | Reported Device Performance |
---|---|
Pixel-wise comparison: The 95th percentile of pixel-wise color differences in any image pair across all required screenshots must be less than 3.0 ΔE00 (CIEDE2000) when compared to the comparator (the predicate device's Image Review Manipulation Software, IRMS) for identical image reproduction. This indicates visual adequacy for human readers. (A verification sketch follows the table.) | The 95th percentile of pixel-wise differences between FullFocus and the comparators was less than 3.0 ΔE00, indicating that their output images can be considered pixel-wise identical. FullFocus was found to visually adequately reproduce digital pathology images to human readers with respect to its intended use. |
Turnaround time (Case selection): It should not take longer than 10 seconds until the image is fully loaded when selecting a case. | System requirements fulfilled: Not longer than 10 seconds until the image is fully loaded. |
Turnaround time (Panning/Zooming): It shall not take longer than 7 seconds until the image is fully loaded when panning and zooming the image. | System requirements fulfilled: Not longer than 7 seconds until the image is fully loaded. |
Measurement Accuracy (Straight Line): The 1mm measured line should match the reference value exactly 1mm ± 0mm. | All straight-line measurements compared to the reference were exactly 1mm, with no error. |
Measurement Accuracy (Area): The measured area must match the reference area exactly 0.2 x 0.2 mm for a total of 0.04 mm² ± 0 mm². | All area measurements compared to the reference value were exactly 0.04mm², with no error. |
Measurement Accuracy (Scalebar): 2mm scalebar is accurate. | All Tests Passed. |
Human Factors Testing: (Implied from previous clearance) Safe and effective use by representative users for critical user tasks and use scenarios. | A human factors study designed around critical user tasks and use scenarios performed by representative users was conducted for the previously cleared FullFocus, version 1.2.1 (K201005), per the FDA guidance "Applying Human Factors and Usability Engineering to Medical Devices" (2016). Human factors validation testing was not deemed necessary for this version because the user interface has not changed. |
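The pixel-wise criterion in the first row can be reproduced with off-the-shelf tooling. A minimal sketch, assuming the two screenshots are same-sized sRGB images and using scikit-image's CIEDE2000 implementation (file names are placeholders):

```python
# Sketch: 95th percentile of per-pixel CIEDE2000 differences between two
# screenshots, checked against the 3.0 dE00 acceptance threshold.
import numpy as np
from imageio.v3 import imread
from skimage.color import rgb2lab, deltaE_ciede2000

def p95_delta_e00(path_a: str, path_b: str) -> float:
    lab_a = rgb2lab(imread(path_a)[..., :3])  # sRGB -> CIELAB
    lab_b = rgb2lab(imread(path_b)[..., :3])
    delta_e = deltaE_ciede2000(lab_a, lab_b)  # per-pixel colour difference
    return float(np.percentile(delta_e, 95))

if __name__ == "__main__":
    # Placeholder paths; in practice one pair per required screenshot.
    assert p95_delta_e00("fullfocus_roi.png", "comparator_roi.png") < 3.0
```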
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Pixel-wise Comparison: 30 formalin-fixed paraffin-embedded (FFPE) tissue glass slides, representing a range of human anatomical sites.
- Sample Size for Turnaround Time & Measurements: Not explicitly stated as a number of distinct cases or images beyond the 30 slides used for pixel-wise comparison. For measurements, a "1 Calibration Slide" was used per test.
- Data Provenance: The text does not explicitly state the country of origin. The slides are described as "representing a range of human anatomical sites," implying a diverse set of real-world pathology samples. It is a retrospective study as it states "30 formalin-fixed paraffin-embedded (FFPE) tissue glass slides... were scanned".
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Pixel-wise Comparison: "For each WSI, three regions of interest (ROIs) were identified to highlight relevant pathological features, as verified by a pathologist."
- Number of Experts: At least one pathologist.
- Qualifications: "A pathologist" (specific qualifications like years of experience are not provided).
- Measurements: No expert was explicitly mentioned for establishing ground truth for measurements; it relies on a "test image containing objects with known sizes" (calibration slide) and "reference value."
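For context on those reference values: converting a pixel-space measurement to physical units requires only the scan's microns-per-pixel (mpp) resolution from the image metadata. A minimal sketch follows; the 0.25 µm/pixel figure is an assumed, typical 40x scan resolution, not taken from the submission:

```python
# Sketch: pixel-to-physical conversion behind straight-line and area
# measurements, given the scan resolution in microns per pixel (mpp).
import math

def line_length_mm(n_pixels: float, mpp: float) -> float:
    """Length in mm of a straight line spanning n_pixels."""
    return n_pixels * mpp / 1000.0

def rect_area_mm2(px_x: float, px_y: float, mpp: float) -> float:
    """Area in mm^2 of a px_x-by-px_y rectangle."""
    return line_length_mm(px_x, mpp) * line_length_mm(px_y, mpp)

# At an assumed 0.25 um/pixel, a 4000-pixel line is exactly 1 mm, and an
# 800 x 800 pixel square is 0.2 mm x 0.2 mm = 0.04 mm^2 -- the reference
# values used in the measurement-accuracy tests.
assert line_length_mm(4000, 0.25) == 1.0
assert math.isclose(rect_area_mm2(800, 800, 0.25), 0.04)
```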
4. Adjudication Method for the Test Set
- The text does not mention an explicit adjudication method (like 2+1 or 3+1 consensus) for the pixel-wise comparison or measurement accuracy. For the pixel-wise comparison, ROIs were "verified by a pathologist," suggesting a single-expert verification rather than a consensus process.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
- No, an MRMC comparative effectiveness study was not done in this context. The study focused on demonstrating identical image reproduction (pixel-wise comparison) and technical performance (turnaround time, measurement accuracy) of the FullFocus viewer against predicate devices' viewing components. It did not directly assess the improvement in human reader performance (e.g., diagnostic accuracy or efficiency) with or without AI assistance. The device is a "viewer and management software," not an AI diagnostic aid in the sense of providing specific findings or interpretations.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
- Yes, a standalone "algorithm only" performance was effectively done for the technical aspects. The pixel-wise comparison directly compares the image rendering of FullFocus with the predicate viewer's rendering without human intervention in the comparison process itself (though a pathologist verified ROIs). Similarly, turnaround times and measurement accuracy are intrinsic technical performances of the software.
7. The Type of Ground Truth Used
- Pixel-wise Comparison: The ground truth for this test was the digital image data as rendered by the predicate device's IRMS. The goal was to show that FullFocus reproduces the same image data. The "relevant pathological features" within ROIs were "verified by a pathologist" which served as a reference for what areas to test, not necessarily a diagnostic ground truth for the device's output.
- Measurements: The ground truth was based on known physical dimensions within a calibration slide and corresponding "reference values."
8. The Sample Size for the Training Set
- The provided text does not mention a training set. This is expected because FullFocus is a viewer and management software for digital pathology images, not an AI or machine learning algorithm that is "trained" on data to make predictions or assist in diagnosis directly. Its core function is to display existing image data accurately and efficiently.
9. How the Ground Truth for the Training Set Was Established
- As no training set is mentioned (since it's a viewer software), this question is not applicable based on the provided text.
(264 days)
Paige Prostate is a software only device intended to assist pathologists in the detection of foci that are suspicious for cancer during the review of scanned whole slide images (WSI) from prostate needle biopsies prepared from hematoxylin & eosin (H&E) stained formalin-fixed paraffin embedded (FFPE) tissue. After initial diagnostic review of the WSI by the pathologist, if Paige Prostate detects tissue morphology suspicious for cancer, it provides coordinates (X,Y) of a single location on the image with the highest likelihood of having cancer for further review by the pathologist.
Paige Prostate is intended to be used with slide images digitized with Philips Ultra Fast Scanner and visualized with Paige FullFocus WSI viewing software.
Paige Prostate is an adjunctive computer-assisted methodology and its output should not be used as the primary diagnosis. Pathologists should only use Paige Prostate in conjunction with their complete standard of care evaluation of the slide image.
Paige Prostate is an in vitro diagnostic medical device software, derived from a deterministic deep learning system that has been developed with digitized WSIs of H&E stained prostate needle biopsy slides.
Paige Prostate utilizes several accessory devices as shown in Figure 1 below, for automated ingestion of the input. The device identifies areas suspicious for cancer on the input WSIs. For each input WSI, Paige Prostate automatically analyzes the WSI and outputs the following:
- Binary classification of suspicious or not suspicious for cancer, based on a pre-defined threshold on the neural network output.
- If the slide is classified as suspicious for cancer, a single coordinate (X,Y) of the location with the highest probability of cancer on the image (see the sketch below).
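A minimal sketch of that two-part output, assuming the network yields a 2-D probability map over the WSI; the map name and threshold value are illustrative, not Paige's actual implementation:

```python
# Sketch: binary classification against a threshold, plus the (X,Y) of the
# single highest-probability location when the slide is flagged suspicious.
import numpy as np

def classify_and_localize(prob_map: np.ndarray, threshold: float = 0.5):
    """Return (is_suspicious, (x, y) or None) for a 2-D probability map."""
    if float(prob_map.max()) < threshold:
        return False, None                      # not suspicious for cancer
    row, col = np.unravel_index(np.argmax(prob_map), prob_map.shape)
    return True, (int(col), int(row))           # single coordinate for review
```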
Here's a breakdown of the acceptance criteria and the study details for Paige Prostate, based on the provided text:
Acceptance Criteria and Reported Device Performance
Acceptance Criteria | Reported Device Performance | Comments |
---|---|---|
Algorithm Localization (X,Y Coordinate) and Accuracy Study | Sensitivity: 94.5% (95% CI: 91.4%; 96.6%); Specificity: 94.0% (95% CI: 91.3%; 95.9%) | This study evaluated the standalone performance of the algorithm in identifying suspicious foci and localizing them. |
Precision Study (Within-scanner) | Cancer slides: probability of a "Cancer" result with the same scanner/operator is 99.0% (95% CI: 94.8%; 99.8%); Benign slides: probability of a "Benign" result with the same scanner/operator is 94.4% (95% CI: 88.4%; 97.4%) | This assessed the consistency of the device's output under repeated scans by the same operator on the same scanner. |
Precision Study (Reproducibility: between-scanner and between-operator) | Cancer slides: probability of a "Cancer" result with different scanners/operators is 100% (95% CI: 96.5%; 100%); Benign slides: probability of a "Benign" result with different scanners/operators is 93.5% (95% CI: 87.2%; 96.8%) | This assessed the consistency of the device's output across different scanners and operators. |
Localization Precision Study | Location correct (within-scanner, Op1/Sc1): 98.2% (56/57) (95% CI: 90.7%; 99.7%); Location correct (3 scanners, 3 operators): 96.4% (53/55) (95% CI: 87.7%; 99.0%) | This focused specifically on the precision of the (X,Y) coordinate localization. |
Clinical Study (Pathologist Performance with AI Assistance) | Average improvement in sensitivity: 7.3% (95% CI: 3.9%; 11.4%), statistically significant; Average difference in specificity: 1.1% (95% CI: -0.7%; 3.4%), not statistically significant | This study measured the efficacy of Paige Prostate as an adjunctive tool for pathologists. "Positive" was defined as "deferred" or "cancer", and "negative" as "benign". (A sketch of the CI computation follows the table.) |
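The confidence intervals above can be reproduced from raw counts with an exact (Clopper-Pearson) binomial interval. A sketch using statsmodels; the 294/311 count is a back-calculation from the reported 94.5% sensitivity and 311 cancer WSIs, shown only for illustration:

```python
# Sketch: proportion with exact (Clopper-Pearson) 95% CI from raw counts.
from statsmodels.stats.proportion import proportion_confint

def prop_with_ci(successes: int, total: int):
    low, high = proportion_confint(successes, total, alpha=0.05, method="beta")
    return successes / total, low, high

# Illustrative: ~294 true positives out of 311 cancer WSIs gives the
# reported sensitivity of 94.5%.
sens, low, high = prop_with_ci(294, 311)
print(f"sensitivity {sens:.1%} (95% CI: {low:.1%}; {high:.1%})")
```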
Study Information
2. Sample size used for the test set and the data provenance:
- Algorithm Localization and Accuracy Study:
- Test Set Size: 728 WSIs (311 cancer, 417 benign) from unique patients.
- Provenance: De-identified WSIs from:
- Consecutive prostate cancer slides from an internal site (located in US).
- Challenging cancer slides (≤0.5mm tumor) from an internal site.
- Consecutive cancer slides submitted from external sites.
- Challenging cancer slides submitted from external sites.
- Benign slides from consecutive prostate biopsy cases from an internal site.
- Consecutive benign slides submitted from external sites (submitted to internal site for expert consultation).
- External Sites: Included 217 different sites located throughout the world (including US).
- Retrospective/Prospective: Retrospective.
- Precision Study:
- Test Set Size: 35 cancer WSIs and 36 benign WSIs from unique patients.
- Provenance: Slides from an internal site and external sites (217 different sites).
- Retrospective/Prospective: Retrospective.
- Clinical Study:
- Test Set Size: 527 WSIs (171 prostate cancer, 356 benign) from unique patients.
- Provenance: 44.15% from cases prepared, reviewed, diagnosed, and digitized at the internal site (US). 55.85% from cases prepared at 156 different external sites but reviewed, diagnosed, and digitized at the internal site.
- Retrospective/Prospective: Retrospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Algorithm Localization and Accuracy Study (Localization Ground Truth):
- Number of Experts: 3 study pathologists.
- Qualifications: US board-certified pathologists (two had completed anatomic pathology fellowships; one was a sub-specialized genitourinary pathologist). All were blinded to Paige Prostate results.
- Clinical Study (Ground Truth for Slide-Level Cancer/Benign):
- Number of Experts: Not explicitly stated; the ground truth relies on the original pathologists who generated the synoptic diagnostic reports.
- Qualifications: Pathologists at the internal site generating synoptic diagnostic reports.
4. Adjudication method for the test set:
- Algorithm Localization and Accuracy Study (Localization Ground Truth):
- Adjudication Method: The union of annotations between at least 2 of the 3 annotating pathologists was used as the localization ground truth (one reading of this rule is sketched after this section).
- Clinical Study (Slide-Level Cancer/Benign Ground Truth):
- Adjudication Method: "Synoptic diagnostic reports from the internal site were used to generate the ground truth for each slide as either cancer or no cancer." This implies a single, established diagnostic report rather than a consensus process for the study's ground truth.
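A minimal sketch of the 2-of-3 agreement rule used for the localization ground truth (item 4 above), treating each pathologist's annotation as a boolean mask of the same shape; this is one plausible reading of the rule, not Paige's actual pipeline:

```python
# Sketch: localization ground truth as pixels annotated by >= 2 of 3 readers.
import numpy as np

def consensus_mask(m1: np.ndarray, m2: np.ndarray, m3: np.ndarray) -> np.ndarray:
    """True wherever at least two of the three annotation masks agree."""
    votes = m1.astype(np.uint8) + m2.astype(np.uint8) + m3.astype(np.uint8)
    return votes >= 2
```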
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- Yes, an MRMC comparative effectiveness study was done (the "Clinical Study").
- Effect Size of Improvement:
- Average Improvement in Sensitivity: 7.3% (95% CI: 3.9%; 11.4%)
- Average Difference in Specificity: 1.1% (95% CI: -0.7%; 3.4%)
- The document clarifies that this is an average across 16 pathologists.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
- Yes, a standalone performance study was done. This is detailed in the "Analytical Performance" section, specifically the "Algorithm Localization (X,Y Coordinate) and Accuracy Study."
- Sensitivity (Standalone): 94.5%
- Specificity (Standalone): 94.0%
7. The type of ground truth used:
- Algorithm Localization and Accuracy Study (Slide-Level Cancer Ground Truth): Synoptic pathology diagnostic reports from the internal site.
- Algorithm Localization and Accuracy Study (Localization Ground Truth): Consensus of 3 US board-certified pathologists who manually annotated image patches.
- Precision Study (Slide-Level Cancer Ground Truth): Synoptic diagnostic reports from the internal site.
- Clinical Study (Slide-Level Cancer/Benign Ground Truth): Original diagnostic synoptic reports.
8. The sample size for the training set:
- Training Dataset: 33,543 slide images.
9. How the ground truth for the training set was established:
- "De-identified slides were labeled as benign or cancer based on the synoptic diagnostic pathology report."
(90 days)
FullFocus™ is a software only device intended for viewing and management of digital images of scanned surgical pathology slides prepared from formalin-fixed paraffin embedded (FFPE) tissue. It is an aid to the pathologist to review, interpret, and manage digital images of pathology slides for primary diagnosis. FullFocus is not intended for use with frozen sections, cytology, or non-FFPE hematopathology specimens.
It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the quality of the images obtained and, where necessary, use conventional light microscopy review when making a diagnostic decision. FullFocus is intended for use with the Philips Ultra Fast Scanner and monitor displays validated with verified test methods to meet required performance characteristics.
FullFocus is a web-based, software-only device for viewing and manipulating digital pathology images of glass slides obtained from the Philips IntelliSite Pathology Solution (PIPS) Ultra Fast Scanner (UFS) on monitor displays that are validated with verified test methods to meet required performance characteristics. FullFocus reproduces the whole slide images and is an aid to the pathologist to review, interpret and manage digital images of pathology slides for primary diagnosis.
Here's a breakdown of the acceptance criteria and the study information for the FullFocus device, based on the provided FDA 510(k) summary:
Acceptance Criteria and Reported Device Performance
Acceptance Criteria | Reported Device Performance |
---|---|
Pixel-wise comparison | Visually adequately reproduces digital pathology images to human readers with respect to its intended use (compared to PIPS, including zooming and panning). |
Turnaround time (Case selection) | Not longer than 10 seconds until the image is fully loaded. |
Turnaround time (Panning) | Not longer than 7 seconds until the image is fully loaded (for panning one quarter of the monitor). |
Measurements Accuracy | Performs accurate measurements (verified using a test image containing objects with known sizes). |
Human factors testing | Found to be safe and effective for the intended users, uses, and use environments; user interface is intuitive, safe, and effective for the range of intended users. |
Further Study Information
- Sample size used for the test set and data provenance:
- Clinical Study: No clinical study involving diagnosis by human readers for diagnostic accuracy comparison is mentioned in this document. The "studies" described are non-clinical technical performance assessments and human factors testing.
- Pixel-wise comparison: The document doesn't specify a sample size for slides or images, only that it "was conducted to compare color images reproduced by FullFocus and PIPS IMS." Data provenance is not mentioned, but it's implied the images were generated by a Philips Ultra Fast Scanner, given the device's compatibility and comparison to the PIPS IMS.
- Measurements: "a test image containing objects with known sizes" was used. Specific sample size is not indicated.
- Human Factors Testing: "Task-based usability tests" were performed. The number of participants (intended users) is not specified.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- For the pixel-wise comparison, the "ground truth" was essentially the visual fidelity to images produced by the predicate device (PIPS IMS). The "human readers" mentioned in the performance description are not described as experts establishing ground truth, but rather as observers confirming visual adequacy. No specific number or qualifications of these readers are given.
- For measurements, the ground truth was the "known sizes" of objects within a test image. This would not require expert pathologists to establish.
- For human factors testing, the "ground truth" relates to usability and safety, which is assessed directly by intended users during task performance, rather than established by an "expert" in the diagnostic sense.
- Adjudication method for the test set:
- Not applicable as there was no study described that involved diagnostic interpretations requiring adjudication.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and the effect size of how much human readers improve with AI vs. without AI assistance:
- No. The document explicitly states that FullFocus is a "software only device intended for viewing and management of digital images... It is an aid to the pathologist to review, interpret, and manage digital images...". It is a viewer, not an AI-assisted diagnostic tool. Therefore, an MRMC study comparing human readers with and without AI assistance was not performed or described.
- If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
- No. FullFocus is a viewing and management system for pathologists, not a standalone diagnostic algorithm. Its function is to facilitate human review.
- The type of ground truth used:
- For Pixel-wise comparison, the ground truth was the visual representation and fidelity of images from the predicate device (PIPS IMS).
- For Measurements, the ground truth was "known sizes" of objects in a test image.
- For Turnaround time and Human factors testing, the ground truth was based on pre-defined system requirements and direct usability observations/feedback.
- The sample size for the training set:
- Not applicable. FullFocus is described as a viewing and management software, not an AI or machine learning algorithm that requires a training set in the typical sense for diagnostic performance.
- How the ground truth for the training set was established:
- Not applicable, as no training set for an AI/ML algorithm is described.