K Number
K241273
Device Name
FullFocus
Manufacturer
Date Cleared
2025-01-09

(248 days)

Product Code
Regulation Number
864.3700
Panel
PA
Reference & Predicate Devices
AI/ML | SaMD | IVD (In Vitro Diagnostic) | Therapeutic | Diagnostic | PCCP Authorized | Third-party | Expedited review
Intended Use

For In Vitro Diagnostic Use

FullFocus is a software intended for viewing and management of digital images of scanned surgical pathology slides prepared from formalin-fixed paraffin embedded (FFPE) tissue. It is an aid to the pathologist to review, interpret and manage digital images of pathology slides for primary diagnosis. FullFocus is not intended for use with frozen sections, cytology, or non-FFPE hematopathology specimens.

It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the quality of the images obtained and, where necessary, use conventional light microscopy review when making a diagnostic decision. FullFocus is intended to be used with the interoperable components specified in the table below.

Table: Interoperable components of FullFocus

Scanner Hardware | Scanner Output File Format | Interoperable Displays
Leica Aperio GT 450 DX scanner | DICOM, SVS | Dell UP3017; Dell U3023E
Hamamatsu NanoZoomer S360MD Slide Scanner | NDPI | Dell U3223QE; JVC-Kenwood JD-C240BN01A
Device Description

FullFocus, version 2.29, is a web-based software-only device that facilitates viewing and navigation of digitized pathology images of slides prepared from FFPE tissue specimens, acquired from FDA-cleared digital pathology scanners and viewed on FDA-cleared displays. FullFocus renders these digitized pathology images for review, management, and navigation in support of primary pathology diagnosis.

Image acquisition is performed using the intended scanner(s), with the operator conducting quality control on the digital whole slide images (WSIs) according to the scanner's instructions for use and laboratory specifications to determine whether re-scans are needed. Please see the Intended Use section and the tables below for the specific scanners and respective displays for clinical use.

Once a whole slide image is acquired using the intended scanner and becomes available in the scanner's database file system, a separate medical image communications software (not part of the device) automatically uploads the image and corresponding metadata to persistent cloud storage. Integrity checks are performed during the upload to ensure data accuracy.
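
The submission does not describe the integrity-check mechanism; a minimal sketch of one common approach, checksum comparison, is shown below (the use of SHA-256 is an assumption for illustration, not a detail from the text):

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the hex SHA-256 digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

def verify_upload(local_bytes: bytes, remote_bytes: bytes) -> bool:
    """Compare checksums of a local slide file and its uploaded copy.

    A mismatch indicates corruption during transfer and should trigger
    a re-upload rather than making the image available for review.
    """
    return sha256_of(local_bytes) == sha256_of(remote_bytes)

# Simulated whole-slide image payload (real WSI files are gigabytes).
slide = b"\x00\x01WSI-pixel-data" * 1000
print(verify_upload(slide, slide))        # intact transfer -> True
print(verify_upload(slide, slide[:-1]))   # truncated transfer -> False
```

In practice the digest would be computed in chunks while streaming the file, so the multi-gigabyte image never has to be held in memory at once.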

The subject device enables the reading pathologist to open a patient case, view the images, and perform actions such as zooming, panning, measuring distances and annotating images as needed. After reviewing all images for a case, the pathologist will render a diagnosis.

FullFocus operates with and is validated for use with the FDA cleared components specified in the tables below:

Scanner Hardware | Scanner Output File Format | Interoperable Displays
Leica Aperio GT 450 DX scanner | DICOM, SVS | Dell UP3017; Dell U3023E
Hamamatsu NanoZoomer S360MD Slide Scanner | NDPI | Dell U3223QE; JVC-Kenwood JD-C240BN01A

Table 1: Interoperable Components Intended for Use with FullFocus

FullFocus version 2.29 was not validated for use with images generated by the Philips Ultra Fast Scanner.

Table 2: Computer Environment/System Requirements for the Use of FullFocus

Environment | Component | Minimum Requirements
Hardware | Processor | 1 CPU, 2 cores, 1.6 GHz
Hardware | Memory | 4 GB RAM
Hardware | Network | Bandwidth of 10 Mbps
Software | Operating System | Windows; macOS
Software | Browser | Google Chrome (129.0.6668.90 or higher); Microsoft Edge (129.0.2792.79 or higher)
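
As an aside, the minimums in Table 2 could be screened programmatically at install time. A hedged sketch using only the Python standard library (the RAM and bandwidth checks are omitted because they require third-party tooling; `MIN_CORES` and `SUPPORTED_OS` are taken from the table, with `Darwin` being the platform name macOS reports):

```python
import os
import platform

MIN_CORES = 2                          # Table 2: 1 CPU, 2 cores
SUPPORTED_OS = {"Windows", "Darwin"}   # Darwin == macOS

def meets_minimums(cores: int, system: str) -> list:
    """Return a list of requirement violations (empty list means OK)."""
    problems = []
    if cores < MIN_CORES:
        problems.append(f"need >= {MIN_CORES} CPU cores, found {cores}")
    if system not in SUPPORTED_OS:
        problems.append(f"unsupported operating system: {system}")
    return problems

# Check the current machine.
issues = meets_minimums(os.cpu_count() or 1, platform.system())
print("OK" if not issues else issues)
```
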
AI/ML Overview

The following is a breakdown of the acceptance criteria and the study demonstrating that the device meets them, based on the provided text:

1. Table of Acceptance Criteria and Reported Device Performance

  • Pixel-wise comparison
    • Acceptance criterion: The 95th percentile of pixel-wise color differences in any image pair, across all required screenshots, must be less than 3.0 ΔE00 (CIEDE2000) when compared to the comparator (the predicate device's Image Review Manipulation Software, IRMS), demonstrating identical image reproduction and visual adequacy for human readers.
    • Reported performance: The 95th percentile of pixel-wise differences between FullFocus and the comparators was less than 3 CIEDE2000, indicating that their output images can be considered pixel-wise identical. FullFocus was found to adequately reproduce digital pathology images visually for human readers with respect to its intended use.
  • Turnaround time (case selection)
    • Acceptance criterion: Loading an image upon selecting a case must take no longer than 10 seconds.
    • Reported performance: System requirements fulfilled; no longer than 10 seconds until the image is fully loaded.
  • Turnaround time (panning/zooming)
    • Acceptance criterion: Loading an image while panning and zooming must take no longer than 7 seconds.
    • Reported performance: System requirements fulfilled; no longer than 7 seconds until the image is fully loaded.
  • Measurement accuracy (straight line)
    • Acceptance criterion: A measured 1 mm line must match the reference value exactly: 1 mm ± 0 mm.
    • Reported performance: All straight-line measurements matched the reference exactly at 1 mm, with no error.
  • Measurement accuracy (area)
    • Acceptance criterion: The measured area must match the reference exactly: 0.2 mm × 0.2 mm, for a total of 0.04 mm² ± 0 mm².
    • Reported performance: All area measurements matched the reference value exactly at 0.04 mm², with no error.
  • Measurement accuracy (scalebar)
    • Acceptance criterion: The 2 mm scalebar is accurate.
    • Reported performance: All tests passed.
  • Human factors testing
    • Acceptance criterion (implied from previous clearance): Safe and effective use by representative users for critical user tasks and use scenarios.
    • Reported performance: A human factors study built around critical user tasks and use scenarios performed by representative users was conducted for the previously cleared FullFocus, version 1.2.1 (K201005), per the FDA guidance "Applying Human Factors and Usability Engineering to Medical Devices" (2016). Human factors validation testing was not necessary for this submission because the user interface has not changed.
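
The pixel-wise criterion can be illustrated in code. The sketch below uses the simpler CIE76 Euclidean ΔE*ab as a stand-in for the CIEDE2000 (ΔE00) formula actually named in the criterion, and synthetic Lab pixel data; it is illustrative only, not the study's method:

```python
import math
import random

def delta_e76(lab1, lab2):
    """Euclidean distance in CIELAB (CIE76) -- a simplified stand-in
    for the full CIEDE2000 (dE00) color-difference formula."""
    return math.dist(lab1, lab2)

def percentile_95(values):
    """95th percentile via the nearest-rank method on a sorted copy."""
    ordered = sorted(values)
    rank = max(0, math.ceil(0.95 * len(ordered)) - 1)
    return ordered[rank]

# Two simulated renderings of the same ROI: per-pixel Lab triples that
# differ only by small rounding noise, as expected for identical viewers.
random.seed(0)
reference = [(50.0, 10.0, -10.0) for _ in range(10_000)]
candidate = [(L + random.uniform(-0.5, 0.5), a, b) for L, a, b in reference]

diffs = [delta_e76(p, q) for p, q in zip(reference, candidate)]
print(percentile_95(diffs) < 3.0)  # acceptance criterion: 95th pct < 3.0 -> True
```

In the actual study the comparison would be computed over full screenshots converted to Lab, with the CIEDE2000 formula supplying perceptually uniform differences.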

2. Sample Size Used for the Test Set and Data Provenance

  • Sample Size for Pixel-wise Comparison: 30 formalin-fixed paraffin-embedded (FFPE) tissue glass slides, representing a range of human anatomical sites.
  • Sample Size for Turnaround Time & Measurements: Not explicitly stated as a number of distinct cases or images beyond the 30 slides used for pixel-wise comparison. For measurements, a "1 Calibration Slide" was used per test.
  • Data Provenance: The text does not explicitly state the country of origin. The slides are described as "representing a range of human anatomical sites," implying a diverse set of real-world pathology samples. The study appears to be retrospective, as it states that "30 formalin-fixed paraffin-embedded (FFPE) tissue glass slides... were scanned".

3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

  • Pixel-wise Comparison: "For each WSI, three regions of interest (ROIs) were identified to highlight relevant pathological features, as verified by a pathologist."
    • Number of Experts: At least one pathologist.
    • Qualifications: "A pathologist" (specific qualifications like years of experience are not provided).
  • Measurements: No expert was explicitly mentioned for establishing ground truth for measurements; it relies on a "test image containing objects with known sizes" (calibration slide) and "reference value."

4. Adjudication Method for the Test Set

  • The text does not mention an explicit adjudication method (like 2+1 or 3+1 consensus) for the pixel-wise comparison or measurement accuracy. For the pixel-wise comparison, ROIs were "verified by a pathologist," suggesting a single-expert verification rather than a consensus process.

5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

  • No, an MRMC comparative effectiveness study was not done in this context. The study focused on demonstrating identical image reproduction (pixel-wise comparison) and technical performance (turnaround time, measurement accuracy) of the FullFocus viewer against predicate devices' viewing components. It did not directly assess the improvement in human reader performance (e.g., diagnostic accuracy or efficiency) with or without AI assistance. The device is a "viewer and management software," not an AI diagnostic aid in the sense of providing specific findings or interpretations.

6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

  • Yes, a standalone "algorithm only" performance was effectively done for the technical aspects. The pixel-wise comparison directly compares the image rendering of FullFocus with the predicate viewer's rendering without human intervention in the comparison process itself (though a pathologist verified ROIs). Similarly, turnaround times and measurement accuracy are intrinsic technical performances of the software.

7. The Type of Ground Truth Used

  • Pixel-wise Comparison: The ground truth for this test was the digital image data as rendered by the predicate device's IRMS. The goal was to show that FullFocus reproduces the same image data. The "relevant pathological features" within ROIs were "verified by a pathologist" which served as a reference for what areas to test, not necessarily a diagnostic ground truth for the device's output.
  • Measurements: The ground truth was based on known physical dimensions within a calibration slide and corresponding "reference values."
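
The measurement ground truth reduces to converting pixel geometry into physical units via the scan resolution. A sketch, assuming an illustrative resolution of 0.25 µm/pixel (the actual scanner resolutions are not stated in the text):

```python
import math

MPP = 0.25  # assumed micrometres per pixel; illustrative, not from the text

def length_mm(p1, p2, mpp=MPP):
    """Physical length in mm of a straight line between two pixel
    coordinates, given the scan resolution in micrometres per pixel."""
    return math.dist(p1, p2) * mpp / 1000.0

def area_mm2(width_px, height_px, mpp=MPP):
    """Physical area in mm^2 of a rectangular annotation."""
    return (width_px * mpp / 1000.0) * (height_px * mpp / 1000.0)

# At 0.25 um/pixel, a 1 mm reference line spans 4000 pixels, and a
# 0.2 mm x 0.2 mm reference square spans 800 x 800 pixels.
print(length_mm((0, 0), (4000, 0)))       # -> 1.0 (1 mm +/- 0 mm criterion)
print(round(area_mm2(800, 800), 6))       # -> 0.04 (0.04 mm^2 criterion)
```

The "± 0 mm" acceptance criteria are achievable because both the calibration slide's features and the viewer's pixel grid are exact; any discrepancy would indicate a scaling error in the viewer.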

8. The Sample Size for the Training Set

  • The provided text does not mention a training set. This is expected because FullFocus is a viewer and management software for digital pathology images, not an AI or machine learning algorithm that is "trained" on data to make predictions or assist in diagnosis directly. Its core function is to display existing image data accurately and efficiently.

9. How the Ground Truth for the Training Set Was Established

  • As no training set is mentioned (since it's a viewer software), this question is not applicable based on the provided text.

§ 864.3700 Whole slide imaging system.

(a) Identification. The whole slide imaging system is an automated digital slide creation, viewing, and management system intended as an aid to the pathologist to review and interpret digital images of surgical pathology slides. The system generates digital images that would otherwise be appropriate for manual visualization by conventional light microscopy.

(b) Classification. Class II (special controls). The special controls for this device are:

(1) Premarket notification submissions must include the following information:
(i) The indications for use must specify the tissue specimen that is intended to be used with the whole slide imaging system and the components of the system.
(ii) A detailed description of the device and bench testing results at the component level, including for the following, as appropriate:
(A) Slide feeder;
(B) Light source;
(C) Imaging optics;
(D) Mechanical scanner movement;
(E) Digital imaging sensor;
(F) Image processing software;
(G) Image composition techniques;
(H) Image file formats;
(I) Image review manipulation software;
(J) Computer environment; and
(K) Display system.
(iii) Detailed bench testing and results at the system level, including for the following, as appropriate:
(A) Color reproducibility;
(B) Spatial resolution;
(C) Focusing test;
(D) Whole slide tissue coverage;
(E) Stitching error; and
(F) Turnaround time.
(iv) Detailed information demonstrating the performance characteristics of the device, including, as appropriate:
(A) Precision to evaluate intra-system and inter-system precision using a comprehensive set of clinical specimens with defined, clinically relevant histologic features from various organ systems and diseases. Multiple whole slide imaging systems, multiple sites, and multiple readers must be included.
(B) Reproducibility data to evaluate inter-site variability using a comprehensive set of clinical specimens with defined, clinically relevant histologic features from various organ systems and diseases. Multiple whole slide imaging systems, multiple sites, and multiple readers must be included.
(C) Data from a clinical study to demonstrate that viewing, reviewing, and diagnosing digital images of surgical pathology slides prepared from tissue slides using the whole slide imaging system is non-inferior to using an optical microscope. The study should evaluate the difference in major discordance rates between manual digital (MD) and manual optical (MO) modalities when compared to the reference (e.g., main sign-out diagnosis).
(D) A detailed human factor engineering process must be used to evaluate the whole slide imaging system user interface(s).
(2) Labeling compliant with 21 CFR 809.10(b) must include the following:
(i) The intended use statement must include the information described in paragraph (b)(1)(i) of this section, as applicable, and a statement that reads, “It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the validity of the interpretation of images obtained using this device.”
(ii) A description of the technical studies and the summary of results, including those that relate to paragraphs (b)(1)(ii) and (iii) of this section, as appropriate.
(iii) A description of the performance studies and the summary of results, including those that relate to paragraph (b)(1)(iv) of this section, as appropriate.
(iv) A limiting statement that specifies that pathologists should exercise professional judgment in each clinical situation and examine the glass slides by conventional microscopy if there is doubt about the ability to accurately render an interpretation using this device alone.