K Number
K230839
Device Name
Concentriq Dx
Manufacturer
Date Cleared
2024-02-08 (318 days)

Product Code
Regulation Number
864.3700
Reference & Predicate Devices
Predicate For
N/A
Intended Use

For In Vitro Diagnostic Use

Concentriq® Dx is a software-only device intended for viewing and management of digital images of scanned surgical pathology slides prepared from formalin-fixed paraffin-embedded (FFPE) tissue. It is an aid to the pathologist to review, interpret, and manage these digital slide images for primary diagnosis. Concentriq® Dx is not intended for use with frozen sections, cytology, or non-FFPE hematopathology specimens.

It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the quality of the images obtained and the validity of the interpretation of images obtained using Concentriq Dx. Concentriq Dx is intended for use with the Hamamatsu NanoZoomer S360MD Slide scanner and the JVC JD-C240BN01A monitor.

Device Description

Concentriq Dx is operated as follows (an illustrative workflow sketch appears after the list):

    1. Image acquisition is performed using the validated WSI scanner. A lab technician prepares and scans the slides and reviews slide quality in accordance with the WSI scanner's instruction manual and standard lab procedures. The Concentriq Dx workflow is initiated when the WSI is ingested into Concentriq Dx from the local file system.
    2. The reading pathologist selects a case from a worklist external to the subject device or from within the subject device, whereby the subject device fetches the associated images from image storage.
    3. The reading pathologist uses the subject device to view and interpret the images:
    • Zoom and pan the image
    • Measure distances and areas in the image
    • Annotate the image
    • View multiple images side by side
    4. Prior to using a whole slide image for diagnosis, a data and image quality assessment is performed.
    5. The above steps are repeated as required.
    6. After viewing all images for a case, the pathologist makes a diagnosis. The diagnosis is documented in another system, e.g., a Laboratory Information System (LIS).
    7. When finished using the system, the pathologist clicks "Sign Out" in the user menu.
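
The sketch below is a purely illustrative model of steps 2 and 3 of this workflow (case selection, image fetch, and viewer actions). The class names, fields, and the imagestore:// URI scheme are hypothetical assumptions and are not the Concentriq Dx API.

```python
from dataclasses import dataclass, field

@dataclass
class WholeSlideImage:
    image_id: str
    storage_uri: str   # hypothetical location in image storage from which the viewer streams the image

@dataclass
class Case:
    case_id: str
    images: list[WholeSlideImage] = field(default_factory=list)

def fetch_case_images(worklist: dict[str, Case], case_id: str) -> list[WholeSlideImage]:
    """Step 2: the pathologist selects a case and the viewer fetches its associated images."""
    return worklist[case_id].images

# Step 3 is interactive (zoom/pan, measure, annotate, side-by-side view) and is
# represented here only by the fetched image list handed to the viewer.
worklist = {"C-001": Case("C-001", [WholeSlideImage("img-1", "imagestore://cases/C-001/img-1")])}
print([img.image_id for img in fetch_case_images(worklist, "C-001")])
```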

Quality Control:

Prior to using a whole slide image for diagnosis, the pathologist should ensure that all scanned slide images have been imported for every case and the images are of acceptable quality for diagnostic purposes. The pathologist reviews scanned images from all the slides associated with a case before rendering a diagnosis.
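
A minimal sketch of this completeness and quality check, assuming a hypothetical case record that lists accessioned slides and their scan-QC status; none of the field names come from Concentriq Dx.

```python
from dataclasses import dataclass

@dataclass
class SlideImage:
    slide_id: str
    qc_passed: bool   # result of the lab technician's scan-quality review (assumed field)

def case_ready_for_diagnosis(accessioned_slides: set[str], images: list[SlideImage]) -> bool:
    """True only if every accessioned slide has an ingested image and all images passed scan QC."""
    imaged = {img.slide_id for img in images}
    missing = accessioned_slides - imaged
    failed = [img.slide_id for img in images if not img.qc_passed]
    if missing or failed:
        print(f"hold case for rescan -- missing: {sorted(missing)}, failed QC: {failed}")
        return False
    return True

print(case_ready_for_diagnosis({"S1", "S2"}, [SlideImage("S1", True), SlideImage("S2", True)]))
```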

AI/ML Overview

Below is an analysis of the submission, extracting the acceptance criteria and study information:

Acceptance Criteria and Device Performance

Clinical Equivalence (Non-Inferiority)
• Acceptance criteria: The major discordance rate between manual digital (MD) reads using Concentriq Dx and the ground truth (GT) must be non-inferior to the major discordance rate between manual optical (MO) reads using conventional light microscopy and the GT, with a prespecified non-inferiority threshold of 4%.
• Reported performance: The difference in major discordance rates (MD vs. GT compared to MO vs. GT) was -0.1% (95% CI, -1.0 to 0.4) for all cases across the 3 reading pathologists. The upper limit of the confidence interval, 0.4%, is below the 4% threshold.

Image Loading / Turnaround Time
• Acceptance criteria: Images load in less than 5 seconds when selected for viewing and in less than 2 seconds when panning and zooming.
• Reported performance: Both criteria were met: images loaded in less than 5 seconds when selected for viewing and in less than 2 seconds when panning and zooming.

Measurement Accuracy (Distance and Area)
• Acceptance criteria: Distance and area measurements of markings made in the Concentriq Dx viewer must accurately reflect the known distances and areas of markings on a calibrated cross-scale slide.
• Reported performance: Testing verified that distance and area measurements made in the Concentriq Dx viewer accurately reflected the distances and areas of the markings on an image of the calibrated slide (a unit-conversion sketch follows this table).

Human Factors Validation
• Acceptance criteria: Concentriq Dx must be found safe and effective for the intended users, uses, and use environments.
• Reported performance: Concentriq Dx was found to be safe and effective for the intended users, uses, and use environments.

Pixel-Wise Image Reproduction (Bench Testing)
• Acceptance criteria: The 95th percentile of pixel-wise color differences between images rendered by Concentriq Dx and by the NanoZoomer S360MD Slide scanner system was evaluated against a 3 CIEDE2000 threshold for identical image reproduction.
• Reported performance: The 95th percentile of pixel-wise differences exceeded 3 CIEDE2000, indicating that the output images are not pixel-wise identical because Concentriq Dx applies image compression. Since pixel-wise identity was not demonstrated, a clinical study was performed instead (a CIEDE2000 sketch also follows this table).
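
As referenced in the measurement-accuracy row above, the check amounts to converting pixel coordinates to physical units using the scan resolution (microns per pixel). The sketch below is illustrative only; the 0.23 µm/pixel value is an assumption, and in practice the value comes from the scanner's image metadata.

```python
import math

MPP = 0.23  # assumed scan resolution in microns per pixel (taken from image metadata in practice)

def distance_um(p1: tuple[float, float], p2: tuple[float, float], mpp: float = MPP) -> float:
    """Euclidean distance between two pixel coordinates, in microns."""
    return math.dist(p1, p2) * mpp

def polygon_area_um2(vertices: list[tuple[float, float]], mpp: float = MPP) -> float:
    """Area of a polygon annotation (pixel-space vertices) in square microns, via the shoelace formula."""
    area_px = 0.5 * abs(sum(x1 * y2 - x2 * y1
                            for (x1, y1), (x2, y2) in zip(vertices, vertices[1:] + vertices[:1])))
    return area_px * mpp ** 2

# A 1 mm tick on a calibrated scale slide should read ~1000 um in the viewer.
print(distance_um((0, 0), (1000 / MPP, 0)))                          # ~1000.0 um
print(polygon_area_um2([(0, 0), (435, 0), (435, 435), (0, 435)]))    # ~1.0e4 um^2 (a ~100 um square)
```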

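For the pixel-wise reproduction row, the following sketch shows how a 95th-percentile CIEDE2000 comparison can be computed with scikit-image. It is not Proscia's bench protocol; the file names are placeholders for two same-sized RGB renderings of the same field of view.

```python
import numpy as np
from skimage import io, color

def p95_ciede2000(path_a: str, path_b: str) -> float:
    """95th percentile of per-pixel CIEDE2000 differences between two same-sized RGB images."""
    rgb_a = io.imread(path_a)[..., :3] / 255.0   # drop any alpha channel, scale to [0, 1]
    rgb_b = io.imread(path_b)[..., :3] / 255.0
    lab_a = color.rgb2lab(rgb_a)                 # CIEDE2000 is defined in CIELAB space
    lab_b = color.rgb2lab(rgb_b)
    return float(np.percentile(color.deltaE_ciede2000(lab_a, lab_b), 95))

# Placeholder file names for the scanner-rendered and viewer-rendered field of view.
print(f"95th percentile CIEDE2000: {p95_ciede2000('scanner_render.png', 'viewer_render.png'):.2f}")
```
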
Study Details

The provided text details a clinical study to establish the non-inferiority of Concentriq Dx for primary diagnosis compared to conventional light microscopy.
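
To make the non-inferiority decision rule concrete, here is a sketch using a plain Wald interval for the difference in major discordance rates; the actual analysis presumably used a method appropriate for paired, clustered reads, and the counts below are hypothetical, chosen only to reproduce the reported -0.1% difference.

```python
import math

def noninferior(disc_md: int, disc_mo: int, n: int, margin: float = 0.04, z: float = 1.96) -> bool:
    """True if manual digital (MD) reads are non-inferior to manual optical (MO) reads vs. ground truth."""
    p_md, p_mo = disc_md / n, disc_mo / n        # major discordance rates vs. GT
    diff = p_md - p_mo
    se = math.sqrt(p_md * (1 - p_md) / n + p_mo * (1 - p_mo) / n)
    upper = diff + z * se                        # upper bound of the 95% CI for the difference
    print(f"difference = {diff:+.1%}, CI upper bound = {upper:+.1%}, margin = {margin:.0%}")
    return upper < margin                        # non-inferior only if the upper bound is below the margin

# Hypothetical counts; the submission reports a -0.1% difference with a 95% CI of
# (-1.0%, 0.4%), whose upper bound of 0.4% sits below the 4% margin.
print("non-inferior:", noninferior(disc_md=29, disc_mo=30, n=1000))
```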

  1. Sample Size Used for the Test Set and Data Provenance:

    • Test Set Size: Not explicitly stated as a single number for cases or slides. The study involved "all cases across the 3 reading pathologists." Without additional context, the total number of cases/slides reviewed cannot be determined from this document alone.
    • Data Provenance: Not explicitly stated (e.g., country of origin). The study involved comparing reads against an "original sign-out pathologic diagnosis using MO [ground truth, (GT)] rendered at the institution." This suggests the data was retrospective from a clinical institution where standard diagnoses were already made.
  2. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of those Experts:

    • Number of Experts for Ground Truth: The ground truth (GT) was defined as the "original sign-out pathologic diagnosis using MO [ground truth, (GT)] rendered at the institution." This implies a single, qualified pathologist at the institution made the original diagnosis for each case, forming the initial ground truth.
    • Qualifications of Experts for Study Reads: The study involved 3 reading pathologists. Their specific qualifications (e.g., years of experience) are not detailed in this document, but they are referred to as "qualified pathologist[s]" in the Indications for Use and Device Description sections who review and interpret digital slide images.
  3. Adjudication Method for the Test Set:

    • The document implies that the ground truth for the comparison was the "original sign-out pathologic diagnosis" from the institution. The study then measured major discordance rates between the digital reads (MD) and this established ground truth (GT), and between optical reads (MO) and GT. There is no mention of a separate adjudication process among multiple experts to establish the ground truth for the study itself; rather, they rely on pre-existing diagnoses.
  4. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size:

    • Yes, an MRMC comparative effectiveness study was done. The study specifically compared performance of "manual digital read (MD)" with "manual optical (MO)" against a "reference (main) diagnosis... [ground truth, (GT)]". It involved "3 reading pathologists."
    • Effect Size: The difference in major discordance rates between MD and GT compared to MO and GT was -0.1% (95% CI, -1.0, 0.4) for all cases across the 3 reading pathologists. The upper limit of the confidence interval (0.4%) was less than the prespecified non-inferiority threshold of 4%, demonstrating non-inferiority rather than an "improvement" effect size in the traditional sense of AI assistance. The study's goal was to show that MD is not worse than MO; it does not quantify how much human readers improve with versus without AI assistance, since Concentriq Dx is viewing and management software rather than an AI-assisted diagnostic tool in this context, and the comparison is between digital and optical viewing.
  5. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done:

    • No. The device, Concentriq Dx, is a "software only device intended for viewing and management of digital images" and an "aid to the pathologist." The clinical study evaluated "manual digital read (MD)," which inherently involves a human pathologist-in-the-loop using the device. There is no mention of a standalone algorithm performance evaluation.
  6. The Type of Ground Truth Used (Expert Consensus, Pathology, Outcomes Data, etc.):

    • The ground truth (GT) was the "original sign-out pathologic diagnosis using MO [ground truth, (GT)] rendered at the institution." This is based on pathology diagnoses made by a qualified pathologist via conventional light microscopy.
  7. The Sample Size for the Training Set:

    • The document does not mention a specific training set size or methodology for Concentriq Dx. Given its description as a "software only device intended for viewing and management of digital images," not an AI diagnostic algorithm, it likely does not undergo a "training" process in the machine learning sense that would require a dedicated training set. The clinical study is for validation, not training.
  8. How the Ground Truth for the Training Set Was Established:

    • As no training set or AI algorithm training is described for Concentriq Dx in this document, this question is not applicable based on the provided information.

§ 864.3700 Whole slide imaging system.

(a) Identification. The whole slide imaging system is an automated digital slide creation, viewing, and management system intended as an aid to the pathologist to review and interpret digital images of surgical pathology slides. The system generates digital images that would otherwise be appropriate for manual visualization by conventional light microscopy.
(b) Classification. Class II (special controls). The special controls for this device are:
(1) Premarket notification submissions must include the following information:
(i) The indications for use must specify the tissue specimen that is intended to be used with the whole slide imaging system and the components of the system.
(ii) A detailed description of the device and bench testing results at the component level, including for the following, as appropriate:
(A) Slide feeder;
(B) Light source;
(C) Imaging optics;
(D) Mechanical scanner movement;
(E) Digital imaging sensor;
(F) Image processing software;
(G) Image composition techniques;
(H) Image file formats;
(I) Image review manipulation software;
(J) Computer environment; and
(K) Display system.
(iii) Detailed bench testing and results at the system level, including for the following, as appropriate:
(A) Color reproducibility;
(B) Spatial resolution;
(C) Focusing test;
(D) Whole slide tissue coverage;
(E) Stitching error; and
(F) Turnaround time.
(iv) Detailed information demonstrating the performance characteristics of the device, including, as appropriate:
(A) Precision to evaluate intra-system and inter-system precision using a comprehensive set of clinical specimens with defined, clinically relevant histologic features from various organ systems and diseases. Multiple whole slide imaging systems, multiple sites, and multiple readers must be included.
(B) Reproducibility data to evaluate inter-site variability using a comprehensive set of clinical specimens with defined, clinically relevant histologic features from various organ systems and diseases. Multiple whole slide imaging systems, multiple sites, and multiple readers must be included.
(C) Data from a clinical study to demonstrate that viewing, reviewing, and diagnosing digital images of surgical pathology slides prepared from tissue slides using the whole slide imaging system is non-inferior to using an optical microscope. The study should evaluate the difference in major discordance rates between manual digital (MD) and manual optical (MO) modalities when compared to the reference (e.g., main sign-out diagnosis).
(D) A detailed human factor engineering process must be used to evaluate the whole slide imaging system user interface(s).
(2) Labeling compliant with 21 CFR 809.10(b) must include the following:
(i) The intended use statement must include the information described in paragraph (b)(1)(i) of this section, as applicable, and a statement that reads, “It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the validity of the interpretation of images obtained using this device.”
(ii) A description of the technical studies and the summary of results, including those that relate to paragraphs (b)(1)(ii) and (iii) of this section, as appropriate.
(iii) A description of the performance studies and the summary of results, including those that relate to paragraph (b)(1)(iv) of this section, as appropriate.
(iv) A limiting statement that specifies that pathologists should exercise professional judgment in each clinical situation and examine the glass slides by conventional microscopy if there is doubt about the ability to accurately render an interpretation using this device alone.