K Number
K221612
Device Name
ClearRead CT
Date Cleared
2022-12-05

(185 days)

Product Code
Regulation Number
892.2050
Panel
RA
Reference & Predicate Devices
Intended Use

ClearRead CT is comprised of computer-assisted reading tools designed to aid the radiologist in the detection and characterization of pulmonary nodules during the review of screening and surveillance (low-dose) CT examinations of the chest on a non-oncological patient population. ClearRead CT requires both lungs be in the field of view and is not intended for monitoring patients undergoing therapy for lung cancer or limited field of view CT scans. ClearRead CT provides adjunctive information and is not intended to be used without the original CT series.

Device Description

ClearRead CT Compare is a post-processing application that processes a prior chest CT to determine whether a nodule detected in the current exam was present in the prior exam, using the same detection algorithm applied to the current exam. ClearRead CT Compare requires both lungs to be in the field of view. It provides adjunctive information, is not intended to be used without the original CT series, and is only invoked for patients where a prior exam exists and a nodule is detected in the current exam.

ClearRead CT Compare receives images according to the DICOM® protocol, processes the lung CT series, and delivers the resulting information through the same DICOM network interface in conjunction with the results provided for the current exam: specifically, whether the nodule is present on the prior exam and, if so, the percent volume change between the current and prior exam along with the volume doubling time. Series inputs are limited to Computed Tomography (CT). The ClearRead CT Compare Processor processes each prior series received. The ClearRead CT Compare output is sent to a destination device that conforms to the ClearRead CT DICOM Conformance Statement, such as a storage archive. ClearRead CT Compare does not support printing or DICOM media.

ClearRead CT Compare is a product extension of our FDA-cleared and marketed ClearRead CT device (K161201). The initial device contained ClearRead CT Vessel Suppress as well as ClearRead CT Detect. ClearRead CT (the base system) includes normalization, segmentation, and characterization of nodules, and provides the following key features:

  • ClearRead CT Vessel Suppress aids radiologists by suppressing normal structures in the input chest CT series.
  • ClearRead CT Detect aids radiologists in the detection and characterization of nodules in the input chest CT series.

ClearRead CT Compare includes Scan Registration and Nodule Matching functions and adds the following key feature:

  • ClearRead CT Compare aids radiologists in tracking nodule changes over time, providing additional characterizations per nodule, including percent nodule change and volume doubling time.
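The percent volume change and volume doubling time reported for a matched nodule are standard follow-up metrics. As a minimal sketch only, assuming the conventional exponential-growth formulation (the submission does not disclose the product's internal computation), the quantities could be derived from a matched nodule pair as follows; the function names and example values are hypothetical:

```python
import math

def percent_volume_change(prior_volume_mm3: float, current_volume_mm3: float) -> float:
    """Percent change in nodule volume from the prior exam to the current exam."""
    return 100.0 * (current_volume_mm3 - prior_volume_mm3) / prior_volume_mm3

def volume_doubling_time(prior_volume_mm3: float,
                         current_volume_mm3: float,
                         interval_days: float) -> float:
    """Volume doubling time in days, assuming exponential nodule growth:
    VDT = interval * ln(2) / ln(V_current / V_prior)."""
    return interval_days * math.log(2) / math.log(current_volume_mm3 / prior_volume_mm3)

# Hypothetical matched nodule: 150 mm^3 on the prior exam, 210 mm^3 one year later.
print(percent_volume_change(150.0, 210.0))        # 40.0 (%)
print(volume_doubling_time(150.0, 210.0, 365.0))  # ~752 days
```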

AI/ML Overview

Here's a detailed breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:

1. Table of Acceptance Criteria and Reported Device Performance

| Acceptance Criteria | Reported Device Performance |
|---|---|
| Nodule Match Rate (Overall Target) | Minimum of 90% for each selected stratum |
| Nodule Match Rate (Solid Nodule Type) | 0.961 (0.952, 0.978); exceeds 90% benchmark |
| Nodule Match Rate (Part-solid Nodule Type) | 0.957 (0.942, 0.971); exceeds 90% benchmark |
| Nodule Match Rate (Ground Glass Nodule Type) | 0.946 (0.934, 0.965); exceeds 90% benchmark |
| Nodule Match Rate (Isolated Nodule Location) | 0.940 (0.934, 0.947); exceeds 90% benchmark |
| Nodule Match Rate (Juxta-Vascular Nodule Location) | 0.969 (0.963, 0.975); exceeds 90% benchmark |
| Nodule Match Rate (Juxta-Pleura Nodule Location) | 0.955 (0.949, 0.961); exceeds 90% benchmark |
| Volume Doubling Time (VDT) and % Change Calculation Accuracy | Manual and automated calculations matched in every instance |
| Vessel Suppress Performance for Thicker Slices (3.5mm to 5mm) | Performance at 3.5mm, 4.0mm, 4.5mm, and 5.0mm slice thicknesses showed little change from the 1mm baseline, remaining well within the predefined 10% significance threshold for rejecting a test (non-contrast and contrast cases); average performance change ranged from -4.5% to 4.6% |
| Nodule Registration Error | Average registration error of 4.46mm with a standard deviation of 2.69mm, well within the predefined 15mm tolerance |
| New Nodule Identification (Real Cases) | All 3 new nodules were detected and correctly identified as new |
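Each match-rate stratum is evaluated against the 90% floor with an interval reported alongside the point estimate. The submission does not state which interval method was used; the sketch below is a minimal illustration assuming a Wilson score interval, and the stratum counts shown are hypothetical:

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1.0 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1.0 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

def stratum_passes(matched: int, total: int, floor: float = 0.90) -> bool:
    """Check a stratum's observed match rate against the 90% acceptance floor."""
    return matched / total >= floor

# Hypothetical stratum: 288 of 300 solid nodules matched to a prior counterpart.
low, high = wilson_interval(288, 300)
print(f"match rate {288/300:.3f} ({low:.3f}, {high:.3f}); passes 90% floor: {stratum_passes(288, 300)}")
```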

2. Sample Size Used for the Test Set and Data Provenance

  • Quantitative Nodule Matching Performance: A total of 900 nodules were used for assessment.
  • Vessel Suppress Thicker Slice Evaluation: The same data previously used to assess vessel suppression performance at 1mm to 3mm slice thicknesses was reused, extended to include 3.5mm, 4.0mm, 4.5mm, and 5.0mm (a minimal sketch of this baseline comparison follows this list).
  • Clinical Performance Testing (Real Nodules): A 25-patient cohort containing 40 real nodules; radiologists identified 42 actionable nodules in this cohort, 39 with prior counterparts and 3 new.
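The thicker-slice Vessel Suppress evaluation referenced above compares performance at each slice thickness against the 1mm baseline and rejects a test if the change exceeds the predefined 10% threshold. The sketch below illustrates that comparison in a minimal form; the per-thickness scores and the metric behind them are assumptions, since the submission does not describe how the underlying residual scores are computed:

```python
# Compare each slice thickness's vessel-suppression score against the 1 mm
# baseline and flag any relative change beyond the 10% rejection threshold.
BASELINE_MM = 1.0
THRESHOLD = 0.10  # predefined 10% significance threshold for rejecting a test

def change_vs_baseline(scores: dict[float, float]) -> dict[float, float]:
    """Relative change of each non-baseline score against the 1 mm baseline score."""
    baseline = scores[BASELINE_MM]
    return {mm: (s - baseline) / baseline for mm, s in scores.items() if mm != BASELINE_MM}

# Hypothetical per-thickness scores (higher = better suppression quality).
scores = {1.0: 0.88, 3.5: 0.86, 4.0: 0.85, 4.5: 0.84, 5.0: 0.84}

for mm, change in sorted(change_vs_baseline(scores).items()):
    status = "within threshold" if abs(change) <= THRESHOLD else "REJECT"
    print(f"{mm:.1f} mm: {change:+.1%} vs 1 mm baseline ({status})")
```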

Data Provenance:
The document does not explicitly state the country of origin for the data. It also does not specify whether the data was retrospective or prospective for the 900 nodules or the vessel suppress evaluation. For the "clinical performance testing," the use of "real nodules" from a "25 patient cohort" suggests existing patient data, which is typically retrospective.

3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

  • Quantitative Nodule Matching Performance: The document does not explicitly state how the ground truth for the 900 nodules was established, nor the number or qualifications of experts involved. It only states that "detected nodules were split into three categories based on their attenuation pattern."
  • Clinical Performance Testing (Real Nodules): Ground truth for the 42 actionable nodules was established by "the radiologist." The document uses "radiologist" in the singular, implying one expert, but does not provide specific qualifications (e.g., years of experience).

4. Adjudication Method for the Test Set

The document does not describe a formal adjudication method (like 2+1 or 3+1) for establishing the ground truth of the test sets. For the "clinical performance testing," it states "42 actionable nodules were ground-truthed by the radiologist," implying a single observer established the ground truth.

5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

No. An MRMC comparative effectiveness study evaluating how human readers perform with AI assistance versus without it was not described or presented in the provided text. The studies focused on the performance of the ClearRead CT Compare algorithm itself, not on its impact on human reader performance.

6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done

Yes, the studies presented primarily focus on standalone (algorithm only) performance.

  • The nodule matching performance tables (Tables 5.2 and 5.3) report the algorithm's match and mismatch rates.
  • The VDT and % change calculation accuracy was checked against "manual computation," implying a comparison of algorithm output to a reference, not human interaction.
  • The vessel suppress evaluation directly assesses the algorithm's output (residual changes).
  • The "clinical performance testing" assessed the algorithm's ability to detect and match nodules and identify new ones, again without a human-in-the-loop component for the reported metrics.

The device's indication for use explicitly states it provides "adjunctive information and is not intended to be used without the original CT series," suggesting it is designed to aid radiologists, but the studies described focus on its internal performance metrics rather than its performance in an assisted reading workflow.

7. The Type of Ground Truth Used

  • Quantitative Nodule Matching Performance: The type of ground truth used for the 900 nodules is not explicitly stated beyond classification by attenuation pattern.
  • Volume Doubling Time and % Change Calculation: Manual computation of these values was used as ground truth.
  • Vessel Suppress Performance for Thicker Slices: The ground truth for this evaluation appears to be based on a baseline performance (1mm data) and a predefined significance threshold for residual changes. The "residual analysis" implies comparing the algorithm's output to an expected or ideal output for vessel suppression.
  • Clinical Performance Testing (Real Nodules): Expert radiologist assessment ("ground-truthed by the radiologist") was used to identify "actionable nodules" and their presence in prior scans.

8. The Sample Size for the Training Set

The document does not provide information regarding the sample size used for the training set for any component of ClearRead CT or ClearRead CT Compare.

9. How the Ground Truth for the Training Set Was Established

The document does not provide information regarding how the ground truth for the training set was established, as the training set details are not mentioned.

§ 892.2050 Medical image management and processing system.

(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).