K Number
K190868
Device Name
Cleerly Labs
Manufacturer
Date Cleared
2019-11-05 (216 days)

Product Code
Regulation Number
892.2050
Panel
RA
Reference & Predicate Devices
Intended Use

Cleerly Labs is a web-based software application that is intended to be used by trained medical professionals as an interactive tool for viewing and analyzing cardiac computed tomography (CT) data to determine the presence and extent of coronary plaques and stenosis in patients who underwent coronary computed tomography angiography (CCTA) for evaluation of coronary artery disease (CAD) or suspected CAD. The software post-processes CT images obtained using any CT scanner and provides tools for the measurement and visualization of coronary arteries.

The software is not intended to replace the skill and judgment of a qualified medical practitioner and should only be used by people who have been appropriately trained in the software's functions, capabilities, and limitations. Users should be aware that certain views make use of interpolated data, which the software creates from the original data set. Interpolated data may give the appearance of healthy tissue in situations where pathology that is near or smaller than the scanning resolution may be present.

Device Description

Cleerly Labs is a post-processing web-based software application that enables trained medical professionals to analyze 2D/3D coronary images acquired from Computed Tomography (CT) angiographic scans. The software is a post-processing tool that aids in determining treatment paths for patients suspected to have coronary artery disease (CAD).

To aid in image analysis, tools are provided for navigating and manipulating images. Manual and semi-automatic segmentation of the coronary artery images is possible using editing tools, giving the user the flexibility to perform the coronary analysis.

The output of the software includes visual images of the coronary arteries; distance and volume measurements of the lumen, vessel wall, and plaque; the remodeling index; and stenosis diameter and area. These measurements are based on user segmentation. (Standard definitions of several of these quantitative measures are sketched below.)
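The 510(k) summary does not give formulas for these outputs. As context only, here is a minimal sketch of the textbook definitions commonly used in quantitative CCTA analysis; every name below is hypothetical, and nothing here is taken from the Cleerly Labs implementation.

```python
# Textbook quantitative CCTA measures; names and formulas are illustrative
# and are NOT taken from the Cleerly Labs implementation.

def diameter_stenosis_pct(minimal_lumen_diameter_mm: float,
                          reference_diameter_mm: float) -> float:
    """Percent diameter stenosis relative to a normal reference segment."""
    return (1.0 - minimal_lumen_diameter_mm / reference_diameter_mm) * 100.0

def area_stenosis_pct(minimal_lumen_area_mm2: float,
                      reference_lumen_area_mm2: float) -> float:
    """Percent area stenosis relative to a normal reference segment."""
    return (1.0 - minimal_lumen_area_mm2 / reference_lumen_area_mm2) * 100.0

def remodeling_index(lesion_vessel_area_mm2: float,
                     reference_vessel_area_mm2: float) -> float:
    """Vessel cross-sectional area at the lesion over the area at a proximal
    reference; values > ~1.1 are conventionally read as positive remodeling."""
    return lesion_vessel_area_mm2 / reference_vessel_area_mm2

def total_plaque_volume_mm3(vessel_volume_mm3: float,
                            lumen_volume_mm3: float) -> float:
    """Plaque volume as vessel (outer wall) volume minus lumen volume."""
    return vessel_volume_mm3 - lumen_volume_mm3
```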

AI/ML Overview

Here's a breakdown of the acceptance criteria and the study details for the Cleerly Labs device, based on the provided FDA 510(k) summary:

1. Table of Acceptance Criteria and Reported Device Performance:

The document primarily focuses on demonstrating substantial equivalence to a predicate device and on fulfilling general software and medical device standards. It does not explicitly state acceptance criteria as a set of numerical thresholds in a formal table, as some AI/ML device submissions do. Instead, performance is reported as correlation and agreement with expert readers, which serve as the de facto performance metrics.

However, the acceptance criteria can be inferred from what was reported and considered satisfactory for clearance. The Pearson Correlation Coefficients and Bland-Altman Agreements are the key performance indicators (a sketch of how each can be computed follows the table below).

Pearson Correlation Coefficient vs. expert readers (implied acceptance: high correlation, e.g., > 0.70 or > 0.80):

  • Lumen Volume: 0.91
  • Vessel Volume: 0.93
  • Total Plaque Volume: 0.85
  • Calcified Plaque Volume: 0.94
  • Non-Calcified Plaque Volume: 0.74
  • Low-Density-Non-Calcified Plaque Volume: 0.53

Bland-Altman Agreement vs. expert readers (implied acceptance: high agreement, e.g., > 95%):

  • Lumen Volume: 96%
  • Vessel Volume: 97%
  • Total Plaque Volume: (missing/garbled in text)
  • Calcified Plaque Volume: (missing/garbled in text)
  • Non-Calcified Plaque Volume: (missing/garbled in text)
  • Low-Density-Non-Calcified Plaque Volume: 97%

Note on missing/garbled Bland-Altman data: the provided text contains garbled characters for several Bland-Altman agreement values, so precise numbers for Total Plaque Volume, Calcified Plaque Volume, and Non-Calcified Plaque Volume cannot be extracted.
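For readers unfamiliar with these two metrics, here is a minimal sketch of how they can be computed. Note that the summary does not define its "agreement" percentage; a common convention, assumed here, is the fraction of paired differences falling within the 95% limits of agreement (mean difference ± 1.96 SD). The data below are synthetic, purely for illustration.

```python
import numpy as np
from scipy import stats

def pearson_r(device: np.ndarray, expert: np.ndarray) -> float:
    """Pearson correlation between device output and expert-reader values."""
    r, _p = stats.pearsonr(device, expert)
    return float(r)

def bland_altman_agreement(device: np.ndarray, expert: np.ndarray) -> float:
    """Fraction of paired differences within the 95% limits of agreement
    (mean difference +/- 1.96 SD). NOTE: the 510(k) summary does not define
    its 'agreement' percentage; this is one common convention."""
    diff = device - expert
    mean_diff = diff.mean()
    sd_diff = diff.std(ddof=1)
    lower = mean_diff - 1.96 * sd_diff
    upper = mean_diff + 1.96 * sd_diff
    return float(np.mean((diff >= lower) & (diff <= upper)))

# Synthetic lumen-volume pairs (mm^3) for illustration; not the study data.
rng = np.random.default_rng(0)
expert = rng.uniform(50.0, 400.0, size=100)
device = expert + rng.normal(0.0, 15.0, size=100)
print(f"Pearson r: {pearson_r(device, expert):.2f}")
print(f"Bland-Altman agreement: {bland_altman_agreement(device, expert):.0%}")
```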

2. Sample Size Used for the Test Set and Data Provenance:

The document states: "Pearson Correlation Coefficients and Bland-Altman Agreements between Cleerly Labs and expert reader results is reported [in] Table 5." and "The machine learning algorithms were evaluated by comparing the output of the software to that of the ground truth using multiple ground truthers."

  • Test Set Sample Size: The exact number of cases or images in the test set is not explicitly stated in the provided text.
  • Data Provenance (Country of origin, retrospective/prospective): This information is not explicitly stated in the provided text.

3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications:

  • Number of Experts: The document mentions "multiple ground truthers" and "expert readers," but the exact number of experts is not specified.
  • Qualifications of Experts: It states that a "Usability test was conducted with U.S. board certified radiologists and technicians." While this refers to usability, it suggests that the "expert readers" for performance evaluation would likely hold similar, if not higher, qualifications (e.g., board-certified radiologists specializing in cardiac imaging). However, their specific specializations or years of experience are not explicitly detailed.

4. Adjudication Method for the Test Set:

  • The document states: "The machine learning algorithms were evaluated by comparing the output of the software to that of the ground truth using multiple ground truthers." This implies that ground truth was established by more than one expert, but the specific adjudication method (e.g., 2+1, 3+1 consensus, average, or majority vote) for resolving disagreements among these multiple ground truthers is not explicitly described. An illustrative sketch of one such scheme follows.
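To make the "2+1" pattern concrete, here is a hypothetical sketch for a continuous measurement: two primary readers, with a third adjudicating disagreements. Nothing in the summary says Cleerly used this (or any particular) scheme.

```python
def adjudicate_2plus1(reader1: float, reader2: float, reader3: float,
                      tolerance: float) -> float:
    """Hypothetical 2+1 adjudication for a continuous measurement: if the
    two primary readers agree within `tolerance`, average them; otherwise a
    third reader breaks the tie by siding with the closer primary reader.
    Nothing in the 510(k) summary says this scheme was actually used."""
    if abs(reader1 - reader2) <= tolerance:
        return (reader1 + reader2) / 2.0
    closer = min((reader1, reader2), key=lambda v: abs(v - reader3))
    return (closer + reader3) / 2.0

# Example: lumen volumes (mm^3) from three readers, 10 mm^3 tolerance.
print(adjudicate_2plus1(120.0, 150.0, 145.0, tolerance=10.0))  # -> 147.5
```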

5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done:

  • No. An MRMC comparative effectiveness study is not reported in this 510(k) summary. The study focuses on the standalone performance of the software compared to expert-established ground truth, not on how human readers improve with AI assistance versus without it. The document states, "No clinical testing was conducted to demonstrate safety or effectiveness as the device's non-clinical (bench) testing was sufficient to support the intended use of the device."

6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) was Done:

  • Yes, a standalone performance evaluation was done. The reported Pearson Correlation Coefficients and Bland-Altman Agreements are for the "Cleerly Labs" output compared directly to "expert reader results (ground truth)," indicating a standalone assessment of the algorithm's performance. The software is described as a post-processing tool that "aids in determining treatment paths" and provides "suggested segmentation," which users can "edit," but the performance metrics provided are for the direct output of the software.

7. The Type of Ground Truth Used:

  • Expert Consensus (implied): The ground truth was established by "expert readers" or "multiple ground truthers." This strongly implies that the reference standard against which the software was compared was derived from human expert interpretation, likely through a consensus process or independent readings. There is no mention of pathology or outcomes data being used as ground truth for these specific quantitative plaque metrics.

8. The Sample Size for the Training Set:

  • The document does not provide information regarding the sample size used for the training set. It only discusses the evaluation of "machine learning algorithms" but not their development dataset.

9. How the Ground Truth for the Training Set was Established:

  • The document does not provide information on how the ground truth for the training set was established. This detail is typically separate from the test set evaluation and not always included in 510(k) summaries if not directly relevant to the substantial equivalence argument for the performance study.

§ 892.2050 Medical image management and processing system.

(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).