Search Results

Found 4 results

510(k) Data Aggregation

    K Number: K242338
    Manufacturer: Cleerly, Inc.
    Date Cleared: 2025-03-07 (212 days)
    Regulation Number: 892.2050
    Why did this record match? Applicant Name (Manufacturer): Cleerly, Inc.

    Intended Use

    Cleerly LABS is a web-based software application that is intended to be used by trained medical professionals as an interactive tool for viewing and analyzing cardiac computed tomography (CT) data for determining the presence and extent of coronary plaques (i.e., atherosclerosis) and stenosis in patients who underwent Coronary Computed Tomography Angiography (CCTA) for evaluation of CAD or suspected CAD. This software post-processes CT images obtained using any Computed Tomography (CT) scanner. The software provides tools for the measurement and visualization of coronary arteries.

    The software is not intended to replace the skill and judgment of a qualified medical practitioner and should only be used by people who have been appropriately trained in the software's functions, capabilities and limitations. Users should be aware that certain views make use of interpolated data. This is data that is created by the software based on the original data set. Interpolated data may give the appearance of healthy tissue in situations where pathology that is near or smaller than the scanning resolution may be present.

    Device Description

    Cleerly LABS is a post-processing web-based software application that enables trained medical professionals to analyze 2D/3D coronary images acquired from Coronary Computed Tomography Angiography (CCTA) scans. The software is a post-processing tool that aids in determining treatment paths for patients suspected of having coronary artery disease (CAD).

    Cleerly LABS utilizes machine learning and simple rule-based mathematical calculation components which are performed on the backend of the software. The software applies deep learning methodology to identify high quality images, segment and label coronary arteries, and segment lumen and vessel walls. 2D and 3D images are presented to the user for review and manual editing. This segmentation is designed to improve efficiency for the user, and help shorten tedious, time-consuming manual tasks.

    The user is then able to edit the suggested segmentation as well as adjust plaque thresholds, demarcate stenosis, stents, and chronic total occlusions (CTOs) as well as select dominance and indicate coronary anomalies. Plaque, stenosis, and vessel measurements are output based on the combination of user-editable segmentation and user-placed stenosis, stent, and CTO markers. These outputs are mathematical calculations and are not machine-learning based.
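
    To make that distinction concrete, here is a minimal, purely illustrative sketch (not Cleerly's actual implementation) of how such rule-based volume and stenosis outputs could be derived from a user-edited segmentation; the array inputs, voxel spacing, and function names are hypothetical:

```python
import numpy as np

def volumes_from_masks(lumen_mask, vessel_mask, voxel_spacing_mm):
    """Illustrative rule-based volume calculations from binary segmentation masks.

    lumen_mask / vessel_mask: 3D boolean arrays (hypothetical stand-ins for the
    user-editable segmentation); voxel_spacing_mm: (dz, dy, dx) in millimetres.
    """
    voxel_volume_mm3 = float(np.prod(voxel_spacing_mm))
    lumen_volume = lumen_mask.sum() * voxel_volume_mm3
    vessel_volume = vessel_mask.sum() * voxel_volume_mm3
    # Plaque occupies the vessel wall outside the lumen.
    plaque_volume = np.logical_and(vessel_mask, ~lumen_mask).sum() * voxel_volume_mm3
    return {"lumen_mm3": lumen_volume, "vessel_mm3": vessel_volume, "plaque_mm3": plaque_volume}

def percent_diameter_stenosis(minimal_lumen_diameter_mm, reference_diameter_mm):
    """Percent diameter stenosis relative to a reference (e.g., user-demarcated) segment."""
    return 100.0 * (1.0 - minimal_lumen_diameter_mm / reference_diameter_mm)
```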

    Cleerly LABS provides a visualization of the Cleerly LABS analysis in the CORONARY Report. The CORONARY Report uses data previously acquired from the Cleerly LABS image analysis to generate a visually interactive and comprehensive report that details the atherosclerosis and stenosis findings of the patient. This report is not intended to be the final report (i.e., physician report) used in patient diagnosis and treatment. Cleerly Labs provides the ability to send the text report page of the CORONARY Report to the user's PACS system.

    Cleerly LABS software does not perform any functions that could not be accomplished by a trained user with manual tracing methods or other commercially available software. Rather, it represents a more robust semiautomatic software intended to enhance the performance of time-intensive, potentially error-prone, manual tasks, thereby improving efficiency for medical professionals in the assessment of coronary artery disease (CAD).

    AI/ML Overview

    The provided FDA 510(k) summary for Cleerly LABS (v2.0) indicates that no new clinical testing was conducted to demonstrate safety or effectiveness for this submission (K242338), as the non-clinical testing was deemed sufficient. The submission focuses on demonstrating substantial equivalence to the previously cleared predicate device, Cleerly LABS v2.0 (K202280), chiefly on the basis of modifications to product labeling, workflow, and minor technological enhancements, with no changes to the underlying algorithms or mathematical calculations.

    Therefore, the document does not contain details about specific acceptance criteria, device performance, sample sizes for test sets, data provenance, expert adjudication methods, MRMC studies, standalone performance, or ground truth establishment for a new clinical study. Instead, it refers to the sufficiency of previous non-clinical testing and substantial equivalence to the predicate device.

    Given this, I will extract information related to the overall performance claims and testing mentioned, emphasizing that these refer to the previous evaluation or software testing for this specific submission, rather than a new clinical study.

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not provide a specific table of quantitative acceptance criteria and corresponding reported device performance metrics for a new clinical study. It states that "Results of testing re-confirmed that the software requirements fulfilled the pre-defined acceptance criteria." However, these specific criteria and results are not detailed in this summary.

    2. Sample Size Used for the Test Set and Data Provenance

    Not applicable for a new clinical study in this submission. The document states, "No clinical testing was conducted to demonstrate safety or effectiveness as the device's non-clinical testing was sufficient to support the intended use of the device." For software evaluation, it states "multiple pre-production environments using simulated data and in production for release verification." No specific sample sizes for these internal software tests or data provenance are provided.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    Not applicable for a new clinical study in this submission. Ground truth establishment for previous studies or internal validation is not detailed.

    4. Adjudication Method for the Test Set

    Not applicable for a new clinical study in this submission.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs Without AI Assistance

    No MRMC comparative effectiveness study was mentioned or performed for this submission. The device is described as "more robust semiautomatic software intended to enhance the performance of time-intensive, potentially error-prone, manual tasks, thereby improving efficiency for medical professionals." However, no specific effect size or improvement metrics are provided for human readers with or without AI assistance in this document.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    The device is described as a "web-based software application that is intended to be used by trained medical professionals as an interactive tool" and "not intended to replace the skill and judgment of a qualified medical practitioner." It also mentions that "The software applies deep learning methodology to identify high quality images, segment and label coronary arteries, and segment lumen and vessel walls." While the core functions are supported by AI, the tool is semi-automatic with user review and editing capabilities. The document doesn't explicitly state if a standalone performance study (without human interaction) was performed for regulatory submission, but rather focuses on its role as an interactive tool for professionals.

    7. The Type of Ground Truth Used

    Not explicitly stated for the "software evaluation activities" mentioned. For the underlying algorithms (which were unchanged from the predicate), the document implies that expert review and manual editing are part of the process, suggesting expert consensus or reference standards may have been used in the original development.

    8. The Sample Size for the Training Set

    No information on the sample size for the training set is provided in this document.

    9. How the Ground Truth for the Training Set Was Established

    No information on how the ground truth for the training set was established is provided in this document.


    K Number: K231335
    Device Name: Cleerly ISCHEMIA
    Manufacturer: Cleerly, Inc
    Date Cleared: 2023-09-08 (123 days)
    Regulation Number: 870.2200
    Why did this record match? Applicant Name (Manufacturer): Cleerly, Inc

    Intended Use

    Cleerly ISCHEMIA analysis software is an automated machine learning-based decision support tool, indicated as a diagnostic aid for patients undergoing CT analysis using Cleerly Labs software. When utilized by an interpreting healthcare provider, this software tool provides information that may be useful in detecting likely ischemia associated with coronary artery disease. Patient management decisions should not be made solely on the results of the Cleerly ISCHEMIA analysis.

    Device Description

    Cleerly ISCHEMIA is an add-on software module to Cleerly Labs (K202280, K190868) that determines the likely presence or absence of coronary vessel ischemia based on quantitative measures of atherosclerosis, stenosis, and significant vascular morphology from typically acquired Coronary Computed Tomography Angiography (CCTA) images. Cleerly ISCHEMIA, in conjunction with Cleerly Labs, outputs a Cleerly ISCHEMIA Index (CII), a binary indication of negative CII (likely absence of ischemia) or positive CII (likely presence of ischemia), with its threshold equivalent to invasive FFR >0.80 vs. ≤0.80, respectively, as identified in professional societal practice guidelines.
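
    As a reading aid only, the binary labeling implied by that cut-point can be sketched as below; the function names and the notion of an intermediate continuous model score are assumptions, since the summary specifies only the FFR threshold, not the algorithm's internals:

```python
def reference_label_from_ffr(invasive_ffr: float) -> str:
    """Reference-standard label used in the validation: FFR <= 0.80 is ischemia-positive."""
    return "positive" if invasive_ffr <= 0.80 else "negative"

def cleerly_ischemia_index(model_score: float, decision_threshold: float) -> str:
    """Hypothetical binary CII assignment from an (assumed) continuous model score.

    The summary states only that the positive/negative cut-point was chosen to be
    equivalent to invasive FFR <= 0.80 vs. > 0.80; the actual scoring is not described.
    """
    return "positive CII" if model_score >= decision_threshold else "negative CII"
```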

    AI/ML Overview

    The provided document describes the Cleerly ISCHEMIA device and its clinical validation. Here's a breakdown of the requested information based on the text:

    1. A table of acceptance criteria and the reported device performance

    The document does not explicitly state pre-defined acceptance criteria (e.g., minimum sensitivity of X% and specificity of Y%). Instead, it presents the results of the primary endpoint analysis from the CREDENCE Trial and then pooled results from additional studies. Therefore, the reported device performance serves as the basis for demonstrating acceptable clinical performance.

    Metric (per-vessel territory)    Reported Device Performance (CREDENCE Trial, Primary Endpoint)
    Sensitivity                      75.9% (167/220)
    Specificity                      83.4% (521/625)

    Additional performance data from pooled US and OUS cohorts are also provided:

    Metric (Pooled US + OUS, per-vessel territory)     Estimate    95% CI
    Sensitivity                                        76.2%       71.9%, 80.3%
    Specificity                                        85.2%       82.8%, 87.4%
    PPV                                                65.9%       61.2%, 70.3%
    NPV                                                90.5%       88.5%, 92.3%
    LR+                                                5.15        -
    LR-                                                0.28        -

    Metric (Pooled US + OUS, per-patient territory)    Estimate    95% CI
    Sensitivity                                        86.6%       82.1%, 90.1%
    Specificity                                        69.8%       64.4%, 74.7%
    PPV                                                73.2%       68.2%, 77.7%
    NPV                                                84.6%       79.5%, 88.6%
    LR+                                                2.87        -
    LR-                                                0.19        -
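
    For readers who want to trace the arithmetic, the metrics above follow from standard 2x2 confusion-matrix formulas. The sketch below applies them to the per-vessel CREDENCE counts quoted earlier (167/220 and 521/625); it is a generic illustration, not taken from the submission itself:

```python
def diagnostic_metrics(tp: int, fn: int, tn: int, fp: int) -> dict:
    """Standard diagnostic accuracy metrics from 2x2 confusion-matrix counts."""
    se = tp / (tp + fn)          # sensitivity
    sp = tn / (tn + fp)          # specificity
    return {
        "sensitivity": se,
        "specificity": sp,
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "lr_positive": se / (1 - sp),
        "lr_negative": (1 - se) / sp,
    }

# CREDENCE primary endpoint, per-vessel territory: 167 of 220 ischemic territories
# detected, 521 of 625 non-ischemic territories correctly ruled out.
print(diagnostic_metrics(tp=167, fn=220 - 167, tn=521, fp=625 - 521))
# -> sensitivity ~0.759 and specificity ~0.834, matching the 75.9% / 83.4% reported above.
```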

    The conclusion states, "Cumulatively, these data demonstrate acceptable clinical performance," implying that the presented performance values met the internal acceptance standards for regulatory submission.

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    • Test Set (CREDENCE Trial Validation Set): 305 patients.
    • Data Provenance: The CREDENCE Trial was a prospective, multicenter trial conducted between 2014 and 2017 across 17 centers (the pooled-data description later cites 23 centers, implying an expanded or differently reported site count). It recruited patients with stable symptoms and without a prior diagnosis of CAD who were referred for non-emergent ICA. The primary endpoint analysis was based on the trial's validation set. For the pooled data, the document describes a "US/OUS cohort population" broken down into "Pooled US" (N=149 subjects) and "Pooled OUS" (N=433 subjects). The specific US vs. OUS breakdown of the original CREDENCE derivation/validation sets is not explicitly detailed, but the subsequent pooled data clearly delineate US and OUS categories.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    The document states, "Clinical validation testing was done to validate the diagnostic performance of Cleerly ISCHEMIA for non-invasive determination of the functional significance of CAD, as referenced to direct invasive measurement of FFR as the reference standard." It also mentions, "All index tests were interpreted blindly by core laboratories."

    • Number of Experts: Not explicitly stated for the interpretation of FFR.
    • Qualifications of Experts: Not explicitly stated, though "core laboratories" implies a standard of expertise in cardiology and interventional procedures necessary for FFR measurement and interpretation.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    The document mentions that FFR was the "reference standard." Invasive FFR measurement is an objective physiological assessment, rather than a subjective interpretation requiring adjudication. For the interpretation of the CCTA images that serve as input to Cleerly Labs (and subsequently Cleerly ISCHEMIA), it states, "All index tests were interpreted blindly by core laboratories." The specific adjudication method (e.g., consensual read vs. single reader) by these core laboratories for CCTA interpretation is not detailed. However, the ground truth for Cleerly ISCHEMIA is directly linked to the quantitative invasive FFR values.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    No MRMC comparative effectiveness study involving human readers with and without AI assistance is described in this document. The study focuses on the standalone diagnostic performance of the Cleerly ISCHEMIA algorithm against an invasive reference standard (FFR). It is presented as a "diagnostic aid" for use by an interpreting healthcare provider, implying it provides information to the provider, but the study doesn't quantify interaction or improvement.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    Yes, a standalone performance evaluation was clearly done. The clinical validation section explicitly describes the performance of the "Cleerly ISCHEMIA" device in detecting likely ischemia as referenced to invasive FFR. The results (sensitivity, specificity, etc.) are reported for the algorithm's output.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    The ground truth used was invasive fractional flow reserve (FFR), described as the "reference standard" for determining the functional significance of coronary artery disease. A Cleerly ISCHEMIA Index (CII) of positive (likely ischemia) corresponds to invasive FFR ≤0.80, and negative CII corresponds to FFR >0.80.

    8. The sample size for the training set

    The CREDENCE Trial cohort was divided into two subsets: "the first half of enrollees at each site assigned to the derivation (n = 307) and the second half to the validation (n = 305) data set." The derivation set (n=307) would typically serve as the training/development set for the algorithm. The document doesn't explicitly refer to it as the "training set," but "derivation" implies its use in developing/optimizing the algorithm.
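
    A minimal sketch of that site-wise split, assuming a hypothetical enrollment-ordered list of (site, subject) records; the handling of odd per-site counts is an assumption, since the summary does not specify it:

```python
from collections import defaultdict

def split_by_site(enrollees):
    """Assign the first half of each site's enrollees (in enrollment order) to the
    derivation set and the second half to the validation set, as described above.

    enrollees: iterable of (site_id, subject_id) pairs in enrollment order
    (a hypothetical schema, not the trial's actual data model).
    """
    by_site = defaultdict(list)
    for site_id, subject_id in enrollees:
        by_site[site_id].append(subject_id)

    derivation, validation = [], []
    for subjects in by_site.values():
        cut = (len(subjects) + 1) // 2   # odd counts favor derivation here; an assumption
        derivation.extend(subjects[:cut])
        validation.extend(subjects[cut:])
    return derivation, validation
```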

    9. How the ground truth for the training set was established

    For the derivation set, the ground truth would have been established in the same manner as for the validation set: direct invasive measurement of FFR. The CREDENCE trial collected FFR data for all enrollees, which were then allocated to either the derivation or validation sets.


    K Number: K202280
    Manufacturer: Cleerly, Inc.
    Date Cleared: 2020-10-02 (52 days)
    Regulation Number: 892.2050
    Why did this record match? Applicant Name (Manufacturer): Cleerly, Inc.

    Intended Use

    Cleerly Labs is a web-based software application that is intended to be used by trained medical professionals as an interactive tool for viewing and analyzing cardiac computed tomography (CT) data for determining the presence and extent of coronary plaques (i.e., atherosclerosis) in patients who underwent Coronary Computed Tomography Angiography (CCTA) for evaluation of CAD or suspected CAD. This software post-processes CT images obtained using any Computed Tomography (CT) scanner. The software provides tools for the measurement and visualization of coronary arteries.

    The software is not intended to replace the skill and judgment of a qualified medical practitioner and should only be used by people who have been appropriately trained in the software's functions, capabilities and limitations. Users should be aware that certain views make use of interpolated data. This is created by the software based on the original data set. Interpolated data may give the appearance of healthy tissue in situations where pathology that is near or smaller than the scanning resolution may be present.

    Device Description

    Cleerly Labs is a post-processing web-based software application that enables trained medical professionals to analyze 2D/3D coronary images acquired from Coronary Computed Tomography Angiography (CCTA) scans. The software is a post-processing tool that aids in determining treatment paths for patients suspected to have coronary artery disease (CAD).

    Cleerly Labs utilizes machine learning and simple rule-based mathematical calculation components, which are performed on the backend of the software. The software applies deep learning methodology to identify high quality images, segment and label coronary arteries, and segment lumen and vessel walls. 2D and 3D images are presented to the user for review and manual editing. This segmentation is designed to improve efficiency for the user and help shorten tedious, time-consuming manual tasks.

    The user is then able to edit the suggested segmentation, adjust plaque thresholds, and demarcate stenosis, stents, and chronic total occlusions (CTOs), as well as select dominance and indicate coronary anomalies. Plaque, stenosis, and vessel measurements are output based on the combination of user-editable segmentation and user-placed stenosis, stent, and CTO markers. These outputs are mathematical calculations and are not machine-learning based.

    Cleerly Labs provides a visualization of the Cleerly Labs analysis in the CORONARY Report. The CORONARY Report uses data previously acquired from the Cleerly Labs image analysis to generate a visually interactive and comprehensive report that details the atherosclerosis and stenosis findings of the patient. This report is not intended to be the final report (i.e., physician report) used in patient diagnosis and treatment. Cleerly Labs provides the ability to send the text report page of the CORONARY Report to the user's PACS system.

    Cleerly Labs software does not perform any functions that could not be accomplished by a trained user with manual tracing methods or other commercially available software. Rather, it represents a more robust semiautomatic software intended to enhance the performance of time-intensive, potentially error-prone, manual tasks, thereby improving efficiency for medical professionals in the assessment of coronary artery disease (CAD).

    AI/ML Overview

    The provided text details the 510(k) summary for Cleerly Labs v2.0, focusing heavily on its substantial equivalence to the predicate device, Cleerly Labs (K190868). The performance data presented for Cleerly Labs v2.0 is referenced as being reproduced from the testing done for the predicate device (K190868), as no new clinical testing was conducted for v2.0. Therefore, the information provided below pertains to the study that proved Cleerly Labs (K190868) met its acceptance criteria, as the performance claims for v2.0 are based on this prior validation.

    Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided document:


    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria are implied by the "Pearson Correlation Coefficient" and "Bland-Altman Agreement" values for specific volume measurements. The device performance (reported for the predicate device, K190868, and reproduced for K202280) is presented in Table 6. While explicit "acceptance criteria" numerical thresholds are not stated (e.g., "must achieve a Pearson Correlation >= 0.8"), the reported values represent the performance achieved to demonstrate substantial equivalence to expert reader results.

    Output                                     Acceptance Criteria (Implied)    Reported Device Performance
    Lumen Volume                               High Pearson Correlation         0.91
                                               High Bland-Altman Agreement      96%
    Vessel Volume                              High Pearson Correlation         0.93
                                               High Bland-Altman Agreement      97%
    Total Plaque Volume                        High Pearson Correlation         0.85
                                               High Bland-Altman Agreement      95%
    Calcified Plaque Volume                    High Pearson Correlation         0.94
                                               High Bland-Altman Agreement      95%
    Non-Calcified Plaque Volume                High Pearson Correlation         0.74
                                               High Bland-Altman Agreement      95%
    Low-Density-Non-Calcified Plaque Volume    High Pearson Correlation         0.53
                                               High Bland-Altman Agreement      97%

    Note: The document states, "Additionally, the performance of the software was previously compared to ground truth results produced by expert readers (K190868). Pearson Correlation Coefficients and Bland-Altman Agreements between Cleerly Labs and expert reader results are reproduced in Table 6." This indicates that these performance metrics were the basis for acceptance for the predicate device.
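
    For context, the two statistics in the table are typically computed as below for paired software-vs-expert measurements. This is a generic sketch; the exact definition behind the reported "Agreement" percentages (for example, the fraction of cases falling within the 95% limits of agreement) is not spelled out in the summary:

```python
import numpy as np

def pearson_and_bland_altman(software_vals, expert_vals):
    """Pearson correlation and Bland-Altman statistics for paired measurements."""
    x = np.asarray(software_vals, dtype=float)
    y = np.asarray(expert_vals, dtype=float)

    r = np.corrcoef(x, y)[0, 1]                        # Pearson correlation coefficient

    diffs = x - y
    bias = diffs.mean()                                # mean software-minus-expert difference
    half_width = 1.96 * diffs.std(ddof=1)              # 95% limits-of-agreement half-width
    within_loa = float(np.mean(np.abs(diffs - bias) <= half_width))  # fraction inside the limits

    return {"pearson_r": r, "bias": bias,
            "loa": (bias - half_width, bias + half_width),
            "fraction_within_loa": within_loa}
```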


    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size: The document does not specify the sample size used for the test set in either the original K190868 submission or this K202280 submission.
    • Data Provenance: The document does not specify the country of origin of the data, nor does it explicitly state whether the data was retrospective or prospective. It only mentions that the data was from "patients who underwent Coronary Computed Tomography Angiography (CCTA)."

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    • Number of Experts: The document refers to "expert readers" (plural) but does not specify the exact number of experts used to establish the ground truth.
    • Qualifications of Experts: The document does not provide the specific qualifications of these experts (e.g., years of experience, specific certifications). It only refers to them as "expert readers."

    4. Adjudication Method for the Test Set

    • The document does not describe any specific adjudication method (e.g., 2+1, 3+1) used for establishing the ground truth from the expert readers. It only states that ground truth results were "produced by expert readers."

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • No MRMC Comparative Effectiveness Study was done for Cleerly Labs v2.0. The document explicitly states: "No clinical testing was conducted to demonstrate safety or effectiveness as the device's non-clinical testing was sufficient to support the intended use of the device."
    • The performance data provided (Table 6) refers to the comparison of the device's measurements against expert reader results in the predicate device's validation (K190868). This is a comparison between the device and human expert ground truth, not a study evaluating human readers' improvement with AI assistance.

    6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Performance

    • The data in Table 6, showing "Pearson Correlation Coefficients and Bland-Altman Agreements between Cleerly Labs and expert reader results," represents the standalone performance of the Cleerly Labs algorithm (or at least the initial segmented values before user editing).
    • The description of the software functionality notes: "Cleerly Labs utilizes machine learning... applies deep learning methodology to identify high quality images, segment and label coronary arteries, and segment lumen and vessel walls. 2D and 3D images are presented to the user for review and manual editing." The performance metrics in Table 6 most plausibly reflect the agreement of these automated segmentations with expert results. However, the description also states: "Plaque, stenosis, and vessel measurements are output based on the combination of user-editable segmentation and user-placed stenosis, stent, and CTO markers. These outputs are mathematical calculations and are not machine-learning based." This means the final outputs can be influenced by user edits, and the document does not explicitly clarify whether the correlation values are based on the initial automated output or on the final user-edited output compared with the expert ground truth. Given that the intent is to show an automated system's performance, they most likely refer to the algorithm's capability.

    7. Type of Ground Truth Used

    • The ground truth used was expert consensus (or at least expert reader results). The document states that the software's performance was "compared to ground truth results produced by expert readers."

    8. Sample Size for the Training Set

    • The document does not specify the sample size used for the training set for the machine learning components of Cleerly Labs.

    9. How the Ground Truth for the Training Set Was Established

    • The document does not describe how the ground truth for the training set was established. It only mentions that the machine learning components underwent validation.

    K Number: K190868
    Device Name: Cleerly Labs
    Manufacturer: Cleerly Inc
    Date Cleared: 2019-11-05 (216 days)
    Regulation Number: 892.2050
    Why did this record match? Applicant Name (Manufacturer): Cleerly Inc

    Intended Use

    Cleerly Labs is a web-based software application that is intended to be used by trained medical professionals as an interactive tool for viewing and analyzing cardiac computed tomography (CT) data for determining the presence and extent of coronary plaques and stenosis in patients who underwent Coronary Computed Tomography Angiography (CCTA) for evaluation of CAD or suspected CAD. This software post-processes CT images obtained using any Computed Tomography (CT) scanner. The software provides tools for the measurement and visualization of coronary arteries.

    The software is not intended to replace the skill and judgment of a qualified medical practitioner and should only be used by people who have been appropriately trained in the software's functions, capabilities and limitations. Users should be aware that certain views make use of interpolated data. This is created by the software based on the original data set. Interpolated data may give the appearance of healthy tissue in situations where pathology that is near or smaller than the scanning resolution may be present.

    Device Description

    Cleerly Labs is a post-processing web-based software application that enables trained medical professionals to analyze 2D/3D coronary images acquired from Computed Tomography (CT) angiographic scans. The software is a post-processing tool that aids in determining treatment paths for patients suspected to have coronary artery disease (CAD).

    To aid in image analysis, tools are provided to users to navigate and manipulate images. Manual and semi-automatic segmentation of the coronary artery images are possible using editing tools, thus providing the user the flexibility to perform the coronary analysis.

    The output of the software includes visual images of coronary arteries, distance and volume measurements of the lumen wall, vessel wall, and plaque, remodeling index as well as stenosis diameter and area. These measurements are based on user segmentation.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study details for the Cleerly Labs device, based on the provided FDA 510(k) summary:

    1. Table of Acceptance Criteria and Reported Device Performance:

    The document primarily focuses on demonstrating substantial equivalence to a predicate device and fulfilling general software and medical device standards. It doesn't explicitly state "acceptance criteria" as a set of numerical thresholds the device must meet in a formal table like some AI/ML device submissions. Instead, performance is reported as correlation and agreement with expert readers, which serve as the de-facto performance metrics.

    However, we can infer the acceptance criteria for performance by looking at what was reported and considered satisfactory for clearance. The Pearson Correlation Coefficients and Bland-Altman Agreements are the key performance indicators.

    Pearson Correlation Coefficient (implied acceptance: high correlation with expert readers, e.g., > 0.70 or > 0.80)
      Lumen Volume                               0.91
      Vessel Volume                              0.93
      Total Plaque Volume                        0.85
      Calcified Plaque Volume                    0.94
      Non-Calcified Plaque Volume                0.74
      Low-Density-Non-Calcified Plaque Volume    0.53

    Bland-Altman Agreement (implied acceptance: high agreement with expert readers, e.g., > 95%)
      Lumen Volume                               96%
      Vessel Volume                              97%
      Total Plaque Volume                        (Missing/Garbled in text)
      Calcified Plaque Volume                    (Missing/Garbled in text)
      Non-Calcified Plaque Volume                (Missing/Garbled in text)
      Low-Density-Non-Calcified Plaque Volume    97%

    Note on Missing/Garbled Bland-Altman Data: The provided text has garbled characters for several Bland-Altman agreement values, making it impossible to extract precise numbers for "Total Plaque Volume," "Calcified Plaque Volume," and "Non-Calcified Plaque Volume."

    2. Sample Size Used for the Test Set and Data Provenance:

    The document states: "Pearson Correlation Coefficients and Bland-Altman Agreements between Cleerly Labs and expert reader results is reported Table 5." and "The machine learning algorithms were evaluated by comparing the output of the software to that of the ground truth using multiple ground truthers."

    • Test Set Sample Size: The exact number of cases or images in the test set is not explicitly stated in the provided text.
    • Data Provenance (Country of origin, retrospective/prospective): This information is not explicitly stated in the provided text.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications:

    • Number of Experts: The document mentions "multiple ground truthers" and "expert readers," but the exact number of experts is not specified.
    • Qualifications of Experts: It states that a "Usability test was conducted with U.S. board certified radiologists and technicians." While this refers to usability, it suggests that the "expert readers" for performance evaluation would likely hold similar, if not higher, qualifications (e.g., board-certified radiologists specializing in cardiac imaging). However, their specific specializations or years of experience are not explicitly detailed.

    4. Adjudication Method for the Test Set:

    • The document states: "The machine learning algorithms were evaluated by comparing the output of the software to that of the ground truth using multiple ground truthers." This implies that ground truth was established by more than one expert. However, the specific adjudication method (e.g., 2+1, 3+1 consensus, average, majority vote, etc.) for resolving disagreements among these multiple ground truthers is not explicitly described.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done:

    • No, an MRMC comparative effectiveness study was not explicitly done or reported in this 510(k) summary. The study focuses on the standalone performance of the software compared to expert-established ground truth, not on how human readers improve with AI assistance versus without it. The document states, "No clinical testing was conducted to demonstrate safety or effectiveness as the device's non-clinical (bench) testing was sufficient to support the intended use of the device."

    6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) was Done:

    • Yes, a standalone performance evaluation was done. The reported Pearson Correlation Coefficients and Bland-Altman Agreements are for the "Cleerly Labs" output compared directly to "expert reader results (ground truth)," indicating a standalone assessment of the algorithm's performance. The software is described as a post-processing tool that "aids in determining treatment paths" and provides "suggested segmentation," which users can "edit," but the performance metrics provided are for the direct output of the software.

    7. The Type of Ground Truth Used:

    • Expert Consensus (implied): The ground truth was established by "expert readers" or "multiple ground truthers." This strongly implies that the reference standard against which the software was compared was derived from human expert interpretation, likely through a consensus process or independent readings. There is no mention of pathology or outcomes data being used as ground truth for these specific quantitative plaque metrics.

    8. The Sample Size for the Training Set:

    • The document does not provide information regarding the sample size used for the training set. It only discusses the evaluation of "machine learning algorithms" but not their development dataset.

    9. How the Ground Truth for the Training Set was Established:

    • The document does not provide information on how the ground truth for the training set was established. This detail is typically separate from the test set evaluation and not always included in 510(k) summaries if not directly relevant to the substantial equivalence argument for the performance study.
