510(k) Data Aggregation

Search Results: 4 results found

    K Number: K203783
    Device Name: ClariPulmo
    Manufacturer: ClariPi Inc.
    Date Cleared: 2022-04-06 (464 days)
    Regulation Number: 892.2050

    Intended Use

    ClariPulmo is a non-invasive image analysis software for use with CT images which is intended to support the quantification of lung CT images. The software is designed to support the physician in the diagnosis and documentation of pulmonary tissue images (e.g., abnormalities) from the CT thoracic datasets. (The software is not intended for the diagnosis of pneumonia or COVID-19). The software provides automated segmentation of the lungs and quantification of low-attenuation and high-attenuation areas within the segmented lungs by using predefined Hounsfield unit thresholds. The software displays by color the segmented lungs and analysis results. ClariPulmo provides optional denoising and kernel normalization functions for improved quantification of lung CT images in cases when CT images were taken at low-dose conditions or with sharp reconstruction kernels.

    Device Description

    ClariPulmo is a standalone software application that analyzes lung CT images to support the physician in quantifying lung CT images when examining pulmonary tissues. It provides two main functions (LAA Analysis and HAA Analysis) and two optional functions (Kernel Normalization and Denoising):

    • LAA Analysis provides quantitative measurement of pulmonary tissue with low attenuation areas (LAA). LAA are measured by counting voxels with attenuation values below the user-predefined thresholds within the segmented lungs, supporting the physician in quantifying lung tissue with low attenuation areas.
    • HAA Analysis provides quantitative measurement of pulmonary tissue with high attenuation areas (HAA). HAA are measured by counting voxels with high attenuation values according to the user-predefined thresholds within the segmented lungs, supporting the physician in quantifying lung tissue with high attenuation areas.
    • The lungs are automatically segmented using a pre-trained deep learning model.
    • The optional Kernel Normalization function provides an image-to-image translation from a sharp-kernel image to a smooth-kernel image for improved quantification of lung CT images; the algorithm is based on the U-Net architecture.
    • The optional Denoising function provides an image-to-image translation from a noisy low-dose CT (LDCT) image to a noise-reduced, enhanced-quality image for improved quantification of lung LDCT images; this algorithm is also based on the U-Net architecture.

    The ClariPulmo software provides summary reports of the measurement results that contain color overlay images of the lung tissues as well as tables and charts displaying the analysis results.
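    To make the threshold-based quantification concrete, here is a minimal sketch (not ClariPulmo's implementation) that computes LAA and HAA percentages from a CT volume in Hounsfield units and a binary lung mask. The function name, the default thresholds, and the way the HAA range is applied are illustrative assumptions; in the device the thresholds are user-predefined.

```python
import numpy as np

def laa_haa_percentages(hu_volume: np.ndarray,
                        lung_mask: np.ndarray,
                        laa_threshold: float = -950.0,
                        haa_range: tuple = (-600.0, -250.0)) -> dict:
    """Threshold-based LAA/HAA quantification over a segmented lung volume.

    hu_volume : CT volume in Hounsfield units.
    lung_mask : boolean mask of the segmented lungs, same shape as hu_volume.
    The threshold values here are placeholders, not ClariPulmo's defaults.
    """
    lung_voxels = hu_volume[lung_mask.astype(bool)]
    n_lung = lung_voxels.size
    if n_lung == 0:
        raise ValueError("Empty lung mask")

    n_laa = np.count_nonzero(lung_voxels < laa_threshold)       # low-attenuation voxels
    lo, hi = haa_range
    n_haa = np.count_nonzero((lung_voxels >= lo) & (lung_voxels <= hi))  # high-attenuation voxels

    return {
        "LAA%": 100.0 * n_laa / n_lung,
        "HAA%": 100.0 * n_haa / n_lung,
        "lung_voxel_count": int(n_lung),
    }
```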

    AI/ML Overview

    The provided text is specifically from a 510(k) summary for the ClariPulmo device. It details non-clinical performance testing rather than a large-scale clinical study with human-in-the-loop performance. Therefore, some of the requested information (e.g., MRMC study, effect size of human reader improvement) may not be present in this document because it was not a requirement for this specific type of submission.

    Here's the breakdown of the information based on the provided text:


    Acceptance Criteria and Device Performance Study for ClariPulmo

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria are implicitly defined by the "excellent agreement" reported for the different functions, measured by Pearson Correlation Coefficient (PCC) and Dice Coefficient.

    Acceptance Criteria (Implied) | Reported Device Performance
    HAA Analysis: excellent agreement with expert segmentations | PCC of 0.980–0.983 with expert-established segmentations of user-defined high attenuation areas
    LAA Analysis: excellent agreement with expert segmentations | PCC of 0.99 with expert-established segmentations of user-defined low attenuation areas
    AI-based lung segmentation: excellent agreement with expert segmentations | PCC of 0.977–0.992 and Dice coefficients of 0.98–0.99 against the expert radiologist's ImageJ-based segmentation, with statistical significance reported across normal/LAA/HAA patient, CT scanner, reconstruction kernel, and low-dose subgroups
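    For reference, the two agreement metrics cited above can be computed as follows. This is a generic sketch with toy data, not the submission's evaluation pipeline.

```python
import numpy as np

def pearson_cc(x: np.ndarray, y: np.ndarray) -> float:
    """Pearson correlation between paired measurements (e.g., device vs. expert LAA%)."""
    return float(np.corrcoef(x, y)[0, 1])

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice overlap between two binary segmentations (e.g., AI vs. expert lung masks)."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    intersection = np.count_nonzero(a & b)
    denom = np.count_nonzero(a) + np.count_nonzero(b)
    return 2.0 * intersection / denom if denom else 1.0

# Toy usage with synthetic values (not data from the submission):
device_scores = np.array([12.1, 30.5, 7.8, 22.3])
expert_scores = np.array([11.8, 31.0, 8.1, 21.9])
print(pearson_cc(device_scores, expert_scores))
```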

    2. Sample Sizes Used for the Test Set and Data Provenance

    • HAA Analysis Test Set: The specific number of cases for the test set is not explicitly stated. It mentions "patients with pneumonia and COVID-19."
    • LAA Analysis Test Set: The specific number of cases for the test set is not explicitly stated. It mentions "both healthy and diseased patients."
    • AI-based Lung Segmentation Test Set: The specific number of cases is not explicitly stated. It used "one internal and two external datasets."
    • Data Provenance: The document does not specify the country of origin for the data. The studies were retrospective as they involved testing against existing datasets with established ground truth.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications of Experts

    • The document states "expert-established segmentations" and "expert radiologist's ImageJ-based segmentation."
    • The number of experts is not explicitly stated, nor are their specific qualifications (e.g., years of experience), beyond being referred to as "expert."

    4. Adjudication Method for the Test Set

    • The document does not specify an explicit adjudication method (e.g., 2+1, 3+1). It implies a single "expert-established" ground truth was used for comparison, suggesting either a single expert or a pre-adjudicated consensus.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, and its effect size

    • No, an MRMC comparative effectiveness study was not done or reported in this document. The performance testing described is focused on the standalone agreement of the AI with expert-established ground truth, not how human readers improve with AI assistance. The document explicitly states: "ClariPulmo does not require clinical studies to demonstrate substantial equivalence to the predicate device."

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done

    • Yes, standalone performance testing was done. The entire "Performance Testing" section describes the algorithm's performance against expert-established ground truth ("AI-based lung segmentation demonstrated excellent agreements with that by expert radiologist's ImageJ-based segmentation").

    7. The Type of Ground Truth Used

    • The ground truth used was expert consensus / expert-established segmentations. Specifically, for lung segmentation, it explicitly mentions "expert radiologist's ImageJ-based segmentation." For HAA and LAA, it refers to "expert-established segmentations."

    8. The Sample Size for the Training Set

    • The document does not specify the sample size for the training set. It only mentions that the lung segmentation used a "pre-trained deep learning model" and the Kernel Normalization and Denoising algorithms were "constructed based on the U-Net architecture."

    9. How the Ground Truth for the Training Set Was Established

    • The document does not explicitly describe how the ground truth for the training set was established. It only refers to the test set ground truth as "expert-established segmentations." It is implied that the training data would also have been expertly annotated, given the nature of deep learning model training.

    K Number: K203785
    Device Name: ClariSIGMAM
    Manufacturer: ClariPi Inc.
    Date Cleared: 2021-09-10 (256 days)
    Regulation Number: 892.2050

    Intended Use

    ClariSIGMAM is a software application intended for use with compatible full field digital mammography systems. ClariSIGMAM calculates percent breast density defined as the ratio of fibroglandular tissue to total breast area estimates. ClariSIGMAM uses this numerical value to provide breast density group information (BI-RADS A+B as fatty and BI-RADS C+D as dense) to aid interpreting physicians in the assessment of breast tissue composition. ClariSIGMAM produces adjunctive information. It is not a diagnostic aid.

    Device Description

    ClariSIGMAM software is a standalone software application that automatically analyzes "for presentation" 2D digital mammograms to assess breast tissue composition. The software assesses the breast density of women and generates breast density group information for the patient (BI-RADS A+B as fatty and BI-RADS C+D as dense) in accordance with the American College of Radiology's Breast Imaging Reporting and Data System (BI-RADS) density classification scale. The breast density output of ClariSIGMAM is designed to display on a mammography workstation or PACS as a DICOM mammography structured report or secondary capture. The reports are configured to provide the following data: breast area (cm²) for each breast, fibroglandular tissue area (cm²) for each breast, percent breast density for each breast, and breast density group information for the patient (BI-RADS A+B as fatty and BI-RADS C+D as dense).
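    As an illustration of the report fields listed above, the sketch below derives breast area, fibroglandular area, percent density, and a binary density group from segmentation masks. The masks, the pixel spacing, and especially the dense/fatty cut point are assumptions; ClariSIGMAM's actual segmentation and decision rule are not described in the document.

```python
import numpy as np

def breast_density_report(breast_mask: np.ndarray,
                          fibroglandular_mask: np.ndarray,
                          pixel_spacing_mm: tuple,
                          dense_cutoff_percent: float) -> dict:
    """Per-breast density summary in the spirit of the report fields listed above.

    breast_mask / fibroglandular_mask : binary masks on the same mammogram grid.
    pixel_spacing_mm                  : (row, col) pixel spacing in millimetres.
    dense_cutoff_percent              : caller-supplied cut point for the binary
                                        fatty-vs-dense grouping; ClariSIGMAM's
                                        actual decision rule is not described here.
    """
    pixel_area_cm2 = (pixel_spacing_mm[0] * pixel_spacing_mm[1]) / 100.0  # mm^2 -> cm^2
    breast_area = np.count_nonzero(breast_mask) * pixel_area_cm2
    fg_area = np.count_nonzero(fibroglandular_mask) * pixel_area_cm2
    percent_density = 100.0 * fg_area / breast_area if breast_area else 0.0
    group = ("Dense (BI-RADS C+D)" if percent_density >= dense_cutoff_percent
             else "Fatty (BI-RADS A+B)")
    return {
        "breast_area_cm2": round(breast_area, 1),
        "fibroglandular_area_cm2": round(fg_area, 1),
        "percent_breast_density": round(percent_density, 1),
        "density_group": group,
    }
```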

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for ClariSIGMAM, based on the provided text:

    1. Acceptance Criteria and Reported Device Performance

    The main acceptance criterion derived from the provided text is the agreement between ClariSIGMAM's binary breast density classification and a consensus of expert readers.

    Acceptance Criteria (Binary Breast Density Task) | Reported Device Performance (ClariSIGMAM vs. Readers' Consensus)
    High accuracy in classifying breast density as "Fatty" (BI-RADS A+B) or "Dense" (BI-RADS C+D) compared with the experts' consensus | Overall accuracy: (293 + 436) / 837 = 729 / 837 ≈ 87.1%; accuracy for Fatty: 86.6% (293/338); accuracy for Dense: 87.3% (436/499); kappa statistic: 0.734 [95% CI: 0.688, 0.781], indicating substantial agreement
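    The per-class counts in the table imply a 2x2 confusion matrix (293 of 338 fatty cases and 436 of 499 dense cases classified correctly, so 45 and 63 misclassifications respectively). The sketch below recomputes overall accuracy and Cohen's kappa from those implied counts and reproduces the reported values to rounding.

```python
import numpy as np

# Rows: expert consensus (fatty, dense); columns: ClariSIGMAM output (fatty, dense).
# Off-diagonal counts are implied by the per-class totals (338 fatty, 499 dense).
confusion = np.array([[293, 45],
                      [63, 436]])

total = confusion.sum()
accuracy = np.trace(confusion) / total                      # (293 + 436) / 837 ≈ 0.871

row_marginals = confusion.sum(axis=1) / total               # expert class prevalence
col_marginals = confusion.sum(axis=0) / total               # device class prevalence
expected_agreement = float(row_marginals @ col_marginals)   # chance agreement p_e
kappa = (accuracy - expected_agreement) / (1 - expected_agreement)

print(f"accuracy = {accuracy:.3f}, kappa = {kappa:.3f}")    # ≈ 0.871 and ≈ 0.734
```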

    2. Sample Size and Data Provenance

    • Test Set Sample Size: n=837 (based on the confusion matrix)
    • Data Provenance: The document states the dataset "spanned all compatible FFDM systems." It does not explicitly mention the country of origin or if the data was retrospective or prospective.

    3. Number of Experts and Qualifications for Ground Truth

    • Number of Experts: Four expert readers.
    • Qualifications of Experts: They are referred to as "expert readers" with their assessment being "consensus visual assessment... according to BI-RADS 5th Edition." While specific years of experience or board certifications are not provided, the term "expert radiologist" is used elsewhere in the document when discussing an expert generating breast density measurements using Cumulus software for comparison. For the primary binary breast density task ground truth, they are simply "expert readers."

    4. Adjudication Method for the Test Set

    • Adjudication Method: "A consensus visual assessment of expert readers" was used to establish the ground truth for the BI-RADS breast density category, implying that the four experts arrived at a shared assignment for each case in the test set. The exact mechanism (e.g., majority vote or discussion to resolve discrepancies) is not detailed; the ground truth is described only as a "consensus" formed from initially "independent assessments."

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was an MRMC study done? No, a traditional MRMC comparative effectiveness study demonstrating human reader improvement with AI assistance was not explicitly described. The study primarily focused on the standalone performance of ClariSIGMAM against expert consensus, particularly for the binary density classification.
    • Effect Size (if applicable): Not applicable, as this type of study was not reported.

    6. Standalone Performance (Algorithm Only)

    • Was standalone performance done? Yes. The provided confusion matrix directly illustrates the standalone performance of the ClariSIGMAM algorithm in classifying breast density as "Fatty" or "Dense" when compared to the established ground truth.

    7. Type of Ground Truth Used

    • Type of Ground Truth: The primary ground truth for the binary breast density classification was expert consensus visual assessment according to BI-RADS 5th Edition.
    • Other ground truth methods, used for other validations (not necessarily the main acceptance criteria met by the confusion matrix), included:
      • Interactive thresholding software (Cumulus) by an expert radiologist for "Gold Standard breast density estimates."
      • Paired mammograms one year apart to assess reproducibility over time.

    8. Sample Size for the Training Set

    • The document does not specify the sample size used for the training set. It only discusses the "substantial data sets" used for various validation tests and the "reference standard dataset" for the expert consensus comparison.

    9. How Ground Truth for Training Set Was Established

    • The document does not explicitly state how the ground truth was established for the training set. It focuses on the validation of the device.

    K Number: K212074
    Device Name: ClariCT.AI
    Manufacturer: ClariPi Inc.
    Date Cleared: 2021-07-27 (25 days)
    Regulation Number: 892.2050

    Intended Use

    ClariCT.AI is a software device intended for networking, communication, processing and enhancement of CT images in DICOM format regardless of the manufacturer of CT scanner or model.

    Device Description

    ClariCT.AI software is intended for denoise processing and enhancement of CT DICOM images when higher image quality and/or lower dose acquisitions are desired. ClariCT.Al software can be used to reduce noises in CT images of the head, chest, and abdomen, in particular in CT images with a lower radiation dose. ClariCT.Al may also improve the image quality of low-dose nondiagnostic Filtered Back Projection images as well as Iterative Reconstruction images. The subject device, ClariCT.Al, added a new module (named Al Marketplace Integration module) to the original cleared device (K183460) to enable installation on the Al Marketplace system. The module integrates the Denoising Processor of the original device into the Al Marketplace system. So ClariCT.Al can be hosted through a third-party Al marketplace that integrates centrally with PACS and seamlessly integrates into the existing IT and modality infrastructure.

    AI/ML Overview

    The ClariCT.AI device, as described in the 510(k) summary, is a software intended for denoise processing and enhancement of CT images. The submission K212074 focuses on the addition of a new module ("AI Marketplace Integration module") to the previously cleared device (K183460), enabling installation on an AI Marketplace system. This new module allows the device to be hosted through a third-party AI marketplace, integrating with PACS and existing IT infrastructure. The submission asserts that this change has no effect on the safety or efficacy of the device and does not raise any potential safety risks, and that the subject device is identical in performance to the legally marketed predicate device.

    Since the submission states that "ClariCT.AI does not require clinical studies to demonstrate substantial equivalence to the predicate devices" and that the subject device is "identical in performance to the legally marketed device (K183460)", it implies that the performance data for K212074 relies on the performance data of the original K183460 submission. However, the provided document does not contain the detailed acceptance criteria or a study proving the device (K212074, or even K183460) meets such criteria, nor does it provide information regarding sample size, data provenance, ground truth establishment, or any comparative effectiveness studies.

    Therefore, based solely on the provided text, I cannot provide the requested information in detail. The document primarily focuses on demonstrating substantial equivalence based on the functionality of the new integration module and adherence to general medical device standards.

    Here's a breakdown of what can be extracted and what is missing:

    1. Table of acceptance criteria and reported device performance:

    • Acceptance Criteria (Missing): The document states, "Meets the acceptance criteria and is adequate for its intended use," but does not explicitly list these criteria.
    • Reported Device Performance (Missing): No specific performance metrics (e.g., PSNR, SSIM, or radiologists' scores for noise reduction, image quality, or diagnostic accuracy improvements) are reported for either the subject or predicate device. (The sketch after this list shows how such image-quality metrics are typically computed.)
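    For context only: PSNR and SSIM, mentioned above as examples of metrics that are not reported, are standard full-reference image-quality measures and could be computed with scikit-image as sketched below. Nothing here reflects data from the submission.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def image_quality_metrics(reference_hu: np.ndarray, test_hu: np.ndarray) -> dict:
    """Full-reference metrics between, e.g., a high-dose reference image and a
    processed low-dose image of the same slice (both in Hounsfield units)."""
    data_range = float(reference_hu.max() - reference_hu.min())
    return {
        "psnr_db": peak_signal_noise_ratio(reference_hu, test_hu, data_range=data_range),
        "ssim": structural_similarity(reference_hu, test_hu, data_range=data_range),
    }
```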

    2. Sample size used for the test set and data provenance:

    • Sample Size (Missing): Not mentioned in the provided text.
    • Data Provenance (Missing): Not mentioned in the provided text (e.g., country of origin, retrospective/prospective). The document mentions "substantial datasets" were used for testing, but no specifics.

    3. Number of experts used to establish the ground truth for the test set and their qualifications:

    • Missing: This information is not provided in the document.

    4. Adjudication method for the test set:

    • Missing: Not mentioned in the provided text.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and its effect size:

    • No: The document explicitly states: "ClariCT.AI does not require clinical studies to demonstrate substantial equivalence to the predicate devices." This implies that no MRMC comparative effectiveness study was conducted for this submission (K212074). Given the assertion of identical performance to the predicate, it also suggests such a study wasn't deemed necessary for K183460's clearance either, at least not for the purpose of demonstrating substantial equivalence.

    6. If a standalone (algorithm only without human-in-the-loop performance) was done:

    • Implied Yes, but details missing: The document mentions "performance tests" for the device, which is a software algorithm. However, the specific results of these standalone tests (e.g., quantitative metrics of noise reduction) are not provided. The function is described as "denoise processing and enhancement of CT images."

    7. The type of ground truth used:

    • Missing: Not specified. For a denoising algorithm, ground truth might involve noiseless or extremely low-noise reference images, or expert consensus on image quality. This is not detailed.

    8. The sample size for the training set:

    • Missing: Not mentioned. The device uses "pre-trained deep learning models," but the training set size is not provided.

    9. How the ground truth for the training set was established:

    • Missing: Not mentioned.

    Conclusion based on provided text:

    The 510(k) summary for ClariCT.AI (K212074) indicates that the device has undergone non-clinical performance testing to comply with international standards and FDA guidance. It asserts that "The test results in this 510(k), demonstrate that ClariCT.AI ... Meets the acceptance criteria and is adequate for its intended use." However, the document does not detail the specific acceptance criteria, the specific performance results against those criteria, or the methodology of any studies (e.g., sample sizes, ground truth establishment, expert involvement) that would prove these claims. The submission primarily focuses on the substantial equivalence of K212074 to its predicate (K183460) by asserting that the new AI Marketplace integration module does not alter its safety or efficacy.


    K Number: K183460
    Device Name: ClariCT.AI
    Manufacturer: ClariPi Inc.
    Date Cleared: 2019-06-13 (182 days)
    Regulation Number: 892.2050

    Intended Use

    ClariCT.AI is a software device intended for networking, communication, processing and enhancement of CT images in DICOM format regardless of the manufacturer of CT scanner or model.

    Device Description

    ClariCT.AI software is intended for denoise processing and enhancement of CT DICOM images when higher image quality and/or lower dose acquisitions are desired. ClariCT.AI can be used to reduce noise in CT images of the head, chest, heart, and abdomen, in particular in CT images acquired at a lower radiation dose. ClariCT.AI may also improve the image quality of low-dose non-diagnostic Filtered Back Projection images as well as Iterative Reconstruction images. The system receives DICOM images from CT imaging devices (modalities), performs denoise processing and enhancement, and transmits the results to a PACS workstation.
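    As a hypothetical sketch of the receive-process-forward workflow described above (receipt of DICOM images from modalities, processing, and transmission to PACS), the following uses the pynetdicom library to run a Storage SCP, apply a placeholder enhancement step, and forward the result via C-STORE. The AE title, port, and PACS host name are made up, and the enhancement step is a stub; this is not ClariCT.AI's implementation.

```python
from pynetdicom import AE, evt, AllStoragePresentationContexts
from pynetdicom.sop_class import CTImageStorage

PACS_ADDR = ("pacs.example.org", 104)   # hypothetical PACS destination

def enhance(ds):
    """Placeholder for the denoising/enhancement step; returns the dataset unchanged."""
    return ds

def handle_store(event):
    ds = event.dataset
    ds.file_meta = event.file_meta
    processed = enhance(ds)

    # Forward the processed CT image to the PACS via C-STORE.
    sender = AE(ae_title="CLARI_DEMO")
    sender.add_requested_context(CTImageStorage)
    assoc = sender.associate(*PACS_ADDR)
    if assoc.is_established:
        assoc.send_c_store(processed)
        assoc.release()
    return 0x0000  # Success status returned to the sending modality

# Listen for incoming CT images from modalities.
receiver = AE(ae_title="CLARI_DEMO")
receiver.supported_contexts = AllStoragePresentationContexts
receiver.start_server(("0.0.0.0", 11112), block=True,
                      evt_handlers=[(evt.EVT_C_STORE, handle_store)])
```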

    AI/ML Overview

    The medical device, ClariCT.AI, is a software device intended for networking, communication, processing, and enhancement of CT images in DICOM format. It aims to reduce noise in CT images, particularly those with lower radiation doses, and improve image quality in low-dose non-diagnostic Filtered Back Projection and Iterative Reconstruction images.

    Acceptance Criteria and Device Performance:

    The document primarily focuses on demonstrating the substantial equivalence of ClariCT.AI to a predicate device (Zia, K160852) and compliance with regulatory standards. While specific quantitative acceptance criteria for image quality metrics (e.g., noise reduction percentage, CNR improvement) are not explicitly detailed in a table, the document states:

    • Acceptance Criteria: The device "Meets the acceptance criteria" and "is adequate for its intended use." This implies that the internal verification and validation processes of ClariPI Inc. established specific performance benchmarks, which the device successfully met.
    • Reported Device Performance: The document generally indicates that ClariCT.AI:
      • Complies with international and FDA-recognized consensus standards (ISO 14971, NEMA-PS 3.1-3.20 DICOM).
      • Complies with FDA guidance documents for software in medical devices and interoperable medical devices.
      • Demonstrates compliance through phantom data (ACR CT Accreditation Phantom) and clinical processed data. These tests evaluate the device's ability to maintain image quality while reducing noise and enhancing images.
      • The "Performance Data" section asserts that the test results "demonstrate that ClariCT.AI...Meets the acceptance criteria and is adequate for its intended use."

    A Table of Acceptance Criteria & Reported Performance is not explicitly provided in the document in a quantitative format. The document describes meeting unspecified acceptance criteria through various tests.

    Study Information:

    1. Sample Size used for the test set and the data provenance:

      • Test Set Description: The test set included "A variety of clinical processed data" which comprised:
        • "Paired datasets of low and high doses for the same patients"
        • "IR & FBP datasets" (Iterative Reconstruction & Filtered Back Projection)
        • "Datasets for subgroup analysis of datasets with various genders, ages, body weights, races, and ethnicities"
        • "Datasets with varying scan conditions using scanners from different vendors for different organs"
      • Sample Size: The exact number of patients or images in the test set is not specified in the provided text.
      • Data Provenance: The document does not specify the country of origin. The data is described as "clinical processed data," implying it's derived from real patient scans, but whether it's retrospective or prospective is not explicitly stated. However, given the nature of "paired datasets of low and high doses for the same patients" and "IR & FBP datasets," it strongly suggests these are retrospective analyses of existing clinical data.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • This information is not provided in the document. The document mentions "clinical processed data" but does not detail how ground truth for image quality improvements or noise reduction effectiveness was established by experts.
    3. Adjudication method for the test set:

      • The document does not specify an adjudication method (e.g., 2+1, 3+1, none) for the test set.
    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance:

      • No, an MRMC comparative effectiveness study was not done. The document explicitly states: "ClariCT.AI does not require clinical studies to demonstrate substantial equivalence to the predicate device." This indicates that the regulatory pathway relied on demonstrating technical equivalence and performance through non-clinical means and potentially expert consensus on image quality, rather than a reader study.
    5. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:

      • Yes, a standalone performance assessment was done. The entire "PERFORMANCE DATA" section describes the technical testing of the ClariCT.AI algorithm on phantom and clinical data to demonstrate its ability to reduce noise and enhance images, independent of human interaction during the measurement process. The compliance with standards and internal V&V processes are all focused on the algorithm's output.
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

      • The document implies that the ground truth for the "clinical processed data" and "phantom data" likely relied on objective measurements of image-quality parameters (such as noise levels, signal-to-noise ratio, and contrast-to-noise ratio) and/or expert visual assessment of image-quality improvement, although the latter is not explicitly described as "ground truth." For the phantom, the known geometric and contrast properties serve as a form of ground truth for evaluating image fidelity after processing. For clinical data, the "paired datasets of low and high doses for the same patients" suggest that the high-dose images may serve as a reference for expected image quality without significant noise. Explicit details on how ground truth for image-quality improvement was established are not provided. (The sketch after this list illustrates generic noise and CNR measurements of this kind.)
    7. The sample size for the training set:

      • The document states that the "Noise reduction is performed with the use of pre-trained deep learning models." However, the sample size for the training set used to develop these deep learning models is not specified in the provided text.
    8. How the ground truth for the training set was established:

      • The document does not provide details on how the ground truth for the training set, used to develop the deep learning models, was established.
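    As referenced in item 6 above, objective image-quality measurements on phantom or paired clinical images are typically ROI-based. The sketch below shows generic noise and contrast-to-noise measurements; it is illustrative only and does not reflect the submission's actual test protocol.

```python
import numpy as np

def roi_stats(image_hu: np.ndarray, roi_mask: np.ndarray) -> tuple:
    """Mean and standard deviation of HU values inside a region of interest."""
    values = image_hu[roi_mask.astype(bool)]
    return float(values.mean()), float(values.std())

def contrast_to_noise_ratio(image_hu: np.ndarray,
                            target_roi: np.ndarray,
                            background_roi: np.ndarray) -> float:
    """CNR between a target insert and the background, as used in phantom image-quality checks."""
    mean_t, _ = roi_stats(image_hu, target_roi)
    mean_b, std_b = roi_stats(image_hu, background_roi)
    return abs(mean_t - mean_b) / std_b

# Noise is commonly reported as the standard deviation within a uniform background ROI,
# measured before and after denoising to quantify the reduction.
```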
