Search Results
Found 2 results
510(k) Data Aggregation
ClariCT.AI (25 days)
ClariCT.AI is a software device intended for networking, communication, processing and enhancement of CT images in DICOM format regardless of the manufacturer of CT scanner or model.
ClariCT.AI software is intended for denoise processing and enhancement of CT DICOM images when higher image quality and/or lower dose acquisitions are desired. ClariCT.Al software can be used to reduce noises in CT images of the head, chest, and abdomen, in particular in CT images with a lower radiation dose. ClariCT.Al may also improve the image quality of low-dose nondiagnostic Filtered Back Projection images as well as Iterative Reconstruction images. The subject device, ClariCT.Al, added a new module (named Al Marketplace Integration module) to the original cleared device (K183460) to enable installation on the Al Marketplace system. The module integrates the Denoising Processor of the original device into the Al Marketplace system. So ClariCT.Al can be hosted through a third-party Al marketplace that integrates centrally with PACS and seamlessly integrates into the existing IT and modality infrastructure.
The ClariCT.AI device, as described in the 510(k) summary, is a software intended for denoise processing and enhancement of CT images. The submission K212074 focuses on the addition of a new module ("AI Marketplace Integration module") to the previously cleared device (K183460), enabling installation on an AI Marketplace system. This new module allows the device to be hosted through a third-party AI marketplace, integrating with PACS and existing IT infrastructure. The submission asserts that this change has no effect on the safety or efficacy of the device and does not raise any potential safety risks, and that the subject device is identical in performance to the legally marketed predicate device.
Since the submission states that "ClariCT.AI does not require clinical studies to demonstrate substantial equivalence to the predicate devices" and that the subject device is "identical in performance to the legally marketed device (K183460)", it implies that the performance data for K212074 relies on the performance data of the original K183460 submission. However, the provided document does not contain the detailed acceptance criteria or a study proving the device (K212074, or even K183460) meets such criteria, nor does it provide information regarding sample size, data provenance, ground truth establishment, or any comparative effectiveness studies.
Therefore, based solely on the provided text, I cannot provide the requested information in detail. The document primarily focuses on demonstrating substantial equivalence based on the functionality of the new integration module and adherence to general medical device standards.
Here's a breakdown of what can be extracted and what is missing:
1. Table of acceptance criteria and reported device performance:
- Acceptance Criteria (Missing): The document states, "Meets the acceptance criteria and is adequate for its intended use," but does not explicitly list these criteria.
- Reported Device Performance (Missing): No specific performance metrics (e.g., PSNR, SSIM, radiologists' scores for noise reduction, image quality, or diagnostic accuracy improvements) are reported for either the subject or predicate device. (A sketch of how such paired-image metrics are typically computed appears after this list.)
2. Sample size used for the test set and data provenance:
- Sample Size (Missing): Not mentioned in the provided text.
- Data Provenance (Missing): Not mentioned in the provided text (e.g., country of origin, retrospective/prospective). The document mentions that "substantial datasets" were used for testing, but provides no specifics.
3. Number of experts used to establish the ground truth for the test set and their qualifications:
- Missing: This information is not provided in the document.
4. Adjudication method for the test set:
- Missing: Not mentioned in the provided text.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and its effect size:
- No: The document explicitly states: "ClariCT.AI does not require clinical studies to demonstrate substantial equivalence to the predicate devices." This implies that no MRMC comparative effectiveness study was conducted for this submission (K212074). Given the assertion of identical performance to the predicate, it also suggests such a study wasn't deemed necessary for K183460's clearance either, at least not for the purpose of demonstrating substantial equivalence.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done:
- Implied Yes, but details missing: The document mentions "performance tests" for the device, which is a software algorithm. However, the specific results of these standalone tests (e.g., quantitative metrics of noise reduction) are not provided. The function is described as "denoise processing and enhancement of CT images."
7. The type of ground truth used:
- Missing: Not specified. For a denoising algorithm, ground truth might involve noiseless or extremely low-noise reference images, or expert consensus on image quality. This is not detailed.
8. The sample size for the training set:
- Missing: Not mentioned. The device uses "pre-trained deep learning models," but the training set size is not provided.
9. How the ground truth for the training set was established:
- Missing: Not mentioned.
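For reference, the following is a minimal sketch of how paired-image metrics such as PSNR and SSIM (cited in item 1 above as examples of what is missing) are typically computed for a denoising algorithm, using a low-dose slice, its denoised output, and the paired high-dose acquisition as the reference. The file names, the to_hu helper, and the Gaussian-filter stand-in for the deep-learning denoiser are illustrative assumptions only; none of this is taken from the 510(k) submission.

```python
# Hypothetical sketch: PSNR/SSIM between a denoised low-dose CT slice and its
# paired high-dose reference. File names and the Gaussian-filter stand-in for
# the deep-learning denoiser are illustrative only.
import numpy as np
import pydicom
from scipy.ndimage import gaussian_filter
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def to_hu(ds):
    """Convert stored pixel values to Hounsfield units via the DICOM rescale tags."""
    return ds.pixel_array.astype(np.float32) * float(ds.RescaleSlope) + float(ds.RescaleIntercept)

low_dose = to_hu(pydicom.dcmread("low_dose_slice.dcm"))    # noisy input
high_dose = to_hu(pydicom.dcmread("high_dose_slice.dcm"))  # paired reference
denoised = gaussian_filter(low_dose, sigma=1.0)            # stand-in for the actual denoiser

data_range = float(high_dose.max() - high_dose.min())
psnr = peak_signal_noise_ratio(high_dose, denoised, data_range=data_range)
ssim = structural_similarity(high_dose, denoised, data_range=data_range)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.3f}")
```

Metrics of this kind, if they were reported against explicit thresholds, would constitute the acceptance criteria the document alludes to but does not disclose.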
Conclusion based on provided text:
The 510(k) summary for ClariCT.AI (K212074) indicates that the device has undergone non-clinical performance testing to comply with international standards and FDA guidance. It asserts that "The test results in this 510(k), demonstrate that ClariCT.AI ... Meets the acceptance criteria and is adequate for its intended use." However, the document does not detail the specific acceptance criteria, the specific performance results against those criteria, or the methodology of any studies (e.g., sample sizes, ground truth establishment, expert involvement) that would prove these claims. The submission primarily focuses on the substantial equivalence of K212074 to its predicate (K183460) by asserting that the new AI Marketplace integration module does not alter its safety or efficacy.
ClariCT.AI (182 days)
ClariCT.AI is a software device intended for networking, communication, processing and enhancement of CT images in DICOM format regardless of the manufacturer of CT scanner or model.
ClariCT.AI software is intended for denoise processing and enhancement of CT DICOM images when higher image quality and/or lower dose acquisitions are desired. ClariCT.AI software can be used to reduce noise in CT images of the head, chest, heart, and abdomen, in particular in CT images acquired with a lower radiation dose. ClariCT.AI may also improve the image quality of low-dose nondiagnostic Filtered Back Projection images as well as Iterative Reconstruction images. The system enables the receipt of DICOM images from CT imaging devices (modalities), their denoise processing and enhancement, and their transmission to a PACS workstation.
The medical device, ClariCT.AI, is a software device intended for networking, communication, processing, and enhancement of CT images in DICOM format. It aims to reduce noise in CT images, particularly those with lower radiation doses, and improve image quality in low-dose non-diagnostic Filtered Back Projection and Iterative Reconstruction images.
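The indications above describe a receive-process-forward workflow: DICOM images are received from CT modalities, denoised, and transmitted to a PACS workstation. As an illustration only, the sketch below shows what such a pipeline can look like using pynetdicom. The AE titles, host/port values, and the Gaussian-filter stand-in for the denoiser are assumptions; it ignores compressed transfer syntaxes and rescale handling, and it does not describe ClariPI's actual implementation.

```python
# Hypothetical sketch of a DICOM receive -> denoise -> forward-to-PACS pipeline.
# AE titles, addresses, and the denoising stand-in are illustrative only.
import numpy as np
from scipy.ndimage import gaussian_filter
from pynetdicom import AE, evt, StoragePresentationContexts

PACS_HOST, PACS_PORT, PACS_AE_TITLE = "pacs.example.org", 104, "PACS"

def denoise(pixels: np.ndarray) -> np.ndarray:
    # Stand-in for the deep-learning denoiser described in the submission.
    return gaussian_filter(pixels.astype(np.float32), sigma=1.0)

def handle_store(event):
    """Handle an incoming C-STORE from a CT modality: denoise, then forward."""
    ds = event.dataset
    ds.file_meta = event.file_meta
    processed = denoise(ds.pixel_array).astype(ds.pixel_array.dtype)
    ds.PixelData = processed.tobytes()

    # Forward the processed image to the PACS via C-STORE.
    sender = AE(ae_title="DENOISER")
    sender.requested_contexts = StoragePresentationContexts
    assoc = sender.associate(PACS_HOST, PACS_PORT, ae_title=PACS_AE_TITLE)
    if assoc.is_established:
        assoc.send_c_store(ds)
        assoc.release()
    return 0x0000  # success status returned to the sending modality

# Listen for C-STORE requests from CT modalities on port 11112.
receiver = AE(ae_title="DENOISER")
receiver.supported_contexts = StoragePresentationContexts
receiver.start_server(("0.0.0.0", 11112), evt_handlers=[(evt.EVT_C_STORE, handle_store)])
```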
Acceptance Criteria and Device Performance:
The document primarily focuses on demonstrating the substantial equivalence of ClariCT.AI to a predicate device (Zia, K160852) and compliance with regulatory standards. While specific quantitative acceptance criteria for image quality metrics (e.g., noise reduction percentage, CNR improvement) are not explicitly detailed in a table, the document states:
- Acceptance Criteria: The device "Meets the acceptance criteria" and "is adequate for its intended use." This implies that the internal verification and validation processes of ClariPI Inc. established specific performance benchmarks, which the device successfully met.
- Reported Device Performance: The document generally indicates that ClariCT.AI:
- Complies with international and FDA-recognized consensus standards (ISO 14971; NEMA PS3.1-PS3.20, the DICOM standard).
- Complies with FDA guidance documents for software in medical devices and interoperable medical devices.
- Demonstrates compliance through phantom data (ACR CT Accreditation Phantom) and clinical processed data. These tests evaluate the device's ability to maintain image quality while reducing noise and enhancing images.
- The "Performance Data" section asserts that the test results "demonstrate that ClariCT.Al...Meets the acceptance criteria and is adequate for its intended use."
A Table of Acceptance Criteria & Reported Performance is not explicitly provided in the document in a quantitative format. The document describes meeting unspecified acceptance criteria through various tests.
Study Information:
1. Sample size used for the test set and the data provenance:
- Test Set Description: The test set included "A variety of clinical processed data" which comprised:
  - "Paired datasets of low and high doses for the same patients"
  - "IR & FBP datasets" (Iterative Reconstruction & Filtered Back Projection)
  - "Datasets for subgroup analysis of datasets with various genders, ages, body weights, races, and ethnicities"
  - "Datasets with varying scan conditions using scanners from different vendors for different organs"
- Sample Size: The exact number of patients or images in the test set is not specified in the provided text.
- Data Provenance: The document does not specify the country of origin. The data is described as "clinical processed data," implying it is derived from real patient scans, but whether it is retrospective or prospective is not explicitly stated. However, the nature of the "paired datasets of low and high doses for the same patients" and the "IR & FBP datasets" strongly suggests these are retrospective analyses of existing clinical data.
2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- This information is not provided in the document. The document mentions "clinical processed data" but does not detail how ground truth for image quality improvements or noise reduction effectiveness was established by experts.
3. Adjudication method for the test set:
- The document does not specify an adjudication method (e.g., 2+1, 3+1, none) for the test set.
4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- No, an MRMC comparative effectiveness study was not done. The document explicitly states: "ClariCT.AI does not require clinical studies to demonstrate substantial equivalence to the predicate device." This indicates that the regulatory pathway relied on demonstrating technical equivalence and performance through non-clinical means and potentially expert consensus on image quality, rather than a reader study.
5. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done:
- Yes, a standalone performance assessment was done. The entire "PERFORMANCE DATA" section describes the technical testing of the ClariCT.AI algorithm on phantom and clinical data to demonstrate its ability to reduce noise and enhance images, independent of human interaction during the measurement process. The compliance with standards and internal V&V processes are all focused on the algorithm's output.
6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- The document implies that ground truth for the "clinical processed data" and "phantom data" likely relied on objective measurements of image quality (such as noise levels, signal-to-noise ratio, and contrast-to-noise ratio) and/or expert visual assessment of image quality improvement, although the latter is not explicitly described as "ground truth." For the phantom, the known geometric and contrast properties serve as a form of ground truth for evaluating image fidelity after processing; for clinical data, the "paired datasets of low and high doses for the same patients" suggest that the high-dose images may serve as a reference for expected image quality without significant noise. Explicit details on how ground truth for image quality improvement was established are not provided. (A sketch of the kind of ROI-based noise and CNR measurements such an evaluation might use appears after this list.)
7. The sample size for the training set:
- The document states that the "Noise reduction is performed with the use of pre-trained deep learning models." However, the sample size for the training set used to develop these deep learning models is not specified in the provided text.
8. How the ground truth for the training set was established:
- The document does not provide details on how the ground truth for the training set, used to develop the deep learning models, was established.
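As context for the discussion above of "The type of ground truth used," the following minimal sketch shows the kind of objective, ROI-based measurements (noise as the standard deviation in a uniform region; contrast-to-noise ratio between an insert and the background) that phantom-based or paired-dose denoising evaluations commonly rely on. The file name and ROI coordinates are illustrative assumptions and are not taken from the submission.

```python
# Hypothetical sketch of ROI-based image-quality measurements of the kind a
# phantom or paired-dose denoising evaluation might report. The file name and
# ROI coordinates are illustrative only.
import numpy as np
import pydicom

ds = pydicom.dcmread("phantom_slice.dcm")
hu = ds.pixel_array.astype(np.float32) * float(ds.RescaleSlope) + float(ds.RescaleIntercept)

def roi_stats(image, row, col, half=10):
    """Mean and standard deviation inside a (2*half) x (2*half) ROI centred at (row, col)."""
    patch = image[row - half:row + half, col - half:col + half]
    return float(patch.mean()), float(patch.std())

# Example ROIs: one in a uniform background region, one in a contrast insert.
bg_mean, bg_std = roi_stats(hu, row=256, col=150)
insert_mean, _ = roi_stats(hu, row=256, col=300)

noise = bg_std                              # image noise in HU
cnr = abs(insert_mean - bg_mean) / bg_std   # contrast-to-noise ratio
print(f"Noise: {noise:.1f} HU, CNR: {cnr:.2f}")
```

Comparing such measurements before and after denoising, on the ACR CT Accreditation Phantom or on paired low-/high-dose series, is one plausible way the unspecified acceptance criteria could have been evaluated; the document itself does not confirm this.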