510(k) Data Aggregation
(464 days)
ClariPulmo
ClariPulmo is a non-invasive image analysis software for use with CT images, intended to support the quantification of lung CT images. The software is designed to support the physician in the diagnosis and documentation of pulmonary tissue images (e.g., abnormalities) from thoracic CT datasets. (The software is not intended for the diagnosis of pneumonia or COVID-19.) The software provides automated segmentation of the lungs and quantification of low-attenuation and high-attenuation areas within the segmented lungs using predefined Hounsfield unit (HU) thresholds, and displays the segmented lungs and analysis results as color overlays. ClariPulmo also provides optional denoising and kernel normalization functions for improved quantification when CT images were acquired at low-dose conditions or reconstructed with sharp kernels.
ClariPulmo is a standalone software application that analyzes lung CT images to support the physician in quantifying pulmonary tissues. The software provides two main functions and two optional functions:

- LAA Analysis provides quantitative measurement of low attenuation areas (LAA) in pulmonary tissue. LAA is measured by counting voxels whose attenuation values fall below a user-predefined threshold within the segmented lungs. This feature supports the physician in quantifying lung tissue with low attenuation areas.
- HAA Analysis provides quantitative measurement of high attenuation areas (HAA) in pulmonary tissue. HAA is measured by counting voxels whose attenuation values exceed a user-predefined threshold within the segmented lungs. This feature supports the physician in quantifying lung tissue with high attenuation areas.
- Lungs are automatically segmented using a pre-trained deep learning model.
- The optional Kernel Normalization function performs image-to-image translation from a sharp-kernel image to a smooth-kernel image for improved quantification of lung CT images. The algorithm was constructed based on the U-Net architecture.
- The optional Denoising function performs image-to-image translation from a noisy low-dose CT (LDCT) image to a noise-reduced, enhanced-quality image for improved quantification of lung LDCT images. This algorithm was also constructed based on the U-Net architecture.

The ClariPulmo software provides summary reports of the measurement results that contain color overlay images of the lung tissues as well as tables and charts displaying the analysis results.
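The LAA/HAA quantification described above amounts to thresholded voxel counting within the lung mask. Below is a minimal NumPy sketch of that idea; the function name and the default thresholds of -950 HU (LAA) and -250 HU (HAA) are illustrative assumptions only, since the submission states the thresholds are user-predefined:

```python
import numpy as np

def attenuation_area_percentages(hu_volume, lung_mask,
                                 laa_threshold=-950.0, haa_threshold=-250.0):
    """Return (LAA%, HAA%) within the segmented lungs.

    hu_volume  -- array of CT attenuation values in Hounsfield units
    lung_mask  -- same-shaped binary mask from lung segmentation
    Thresholds are illustrative defaults, not values from the 510(k).
    """
    lung_voxels = hu_volume[lung_mask.astype(bool)]
    total = lung_voxels.size
    # LAA: voxels at or below the low-attenuation threshold
    laa_pct = 100.0 * np.count_nonzero(lung_voxels <= laa_threshold) / total
    # HAA: voxels at or above the high-attenuation threshold
    haa_pct = 100.0 * np.count_nonzero(lung_voxels >= haa_threshold) / total
    return laa_pct, haa_pct
```

The percentages are computed relative to the segmented lung volume only, which is why accurate lung segmentation is a prerequisite for both analyses.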
The provided text is specifically from a 510(k) summary for the ClariPulmo device. It details non-clinical performance testing rather than a large-scale clinical study with human-in-the-loop performance. Therefore, some of the requested information (e.g., MRMC study, effect size of human reader improvement) may not be present in this document because it was not a requirement for this specific type of submission.
Here's the breakdown of the information based on the provided text:
Acceptance Criteria and Device Performance Study for ClariPulmo
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are implicitly defined by the "excellent agreement" reported for the different functions, measured by Pearson Correlation Coefficient (PCC) and Dice Coefficient.
| Acceptance Criteria (Implied) | Reported Device Performance |
|---|---|
| HAA Analysis: Excellent agreement with expert segmentations. | PCC of 0.980–0.983 against expert-established segmentations of user-defined high attenuation areas. |
| LAA Analysis: Excellent agreement with expert segmentations. | PCC of 0.99 against expert-established segmentations of user-defined low attenuation areas. |
| AI-based Lung Segmentation: Excellent agreement with expert segmentations. | PCC of 0.977–0.992 and Dice coefficients of 0.98–0.99 against an expert radiologist's ImageJ-based segmentation, with statistically significant agreement across normal/LAA/HAA patient, CT scanner, reconstruction kernel, and low-dose subgroups. |
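The two agreement metrics in the table can be stated concretely. This is a minimal NumPy sketch of the standard definitions, not the submitter's actual evaluation code:

```python
import numpy as np

def pearson_cc(x, y):
    """Pearson correlation coefficient between two measurement series
    (e.g., software-derived vs. expert-derived volumes)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return np.corrcoef(x, y)[0, 1]

def dice_coefficient(mask_a, mask_b):
    """Dice coefficient between two binary segmentation masks:
    2*|A ∩ B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())
```

PCC measures agreement between paired quantitative measurements (here, software vs. expert), while Dice measures spatial overlap of the segmentation masks themselves; a Dice of 0.98–0.99 indicates near-complete voxel-level overlap.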
2. Sample Sizes Used for the Test Set and Data Provenance
- HAA Analysis Test Set: The specific number of cases for the test set is not explicitly stated. It mentions "patients with pneumonia and COVID-19."
- LAA Analysis Test Set: The specific number of cases for the test set is not explicitly stated. It mentions "both healthy and diseased patients."
- AI-based Lung Segmentation Test Set: The specific number of cases is not explicitly stated. It used "one internal and two external datasets."
- Data Provenance: The document does not specify the country of origin for the data. The studies were retrospective as they involved testing against existing datasets with established ground truth.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications of Experts
- The document states "expert-established segmentations" and "expert radiologist's ImageJ based segmentation."
- The number of experts is not explicitly stated, nor are their specific qualifications (e.g., years of experience), beyond being referred to as "expert."
4. Adjudication Method for the Test Set
- The document does not specify an explicit adjudication method (e.g., 2+1, 3+1). It implies a single "expert-established" ground truth was used for comparison, suggesting either a single expert or a pre-adjudicated consensus.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, and its effect size
- No, an MRMC comparative effectiveness study was not done or reported in this document. The performance testing described is focused on the standalone agreement of the AI with expert-established ground truth, not how human readers improve with AI assistance. The document explicitly states: "ClariPulmo does not require clinical studies to demonstrate substantial equivalence to the predicate device."
6. If Standalone Performance Testing (i.e., Algorithm Only, Without Human-in-the-Loop) Was Done
- Yes, standalone performance testing was done. The entire "Performance Testing" section describes the algorithm's performance against expert-established ground truth ("AI-based lung segmentation demonstrated excellent agreements with that by expert radiologist's ImageJ based segmentation").
7. The Type of Ground Truth Used
- The ground truth used was expert consensus / expert-established segmentations. Specifically, for lung segmentation, the document explicitly mentions "expert radiologist's ImageJ based segmentation." For HAA and LAA, it refers to "expert-established segmentations."
8. The Sample Size for the Training Set
- The document does not specify the sample size for the training set. It only mentions that the lung segmentation used a "pre-trained deep learning model" and the Kernel Normalization and Denoising algorithms were "constructed based on the U-Net architecture."
9. How the Ground Truth for the Training Set Was Established
- The document does not explicitly describe how the ground truth for the training set was established. It only refers to the test set ground truth as "expert-established segmentations." It is implied that the training data would also have been expertly annotated, given the nature of deep learning model training.