Search Results

Found 2 results

510(k) Data Aggregation

    K Number
    K214036
    Device Name
    AVIEW
    Date Cleared
    2022-12-23 (365 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices: K183460, K191550

    Intended Use

    AVIEW provides CT values for pulmonary tissue from thoracic and cardiac CT datasets. The software supports the physician by providing quantitative analysis of CT images through segmentation of sub-structures in the lung (lobes, airways, fissure completeness, cardiac structures), density evaluation, and reporting tools. AVIEW is also used to store, transfer, query, and display CT datasets on-premises and in a cloud environment, allowing users to connect from various environments such as mobile devices and Chrome browsers. It converts sharp-kernel images for quantitative analysis when segmenting low-attenuation areas of the lung, and it characterizes lung nodules in a single study or over the time course of several thoracic studies. Characterizations include nodule type and location, and measurements such as size (major axis, minor axis), estimated effective diameter derived from the nodule volume, nodule volume, Mean HU (the average CT pixel value inside the nodule, in HU), Minimum HU, Maximum HU, mass (calculated from the CT pixel values), volumetric measures (Solid Major: length of the longest 3D diameter of the solid portion of the nodule; Solid 2nd Major: the size of the solid portion measured in sections perpendicular to the major axis of the solid portion), VDT (volume doubling time), and Lung-RADS (a classification proposed to aid with findings). The system performs the measurements automatically, allows lung nodules and measurements to be displayed, and integrates with the FDA-cleared Mevis CAD (computer-aided detection) software (K043617). It also provides the Agatston score and mass score for the whole heart and for each artery by segmenting the four main coronary arteries (right coronary artery, left main coronary artery, left anterior descending artery, and left circumflex artery). Based on the calcium score, it provides CAC risk by age and gender. The device is indicated for adult patients only.
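
    Two of the nodule measurements listed above have standard closed-form definitions: the estimated effective diameter is the diameter of a sphere with the same volume as the nodule, and the volume doubling time (VDT) follows the usual exponential-growth formula. A minimal sketch of both, assuming volumes in mm³ and a scan interval in days (the function names are illustrative, not AVIEW's API):

```python
import math

def effective_diameter_mm(volume_mm3: float) -> float:
    """Diameter of a sphere with the same volume as the nodule."""
    return (6.0 * volume_mm3 / math.pi) ** (1.0 / 3.0)

def volume_doubling_time_days(v1_mm3: float, v2_mm3: float, interval_days: float) -> float:
    """Volume doubling time between two studies acquired `interval_days` apart."""
    return interval_days * math.log(2.0) / math.log(v2_mm3 / v1_mm3)

# Example: a nodule growing from 100 mm^3 to 150 mm^3 over 90 days
print(round(effective_diameter_mm(100.0), 1))           # ~5.8 mm
print(round(volume_doubling_time_days(100, 150, 90)))   # ~154 days
```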

    Device Description

    AVIEW is a software product that can be installed on a PC. It displays images retrieved from various storage devices using DICOM 3.0, the digital imaging and communications standard in medicine, and offers functions such as reading, manipulating, analyzing, post-processing, saving, and sending images with software tools. It is intended for use in quantitative analysis of CT scans. It provides features such as segmentation of the lung, lobes, airways, and fissure completeness; semi-automatic nodule management; maximal-plane and volumetric measures; and automatic nodule detection through integration with third-party CAD. It also provides the Brock model, which calculates a malignancy score from numerical or Boolean inputs. Follow-up support includes automated nodule matching and automatic Lung-RADS categorization, a quality assurance tool designed to standardize lung cancer screening CT reporting and management recommendations based on nodule type, size, size change, and other reported findings. It also provides a calcium score by automatically analyzing the segmented coronary arteries.
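
    The coronary calcium scoring described here is conventionally reported as an Agatston score: each calcified lesion contributes its area multiplied by a weight derived from its peak attenuation, summed per artery and for the whole heart. A minimal sketch of that conventional per-lesion calculation (illustrative only; not AVIEW's implementation):

```python
import numpy as np

def agatston_score(slice_hu: np.ndarray, lesion_mask: np.ndarray,
                   pixel_area_mm2: float) -> float:
    """Conventional per-slice Agatston contribution for one calcified lesion.

    slice_hu:       2D array of CT values (HU) for one axial slice
    lesion_mask:    boolean mask of the calcified lesion in that slice
    pixel_area_mm2: in-plane area of one pixel
    """
    area_mm2 = lesion_mask.sum() * pixel_area_mm2
    if area_mm2 < 1.0:                 # very small lesions are ignored
        return 0.0
    peak_hu = slice_hu[lesion_mask].max()
    if peak_hu < 130:                  # calcium attenuation threshold
        return 0.0
    weight = 1 if peak_hu < 200 else 2 if peak_hu < 300 else 3 if peak_hu < 400 else 4
    return area_mm2 * weight

# The total score is the sum over all lesions and slices, typically reported
# per artery (RCA, LM, LAD, LCX) and for the whole heart.
```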

    AI/ML Overview

    The provided document does not contain specific acceptance criteria and detailed study results for the AVIEW device that would allow for the construction of the requested table and comprehensive answer. The document primarily focuses on demonstrating substantial equivalence to a predicate device and briefly mentions software verification and validation activities.

    However, I can extract the information that is present and highlight what is missing.

    Here's an analysis based on the provided text, indicating where information is present and where it is absent:


    Acceptance Criteria and Device Performance (Partial)

    The document mentions "pre-determined Pass/Fail criteria" for software verification and validation, but it does not explicitly list these criteria or the numerical results for them. It broadly states that the device "passed all of the tests."

    Table of Acceptance Criteria and Reported Device Performance

    Feature/Metric | Acceptance Criterion | Reported Device Performance
    General Software Performance | Passed all tests based on pre-determined Pass/Fail criteria | Passed all tests
    Unit Test | Successful functional, performance, and algorithm analysis for image processing algorithm components | Tests conducted using the Google C++ Unit Test Framework
    System Test (Defects) | No 'Major' or 'Moderate' defects found | No 'Major' or 'Moderate' defects found (implies 'Passed' for this criterion)
    Kernel Conversion (LAA result reliability) | LAA results on the kernel-converted sharp image should show higher reliability against the soft kernel than LAA results on the sharp-kernel image without Kernel Conversion | Test conducted on 96 images (53 U.S., 43 Korean); no quantitative measure of the reliability improvement is reported
    Fissure Completeness | Compared to radiologists' assessment | Evaluated using Bland-Altman plots; Kappa and ICC reported (specific numerical results are not provided)
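
    The fissure completeness row above cites Bland-Altman analysis alongside Kappa and ICC. For context, a minimal sketch of how Bland-Altman bias and 95% limits of agreement are typically computed between paired algorithm and radiologist scores (the data here are hypothetical, not from the submission):

```python
import numpy as np

def bland_altman(measure_a: np.ndarray, measure_b: np.ndarray):
    """Bias and 95% limits of agreement between two sets of paired measurements
    (e.g., algorithm vs. radiologist fissure-completeness scores)."""
    diff = measure_a - measure_b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired scores (percent fissure completeness)
algo  = np.array([92.0, 88.5, 75.0, 95.2, 60.1])
rater = np.array([90.0, 90.0, 73.5, 96.0, 63.0])
bias, (lo, hi) = bland_altman(algo, rater)
print(f"bias={bias:.2f}, LoA=({lo:.2f}, {hi:.2f})")
```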

    Detailed Breakdown of Study Information:

    1. A table of acceptance criteria and the reported device performance:

      • Acceptance Criteria: Not explicitly stated with numerical targets. The document mentions "pre-determined Pass/Fail criteria" for software verification and validation and "Success standard of System Test is not finding 'Major', 'Moderate' defect." For kernel conversion, the criterion is stated qualitatively (higher reliability). For fissure completeness, it's about comparison to radiologists.
      • Reported Device Performance:
        • General: "passed all of the tests."
        • System Test: "Success standard... is not finding 'Major', 'Moderate' defect."
        • Kernel Conversion: "The LAA result on kernel converted sharp image should have higher reliability with the soft kernel than LAA results on sharp kernel image that is not Kernel Conversion applied." (This is stated as an objective rather than as a quantitative result.)
        • Fissure Completeness: "The performance was evaluated using Bland Altman plots to assess the fissure completeness performance compared to radiologists. Kappa and ICC were also reported." (Specific numerical values for Kappa/ICC are not provided).
    2. Sample sizes used for the test set and the data provenance:

      • Kernel Conversion: 96 total images (53 U.S. population and 43 Korean).
      • Fissure Completeness: 129 subjects from TCIA (The Cancer Imaging Archive) LIDC database.
      • Data Provenance: U.S. and Korean populations for Kernel Conversion, TCIA LIDC database for Fissure Completeness. The document does not specify if these were retrospective or prospective studies. Given they are from archives/databases, they are most likely retrospective.
    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • Not specified in the provided text. For Fissure Completeness, it states "compared to radiologists," but the number and qualifications of these radiologists are not detailed.
    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

      • Not specified in the provided text.
    5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance:

      • Not specified. The document mentions "compared to radiologists" for fissure completeness, but it does not detail an MRMC study comparing human readers with and without AI assistance for measuring an effect size of improvement.
    6. Whether a standalone study (i.e., algorithm-only performance without a human in the loop) was done:

      • Yes, the performance tests described (e.g., Nodule Matching, LAA Comparative Experiment, Semi-automatic Nodule Segmentation, Fissure Completeness, CAC Performance Evaluation) appear to be standalone evaluations of the algorithm's output against a reference (ground truth or expert assessment), without requiring human interaction during the measurement process by the device itself.
    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

      • For Fissure Completeness, the ground truth appears to be expert assessment/consensus from radiologists implied by "compared to radiologists."
      • For other performance tests like "Nodule Matching," "LAA Comparative Experiment," "Semi-automatic Nodule Segmentation," "Brock Model Calculation," etc., the specific type of ground truth is not explicitly stated. It's likely derived from expert annotations or established clinical metrics but is not detailed.
    8. The sample size for the training set:

      • Not specified in the provided text. The document refers to "pre-trained deep learning models" for the predicate device, but gives no information on the training data for the current device.
    9. How the ground truth for the training set was established:

      • Not specified in the provided text.

    Summary of Missing Information:

    The document serves as an FDA 510(k) summary, aiming to demonstrate substantial equivalence to a predicate device rather than providing a detailed clinical study report. Therefore, specific quantitative performance metrics, detailed study designs (e.g., number and qualifications of readers, adjudication methods for ground truth, specifics of MRMC studies), and training set details are not included.


    K Number
    K203783
    Device Name
    ClariPulmo
    Manufacturer
    Date Cleared
    2022-04-06 (464 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices: K141069, K200990, K183460

    Intended Use

    ClariPulmo is a non-invasive image analysis software for use with CT images, intended to support the quantification of lung CT images. The software is designed to support the physician in the diagnosis and documentation of pulmonary tissue images (e.g., abnormalities) from CT thoracic datasets. (The software is not intended for the diagnosis of pneumonia or COVID-19.) The software provides automated segmentation of the lungs and quantification of low-attenuation and high-attenuation areas within the segmented lungs using predefined Hounsfield unit thresholds. The software displays the segmented lungs and analysis results in color. ClariPulmo provides optional denoising and kernel normalization functions for improved quantification of lung CT images in cases where CT images were acquired at low-dose conditions or with sharp reconstruction kernels.
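
    The LAA/HAA quantification described in the intended use reduces to thresholding Hounsfield units within the segmented lungs and reporting the affected fraction. A minimal sketch under that reading (the -950 HU and -600 to -250 HU values in the usage comment are common literature thresholds shown only for illustration; ClariPulmo uses its own predefined thresholds):

```python
import numpy as np

def attenuation_area_percent(ct_hu: np.ndarray, lung_mask: np.ndarray,
                             low: float | None = None,
                             high: float | None = None) -> float:
    """Percent of lung voxels whose attenuation falls in a given band.

    ct_hu:     3D CT volume in Hounsfield units
    lung_mask: boolean mask of the segmented lungs
    low/high:  HU bounds; leave one as None for a one-sided threshold
    """
    voxels = ct_hu[lung_mask]
    in_band = np.ones_like(voxels, dtype=bool)
    if low is not None:
        in_band &= voxels >= low
    if high is not None:
        in_band &= voxels <= high
    return 100.0 * in_band.sum() / voxels.size

# Illustrative thresholds: %LAA below -950 HU, %HAA between -600 and -250 HU
# laa = attenuation_area_percent(ct, lungs, high=-950.0)
# haa = attenuation_area_percent(ct, lungs, low=-600.0, high=-250.0)
```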

    Device Description

    ClariPulmo is a standalone software application for analyzing lung CT images that can be used to support the physician in quantifying lung CT images when examining pulmonary tissues. ClariPulmo provides two main functions and two optional functions. LAA Analysis provides quantitative measurement of pulmonary tissue images with low-attenuation areas (LAA); LAA are measured by counting voxels with attenuation values below user-predefined thresholds within the segmented lungs, supporting the physician in quantifying lung tissue with low-attenuation areas. HAA Analysis provides quantitative measurement of pulmonary tissue images with high-attenuation areas (HAA); HAA are measured by counting voxels with high attenuation values, using user-predefined thresholds, within the segmented lungs, supporting the physician in quantifying lung tissue with high-attenuation areas. Lungs are automatically segmented using a pre-trained deep learning model. The optional Kernel Normalization function provides an image-to-image translation from a sharp-kernel image to a smooth-kernel image for improved quantification of lung CT images; the Kernel Normalization algorithm was constructed based on the U-Net architecture. The optional Denoising function provides an image-to-image translation from a noisy low-dose image to a noise-reduced, enhanced-quality LDCT image for improved quantification of lung LDCT images; the Denoising algorithm was also constructed based on the U-Net architecture. The ClariPulmo software provides summary reports of measurement results that contain color overlay images of the lung tissues as well as tables and charts displaying analysis results.
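
    The Kernel Normalization and Denoising functions are described as U-Net-based image-to-image translation. For readers unfamiliar with the architecture, a minimal PyTorch sketch of a small encoder-decoder network with skip connections is shown below; the depth, channel widths, and names are illustrative assumptions, not ClariPulmo's actual network:

```python
import torch
import torch.nn as nn

def block(c_in, c_out):
    """Two 3x3 convolutions with ReLU, as in the original U-Net."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """Illustrative single-channel image-to-image U-Net (not ClariPulmo's model)."""
    def __init__(self, ch=(16, 32, 64)):
        super().__init__()
        self.enc1, self.enc2 = block(1, ch[0]), block(ch[0], ch[1])
        self.bottom = block(ch[1], ch[2])
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(ch[2], ch[1], 2, stride=2)
        self.dec2 = block(ch[2], ch[1])
        self.up1 = nn.ConvTranspose2d(ch[1], ch[0], 2, stride=2)
        self.dec1 = block(ch[1], ch[0])
        self.out = nn.Conv2d(ch[0], 1, 1)   # translated (smooth-kernel / denoised) image

    def forward(self, x):
        e1 = self.enc1(x)                    # full resolution
        e2 = self.enc2(self.pool(e1))        # 1/2 resolution
        b = self.bottom(self.pool(e2))       # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.out(d1)

# A sharp-kernel (or low-dose) slice in, a smooth-kernel (or denoised) estimate out:
# model = TinyUNet(); y = model(torch.randn(1, 1, 256, 256))
```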

    AI/ML Overview

    The provided text is specifically from a 510(k) summary for the ClariPulmo device. It details non-clinical performance testing rather than a large-scale clinical study with human-in-the-loop performance. Therefore, some of the requested information (e.g., MRMC study, effect size of human reader improvement) may not be present in this document because it was not a requirement for this specific type of submission.

    Here's the breakdown of the information based on the provided text:


    Acceptance Criteria and Device Performance Study for ClariPulmo

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria are implicitly defined by the "excellent agreement" reported for the different functions, measured by Pearson Correlation Coefficient (PCC) and Dice Coefficient.

    Acceptance Criteria (Implied) | Reported Device Performance
    HAA Analysis: Excellent agreement with expert segmentations. | PCC of 0.980–0.983 with expert-established segmentations of user-defined high-attenuation areas.
    LAA Analysis: Excellent agreement with expert segmentations. | PCC of 0.99 with expert-established segmentations of user-defined low-attenuation areas.
    AI-based Lung Segmentation: Excellent agreement with expert segmentations. | PCC of 0.977–0.992 and Dice coefficients of 0.98–0.99 against an expert radiologist's ImageJ-based segmentation; statistical significance across normal/LAA/HAA patients, CT scanner, reconstruction kernel, and low-dose subgroups.
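
    The agreement metrics in the table are the Pearson correlation coefficient (PCC) and the Dice coefficient. A minimal sketch of both, using hypothetical values and masks rather than the submission's data:

```python
import numpy as np

def pearson_cc(x: np.ndarray, y: np.ndarray) -> float:
    """Pearson correlation coefficient between paired measurements
    (e.g., software-derived vs. expert-derived LAA/HAA volumes)."""
    return float(np.corrcoef(x, y)[0, 1])

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice coefficient between two binary segmentation masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())

# Hypothetical example: paired volume measurements and two similar lung masks
print(round(pearson_cc(np.array([1.0, 2.0, 3.0, 4.0]),
                       np.array([1.1, 1.9, 3.2, 3.9])), 3))   # ~0.99
a = np.zeros((64, 64), dtype=bool); a[10:50, 10:50] = True
b = np.zeros((64, 64), dtype=bool); b[11:50, 10:50] = True
print(round(dice(a, b), 3))   # close to 1.0 for near-identical masks
```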

    2. Sample Sizes Used for the Test Set and Data Provenance

    • HAA Analysis Test Set: The specific number of cases for the test set is not explicitly stated. It mentions "patients with pneumonia and COVID-19."
    • LAA Analysis Test Set: The specific number of cases for the test set is not explicitly stated. It mentions "both health and diseased patients."
    • AI-based Lung Segmentation Test Set: The specific number of cases is not explicitly stated. It used "one internal and two external datasets."
    • Data Provenance: The document does not specify the country of origin for the data. The studies were retrospective as they involved testing against existing datasets with established ground truth.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications of Experts

    • The document states "expert-established segmentations" and "expert radiologist's imageJ based segmentation."
    • The number of experts is not explicitly stated, nor are their specific qualifications (e.g., years of experience), beyond being referred to as "expert."

    4. Adjudication Method for the Test Set

    • The document does not specify an explicit adjudication method (e.g., 2+1, 3+1). It implies a single "expert-established" ground truth was used for comparison, suggesting either a single expert or a pre-adjudicated consensus.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, and its effect size

    • No, an MRMC comparative effectiveness study was not done or reported in this document. The performance testing described is focused on the standalone agreement of the AI with expert-established ground truth, not how human readers improve with AI assistance. The document explicitly states: "ClariPulmo does not require clinical studies to demonstrate substantial equivalence to the predicate device."

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done

    • Yes, standalone performance testing was done. The entire "Performance Testing" section describes the algorithm's performance against expert-established ground truth ("AI-based lung segmentation demonstrated excellent agreements with that by expert radiologist's imageJ based segmentation").

    7. The Type of Ground Truth Used

    • The ground truth used was expert consensus / expert-established segmentations. Specifically, for lung segmentation, it explicitly mentions "expert radiologist's imageJ based segmentation." For HAA and LAA, it refers to "expert-established segmentations."

    8. The Sample Size for the Training Set

    • The document does not specify the sample size for the training set. It only mentions that the lung segmentation used a "pre-trained deep learning model" and the Kernel Normalization and Denoising algorithms were "constructed based on the U-Net architecture."

    9. How the Ground Truth for the Training Set Was Established

    • The document does not explicitly describe how the ground truth for the training set was established. It only refers to the test set ground truth as "expert-established segmentations." It is implied that the training data would also have been expertly annotated, given the nature of deep learning model training.