Search Results

Found 2 results

510(k) Data Aggregation

    K Number
    K221592
    Date Cleared
    2023-02-24

    (267 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    Intended Use

    AVIEW Lung Nodule CAD is a Computer-Aided Detection (CAD) software designed to assist radiologists in the detection of pulmonary nodules (3-20 mm in diameter) during review of chest CT examinations in asymptomatic populations. AVIEW Lung Nodule CAD provides adjunctive information to alert radiologists to regions of interest with suspected lung nodules that might otherwise be overlooked. It may be used as a second reader after the radiologist has completed the initial read. The algorithm was validated on non-contrast CT images, the majority of which were acquired on Siemens SOMATOM CT series scanners; use of the device is therefore recommended only with Siemens SOMATOM CT series scanners.

    Device Description

    AVIEW Lung Nodule CAD is a software product that detects nodules in the lung. The detection model was trained with a deep convolutional neural network (CNN)-based algorithm on chest CT images and automatically detects lung nodules of 3 to 20 mm. By complying with DICOM standards, the product can be linked with a Picture Archiving and Communication System (PACS) and provides a separate user interface for analyzing, identifying, storing, and transmitting quantified values related to lung nodules. CAD results are displayed after the user's first read, and the user can select or de-select the marks provided by the CAD. The device's performance was validated with scans from Siemens SOMATOM series scanners. The device is intended to be used with a cleared AVIEW platform.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the AVIEW Lung Nodule CAD, as derived from the provided document:

    Acceptance Criteria and Reported Device Performance

    | Criteria (Standalone Performance) | Acceptance Criteria | Reported Device Performance |
    |---|---|---|
    | Sensitivity (patient level) | > 0.8 | 0.907 (0.846-0.95) |
    | Sensitivity (nodule level) | > 0.8 | Not stated separately from patient level; overall sensitivity is 0.907 |
    | Specificity | > 0.6 | 0.704 (0.622-0.778) |
    | ROC AUC | > 0.8 | 0.961 (0.939-0.983) |
    | Sensitivity at FP/scan < 2 | > 0.8 | 0.889 (0.849-0.93) at FP/scan = 0.504 |
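The metrics in the table are standard case-level detection statistics. The sketch below shows how they are typically derived from confusion counts; all counts are hypothetical placeholders chosen only to land near the reported point estimates, not data from the submission.

```python
# Illustrative computation of standalone CAD metrics from case-level
# counts. All counts are hypothetical, not values from the 510(k) summary.

def sensitivity(tp, fn):
    """Fraction of nodule-positive cases correctly flagged."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of nodule-free cases with no CAD marks."""
    return tn / (tn + fp)

def false_positives_per_scan(total_fp_marks, n_scans):
    """Average number of spurious CAD marks per CT examination."""
    return total_fp_marks / n_scans

# Hypothetical split mirroring the 140 positive / 142 negative test set:
sens = sensitivity(tp=127, fn=13)        # ~0.907
spec = specificity(tn=100, fp=42)        # ~0.704
fp_rate = false_positives_per_scan(total_fp_marks=142, n_scans=282)
print(round(sens, 3), round(spec, 3), round(fp_rate, 3))
```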

    Study Details

    1. Acceptance Criteria and Reported Device Performance (as above)

    2. Sample size used for the test set and data provenance:

    • Test Set Size: 282 cases (140 cases with nodule data and 142 cases without nodule data) for the standalone study.
    • Data Provenance:
      * Geographically distinct US clinical sites.
      * All datasets were built with images from the U.S.
      * Anonymized medical data was purchased.
      * Included both incidental and screening populations.
      * For the Multi-Reader Multi-Case (MRMC) study, the dataset consisted of 151 Chest CTs (103 negative controls and 48 cases with one or more lung nodules).

    3. Number of experts used to establish the ground truth for the test set and their qualifications:

    • Number of Experts: Three (for both the MRMC study and likely for the standalone ground truth, given the consistency in expert involvement).
    • Qualifications: Dedicated chest radiologists with at least ten years of experience.

    4. Adjudication method for the test set:

    • Not explicitly stated for the "standalone study" ground truth establishment.
    • For the MRMC study, the three dedicated chest radiologists "determined the ground truth" in a blinded fashion. This implies a consensus or majority vote, but the exact method (e.g., 2+1, 3+1) is not specified. It does state "All lung nodules were segmented in 3D" which implies detailed individual expert review before ground truth finalization.

    5. Multi-Reader Multi-Case (MRMC) comparative effectiveness study:

    • Yes, an MRMC study was performed.
    • Effect size of human readers improving with AI vs. without AI assistance:
      * AUC: The point estimate difference was 0.19 (from 0.73 unassisted to 0.92 aided).
      * Sensitivity: The point estimate difference was 0.23 (from 0.68 unassisted to 0.91 aided).
      * FP/scan: The point estimate difference was 0.24 (from 0.48 unassisted to 0.28 aided), indicating a reduction in false positives.
    • Reading Time: "Reading time was decreased when AVIEW Lung Nodule CAD aided radiologists."

    6. Standalone (algorithm only without human-in-the-loop performance) study:

    • Yes, a standalone study was performed.
    • The acceptance criteria and reported performance for this study are detailed in the table above.

    7. Type of ground truth used:

    • Expert consensus by three dedicated chest radiologists with at least ten years of experience. For the standalone study, it is directly compared against "ground truth," which is established by these experts. For the MRMC study, the experts "determined the ground truth." The phrase "All lung nodules were segmented in 3D" suggests a thorough and detailed ground truth establishment.

    8. Sample size for the training set:

    • Not explicitly stated in the provided text. The document mentions the lung nodule detection model was "trained by Deep Convolution Network (CNN) based algorithm from the chest CT image," but does not provide details on the training set size.

    9. How the ground truth for the training set was established:

    • Not explicitly stated in the provided text.
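The MRMC effect sizes quoted in item 5 are point-estimate differences of reader-averaged figures of merit (aided minus unaided). A minimal sketch, using invented per-reader AUCs rather than the submission's raw reader data:

```python
# Sketch of an MRMC effect size as the difference of reader-averaged
# figures of merit. Per-reader AUCs below are hypothetical.

def effect_size(unaided, aided):
    """Aided minus unaided mean figure of merit across readers."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(aided) - mean(unaided)

unaided_auc = [0.71, 0.73, 0.75]   # per-reader AUC, CAD off (hypothetical)
aided_auc   = [0.90, 0.92, 0.94]   # per-reader AUC, CAD on  (hypothetical)
print(round(effect_size(unaided_auc, aided_auc), 2))  # 0.19
```

In a real MRMC analysis the confidence interval on this difference would come from a method such as Obuchowski-Rockette, which the summary does not detail.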

    K Number
    K214036
    Device Name
    AVIEW
    Date Cleared
    2022-12-23

    (365 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    Intended Use

    AVIEW provides CT values for pulmonary tissue from CT thoracic and cardiac datasets. The software can support the physician by providing quantitative analysis of CT images through image segmentation of sub-structures in the lung, lobe, airways, fissure completeness, cardiac, and density evaluation, together with reporting tools. AVIEW is also used to store, transfer, query, and display CT datasets on-premises and in a cloud environment, allowing users to connect from various environments such as mobile devices and Chrome browsers. It converts the sharp kernel for quantitative analysis of segmenting low-attenuation areas of the lung, and characterizes nodules in the lung in a single study or over the time course of several thoracic studies. Characterizations include type, location of the nodule, and measurements such as size (major axis, minor axis), estimated effective diameter from the volume of the nodule, volume of the nodule, Mean HU (the average value of the CT pixels inside the nodule, in HU), Minimum HU, Maximum HU, mass (calculated from the CT pixel values), volumetric measures (Solid Major: length of the longest diameter measured in 3D for the solid portion of the nodule; Solid 2nd Major: the size of the solid part, measured in sections perpendicular to the major axis of the solid portion of the nodule), VDT (volume doubling time), and Lung-RADS (a classification proposed to aid with findings). The system automatically performs the measurement, allows lung nodules and measurements to be displayed, and integrates with the FDA certified Mevis CAD (Computer-aided detection) (K043617). It also provides the Agatston score and mass score, for the whole and for each artery, by segmenting the four main arteries (right coronary artery, left main coronary, left anterior descending, and left circumflex artery). Based on the calcium score, it provides CAC risk based on age and gender. The device is indicated for adult patients only.
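The VDT (volume doubling time) mentioned in the intended use has a standard closed form under an exponential-growth model. The summary does not state AVIEW's exact implementation, so the sketch below is the textbook formula only:

```python
import math

# Volume doubling time (VDT) under the standard exponential-growth model:
#   VDT = dt * ln(2) / ln(V2 / V1)
# where V1, V2 are nodule volumes at two time points and dt is the
# interval in days. Textbook formula, not AVIEW's stated implementation.

def volume_doubling_time(v1_mm3, v2_mm3, dt_days):
    return dt_days * math.log(2) / math.log(v2_mm3 / v1_mm3)

# A nodule growing from 100 mm^3 to 200 mm^3 over 90 days has doubled
# exactly once, so its VDT is 90 days.
print(volume_doubling_time(100.0, 200.0, 90.0))  # 90.0
```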

    Device Description

    AVIEW is a software product that can be installed on a PC. It displays images retrieved from various storage devices using DICOM 3.0, the digital imaging and communications standard in medicine, and offers functions such as reading, manipulation, analysis, post-processing, saving, and sending of images via software tools. It is intended for use in quantitative analysis of CT scans. It provides features such as segmentation of lung, lobe, airway, and fissure completeness; semi-automatic nodule management; maximal plane and volumetric measures; and automatic nodule detection by integration with a 3rd-party CAD. It also provides the Brock model, which calculates a malignancy score based on numerical or Boolean inputs. Follow-up support includes automated nodule matching and automatic Lung-RADS categorization, a quality assurance tool designed to standardize lung cancer screening CT reporting and management recommendations based on nodule type, size, size change, and other reported findings. It also provides a calcium score by automatically analyzing the coronary arteries from the segmented arteries.
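The calcium scoring described above is conventionally done with Agatston's method: each calcified lesion contributes its area times a weight determined by its peak attenuation. The sketch below follows that classic definition (simplified to ignore per-slice area thresholds); the summary does not describe AVIEW's internal implementation, and the lesion values are invented.

```python
# Simplified Agatston scoring sketch. Each lesion contributes
# area (mm^2) times a density weight based on peak attenuation (HU).
# Classic Agatston weights; not AVIEW's stated implementation.

def density_weight(peak_hu):
    if peak_hu >= 400: return 4
    if peak_hu >= 300: return 3
    if peak_hu >= 200: return 2
    if peak_hu >= 130: return 1
    return 0  # below the 130 HU threshold: not counted as calcium

def agatston_score(lesions):
    """lesions: iterable of (area_mm2, peak_hu) tuples."""
    return sum(area * density_weight(hu) for area, hu in lesions)

# Hypothetical lesions in a single artery:
print(agatston_score([(4.0, 150), (2.5, 420)]))  # 4*1 + 2.5*4 = 14.0
```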

    AI/ML Overview

    The provided document does not contain specific acceptance criteria and detailed study results for the AVIEW device that would allow for the construction of the requested table and comprehensive answer. The document primarily focuses on demonstrating substantial equivalence to a predicate device and briefly mentions software verification and validation activities.

    However, I can extract the information that is present and highlight what is missing.

    Here's an analysis based on the provided text, indicating where information is present and where it is absent:


    Acceptance Criteria and Device Performance (Partial)

    The document mentions "pre-determined Pass/Fail criteria" for software verification and validation, but it does not explicitly list these criteria or the numerical results for them. It broadly states that the device "passed all of the tests."

    Table of Acceptance Criteria and Reported Device Performance

    | Feature/Metric | Acceptance Criterion | Reported Device Performance |
    |---|---|---|
    | General Software Performance | Passed all tests based on pre-determined Pass/Fail criteria | Passed all tests |
    | Unit Test | Successful functional, performance, and algorithm analysis for image processing algorithm components | Tests conducted using Google C++ Unit Test Framework |
    | System Test (Defects) | No 'Major' or 'Moderate' defects found | No 'Major' or 'Moderate' defects found (implies 'Passed' for this criterion) |
    | Kernel Conversion (LAA result reliability) | LAA result on kernel-converted sharp image should have higher reliability against the soft kernel than LAA results on a sharp kernel image without Kernel Conversion | Test conducted on 96 total images (53 US, 43 Korean); no quantitative reliability margin is given |
    | Fissure Completeness | Compared to radiologists' assessment | Evaluated using Bland-Altman plots; Kappa and ICC reported (specific numerical results not provided) |
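For the fissure completeness evaluation, the summary says Cohen's kappa was among the agreement statistics reported, without giving values. A minimal sketch of the statistic, with invented complete/incomplete ratings for illustration:

```python
# Cohen's kappa: chance-corrected agreement between two raters.
# Ratings below are invented; the summary gives no numeric results.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    pa, pb = Counter(rater_a), Counter(rater_b)
    expected = sum(pa[c] * pb[c] for c in pa) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical fissure calls by algorithm vs. radiologist:
algo = ["complete", "complete", "incomplete", "complete", "incomplete"]
rads = ["complete", "incomplete", "incomplete", "complete", "incomplete"]
print(round(cohens_kappa(algo, rads), 3))  # 0.615
```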

    Detailed Breakdown of Study Information:

    1. A table of acceptance criteria and the reported device performance:

      • Acceptance Criteria: Not explicitly stated with numerical targets. The document mentions "pre-determined Pass/Fail criteria" for software verification and validation and "Success standard of System Test is not finding 'Major', 'Moderate' defect." For kernel conversion, the criterion is stated qualitatively (higher reliability). For fissure completeness, it's about comparison to radiologists.
      • Reported Device Performance:
        • General: "passed all of the tests."
        • System Test: "Success standard... is not finding 'Major', 'Moderate' defect."
        • Kernel Conversion: "The LAA result on kernel converted sharp image should have higher reliability with the soft kernel than LAA results on sharp kernel image that is not Kernel Conversion applied." (This is more of a hypothesis or objective rather than a quantitative result here).
        • Fissure Completeness: "The performance was evaluated using Bland Altman plots to assess the fissure completeness performance compared to radiologists. Kappa and ICC were also reported." (Specific numerical values for Kappa/ICC are not provided).
    2. Sample sizes used for the test set and the data provenance:

      • Kernel Conversion: 96 total images (53 U.S. population and 43 Korean).
      • Fissure Completeness: 129 subjects from TCIA (The Cancer Imaging Archive) LIDC database.
      • Data Provenance: U.S. and Korean populations for Kernel Conversion, TCIA LIDC database for Fissure Completeness. The document does not specify if these were retrospective or prospective studies. Given they are from archives/databases, they are most likely retrospective.
    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • Not specified in the provided text. For Fissure Completeness, it states "compared to radiologists," but the number and qualifications of these radiologists are not detailed.
    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

      • Not specified in the provided text.
    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance:

      • Not specified. The document mentions "compared to radiologists" for fissure completeness, but it does not detail an MRMC study comparing human readers with and without AI assistance for measuring an effect size of improvement.
    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:

      • Yes, the performance tests described (e.g., Nodule Matching, LAA Comparative Experiment, Semi-automatic Nodule Segmentation, Fissure Completeness, CAC Performance Evaluation) appear to be standalone evaluations of the algorithm's output against a reference (ground truth or expert assessment), without requiring human interaction during the measurement process by the device itself.
    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

      • For Fissure Completeness, the ground truth appears to be expert assessment/consensus from radiologists implied by "compared to radiologists."
      • For other performance tests like "Nodule Matching," "LAA Comparative Experiment," "Semi-automatic Nodule Segmentation," "Brock Model Calculation," etc., the specific type of ground truth is not explicitly stated. It's likely derived from expert annotations or established clinical metrics but is not detailed.
    8. The sample size for the training set:

      • Not specified in the provided text. The document refers to "pre-trained deep learning models" for the predicate device, but gives no information on the training data for the current device.
    9. How the ground truth for the training set was established:

      • Not specified in the provided text.

    Summary of Missing Information:

    The document serves as an FDA 510(k) summary, aiming to demonstrate substantial equivalence to a predicate device rather than providing a detailed clinical study report. Therefore, specific quantitative performance metrics, detailed study designs (e.g., number and qualifications of readers, adjudication methods for ground truth, specifics of MRMC studies), and training set details are not included.

    Ask a Question

    Ask a specific question about this device
