K Number
K214036
Device Name
AVIEW
Date Cleared
2022-12-23

(365 days)

Product Code
Regulation Number
892.2050
Panel
RA
Reference & Predicate Devices
Intended Use

AVIEW provides CT values for pulmonary tissue from CT thoracic and cardiac datasets. This software can be used to support the physician by providing quantitative analysis of CT images through image segmentation of sub-structures in the lung, lobe, airways, fissure completeness, cardiac structures, density evaluation, and reporting tools. AVIEW is also used to store, transfer, query, and display CT datasets on-premises and in a cloud environment, allowing users to connect from various environments such as mobile devices and Chrome browsers. The software converts sharp-kernel images for quantitative analysis when segmenting low-attenuation areas of the lung. It characterizes nodules in the lung in a single study or over the time course of several thoracic studies. Characterizations include type, location of the nodule, and measurements such as size (major axis, minor axis), estimated effective diameter derived from the volume of the nodule, the volume of the nodule, Mean HU (the average value of the CT pixels inside the nodule, in HU), Minimum HU, Maximum HU, mass (calculated from the CT pixel values), volumetric measures (Solid Major: the length of the longest diameter measured in 3D for the solid portion of the nodule; Solid 2nd Major: the size of the solid part, measured in sections perpendicular to the major axis of the solid portion of the nodule), VDT (volume doubling time), and Lung-RADS (a classification proposed to aid with findings). The system performs the measurements automatically, allows lung nodules and measurements to be displayed, and integrates with the FDA-cleared Mevis CAD (computer-aided detection) device (K043617). It also provides the Agatston score and mass score for the whole heart and for each artery by segmenting the four main coronary arteries (right coronary artery, left main coronary artery, left anterior descending artery, and left circumflex artery). Based on the calcium score, it provides CAC risk based on age and gender. The device is indicated for adult patients only.
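
Two of the volumetric measures named above follow from standard geometric relationships: the estimated effective diameter treats the nodule as a sphere of the same volume, and the volume doubling time (VDT) assumes exponential growth between two scans. The sketch below only illustrates those textbook formulas; it is not AVIEW's implementation, and the function names and units (mm³, days) are assumptions.

```python
import math

def effective_diameter_mm(volume_mm3: float) -> float:
    """Diameter of a sphere with the same volume as the nodule: d = (6V / pi) ** (1/3)."""
    return (6.0 * volume_mm3 / math.pi) ** (1.0 / 3.0)

def volume_doubling_time_days(v1_mm3: float, v2_mm3: float, interval_days: float) -> float:
    """VDT under exponential growth: VDT = dt * ln(2) / ln(V2 / V1)."""
    return interval_days * math.log(2.0) / math.log(v2_mm3 / v1_mm3)

# Illustrative example: a 500 mm^3 nodule growing to 650 mm^3 over 90 days
print(round(effective_diameter_mm(500.0), 1))                   # ~9.8 mm
print(round(volume_doubling_time_days(500.0, 650.0, 90.0), 1))  # ~237.8 days
```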

Device Description

The AVIEW is a software product that can be installed on a PC. It displays images acquired from various storage devices over an interface using DICOM 3.0, the digital imaging and communications standard in medicine. It also offers functions such as reading, manipulating, analyzing, post-processing, saving, and sending images using software tools, and is intended for quantitative analysis of CT scans. It provides features such as segmentation of the lung, lobes, and airways, fissure completeness, semi-automatic nodule management, maximal-plane and volumetric measures, and automatic nodule detection through integration with third-party CAD. It also provides the Brock model, which calculates a malignancy score from numerical or Boolean inputs. Follow-up support includes automated nodule matching and automatic categorization of the Lung-RADS score, a quality assurance tool designed to standardize lung cancer screening CT reporting and management recommendations based on nodule type, size, size change, and other reported findings. It also provides a calcium score by automatically analyzing the coronary arteries from the segmented arteries.
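
For context on the calcium scoring described above, the conventional Agatston method scores each calcified lesion slice by slice as lesion area multiplied by a density weight derived from the lesion's peak attenuation (1 for 130-199 HU, 2 for 200-299 HU, 3 for 300-399 HU, 4 for 400 HU and above), summed over all lesions. The sketch below illustrates only that conventional weighting, not AVIEW's actual implementation; the data structure and field names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class LesionSlice:
    area_mm2: float   # calcified area (pixels >= 130 HU) within one axial slice
    peak_hu: float    # maximum attenuation within that area

def agatston_weight(peak_hu: float) -> int:
    """Conventional density weight used by the Agatston method."""
    if peak_hu >= 400:
        return 4
    if peak_hu >= 300:
        return 3
    if peak_hu >= 200:
        return 2
    if peak_hu >= 130:
        return 1
    return 0  # below the 130 HU threshold, the slice does not contribute

def agatston_score(lesion_slices: list[LesionSlice]) -> float:
    """Sum of area x density weight over every scoring lesion slice."""
    return sum(s.area_mm2 * agatston_weight(s.peak_hu) for s in lesion_slices)

# Illustrative example: two lesion slices in the left anterior descending artery
print(agatston_score([LesionSlice(12.0, 250.0), LesionSlice(8.0, 430.0)]))  # 12*2 + 8*4 = 56.0
```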

AI/ML Overview

The provided document does not contain specific acceptance criteria and detailed study results for the AVIEW device that would allow for the construction of the requested table and comprehensive answer. The document primarily focuses on demonstrating substantial equivalence to a predicate device and briefly mentions software verification and validation activities.

However, I can extract the information that is present and highlight what is missing.

Here's an analysis based on the provided text, indicating where information is present and where it is absent:


Acceptance Criteria and Device Performance (Partial)

The document mentions "pre-determined Pass/Fail criteria" for software verification and validation, but it does not explicitly list these criteria or the numerical results for them. It broadly states that the device "passed all of the tests."

Table of Acceptance Criteria and Reported Device Performance

| Feature/Metric | Acceptance Criterion | Reported Device Performance |
|---|---|---|
| General Software Performance | Passed all tests based on pre-determined Pass/Fail criteria | Passed all tests |
| Unit Test | Successful functional, performance, and algorithm analysis for image processing algorithm components | Tests conducted using the Google C++ Unit Test Framework |
| System Test (Defects) | No 'Major' or 'Moderate' defects found | No 'Major' or 'Moderate' defects found (implies 'Passed' for this criterion) |
| Kernel Conversion (LAA result reliability) | LAA result on kernel-converted sharp images should show higher reliability against the soft-kernel result than LAA results on sharp-kernel images without Kernel Conversion applied | Test conducted on 96 total images (53 U.S., 43 Korean); no specific performance metric is given for how much higher the reliability was |
| Fissure Completeness | Compared to radiologists' assessment | Evaluated using Bland-Altman plots; Kappa and ICC reported (specific numerical results are not provided) |
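
The fissure completeness row refers to standard agreement statistics. As a purely illustrative sketch of how a Bland-Altman bias with 95% limits of agreement and Cohen's kappa are conventionally computed (this is not the study's analysis code, and the example values are invented):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def bland_altman(algorithm: np.ndarray, reference: np.ndarray):
    """Bias and 95% limits of agreement between paired continuous measurements."""
    diff = algorithm - reference
    bias = float(diff.mean())
    half_width = 1.96 * float(diff.std(ddof=1))
    return bias, bias - half_width, bias + half_width

# Fissure-completeness percentages: algorithm vs. radiologist (invented values)
algo = np.array([92.0, 85.5, 78.0, 99.0, 64.0])
rads = np.array([90.0, 88.0, 75.0, 100.0, 60.0])
print(bland_altman(algo, rads))

# Categorical agreement (e.g., complete vs. incomplete fissure) via Cohen's kappa
print(cohen_kappa_score([1, 1, 0, 1, 0], [1, 0, 0, 1, 0]))
```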

Detailed Breakdown of Study Information:

  1. A table of acceptance criteria and the reported device performance:

    • Acceptance Criteria: Not explicitly stated with numerical targets. The document mentions "pre-determined Pass/Fail criteria" for software verification and validation and "Success standard of System Test is not finding 'Major', 'Moderate' defect." For kernel conversion, the criterion is stated qualitatively (higher reliability). For fissure completeness, it's about comparison to radiologists.
    • Reported Device Performance:
      • General: "passed all of the tests."
      • System Test: "Success standard... is not finding 'Major', 'Moderate' defect."
      • Kernel Conversion: "The LAA result on kernel converted sharp image should have higher reliability with the soft kernel than LAA results on sharp kernel image that is not Kernel Conversion applied." (This is more of a stated objective than a quantitative result.)
      • Fissure Completeness: "The performance was evaluated using Bland Altman plots to assess the fissure completeness performance compared to radiologists. Kappa and ICC were also reported." (Specific numerical values for Kappa/ICC are not provided).
  2. Sample sizes used for the test set and the data provenance:

    • Kernel Conversion: 96 total images (53 U.S. population and 43 Korean).
    • Fissure Completeness: 129 subjects from TCIA (The Cancer Imaging Archive) LIDC database.
    • Data Provenance: U.S. and Korean populations for Kernel Conversion, TCIA LIDC database for Fissure Completeness. The document does not specify if these were retrospective or prospective studies. Given they are from archives/databases, they are most likely retrospective.
  3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Not specified in the provided text. For Fissure Completeness, it states "compared to radiologists," but the number and qualifications of these radiologists are not detailed.
  4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

    • Not specified in the provided text.
  5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI versus without AI assistance:

    • Not specified. The document mentions "compared to radiologists" for fissure completeness, but it does not detail an MRMC study comparing human readers with and without AI assistance for measuring an effect size of improvement.
  6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:

    • Yes, the performance tests described (e.g., Nodule Matching, LAA Comparative Experiment, Semi-automatic Nodule Segmentation, Fissure Completeness, CAC Performance Evaluation) appear to be standalone evaluations of the algorithm's output against a reference (ground truth or expert assessment), without requiring human interaction during the measurement process by the device itself.
  7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • For Fissure Completeness, the ground truth appears to be expert assessment/consensus from radiologists implied by "compared to radiologists."
    • For other performance tests like "Nodule Matching," "LAA Comparative Experiment," "Semi-automatic Nodule Segmentation," "Brock Model Calculation," etc., the specific type of ground truth is not explicitly stated. It's likely derived from expert annotations or established clinical metrics but is not detailed.
  8. The sample size for the training set:

    • Not specified in the provided text. The document refers to "pre-trained deep learning models" for the predicate device, but gives no information on the training data for the current device.
  9. How the ground truth for the training set was established:

    • Not specified in the provided text.

Summary of Missing Information:

The document serves as an FDA 510(k) summary, aiming to demonstrate substantial equivalence to a predicate device rather than providing a detailed clinical study report. Therefore, specific quantitative performance metrics, detailed study designs (e.g., number and qualifications of readers, adjudication methods for ground truth, specifics of MRMC studies), and training set details are not included.

§ 892.2050 Medical image management and processing system.

(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b)

Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).