AVIEW Lung Nodule CAD (267 days)
AVIEW Lung Nodule CAD is a Computer-Aided Detection (CAD) software designed to assist radiologists in the detection of pulmonary nodules (with diameter 3-20 mm) during the review of CT examinations of the chest for asymptomatic populations. AVIEW Lung Nodule CAD provides adjunctive information to alert radiologists to regions of interest with suspected lung nodules that may otherwise be overlooked. AVIEW Lung Nodule CAD may be used as a second reader after the radiologist has completed their initial read. The algorithm has been validated using non-contrast CT images, the majority of which were acquired on Siemens SOMATOM CT series scanners; therefore, limiting device use to Siemens SOMATOM CT series scanners is recommended.
The AVIEW Lung Nodule CAD is a software product that detects nodules in the lung. The lung nodule detection model is a deep convolutional neural network (CNN) based algorithm trained on chest CT images, and it automatically detects lung nodules of 3 to 20 mm in chest CT images. By complying with DICOM standards, the product can be linked with a Picture Archiving and Communication System (PACS) and provides a separate user interface for analyzing, identifying, storing, and transmitting quantified values related to lung nodules. The CAD results are displayed after the user's first read, and the user can select or de-select the marks provided by the CAD. The device's performance was validated on scans from Siemens SOMATOM series scanners. The device is intended to be used with a cleared AVIEW platform.
Here's a breakdown of the acceptance criteria and study details for the AVIEW Lung Nodule CAD, as derived from the provided document:
Acceptance Criteria and Reported Device Performance
Criteria (Standalone Performance) | Acceptance Criteria | Reported Device Performance |
---|---|---|
Sensitivity (patient level) | > 0.8 | 0.907 (0.846-0.95) |
Sensitivity (nodule level) | > 0.8 | Not explicitly stated as separate from patient level, but overall sensitivity is 0.907. |
Specificity | > 0.6 | 0.704 (0.622-0.778) |
ROC AUC | > 0.8 | 0.961 (0.939-0.983) |
Sensitivity at FP/scan | > 0.8 | 0.889 (0.849-0.93) at FP/scan=0.504 |
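For context, the sensitivity, specificity, and ROC AUC values above are standard case-level classification metrics, and the last row is an FROC-style operating point (sensitivity at a given false-positive rate per scan). The sketch below is a minimal illustration, not the submission's validation code, of how such patient-level metrics are conventionally computed; all names and data are hypothetical, and the confidence intervals reported in the table would come from resampling or exact methods not shown here.

```python
# Minimal sketch: patient-level sensitivity, specificity, and ROC AUC from per-case
# ground-truth labels and CAD confidence scores. Illustrative only.
import numpy as np
from sklearn.metrics import roc_auc_score

def patient_level_metrics(y_true, y_score, threshold=0.5):
    """y_true: 1 if the case contains a true nodule, else 0.
    y_score: per-case CAD confidence (e.g., the highest candidate score)."""
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    auc = roc_auc_score(y_true, y_score)
    return sensitivity, specificity, auc

# Toy example: 6 nodule-positive and 6 nodule-negative cases.
labels = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.7, 0.95, 0.4, 0.85, 0.2, 0.1, 0.3, 0.55, 0.05, 0.15]
print(patient_level_metrics(labels, scores))
```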
Study Details
1. Acceptance Criteria and Reported Device Performance (as above)
2. Sample size used for the test set and data provenance:
- Test Set Size: 282 cases (140 cases with nodule data and 142 cases without nodule data) for the standalone study.
- Data Provenance:
* Geographically distinct US clinical sites.
* All datasets were built with images from the U.S.
* Anonymized medical data was purchased.
* Included both incidental and screening populations.
* For the Multi-Reader Multi-Case (MRMC) study, the dataset consisted of 151 Chest CTs (103 negative controls and 48 cases with one or more lung nodules).
3. Number of experts used to establish the ground truth for the test set and their qualifications:
- Number of Experts: Three (for both the MRMC study and likely for the standalone ground truth, given the consistency in expert involvement).
- Qualifications: Dedicated chest radiologists with at least ten years of experience.
4. Adjudication method for the test set:
- Not explicitly stated for the "standalone study" ground truth establishment.
- For the MRMC study, the three dedicated chest radiologists "determined the ground truth" in a blinded fashion. This implies a consensus or majority vote, but the exact method (e.g., 2+1, 3+1) is not specified. It does state "All lung nodules were segmented in 3D" which implies detailed individual expert review before ground truth finalization.
5. Multi-Reader Multi-Case (MRMC) comparative effectiveness study:
- Yes, an MRMC study was performed.
- Effect size of human readers improving with AI vs. without AI assistance:
* AUC: The point estimate difference was 0.19 (from 0.73 unassisted to 0.92 aided).
* Sensitivity: The point estimate difference was 0.23 (from 0.68 unassisted to 0.91 aided).
* FP/scan: The point estimate difference was 0.24 (from 0.48 unassisted to 0.28 aided), indicating a reduction in false positives.
- Reading Time: "Reading time was decreased when AVIEW Lung Nodule CAD aided radiologists."
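The effect sizes above are differences between reader-averaged aided and unaided point estimates. As a rough, hypothetical illustration of that arithmetic only (the study's formal analysis would use an MRMC model such as Dorfman-Berbaum-Metz or Obuchowski-Rockette to account for reader and case variability), a reader-averaged ΔAUC could be computed as follows; the function name and data are invented.

```python
# Hypothetical sketch of the reader-averaged AUC difference in an MRMC comparison.
# Shows only the point-estimate arithmetic, not the variance model.
import numpy as np
from sklearn.metrics import roc_auc_score

def reader_averaged_delta_auc(y_true, unaided_scores, aided_scores):
    """unaided_scores / aided_scores: one score list per reader, same case order as y_true."""
    unaided = [roc_auc_score(y_true, s) for s in unaided_scores]
    aided = [roc_auc_score(y_true, s) for s in aided_scores]
    return np.mean(aided) - np.mean(unaided)

y = [1, 1, 1, 0, 0, 0]
unaided = [[0.7, 0.4, 0.6, 0.5, 0.3, 0.2], [0.6, 0.5, 0.4, 0.6, 0.2, 0.3]]
aided   = [[0.9, 0.7, 0.8, 0.4, 0.2, 0.1], [0.8, 0.7, 0.7, 0.3, 0.2, 0.2]]
print(reader_averaged_delta_auc(y, unaided, aided))
```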
6. Standalone (algorithm only without human-in-the-loop performance) study:
- Yes, a standalone study was performed.
- The acceptance criteria and reported performance for this study are detailed in the table above.
7. Type of ground truth used:
- Expert consensus by three dedicated chest radiologists with at least ten years of experience. For the standalone study, it is directly compared against "ground truth," which is established by these experts. For the MRMC study, the experts "determined the ground truth." The phrase "All lung nodules were segmented in 3D" suggests a thorough and detailed ground truth establishment.
8. Sample size for the training set:
- Not explicitly stated in the provided text. The document mentions the lung nodule detection model was "trained by Deep Convolution Network (CNN) based algorithm from the chest CT image," but does not provide details on the training set size.
9. How the ground truth for the training set was established:
- Not explicitly stated in the provided text.
AVIEW (365 days)
AVIEW provides CT values for pulmonary tissue from CT thoracic and cardiac datasets. This software can be used to support the physician by providing quantitative analysis of CT images through image segmentation of sub-structures in the lung: lobe, airways, fissure completeness, cardiac, density evaluation, and reporting tools. AVIEW is also used to store, transfer, inquire, and display CT datasets on-premises and in a cloud environment, allowing users to connect from various environments such as mobile devices and Chrome browsers. It converts the sharp kernel for quantitative analysis when segmenting low attenuation areas of the lung. It characterizes nodules in the lung in a single study or over the time course of several thoracic studies. Characterizations include type, location of the nodule, and measurements such as size (major axis, minor axis), estimated effective diameter from the volume of the nodule, the volume of the nodule, Mean HU (the average value of the CT pixels inside the nodule in HU), Minimum HU, Maximum HU, mass (calculated from the CT pixel values), volumetric measures (Solid Major: length of the longest diameter measured in 3D for the solid portion of the nodule; Solid 2nd Major: the size of the solid part, measured in sections perpendicular to the major axis of the solid portion of the nodule), VDT (volume doubling time), and Lung-RADS (a classification proposed to aid with findings). The system automatically performs the measurements, allowing lung nodules and measurements to be displayed, and integrates with the FDA-cleared MeVis CAD (Computer-Aided Detection) (K043617). It also provides the Agatston score and mass score for the whole and for each artery by segmenting four main arteries (right coronary artery, left main coronary, left anterior descending, and left circumflex artery). Based on the calcium score it provides CAC risk based on age and gender. The device is indicated for adult patients only.
The AVIEW is a software product that can be installed on a PC. It shows images taken with the interface from various storage devices using DICOM 3.0, the digital image and communication standard in medicine. It also offers functions such as reading, manipulating, analyzing, post-processing, saving, and sending images by using software tools, and is intended for use in the quantitative analysis of CT scans. It provides features such as segmentation of lung, lobe, airway, and fissure completeness, semi-automatic nodule management, maximal plane measures and volumetric measures, and automatic nodule detection by integration with a 3rd party CAD. It also provides the Brock model, which calculates the malignancy score based on numerical or Boolean inputs. Follow-up support includes automated nodule matching and automatic categorization of the Lung-RADS score, a quality assurance tool designed to standardize lung cancer screening CT reporting and management recommendations based on type, size, size change, and other findings that are reported. It also provides a calcium score by automatically analyzing the segmented coronary arteries.
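Several of the quantitative outputs named above follow standard formulas. The sketch below shows the usual definitions of effective diameter, volume doubling time (VDT), nodule mass, and the Agatston score; it is an illustrative reconstruction under common conventions, not AVIEW's actual implementation. In particular, the mass convention (water-equivalent density derived from mean HU) and the example values are assumptions.

```python
# Illustrative formulas for nodule and calcium-score quantities mentioned above.
# Conventions follow the general volumetry and Agatston literature, not necessarily
# this device's exact calibration.
import math

def effective_diameter_mm(volume_mm3):
    # Diameter of the sphere with the same volume: V = (pi/6) * d^3  =>  d = (6V/pi)^(1/3)
    return (6.0 * volume_mm3 / math.pi) ** (1.0 / 3.0)

def volume_doubling_time_days(v1_mm3, v2_mm3, interval_days):
    # Classical Schwartz formula: VDT = t * ln(2) / ln(V2 / V1)
    return interval_days * math.log(2.0) / math.log(v2_mm3 / v1_mm3)

def nodule_mass_grams(volume_mm3, mean_hu):
    # Common approximation (assumption): density (g/cm^3) ~= (mean HU + 1000) / 1000.
    volume_cm3 = volume_mm3 / 1000.0
    return volume_cm3 * (mean_hu + 1000.0) / 1000.0

def agatston_weight(peak_hu):
    # Agatston density factor based on a lesion's peak attenuation.
    if peak_hu >= 400: return 4
    if peak_hu >= 300: return 3
    if peak_hu >= 200: return 2
    if peak_hu >= 130: return 1
    return 0

def agatston_score(lesions):
    # lesions: iterable of (area_mm2, peak_hu) per lesion per slice (>=130 HU, >=1 mm^2).
    return sum(area * agatston_weight(peak) for area, peak in lesions if area >= 1.0)

# Example: a 500 mm^3 nodule growing to 700 mm^3 in 90 days, mean attenuation -300 HU,
# plus two calcified coronary lesions.
print(effective_diameter_mm(500.0))                  # ~9.8 mm
print(volume_doubling_time_days(500.0, 700.0, 90))   # ~185 days
print(nodule_mass_grams(500.0, -300.0))              # ~0.35 g
print(agatston_score([(12.0, 260), (4.5, 410)]))     # 12*2 + 4.5*4 = 42
```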
The provided document does not contain specific acceptance criteria and detailed study results for the AVIEW device that would allow for the construction of the requested table and comprehensive answer. The document primarily focuses on demonstrating substantial equivalence to a predicate device and briefly mentions software verification and validation activities.
However, I can extract the information that is present and highlight what is missing.
Here's an analysis based on the provided text, indicating where information is present and where it is absent:
Acceptance Criteria and Device Performance (Partial)
The document mentions "pre-determined Pass/Fail criteria" for software verification and validation, but it does not explicitly list these criteria or the numerical results for them. It broadly states that the device "passed all of the tests."
Table of Acceptance Criteria and Reported Device Performance
Feature/Metric | Acceptance Criterion | Reported Device Performance |
---|---|---|
General Software Performance | Passed all tests based on pre-determined Pass/Fail criteria | Passed all tests |
Unit Test | Successful functional, performance, and algorithm analysis for image processing algorithm components | Tests conducted using Google C++ Unit Test Framework |
System Test (Defects) | No 'Major' or 'Moderate' defects found | No 'Major' or 'Moderate' defects found (implies 'Passed' for this criterion) |
Kernel Conversion (LAA result reliability) | LAA result on kernel-converted sharp image should have higher reliability with soft kernel than LAA results on sharp kernel image not applying Kernel Conversion. | Test conducted on 96 total images (53 US, 43 Korean). (The document indicates the test was conducted, but does not report how much higher the reliability was.) |
Fissure Completeness | Compared to radiologists' assessment | Evaluated using Bland-Altman plots; Kappa and ICC reported. (Specific numerical results are not provided). |
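The kernel-conversion and fissure-completeness rows above rely on standard agreement statistics (Bland-Altman limits of agreement, Cohen's kappa, ICC). The sketch below is a hedged illustration of how such comparisons are typically computed; the -950 HU low-attenuation threshold and all data here are assumptions for demonstration, not values taken from the 510(k) summary.

```python
# Illustrative agreement statistics for paired quantitative results.
import numpy as np
from sklearn.metrics import cohen_kappa_score

def laa_percent(lung_hu_values, threshold_hu=-950):
    """Percentage of lung voxels below the low-attenuation threshold (assumed -950 HU)."""
    v = np.asarray(lung_hu_values)
    return 100.0 * np.mean(v < threshold_hu)

def bland_altman(a, b):
    """Bias and 95% limits of agreement between paired measurements a and b."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, bias - loa, bias + loa

print(laa_percent([-980, -960, -940, -910, -870, -955]))   # 50.0

# Paired %LAA results: kernel-converted sharp images vs. native soft-kernel images.
laa_converted = [22.1, 15.4, 30.2, 8.7, 18.9]
laa_soft      = [21.8, 15.9, 29.5, 9.1, 18.4]
print(bland_altman(laa_converted, laa_soft))

# Cohen's kappa for categorical agreement (e.g., fissure-completeness grades).
print(cohen_kappa_score([2, 1, 0, 2, 1], [2, 1, 1, 2, 1]))
```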
Detailed Breakdown of Study Information:
1. A table of acceptance criteria and the reported device performance:
- Acceptance Criteria: Not explicitly stated with numerical targets. The document mentions "pre-determined Pass/Fail criteria" for software verification and validation and "Success standard of System Test is not finding 'Major', 'Moderate' defect." For kernel conversion, the criterion is stated qualitatively (higher reliability). For fissure completeness, it's about comparison to radiologists.
- Reported Device Performance:
- General: "passed all of the tests."
- System Test: "Success standard... is not finding 'Major', 'Moderate' defect."
- Kernel Conversion: "The LAA result on kernel converted sharp image should have higher reliability with the soft kernel than LAA results on sharp kernel image that is not Kernel Conversion applied." (This is more of a hypothesis or objective rather than a quantitative result here).
- Fissure Completeness: "The performance was evaluated using Bland Altman plots to assess the fissure completeness performance compared to radiologists. Kappa and ICC were also reported." (Specific numerical values for Kappa/ICC are not provided).
2. Sample sizes used for the test set and the data provenance:
- Kernel Conversion: 96 total images (53 U.S. population and 43 Korean).
- Fissure Completeness: 129 subjects from TCIA (The Cancer Imaging Archive) LIDC database.
- Data Provenance: U.S. and Korean populations for Kernel Conversion, TCIA LIDC database for Fissure Completeness. The document does not specify if these were retrospective or prospective studies. Given they are from archives/databases, they are most likely retrospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Not specified in the provided text. For Fissure Completeness, it states "compared to radiologists," but the number and qualifications of these radiologists are not detailed.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- Not specified in the provided text.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance:
- Not specified. The document mentions "compared to radiologists" for fissure completeness, but it does not detail an MRMC study comparing human readers with and without AI assistance for measuring an effect size of improvement.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) study was done:
- Yes, the performance tests described (e.g., Nodule Matching, LAA Comparative Experiment, Semi-automatic Nodule Segmentation, Fissure Completeness, CAC Performance Evaluation) appear to be standalone evaluations of the algorithm's output against a reference (ground truth or expert assessment), without requiring human interaction during the measurement process by the device itself.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- For Fissure Completeness, the ground truth appears to be expert assessment/consensus from radiologists implied by "compared to radiologists."
- For other performance tests like "Nodule Matching," "LAA Comparative Experiment," "Semi-automatic Nodule Segmentation," "Brock Model Calculation," etc., the specific type of ground truth is not explicitly stated. It's likely derived from expert annotations or established clinical metrics but is not detailed.
8. The sample size for the training set:
- Not specified in the provided text. The document refers to "pre-trained deep learning models" for the predicate device, but gives no information on the training data for the current device.
9. How the ground truth for the training set was established:
- Not specified in the provided text.
Summary of Missing Information:
The document serves as an FDA 510(k) summary, aiming to demonstrate substantial equivalence to a predicate device rather than providing a detailed clinical study report. Therefore, specific quantitative performance metrics, detailed study designs (e.g., number and qualifications of readers, adjudication methods for ground truth, specifics of MRMC studies), and training set details are not included.
AVIEW RT ACS (269 days)
AVIEW RT ACS provides deep-learning-based auto-segmented organs and generates contours in RT-DICOM format from CT images, which can be used as initial contours for clinicians to approve and edit, by the radiation oncology department for treatment planning or by other professions where a segmented mask of organs is needed.
- a. Deep learning contouring from four body parts (Head & Neck, Breast, Abdomen, and Pelvis)
- b. Generates RT-DICOM structure of contoured organs
- c. Rule-based auto pre-processing
- d. Receive/Send/Export medical images and DICOM data
Note that the Breast (Both right and left lung, Heart) were validated with non-contrast CT. Head & Neck (Both right and left Eyes, Brain and Mandible), Abdomen (Both right and Liver), and Pelvis (Both right and left Femur and Bladder) were validated with Contrast CT only.
The AVIEW RT ACS provides deep-learning-based auto-segmented organs and generates contours in RT-DICOM format from CT images. This software could be used by the radiation oncology department for treatment planning, or by other professions where a segmented mask of organs is needed.
- Deep learning contouring: it can automatically contour the organs-at-risk (OARs) from four body parts (Head & Neck, Breast, Abdomen, and Pelvis)
- Generates RT-DICOM structure of contoured organs
- Rule-based auto pre-processing
- Receive/Send/Export medical images and DICOM data
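Since the device's core output is an RT-DICOM structure set, the following minimal, hypothetical pydicom sketch shows how one auto-segmented contour could be packaged into the attributes defined by the DICOM RT Structure Set IOD. This is not Coreline's implementation; a real file would additionally need file meta information, frame-of-reference and referenced-series modules, and patient/study attributes.

```python
# Hypothetical sketch: packaging one auto-segmented contour into RT Structure Set
# attributes with pydicom. Attribute keywords follow the DICOM standard.
from pydicom.dataset import Dataset
from pydicom.sequence import Sequence
from pydicom.uid import generate_uid

def make_rtstruct_stub(roi_name, contour_points_mm, referenced_frame_uid):
    """contour_points_mm: list of (x, y, z) tuples for one closed planar contour."""
    ds = Dataset()
    ds.SOPClassUID = "1.2.840.10008.5.1.4.1.1.481.3"  # RT Structure Set Storage
    ds.SOPInstanceUID = generate_uid()
    ds.Modality = "RTSTRUCT"

    roi = Dataset()
    roi.ROINumber = 1
    roi.ROIName = roi_name
    roi.ReferencedFrameOfReferenceUID = referenced_frame_uid
    roi.ROIGenerationAlgorithm = "AUTOMATIC"
    ds.StructureSetROISequence = Sequence([roi])

    contour = Dataset()
    contour.ContourGeometricType = "CLOSED_PLANAR"
    contour.NumberOfContourPoints = len(contour_points_mm)
    contour.ContourData = [c for p in contour_points_mm for c in p]  # flat x, y, z list

    roi_contour = Dataset()
    roi_contour.ReferencedROINumber = 1
    roi_contour.ContourSequence = Sequence([contour])
    ds.ROIContourSequence = Sequence([roi_contour])
    return ds

rt = make_rtstruct_stub("Heart", [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (10.0, 10.0, 0.0)],
                        generate_uid())
print(rt.StructureSetROISequence[0].ROIName)
```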
Here's an analysis of the acceptance criteria and the study proving the device meets them, based on the provided text:
Acceptance Criteria and Device Performance
The general acceptance criterion for the AVIEW RT ACS device appears to be comparable performance to a predicate device (MIM-MRT Dosimetry) in terms of segmentation accuracy, as measured by Dice Similarity Coefficient (DSC) and 95% Hausdorff Distance (HD). While explicit numerical acceptance thresholds are not stated in the provided text (e.g., "DSC must be greater than X"), the study is structured as a comparative effectiveness study. The expectation is that the AVIEW RT ACS performance should be at least equivalent to, if not better than, the predicate device.
The study's tables (Tables 1-30) consistently show the AVIEW RT ACS achieving higher average DSC values (closer to 1, indicating better overlap) and generally lower average 95% HD values (closer to 0, indicating less maximum distance between contours), across various organs, demographic groups, and scanner parameters, compared to the predicate device.
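DSC and the 95% Hausdorff distance are standard overlap and surface-distance metrics for segmentation. The sketch below shows one generic, voxel-based way they are typically computed from binary masks; it is an illustration under common conventions (surface extraction and spacing handling vary), not the evaluation code used in the submission.

```python
# Illustrative computation of Dice similarity coefficient (DSC) and 95th-percentile
# Hausdorff distance (HD95) between two binary segmentation masks.
import numpy as np
from scipy import ndimage

def dice(a, b):
    """DSC = 2|A∩B| / (|A| + |B|) for boolean masks a, b."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hd95(a, b, spacing=(1.0, 1.0, 1.0)):
    """95th-percentile symmetric surface distance between boolean masks a and b (mm)."""
    a, b = a.astype(bool), b.astype(bool)
    # Surface voxels: mask minus its erosion.
    surf_a = a & ~ndimage.binary_erosion(a)
    surf_b = b & ~ndimage.binary_erosion(b)
    # Distance from every voxel to the nearest surface voxel of the other mask.
    dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    dist_to_a = ndimage.distance_transform_edt(~surf_a, sampling=spacing)
    d_ab = dist_to_b[surf_a]   # A's surface to B's surface
    d_ba = dist_to_a[surf_b]   # B's surface to A's surface
    return np.percentile(np.concatenate([d_ab, d_ba]), 95)

# Toy example: two slightly offset spheres on a 64^3 grid with 1 mm isotropic voxels.
zz, yy, xx = np.mgrid[:64, :64, :64]
gt   = (zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 <= 15 ** 2
pred = (zz - 34) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 <= 15 ** 2
print(dice(gt, pred), hd95(gt, pred))
```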
Table of Acceptance Criteria and Reported Device Performance:
Metric / Organ (Examples) | Acceptance Criterion (Implicit) | AVIEW RT ACS Performance (Mean ± SD, [95% CI]) | Predicate Device Performance (Mean ± SD, [95% CI]) | Difference (AVIEW - Predicate) | Meets Criteria? |
---|---|---|---|---|---|
Overall DSC | Should be comparable to or better than predicate device. | (See tables below for individual organ results) | (See tables below for individual organ results) | Mostly positive | Yes |
Overall 95% HD (mm) | Should be comparable to or better than predicate device (i.e., lower HD). | (See tables below for individual organ results) | (See tables below for individual organ results) | Mostly negative (indicating better AVIEW) | Yes |
Brain DSC | Comparable to or better than predicate. | 0.97 ± 0.01 (0.97, 0.98) | 0.96 ± 0.01 (0.96, 0.96) | 0.01 | Yes |
Brain 95% HD (mm) | Comparable to or better than predicate (lower HD). | 6.92 ± 20.46 (-1.1, 14.94) | 4.61 ± 2.17 (3.76, 5.46) | 2.31 | Mixed (Higher HD for AVIEW, but wide CI) |
Heart DSC | Comparable to or better than predicate. | 0.94 ± 0.03 (0.93, 0.95) | 0.78 ± 1.20 (0.70, 8.56) | 0.16 | Yes (Significantly better) |
Heart 95% HD (mm) | Comparable to or better than predicate (lower HD). | 6.19 ± 4.21 (4.73, 7.65) | 18.90 ± 5.09 (17.14, 20.67) | -12.71 | Yes (Significantly better) |
Liver DSC | Comparable to or better than predicate. | 0.96 ± 0.01 (0.96, 0.97) | 0.87 ± 0.06 (0.85, 0.90) | 0.09 | Yes |
Liver 95% HD (mm) | Comparable to or better than predicate (lower HD). | 7.17 ± 12.07 (2.54, 11.81) | 24.62 ± 15.16 (18.79, 30.44) | -17.44 | Yes (Significantly better) |
Bladder DSC | Comparable to or better than predicate. | 0.88 ± 0.14 (0.84, 0.93) | 0.52 ± 0.26 (0.44, 0.60) | 0.36 | Yes (Significantly better) |
Bladder 95% HD (mm) | Comparable to or better than predicate (lower HD). | 10.55 ± 20.56 (3.74, 17.36) | 30.48 ± 22.76 (22.94, 38.02) | -19.93 | Yes (Significantly better) |
Note: The tables throughout the document provide specific performance metrics for individual organs and sub-groups (race, vendors, slice thickness, kernel types). The general conclusion from these tables is that the AVIEW RT ACS consistently performs as well as or better than the predicate device across most metrics and categories.
Study Details for Acceptance Criteria Proof:
1. Sample Size Used for the Test Set: 120 cases.
- Data Provenance: The dataset included cases from both South Korea and the USA. It was constructed with various ethnicities (White, Black, Asian, Hispanic, Latino, African American, etc.) and from four major vendors (GE, Siemens, Toshiba, and Philips).
- Retrospective/Prospective: Not explicitly stated, but the mention of a data set constructed for validation suggests a retrospective collection.
2. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications:
- Number of Experts: 3 radiation oncology physicians.
- Qualifications: All were trained by "The Korean Society for Radiation Oncology," board-certified by the "Ministry of Health and Welfare," with a range of 9-21 years of experience in radiotherapy. The experts included attending assistant professors (n=2) and professors (n=1) from three institutions.
3. Adjudication Method for the Test Set:
- The method was a sequential editing process:
  * One expert manually delineated the organs.
  * The segmentation results from the first expert were then sequentially edited by the other two experts.
  * The first expert made corrections.
  * The result was then received by another expert, who finalized the gold standard.
- This can be considered a form of sequential consensus or collaborative review rather than a strict N+1 or M+N+1 method.
4. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done:
- Yes, a comparative effectiveness study was done. The study directly compares the AVIEW RT ACS against a predicate device (MIM-MRT Dosimetry).
- Effect Size of Human Readers Improvement with AI vs. Without AI Assistance:
- The study does not measure the improvement of human readers with AI assistance. Instead, it evaluates the standalone performance of the AI device against the standalone performance of a predicate AI device, both compared to expert-generated ground truth. The "human readers" (the three experts) were used solely to create the ground truth, not to evaluate their performance with or without AI assistance.
5. If a Standalone (i.e., algorithm only without human-in-the-loop performance) study was done:
- Yes, a standalone performance study was done. The study compares the auto-segmentation results of the AVIEW RT ACS directly to the expert-derived "gold standard" and also compares it to the auto-segmentation of the predicate device. This is purely an algorithm-only evaluation.
6. The Type of Ground Truth Used:
- Expert Consensus. The ground truth was established by three radiation oncology physicians through a sequential delineation and editing process to create a "robust gold standard."
7. The Sample Size for the Training Set:
- Not specified within the provided text. The document refers only to the validation/test set.
8. How the Ground Truth for the Training Set Was Established:
- Not specified within the provided text. Since the training set size and characteristics are not mentioned, neither is the method for establishing its ground truth.