510(k) Data Aggregation (166 days)
AVIEW LCS is intended for the review, analysis, and reporting of thoracic CT images for the purpose of characterizing lung nodules in a single study or over the time course of several thoracic studies. Characterizations include nodule type, nodule location, and measurements such as size (major axis), estimated effective diameter derived from the nodule volume, nodule volume, mean HU (the average CT pixel value inside the nodule, in HU), minimum HU, maximum HU, mass (calculated from the CT pixel values), volumetric measures (Solid Major: the length of the longest diameter of the solid portion of the nodule measured in 3D; Solid 2nd Major: the length of the longest diameter of the solid portion measured in sections perpendicular to it), VDT (volume doubling time), and Lung-RADS (a classification proposed to aid with findings). The system performs the measurements automatically, allows lung nodules and their measurements to be displayed, and integrates with the FDA-cleared Mevis CAD (computer-aided detection) software (K043617).
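For illustration, the effective-diameter and mass measures mentioned above follow common conventions: the effective diameter is the diameter of a sphere with the nodule's volume, and mass is often derived from a linear HU-to-density mapping. The sketch below is not taken from the submission; the function name and the HU-to-density mapping are assumptions used only to show how such summary values could be computed from a nodule's voxel values.

```python
import numpy as np

def nodule_summary(hu_values, voxel_volume_mm3):
    """Illustrative summary measures for a segmented nodule.

    hu_values: 1-D array of CT values (HU) for voxels inside the nodule mask.
    voxel_volume_mm3: volume of a single voxel in mm^3.
    """
    hu_values = np.asarray(hu_values, dtype=float)
    volume_mm3 = hu_values.size * voxel_volume_mm3
    # Effective diameter: diameter of a sphere having the same volume as the nodule.
    effective_diameter_mm = (6.0 * volume_mm3 / np.pi) ** (1.0 / 3.0)
    # Mass: assumes a linear HU-to-density mapping (air at -1000 HU ~ 0 g/cm^3,
    # water at 0 HU ~ 1 g/cm^3); g/cm^3 x mm^3 yields milligrams.
    density_g_per_cm3 = (hu_values + 1000.0) / 1000.0
    mass_mg = float(density_g_per_cm3.mean() * volume_mm3)
    return {
        "volume_mm3": volume_mm3,
        "effective_diameter_mm": effective_diameter_mm,
        "mean_hu": float(hu_values.mean()),
        "min_hu": float(hu_values.min()),
        "max_hu": float(hu_values.max()),
        "mass_mg": mass_mg,
    }
```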
AVIEW LCS is intended for use in diagnostic patient imaging, specifically for the review and analysis of thoracic CT images. It provides features such as semi-automatic nodule measurement (segmentation), maximal-plane measurement, 3D measurement and volumetric measures, and automatic nodule detection through integration with third-party CAD. It also provides a cancer risk estimate based on the PANCAN risk model, which calculates a malignancy score from numerical or Boolean inputs, and supports follow-up with automated nodule matching and automatic Lung-RADS categorization. Lung-RADS is a quality assurance tool designed to standardize lung cancer screening CT reporting and management recommendations, based on nodule type, size, size change, and other reported findings.
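The PANCAN (Brock) malignancy-risk calculation referenced above is a published logistic-regression model over demographic and nodule features. The sketch below shows only the general form of such a model; every coefficient value, centering constant, and the size transform are illustrative placeholders, not the published Brock parameters and not the device's implementation.

```python
import math

# Placeholder coefficients for illustration only -- NOT the published Brock/PANCAN values.
COEFFS = {
    "intercept": -6.5,
    "age_per_year": 0.03,
    "female": 0.6,
    "family_history": 0.5,
    "emphysema": 0.3,
    "size_term": -5.0,            # applied to a nonlinear transform of diameter
    "part_solid": 0.4,
    "nonsolid": -0.1,
    "upper_lobe": 0.6,
    "per_extra_nodule": -0.05,
    "spiculation": 0.8,
}

def pancan_style_risk(age, is_female, family_history, emphysema,
                      diameter_mm, nodule_type, upper_lobe,
                      nodule_count, spiculation):
    """Logistic malignancy-risk model in the style of the PANCAN/Brock model.

    Boolean inputs are True/False; nodule_type is 'solid', 'part_solid', or 'nonsolid'.
    """
    x = COEFFS["intercept"]
    x += COEFFS["age_per_year"] * (age - 62)        # centered on an assumed typical screening age
    x += COEFFS["female"] * is_female
    x += COEFFS["family_history"] * family_history
    x += COEFFS["emphysema"] * emphysema
    # The published model uses a nonlinear size transform; a power term is shown here as a stand-in.
    x += COEFFS["size_term"] * ((diameter_mm / 10.0) ** -0.5 - 1.58)
    x += COEFFS["part_solid"] * (nodule_type == "part_solid")
    x += COEFFS["nonsolid"] * (nodule_type == "nonsolid")
    x += COEFFS["upper_lobe"] * upper_lobe
    x += COEFFS["per_extra_nodule"] * (nodule_count - 4)
    x += COEFFS["spiculation"] * spiculation
    return 1.0 / (1.0 + math.exp(-x))               # probability of malignancy
```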
- Nodule measurement
  - Adding a nodule by segmentation or by lines.
  - Semi-automatic nodule measurement (segmentation).
  - Maximal plane measure, 3D measure, and volumetric measure.
  - Automatic large vessel removal.
  - Provides various features calculated for each nodule, such as size, major axis (longest diameter measured in 2D/3D), minor axis (shortest diameter measured in 2D/3D), maximal plane, volume, mean HU, minimum HU, and maximum HU for solid nodules, and the ratio of the longest axis of the solid portion to the non-solid portion for part-solid nodules.
  - Full support for the Lung-RADS workflow: US Lung-RADS and KR Lung-RADS.
  - Nodule malignancy score (PANCAN model) calculation.
  - Importing of CAD results.
- Follow-up
  - Automatic retrieval of prior studies.
  - Follow-up support with nodule matching and comparison.
  - Automatic calculation of VDT (volume doubling time).
- Automatic nodule detection (CADe)
  - Seamless integration with Mevis Visia (FDA 510(k) cleared).
- Lungs and lobes segmentation
  - Improved segmentation of lungs and lobes based on deep-learning algorithms.
- Report
  - PDF report generation.
  - Saves or sends the PDF report and captured images as DICOM files.
  - Provides a structured report including items such as nodule location, as well as user-entered findings on nodules.
  - Reports are generated using the results of all nodules detected so far (Lung-RADS).
- Save Result
  - Saves the results in an internal format.
The provided text describes the acceptance criteria and the study conducted to prove the AVIEW LCS device meets these criteria.
1. Table of Acceptance Criteria and Reported Device Performance
The document does not present a formal table of acceptance criteria with corresponding reported device performance metrics in a single, clear format for all functionalities. Instead, it describes various tests and their success standards implicitly serving as acceptance criteria.
Based on the "Semi-automatic Nodule Segmentation" section, here's a reconstructed table for that specific functionality:
| Acceptance Criteria (Semi-automatic Nodule Segmentation) | Reported Device Performance |
|---|---|
| The measured length should differ by less than one voxel from the known size of the produced sphere. | Implied "standard judgment" met through testing with spheres of various radii (2 mm, 3 mm, 6 mm, 7 mm, 8 mm, 9 mm, 10 mm). |
| The measured volume should be within 10% error of the volume of the created sphere. | Implied "standard judgment" met through testing with spheres of various radii. |
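A minimal sketch of how such a sphere-phantom check could be run is shown below, assuming an isotropic voxel grid and the 10%-volume / one-voxel-length acceptance limits quoted above. The voxel size, grid size, and function names are assumptions for illustration, not details from the submission.

```python
import numpy as np

def make_sphere_mask(radius_mm, voxel_mm, grid=64):
    """Voxelize a sphere of the given radius on a centered isotropic grid."""
    coords = (np.arange(grid) - grid / 2 + 0.5) * voxel_mm
    x, y, z = np.meshgrid(coords, coords, coords, indexing="ij")
    return x**2 + y**2 + z**2 <= radius_mm**2

def check_sphere(radius_mm, voxel_mm=0.7):
    """Apply the two acceptance checks to a synthetic sphere phantom."""
    mask = make_sphere_mask(radius_mm, voxel_mm)
    # Volume check: measured voxel volume vs. analytic sphere volume, within 10%.
    measured_vol = mask.sum() * voxel_mm**3
    true_vol = 4.0 / 3.0 * np.pi * radius_mm**3
    vol_ok = abs(measured_vol - true_vol) / true_vol <= 0.10
    # Length check: mask extent along one axis vs. true diameter, within one voxel.
    measured_len = np.any(mask, axis=(1, 2)).sum() * voxel_mm
    len_ok = abs(measured_len - 2 * radius_mm) < voxel_mm
    return vol_ok, len_ok

for r in (2, 3, 6, 7, 8, 9, 10):
    print(f"radius {r} mm:", check_sphere(r))
```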
For "Nodule Matching test with Lung Registration":
| Acceptance Criteria (Nodule Matching) | Reported Device Performance |
|---|---|
| Voxel-distance error between the DVF-converted position and the marked nodule position in the moving image (for evaluation of the DVF). | Measured for 28 locations. Acceptance is implied, as the study was used to "check the applicability of Registry" and for "Start-up verification". |
| Accuracy evaluation of DVF subsampling for size optimization. | Implied acceptable accuracy for reducing the DVF capacity. |
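As a rough illustration of the DVF evaluation described above, the sketch below computes the voxel-distance error between a nodule position mapped from the fixed image through a displacement vector field (DVF) and the position marked in the moving image. The array layout, voxel-unit displacements, and the fixed-to-moving convention are assumptions, not taken from the document.

```python
import numpy as np

def dvf_match_error(fixed_nodule_vox, moving_nodule_vox, dvf):
    """Voxel-distance error between a DVF-mapped nodule position and the
    position marked in the moving image.

    fixed_nodule_vox:  (z, y, x) voxel index of the nodule in the fixed image.
    moving_nodule_vox: (z, y, x) voxel index of the same nodule in the moving image.
    dvf: displacement field of shape (Z, Y, X, 3), in voxel units,
         mapping fixed-image coordinates to moving-image coordinates.
    """
    z, y, x = fixed_nodule_vox
    mapped = np.asarray(fixed_nodule_vox, dtype=float) + dvf[z, y, x]
    return float(np.linalg.norm(mapped - np.asarray(moving_nodule_vox, dtype=float)))

# Per-nodule errors (e.g., over the 28 marked locations) can then be averaged
# to summarize registration accuracy.
```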
For "Software Verification and Validation - System Test":
| Acceptance Criteria (System Test) | Reported Device Performance |
|---|---|
| No 'Major' defects found. | Implied satisfied (device passed all tests). |
| No 'Moderate' defects found. | Implied satisfied (device passed all tests). |
For "Auto segmentation (based on deep-learning algorithms) test":
| Acceptance Criteria (Auto Segmentation - Korean Data) | Reported Device Performance |
|---|---|
| Auto-segmentation results identified by a specialist and radiologist and classified as '2 (very good)'. | "The results of auto-segmentation are identified by a specialist and radiologist and classified as 0 (Not good), 1 (need adjustment), and 2 (very good)". This indicates an evaluation was performed to categorize segmentation quality, but the specific percentage or number of "very good" classifications is not provided as a direct performance metric. |
| Acceptance Criteria (Auto Segmentation - NLST Data) | Reported Device Performance |
|---|---|
| Dice similarity coefficient between auto-segmentation and manual segmentation (performed by a radiographer and confirmed by a radiologist). | "The dice similarity coefficient is performed to check how similar they are." The specific threshold or result of the DSC is not provided. |
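The Dice similarity coefficient used for the NLST comparison is a standard overlap metric between two binary masks; a minimal sketch is shown below (the function name is illustrative, not from the submission).

```python
import numpy as np

def dice_coefficient(auto_mask, manual_mask):
    """Dice similarity coefficient between two binary segmentation masks."""
    auto = np.asarray(auto_mask, dtype=bool)
    manual = np.asarray(manual_mask, dtype=bool)
    total = auto.sum() + manual.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(auto, manual).sum() / total
```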
2. Sample Size Used for the Test Set and Data Provenance
- Semi-automatic Nodule Segmentation:
  - Sample Size: Not explicitly stated as a number of nodules or patients. Spheres of various radii (2 mm, 3 mm, 6 mm, 7 mm, 8 mm, 9 mm, 10 mm) were created for testing.
  - Data Provenance: Artificially generated spheres.
- Nodule Matching test with Lung Registration:
  - Sample Size: "28 locations" (referring to nodule locations used for DVF evaluation).
  - Data Provenance: "Deployed" experimental data, likely retrospective CT scans. The specific country of origin is not mentioned, but given the company's location (Republic of Korea) it could involve Korean data.
- Mevis CAD Integration test:
  - Sample Size: Not explicitly stated. The test confirms data transfer and display.
  - Data Provenance: Not specified; likely internal test data.
- Brock Score (a.k.a. PANCAN) Risk Calculation test:
  - Sample Size:
    - Former paper: "PanCan data set, 187 persons had 7008 nodules, of which 102 were malignant," and "BCCA data set, 1090 persons had 5021 nodules, of which 42 were malignant."
    - Latter paper: "4431 nodules (4315 benign nodules and 116 malignant nodules of NLST data)."
  - Data Provenance: Retrospective, from published papers: the PanCan data set, the BCCA data set, and NLST (National Lung Screening Trial) data.
- VDT Calculation test:
  - Sample Size: Not explicitly stated.
  - Data Provenance: Unit tests, implying simulated or internally generated data for calculation verification.
- Lung RADS Calculation test:
  - Sample Size: "10 cases were extracted."
  - Data Provenance: Retrospective, "from the Lung-RADS survey table provided by the Korean Society of Thoracic Radiology."
- Auto segmentation (based on deep-learning algorithms) test:
  - Korean Data: "192 suspected COPD patients" (retrospective, implicitly from Korea given the source, the Korean Society of Thoracic Radiology).
  - NLST Data: "80 patient's Chest CT data who were enrolled in NLST" (retrospective, from the National Lung Screening Trial).
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
- Auto segmentation (based on deep-learning algorithms) test - Korean Data:
  - Number of Experts: Not explicitly stated, but "a specialist and radiologist" suggests at least two experts.
  - Qualifications: "specialist and radiologist." No specific years of experience or sub-specialty are mentioned.
- Auto segmentation (based on deep-learning algorithms) test - NLST Data:
  - Number of Experts: At least two; segmentation was "performed by experienced radiographer and confirmed by experienced radiologist."
  - Qualifications: "experienced radiographer" and "experienced radiologist." No specific years of experience or sub-specialty are mentioned.
- Brock Score (a.k.a. PANCAN) Risk Calculation test: Ground truth was established in the referenced published studies; details on the experts involved in those studies are not provided in this document.
- Lung RADS Calculation test: Ground truth was implicitly established by the "Lung-RADS survey table provided by the Korean Society of Thoracic Radiology." The experts who created this survey table are not detailed here.
4. Adjudication Method for the Test Set
The document does not explicitly describe a formal adjudication method (e.g., 2+1, 3+1) for establishing ground truth for any of the tests.
- For the Auto segmentation (based on deep-learning algorithms) test, the "Korean Data" section mentions that results are "identified by a specialist and radiologist and classified." This suggests independent review or consensus, but no specific adjudication rule is given. For the "NLST Data" section, manual segmentation was "performed by experienced radiographer and confirmed by experienced radiologist," indicating a two-step process of creation and verification rather than a conflict-resolution method.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No Multi-Reader Multi-Case (MRMC) comparative effectiveness study comparing human readers with versus without AI assistance is described in this document. The device is for "review and analysis" and "reporting," and it integrates with a third-party CAD, but its direct impact on human reader performance through an MRMC study is not detailed.
6. Standalone Performance Study (Algorithm only without human-in-the-loop performance)
Yes, standalone performance studies were conducted for several functionalities, focusing on the algorithm's performance without direct human-in-the-loop tasks:
- Semi-automatic Nodule Segmentation: The test on artificial spheres evaluates the algorithm's measurement accuracy directly.
- Nodule Matching test with Lung Registration: Evaluates the algorithm's ability to calculate DVF and match nodules.
- Brock Score (a.k.a. PANCAN) Risk Calculation test: Unit tests comparing values calculated by the algorithm against an Excel sheet.
- VDT Calculation test: Unit tests to confirm the calculation (a sketch of the standard VDT formula follows this list).
- Lung RADS Calculation test: Unit tests to confirm implementation accuracy against regulations.
- Auto segmentation (based on deep-learning algorithms) test: This is a standalone evaluation of the algorithm's segmentation performance against expert classification or manual segmentation (NLST data).
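For reference, VDT is conventionally computed from two volume measurements under an exponential-growth assumption; the sketch below shows the standard formula. The device's exact implementation is not described in the document, so the function and its edge-case handling are illustrative.

```python
import math

def volume_doubling_time(v1_mm3, v2_mm3, days_between_scans):
    """Volume doubling time in days, assuming exponential growth:
    V2 = V1 * 2 ** (t / VDT)  =>  VDT = t * ln(2) / ln(V2 / V1).
    A negative result indicates shrinkage; equal or invalid volumes give no finite VDT."""
    if v1_mm3 <= 0 or v2_mm3 <= 0 or v1_mm3 == v2_mm3:
        return float("inf")
    return days_between_scans * math.log(2) / math.log(v2_mm3 / v1_mm3)
```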
7. Type of Ground Truth Used
- Semi-automatic Nodule Segmentation: Known physical properties of artificially generated spheres (e.g., precise radius and volume).
- Nodule Matching test with Lung Registration: Nodule positions marked on "Fixed image" and "Moving image," implying expert identification on real CT scans.
- Brock Score (a.k.a. PANCAN) Risk Calculation test: Reference values from published literature (PanCan, BCCA, NLST data), which are themselves derived from ground truth of malignancy (pathology, clinical follow-up).
- VDT Calculation test: Established mathematical formulas for VDT.
- Lung RADS Calculation test: Lung-RADS regulations and a "Lung-RADS survey table provided by the Korean Society of Thoracic Radiology" (a simplified categorization sketch follows this list).
- Auto segmentation (based on deep-learning algorithms) test - Korean Data: Expert classification ("0 (Not good), 1 (need adjustment), and 2 (very good)") by a specialist and radiologist.
- Auto segmentation (based on deep-learning algorithms) test - NLST Data: Manual segmentation performed by an experienced radiographer and confirmed by an experienced radiologist.
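As background for the Lung-RADS categorization tests referenced above, the sketch below shows a deliberately simplified, baseline-only categorization for solid and non-solid nodules using Lung-RADS v1.1 size thresholds. It is illustrative only; the device implements the full Lung-RADS rules (growth, part-solid components, other findings, exam-level rollup, and the US/KR variants), which this sketch does not cover.

```python
def lung_rads_category(nodule_type, diameter_mm):
    """Simplified Lung-RADS category for a baseline screening exam (illustrative).

    Covers only solid and ground-glass (non-solid) nodules at baseline, using
    Lung-RADS v1.1 size thresholds.
    """
    if nodule_type == "solid":
        if diameter_mm < 6:
            return "2"   # benign appearance or behavior
        if diameter_mm < 8:
            return "3"   # probably benign
        if diameter_mm < 15:
            return "4A"  # suspicious
        return "4B"      # very suspicious
    if nodule_type == "nonsolid":  # ground-glass nodule
        return "2" if diameter_mm < 30 else "3"
    raise ValueError("part-solid nodules and other findings are not covered in this sketch")
```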
8. Sample Size for the Training Set
The document does not explicitly state the sample size used for the training set(s) for the deep-learning algorithms or other components of the AVIEW LCS. It only details the test sets.
9. How the Ground Truth for the Training Set Was Established
Since the training set size is not provided, the method for establishing its ground truth is also not detailed in this document. However, given the nature of the evaluation for the test set (expert marking, classification), it is highly probable that similar methods involving expert radiologists or specialists would have been used to establish ground truth for any training data.