AVIEW LCS (166 days)
AVIEW LCS is intended for the review, analysis, and reporting of thoracic CT images for the purpose of characterizing lung nodules in a single study or over the time course of several thoracic studies. Characterizations include nodule type, nodule location, and measurements such as size (major axis), estimated effective diameter derived from the nodule volume, nodule volume, mean HU (the average CT pixel value inside the nodule, in HU), minimum HU, maximum HU, mass (calculated from the CT pixel values), and volumetric measures (Solid Major: the length of the longest diameter of the solid portion of the nodule, measured in 3D; Solid 2nd Major: the length of the longest diameter of the solid part, measured in sections perpendicular to the solid portion of the nodule), VDT (volume doubling time), and Lung-RADS (a classification proposed to aid with findings). The system performs the measurements automatically, allowing lung nodules and measurements to be displayed, and also integrates with the FDA-cleared Mevis CAD (computer-aided detection) software (K043617).
AVIEW LCS is intended for use in diagnostic patient imaging, for the review and analysis of thoracic CT images. It provides features such as semi-automatic nodule measurement (segmentation); maximal plane, 3D, and volumetric measures; and automatic nodule detection through integration with third-party CAD. It also provides a cancer risk estimate based on the PANCAN risk model, which calculates a malignancy score from numerical or Boolean inputs. Follow-up support includes automated nodule matching and automatic categorization of the Lung-RADS score, a quality assurance tool designed to standardize lung cancer screening CT reporting and management recommendations, based on nodule type, size, size change, and other reported findings.
- Nodule measurement
  - Adding nodules by segmentation or by lines.
  - Semi-automatic nodule measurement (segmentation).
  - Maximal plane measure, 3D measure, and volumetric measures.
  - Automatic large-vessel removal.
  - Provides various features calculated for each nodule, such as size, major (longest diameter measured in 2D/3D), minor (shortest diameter measured in 2D/3D), maximal plane, volume, mean HU, minimum HU, and maximum HU for solid nodules, plus the ratio of the longest axis of the solid to non-solid portion for part-solid nodules.
  - Fully supports the Lung-RADS workflow: US Lung-RADS and KR Lung-RADS.
  - Nodule malignancy score (PANCAN model) calculation.
  - Importing of CAD results.
- Follow-up
  - Automatic retrieval of prior study data.
  - Follow-up support with nodule matching and comparison.
  - Automatic calculation of VDT (volume doubling time).
- Automatic nodule detection (CADe)
  - Seamless integration with Mevis Visia (FDA 510(k) cleared).
- Lungs and lobes segmentation
  - Improved segmentation of lungs and lobes based on deep-learning algorithms.
- Report
  - PDF report generation.
  - Saves or sends the PDF report and captured images as DICOM files.
  - Provides a structured report including items such as nodule location, and also allows input of findings on nodules.
  - Reports are generated using the results of all nodules detected so far (Lung-RADS).
- Save Result
  - Saves the results in an internal format.
The provided text describes the acceptance criteria and the study conducted to prove the AVIEW LCS device meets these criteria.
1. Table of Acceptance Criteria and Reported Device Performance
The document does not present a formal table of acceptance criteria with corresponding reported device performance metrics in a single, clear format for all functionalities. Instead, it describes various tests whose success standards implicitly serve as acceptance criteria.
Based on the "Semi-automatic Nodule Segmentation" section, here's a reconstructed table for that specific functionality:
| Acceptance Criteria (Semi-automatic Nodule Segmentation) | Reported Device Performance |
|---|---|
| Measured length should differ from the known size of the generated sphere by less than one voxel. | Implied "standard judgment" met through testing with spheres of various radii (2 mm, 3 mm, 6 mm, 7 mm, 8 mm, 9 mm, 10 mm). |
| Measured volume should be within 10% error of the volume of the generated sphere. | Implied "standard judgment" met through testing with spheres of various radii. |
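By way of illustration, here is a minimal sketch of how these two criteria could be checked against synthetic spheres. The `passes_acceptance` harness, the 0.7 mm voxel size, and the 5% volume perturbation are hypothetical choices for the example, not the vendor's test code.

```python
import numpy as np

def effective_diameter(volume_mm3: float) -> float:
    """Diameter of a sphere with the given volume: d = (6V / pi)**(1/3)."""
    return (6.0 * volume_mm3 / np.pi) ** (1.0 / 3.0)

def passes_acceptance(true_radius_mm: float,
                      measured_volume_mm3: float,
                      voxel_size_mm: float) -> bool:
    """Apply the two implied acceptance criteria to one synthetic sphere."""
    true_volume = (4.0 / 3.0) * np.pi * true_radius_mm ** 3
    # Criterion 1: measured (effective) diameter within one voxel of truth.
    length_ok = abs(effective_diameter(measured_volume_mm3)
                    - 2.0 * true_radius_mm) < voxel_size_mm
    # Criterion 2: measured volume within 10% of the true sphere volume.
    volume_ok = abs(measured_volume_mm3 - true_volume) / true_volume < 0.10
    return length_ok and volume_ok

# Example: the radii reportedly used in the test, with a hypothetical
# 0.7 mm voxel and a 5% volume measurement error.
for r in (2, 3, 6, 7, 8, 9, 10):
    v_true = (4.0 / 3.0) * np.pi * r ** 3
    print(r, passes_acceptance(r, v_true * 1.05, voxel_size_mm=0.7))
```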
For "Nodule Matching test with Lung Registration":
| Acceptance Criteria (Nodule Matching) | Reported Device Performance |
|---|---|
| Voxel distance error between the converted position and the nodule position on the moving image (for evaluation of the DVF). | Measured for 28 locations. Acceptance implied, as the study served to "check the applicability of Registry" and as "Start-up verification". |
| Accuracy evaluation of DVF subsampling for size optimization. | Implied acceptable accuracy for reducing DVF capacity. |
For "Software Verification and Validation - System Test":
| Acceptance Criteria (System Test) | Reported Device Performance |
|---|---|
| No 'Major' defects found. | Implied satisfied (the device passed all tests). |
| No 'Moderate' defects found. | Implied satisfied (the device passed all tests). |
For "Auto segmentation (based on deep-learning algorithms) test":
| Acceptance Criteria (Auto Segmentation - Korean Data) | Reported Device Performance |
|---|---|
| Auto-segmentation results identified by a specialist and radiologist and classified as '2 (very good)'. | "The results of auto-segmentation are identified by a specialist and radiologist and classified as 0 (Not good), 1 (need adjustment), and 2 (very good)". This suggests an evaluation was performed to categorize segmentation quality, but the specific percentage or number of "very good" classifications is not provided as a direct performance metric. |
| Acceptance Criteria (Auto Segmentation - NLST Data) | Reported Device Performance |
|---|---|
| Dice similarity coefficient between auto-segmentation and manual segmentation (performed by a radiographer and confirmed by a radiologist). | "The dice similarity coefficient is performed to check how similar they are." The specific threshold or result of the DSC is not provided. |
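For reference, the Dice similarity coefficient (DSC) used in the NLST comparison is defined as DSC = 2|A ∩ B| / (|A| + |B|). Below is a minimal NumPy sketch of the metric; it is a generic implementation, not the vendor's.

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|), from 0 (disjoint) to 1 (identical)."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:  # both masks empty: treat as perfect agreement
        return 1.0
    return 2.0 * np.logical_and(a, b).sum() / denom

# Example with two overlapping synthetic masks.
auto = np.zeros((10, 10), dtype=bool)
auto[2:7, 2:7] = True
manual = np.zeros((10, 10), dtype=bool)
manual[3:8, 3:8] = True
print(f"DSC = {dice_coefficient(auto, manual):.3f}")  # 16 overlap voxels -> 0.640
```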
2. Sample Size Used for the Test Set and Data Provenance
- Semi-automatic Nodule Segmentation:
  - Sample Size: Not explicitly stated as a number of nodules or patients. Spheres of various radii (2 mm, 3 mm, 6 mm, 7 mm, 8 mm, 9 mm, 10 mm) were created for testing.
  - Data Provenance: Artificially generated spheres.
- Nodule Matching test with Lung Registration:
  - Sample Size: "28 locations" (referring to nodule locations for DVF calculation).
  - Data Provenance: "Deployed" experimental data, likely retrospective CT scans. The specific country of origin is not mentioned, but given the company's origin (Republic of Korea), it could involve Korean data.
- Mevis CAD Integration test:
  - Sample Size: Not explicitly stated. The test confirms data transfer and display.
  - Data Provenance: Not specified; likely internal test data.
- Brock Score (a.k.a. PANCAN) Risk Calculation test (see the model sketch after this list):
  - Sample Size:
    - Former paper: "PanCan data set, 1871 persons had 7008 nodules, of which 102 were malignant," and "BCCA data set, 1090 persons had 5021 nodules, of which 42 were malignant."
    - Latter paper: "4431 nodules (4315 benign nodules and 116 malignant nodules of NLST data)."
  - Data Provenance: Retrospective, from published papers: the PanCan data set, the BCCA data set, and NLST (National Lung Screening Trial) data.
- VDT Calculation test:
  - Sample Size: Not explicitly stated.
  - Data Provenance: Unit tests, implying simulated or internally generated data for calculation verification.
- Lung RADS Calculation test:
  - Sample Size: "10 cases were extracted."
  - Data Provenance: Retrospective, "from the Lung-RADS survey table provided by the Korean Society of Thoracic Radiology."
- Auto segmentation (based on deep-learning algorithms) test:
  - Korean Data: "192 suspected COPD patients" (retrospective, implicitly from Korea given the source, the Korean Society of Thoracic Radiology).
  - NLST Data: "80 patient's Chest CT data who were enrolled in NLST" (retrospective, from the National Lung Screening Trial).
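The Brock (PANCAN) model referenced above is a logistic-regression-style calculator that maps numerical and Boolean patient and nodule inputs to a malignancy probability. The sketch below shows only that general functional form; the predictor set and coefficients are placeholders, not the published Brock parameters (McWilliams et al., NEJM 2013) or the device's implementation.

```python
import math

# Placeholder coefficients for illustration only -- NOT the published
# Brock/PANCAN values. The real model uses fitted coefficients and a
# nonlinear transform of nodule size.
COEFS = {
    "intercept": -6.5,   # placeholder
    "age": 0.03,         # per year, placeholder
    "female": 0.6,       # Boolean predictor, placeholder
    "emphysema": 0.3,    # Boolean predictor, placeholder
    "spiculation": 0.8,  # Boolean predictor, placeholder
    "size_mm": 0.1,      # placeholder; the real model transforms size nonlinearly
}

def malignancy_probability(age: float, female: bool, emphysema: bool,
                           spiculation: bool, size_mm: float) -> float:
    """Logistic model: p = 1 / (1 + exp(-(b0 + sum(b_i * x_i))))."""
    z = (COEFS["intercept"]
         + COEFS["age"] * age
         + COEFS["female"] * female
         + COEFS["emphysema"] * emphysema
         + COEFS["spiculation"] * spiculation
         + COEFS["size_mm"] * size_mm)
    return 1.0 / (1.0 + math.exp(-z))

# Example: a 62-year-old woman with an 8 mm spiculated nodule.
print(f"{malignancy_probability(62, True, False, True, 8.0):.3f}")
```

The document's unit tests reportedly compared the device's calculated scores against reference values in an Excel sheet, which is consistent with this kind of closed-form model.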
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
- Auto segmentation (based on deep-learning algorithms) test - Korean Data:
  - Number of Experts: Not explicitly stated, but "a specialist and radiologist" suggests at least two experts.
  - Qualifications: "Specialist and radiologist." No specific years of experience or sub-specialty are mentioned.
- Auto segmentation (based on deep-learning algorithms) test - NLST Data:
  - Number of Experts: At least two: an "experienced radiographer" who performed the segmentation and an "experienced radiologist" who confirmed it.
  - Qualifications: "Experienced radiographer" and "experienced radiologist." No specific years of experience or sub-specialty are mentioned.
- Brock Score (a.k.a. PANCAN) Risk Calculation test: Ground truth was established through the referenced studies; details on the experts for those studies are not provided in this document.
- Lung RADS Calculation test: Ground truth was implicitly established by the "Lung-RADS survey table provided by the Korean Society of Thoracic Radiology." The experts who created this survey table are not detailed here.
4. Adjudication Method for the Test Set
The document does not explicitly describe a formal adjudication method (e.g., 2+1, 3+1) for establishing ground truth for any of the tests.
- For the Auto segmentation (based on deep-learning algorithms) test, the "Korean Data" section mentions results are "identified by a specialist and radiologist and classified." This suggests independent review or consensus, but no specific adjudication rule is given. For the "NLST Data" section, manual segmentation was "performed by experienced radiographer and confirmed by experienced radiologist," indicating a two-step process of creation and verification rather than a conflict-resolution method.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No Multi-Reader Multi-Case (MRMC) comparative effectiveness study comparing human readers with and without AI assistance is mentioned in this document. The device is for "review and analysis" and "reporting," and integrates with a third-party CAD, but its direct impact on human reader performance through an MRMC study is not detailed.
6. Standalone Performance Study (Algorithm only without human-in-the-loop performance)
Yes, standalone performance studies were conducted for several functionalities, focusing on the algorithm's performance without direct human-in-the-loop tasks:
- Semi-automatic Nodule Segmentation: The test on artificial spheres evaluates the algorithm's measurement accuracy directly.
- Nodule Matching test with Lung Registration: Evaluates the algorithm's ability to calculate DVF and match nodules.
- Brock Score (aka. PANCAN) Risk Calculation test: Unit tests comparing calculated values from the algorithm against an Excel sheet.
- VDT Calculation test: Unit tests to confirm the calculation (a formula sketch follows this list).
- Lung RADS Calculation test: Unit tests to confirm implementation accuracy against regulations.
- Auto segmentation (based on deep-learning algorithms) test: This is a standalone evaluation of the algorithm's segmentation performance against expert classification or manual segmentation (NLST data).
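For context, the standard volume doubling time formula is VDT = Δt · ln 2 / ln(V₂ / V₁), assuming exponential growth. The unit-test-style sketch below illustrates it; the document does not show the device's actual formula or test code.

```python
import math

def volume_doubling_time(v1_mm3: float, v2_mm3: float, days: float) -> float:
    """VDT = Δt * ln(2) / ln(V2 / V1), assuming exponential growth.
    Positive for growth (V2 > V1); negative values indicate shrinkage."""
    if v1_mm3 <= 0 or v2_mm3 <= 0 or v2_mm3 == v1_mm3:
        raise ValueError("volumes must be positive and different")
    return days * math.log(2) / math.log(v2_mm3 / v1_mm3)

# Unit-test-style check: a nodule that exactly doubles in 90 days has VDT = 90.
assert abs(volume_doubling_time(100.0, 200.0, 90.0) - 90.0) < 1e-9
print(volume_doubling_time(100.0, 150.0, 120.0))  # ≈ 205 days
```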
7. Type of Ground Truth Used
- Semi-automatic Nodule Segmentation: Known physical properties of artificially generated spheres (e.g., precise radius and volume).
- Nodule Matching test with Lung Registration: Nodule positions marked on "Fixed image" and "Moving image," implying expert identification on real CT scans.
- Brock Score (aka. PANCAN) Risk Calculation test: Reference values from published literature (PanCan, BCCA, NLST data) which themselves are derived from ground truth of malignancy (pathology, clinical follow-up).
- VDT Calculation test: Established mathematical formulas for VDT.
- Lung RADS Calculation test: Lung-RADS regulations and a "Lung-RADS survey table provided by the Korean Society of Thoracic Radiology" (a simplified categorization sketch follows this list).
- Auto segmentation (based on deep-learning algorithms) test - Korean Data: Expert classification ("0 (Not good), 1 (need adjustment), and 2 (very good)") by a specialist and radiologist.
- Auto segmentation (based on deep-learning algorithms) test - NLST Data: Manual segmentation performed by an experienced radiographer and confirmed by an experienced radiologist.
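To make the Lung-RADS categorization concrete, here is a simplified sketch of size-based binning for a baseline solid nodule, using the published Lung-RADS size cut-points (6, 8, and 15 mm). It omits nodule type, growth, and the other findings the device reportedly considers, so it is an illustration, not the device's logic.

```python
def lung_rads_baseline_solid(mean_diameter_mm: float) -> str:
    """Simplified Lung-RADS category for a *baseline solid* nodule by size only.
    Real Lung-RADS also weighs nodule type, growth, and additional findings."""
    if mean_diameter_mm < 6.0:
        return "2"    # benign appearance or behavior
    if mean_diameter_mm < 8.0:
        return "3"    # probably benign
    if mean_diameter_mm < 15.0:
        return "4A"   # suspicious
    return "4B"       # very suspicious

for d in (4.0, 6.5, 9.0, 16.0):
    print(d, "->", lung_rads_baseline_solid(d))  # 2, 3, 4A, 4B
```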
8. Sample Size for the Training Set
The document does not explicitly state the sample size used for the training set(s) for the deep-learning algorithms or other components of the AVIEW LCS. It only details the test sets.
9. How the Ground Truth for the Training Set Was Established
Since the training set size is not provided, the method for establishing its ground truth is also not detailed in this document. However, given the nature of the evaluation for the test set (expert marking, classification), it is highly probable that similar methods involving expert radiologists or specialists would have been used to establish ground truth for any training data.
AVIEW Modeler (142 days)
The AVIEW Modeler is intended for use as an image review and segmentation system that operates on DICOM imaging information obtained from a medical scanner. It is also used as pre-operative software for surgical planning. 3D-printed models generated from the output file are for visualization and educational purposes only and are not for diagnostic use.
The AVIEW Modeler is a software product that can be installed on a separate PC; it displays patient medical images on screen after acquiring them from an image acquisition device. The image on the screen can be reviewed, edited, saved, and received.
- Various display functions
  - Thickness MPR, oblique MPR, shaded volume rendering, and shaded surface rendering.
  - Hybrid rendering of simultaneous volume rendering and surface rendering.
- Easy-to-use manual and semi-automatic segmentation methods (see the sketch after this list)
  - Brush, paint-bucket, sculpting, thresholding, and region growing.
  - Magic cut (based on the Random-walk algorithm).
- Morphological and Boolean operations for mask generation.
- Mesh generation and manipulation algorithms.
  - Mesh smoothing, cutting, fixing, and Boolean operations.
- Exports 3D-printable models in open formats, such as STL.
- DICOM 3.0 compliant (C-STORE, C-FIND).
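As an illustration of this kind of pipeline, the sketch below segments a voxel volume by seeded region growing within an intensity range, extracts a surface with marching cubes, and writes an STL file. It assumes the `scikit-image` and `numpy-stl` packages and is a generic reconstruction, not AVIEW Modeler's implementation (Magic cut, for instance, is based on a Random-walk formulation not shown here).

```python
from collections import deque

import numpy as np
from skimage import measure  # assumed dependency: scikit-image
from stl import mesh         # assumed dependency: numpy-stl

def region_grow(volume: np.ndarray, seed: tuple, lo: float, hi: float) -> np.ndarray:
    """Flood-fill all 6-connected voxels whose intensity lies in [lo, hi]."""
    out = np.zeros(volume.shape, dtype=bool)
    queue = deque([seed])
    while queue:
        z, y, x = queue.popleft()
        if out[z, y, x] or not (lo <= volume[z, y, x] <= hi):
            continue
        out[z, y, x] = True
        for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2] and not out[nz, ny, nx]):
                queue.append((nz, ny, nx))
    return out

def mask_to_stl(mask: np.ndarray, path: str) -> None:
    """Extract a surface from a binary mask (marching cubes) and save as STL."""
    verts, faces, _normals, _values = measure.marching_cubes(
        mask.astype(np.uint8), level=0.5)
    solid = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
    for i, face in enumerate(faces):
        solid.vectors[i] = verts[face]  # three (x, y, z) vertices per triangle
    solid.save(path)

# Example on a synthetic volume: a bright ball inside low-amplitude noise.
rng = np.random.default_rng(0)
vol = rng.normal(0.0, 0.1, (40, 40, 40))
zz, yy, xx = np.mgrid[:40, :40, :40]
vol[(zz - 20)**2 + (yy - 20)**2 + (xx - 20)**2 < 100] = 1.0
mask = region_grow(vol, seed=(20, 20, 20), lo=0.5, hi=2.0)
mask_to_stl(mask, "ball.stl")
```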
The provided text describes the 510(k) Summary for AVIEW Modeler, focusing on its substantial equivalence to predicate devices, rather than a detailed performance study directly addressing specific acceptance criteria. The document emphasizes software verification and validation activities.
Therefore, I cannot fully complete all sections of your request concerning acceptance criteria and device performance based solely on the provided text. However, I can extract information related to software testing and general conclusions.
Here's an attempt to answer your questions based on the available information:
1. A table of acceptance criteria and the reported device performance
The document does not provide a quantitative table of acceptance criteria with corresponding performance metrics like accuracy, sensitivity, or specificity for the segmentation features. Instead, it discusses the successful completion of various software tests.
| Acceptance Criteria (Implied) | Reported Device Performance |
|---|---|
| Functional adequacy | "Passed all of the tests based on pre-determined Pass/Fail criteria." |
| Performance adequacy | Performance tests were conducted "according to the performance evaluation standard and method that has been determined with prior consultation between software development team and testing team" to check non-functional requirements. |
| Reliability | System tests concluded with no 'Major' or 'Moderate' defects found. |
| Compatibility | STL data created by AVIEW Modeler was "imported into Stratasys printing Software, Object Studio to validate the STL before 3d-printing with Objet260 Connex3" (implying successful validation for 3D printing). |
| Safety and effectiveness | "The new device does not introduce a fundamentally new scientific technology, and the nonclinical tests demonstrate that the device is safe and effective." |
2. Sample sizes used for the test set and the data provenance
The document does not specify the sample size (number of images or patients) used for any of the tests (Unit, System, Performance, Compatibility). It also does not explicitly state the country of origin of the data or whether the data was retrospective or prospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
The document does not provide any information about the number or qualifications of experts used to establish ground truth for a test set. The focus is on internal software validation and comparison to a predicate device.
4. Adjudication method for the test set
The document does not mention any adjudication method for a test set, as it does not describe a clinical performance study involving human readers.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and the effect size of how much human readers improve with AI vs without AI assistance
No, the provided text does not describe an MRMC comparative effectiveness study involving human readers with or without AI assistance. The study described is a software verification and validation, concluding substantial equivalence to a predicate device.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done
The document describes various software tests (Unit, System, Performance, Compatibility) which could be considered forms of standalone testing for the algorithm's functionality and performance. However, it does not present quantitative standalone performance metrics typical of an algorithm-only study (e.g., precision, recall, Dice score for segmentation). It focuses on internal software quality and compatibility.
7. The type of ground truth used
The type of "ground truth" used is not explicitly defined in terms of clinical outcomes or pathology. For the software validation, the "ground truth" would likely refer to pre-defined correct outputs or expected behavior of the software components, established by the software development and test teams. For example, for segmentation, it would be the expected segmented regions based on the algorithm's design and previous validation efforts (likely through comparison to expert manual segmentations or another validated method, though not detailed here).
8. The sample size for the training set
The document does not mention a training set or its sample size. This is a 510(k) summary for a medical image processing software (AVIEW Modeler), and while it mentions a "Magic cut (based on Randomwalk algorithm)," it does not describe an AI model that underwent a separate training phase with a specific dataset, nor does it classify the device as having "machine learning" capabilities in the context of FDA regulation. The focus is on traditional software validation.
9. How the ground truth for the training set was established
As no training set is mentioned (see point 8), there is no information on how its ground truth would have been established.