Search Results
Found 11 results
510(k) Data Aggregation
(110 days)
Coreline Soft Co., Ltd.
AVIEW provides CT values for pulmonary tissue from CT thoracic and cardiac datasets. The software can be used to support the physician by providing quantitative analysis of CT images through image segmentation of sub-structures in the lung, lobe, and airways, fissure completeness, cardiac structures, density evaluation, and reporting tools. AVIEW is also used to store, transfer, query, and display CT datasets on-premises and in a cloud environment, allowing users to connect from various environments such as mobile devices and Chrome browsers. It converts a sharp kernel to a soft kernel for quantitative analysis when segmenting low-attenuation areas of the lung, and characterizes lung nodules in a single study or over the time course of several thoracic studies. Characterizations include nodule type, nodule location, and measurements such as size (major axis), estimated effective diameter derived from the volume, the volume of the nodule, Mean HU (the average value of the CT pixels inside the nodule, in HU), Minimum HU, Maximum HU, mass (calculated from the CT pixel values), volumetric measures (Solid major: length of the longest diameter measured in 3D for the solid portion of the nodule; Solid 2nd Major: the longest diameter of the solid part, measured in sections perpendicular to the major axis of the solid portion of the nodule), VDT (volume doubling time), and Lung-RADS (a classification proposed to aid with findings). The system performs the measurements automatically, allowing lung nodules and measurements to be displayed, and integrates with the FDA-cleared AVIEW Lung Nodule CAD (computer-aided detection) (K221592). It also provides the Agatston score, volume score, and mass score for the whole heart and for each artery by segmenting the four main arteries (right coronary artery, left main coronary, left anterior descending, and left circumflex artery). Based on the calcium score, it provides CAC risk stratified by age and gender. The device is indicated for adult patients only.
AVIEW is a software product that can be installed on a PC. It displays images retrieved from various storage devices using DICOM 3.0, the digital imaging and communication standard in medicine, and offers functions such as reading, manipulating, analyzing, post-processing, saving, and sending images with software tools. It is intended for quantitative analysis of CT scans. It provides features such as segmentation of the lung, fissure completeness, semi-automatic nodule management, maximal-plane and volumetric measures, and automatic nodule detection through integration with third-party CAD. It also provides the Brock model, which calculates a malignancy score from numerical or Boolean inputs. Follow-up support includes automated nodule matching and automatic Lung-RADS categorization, a quality assurance tool designed to standardize lung cancer screening and management recommendations based on nodule type, size, size change, and other reported findings. It also provides a calcium score by automatically analyzing the coronary arteries.
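The volume doubling time (VDT) reported above is a standard exponential-growth quantity: given two nodule volumes measured an interval apart, VDT = t · ln(2) / ln(V2/V1). A minimal illustrative sketch (function name and units are assumptions, not the vendor's API):

```python
import math

def volume_doubling_time(v1_mm3: float, v2_mm3: float, days_between: float) -> float:
    """Volume doubling time in days, assuming exponential growth:
    VDT = t * ln(2) / ln(V2 / V1), where t is the interval between scans."""
    if v2_mm3 <= v1_mm3:
        raise ValueError("Volume did not grow; VDT is undefined or negative.")
    return days_between * math.log(2) / math.log(v2_mm3 / v1_mm3)

# A nodule that doubles from 100 to 200 mm^3 in 90 days has a VDT of 90 days.
print(round(volume_doubling_time(100.0, 200.0, 90.0), 1))  # 90.0
```

Shorter VDT indicates faster growth, which is why it feeds into follow-up risk assessment.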
The provided text is a 510(k) premarket notification letter and summary for a medical device called "AVIEW." This document primarily asserts substantial equivalence to a predicate device and notes general software changes rather than providing detailed acceptance criteria and study results for specific performance metrics that would typically be found in performance study reports.
Specifically, the document states: "This Medical device is not new; therefore, a clinical study was not considered necessary prior to release. Additionally, there was no clinical testing required to support the medical device as the indications for use is equivalent to the predicate device. The substantial equivalence of the device is supported by the non-clinical testing." This means that the submission does not include information about a standalone or MRMC study designed to prove the device meets specific performance acceptance criteria for its analytical functions.
Therefore, I cannot provide the requested information from the given text as the detailed performance study data is not present. The document focuses on regulatory equivalence based on technological characteristics and intended use being similar to a predicate device, rather than providing new performance study data.
(77 days)
Coreline Soft Co., Ltd.
AVIEW CAC provides quantitative analysis of calcified plaques in the coronary arteries using non-contrast, non-gated chest CT scans. It enables calculation of the Agatston score for coronary artery calcification by segmenting and evaluating the right and left coronary arteries. It also provides risk stratification based on calcium score, gender, and age, offering percentile-based risk categories per established guidelines. Designed for healthcare professionals, including radiologists and cardiologists, AVIEW CAC supports storing, transferring, querying, and displaying CT datasets on-premises, facilitating access through mobile devices and Chrome browsers. AVIEW CAC analyzes existing non-contrast, non-gated chest CT studies that include the heart of adult patients above the age of 40. The device's use should be limited to CT scans acquired on General Electric (GE) or GE-subsidiary (e.g., GE Healthcare) equipment; use with CT scans from other manufacturers has not been validated and is not recommended.
AVIEW CAC is a software product that can be installed on a PC. It displays images retrieved from various storage devices using DICOM 3.0, the digital imaging and communication standard in medicine, and offers functions such as reading, manipulating, analyzing, post-processing, saving, and sending images with software tools. It is intended for quantitative analysis of CT scans. It also provides a calcium score by automatically analyzing the coronary arteries from the segmented arteries.
The provided text indicates that the device, AVIEW CAC, calculates the Agatston score for coronary artery calcification from non-contrast/non-gated Chest CT scans. It segments and evaluates the right and left coronary arteries and provides risk stratification based on calcium score, gender, and age, using percentile-based risk categories by established guidelines. The device is for healthcare professionals (radiologists and cardiologists) and analyzes existing CT studies from adult patients over 40 years old, acquired on GE equipment.
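The Agatston score referenced above follows a published convention: each calcified lesion's area (in mm², thresholded at 130 HU) is multiplied by a density factor determined by the lesion's peak attenuation, and the weighted areas are summed across lesions and slices. A minimal sketch of the weighting, assuming lesion extraction from the CT volume has already happened elsewhere (function names are illustrative):

```python
def density_weight(peak_hu: float) -> int:
    """Agatston density factor from a lesion's peak attenuation (HU)."""
    if peak_hu < 130:
        return 0   # below the calcium threshold, not scored
    if peak_hu < 200:
        return 1
    if peak_hu < 300:
        return 2
    if peak_hu < 400:
        return 3
    return 4

def agatston_score(lesions: list) -> float:
    """Sum of area_mm2 * density_weight over (area_mm2, peak_hu) lesions."""
    return sum(area * density_weight(peak) for area, peak in lesions)

# Two lesions: 10 mm^2 peaking at 250 HU (weight 2) and 5 mm^2 at 450 HU (weight 4).
print(agatston_score([(10.0, 250.0), (5.0, 450.0)]))  # 10*2 + 5*4 = 40.0
```

Risk stratification then maps the total score and patient demographics to percentile-based categories per guidelines.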
The document states that a clinical study was not considered necessary and that non-clinical testing supports the substantial equivalence of the device to its predicate. However, it does not provide specific acceptance criteria or an explicit study description with performance metrics for the AVIEW CAC device. It states that the device is substantially equivalent to a predicate device (K233211, also named AVIEW CAC) and that the substantial equivalence is supported by non-clinical testing.
Therefore, many of the requested details about acceptance criteria, specific performance metrics, sample sizes, expert involvement, and ground truth establishment are not present in the provided text.
Based on the available information:
1. A table of acceptance criteria and the reported device performance:
The document does not explicitly state acceptance criteria or report specific device performance metrics in a tabular format. It generally states that "the results of the software verification and validation tests concluded that the proposed device is substantially equivalent" and "the nonclinical tests demonstrate that the device is safe and effective."
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective):
This information is not provided in the document.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience):
This information is not provided in the document.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
This information is not provided in the document.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance:
A MRMC comparative effectiveness study is not mentioned in the document. The document explicitly states: "This Medical device is not new; therefore, a clinical study was not considered necessary prior to release. Additionally, there was no clinical testing required to support the indications for use is equivalent to the predicate device. The substantial equivalence of the device is supported by the non-clinical testing."
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:
The document implies that the "nonclinical tests" evaluated the device's performance, which would typically involve standalone algorithm performance. However, specific details about such a study or its results are not provided. The device's function is centered on automatic analysis (calculation of Agatston score, segmenting and evaluating arteries).
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
This information is not provided in the document.
8. The sample size for the training set:
This information is not provided in the document.
9. How the ground truth for the training set was established:
This information is not provided in the document.
(183 days)
Coreline Soft Co., Ltd.
AVIEW CAC provides quantitative analysis of calcified plaques in the coronary arteries using non-contrast, non-gated chest CT scans. It enables calculation of the Agatston score for coronary artery calcification by segmenting and evaluating the right and left coronary arteries. It also provides risk stratification based on calcium score, gender, and age, offering percentile-based risk categories per established guidelines. Designed for healthcare professionals, including radiologists and cardiologists, AVIEW CAC supports storing, querying, and displaying CT datasets on-premises, facilitating access through mobile devices and Chrome browsers. AVIEW CAC analyzes existing non-contrast, non-gated chest CT studies that include the heart of adult patients above the age of 40. The device's use should be limited to CT scans acquired on General Electric (GE) or GE-subsidiary (e.g., GE Healthcare) equipment; use with CT scans from other manufacturers has not been validated and is not recommended.
AVIEW CAC is a software product that can be installed on a PC. It displays images retrieved from various storage devices using DICOM 3.0, the digital imaging and communication standard in medicine, and offers functions such as reading, manipulating, analyzing, post-processing, saving, and sending images with software tools. It is intended for quantitative analysis of CT scans. It also provides a calcium score by automatically analyzing the coronary arteries from the segmented arteries.
The provided text describes the acceptance criteria and the study conducted for the AVIEW CAC device.
Here's the breakdown of the information requested:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criterion for the quantitative analysis of calcified plaques is primarily based on the Intraclass Correlation Coefficient (ICC) of the Agatston score against a ground truth and against a predicate device.

| Acceptance Criteria | Reported Device Performance (AVIEW CAC vs. Ground Truth) | Reported Device Performance (AVIEW CAC vs. Predicate Device) |
| --- | --- | --- |
| ICC > 0.8 (implied target for strong agreement) | Agatston Score ICC (95% CI): Total 0.896 (0.857, 0.925); LCA 0.927 (0.899, 0.947); RCA 0.840 (0.778, 0.884) | Agatston Score ICC (95% CI): Total 0.939 (0.916, 0.956); LCA 0.955 (0.938, 0.968); RCA 0.887 (0.844, 0.918) |

The document does not explicitly state a numerical acceptance criterion; the implied target is an ICC above 0.8, which usually signifies strong agreement. All reported ICC values exceed this implied threshold.
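The ICC values above measure absolute agreement between paired Agatston scores. A two-way random-effects, absolute-agreement, single-measurement ICC(2,1) can be computed from the classic ANOVA decomposition; this is an illustrative sketch, not the submission's actual analysis:

```python
import numpy as np

def icc_2_1(x: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.
    x has shape (n_subjects, k_raters)."""
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)
    col_means = x.mean(axis=0)
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between-subject mean square
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between-rater mean square
    sse = np.sum((x - grand) ** 2) - (n - 1) * msr - (k - 1) * msc
    mse = sse / ((n - 1) * (k - 1))                        # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Illustrative paired scores: column 0 = device, column 1 = reference.
scores = np.array([[1.0, 1.1], [2.0, 1.9], [3.0, 3.2], [4.0, 3.8], [5.0, 5.1]])
print(round(icc_2_1(scores), 3))  # ~0.996 -- strong agreement
```

Values above 0.8 are conventionally read as strong agreement, which is the implied threshold in the table.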
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size:
- 150 CSCT (gated) cases
- 150 Chest CT (non-gated) cases
- Additionally, 280 datasets collected from multiple institutions were used for a separate "MI functionality test report" which also evaluated correlation.
- Data Provenance: The document does not explicitly state the country of origin. The test cases were derived from "multiple institutions". It is implied to be retrospective as the device analyzes "existing" non-contrast/non-gated chest CT studies.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
The document does not specify the number of experts used or their detailed qualifications (e.g., "radiologist with 10 years of experience") for establishing the ground truth.
4. Adjudication Method for the Test Set
The document does not explicitly describe an adjudication method (such as 2+1, 3+1) for establishing the ground truth. It simply refers to "ground truth" without detailing its consensus process.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and If So, What Was the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance
A Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done. The study focuses on the standalone performance of the algorithm against a defined ground truth and comparison against a predicate device, not on human reader performance with or without AI assistance.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
Yes, a standalone performance study was done. The performance data section explicitly states, "we evaluated the agreement in coronary calcium scoring between the subject device and the ground truth" and that "the correlation coefficient between the AVIEW CAC automatic analysis results of the chest CT based on the heart CT and the Agatston scores was over 90%". This indicates the algorithm's performance without human intervention.
7. The Type of Ground Truth Used
The ground truth used was Agatston scores for coronary artery calcification. The document does not specify if this ground truth was established by expert consensus of human readers, pathology, or outcomes data. However, the comparison is made to "Ground Truth" for Agatston Score measurements, which implies a highly reliable, perhaps manually derived or reference Agatston score.
8. The Sample Size for the Training Set
The document does not provide the sample size for the training set. It only mentions the test set sizes.
9. How the Ground Truth for the Training Set Was Established
The document does not describe how the ground truth for the training set was established. It only refers to deep learning for automatic segmentation but does not detail the process for creating the ground truth data used to train the deep learning model.
(267 days)
Coreline Soft Co.,Ltd.
AVIEW Lung Nodule CAD is Computer-Aided Detection (CAD) software designed to assist radiologists in the detection of pulmonary nodules (3-20 mm in diameter) during the review of chest CT examinations in asymptomatic populations. AVIEW Lung Nodule CAD provides adjunctive information to alert radiologists to regions of interest with suspected lung nodules that might otherwise be overlooked. AVIEW Lung Nodule CAD may be used as a second reader after the radiologist has completed their initial read. The algorithm has been validated using non-contrast CT images, the majority of which were acquired on Siemens SOMATOM CT series scanners; therefore, limiting device use to the Siemens SOMATOM CT series is recommended.
AVIEW Lung Nodule CAD is a software product that detects nodules in the lung. The lung nodule detection model was trained with a deep convolutional neural network (CNN)-based algorithm on chest CT images and automatically detects lung nodules of 3 to 20 mm. By complying with DICOM standards, the product can be linked with a Picture Archiving and Communication System (PACS) and provides a separate user interface for analyzing, identifying, storing, and transmitting quantified values related to lung nodules. The CAD results are displayed after the user's first read, and the user can select or de-select the marks provided by the CAD. The device's performance was validated with Siemens SOMATOM series scanners. The device is intended to be used with a cleared AVIEW platform.
Here's a breakdown of the acceptance criteria and study details for the AVIEW Lung Nodule CAD, as derived from the provided document:
Acceptance Criteria and Reported Device Performance
| Criteria (Standalone Performance) | Acceptance Criteria | Reported Device Performance |
| --- | --- | --- |
| Sensitivity (patient level) | > 0.8 | 0.907 (0.846-0.95) |
| Sensitivity (nodule level) | > 0.8 | Not explicitly stated separately from the patient level; overall sensitivity is 0.907. |
| Specificity | > 0.6 | 0.704 (0.622-0.778) |
| ROC AUC | > 0.8 | 0.961 (0.939-0.983) |
| Sensitivity at FP/scan | > 0.8 | 0.889 (0.849-0.93) at FP/scan = 0.504 |
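The sensitivity, specificity, and ROC AUC figures above are standard detection metrics. A minimal self-contained sketch of how each is computed (the data below are illustrative, not from the submission):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

def roc_auc(y_true, scores):
    """AUC via the Mann-Whitney statistic: the probability that a random
    positive case scores above a random negative case (ties count as 1/2)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y_true = [1, 1, 1, 0, 0]            # 1 = case with nodule, 0 = negative control
y_pred = [1, 1, 0, 0, 0]            # binary CAD calls
scores = [0.9, 0.8, 0.4, 0.5, 0.2]  # CAD confidence scores
print(sensitivity_specificity(y_true, y_pred))  # (0.666..., 1.0)
print(roc_auc(y_true, scores))                  # 5/6 = 0.833...
```

FP/scan, the remaining metric, is simply the count of false-positive marks divided by the number of scans.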
Study Details
1. Acceptance Criteria and Reported Device Performance (as above)
2. Sample size used for the test set and data provenance:
- Test Set Size: 282 cases (140 cases with nodule data and 142 cases without nodule data) for the standalone study.
- Data Provenance:
* Geographically distinct US clinical sites.
* All datasets were built with images from the U.S.
* Anonymized medical data was purchased.
* Included both incidental and screening populations.
* For the Multi-Reader Multi-Case (MRMC) study, the dataset consisted of 151 Chest CTs (103 negative controls and 48 cases with one or more lung nodules).
3. Number of experts used to establish the ground truth for the test set and their qualifications:
- Number of Experts: Three (for both the MRMC study and likely for the standalone ground truth, given the consistency in expert involvement).
- Qualifications: Dedicated chest radiologists with at least ten years of experience.
4. Adjudication method for the test set:
- Not explicitly stated for the "standalone study" ground truth establishment.
- For the MRMC study, the three dedicated chest radiologists "determined the ground truth" in a blinded fashion. This implies a consensus or majority vote, but the exact method (e.g., 2+1, 3+1) is not specified. It does state "All lung nodules were segmented in 3D" which implies detailed individual expert review before ground truth finalization.
5. Multi-Reader Multi-Case (MRMC) comparative effectiveness study:
- Yes, an MRMC study was performed.
- Effect size of human readers improving with AI vs. without AI assistance:
* AUC: The point estimate difference was 0.19 (from 0.73 unassisted to 0.92 aided).
* Sensitivity: The point estimate difference was 0.23 (from 0.68 unassisted to 0.91 aided).
* FP/scan: The point estimate difference was 0.24 (from 0.48 unassisted to 0.28 aided), indicating a reduction in false positives.
- Reading Time: "Reading time was decreased when AVIEW Lung Nodule CAD aided radiologists."
6. Standalone (algorithm only without human-in-the-loop performance) study:
- Yes, a standalone study was performed.
- The acceptance criteria and reported performance for this study are detailed in the table above.
7. Type of ground truth used:
- Expert consensus by three dedicated chest radiologists with at least ten years of experience. For the standalone study, it is directly compared against "ground truth," which is established by these experts. For the MRMC study, the experts "determined the ground truth." The phrase "All lung nodules were segmented in 3D" suggests a thorough and detailed ground truth establishment.
8. Sample size for the training set:
- Not explicitly stated in the provided text. The document mentions the lung nodule detection model was "trained by Deep Convolution Network (CNN) based algorithm from the chest CT image," but does not provide details on the training set size.
9. How the ground truth for the training set was established:
- Not explicitly stated in the provided text.
(365 days)
Coreline Soft Co.,Ltd.
AVIEW provides CT values for pulmonary tissue from CT thoracic and cardiac datasets. The software can be used to support the physician by providing quantitative analysis of CT images through image segmentation of sub-structures in the lung, lobe, and airways, fissure completeness, cardiac structures, density evaluation, and reporting tools. AVIEW is also used to store, transfer, query, and display CT datasets on-premises and in a cloud environment, allowing users to connect from various environments such as mobile devices and Chrome browsers. It converts a sharp kernel to a soft kernel for quantitative analysis when segmenting low-attenuation areas of the lung, and characterizes lung nodules in a single study or over the time course of several thoracic studies. Characterizations include type, nodule location, and measurements such as size (major axis, minor axis), estimated effective diameter derived from the volume, the volume of the nodule, Mean HU (the average value of the CT pixels inside the nodule, in HU), Minimum HU, Maximum HU, mass (calculated from the CT pixel values), volumetric measures (Solid major: length of the longest diameter measured in 3D for the solid portion of the nodule; Solid 2nd Major: the longest diameter of the solid part, measured in sections perpendicular to the major axis of the solid portion of the nodule), VDT (volume doubling time), and Lung-RADS (a classification proposed to aid with findings). The system performs the measurements automatically, allowing lung nodules and measurements to be displayed, and integrates with the FDA-cleared Mevis CAD (computer-aided detection) (K043617). It also provides the Agatston score and mass score for the whole heart and for each artery by segmenting the four main arteries (right coronary artery, left main coronary, left anterior descending, and left circumflex artery). Based on the calcium score, it provides CAC risk stratified by age and gender. The device is indicated for adult patients only.
AVIEW is a software product that can be installed on a PC. It displays images retrieved from various storage devices using DICOM 3.0, the digital imaging and communication standard in medicine, and offers functions such as reading, manipulating, analyzing, post-processing, saving, and sending images with software tools. It is intended for quantitative analysis of CT scans. It provides features such as segmentation of the lung, lobe, and airway, fissure completeness, semi-automatic nodule management, maximal-plane and volumetric measures, and automatic nodule detection through integration with third-party CAD. It also provides the Brock model, which calculates a malignancy score from numerical or Boolean inputs. Follow-up support includes automated nodule matching and automatic Lung-RADS categorization, a quality assurance tool designed to standardize lung cancer screening CT reporting and management recommendations based on nodule type, size, size change, and other reported findings. It also provides a calcium score by automatically analyzing the coronary arteries from the segmented arteries.
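The Brock model mentioned above is a published logistic-regression malignancy calculator built on nodule and patient covariates. The sketch below shows only the general shape of such a model; the coefficient values and covariate names are placeholders for illustration, not the published Brock values:

```python
import math

def logistic_malignancy_score(features: dict, coeffs: dict, intercept: float) -> float:
    """Logistic-regression malignancy probability from numeric/Boolean inputs.
    The Brock (PanCan) model has this general shape; the published coefficients
    and covariate transforms are not reproduced here -- pass them in explicitly."""
    z = intercept + sum(coeffs[name] * features[name] for name in coeffs)
    return 1.0 / (1.0 + math.exp(-z))

# Placeholder coefficients and features, for illustration only.
demo_coeffs = {"nodule_diameter_mm": 0.1, "spiculation": 0.8, "upper_lobe": 0.6}
demo_features = {"nodule_diameter_mm": 8.0, "spiculation": 1.0, "upper_lobe": 1.0}
p = logistic_malignancy_score(demo_features, demo_coeffs, intercept=-4.0)
print(0.0 < p < 1.0)  # True: the output is always a probability
```

The sigmoid guarantees the score stays in (0, 1), which is what lets it be read as a malignancy probability.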
The provided document does not contain specific acceptance criteria and detailed study results for the AVIEW device that would allow for the construction of the requested table and comprehensive answer. The document primarily focuses on demonstrating substantial equivalence to a predicate device and briefly mentions software verification and validation activities.
However, I can extract the information that is present and highlight what is missing.
Here's an analysis based on the provided text, indicating where information is present and where it is absent:
Acceptance Criteria and Device Performance (Partial)
The document mentions "pre-determined Pass/Fail criteria" for software verification and validation, but it does not explicitly list these criteria or the numerical results for them. It broadly states that the device "passed all of the tests."
Table of Acceptance Criteria and Reported Device Performance
| Feature/Metric | Acceptance Criterion | Reported Device Performance |
| --- | --- | --- |
| General Software Performance | Passed all tests based on pre-determined Pass/Fail criteria | Passed all tests |
| Unit Test | Successful functional, performance, and algorithm analysis for image processing algorithm components | Tests conducted using the Google C++ Unit Test Framework |
| System Test (Defects) | No 'Major' or 'Moderate' defects found | No 'Major' or 'Moderate' defects found (implies 'Passed' for this criterion) |
| Kernel Conversion (LAA result reliability) | The LAA result on a kernel-converted sharp image should have higher reliability with the soft kernel than LAA results on a sharp kernel image without Kernel Conversion applied. | Test conducted on 96 total images (53 US, 43 Korean). The document confirms the test was conducted but gives no specific metric for how much higher the reliability was. |
| Fissure Completeness | Compared to radiologists' assessment | Evaluated using Bland-Altman plots; Kappa and ICC reported. (Specific numerical results are not provided.) |
Detailed Breakdown of Study Information:
1. A table of acceptance criteria and the reported device performance:
- Acceptance Criteria: Not explicitly stated with numerical targets. The document mentions "pre-determined Pass/Fail criteria" for software verification and validation and "Success standard of System Test is not finding 'Major', 'Moderate' defect." For kernel conversion, the criterion is stated qualitatively (higher reliability). For fissure completeness, it's about comparison to radiologists.
- Reported Device Performance:
- General: "passed all of the tests."
- System Test: "Success standard... is not finding 'Major', 'Moderate' defect."
  - Kernel Conversion: "The LAA result on a kernel-converted sharp image should have higher reliability with the soft kernel than LAA results on a sharp kernel image without Kernel Conversion applied." (This is stated as an objective rather than a quantitative result.)
- Fissure Completeness: "The performance was evaluated using Bland Altman plots to assess the fissure completeness performance compared to radiologists. Kappa and ICC were also reported." (Specific numerical values for Kappa/ICC are not provided).
2. Sample sizes used for the test set and the data provenance:
- Kernel Conversion: 96 total images (53 U.S. population and 43 Korean).
- Fissure Completeness: 129 subjects from TCIA (The Cancer Imaging Archive) LIDC database.
- Data Provenance: U.S. and Korean populations for Kernel Conversion, TCIA LIDC database for Fissure Completeness. The document does not specify if these were retrospective or prospective studies. Given they are from archives/databases, they are most likely retrospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Not specified in the provided text. For Fissure Completeness, it states "compared to radiologists," but the number and qualifications of these radiologists are not detailed.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- Not specified in the provided text.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance:
- Not specified. The document mentions "compared to radiologists" for fissure completeness, but it does not detail an MRMC study comparing human readers with and without AI assistance for measuring an effect size of improvement.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:
- Yes, the performance tests described (e.g., Nodule Matching, LAA Comparative Experiment, Semi-automatic Nodule Segmentation, Fissure Completeness, CAC Performance Evaluation) appear to be standalone evaluations of the algorithm's output against a reference (ground truth or expert assessment), without requiring human interaction during the measurement process by the device itself.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- For Fissure Completeness, the ground truth appears to be expert assessment/consensus from radiologists implied by "compared to radiologists."
- For other performance tests like "Nodule Matching," "LAA Comparative Experiment," "Semi-automatic Nodule Segmentation," "Brock Model Calculation," etc., the specific type of ground truth is not explicitly stated. It's likely derived from expert annotations or established clinical metrics but is not detailed.
8. The sample size for the training set:
- Not specified in the provided text. The document refers to "pre-trained deep learning models" for the predicate device, but gives no information on the training data for the current device.
9. How the ground truth for the training set was established:
- Not specified in the provided text.
Summary of Missing Information:
The document serves as an FDA 510(k) summary, aiming to demonstrate substantial equivalence to a predicate device rather than providing a detailed clinical study report. Therefore, specific quantitative performance metrics, detailed study designs (e.g., number and qualifications of readers, adjudication methods for ground truth, specifics of MRMC studies), and training set details are not included.
(269 days)
Coreline Soft Co.,Ltd
AVIEW RT ACS provides deep-learning-based auto-segmented organs and generates contours in RT-DICOM format from CT images, which can be used as initial contours for clinicians to approve and edit in the radiation oncology department for treatment planning, or by other professions where a segmented mask of organs is needed.
- a. Deep learning contouring from four body parts (Head & Neck, Breast, Abdomen, and Pelvis)
- b. Generates RT-DICOM structure of contoured organs
- c. Rule-based auto pre-processing
- d. Receive/Send/Export medical images and DICOM data
Note that the Breast organs (both right and left Lung, and Heart) were validated with non-contrast CT. Head & Neck (both right and left Eyes, Brain, and Mandible), Abdomen (Liver), and Pelvis (both right and left Femur, and Bladder) were validated with contrast CT only.
AVIEW RT ACS provides deep-learning-based auto-segmented organs and generates contours in RT-DICOM format from CT images. This software can be used for treatment planning by the radiation oncology department, or by other professions where a segmented mask of organs is needed.
- Deep learning contouring: automatically contours the organs-at-risk (OARs) from four body parts (Head & Neck, Breast, Abdomen, and Pelvis)
- Generates RT-DICOM structure of contoured organs
- Rule-based auto pre-processing
- Receive/Send/Export medical images and DICOM data
Here's an analysis of the acceptance criteria and the study proving the device meets them, based on the provided text:
Acceptance Criteria and Device Performance
The general acceptance criterion for the AVIEW RT ACS device appears to be comparable performance to a predicate device (MIM-MRT Dosimetry) in terms of segmentation accuracy, as measured by Dice Similarity Coefficient (DSC) and 95% Hausdorff Distance (HD). While explicit numerical acceptance thresholds are not stated in the provided text (e.g., "DSC must be greater than X"), the study is structured as a comparative effectiveness study. The expectation is that the AVIEW RT ACS performance should be at least equivalent to, if not better than, the predicate device.
The study's tables (Tables 1-30) consistently show the AVIEW RT ACS achieving higher average DSC values (closer to 1, indicating better overlap) and generally lower average 95% HD values (closer to 0, indicating less maximum distance between contours), across various organs, demographic groups, and scanner parameters, compared to the predicate device.
Table of Acceptance Criteria and Reported Device Performance:
Metric / Organ (Examples) | Acceptance Criterion (Implicit) | AVIEW RT ACS Performance (Mean ± SD, [95% CI]) | Predicate Device Performance (Mean ± SD, [95% CI]) | Difference (AVIEW - Predicate) | Meets Criteria? |
---|---|---|---|---|---|
Overall DSC | Should be comparable to or better than predicate device. | (See tables below for individual organ results) | (See tables below for individual organ results) | Mostly positive | Yes |
Overall 95% HD (mm) | Should be comparable to or better than predicate device (i.e., lower HD). | (See tables below for individual organ results) | (See tables below for individual organ results) | Mostly negative (indicating better AVIEW) | Yes |
Brain DSC | Comparable to or better than predicate. | 0.97 ± 0.01 (0.97, 0.98) | 0.96 ± 0.01 (0.96, 0.96) | 0.01 | Yes |
Brain 95% HD (mm) | Comparable to or better than predicate (lower HD). | 6.92 ± 20.46 (-1.1, 14.94) | 4.61 ± 2.17 (3.76, 5.46) | 2.31 | Mixed (Higher HD for AVIEW, but wide CI) |
Heart DSC | Comparable to or better than predicate. | 0.94 ± 0.03 (0.93, 0.95) | 0.78 ± 1.20 (0.70, 8.56) | 0.16 | Yes (Significantly better) |
Heart 95% HD (mm) | Comparable to or better than predicate (lower HD). | 6.19 ± 4.21 (4.73, 7.65) | 18.90 ± 5.09 (17.14, 20.67) | -12.71 | Yes (Significantly better) |
Liver DSC | Comparable to or better than predicate. | 0.96 ± 0.01 (0.96, 0.97) | 0.87 ± 0.06 (0.85, 0.90) | 0.09 | Yes |
Liver 95% HD (mm) | Comparable to or better than predicate (lower HD). | 7.17 ± 12.07 (2.54, 11.81) | 24.62 ± 15.16 (18.79, 30.44) | -17.44 | Yes (Significantly better) |
Bladder DSC | Comparable to or better than predicate. | 0.88 ± 0.14 (0.84, 0.93) | 0.52 ± 0.26 (0.44, 0.60) | 0.36 | Yes (Significantly better) |
Bladder 95% HD (mm) | Comparable to or better than predicate (lower HD). | 10.55 ± 20.56 (3.74, 17.36) | 30.48 ± 22.76 (22.94, 38.02) | -19.93 | Yes (Significantly better) |
Note: The tables throughout the document provide specific performance metrics for individual organs and sub-groups (race, vendors, slice thickness, kernel types). The general conclusion from these tables is that the AVIEW RT ACS consistently performs as well as or better than the predicate device across most metrics and categories.
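The two metrics used throughout the tables above can be computed directly from binary segmentation masks. The sketch below (using `numpy`/`scipy`) is illustrative only, not the evaluation code used in the submission; the function names, mask layout, and voxel `spacing` parameter are assumptions.

```python
# Illustrative implementations of the Dice Similarity Coefficient (DSC)
# and 95th-percentile Hausdorff Distance (95% HD) between two boolean
# 3-D masks. NOT the submitter's actual evaluation code.
import numpy as np
from scipy import ndimage

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """DSC = 2|A∩B| / (|A| + |B|); 1.0 means perfect overlap."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hd95(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th percentile of symmetric surface-to-surface distances, in mm."""
    def surface(m):
        # surface = mask voxels removed by one erosion step
        return m & ~ndimage.binary_erosion(m)
    sa, sb = surface(a), surface(b)
    # distance from each surface voxel of one mask to the nearest
    # surface voxel of the other (EDT of the complement of the surface)
    da = ndimage.distance_transform_edt(~sb, sampling=spacing)[sa]
    db = ndimage.distance_transform_edt(~sa, sampling=spacing)[sb]
    return float(np.percentile(np.concatenate([da, db]), 95))
```

Identical masks give DSC 1.0 and 95% HD 0.0; lower HD and higher DSC indicate closer agreement with the gold-standard contour, which is the direction of "better" used in the comparison tables.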
Study Details for Acceptance Criteria Proof:
- Sample Size Used for the Test Set: 120 cases.
- Data Provenance: The dataset included cases from both South Korea and the USA. It was constructed with various ethnicities (White, Black, Asian, Hispanic, Latino, African American, etc.) and from four major vendors (GE, Siemens, Toshiba, and Philips).
- Retrospective/Prospective: Not explicitly stated, but the mention of a data set constructed for validation suggests a retrospective collection.
- Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications:
- Number of Experts: 3 radiation oncology physicians.
- Qualifications: All were trained by "The Korean Society for Radiation Oncology," board-certified by the "Ministry of Health and Welfare," with a range of 9-21 years of experience in radiotherapy. The experts included attending assistant professors (n=2) and professors (n=1) from three institutions.
- Adjudication Method for the Test Set:
- The ground truth was established through a sequential editing process:
    - One expert manually delineated the organs.
    - The segmentation results from the first expert were then sequentially edited by the other two experts.
    - The first expert made corrections.
    - The result was then reviewed by another expert, who finalized the gold standard.
- This can be considered a form of sequential consensus or collaborative review rather than a strict 2+1 or 3+1 adjudication method.
- If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done:
- Yes, a comparative effectiveness study was done. The study directly compares the AVIEW RT ACS against a predicate device (MIM-MRT Dosimetry).
- Effect Size of Human Readers Improvement with AI vs. Without AI Assistance:
- The study does not measure the improvement of human readers with AI assistance. Instead, it evaluates the standalone performance of the AI device against the standalone performance of a predicate AI device, both compared to expert-generated ground truth. The "human readers" (the three experts) were used solely to create the ground truth, not to evaluate their performance with or without AI assistance.
- If a Standalone (i.e., algorithm-only, without human-in-the-loop performance) Study Was Done:
- Yes, a standalone performance study was done. The study compares the auto-segmentation results of the AVIEW RT ACS directly to the expert-derived "gold standard" and also compares it to the auto-segmentation of the predicate device. This is purely an algorithm-only evaluation.
- The Type of Ground Truth Used:
- Expert Consensus. The ground truth was established by three radiation oncology physicians through a sequential delineation and editing process to create a "robust gold standard."
- The Sample Size for the Training Set:
- Not specified within the provided text. The document refers only to the validation/test set.
- How the Ground Truth for the Training Set Was Established:
- Not specified within the provided text. Since the training set size and characteristics are not mentioned, neither is the method for establishing its ground truth.
(115 days)
Coreline Soft Co., Ltd.
AVIEW LCS is intended for the review, analysis, and reporting of thoracic CT images for the purpose of characterizing nodules in the lung in a single study, or over the time course of several thoracic studies. Characterizations include nodule type, location of the nodule, and measurements such as size (major axis), estimated effective diameter from the volume of the nodule, the volume of the nodule, Mean HU (the average value of the CT pixels inside the nodule, in HU), Minimum HU, Max HU, mass (mass calculated from the CT pixel values), and volumetric measures (Solid Major: length of the longest diameter measured in 3D for the solid portion of the nodule; Solid 2nd Major: the length of the longest diameter of the solid part, measured in sections perpendicular to the Major axis of the nodule), as well as VDT (volume doubling time), Lung-RADS (a classification proposed to aid with findings), CAC score, and LAA analysis. The system automatically performs the measurement, allowing lung nodules and measurements to be displayed, and also integrates with the FDA-cleared MeVis CAD (computer-aided detection) device (K043617).
AVIEW LCS is intended for use in diagnostic patient imaging, specifically the review and analysis of thoracic CT images. It provides the following features: semi-automatic nodule measurement (segmentation); maximal plane, 3D, and volumetric measures; and automatic nodule detection by integration with 3rd-party CAD. It also provides cancer risk based on the PANCAN risk model, which calculates a malignancy score from numerical or Boolean inputs. Follow-up support includes automated nodule matching and automatic categorization of the Lung-RADS score, a quality assurance tool designed to standardize lung cancer screening CT reporting and management recommendations based on nodule type, size, size change, and other reported findings.
The provided text does not contain detailed acceptance criteria for specific performance metrics of the AVIEW LCS device, nor does it describe a study proving the device meets particular acceptance criteria with quantitative results.
The document is a 510(k) premarket notification summary, which focuses on demonstrating substantial equivalence to a predicate device rather than providing a detailed performance study like a clinical trial.
However, based on the information provided, here's what can be extracted and inferred regarding performance and testing:
1. Table of Acceptance Criteria and Reported Device Performance
As specific quantitative acceptance criteria and detailed performance metrics are not explicitly stated in the provided text for AVIEW LCS, I cannot create a table of acceptance criteria and reported device performance. The document generally states that "the modified device passed all of the tests based on pre-determined Pass/Fail criteria" for software validation.
2. Sample Size Used for the Test Set and Data Provenance
The document does not specify the sample size used for any test set or the data provenance (e.g., country of origin, retrospective/prospective). The described "Unit Test" and "System Test" are internal software validation tests rather than clinical performance studies involving patient data.
3. Number of Experts Used to Establish Ground Truth and Qualifications
The document does not mention using experts to establish ground truth for a test set. This type of information would typically be found in a clinical performance study.
4. Adjudication Method for the Test Set
The document does not describe any adjudication method for a test set. This is relevant for clinical studies where multiple readers assess cases.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
The document does not indicate that a multi-reader multi-case (MRMC) comparative effectiveness study was performed. Therefore, no effect size of human readers improving with AI vs. without AI assistance is mentioned.
6. Standalone (Algorithm Only) Performance Study
The document does not explicitly state that a standalone (algorithm only without human-in-the-loop performance) study was conducted. The "Performance Test" section refers to DICOM, integration, and thin client server compatibility reports, which are software performance tests, not clinical efficacy or diagnostic accuracy studies for the algorithm itself. The device description mentions "automatic nodules detection by integration with 3rd party CAD (Mevis Visia FDA 510k Cleared)", suggesting it leverages an already cleared CAD system for detection rather than having a new, independently evaluated detection algorithm as part of this submission.
7. Type of Ground Truth Used
The document does not specify the type of ground truth used for any performance evaluation. Again, this would be characteristic of a clinical performance study.
8. Sample Size for the Training Set
The document does not provide the sample size for any training set. This is typically relevant for AI/ML-based algorithms. The mention of "deep-learning algorithms" for lung and lobe segmentation suggests a training set was used, but its size is not disclosed.
9. How the Ground Truth for the Training Set Was Established
The document does not explain how ground truth for any potential training set was established.
Summary of available information regarding testing:
The "Performance Data" section (8) of the 510(k) summary focuses on nonclinical performance testing and software verification and validation activities.
- Nonclinical Performance Testing: The document states, "This Medical device is not new; therefore, a clinical study was not considered necessary prior to release. Additionally, there was no clinical testing required to support the medical device as the indications for use is equivalent to the predicate device. The substantial equivalence of the device is supported by the nonclinical testing." This indicates the submission relies on the substantial equivalence argument and internal software testing, not new clinical performance data for efficacy.
- Software Verification and Validation:
- Unit Test: Conducted using Google C++ Unit Test Framework on major software components for functional, performance, and algorithm analysis.
- System Test: Conducted based on "integration Test Cases" and "Exploratory Test" to identify defects.
- Acceptance Criteria for System Test: "Success standard of System Test is not finding 'Major', 'Moderate' defect."
- Defect Classification:
- Major: Impacting intended use, no workaround.
- Moderate: UI/general quality, workaround available.
- Minor: Not impacting intended use, not significant.
- Performance Test Reports: DICOM Test Report, Performance Test Report, Integration Test Report, Thin Client Server Compatibility Test Report.
In conclusion, the provided 510(k) summary primarily addresses software validation and verification to demonstrate substantial equivalence, rather than a clinical performance study with specific acceptance criteria related to diagnostic accuracy, reader performance, or a detailed description of ground truth establishment.
(161 days)
Coreline Soft Co., Ltd
AVIEW provides CT values for pulmonary tissue from CT thoracic and cardiac datasets. This software could be used to support the physician quantitatively in the diagnosis and follow-up evaluation of CT lung tissue images by providing image segmentation of sub-structures in the lung, lobes, airways, and heart, together with inspiration/expiration registration, which enables quantitative information such as the air-trapping index and inspiration/expiration ratio, as well as volumetric and structural analysis, density evaluation, and reporting tools. AVIEW is also used to store, transfer, query, and display CT datasets on-premises and in a cloud environment, allowing users to connect from various environments such as mobile devices and the Chrome browser. It characterizes nodules in the lung in a single study, or over the time course of several thoracic studies. Characterizations include nodule type, location of the nodule, and measurements such as size (major axis), estimated effective diameter from the volume of the nodule, volume of the nodule, Mean HU (the average value of the CT pixels inside the nodule, in HU), Minimum HU, Max HU, mass (mass calculated from the CT pixel values), and volumetric measures (Solid Major: length of the longest diameter measured in 3D for the solid portion of the nodule; Solid 2nd Major: the longest diameter of the solid part, measured in sections perpendicular to the Major axis of the solid portion of the nodule), VDT (volume doubling time), and Lung-RADS (a classification proposed to aid with findings). The system automatically performs the measurement, allowing lung nodules and measurements to be displayed, and integrates with the FDA-cleared MeVis CAD (computer-aided detection) device (K043617). It also provides CAC analysis by segmenting the four main arteries (right coronary artery, left main coronary artery, left anterior descending artery, and left circumflex artery), then extracting coronary artery calcium to provide the Agatston score, volume score, and mass score for the whole and for each segmented artery.
Based on the score, it provides CAC risk based on age and gender.
AVIEW is a software product that can be installed on a PC. It displays images obtained from various storage devices using DICOM 3.0, the digital imaging and communication standard in medicine. It also offers functions such as reading, manipulating, analyzing, post-processing, saving, and sending images using the software tools, and is intended for use in diagnostic patient imaging for the review and analysis of CT scans. It provides features such as semi-automatic nodule management; maximal plane, 3D, and volumetric measures; and automatic nodule detection by integration with 3rd-party CAD. It also provides the Brock model, which calculates a malignancy score from numerical or Boolean inputs. Follow-up support includes automated nodule matching and automatic categorization of the Lung-RADS score, a quality assurance tool designed to standardize lung cancer screening CT reporting and management recommendations based on nodule type, size, size change, and other reported findings. It also automatically analyzes coronary artery calcification, which supports users in detecting cardiovascular disease at an early stage and reducing the medical burden.
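The Agatston scoring that the device automates follows a standard published method: threshold the CT at 130 HU, find calcified lesions within the coronary artery mask, and weight each lesion's area by its peak density. The per-slice sketch below is illustrative (not Coreline's implementation); the array layout, helper names, and 1 mm² minimum-lesion filter are assumptions, and the method was originally defined for 3 mm slices.

```python
# Illustrative per-slice Agatston score: `hu` is a 2-D array of
# Hounsfield units already restricted to one artery's region;
# `pixel_area_mm2` is the in-plane pixel area.
import numpy as np
from scipy import ndimage

def density_weight(max_hu: float) -> int:
    """Standard Agatston density factor based on a lesion's peak HU."""
    if max_hu >= 400: return 4
    if max_hu >= 300: return 3
    if max_hu >= 200: return 2
    return 1  # lesions qualify only when max HU >= 130

def agatston_slice(hu: np.ndarray, pixel_area_mm2: float,
                   min_area_mm2: float = 1.0) -> float:
    calcified = hu >= 130            # calcification threshold
    labels, n = ndimage.label(calcified)
    score = 0.0
    for i in range(1, n + 1):
        lesion = labels == i
        area = lesion.sum() * pixel_area_mm2
        if area < min_area_mm2:
            continue                 # ignore tiny specks (noise)
        score += area * density_weight(hu[lesion].max())
    return score
```

The total score is the sum over all slices and all four arteries; the volume and mass scores mentioned in the summary are derived from the same segmented calcium voxels using volume and HU-weighted mass instead of the density factor.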
The provided FDA 510(k) summary for the AVIEW 2.0 device (K200714) primarily focuses on establishing substantial equivalence to a predicate device (AVIEW K171199, among others) rather than presenting a detailed clinical study demonstrating its performance against specific acceptance criteria.
However, based on the nonclinical performance testing section and the overall description, we can infer some aspects and present the available information regarding the device's capabilities and how it was tested. It is important to note that explicit acceptance criteria and detailed clinical study results are not fully elaborated in the provided text. The document states: "This Medical device is not new; therefore, a clinical study was not considered necessary prior to release. Additionally, there was no clinical testing required to support the medical device as the indications for use is equivalent to the predicate device."
Here's a breakdown of the requested information based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
Note: The document does not explicitly state "acceptance criteria" with numerical or performance targets. Instead, it describes general validation methods and "performance tests" that were conducted to ensure functionality and reliability. The "Reported Device Performance" here refers to the successful completion or validation of these functions.
Feature/Function | Acceptance Criteria (Inferred from Validation) | Reported Device Performance (as per 510(k) Summary) |
---|---|---|
Software Functionality & Reliability | Absence of 'Major' or 'Moderate' defects. | All tests passed based on pre-determined Pass/Fail criteria. No 'Major' or 'Moderate' defects found during System Test. Minor defects, if any, did not impact intended use. |
Unit Test (Major Software Components) | Functional test conditions, performance test conditions, algorithm analysis met. | Performed using Google C++ Unit Test Framework; included functional, performance, and algorithm analysis for image processing. Implied successful completion. |
System Test | No 'Major' or 'Moderate' defects identified. | Conducted by installing software to hardware with recommended specifications. New errors from 'Exploratory Test' were managed. Successfully passed as no 'Major' or 'Moderate' defects were found. |
Specific Performance Tests | (Implied: Accurate, reliable, and consistent output) | |
Auto Lung & Lobe Segmentation | (Implied: Accurate segmentation) | Performed. The device features "Fully automatic lung/lobe segmentation using deep-learning algorithms." |
Airway Segmentation | (Implied: Accurate segmentation) | Performed. The device features "Fully automatic airway segmentation using deep-learning algorithms." |
Nodule Matching Experiment Using Lung Registration | (Implied: Accurate nodule matching and registration) | Performed. The device features "Follow-up support with nodule matching and comparison." |
Validation on DVF Size Optimization with Sub-sampling | (Implied: Optimized DVF size with sub-sampling) | Performed. |
Semi-automatic Nodule Segmentation | (Implied: Accurate segmentation) | Performed. The device features "semi-automatic nodule management" and "semi-automatic nodule measurement (segmentation)." |
Brock Model (PANCAN) Calculation | (Implied: Accurate malignancy score calculation) | Performed. The device "provides Brocks model which calculated the malignancy score based on numerical or Boolean inputs" and "PANCAN risk calculator." |
VDT Calculation | (Implied: Accurate volume doubling time calculation) | Performed. The device offers "Automatic calculation of VDT (volume doubling time)." |
Lung RADS Calculation | (Implied: Accurate Lung-RADS categorization) | Performed. The device "automatically categorize Lung-RADS score" and integrates with "Lung-RADS (classification proposed to aid with findings)." |
Validation LAA Analysis | (Implied: Accurate LAA analysis) | Performed. The device features "LAA analysis (LAA-950HU for INSP, LAA-856HU for EXP), LAA size analysis (D-Slope), and true 3D analysis of LAA cluster sizes." |
Reliability Test for Airway Wall Measurement | (Implied: Reliable airway wall thickness measurement) | Performed. The device offers "Precise airway wall thickness measurement" and "Robust measurement using IBHB (Integral-Based Half-BAND) method" and "Precise AWT-Pi10 calculation." |
CAC Performance (Coronary Artery Calcification) | (Implied: Accurate Agatston, volume, mass scores, and segmentation) | Performed. The device "automatically analyzes coronary artery calcification," "Extracts calcium on coronary artery to provide Agatston score, volume score and mass score," and "Automatically segments calcium area of coronary artery based on deep learning... Segments and provides overlay of four main artery." Also "Provides CAC risk based on age and gender." |
Air Trapping Analysis | (Implied: Accurate air trapping analysis) | Performed. The device features "Air-trapping analysis using INSP/EXP registration." |
INSP/EXP Registration | (Implied: Accurate non-rigid elastic registration) | Performed. The device features "Fully automatic INSP/EXP registration (non-rigid elastic) algorithm." |
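The VDT calculation validated above follows the standard exponential-growth formula VDT = t · ln 2 / ln(V₂/V₁). A minimal sketch under that assumption (the function name and edge-case handling are illustrative, not vendor code):

```python
# Volume doubling time from two nodule volume measurements taken
# `days` apart. Smaller VDT means faster growth; a non-growing
# nodule has no finite doubling time.
import math

def volume_doubling_time(v1_mm3: float, v2_mm3: float, days: float) -> float:
    """VDT = t * ln(2) / ln(V2 / V1)."""
    if v2_mm3 <= v1_mm3:
        return math.inf  # stable or shrinking nodule
    return days * math.log(2) / math.log(v2_mm3 / v1_mm3)
```

For example, a nodule whose volume doubles over a 100-day interval has a VDT of exactly 100 days.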
2. Sample Size Used for the Test Set and Data Provenance
The 510(k) summary does not specify the sample size used for the test set(s) used in the performance evaluation, nor does it detail the data provenance (e.g., country of origin, retrospective or prospective). It simply mentions "software verification and validation" and "nonclinical performance testing."
3. Number of Experts Used to Establish Ground Truth and Qualifications
The document does not provide information on the number of experts used to establish ground truth or their specific qualifications for any of the nonclinical or performance tests mentioned. Given that no clinical study was performed, it is unlikely that medical experts were involved in establishing ground truth for a test set in the conventional sense for clinical performance.
4. Adjudication Method
No information is provided regarding an adjudication method. Since the document states no clinical study was conducted, adjudication by multiple experts would not have been applicable for a clinical performance evaluation.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not reported. The document explicitly states: "This Medical device is not new; therefore, a clinical study was not considered necessary prior to release. Additionally, there was no clinical testing required to support the medical device as the indications for use is equivalent to the predicate device." Therefore, there is no mention of an effect size for human readers with or without AI assistance.
6. Standalone Performance Study
Yes, a standalone (algorithm only without human-in-the-loop) performance evaluation was conducted, implied by the "Nonclinical Performance Testing" and "Software Verification and Validation" sections. The "Performance Test" section specifically lists several automatic and semi-automatic functions (e.g., "Auto Lung & Lobe Segmentation," "Airway Segmentation," "CAC Performance") that were tested for the device's inherent capability.
7. Type of Ground Truth Used
The document does not explicitly state the type of ground truth used for each specific performance test. For software components involving segmentation, it is common to use expert-annotated images (manual segmentation by experts) as ground truth for a quantitative comparison. For calculations like Agatston score, or VDT, the ground truth would likely be mathematical computations based on established formulas or reference standards applied to the segmented regions. However, this is inferred, not explicitly stated.
8. Sample Size for the Training Set
The document does not specify the sample size for any training set. It mentions the use of "deep-learning algorithms" for segmentation, which implies a training phase, but details about the training data are absent.
9. How Ground Truth for the Training Set Was Established
The document does not specify how the ground truth for any training set was established. While deep learning is mentioned for certain segmentation tasks, the methodology for creating the labeled training data is not detailed.
(166 days)
Coreline Soft Co., Ltd
AVIEW LCS is intended for the review, analysis, and reporting of thoracic CT images for the purpose of characterizing nodules in the lung in a single study, or over the time course of several thoracic studies. Characterizations include nodule type, location of the nodule, and measurements such as size (major axis), estimated effective diameter from the volume of the nodule, the volume of the nodule, Mean HU (the average value of the CT pixels inside the nodule, in HU), Minimum HU, Max HU, mass (mass calculated from the CT pixel values), and volumetric measures (Solid Major: length of the longest diameter measured in 3D for the solid portion of the nodule; Solid 2nd Major: the length of the longest diameter of the solid part, measured in sections perpendicular to the solid portion of the nodule), VDT (volume doubling time), and Lung-RADS (a classification proposed to aid with findings). The system automatically performs the measurement, allowing lung nodules and measurements to be displayed, and also integrates with the FDA-cleared MeVis CAD (computer-aided detection) device (K043617).
AVIEW LCS is intended for use in diagnostic patient imaging, specifically the review and analysis of thoracic CT images. It provides the following features: semi-automatic nodule measurement (segmentation); maximal plane, 3D, and volumetric measures; and automatic nodule detection by integration with 3rd-party CAD. It also provides cancer risk based on the PANCAN risk model, which calculates a malignancy score from numerical or Boolean inputs. Follow-up support includes automated nodule matching and automatic categorization of the Lung-RADS score, a quality assurance tool designed to standardize lung cancer screening CT reporting and management recommendations based on nodule type, size, size change, and other reported findings.
- Nodule measurement
    - Adding nodules by segmentation or by lines
    - Semi-automatic nodule measurement (segmentation)
    - Maximal plane measure, 3D measure, and volumetric measure
    - Automatic large vessel removal
    - Provides various features calculated per nodule, such as size, major (longest diameter measured in 2D/3D), minor (shortest diameter measured in 2D/3D), maximal plane, volume, mean HU, minimum HU, and maximum HU for solid nodules, and the ratio of the longest axis of the solid to non-solid part for partial solid nodules
    - Fully supports the Lung-RADS workflow: US Lung-RADS and KR Lung-RADS
    - Nodule malignancy score (PANCAN model) calculation
    - Importing from CAD results
- Follow-up
    - Automatic retrieval of past data
    - Follow-up support with nodule matching and comparison
    - Automatic calculation of VDT (volume doubling time)
- Automatic nodule detection (CADe)
    - Seamless integration with Mevis Visia (FDA 510(k) cleared)
- Lungs and lobes segmentation
    - Better segmentation of lungs and lobes based on deep-learning algorithms
- Report
    - PDF report generation
    - Saves or sends the PDF report and captured images as DICOM files
    - Provides a structured report including items such as nodule location, and also allows input of findings on nodules
    - Reports are generated using the results of all nodules detected so far (Lung-RADS)
- Save Result
    - Saves the results in an internal format
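The PANCAN (Brock) malignancy score listed among the features above is a logistic-regression model over patient and nodule covariates. The sketch below shows only the general shape of such a model; the `COEF` values are placeholders, NOT the published coefficients (see McWilliams et al., NEJM 2013, for the actual model, which also includes terms such as emphysema, nodule count, and nodule type).

```python
# Schematic logistic malignancy-risk model. All coefficient values
# here are hypothetical placeholders for illustration only.
import math

COEF = {"intercept": -6.5, "age": 0.03, "female": 0.6,
        "size_mm": 0.1, "upper_lobe": 0.5, "spiculation": 0.7}

def malignancy_probability(age: float, female: bool, size_mm: float,
                           upper_lobe: bool, spiculation: bool) -> float:
    """Linear predictor over numerical/Boolean inputs, logistic link."""
    z = (COEF["intercept"]
         + COEF["age"] * age
         + COEF["female"] * female
         + COEF["size_mm"] * size_mm
         + COEF["upper_lobe"] * upper_lobe
         + COEF["spiculation"] * spiculation)
    return 1.0 / (1.0 + math.exp(-z))
```

This matches the summary's description of the feature: a malignancy score "calculated based on numerical or Boolean inputs," monotone in risk factors such as nodule size.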
The provided text describes the acceptance criteria and the study conducted to prove the AVIEW LCS device meets these criteria.
1. Table of Acceptance Criteria and Reported Device Performance
The document does not present a formal table of acceptance criteria with corresponding reported device performance metrics in a single, clear format for all functionalities. Instead, it describes various tests and their success standards implicitly serving as acceptance criteria.
Based on the "Semi-automatic Nodule Segmentation" section, here's a reconstructed table for that specific functionality:
Acceptance Criteria (Semi-automatic Nodule Segmentation) | Reported Device Performance |
---|---|
Measured length should be less than one voxel size compared to the size of the sphere produced. | Implied "standard judgment" met through testing with spheres of various radii (2mm, 3mm, 6mm, 7mm, 8mm, 9mm, 10mm). |
Measured volume should be within 10% error compared to the volume of the sphere created. | Implied "standard judgment" met through testing with spheres of various radii. |
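The two criteria in this table amount to a phantom check against an ideal sphere of known radius. The helper below is an illustrative sketch; the one-voxel length tolerance and 10% volume tolerance come from the table, while the function name and argument layout are assumptions.

```python
# Check a measured nodule diameter and volume against an ideal sphere
# phantom: length error under one voxel, volume error within 10%.
import math

def passes_sphere_check(radius_mm: float, measured_diameter_mm: float,
                        measured_volume_mm3: float, voxel_mm: float) -> bool:
    true_diameter = 2.0 * radius_mm
    true_volume = 4.0 / 3.0 * math.pi * radius_mm ** 3
    length_ok = abs(measured_diameter_mm - true_diameter) < voxel_mm
    volume_ok = abs(measured_volume_mm3 - true_volume) / true_volume <= 0.10
    return length_ok and volume_ok
```

For a 6 mm-radius sphere (true volume ≈ 904.8 mm³), a measured diameter of 12.3 mm and volume of 900 mm³ would pass at a 0.7 mm voxel size.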
For "Nodule Matching test with Lung Registration":
Acceptance Criteria (Nodule Matching) | Reported Device Performance |
---|---|
Voxel distance error between the converted position and the nodule position of the moving image (for evaluation of the DVF). | Measured for 28 locations. Acceptance is implied, as the study was performed to "check the applicability of Registry" as "Start-up verification". |
Accuracy evaluation of DVF subsampling for size optimization. | Implied acceptable accuracy for reducing DVF capacity. |
For "Software Verification and Validation - System Test":
Acceptance Criteria (System Test) | Reported Device Performance |
---|---|
Not finding 'Major' defects. | Implied satisfied (device passed all tests). |
Not finding 'Moderate' defects. | Implied satisfied (device passed all tests). |
For "Auto segmentation (based on deep-learning algorithms) test":
Acceptance Criteria (Auto Segmentation - Korean Data) | Reported Device Performance |
---|---|
Auto-segmentation results identified by a specialist and radiologist and classified as '2 (very good)'. | "The results of auto-segmentation are identified by a specialist and radiologist and classified as 0 (not good), 1 (needs adjustment), and 2 (very good)." This suggests an evaluation was performed to categorize segmentation quality, but the specific percentage or number of "very good" classifications is not provided as a direct performance metric. |
Acceptance Criteria (Auto Segmentation - NLST Data) | Reported Device Performance |
---|---|
Dice similarity coefficient between auto-segmentation and manual segmentation (performed by a radiographer and confirmed by a radiologist). | "The dice similarity coefficient is performed to check how similar they are." The specific threshold or result of the DSC is not provided. |
2. Sample Size Used for the Test Set and Data Provenance
- Semi-automatic Nodule Segmentation:
- Sample Size: Not explicitly stated as a number of nodules or patients. Spheres of various radii (2mm, 3mm, 6mm, 7mm, 8mm, 9mm, 10mm) were created for testing.
- Data Provenance: Artificially generated spheres.
- Nodule Matching test with Lung Registration:
- Sample Size: "28 locations" (referring to nodule locations for DVF calculation).
- Data Provenance: "deployed" experimental data, likely retrospective CT scans. Specific country of origin not mentioned but given the company's origin (Republic of Korea), it could involve Korean data.
- Mevis CAD Integration test:
- Sample Size: Not explicitly stated. The test confirms data transfer and display.
- Data Provenance: Not specified, likely internal test data.
- Brock Score (aka PANCAN) Risk Calculation test:
- Sample Size:
- Former paper: "PanCan data set, 187 persons had 7008 nodules, of which 102 were malignant," and "BCCA data set, 1090 persons had 5021 nodules, of which 42 were malignant."
- Latter paper: "4431 nodules (4315 benign nodules and 116 malignant nodules of NLST data)."
- Data Provenance: Retrospective, from published papers. PANCAN data set, BCCA data set, and NLST (National Lung Screening Trial) data.
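The Brock (PanCan) model is a multivariable logistic regression over nodule and patient features. The published coefficients are not reproduced in this document, so the values below are explicitly placeholders; the sketch only illustrates the logistic-regression form that such a calculator implements and that a unit test can verify against a reference spreadsheet:

```python
import math

def brock_like_probability(features: dict, coefficients: dict, intercept: float) -> float:
    """Logistic-regression form used by Brock-type risk models:
    p = 1 / (1 + exp(-(b0 + sum(b_i * x_i)))).
    Coefficient values passed in here are NOT the published Brock coefficients."""
    logit = intercept + sum(coefficients[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-logit))

# Hypothetical inputs and placeholder coefficients for illustration only
coeffs = {"diameter_mm": 0.1, "age": 0.02, "spiculated": 0.8}
p = brock_like_probability(
    {"diameter_mm": 8, "age": 60, "spiculated": 1}, coeffs, intercept=-5.0
)
print(round(p, 3))
```

A unit test of such a calculator, as described above, compares its outputs against reference values computed independently (here, the cited Excel sheet) for the same inputs.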
VDT Calculation test:
- Sample Size: Not explicitly stated.
- Data Provenance: Unit tests, implying simulated or internally generated data for calculation verification.
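Volume doubling time has a closed-form definition under exponential growth, VDT = t · ln(2) / ln(V₂/V₁), which makes it well suited to unit testing against hand-computed values. A sketch of that calculation and the kind of checks a unit test would make:

```python
import math

def volume_doubling_time(v1_mm3: float, v2_mm3: float, days_between: float) -> float:
    """VDT = t * ln(2) / ln(V2 / V1): days for the nodule volume to double,
    assuming exponential growth between the two scans."""
    if v2_mm3 <= v1_mm3 or v1_mm3 <= 0:
        raise ValueError("VDT defined here only for growing nodules")
    return days_between * math.log(2) / math.log(v2_mm3 / v1_mm3)

# A nodule that exactly doubles in 90 days has VDT == 90
assert abs(volume_doubling_time(100.0, 200.0, 90.0) - 90.0) < 1e-9
# Quadrupling over the same interval halves the VDT
assert abs(volume_doubling_time(100.0, 400.0, 90.0) - 45.0) < 1e-9
```

Shorter VDTs indicate faster growth; in screening practice, fast-growing solid nodules are treated as more suspicious.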
Lung RADS Calculation test:
- Sample Size: "10 cases were extracted."
- Data Provenance: Retrospective, "from the Lung-RADS survey table provided by the Korean Society of Thoracic Radiology."
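Lung-RADS assigns a category from nodule type, size, and change over time. As a greatly simplified illustration only, the sketch below applies the published Lung-RADS v1.1 size cut-offs for a solid nodule at baseline screening; the device's actual implementation (and the survey-table cases used to verify it) covers many more nodule types and follow-up rules:

```python
def lung_rads_baseline_solid(mean_diameter_mm: float) -> str:
    """Greatly simplified baseline Lung-RADS category for a SOLID nodule,
    using the v1.1 size cut-offs (< 6 mm, 6 to < 8 mm, 8 to < 15 mm, >= 15 mm).
    Real assessment also depends on nodule type, growth, and other findings."""
    if mean_diameter_mm < 6:
        return "2"   # benign appearance or behavior
    if mean_diameter_mm < 8:
        return "3"   # probably benign
    if mean_diameter_mm < 15:
        return "4A"  # suspicious
    return "4B"      # very suspicious

for d, expected in ((4.0, "2"), (7.0, "3"), (10.0, "4A"), (20.0, "4B")):
    assert lung_rads_baseline_solid(d) == expected
```

A test like the 10-case survey-table check described above would compare such assignments against the categories given by the Korean Society of Thoracic Radiology table.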
Auto segmentation (based on deep-learning algorithms) test:
- Korean Data: "192 suspected COPD patients." (Retrospective, implicitly from Korea given the source "Korean Society of Thoracic Radiology").
- NLST Data: "80 patient's Chest CT data who were enrolled in NLST." (Retrospective, from the National Lung Screening Trial).
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
Auto segmentation (based on deep-learning algorithms) test - Korean Data:
- Number of Experts: Not explicitly stated, but "a specialist and radiologist" suggests at least two experts.
- Qualifications: "specialist and radiologist." No specific years of experience or sub-specialty mentioned.
Auto segmentation (based on deep-learning algorithms) test - NLST Data:
- Number of Experts: At least two. "experienced radiographer and confirmed by experienced radiologist."
- Qualifications: "experienced radiographer" and "experienced radiologist." No specific years of experience or sub-specialty mentioned.
Brock Score (aka. PANCAN) Risk Calculation test: Ground truth established through the studies referenced; details on experts for those specific studies are not provided in this document.
Lung RADS Calculation test: Ground truth implicitly established by the "Lung-RADS survey table provided by the Korean Society of Thoracic Radiology." The experts who created this survey table are not detailed here.
4. Adjudication Method for the Test Set
The document does not explicitly describe a formal adjudication method (e.g., 2+1, 3+1) for establishing ground truth for any of the tests.
- For the Auto segmentation (based on deep-learning algorithms) test, the "Korean Data" section states that results were "identified by a specialist and radiologist and classified," which suggests independent review or consensus, but no specific adjudication rule is given. For the "NLST Data" section, manual segmentation was "performed by experienced radiographer and confirmed by experienced radiologist," indicating a two-step process of creation and verification rather than a conflict-resolution method.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No Multi-Reader Multi-Case (MRMC) comparative effectiveness study comparing human readers with AI assistance versus without AI assistance was mentioned or conducted in this document. The device is for "review and analysis" and "reporting," and integrates with a third-party CAD, but its direct impact on human reader performance through an MRMC study is not detailed.
6. Standalone Performance Study (Algorithm only without human-in-the-loop performance)
Yes, standalone performance studies were conducted for several functionalities, focusing on the algorithm's performance without direct human-in-the-loop tasks:
- Semi-automatic Nodule Segmentation: The test on artificial spheres evaluates the algorithm's measurement accuracy directly.
- Nodule Matching test with Lung Registration: Evaluates the algorithm's ability to calculate DVF and match nodules.
- Brock Score (aka. PANCAN) Risk Calculation test: Unit tests comparing calculated values from the algorithm against an Excel sheet.
- VDT Calculation test: Unit tests to confirm calculation.
- Lung RADS Calculation test: Unit tests to confirm implementation accuracy against regulations.
- Auto segmentation (based on deep-learning algorithms) test: This is a standalone evaluation of the algorithm's segmentation performance against expert classification or manual segmentation (NLST data).
7. Type of Ground Truth Used
- Semi-automatic Nodule Segmentation: Known physical properties of artificially generated spheres (e.g., precise radius and volume).
- Nodule Matching test with Lung Registration: Nodule positions marked on "Fixed image" and "Moving image," implying expert identification on real CT scans.
- Brock Score (aka. PANCAN) Risk Calculation test: Reference values from published literature (PanCan, BCCA, NLST data) which themselves are derived from ground truth of malignancy (pathology, clinical follow-up).
- VDT Calculation test: Established mathematical formulas for VDT.
- Lung RADS Calculation test: Lung-RADS regulations and a "Lung-RADS survey table provided by the Korean Society of Thoracic Radiology."
- Auto segmentation (based on deep-learning algorithms) test - Korean Data: Expert classification ("0 (Not good), 1 (need adjustment), and 2(very good)") by a specialist and radiologist.
- Auto segmentation (based on deep-learning algorithms) test - NLST Data: Manual segmentation performed by an experienced radiographer and confirmed by an experienced radiologist.
8. Sample Size for the Training Set
The document does not explicitly state the sample size used for the training set(s) for the deep-learning algorithms or other components of the AVIEW LCS. It only details the test sets.
9. How the Ground Truth for the Training Set Was Established
Since the training set size is not provided, the method for establishing its ground truth is also not detailed in this document. However, given the nature of the evaluation for the test set (expert marking, classification), it is highly probable that similar methods involving expert radiologists or specialists would have been used to establish ground truth for any training data.
(142 days)
Coreline Soft Co., Ltd
The AVIEW Modeler is intended for use as an image review and segmentation system that operates on DICOM imaging information obtained from a medical scanner. It is also used as a pre-operative software for surgical planning. 3D printed models generated from the output file are for visualization and educational purposes only and not for diagnostic use.
The AVIEW Modeler is a software product that can be installed on a separate PC; it displays patient medical images acquired from an image acquisition device. The image on the screen can be checked, edited, saved, and received.
- Various displaying functions:
  - Thickness MPR, oblique MPR, shaded volume rendering, and shaded surface rendering.
  - Hybrid rendering of simultaneous volume-rendering and surface-rendering.
- Easy-to-use manual and semi-automatic segmentation methods:
  - Brush, paint-bucket, sculpting, thresholding, and region growing.
  - Magic cut (based on the Random walk algorithm).
- Morphological and Boolean operations for mask generation.
- Mesh generation and manipulation algorithms:
  - Mesh smoothing, cutting, fixing, and Boolean operations.
- Exports 3D-printable models in open formats, such as STL.
- DICOM 3.0 compliant (C-STORE, C-FIND).
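Of the segmentation tools listed, region growing is the classic seeded approach: starting from a user-placed seed, connected voxels are added while their intensity stays within a tolerance of the seed value. A hypothetical minimal 2-D sketch (the product's actual implementation, parameters, and connectivity are not described in the summary):

```python
from collections import deque
import numpy as np

def region_grow(image: np.ndarray, seed: tuple, tol: float) -> np.ndarray:
    """Grow a region from `seed`, adding 4-connected pixels whose intensity
    is within `tol` of the seed intensity. Returns a boolean mask."""
    mask = np.zeros(image.shape, dtype=bool)
    seed_val = image[seed]
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        if mask[y, x]:
            continue
        mask[y, x] = True
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < image.shape[0] and 0 <= nx < image.shape[1]
                    and not mask[ny, nx]
                    and abs(image[ny, nx] - seed_val) <= tol):
                queue.append((ny, nx))
    return mask

# Toy image: low-intensity corner region surrounded by bright pixels
img = np.array([[0, 0, 9],
                [0, 1, 9],
                [9, 9, 9]], dtype=float)
grown = region_grow(img, (0, 0), tol=1.0)
print(grown.sum())  # number of pixels captured by the region
```

In 3-D CT the same idea uses 6- or 26-connected voxels, with the tolerance typically expressed in Hounsfield units.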
The provided text describes the 510(k) Summary for AVIEW Modeler, focusing on its substantial equivalence to predicate devices, rather than a detailed performance study directly addressing specific acceptance criteria. The document emphasizes software verification and validation activities.
Therefore, I cannot fully complete all sections of your request concerning acceptance criteria and device performance based solely on the provided text. However, I can extract information related to software testing and general conclusions.
Here's an attempt to answer your questions based on the available information:
1. A table of acceptance criteria and the reported device performance
The document does not provide a quantitative table of acceptance criteria with corresponding performance metrics like accuracy, sensitivity, or specificity for the segmentation features. Instead, it discusses the successful completion of various software tests.
Acceptance Criteria (Implied) | Reported Device Performance |
---|---|
Functional Adequacy | "passed all of the tests based on pre-determined Pass/Fail criteria." |
Performance Adequacy | Performance tests conducted "according to the performance evaluation standard and method that has been determined with prior consultation between software development team and testing team" to check non-functional requirements. |
Reliability | System tests concluded with no 'Major' or 'Moderate' defects found. |
Compatibility | STL data created by AVIEW Modeler was "imported into Stratasys printing Software, Object Studio to validate the STL before 3d-printing with Objet260 Connex3." (implies successful validation for 3D printing) |
Safety and Effectiveness | "The new device does not introduce a fundamentally new scientific technology, and the nonclinical tests demonstrate that the device is safe and effective." |
2. Sample sizes used for the test set and the data provenance
The document does not specify the sample size (number of images or patients) used for any of the tests (Unit, System, Performance, Compatibility). It also does not explicitly state the country of origin of the data or whether the data was retrospective or prospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
The document does not provide any information about the number or qualifications of experts used to establish ground truth for a test set. The focus is on internal software validation and comparison to a predicate device.
4. Adjudication method for the test set
The document does not mention any adjudication method for a test set, as it does not describe a clinical performance study involving human readers.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and the effect size of how much human readers improve with AI vs without AI assistance
No, the provided text does not describe an MRMC comparative effectiveness study involving human readers with or without AI assistance. The study described is a software verification and validation, concluding substantial equivalence to a predicate device.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done
The document describes various software tests (Unit, System, Performance, Compatibility) which could be considered forms of standalone testing for the algorithm's functionality and performance. However, it does not present quantitative standalone performance metrics typical of an algorithm-only study (e.g., precision, recall, Dice score for segmentation). It focuses on internal software quality and compatibility.
7. The type of ground truth used
The type of "ground truth" used is not explicitly defined in terms of clinical outcomes or pathology. For the software validation, the "ground truth" would likely refer to pre-defined correct outputs or expected behavior of the software components, established by the software development and test teams. For example, for segmentation, it would be the expected segmented regions based on the algorithm's design and previous validation efforts (likely through comparison to expert manual segmentations or another validated method, though not detailed here).
8. The sample size for the training set
The document does not mention a training set or its sample size. This is a 510(k) summary for a medical image processing software (AVIEW Modeler), and while it mentions a "Magic cut (based on Randomwalk algorithm)," it does not describe an AI model that underwent a separate training phase with a specific dataset, nor does it classify the device as having "machine learning" capabilities in the context of FDA regulation. The focus is on traditional software validation.
9. How the ground truth for the training set was established
As no training set is mentioned (see point 8), there is no information on how its ground truth would have been established.