Search Results
Found 8 results
510(k) Data Aggregation
(110 days)
AVIEW provides CT values for pulmonary tissue from thoracic and cardiac CT datasets. The software can be used to support the physician by providing quantitative analysis of CT images through image segmentation of lung sub-structures (lung, lobes, airways), fissure completeness, cardiac analysis, density evaluation, and reporting tools. AVIEW is also used to store, transfer, query, and display CT datasets on-premises or in a cloud environment, allowing users to connect from various environments such as mobile devices and Chrome browsers. It converts sharp kernels to soft kernels for quantitative analysis when segmenting low-attenuation areas of the lung, and characterizes nodules in the lung in a single study or over the time course of several thoracic studies. Characterizations include nodule type, location of the nodule, and measurements such as size (major axis), estimated effective diameter from the volume of the nodule, volume of the nodule, Mean HU (the average CT pixel value inside the nodule, in HU), Minimum HU, Max HU, mass (calculated from the CT pixel values), volumetric measures (Solid Major: length of the longest diameter measured in 3D for the solid portion of the nodule; Solid 2nd Major: length of the longest diameter of the solid part, measured in sections perpendicular to the Major axis of the solid portion of the nodule), VDT (volume doubling time), and Lung-RADS (a classification proposed to aid with findings). The system performs the measurements automatically, allowing lung nodules and measurements to be displayed, and integrates with the FDA-cleared AVIEW Lung Nodule CAD (computer-aided detection) (K221592). It also provides the Agatston score, volume score, and mass score for the whole coronary tree and for each artery by segmenting the four main arteries (right coronary artery, left main coronary artery, left anterior descending artery, and left circumflex artery). Based on the calcium score, it provides CAC risk stratified by age and gender. The device is indicated for adult patients only.
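The VDT (volume doubling time) measurement listed above is conventionally derived by assuming exponential nodule growth between two scans. A minimal sketch of that standard formula (the function name and interface are illustrative, not the device's API):

```python
import math

def volume_doubling_time(v1_mm3: float, v2_mm3: float, interval_days: float) -> float:
    """Estimate volume doubling time (days) under exponential growth.

    v1_mm3: nodule volume at the earlier scan
    v2_mm3: nodule volume at the later scan
    interval_days: time between the two scans
    """
    if v1_mm3 <= 0 or v2_mm3 <= 0 or interval_days <= 0:
        raise ValueError("volumes and interval must be positive")
    if v2_mm3 == v1_mm3:
        return math.inf  # no growth: doubling time is undefined (infinite)
    # VDT = dt * ln(2) / ln(V2 / V1)
    return interval_days * math.log(2) / math.log(v2_mm3 / v1_mm3)

# A nodule growing from 100 mm^3 to 200 mm^3 in 365 days doubles once a year.
print(round(volume_doubling_time(100, 200, 365)))  # -> 365
```

Shrinking nodules yield a negative VDT under this formula, which readers sometimes report as a "halving time".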
AVIEW is a software product that can be installed on a PC. It displays images acquired from various storage devices via DICOM 3.0, the digital imaging and communications standard in medicine, and offers functions such as reading, manipulating, analyzing, post-processing, saving, and sending images using software tools. It is intended for quantitative analysis of CT scans. It provides features including lung segmentation, fissure completeness, semi-automatic nodule management, maximal-plane and volumetric measures, and automatic nodule detection through integration with third-party CAD. It also provides the Brock model, which calculates a malignancy score from numerical or Boolean inputs, as well as follow-up support with automated nodule matching and automatic Lung-RADS categorization, a quality assurance tool designed to standardize lung cancer screening CT reporting and management recommendations based on nodule type, size, size change, and other reported findings. It also provides a calcium score by automatically analyzing the coronary arteries.
The provided text is a 510(k) premarket notification letter and summary for a medical device called "AVIEW." This document primarily asserts substantial equivalence to a predicate device and notes general software changes rather than providing detailed acceptance criteria and study results for specific performance metrics that would typically be found in performance study reports.
Specifically, the document states: "This Medical device is not new; therefore, a clinical study was not considered necessary prior to release. Additionally, there was no clinical testing required to support the medical device as the indications for use is equivalent to the predicate device. The substantial equivalence of the device is supported by the non-clinical testing." This means the submission does not include a standalone or MRMC study designed to show that the device meets specific performance acceptance criteria for its analytical functions.
Therefore, I cannot provide the requested information from the given text as the detailed performance study data is not present. The document focuses on regulatory equivalence based on technological characteristics and intended use being similar to a predicate device, rather than providing new performance study data.
(77 days)
AVIEW CAC provides quantitative analysis of calcified plaques in the coronary arteries using non-contrast/non-gated chest CT scans. It enables calculation of the Agatston score for coronary artery calcification, segmenting and evaluating the right coronary artery and left coronary artery. It also provides risk stratification based on calcium score, gender, and age, offering percentile-based risk categories per established guidelines. Designed for healthcare professionals, including radiologists and cardiologists, AVIEW CAC supports storing, transferring, querying, and displaying CT datasets on-premises, with access through mobile devices and Chrome browsers. AVIEW CAC analyzes existing non-contrast/non-gated chest CT studies that include the heart of adult patients above the age of 40. The device's use should be limited to CT scans acquired on equipment from General Electric (GE) or its subsidiaries (e.g., GE Healthcare); use with CT scans from other manufacturers has not been validated and is not recommended.
AVIEW CAC is a software product that can be installed on a PC. It displays images acquired from various storage devices via DICOM 3.0, the digital imaging and communications standard in medicine, and offers functions such as reading, manipulating, analyzing, post-processing, saving, and sending images using software tools. It is intended for quantitative analysis of CT scans, and provides a calcium score by automatically analyzing the segmented coronary arteries.
The provided text indicates that the device, AVIEW CAC, calculates the Agatston score for coronary artery calcification from non-contrast/non-gated Chest CT scans. It segments and evaluates the right and left coronary arteries and provides risk stratification based on calcium score, gender, and age, using percentile-based risk categories by established guidelines. The device is for healthcare professionals (radiologists and cardiologists) and analyzes existing CT studies from adult patients over 40 years old, acquired on GE equipment.
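The Agatston scoring that this device automates conventionally weights each calcified lesion's area by a factor determined by the lesion's peak attenuation. A minimal sketch of that conventional weighting scheme (illustrative only; the submission does not disclose AVIEW CAC's actual implementation):

```python
def agatston_weight(peak_hu: float) -> int:
    """Density weighting factor from a lesion's peak attenuation (HU)."""
    if peak_hu < 130:
        return 0  # below the conventional calcification threshold
    if peak_hu < 200:
        return 1
    if peak_hu < 300:
        return 2
    if peak_hu < 400:
        return 3
    return 4

def agatston_score(lesions) -> float:
    """Sum of (area x density weight) over all lesions on all slices.

    lesions: iterable of (area_mm2, peak_hu) pairs, one per lesion per slice.
    Lesions under 1 mm^2 are ignored, per the conventional minimum-area rule.
    """
    return sum(area * agatston_weight(hu) for area, hu in lesions if area >= 1.0)

# Two lesions: 5 mm^2 peaking at 250 HU (weight 2) and 3 mm^2 at 450 HU (weight 4).
print(agatston_score([(5.0, 250), (3.0, 450)]))  # -> 22.0
```

The percentile-based risk categories mentioned above are then typically looked up from the total score together with the patient's age and gender.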
The document states that a clinical study was not considered necessary and that non-clinical testing supports the substantial equivalence of the device to its predicate. However, it does not provide specific acceptance criteria or an explicit study description with performance metrics for the AVIEW CAC device. It states that the device is substantially equivalent to a predicate device (K233211, also named AVIEW CAC) and that the substantial equivalence is supported by non-clinical testing.
Therefore, many of the requested details about acceptance criteria, specific performance metrics, sample sizes, expert involvement, and ground truth establishment are not present in the provided text.
Based on the available information:
1. A table of acceptance criteria and the reported device performance:
The document does not explicitly state acceptance criteria or report specific device performance metrics in a tabular format. It generally states that "the results of the software verification and validation tests concluded that the proposed device is substantially equivalent" and "the nonclinical tests demonstrate that the device is safe and effective."
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective):
This information is not provided in the document.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience):
This information is not provided in the document.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
This information is not provided in the document.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:
A MRMC comparative effectiveness study is not mentioned in the document. The document explicitly states: "This Medical device is not new; therefore, a clinical study was not considered necessary prior to release. Additionally, there was no clinical testing required to support the indications for use is equivalent to the predicate device. The substantial equivalence of the device is supported by the non-clinical testing."
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
The document implies that the "nonclinical tests" evaluated the device's performance, which would typically involve standalone algorithm performance. However, specific details about such a study or its results are not provided. The device's function is centered on automatic analysis (calculation of Agatston score, segmenting and evaluating arteries).
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
This information is not provided in the document.
8. The sample size for the training set:
This information is not provided in the document.
9. How the ground truth for the training set was established:
This information is not provided in the document.
(183 days)
AVIEW CAC provides quantitative analysis of calcified plaques in the coronary arteries using non-contrast/non-gated chest CT scans. It enables calculation of the Agatston score for coronary artery calcification, segmenting and evaluating the right coronary artery and left coronary artery. It also provides risk stratification based on calcium score, gender, and age, offering percentile-based risk categories per established guidelines. Designed for healthcare professionals, including radiologists and cardiologists, AVIEW CAC supports storing, querying, and displaying CT datasets on-premises, with access through mobile devices and Chrome browsers. AVIEW CAC analyzes existing non-contrast/non-gated chest CT studies that include the heart of adult patients above the age of 40. The device's use should be limited to CT scans acquired on equipment from General Electric (GE) or its subsidiaries (e.g., GE Healthcare); use with CT scans from other manufacturers has not been validated and is not recommended.
AVIEW CAC is a software product that can be installed on a PC. It displays images acquired from various storage devices via DICOM 3.0, the digital imaging and communications standard in medicine, and offers functions such as reading, manipulating, analyzing, post-processing, saving, and sending images using software tools. It is intended for quantitative analysis of CT scans, and provides a calcium score by automatically analyzing the segmented coronary arteries.
The provided text describes the acceptance criteria and the study conducted for the AVIEW CAC device.
Here's the breakdown of the information requested:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria for the quantitative analysis of calcified plaques are primarily based on the Intraclass Correlation Coefficient (ICC) of the Agatston score against a ground truth and against a predicate device.

| Acceptance Criteria | Reported Device Performance (AVIEW CAC vs. Ground Truth) | Reported Device Performance (AVIEW CAC vs. Predicate Device) |
|---|---|---|
| ICC > 0.8 (implied target for strong agreement) | Agatston score ICC (95% CI): Total 0.896 (0.857, 0.925); LCA 0.927 (0.899, 0.947); RCA 0.840 (0.778, 0.884) | Agatston score ICC (95% CI): Total 0.939 (0.916, 0.956); LCA 0.955 (0.938, 0.968); RCA 0.887 (0.844, 0.918) |
| p < 0.001 for each ICC (statistical significance of agreement) | All reported p-values are <.001 | All reported p-values are <.001 |
| Correlation coefficient between AVIEW CAC automatic analysis of chest CT and Agatston scores calculated from heart CT should be over 90% | The correlation coefficient was over 90% | Not applicable |

Note: The acceptance criterion for the ICC (> 0.8) is implied rather than explicitly stated; an ICC above 0.8 conventionally signifies strong agreement, and all reported ICC values exceed this threshold.
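For reference, the ICC form typically used for this kind of absolute-agreement analysis is ICC(2,1) (two-way random effects, single measurement). The submission does not state which ICC form was used, so the following is only an illustrative sketch of the standard ANOVA-based computation:

```python
import numpy as np

def icc2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    ratings: array of shape (n_subjects, k_raters), e.g. Agatston scores
    from two measurement methods on the same cases.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-rater means
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_total = ((ratings - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)            # between-subjects mean square
    msc = ss_cols / (k - 1)            # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1)) # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Perfect agreement between two raters gives ICC = 1.0.
print(icc2_1(np.array([[1, 1], [2, 2], [3, 3]], dtype=float)))  # -> 1.0
```

Because ICC(2,1) measures absolute agreement, a constant offset between the two methods lowers the value even when they correlate perfectly, which is why it is preferred over plain correlation for method-comparison studies like this one.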
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size:
- 150 CSCT (gated) cases
- 150 Chest CT (non-gated) cases
- Additionally, 280 datasets collected from multiple institutions were used for a separate "MI functionality test report" which also evaluated correlation.
- Data Provenance: The document does not explicitly state the country of origin. The test cases were derived from "multiple institutions". It is implied to be retrospective as the device analyzes "existing" non-contrast/non-gated chest CT studies.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
The document does not specify the number of experts used or their detailed qualifications (e.g., "radiologist with 10 years of experience") for establishing the ground truth.
4. Adjudication Method for the Test Set
The document does not explicitly describe an adjudication method (such as 2+1, 3+1) for establishing the ground truth. It simply refers to "ground truth" without detailing its consensus process.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, If So, What Was the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance
A Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done. The study focuses on the standalone performance of the algorithm against a defined ground truth and comparison against a predicate device, not on human reader performance with or without AI assistance.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
Yes, a standalone performance study was done. The performance data section explicitly states, "we evaluated the agreement in coronary calcium scoring between the subject device and the ground truth" and "the correlation coefficient between the AVIEW CAC automatic analysis results of the chest CT based on the heart CT and the Agatston scores was over 90%". This indicates the algorithm's performance was assessed without human intervention.
7. The Type of Ground Truth Used
The ground truth used was Agatston scores for coronary artery calcification. The document does not specify if this ground truth was established by expert consensus of human readers, pathology, or outcomes data. However, the comparison is made to "Ground Truth" for Agatston Score measurements, which implies a highly reliable, perhaps manually derived or reference Agatston score.
8. The Sample Size for the Training Set
The document does not provide the sample size for the training set. It only mentions the test set sizes.
9. How the Ground Truth for the Training Set Was Established
The document does not describe how the ground truth for the training set was established. It only refers to deep learning for automatic segmentation but does not detail the process for creating the ground truth data used to train the deep learning model.
(115 days)
AVIEW LCS is intended for the review, analysis, and reporting of thoracic CT images for the purpose of characterizing nodules in the lung in a single study or over the time course of several thoracic studies. Characterizations include nodule type, location of the nodule, and measurements such as size (major axis), estimated effective diameter from the volume of the nodule, volume of the nodule, Mean HU (the average CT pixel value inside the nodule, in HU), Minimum HU, Max HU, mass (calculated from the CT pixel values), volumetric measures (Solid Major: length of the longest diameter measured in 3D for the solid portion of the nodule; Solid 2nd Major: length of the longest diameter of the solid part, measured in sections perpendicular to the Major axis of the nodule), VDT (volume doubling time), Lung-RADS (a classification proposed to aid with findings), CAC score, and LAA analysis. The system performs the measurements automatically, allowing lung nodules and measurements to be displayed, and also integrates with the FDA-cleared Mevis CAD (computer-aided detection) (K043617).
AVIEW LCS is intended for use in diagnostic patient imaging, for the review and analysis of thoracic CT images. It provides features such as semi-automatic nodule measurement (segmentation), maximal-plane, 3D, and volumetric measures, and automatic nodule detection through integration with third-party CAD. It also provides cancer risk based on the PANCAN risk model, which calculates a malignancy score from numerical or Boolean inputs, as well as follow-up support with automated nodule matching and automatic Lung-RADS categorization, a quality assurance tool designed to standardize lung cancer screening CT reporting and management recommendations based on nodule type, size, size change, and other reported findings.
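As an illustration of the kind of size-based categorization Lung-RADS performs, here is a heavily simplified sketch for a baseline solid nodule. Real Lung-RADS assignment also depends on nodule type, growth, and other findings, and the device's actual logic is not disclosed in the submission; the thresholds below follow the commonly cited Lung-RADS v1.1 size cut-offs for baseline solid nodules:

```python
def lung_rads_baseline_solid(mean_diameter_mm: float) -> str:
    """Illustrative size-only category for a baseline solid nodule.

    Simplified from Lung-RADS v1.1; not the device's implementation.
    """
    if mean_diameter_mm < 6:
        return "2"   # benign appearance or behavior
    if mean_diameter_mm < 8:
        return "3"   # probably benign
    if mean_diameter_mm < 15:
        return "4A"  # suspicious
    return "4B"      # very suspicious

print(lung_rads_baseline_solid(9.5))  # -> 4A
```

Category 0 (incomplete) and category 1 (negative, no nodules) exist as well but are not size-driven, so they are omitted from this sketch.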
The provided text does not contain detailed acceptance criteria for specific performance metrics of the AVIEW LCS device, nor does it describe a study proving the device meets particular acceptance criteria with quantitative results.
The document is a 510(k) premarket notification summary, which focuses on demonstrating substantial equivalence to a predicate device rather than providing a detailed performance study like a clinical trial.
However, based on the information provided, here's what can be extracted and inferred regarding performance and testing:
1. Table of Acceptance Criteria and Reported Device Performance
As specific quantitative acceptance criteria and detailed performance metrics are not explicitly stated in the provided text for AVIEW LCS, I cannot create a table of acceptance criteria and reported device performance. The document generally states that "the modified device passed all of the tests based on pre-determined Pass/Fail criteria" for software validation.
2. Sample Size Used for the Test Set and Data Provenance
The document does not specify the sample size used for any test set or the data provenance (e.g., country of origin, retrospective/prospective). The described "Unit Test" and "System Test" are internal software validation tests rather than clinical performance studies involving patient data.
3. Number of Experts Used to Establish Ground Truth and Qualifications
The document does not mention using experts to establish ground truth for a test set. This type of information would typically be found in a clinical performance study.
4. Adjudication Method for the Test Set
The document does not describe any adjudication method for a test set. This is relevant for clinical studies where multiple readers assess cases.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
The document does not indicate that a multi-reader multi-case (MRMC) comparative effectiveness study was performed. Therefore, no effect size of human readers improving with AI vs. without AI assistance is mentioned.
6. Standalone (Algorithm Only) Performance Study
The document does not explicitly state that a standalone (algorithm only without human-in-the-loop performance) study was conducted. The "Performance Test" section refers to DICOM, integration, and thin client server compatibility reports, which are software performance tests, not clinical efficacy or diagnostic accuracy studies for the algorithm itself. The device description mentions "automatic nodules detection by integration with 3rd party CAD (Mevis Visia FDA 510k Cleared)", suggesting it leverages an already cleared CAD system for detection rather than having a new, independently evaluated detection algorithm as part of this submission.
7. Type of Ground Truth Used
The document does not specify the type of ground truth used for any performance evaluation. Again, this would be characteristic of a clinical performance study.
8. Sample Size for the Training Set
The document does not provide the sample size for any training set. This is typically relevant for AI/ML-based algorithms. The mention of "deep-learning algorithms" for lung and lobe segmentation suggests a training set was used, but its size is not disclosed.
9. How the Ground Truth for the Training Set Was Established
The document does not explain how ground truth for any potential training set was established.
Summary of available information regarding testing:
The "Performance Data" section (8) of the 510(k) summary focuses on nonclinical performance testing and software verification and validation activities.
- Nonclinical Performance Testing: The document states, "This Medical device is not new; therefore, a clinical study was not considered necessary prior to release. Additionally, there was no clinical testing required to support the medical device as the indications for use is equivalent to the predicate device. The substantial equivalence of the device is supported by the nonclinical testing." This indicates the submission relies on the substantial equivalence argument and internal software testing, not new clinical performance data for efficacy.
- Software Verification and Validation:
- Unit Test: Conducted using Google C++ Unit Test Framework on major software components for functional, performance, and algorithm analysis.
- System Test: Conducted based on "integration Test Cases" and "Exploratory Test" to identify defects.
- Acceptance Criteria for System Test: "Success standard of System Test is not finding 'Major', 'Moderate' defect."
- Defect Classification:
- Major: Impacting intended use, no workaround.
- Moderate: UI/general quality, workaround available.
- Minor: Not impacting intended use, not significant.
- Performance Test Reports: DICOM Test Report, Performance Test Report, Integration Test Report, Thin Client Server Compatibility Test Report.
In conclusion, the provided 510(k) summary primarily addresses software validation and verification to demonstrate substantial equivalence, rather than a clinical performance study with specific acceptance criteria related to diagnostic accuracy, reader performance, or a detailed description of ground truth establishment.
(161 days)
AVIEW provides CT values for pulmonary tissue from thoracic and cardiac CT datasets. The software can be used to support the physician quantitatively in the diagnosis and follow-up evaluation of CT lung tissue images by providing image segmentation of sub-structures in the lung, lobes, airways, and heart, plus inspiration/expiration registration, which enables quantitative information such as the air-trapping index and inspiration/expiration ratio, along with volumetric and structural analysis, density evaluation, and reporting tools. AVIEW is also used to store, transfer, query, and display CT datasets on-premises or in a cloud environment, allowing users to connect from various environments such as mobile devices and the Chrome browser. It characterizes nodules in the lung in a single study or over the time course of several thoracic studies. Characterizations include nodule type, location of the nodule, and measurements such as size (major axis), estimated effective diameter from the volume of the nodule, volume of the nodule, Mean HU (the average CT pixel value inside the nodule, in HU), Minimum HU, Max HU, mass (calculated from the CT pixel values), volumetric measures (Solid Major: length of the longest diameter measured in 3D for the solid portion of the nodule; Solid 2nd Major: the longest diameter of the solid part, measured in sections perpendicular to the Major axis of the solid portion of the nodule), VDT (volume doubling time), and Lung-RADS (a classification proposed to aid with findings). The system performs the measurements automatically, allowing lung nodules and measurements to be displayed, and integrates with the FDA-cleared Mevis CAD (computer-aided detection) (K043617). It also provides CAC analysis by segmenting the four main arteries (right coronary artery, left main coronary artery, left anterior descending artery, and left circumflex artery), then extracting calcium in the coronary arteries to provide the Agatston score, volume score, and mass score for the whole tree and for each segmented artery.
Based on the score, it provides CAC risk stratified by age and gender.
AVIEW is a software product that can be installed on a PC. It displays images acquired from various storage devices via DICOM 3.0, the digital imaging and communications standard in medicine, and offers functions such as reading, manipulating, analyzing, post-processing, saving, and sending images using software tools. It is intended for use in diagnostic patient imaging, for the review and analysis of CT scans. It provides features such as semi-automatic nodule management, maximal-plane, 3D, and volumetric measures, and automatic nodule detection through integration with third-party CAD. It also provides the Brock model, which calculates a malignancy score from numerical or Boolean inputs, as well as follow-up support with automated nodule matching and automatic Lung-RADS categorization, a quality assurance tool designed to standardize lung cancer screening CT reporting and management recommendations based on nodule type, size, size change, and other reported findings. It also automatically analyzes coronary artery calcification, supporting users in detecting cardiovascular disease at an early stage and reducing medical burden.
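Alongside the Agatston score, the volume and mass scores mentioned above are conventionally computed directly from the calcified voxels in the segmented arteries. A minimal sketch (the 130 HU threshold is the conventional one; the calibration factor is a placeholder, not a value from the submission, and real systems derive it from a phantom scan):

```python
import numpy as np

def calcium_volume_and_mass(ct_hu: np.ndarray, voxel_mm3: float,
                            calibration: float = 0.001) -> tuple:
    """Illustrative volume and mass scores over a segmented coronary region.

    ct_hu: HU values of voxels inside the segmented artery region
    voxel_mm3: volume of a single voxel in mm^3
    calibration: scanner-specific mass calibration factor (placeholder)
    """
    calcified = ct_hu[ct_hu >= 130]                      # conventional threshold
    volume_mm3 = calcified.size * voxel_mm3              # volume score
    mass_mg = calibration * voxel_mm3 * calcified.sum()  # mass score
    return volume_mm3, mass_mg

# Three voxels of 1 mm^3; two are calcified (150 and 200 HU).
print(calcium_volume_and_mass(np.array([100.0, 150.0, 200.0]), voxel_mm3=1.0))
```

Running this per segmented artery, and once over their union, yields the per-artery and whole-tree scores the description refers to.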
The provided FDA 510(k) summary for the AVIEW 2.0 device (K200714) primarily focuses on establishing substantial equivalence to a predicate device (AVIEW K171199, among others) rather than presenting a detailed clinical study demonstrating its performance against specific acceptance criteria.
However, based on the nonclinical performance testing section and the overall description, we can infer some aspects and present the available information regarding the device's capabilities and how it was tested. It is important to note that explicit acceptance criteria and detailed clinical study results are not fully elaborated in the provided text. The document states: "This Medical device is not new; therefore, a clinical study was not considered necessary prior to release. Additionally, there was no clinical testing required to support the medical device as the indications for use is equivalent to the predicate device."
Here's a breakdown of the requested information based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
Note: The document does not explicitly state "acceptance criteria" with numerical or performance targets. Instead, it describes general validation methods and "performance tests" that were conducted to ensure functionality and reliability. The "Reported Device Performance" here refers to the successful completion or validation of these functions.
| Feature/Function | Acceptance Criteria (Inferred from Validation) | Reported Device Performance (as per 510(k) Summary) |
|---|---|---|
| Software Functionality & Reliability | Absence of 'Major' or 'Moderate' defects. | All tests passed based on pre-determined Pass/Fail criteria. No 'Major' or 'Moderate' defects found during System Test. Minor defects, if any, did not impact intended use. |
| Unit Test (Major Software Components) | Functional test conditions, performance test conditions, algorithm analysis met. | Performed using Google C++ Unit Test Framework; included functional, performance, and algorithm analysis for image processing. Implied successful completion. |
| System Test | No 'Major' or 'Moderate' defects identified. | Conducted by installing software to hardware with recommended specifications. New errors from 'Exploratory Test' were managed. Successfully passed as no 'Major' or 'Moderate' defects were found. |
| Specific Performance Tests | (Implied: Accurate, reliable, and consistent output) | |
| Auto Lung & Lobe Segmentation | (Implied: Accurate segmentation) | Performed. The device features "Fully automatic lung/lobe segmentation using deep-learning algorithms." |
| Airway Segmentation | (Implied: Accurate segmentation) | Performed. The device features "Fully automatic airway segmentation using deep-learning algorithms." |
| Nodule Matching Experiment Using Lung Registration | (Implied: Accurate nodule matching and registration) | Performed. The device features "Follow-up support with nodule matching and comparison." |
| Validation on DVF Size Optimization with Sub-sampling | (Implied: Optimized DVF size with sub-sampling) | Performed. |
| Semi-automatic Nodule Segmentation | (Implied: Accurate segmentation) | Performed. The device features "semi-automatic nodule management" and "semi-automatic nodule measurement (segmentation)." |
| Brock Model (PANCAN) Calculation | (Implied: Accurate malignancy score calculation) | Performed. The device "provides Brocks model which calculated the malignancy score based on numerical or Boolean inputs" and "PANCAN risk calculator." |
| VDT Calculation | (Implied: Accurate volume doubling time calculation) | Performed. The device offers "Automatic calculation of VDT (volume doubling time)." |
| Lung RADS Calculation | (Implied: Accurate Lung-RADS categorization) | Performed. The device "automatically categorize Lung-RADS score" and integrates with "Lung-RADS (classification proposed to aid with findings)." |
| Validation LAA Analysis | (Implied: Accurate LAA analysis) | Performed. The device features "LAA analysis (LAA-950HU for INSP, LAA-856HU for EXP), LAA size analysis (D-Slope), and true 3D analysis of LAA cluster sizes." |
| Reliability Test for Airway Wall Measurement | (Implied: Reliable airway wall thickness measurement) | Performed. The device offers "Precise airway wall thickness measurement" and "Robust measurement using IBHB (Integral-Based Half-BAND) method" and "Precise AWT-Pi10 calculation." |
| CAC Performance (Coronary Artery Calcification) | (Implied: Accurate Agatston, volume, mass scores, and segmentation) | Performed. The device "automatically analyzes coronary artery calcification," "Extracts calcium on coronary artery to provide Agatston score, volume score and mass score," and "Automatically segments calcium area of coronary artery based on deep learning... Segments and provides overlay of four main artery." Also "Provides CAC risk based on age and gender." |
| Air Trapping Analysis | (Implied: Accurate air trapping analysis) | Performed. The device features "Air-trapping analysis using INSP/EXP registration." |
| INSP/EXP Registration | (Implied: Accurate non-rigid elastic registration) | Performed. The device features "Fully automatic INSP/EXP registration (non-rigid elastic) algorithm." |
2. Sample Size Used for the Test Set and Data Provenance
The 510(k) summary does not specify the sample size used for the test set(s) used in the performance evaluation, nor does it detail the data provenance (e.g., country of origin, retrospective or prospective). It simply mentions "software verification and validation" and "nonclinical performance testing."
3. Number of Experts Used to Establish Ground Truth and Qualifications
The document does not provide information on the number of experts used to establish ground truth or their specific qualifications for any of the nonclinical or performance tests mentioned. Given that no clinical study was performed, it is unlikely that medical experts were involved in establishing ground truth for a test set in the conventional sense for clinical performance.
4. Adjudication Method
No information is provided regarding an adjudication method. Since the document states no clinical study was conducted, adjudication by multiple experts would not have been applicable for a clinical performance evaluation.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not reported. The document explicitly states: "This Medical device is not new; therefore, a clinical study was not considered necessary prior to release. Additionally, there was no clinical testing required to support the medical device as the indications for use is equivalent to the predicate device." Therefore, there is no mention of an effect size for human readers with or without AI assistance.
6. Standalone Performance Study
Yes, a standalone (algorithm only without human-in-the-loop) performance evaluation was conducted, implied by the "Nonclinical Performance Testing" and "Software Verification and Validation" sections. The "Performance Test" section specifically lists several automatic and semi-automatic functions (e.g., "Auto Lung & Lobe Segmentation," "Airway Segmentation," "CAC Performance") that were tested for the device's inherent capability.
7. Type of Ground Truth Used
The document does not explicitly state the type of ground truth used for each specific performance test. For software components involving segmentation, it is common to use expert-annotated images (manual segmentation by experts) as ground truth for a quantitative comparison. For calculations like Agatston score, or VDT, the ground truth would likely be mathematical computations based on established formulas or reference standards applied to the segmented regions. However, this is inferred, not explicitly stated.
8. Sample Size for the Training Set
The document does not specify the sample size for any training set. It mentions the use of "deep-learning algorithms" for segmentation, which implies a training phase, but details about the training data are absent.
9. How Ground Truth for the Training Set Was Established
The document does not specify how the ground truth for any training set was established. While deep learning is mentioned for certain segmentation tasks, the methodology for creating the labeled training data is not detailed.
(166 days)
AVIEW LCS is intended for the review, analysis, and reporting of thoracic CT images for the purpose of characterizing nodules in the lung in a single study, or over the time course of several thoracic studies. Characterizations include nodule type, location of the nodule, and measurements such as size (major axis), estimated effective diameter from the volume of the nodule, the volume of the nodule, Mean HU (the average value of the CT pixels inside the nodule in HU), Minimum HU, Max HU, mass (mass calculated from the CT pixel values), and volumetric measures (Solid Major: length of the longest diameter measured in 3D for a solid portion of the nodule; Solid 2nd Major: the length of the longest diameter of the solid part, measured in sections perpendicular to the solid portion of the nodule), VDT (volume doubling time), and Lung-RADS (a classification proposed to aid with findings). The system automatically performs the measurement, allowing lung nodules and measurements to be displayed, and also integrates with the FDA-cleared MeVis CAD (computer-aided detection) (K043617).
AVIEW LCS is intended for use in diagnostic patient imaging, for the review and analysis of thoracic CT images. It provides features such as semi-automatic nodule measurement (segmentation); maximal plane, 3D, and volumetric measures; and automatic nodule detection through integration with third-party CAD. It also provides a cancer risk based on the PANCAN risk model, which calculates a malignancy score from numerical or Boolean inputs. Follow-up is supported with automated nodule matching and automatic categorization of the Lung-RADS score; Lung-RADS is a quality assurance tool designed to standardize lung cancer screening CT reporting and management recommendations, based on the type, size, size change, and other findings reported.
- Nodule measurement
  - Adding nodules by segmentation or by lines.
  - Semi-automatic nodule measurement (segmentation).
  - Maximal plane measure, 3D measure, and volumetric measure.
  - Automatic large vessel removal.
  - Provides various features calculated per nodule, such as size, major (longest diameter measured in 2D/3D), minor (shortest diameter measured in 2D/3D), maximal plane, volume, mean HU, minimum HU, and maximum HU for solid nodules, and the ratio of the longest axis of the solid to the non-solid portion for partial solid nodules.
  - Full support for the Lung-RADS workflow: US Lung-RADS and KR Lung-RADS.
  - Nodule malignancy score (PANCAN model) calculation.
  - Importing CAD results.
- Follow-up
  - Automatic retrieval of past data.
  - Follow-up support with nodule matching and comparison.
  - Automatic calculation of VDT (volume doubling time).
  - Automatic nodule detection (CADe).
  - Seamless integration with Mevis Visia (FDA 510(k) cleared).
- Lungs and lobes segmentation
  - Improved segmentation of lungs and lobes based on deep-learning algorithms.
- Report
  - PDF report generation.
  - Saves or sends the PDF report and captured images as DICOM files.
  - Provides a structured report including items such as nodule location, and allows findings to be entered per nodule.
  - Reports are generated using the results of all nodules detected so far (Lung-RADS).
- Save result
  - Saves the results in an internal format.
The provided text describes the acceptance criteria and the study conducted to prove the AVIEW LCS device meets these criteria.
1. Table of Acceptance Criteria and Reported Device Performance
The document does not present a formal table of acceptance criteria with corresponding reported device performance metrics in a single, clear format for all functionalities. Instead, it describes various tests and their success standards implicitly serving as acceptance criteria.
Based on the "Semi-automatic Nodule Segmentation" section, here's a reconstructed table for that specific functionality:
| Acceptance Criteria (Semi-automatic Nodule Segmentation) | Reported Device Performance |
|---|---|
| Measured length should be less than one voxel size compared to the size of the sphere produced. | Implied "standard judgment" met through testing with spheres of various radii (2mm, 3mm, 6mm, 7mm, 8mm, 9mm, 10mm). |
| Measured volume should be within 10% error compared to the volume of the sphere created. | Implied "standard judgment" met through testing with spheres of various radii. |
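The sphere-phantom criterion above (measured volume within 10% of the analytic sphere volume) can be illustrated with a purely synthetic check: voxelize a sphere on a 1 mm grid and compare the counted volume to 4/3·π·r³. This is a sketch of the acceptance test's logic, not the vendor's actual procedure:

```python
import math

def voxelized_sphere_volume(radius_mm: float, voxel_mm: float = 1.0) -> float:
    """Volume of a digitized sphere: count voxel centers inside the radius."""
    n = int(math.ceil(radius_mm / voxel_mm))
    count = 0
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            for k in range(-n, n + 1):
                if (i * i + j * j + k * k) * voxel_mm ** 2 <= radius_mm ** 2:
                    count += 1
    return count * voxel_mm ** 3

radius = 8.0
measured = voxelized_sphere_volume(radius)
analytic = 4.0 / 3.0 * math.pi * radius ** 3
rel_error = abs(measured - analytic) / analytic
assert rel_error < 0.10  # the 10% acceptance criterion from the table
```

At small radii the discretization error grows, which is presumably why the test sweeps radii from 2 mm upward.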
For "Nodule Matching test with Lung Registration":
| Acceptance Criteria (Nodule Matching) | Reported Device Performance |
|---|---|
| Voxel Distance error between converted position and Nodule position of the Moving image (for evaluation of DVF). | Measured for 28 locations. Acceptance is implied, as the study serves to "check the applicability of Registry" and as "Start-up verification." |
| Accuracy evaluation of DVF subsampling for size optimization. | Implied acceptable accuracy for reducing DVF capacity. |
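How the voxel-distance error is computed is not described; a plausible sketch (all names hypothetical) maps a nodule position from the fixed image through the deformation vector field (DVF) and measures the residual Euclidean distance, in voxels, to the annotated position in the moving image:

```python
import math

def voxel_distance_error(fixed_pos, dvf_at_fixed, moving_pos):
    """Residual error (in voxels) after warping a fixed-image point by the DVF.

    fixed_pos, moving_pos: (x, y, z) voxel coordinates of the same nodule
    annotated on the fixed and moving images; dvf_at_fixed: displacement
    sampled from the DVF at fixed_pos.
    """
    warped = [f + d for f, d in zip(fixed_pos, dvf_at_fixed)]
    return math.dist(warped, moving_pos)

# Perfect registration: the warped point lands exactly on the annotation.
assert voxel_distance_error((10, 20, 30), (2, -1, 0), (12, 19, 30)) == 0.0
# One-voxel residual along x.
assert voxel_distance_error((10, 20, 30), (1, -1, 0), (12, 19, 30)) == 1.0
```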
For "Software Verification and Validation - System Test":
| Acceptance Criteria (System Test) | Reported Device Performance |
|---|---|
| Not finding 'Major' defects. | Implied satisfied (device passed all tests). |
| Not finding 'Moderate' defects. | Implied satisfied (device passed all tests). |
For "Auto segmentation (based on deep-learning algorithms) test":
| Acceptance Criteria (Auto Segmentation - Korean Data) | Reported Device Performance |
|---|---|
| Auto-segmentation results identified by specialist and radiologist and classified as '2 (very good)'. | "The results of auto-segmentation are identified by a specialist and radiologist and classified as 0 (Not good), 1 (need adjustment), and 2(very good)". This suggests an evaluation was performed to categorize segmentation quality, but the specific percentage or number of "very good" classifications is not provided as a direct performance metric. |
| Acceptance Criteria (Auto Segmentation - NLST Data) | Reported Device Performance |
|---|---|
| Dice similarity coefficient between auto-segmentation and manual segmentation (performed by a radiographer and confirmed by a radiologist). | "The dice similarity coefficient is performed to check how similar they are." The specific threshold or result of the DSC is not provided. |
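The Dice similarity coefficient referenced in the table is standard: DSC = 2|A∩B| / (|A| + |B|). A minimal implementation over binary masks (illustrative, not the vendor's code):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks (flat sequences of 0/1)."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0  # two empty masks agree perfectly

auto   = [1, 1, 1, 0, 0]  # auto-segmentation, flattened
manual = [1, 1, 0, 0, 0]  # manual reference, flattened
print(dice(auto, manual))  # 2*2 / (3+2) = 0.8
```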
2. Sample Size Used for the Test Set and Data Provenance
- Semi-automatic Nodule Segmentation:
  - Sample Size: Not explicitly stated as a number of nodules or patients. Spheres of various radii (2 mm, 3 mm, 6 mm, 7 mm, 8 mm, 9 mm, 10 mm) were created for testing.
  - Data Provenance: Artificially generated spheres.
- Nodule Matching test with Lung Registration:
  - Sample Size: "28 locations" (referring to nodule locations for DVF calculation).
  - Data Provenance: "Deployed" experimental data, likely retrospective CT scans. The country of origin is not stated, but given the company's location (Republic of Korea), Korean data may be involved.
- Mevis CAD Integration test:
  - Sample Size: Not explicitly stated. The test confirms data transfer and display.
  - Data Provenance: Not specified; likely internal test data.
- Brock Score (aka PANCAN) Risk Calculation test:
  - Sample Size:
    - Former paper: "PanCan data set, 187 persons had 7008 nodules, of which 102 were malignant," and "BCCA data set, 1090 persons had 5021 nodules, of which 42 were malignant."
    - Latter paper: "4431 nodules (4315 benign nodules and 116 malignant nodules of NLST data)."
  - Data Provenance: Retrospective, from published papers: the PanCan data set, the BCCA data set, and NLST (National Lung Screening Trial) data.
- VDT Calculation test:
  - Sample Size: Not explicitly stated.
  - Data Provenance: Unit tests, implying simulated or internally generated data for calculation verification.
- Lung RADS Calculation test:
  - Sample Size: "10 cases were extracted."
  - Data Provenance: Retrospective, "from the Lung-RADS survey table provided by the Korean Society of Thoracic Radiology."
- Auto segmentation (based on deep-learning algorithms) test:
  - Korean Data: "192 suspected COPD patients" (retrospective, implicitly from Korea given the source, the Korean Society of Thoracic Radiology).
  - NLST Data: "80 patients' chest CT data who were enrolled in NLST" (retrospective, from the National Lung Screening Trial).
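For context on the Lung RADS Calculation test listed above: Lung-RADS assigns management categories from nodule type, size, and growth. A heavily simplified sketch for baseline solid nodules only, using the published Lung-RADS v1.1 size thresholds (the document does not say which Lung-RADS version the device implements, so treat these cutoffs as an assumption):

```python
def lung_rads_baseline_solid(diameter_mm: float) -> str:
    """Lung-RADS category for a solid nodule at baseline screening, by size only.

    Simplified illustration: real Lung-RADS also considers nodule type,
    growth between scans, spiculation, and other findings.
    """
    if diameter_mm < 6:
        return "2"    # benign appearance or behavior
    if diameter_mm < 8:
        return "3"    # probably benign
    if diameter_mm < 15:
        return "4A"   # suspicious
    return "4B"       # very suspicious

print(lung_rads_baseline_solid(7.0))  # "3"
```

A unit test of the kind described ("10 cases were extracted") would compare such categorizations against the survey-table answers.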
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
- Auto segmentation (based on deep-learning algorithms) test - Korean Data:
  - Number of Experts: Not explicitly stated, but "a specialist and radiologist" suggests at least two experts.
  - Qualifications: "specialist and radiologist." No specific years of experience or sub-specialty are mentioned.
- Auto segmentation (based on deep-learning algorithms) test - NLST Data:
  - Number of Experts: At least two: an "experienced radiographer" who performed the segmentation and an "experienced radiologist" who confirmed it.
  - Qualifications: "experienced radiographer" and "experienced radiologist." No specific years of experience or sub-specialty are mentioned.
- Brock Score (aka PANCAN) Risk Calculation test: Ground truth was established through the referenced studies; details on the experts involved in those studies are not provided in this document.
- Lung RADS Calculation test: Ground truth was implicitly established by the "Lung-RADS survey table provided by the Korean Society of Thoracic Radiology." The experts who created this survey table are not detailed here.
4. Adjudication Method for the Test Set
The document does not explicitly describe a formal adjudication method (e.g., 2+1, 3+1) for establishing ground truth for any of the tests.
- For the Auto segmentation (based on deep-learning algorithms) test, the "Korean Data" section mentions results are "identified by a specialist and radiologist and classified." This suggests independent review or consensus, but no specific adjudication rule is given. For the "NLST Data" section, manual segmentation was "performed by experienced radiolographer and confirmed by experienced radiologist," indicating a two-step process of creation and verification, rather than a conflict resolution method.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No Multi-Reader Multi-Case (MRMC) comparative effectiveness study comparing human readers with AI assistance versus without AI assistance was mentioned or conducted in this document. The device is for "review and analysis" and "reporting," and integrates with a third-party CAD, but its direct impact on human reader performance through an MRMC study is not detailed.
6. Standalone Performance Study (Algorithm only without human-in-the-loop performance)
Yes, standalone performance studies were conducted for several functionalities, focusing on the algorithm's performance without direct human-in-the-loop tasks:
- Semi-automatic Nodule Segmentation: The test on artificial spheres evaluates the algorithm's measurement accuracy directly.
- Nodule Matching test with Lung Registration: Evaluates the algorithm's ability to calculate DVF and match nodules.
- Brock Score (aka. PANCAN) Risk Calculation test: Unit tests comparing calculated values from the algorithm against an Excel sheet.
- VDT Calculation test: Unit tests to confirm calculation.
- Lung RADS Calculation test: Unit tests to confirm implementation accuracy against regulations.
- Auto segmentation (based on deep-learning algorithms) test: This is a standalone evaluation of the algorithm's segmentation performance against expert classification or manual segmentation (NLST data).
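The Brock (PanCan) model mentioned above is a logistic regression over clinical and nodule features, and a unit test of the kind described would compare a reimplementation against reference values (the "Excel sheet"). The sketch below shows only the logistic structure; the coefficient values are placeholders, not the published Brock coefficients:

```python
import math

def brock_like_probability(features, coefficients, intercept):
    """Malignancy probability from a logistic model over nodule features.

    features / coefficients: dicts keyed by feature name. The coefficients
    used below are hypothetical placeholders; the real Brock model's
    published coefficients would be substituted and verified against a
    reference table.
    """
    z = intercept + sum(coefficients[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

coeffs = {"age": 0.03, "size_mm": 0.1, "spiculation": 0.8}  # placeholders only
p_small = brock_like_probability({"age": 60, "size_mm": 4, "spiculation": 0}, coeffs, -6.0)
p_large = brock_like_probability({"age": 60, "size_mm": 12, "spiculation": 1}, coeffs, -6.0)
assert 0.0 < p_small < p_large < 1.0  # larger, spiculated nodule scores higher
```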
7. Type of Ground Truth Used
- Semi-automatic Nodule Segmentation: Known physical properties of artificially generated spheres (e.g., precise radius and volume).
- Nodule Matching test with Lung Registration: Nodule positions marked on "Fixed image" and "Moving image," implying expert identification on real CT scans.
- Brock Score (aka. PANCAN) Risk Calculation test: Reference values from published literature (PanCan, BCCA, NLST data) which themselves are derived from ground truth of malignancy (pathology, clinical follow-up).
- VDT Calculation test: Established mathematical formulas for VDT.
- Lung RADS Calculation test: Lung-RADS regulations and a "Lung-RADS survey table provided by the Korean Society of Thoracic Radiology."
- Auto segmentation (based on deep-learning algorithms) test - Korean Data: Expert classification ("0 (Not good), 1 (need adjustment), and 2(very good)") by a specialist and radiologist.
- Auto segmentation (based on deep-learning algorithms) test - NLST Data: Manual segmentation performed by an experienced radiographer and confirmed by an experienced radiologist.
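The "established mathematical formulas" for VDT refer to the standard exponential-growth doubling time, VDT = Δt · ln 2 / ln(V₂/V₁). A direct transcription:

```python
import math

def volume_doubling_time(v1_mm3: float, v2_mm3: float, delta_days: float) -> float:
    """Volume doubling time in days, assuming exponential growth between scans."""
    if v2_mm3 == v1_mm3:
        return math.inf  # no growth: doubling time is undefined/infinite
    return delta_days * math.log(2) / math.log(v2_mm3 / v1_mm3)

# A nodule that exactly doubles over 100 days has a VDT of 100 days.
print(volume_doubling_time(100.0, 200.0, 100.0))  # 100.0
```

A shrinking nodule (V₂ < V₁) yields a negative value, which reporting tools typically flag separately.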
8. Sample Size for the Training Set
The document does not explicitly state the sample size used for the training set(s) for the deep-learning algorithms or other components of the AVIEW LCS. It only details the test sets.
9. How the Ground Truth for the Training Set Was Established
Since the training set size is not provided, the method for establishing its ground truth is also not detailed in this document. However, given the nature of the evaluation for the test set (expert marking, classification), it is highly probable that similar methods involving expert radiologists or specialists would have been used to establish ground truth for any training data.
(142 days)
The AVIEW Modeler is intended for use as an image review and segmentation system that operates on DICOM imaging information obtained from a medical scanner. It is also used as a pre-operative software for surgical planning. 3D printed models generated from the output file are for visualization and educational purposes only and not for diagnostic use.
The AVIEW Modeler is a software product that can be installed on a separate PC. It displays patient medical images on screen, acquired from an image acquisition device; images on the screen can be reviewed, edited, saved, and received.
- Various display functions
  - Thickness MPR, oblique MPR, shaded volume rendering, and shaded surface rendering.
  - Hybrid rendering with simultaneous volume rendering and surface rendering.
- Easy-to-use manual and semi-automatic segmentation methods
  - Brush, paint-bucket, sculpting, thresholding, and region growing.
  - Magic cut (based on the random walk algorithm).
- Morphological and Boolean operations for mask generation.
- Mesh generation and manipulation algorithms
  - Mesh smoothing, cutting, fixing, and Boolean operations.
- Exports 3D-printable models in open formats, such as STL.
- DICOM 3.0 compliant (C-STORE, C-FIND).
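STL, the open export format mentioned above, has a simple ASCII form; a minimal writer for a single triangle shows the structure (a sketch of the file format itself, not of the AVIEW Modeler exporter):

```python
def triangle_to_ascii_stl(name, normal, v0, v1, v2):
    """Serialize one triangle as ASCII STL (facet normal + three vertices)."""
    fmt = lambda p: " ".join(f"{c:.6e}" for c in p)
    return "\n".join([
        f"solid {name}",
        f"  facet normal {fmt(normal)}",
        "    outer loop",
        f"      vertex {fmt(v0)}",
        f"      vertex {fmt(v1)}",
        f"      vertex {fmt(v2)}",
        "    endloop",
        "  endfacet",
        f"endsolid {name}",
    ])

stl = triangle_to_ascii_stl("demo", (0, 0, 1), (0, 0, 0), (1, 0, 0), (0, 1, 0))
print(stl.splitlines()[0])  # "solid demo"
```

Real exports list one `facet … endfacet` block per mesh triangle; slicing software such as the Objet Studio workflow mentioned below consumes this (or the denser binary STL variant) directly.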
The provided text describes the 510(k) Summary for AVIEW Modeler, focusing on its substantial equivalence to predicate devices, rather than a detailed performance study directly addressing specific acceptance criteria. The document emphasizes software verification and validation activities.
Therefore, I cannot fully complete all sections of your request concerning acceptance criteria and device performance based solely on the provided text. However, I can extract information related to software testing and general conclusions.
Here's an attempt to answer your questions based on the available information:
1. A table of acceptance criteria and the reported device performance
The document does not provide a quantitative table of acceptance criteria with corresponding performance metrics like accuracy, sensitivity, or specificity for the segmentation features. Instead, it discusses the successful completion of various software tests.
| Acceptance Criteria (Implied) | Reported Device Performance |
|---|---|
| Functional Adequacy | "passed all of the tests based on pre-determined Pass/Fail criteria." |
| Performance Adequacy | Performance tests conducted "according to the performance evaluation standard and method that has been determined with prior consultation between software development team and testing team" to check non-functional requirements. |
| Reliability | System tests concluded with no 'Major' or 'Moderate' defects found. |
| Compatibility | STL data created by AVIEW Modeler was "imported into Stratasys printing Software, Object Studio to validate the STL before 3d-printing with Objet260 Connex3." (implies successful validation for 3D printing) |
| Safety and Effectiveness | "The new device does not introduce a fundamentally new scientific technology, and the nonclinical tests demonstrate that the device is safe and effective." |
2. Sample sizes used for the test set and the data provenance
The document does not specify the sample size (number of images or patients) used for any of the tests (Unit, System, Performance, Compatibility). It also does not explicitly state the country of origin of the data or whether the data was retrospective or prospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
The document does not provide any information about the number or qualifications of experts used to establish ground truth for a test set. The focus is on internal software validation and comparison to a predicate device.
4. Adjudication method for the test set
The document does not mention any adjudication method for a test set, as it does not describe a clinical performance study involving human readers.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and the effect size of how much human readers improve with AI vs without AI assistance
No, the provided text does not describe an MRMC comparative effectiveness study involving human readers with or without AI assistance. The study described is a software verification and validation, concluding substantial equivalence to a predicate device.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done
The document describes various software tests (Unit, System, Performance, Compatibility) which could be considered forms of standalone testing for the algorithm's functionality and performance. However, it does not present quantitative standalone performance metrics typical of an algorithm-only study (e.g., precision, recall, Dice score for segmentation). It focuses on internal software quality and compatibility.
7. The type of ground truth used
The type of "ground truth" used is not explicitly defined in terms of clinical outcomes or pathology. For the software validation, the "ground truth" would likely refer to pre-defined correct outputs or expected behavior of the software components, established by the software development and test teams. For example, for segmentation, it would be the expected segmented regions based on the algorithm's design and previous validation efforts (likely through comparison to expert manual segmentations or another validated method, though not detailed here).
8. The sample size for the training set
The document does not mention a training set or its sample size. This is a 510(k) summary for a medical image processing software (AVIEW Modeler), and while it mentions a "Magic cut (based on Randomwalk algorithm)," it does not describe an AI model that underwent a separate training phase with a specific dataset, nor does it classify the device as having "machine learning" capabilities in the context of FDA regulation. The focus is on traditional software validation.
9. How the ground truth for the training set was established
As no training set is mentioned (see point 8), there is no information on how its ground truth would have been established.
(555 days)
AVIEW provides CT values for pulmonary tissue from CT thoracic datasets. This software can be used to support the physician quantitatively in the diagnosis and follow-up evaluation of CT lung tissue images by providing image segmentation of sub-structures in the left and right lung (e.g., the five lobes and airway), volumetric and structural analysis, density evaluations, and reporting tools. AVIEW is also used to store, transfer, inquire, and display CT data sets. AVIEW is not meant for primary image interpretation in mammography.
AVIEW is a software product that can be installed on a PC. It displays images acquired from various storage devices via DICOM 3.0, the digital imaging and communications standard in medicine, and offers functions such as reading, manipulating, analyzing, post-processing, saving, and sending images using the software tools.
The provided text describes the AVIEW software, a medical device for processing CT thoracic datasets, and its substantial equivalence to predicate devices. However, the document does not contain the specific details required to fully address your request regarding acceptance criteria and the study that proves the device meets them.
Here's a breakdown of what information is available and what is missing:
Information Available:
- Indications for Use: AVIEW "provides CT values for pulmonary tissue from CT thoracic datasets. This software can be used to support the physician quantitatively in the diagnosis, followup evaluation of CT lung tissue images by providing image segmentation of sub-structures in the left and right lung (e.g., the five lobes and airway), volumetric and structural analysis, density evaluations and reporting tools. AVIEW is also used to store, transfer, inquire and display CT data sets. AVIEW is not meant for primary image Interpretation in mammography."
- Performance Data: "Verification, validation and testing activities were conducted to establish the performance, functionality and reliability characteristics of the modified devices. The device passed all of the tests based on pre-determined Pass/Fail criteria."
- Tests Conducted:
- Unit test
- System test
- DICOM test
- LAA analysis test
- LAA size analysis test
- Airway wall measurement test
- Reliability test
- CT image compatibility test
- Conclusion: The device is deemed "substantially equivalent to the predicate device" based on "technical characteristics, general functions, application, and intended use," and "nonclinical tests demonstrate that the device is safe and effective."
Information Missing (and why, based on the document):

- A table of acceptance criteria and the reported device performance: While various tests are listed (e.g., LAA analysis test, Airway wall measurement test), the document explicitly states these are "nonclinical tests." It does not provide specific quantitative acceptance criteria or corresponding reported device performance values for these tests. The nature of these tests appears to be functional and reliability-focused rather than clinical performance metrics. For example, it does not state "AVIEW achieved X% accuracy for LAA analysis against ground truth Y" or "Airway wall measurement deviation was within Z mm."
- Sample size used for the test set and the data provenance: The document mentions "CT thoracic datasets" but does not specify the sample size for any test set or the provenance (e.g., country of origin, retrospective/prospective nature) of the data used for testing.
- Number of experts used to establish the ground truth for the test set and their qualifications: The document states, "Results produced by the software tools are dependent on the interpretation of trained and licensed radiologists, clinicians and referring physicians as an adjunctive to standard radiology practices for diagnosis." However, it does not specify how many experts, if any, were used to establish ground truth for the test set, nor their specific qualifications, for the performance testing cited.
- Adjudication method for the test set: No information is provided regarding any adjudication methods (e.g., 2+1, 3+1) used for establishing ground truth for the test set.
- Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done, and the effect size of how much human readers improve with AI vs without AI assistance: The document explicitly states that AVIEW is "not meant for primary image Interpretation in mammography" and that its results "are dependent on the interpretation of trained and licensed radiologists, clinicians and referring physicians as an adjunctive to standard radiology practices for diagnosis." This suggests it is an assistive tool, but no MRMC study comparing human readers with and without AI assistance, or any effect size, is mentioned. The "Performance Data" section focuses on "nonclinical tests" of software functionality.
- Whether a standalone (i.e., algorithm only without human-in-the-loop performance) evaluation was done: The details provided about the "LAA analysis test," "LAA size analysis test," and "Airway wall measurement test" imply that standalone algorithmic performance was assessed in these nonclinical tests. However, specific performance metrics (e.g., accuracy, precision, recall) from a standalone evaluation are not provided.
- The type of ground truth used: For the mentioned performance tests (e.g., LAA, airway wall measurement), the type of ground truth is not explicitly specified. It is implied that these are technical validations against known values or established methods, but whether this involved expert consensus on clinical cases, pathology, or outcomes data is not detailed.
- The sample size for the training set: No information is provided about a training set or its size; the document refers to "Verification, validation and testing activities" as "nonclinical tests" demonstrating substantial equivalence, not a machine learning model's development.
- How the ground truth for the training set was established: Since no training-set information is provided, this cannot be answered.
In summary, the document serves as an FDA 510(k) clearance letter and summary, which primarily focuses on demonstrating "substantial equivalence" to predicate devices through technical characteristics and "nonclinical tests" for functionality and reliability. It does not provide the detailed clinical performance study data that would include specific acceptance criteria, sample sizes (for test or training sets), expert qualifications, or ground truth establishment methods typical for AI-based diagnostic/assistive tools evaluated for quantitative clinical outcomes.