Search Results
Found 17 results
510(k) Data Aggregation
(110 days)
AVIEW
AVIEW provides CT values for pulmonary tissue from CT thoracic and cardiac datasets. This software can be used to support the physician by providing quantitative analysis of CT images through image segmentation of sub-structures in the lung, lobe, airways, fissure completeness, cardiac, density evaluation, and reporting tools. AVIEW is also used to store, transfer, inquire, and display CT data sets on-premises and in a cloud environment, allowing users to connect from various environments such as mobile devices and Chrome browsers. It converts the sharp kernel to a soft kernel for quantitative analysis when segmenting low attenuation areas of the lung, and characterizes nodules in the lung in a single study or over the time course of several thoracic studies. Characterizations include nodule type, location of the nodule, and measurements such as size (major axis), estimated effective diameter derived from the volume of the nodule, the volume of the nodule, Mean HU (the average value of the CT pixels inside the nodule, in HU), Minimum HU, Max HU, mass (calculated from the CT pixel values), volumetric measures (Solid major: length of the longest diameter measured in 3D for the solid portion of the nodule; Solid 2nd Major: the size of the longest diameter of the solid part, measured in sections perpendicular to the major axis of the solid portion of the nodule), VDT (volume doubling time), and Lung-RADS (a classification proposed to aid with findings). The system automatically performs the measurements, allowing lung nodules and measurements to be displayed, and integrates with the FDA-cleared AVIEW Lung Nodule CAD (computer-aided detection) (K221592). It also provides the Agatston score, volume score, and mass score for the whole heart and for each artery by segmenting the four main arteries (right coronary artery, left main coronary, left anterior descending, and left circumflex artery). Based on the calcium score, it provides CAC risk based on age and gender. The device is indicated for adult patients only.
The AVIEW is a software product that can be installed on a PC. It displays images retrieved from various storage devices using DICOM 3.0, the digital imaging and communications standard in medicine. It also offers functions such as reading, manipulating, analyzing, post-processing, saving, and sending images using software tools, and is intended for use in the quantitative analysis of CT scans. It provides features such as segmentation of the lung, fissure completeness, semi-automatic nodule management, maximal plane measures and volumetric measures, and automatic nodule detection through integration with third-party CAD. It also provides the Brock model, which calculates a malignancy score based on numerical or Boolean inputs, and follow-up support with automated nodule matching and automatic Lung-RADS categorization, a quality assurance tool designed to standardize lung cancer screening and management recommendations based on type, size, size change, and other reported findings. It also provides a calcium score by automatically analyzing the coronary arteries.
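For context on the volume doubling time (VDT) measurement listed above: VDT is conventionally derived from two nodule volume measurements taken a known interval apart, assuming exponential growth. Below is a minimal sketch of that standard formula; the document does not describe AVIEW's exact implementation, so the function and variable names are illustrative only.

```python
import math

def volume_doubling_time(v1_mm3, v2_mm3, interval_days):
    """VDT under an exponential-growth assumption:
    VDT = interval * ln(2) / ln(V2 / V1).
    Returns a positive value only when the nodule has grown (V2 > V1)."""
    return interval_days * math.log(2) / math.log(v2_mm3 / v1_mm3)

# Example: a nodule growing from 300 mm^3 to 450 mm^3 over 90 days
print(round(volume_doubling_time(300, 450, 90), 1))  # ~153.9 days
```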
The provided text is a 510(k) premarket notification letter and summary for a medical device called "AVIEW." This document primarily asserts substantial equivalence to a predicate device and notes general software changes rather than providing detailed acceptance criteria and study results for specific performance metrics that would typically be found in performance study reports.
Specifically, the document states: "This Medical device is not new; therefore, a clinical study was not considered necessary prior to release. Additionally, there was no clinical testing required to support the medical device as the indications for use is equivalent to the predicate device. The substantial equivalence of the device is supported by the non-clinical testing." This means that the submission does not include information about a standalone or MRMC study designed to prove the device meets specific performance acceptance criteria for its analytical functions.
Therefore, I cannot provide the requested information from the given text as the detailed performance study data is not present. The document focuses on regulatory equivalence based on technological characteristics and intended use being similar to a predicate device, rather than providing new performance study data.
(77 days)
AVIEW CAC
AVIEW CAC provides quantitative analysis of calcified plaques in the coronary arteries using non-contrast/non-gated Chest CT scans. It enables the calculation of the Agatston score for coronary artery calcification, segmenting and evaluating the right coronary artery and left coronary artery. It also provides risk stratification based on calcium score, gender, and age, offering percentile-based risk categories according to established guidelines. Designed for healthcare professionals, including radiologists and cardiologists, AVIEW CAC supports storing, transferring, inquiring, and displaying CT data sets on-premises, facilitating access through mobile devices and Chrome browsers. AVIEW CAC analyzes existing non-contrast/non-gated Chest CT studies that include the heart of adult patients above the age of 40. The device's use should be limited to CT scans acquired on General Electric (GE) or its subsidiaries' (e.g., GE Healthcare) equipment; use of the device with CT scans from other manufacturers has not been validated and is not recommended.
The AVIEW CAC is a software product that can be installed on a PC. It displays images retrieved from various storage devices using DICOM 3.0, the digital imaging and communications standard in medicine. It also offers functions such as reading, manipulating, analyzing, post-processing, saving, and sending images using software tools, and is intended for use in the quantitative analysis of CT scans. It also provides a calcium score by automatically analyzing the coronary arteries from the segmented arteries.
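As background on the Agatston score referenced in these entries: it is conventionally computed per calcified lesion and per slice as the lesion area (mm²) multiplied by a weight determined by the lesion's peak attenuation, then summed over all lesions and slices. The sketch below assumes lesions above 130 HU have already been segmented; it illustrates the conventional scoring rule, not AVIEW CAC's internal implementation, which the document does not describe.

```python
import numpy as np

def agatston_score(lesion_slices, pixel_area_mm2):
    """Conventional Agatston scoring.
    lesion_slices: list of (hu_slice, lesion_mask) pairs, one per calcified
    lesion cross-section (lesion_mask is a boolean 2D array of pixels >= 130 HU).
    pixel_area_mm2: in-plane pixel area in mm^2."""
    def density_weight(peak_hu):
        if peak_hu >= 400:
            return 4
        if peak_hu >= 300:
            return 3
        if peak_hu >= 200:
            return 2
        return 1  # 130-199 HU
    total = 0.0
    for hu, mask in lesion_slices:
        if mask.any():
            area_mm2 = mask.sum() * pixel_area_mm2
            total += area_mm2 * density_weight(hu[mask].max())
    return total
```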
The provided text indicates that the device, AVIEW CAC, calculates the Agatston score for coronary artery calcification from non-contrast/non-gated Chest CT scans. It segments and evaluates the right and left coronary arteries and provides risk stratification based on calcium score, gender, and age, using percentile-based risk categories by established guidelines. The device is for healthcare professionals (radiologists and cardiologists) and analyzes existing CT studies from adult patients over 40 years old, acquired on GE equipment.
The document states that a clinical study was not considered necessary and that non-clinical testing supports the substantial equivalence of the device to its predicate. However, it does not provide specific acceptance criteria or an explicit study description with performance metrics for the AVIEW CAC device. It states that the device is substantially equivalent to a predicate device (K233211, also named AVIEW CAC) and that the substantial equivalence is supported by non-clinical testing.
Therefore, many of the requested details about acceptance criteria, specific performance metrics, sample sizes, expert involvement, and ground truth establishment are not present in the provided text.
Based on the available information:
1. A table of acceptance criteria and the reported device performance:
The document does not explicitly state acceptance criteria or report specific device performance metrics in a tabular format. It generally states that "the results of the software verification and validation tests concluded that the proposed device is substantially equivalent" and "the nonclinical tests demonstrate that the device is safe and effective."
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective):
This information is not provided in the document.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience):
This information is not provided in the document.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
This information is not provided in the document.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance:
A MRMC comparative effectiveness study is not mentioned in the document. The document explicitly states: "This Medical device is not new; therefore, a clinical study was not considered necessary prior to release. Additionally, there was no clinical testing required to support the indications for use is equivalent to the predicate device. The substantial equivalence of the device is supported by the non-clinical testing."
6. If a standalone study (i.e., algorithm only, without human-in-the-loop performance) was done:
The document implies that the "nonclinical tests" evaluated the device's performance, which would typically involve standalone algorithm performance. However, specific details about such a study or its results are not provided. The device's function is centered on automatic analysis (calculation of Agatston score, segmenting and evaluating arteries).
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
This information is not provided in the document.
8. The sample size for the training set:
This information is not provided in the document.
9. How the ground truth for the training set was established:
This information is not provided in the document.
(197 days)
Ambu® aScope 5 Cysto HD (Standard Deflection); Ambu® aScope 5 Cysto HD (Reverse Deflection); Ambu® aView
Ambu® aScope 5 Uretero (Standard Deflection); Ambu® aScope 5 Uretero (Reverse Deflection); Ambu® aView
Ambu® aScope™ 5 Uretero is a sterile, single-use, flexible, digital video ureteroscope intended to be used for endoscopic access and visual guidance in the upper urinary tract. Ambu® aScope™ 5 Uretero is intended to be used with the compatible Ambu displaying unit and can be used in conjunction with endoscopic instruments via its working channel. Ambu® aScope™ 5 Uretero is intended for patients requiring retrograde (transurethral) and/or antegrade (percutaneous) ureteroscopy procedures for visualization and examination with a flexible ureteroscope, and for removal of renal and ureter calculi.
Ambu® aView™ 2 Advance is intended to display live imaging data from compatible Ambu visualization devices.
The Ambu® aScope™ 5 Ureteroscopy System is a combination of an endoscope, Ambu® aScope™ 5 Uretero and a displaying unit, the Ambu® aView™ 2 Advance.
The Ambu® aScope™ 5 Uretero is a sterile, digital video ureteroscope. The Ambu® aScope™ 5 Uretero can be used in retrograde (transurethral) and/or antegrade (percutaneous) ureteroscopy procedures for providing endoscopic access and visual guidance to and in the upper urinary tract.
The Ambu® aScope™ 5 Uretero is available in one size and can be operated by either the left or right hand. The Ambu® aScope™ 5 Uretero can be used with endoscopic accessories. The working channel system allows the passage of endoscopic instruments and the instillation/suction of fluids. The Ambu® aScope™ 5 Uretero is intended to be used with a compatible Ambu displaying unit, the Ambu® aView™ 2 Advance.
The Ambu® aView™ 2 Advance, also referred to as displaying unit, is a non-sterile, reusable digital monitor intended to display live imaging data from Ambu visualization devices. The product consists of a 12.8″ LCD screen. The displaying unit is powered by a rechargeable lithium-ion battery and includes a power supply with region-specific power cable.
The Ambu® aView™ 2 Advance has the following physical and performance characteristics:
- Can process and display live imaging data from Ambu® aScope™ 5 Uretero to a monitor
- Can record, store and transport image data from Ambu® aScope™ 5 Uretero
- Is a portable device with an integrated monitor, and the possibility to connect to an external monitor
This FDA 510(k) summary (K242108) is a clearance for an updated version of the Ambu® aScope™ 5 Uretero and Ambu® aView™ 2 Advance, primarily focusing on additional compatibility between these two devices. It does not provide a study to prove the device meets acceptance criteria in the traditional sense of a clinical or comparative effectiveness study against a human baseline. Instead, it describes non-clinical performance testing to demonstrate that the updated device combination remains safe and effective and substantially equivalent to its predicate devices.
Therefore, many of the requested categories (like MRMC study, expert ground truth, sample size for test sets/training sets in a clinical context) are not applicable or cannot be extracted from this specific document, as the summary focuses on technical performance and equivalence, not a study of diagnostic accuracy or human performance improvement.
Here's a breakdown of the information that can be extracted or inferred:
1. Table of Acceptance Criteria and the Reported Device Performance:
The document lists various performance characteristics that were tested. The overall reported performance is that the device "performed as expected and met the test specifications set." Specific numerical acceptance criteria or performance metrics are not provided in this summary.
Acceptance Criteria Category | Reported Device Performance (as stated in document) |
---|---|
Optical performance | Performed as expected and met test specifications |
- Field of view and Direction of view | Performed as expected and met test specifications |
- Sharpness and Depth of field | Performed as expected and met test specifications |
- Image intensity uniformity | Performed as expected and met test specifications |
- Geometric distortion | Performed as expected and met test specifications |
- Resolution | Performed as expected and met test specifications |
- Color performance | Performed as expected and met test specifications |
- Noise performance | Performed as expected and met test specifications |
- Dynamic range | Performed as expected and met test specifications |
Software verification testing | Performed as expected and met test specifications |
Electrical Safety (IEC 60601-1:2005+A1:2012+A2:2020 Ed. 3.2 and IEC 60601-2-18:2009 Ed. 3.0) | Performed as expected and met test specifications |
Electromagnetic Compatibility (IEC 60601-1-2:2014+A1:2020 Ed. 4.1) | Performed as expected and met test specifications |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective):
This document describes non-clinical performance testing of the device itself (hardware and software), not a study with patient data. Therefore, the concepts of "sample size" for a test set (in a clinical context) or "data provenance" (country/retrospective/prospective) are not applicable. The testing was likely conducted in a controlled lab environment.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
N/A. As this involved technical performance testing of the device, not a diagnostic or clinical evaluation, the concept of "experts" establishing ground truth in a clinical sense is not relevant here. Engineering and testing personnel would have verified performance against technical specifications.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:
N/A. Adjudication methods like 2+1 or 3+1 are used for clinical studies with expert reviewers. This was technical performance testing.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance:
No. This document does not mention or describe an MRMC comparative effectiveness study. The device is a ureteroscope system, not an AI-assisted diagnostic tool for interpretation.
6. If a standalone study (i.e., algorithm only, without human-in-the-loop performance) was done:
N/A. This is a medical device for direct visualization, not an algorithm being evaluated for standalone performance.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc):
The "ground truth" for the non-clinical tests would be the established engineering and optical specifications/standards (e.g., specific measurable values for resolution, field of view, electrical safety limits set by IEC standards, etc.) that the device was designed to meet.
8. The sample size for the training set:
N/A. This document pertains to the clearance of a physical medical device and its compatibility, not a machine learning model that requires a training set.
9. How the ground truth for the training set was established:
N/A. Not applicable for the reasons stated above.
(183 days)
AVIEW CAC
AVIEW CAC provides quantitative analysis of calcified plaques in the coronary arteries using non-contrast/non-gated Chest CT scans. It enables the calculation of the Agatston score for coronary artery calcification, segmenting and evaluating the right coronary artery and left coronary artery. It also provides risk stratification based on calcium score, gender, and age, offering percentile-based risk categories according to established guidelines. Designed for healthcare professionals, including radiologists and cardiologists, AVIEW CAC supports storing, inquiring, and displaying CT data sets on-premises, facilitating access through mobile devices and Chrome browsers. AVIEW CAC analyzes existing non-contrast/non-gated Chest CT studies that include the heart of adult patients above the age of 40. The device's use should be limited to CT scans acquired on General Electric (GE) or its subsidiaries' (e.g., GE Healthcare) equipment; use of the device with CT scans from other manufacturers has not been validated and is not recommended.
The AVIEW CAC is a software product that can be installed on a PC. It displays images retrieved from various storage devices using DICOM 3.0, the digital imaging and communications standard in medicine. It also offers functions such as reading, manipulating, analyzing, post-processing, saving, and sending images using software tools, and is intended for use in the quantitative analysis of CT scans. It also provides a calcium score by automatically analyzing the coronary arteries from the segmented arteries.
The provided text describes the acceptance criteria and the study conducted for the AVIEW CAC device.
Here's the breakdown of the information requested:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria for the quantitative analysis of calcified plaques are primarily based on the Intraclass Correlation Coefficient (ICC) of the Agatston score against a ground truth and a predicate device.
Acceptance Criteria | Reported Device Performance (AVIEW CAC vs. Ground Truth) | Reported Device Performance (AVIEW CAC vs. Predicate Device) |
---|---|---|
ICC > 0.8 (implied target for strong agreement) | Agatston Score ICC (95% CI): Total 0.896 (0.857, 0.925); LCA 0.927 (0.899, 0.947); RCA 0.840 (0.778, 0.884) | Agatston Score ICC (95% CI): Total 0.939 (0.916, 0.956); LCA 0.955 (0.938, 0.968); RCA 0.887 (0.844, 0.918) |

Note: The document does not state a numerical acceptance threshold explicitly. An ICC above 0.8, which usually signifies strong agreement, appears to be the implied target, and the reported ICC values are all above this implied threshold.
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size:
- 150 CSCT (gated) cases
- 150 Chest CT (non-gated) cases
- Additionally, 280 datasets collected from multiple institutions were used for a separate "MI functionality test report" which also evaluated correlation.
- Data Provenance: The document does not explicitly state the country of origin. The test cases were derived from "multiple institutions". It is implied to be retrospective as the device analyzes "existing" non-contrast/non-gated chest CT studies.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
The document does not specify the number of experts used or their detailed qualifications (e.g., "radiologist with 10 years of experience") for establishing the ground truth.
4. Adjudication Method for the Test Set
The document does not explicitly describe an adjudication method (such as 2+1, 3+1) for establishing the ground truth. It simply refers to "ground truth" without detailing its consensus process.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and If So, What Was the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance
A Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done. The study focuses on the standalone performance of the algorithm against a defined ground truth and comparison against a predicate device, not on human reader performance with or without AI assistance.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
Yes, a standalone performance study was done. The performance data section explicitly states, "we evaluated the agreement in coronary calcium scoring between the subject device and the ground truth" and "the correlation coefficient between the AVIEW CAC automatic analysis results of the chest CT based on the heart CT and the Agatston scores was over 90%". This indicates the algorithm's performance without human intervention.
7. The Type of Ground Truth Used
The ground truth used was Agatston scores for coronary artery calcification. The document does not specify if this ground truth was established by expert consensus of human readers, pathology, or outcomes data. However, the comparison is made to "Ground Truth" for Agatston Score measurements, which implies a highly reliable, perhaps manually derived or reference Agatston score.
8. The Sample Size for the Training Set
The document does not provide the sample size for the training set. It only mentions the test set sizes.
9. How the Ground Truth for the Training Set Was Established
The document does not describe how the ground truth for the training set was established. It only refers to deep learning for automatic segmentation but does not detail the process for creating the ground truth data used to train the deep learning model.
(50 days)
Ambu® aScope 5 Broncho 4.2/2.2; Ambu® aScope 5 Broncho 2.7/1.2; Ambu® aView 2 Advance
aScope 5 Broncho is intended for endoscopic procedures and examination within the airways and tracheobronchial tree. aScope 5 Broncho is intended to provide visualization via a compatible Ambu displaying unit, and to allow passing of endotherapy instruments via its working channel.
The Ambu® aView™ 2 Advance is intended to display live imaging data from compatible Ambu visualization devices.
The Ambu® aScope™ 5 Broncho System is a combination of an endoscope, Ambu® aScope™ 5 Broncho, and a compatible displaying unit, Ambu® aView™ 2 Advance.
Ambu® aScope™ 5 Broncho is a sterile, single-use flexible bronchoscope designed to conduct endoscopic procedures and examination of the airways and tracheobronchial tree. The endoscope is available in two size configurations: Ambu® aScope™ 5 Broncho 4.2/2.2 and Ambu® aScope™ 5 Broncho 2.7/1.2. Apart from the size, the endoscopes share a similar design. The insertion portion is inserted into the patient airway through the mouth, nose, endotracheal tube, tracheostomy tube, etc. There is a working channel system within the endoscope for use with endotherapy instruments and instillation of fluids. An introducer (luer lock adaptor), which is supplied together with the endoscopes, can be attached to the working channel port during use. Suctioning of blood, saliva, and mucus from the airway is possible through the suction system. Ambu® aScope™ 5 Broncho features an integrated camera module with built-in dual LED illumination. The image module provides a cropped 400x400 pixel signal from the 160 Kpixel sensor.
The Ambu® aView™ 2 Advance, also referred to as a displaying unit, is a non-sterile digital monitor intended to display live imaging data from Ambu visualization devices. The product consists of a 12.8" LCD screen. The device is powered through a Lithium-ion (Li-ion) battery or a separate power adapter.
Ambu® aView™ 2 Advance displaying unit has the following physical and performance characteristics:
- Displays the image from the Ambu® aScope™ 5 Broncho endoscope on the screen.
- Can record snapshots or video of image from Ambu® aScope™ 5 Broncho endoscope.
- Can connect to an external monitor.
- Reusable device.
No information regarding acceptance criteria or a study proving the device meets the acceptance criteria is provided in the document. The document is a 510(k) clearance letter from the FDA for a bronchoscope system. It primarily focuses on demonstrating substantial equivalence to predicate devices based on intended use, technological characteristics, and non-clinical performance testing.
Therefore, I cannot provide a table of acceptance criteria, device performance, sample sizes, data provenance, expert qualifications, adjudication methods, MRMC study details, standalone performance, ground truth types, or training set information. The document merely confirms that the "Ambu® aScope™ 5 Broncho System performed as expected and met the test specifications set," but does not elaborate on these specifications or the studies performed.
(217 days)
Ambu® aScope 5 Broncho HD 5.0/2.2, Ambu® aScope 5 Broncho HD 5.6/2.8, Ambu® aView 2 Advance
Ambu® aScope™ 5 Broncho HD is intended for endoscopic procedures and examination within the airways and tracheobronchial tree.
Ambu® aScope™ 5 Broncho HD is intended to provide visualization via a compatible Ambu displaying unit, and to allow passing endotherapy instruments via its working channel.
The Ambu® aView™ 2 Advance is intended to display live imaging data from compatible Ambu visualization devices.
The Ambu® aScope™ 5 Broncho HD system is a combination of the displaying unit, Ambu® aView™ 2 Advance, and a compatible Ambu endoscope, the Ambu® aScope™ 5 Broncho HD 5.0/2.2 or the Ambu® aScope™ 5 Broncho HD 5.6/2.8. The Ambu® aScope™ 5 Broncho HD bronchoscopes are the same devices as cleared in K220606. The only change is that the bronchoscopes are now also compatible to Ambu® aView™ 2 Advance.
The Ambu® aScope™ 5 Broncho HD endoscopes are single-use endoscopes designed to be used with Ambu displaying units, endotherapy instruments and other ancillary equipment for endoscopy within the airways and tracheobronchial tree.
The insertion portion is inserted into the patient airway through the mouth, nose, or a tracheostomy. It is lubricated with a water-soluble medical grade lubricant to ensure the lowest possible friction when inserted into the patient. There is a working channel system within the endoscope for use with endotherapy instruments. An introducer (luer lock adaptor), which is supplied together with the endoscope, can be attached to the working channel port during use. Suctioning of blood, saliva, and mucus from the airway is possible through the suction system.
Ambu® aScope™ 5 Broncho HD features an integrated camera module with built-in dual LED illumination. The image module provides a cropped 800x800 pixel signal from the 1280x800 (1 megapixel) sensor.
The Ambu® aView™ 2 Advance, also referred to as displaying unit, is a non-sterile digital monitor intended to display live imaging data from Ambu visualization devices. The product consists of a 12.8" LCD screen. The device is powered through a Lithium-ion (Li-ion) battery or a separate power adapter.
Ambu® aView™ 2 Advance displaying unit has the following physical and performance characteristics:
- Displays the image from Ambu® aScope™ 5 Broncho HD endoscope on the screen.
- Can record snapshots or video of image from Ambu® aScope™ 5 Broncho HD endoscope.
- Can connect to an external monitor.
- Reusable device.
The provided text is a 510(k) summary for the Ambu® aScope™ 5 Broncho HD system. It describes the device, its intended use, and compares it to predicate devices to demonstrate substantial equivalence. However, this document does not detail a study proving the device meets an acceptance criterion for an AI/ML-driven medical device, nor does it provide a table of acceptance criteria and reported device performance in the context of an AI/ML study.
The document discusses performance tests for the medical device itself (bronchoscope and display unit), such as optical properties, mechanical performance, biocompatibility, and electrical safety. The "acceptance criteria" here refer to the device meeting these engineering and regulatory standards, not specifically to the performance of an AI/ML algorithm.
Therefore, I cannot answer your request based on the provided input text, as the information you are asking for (acceptance criteria and study details for an AI/ML device) is not present. The document focuses on the hardware aspects of a bronchoscope system.
To address your query, I would need a document that describes the development and testing of an AI/ML-based medical device.
(267 days)
AVIEW Lung Nodule CAD
AVIEW Lung Nodule CAD is a Computer-Aided Detection (CAD) software designed to assist radiologists in the detection of pulmonary nodules (with diameter 3-20 mm) during the review of CT examinations of the chest for asymptomatic populations. AVIEW Lung Nodule CAD provides adjunctive information to alert the radiologists to regions of interest with suspected lung nodules that may otherwise be overlooked. AVIEW Lung Nodule CAD may be used as a second reader after the radiologist has completed their initial read. The algorithm has been validated using non-contrast CT images, the majority of which were acquired on Siemens SOMATOM CT series scanners; therefore, limiting device use to use with Siemens SOMATOM CT series is recommended.
The AVIEW Lung Nodule CAD is a software product that detects nodules in the lung. The lung nodule detection model was trained with a deep convolutional neural network (CNN) based algorithm on chest CT images, and it automatically detects lung nodules of 3 to 20 mm in chest CT images. By complying with DICOM standards, this product can be linked with the Picture Archiving and Communication System (PACS) and provides a separate user interface offering functions such as analyzing, identifying, storing, and transmitting quantified values related to lung nodules. The CAD's results can be displayed after the user's first read, and the user can select or de-select the marks provided by the CAD. The device's performance was validated with CT scanners from Siemens' SOMATOM series. The device is intended to be used with a cleared AVIEW platform.
Here's a breakdown of the acceptance criteria and study details for the AVIEW Lung Nodule CAD, as derived from the provided document:
Acceptance Criteria and Reported Device Performance
Criteria (Standalone Performance) | Acceptance Criteria | Reported Device Performance |
---|---|---|
Sensitivity (patient level) | > 0.8 | 0.907 (0.846-0.95) |
Sensitivity (nodule level) | > 0.8 | Not explicitly stated as separate from patient level, but overall sensitivity is 0.907. |
Specificity | > 0.6 | 0.704 (0.622-0.778) |
ROC AUC | > 0.8 | 0.961 (0.939-0.983) |
Sensitivity at a fixed FP/scan operating point | > 0.8 | 0.889 (0.849-0.93) at FP/scan = 0.504 |
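For reference, the standalone metrics in the table (patient-level sensitivity, specificity, and ROC AUC) can be computed from per-case labels and CAD outputs as sketched below. The arrays are hypothetical, and scikit-learn is used only as a convenience; the submission does not describe the evaluation code.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

# Hypothetical per-case data: label 1 = case contains at least one true nodule
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_score = np.array([0.9, 0.2, 0.7, 0.4, 0.1, 0.6, 0.8, 0.3])  # CAD confidence
y_pred = (y_score >= 0.5).astype(int)                          # operating threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # patient-level sensitivity
specificity = tn / (tn + fp)
auc = roc_auc_score(y_true, y_score)
print(sensitivity, specificity, auc)
```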
Study Details
1. Acceptance Criteria and Reported Device Performance (as above)
2. Sample size used for the test set and data provenance:
- Test Set Size: 282 cases (140 cases with nodule data and 142 cases without nodule data) for the standalone study.
- Data Provenance:
* Geographically distinct US clinical sites.
* All datasets were built with images from the U.S.
* Anonymized medical data was purchased.
* Included both incidental and screening populations.
* For the Multi-Reader Multi-Case (MRMC) study, the dataset consisted of 151 Chest CTs (103 negative controls and 48 cases with one or more lung nodules).
3. Number of experts used to establish the ground truth for the test set and their qualifications:
- Number of Experts: Three (for both the MRMC study and likely for the standalone ground truth, given the consistency in expert involvement).
- Qualifications: Dedicated chest radiologists with at least ten years of experience.
4. Adjudication method for the test set:
- Not explicitly stated for the "standalone study" ground truth establishment.
- For the MRMC study, the three dedicated chest radiologists "determined the ground truth" in a blinded fashion. This implies a consensus or majority vote, but the exact method (e.g., 2+1, 3+1) is not specified. It does state "All lung nodules were segmented in 3D" which implies detailed individual expert review before ground truth finalization.
5. Multi-Reader Multi-Case (MRMC) comparative effectiveness study:
- Yes, an MRMC study was performed.
- Effect size of human readers improving with AI vs. without AI assistance:
* AUC: The point estimate difference was 0.19 (from 0.73 unassisted to 0.92 aided).
* Sensitivity: The point estimate difference was 0.23 (from 0.68 unassisted to 0.91 aided).
* FP/scan: The point estimate difference was 0.24 (from 0.48 unassisted to 0.28 aided), indicating a reduction in false positives.
- Reading Time: "Reading time was decreased when AVIEW Lung Nodule CAD aided radiologists."
6. Standalone (algorithm only without human-in-the-loop performance) study:
- Yes, a standalone study was performed.
- The acceptance criteria and reported performance for this study are detailed in the table above.
7. Type of ground truth used:
- Expert consensus by three dedicated chest radiologists with at least ten years of experience. For the standalone study, it is directly compared against "ground truth," which is established by these experts. For the MRMC study, the experts "determined the ground truth." The phrase "All lung nodules were segmented in 3D" suggests a thorough and detailed ground truth establishment.
8. Sample size for the training set:
- Not explicitly stated in the provided text. The document mentions the lung nodule detection model was "trained by Deep Convolution Network (CNN) based algorithm from the chest CT image," but does not provide details on the training set size.
9. How the ground truth for the training set was established:
- Not explicitly stated in the provided text.
(365 days)
AVIEW
AVIEW provides CT values for pulmonary tissue from CT thoracic and cardiac datasets. This software can be used to support the physician by providing quantitative analysis of CT images through image segmentation of sub-structures in the lung, lobe, airways, fissure completeness, cardiac, density evaluation, and reporting tools. AVIEW is also used to store, transfer, inquire, and display CT data sets on-premises and in a cloud environment, allowing users to connect from various environments such as mobile devices and Chrome browsers. It converts the sharp kernel to a soft kernel for quantitative analysis when segmenting low attenuation areas of the lung, and characterizes nodules in the lung in a single study or over the time course of several thoracic studies. Characterizations include type, location of the nodule, and measurements such as size (major axis, minor axis), estimated effective diameter derived from the volume of the nodule, the volume of the nodule, Mean HU (the average value of the CT pixels inside the nodule, in HU), Minimum HU, Max HU, mass (calculated from the CT pixel values), volumetric measures (Solid major: length of the longest diameter measured in 3D for the solid portion of the nodule; Solid 2nd Major: the size of the solid part, measured in sections perpendicular to the major axis of the solid portion of the nodule), VDT (volume doubling time), and Lung-RADS (a classification proposed to aid with findings). The system automatically performs the measurements, allowing lung nodules and measurements to be displayed, and integrates with the FDA-cleared Mevis CAD (computer-aided detection) (K043617). It also provides the Agatston score and mass score for the whole heart and for each artery by segmenting the four main arteries (right coronary artery, left main coronary, left anterior descending, and left circumflex artery). Based on the calcium score, it provides CAC risk based on age and gender. The device is indicated for adult patients only.
The AVIEW is a software product that can be installed on a PC. It displays images retrieved from various storage devices using DICOM 3.0, the digital imaging and communications standard in medicine. It also offers functions such as reading, manipulating, analyzing, post-processing, saving, and sending images using software tools, and is intended for use in the quantitative analysis of CT scans. It provides features such as segmentation of the lung, lobe, airway, fissure completeness, semi-automatic nodule management, maximal plane measures and volumetric measures, and automatic nodule detection through integration with third-party CAD. It also provides the Brock model, which calculates a malignancy score based on numerical or Boolean inputs, and follow-up support with automated nodule matching and automatic Lung-RADS categorization, a quality assurance tool designed to standardize lung cancer screening CT reporting and management recommendations based on type, size, size change, and other reported findings. It also provides a calcium score by automatically analyzing coronary arteries from the segmented arteries.
The provided document does not contain specific acceptance criteria and detailed study results for the AVIEW device that would allow for the construction of the requested table and comprehensive answer. The document primarily focuses on demonstrating substantial equivalence to a predicate device and briefly mentions software verification and validation activities.
However, I can extract the information that is present and highlight what is missing.
Here's an analysis based on the provided text, indicating where information is present and where it is absent:
Acceptance Criteria and Device Performance (Partial)
The document mentions "pre-determined Pass/Fail criteria" for software verification and validation, but it does not explicitly list these criteria or the numerical results for them. It broadly states that the device "passed all of the tests."
Table of Acceptance Criteria and Reported Device Performance
Feature/Metric | Acceptance Criterion | Reported Device Performance |
---|---|---|
General Software Performance | Passed all tests based on pre-determined Pass/Fail criteria | Passed all tests |
Unit Test | Successful functional, performance, and algorithm analysis for image processing algorithm components | Tests conducted using Google C++ Unit Test Framework |
System Test (Defects) | No 'Major' or 'Moderate' defects found | No 'Major' or 'Moderate' defects found (implies 'Passed' for this criterion) |
Kernel Conversion (LAA result reliability) | The LAA result on the kernel-converted sharp image should have higher reliability with the soft kernel than LAA results on the sharp kernel image without Kernel Conversion applied. | Test conducted on 96 total images (53 US, 43 Korean); no specific performance metric is given for how much higher the reliability was. |
Fissure Completeness | Compared to radiologists' assessment | Evaluated using Bland-Altman plots; Kappa and ICC reported. (Specific numerical results are not provided). |
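For context on the LAA metric in the kernel-conversion row: low attenuation area is commonly quantified as the percentage of segmented lung voxels below a fixed attenuation threshold (often −950 HU). Neither the threshold nor AVIEW's exact definition is given in the document, so the sketch below is illustrative only.

```python
import numpy as np

def laa_percent(ct_hu, lung_mask, threshold_hu=-950):
    """Percentage of lung voxels below a low-attenuation threshold (LAA%).
    ct_hu: 3D array of HU values; lung_mask: boolean lung segmentation.
    The -950 HU default is a commonly used emphysema threshold, not a value
    taken from the document."""
    lung_voxels = ct_hu[lung_mask]
    return 100.0 * np.count_nonzero(lung_voxels < threshold_hu) / lung_voxels.size
```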
Detailed Breakdown of Study Information:
- A table of acceptance criteria and the reported device performance:
- Acceptance Criteria: Not explicitly stated with numerical targets. The document mentions "pre-determined Pass/Fail criteria" for software verification and validation and "Success standard of System Test is not finding 'Major', 'Moderate' defect." For kernel conversion, the criterion is stated qualitatively (higher reliability). For fissure completeness, it's about comparison to radiologists.
- Reported Device Performance:
- General: "passed all of the tests."
- System Test: "Success standard... is not finding 'Major', 'Moderate' defect."
- Kernel Conversion: "The LAA result on kernel converted sharp image should have higher reliability with the soft kernel than LAA results on sharp kernel image that is not Kernel Conversion applied." (This is more of a hypothesis or objective rather than a quantitative result here).
- Fissure Completeness: "The performance was evaluated using Bland Altman plots to assess the fissure completeness performance compared to radiologists. Kappa and ICC were also reported." (Specific numerical values for Kappa/ICC are not provided).
- Sample sizes used for the test set and the data provenance:
- Kernel Conversion: 96 total images (53 U.S. population and 43 Korean).
- Fissure Completeness: 129 subjects from TCIA (The Cancer Imaging Archive) LIDC database.
- Data Provenance: U.S. and Korean populations for Kernel Conversion, TCIA LIDC database for Fissure Completeness. The document does not specify if these were retrospective or prospective studies. Given they are from archives/databases, they are most likely retrospective.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Not specified in the provided text. For Fissure Completeness, it states "compared to radiologists," but the number and qualifications of these radiologists are not detailed.
- Adjudication method (e.g. 2+1, 3+1, none) for the test set:
- Not specified in the provided text.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance:
- Not specified. The document mentions "compared to radiologists" for fissure completeness, but it does not detail an MRMC study comparing human readers with and without AI assistance for measuring an effect size of improvement.
- If a standalone study (i.e., algorithm only, without human-in-the-loop performance) was done:
- Yes, the performance tests described (e.g., Nodule Matching, LAA Comparative Experiment, Semi-automatic Nodule Segmentation, Fissure Completeness, CAC Performance Evaluation) appear to be standalone evaluations of the algorithm's output against a reference (ground truth or expert assessment), without requiring human interaction during the measurement process by the device itself.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- For Fissure Completeness, the ground truth appears to be expert assessment/consensus from radiologists implied by "compared to radiologists."
- For other performance tests like "Nodule Matching," "LAA Comparative Experiment," "Semi-automatic Nodule Segmentation," "Brock Model Calculation," etc., the specific type of ground truth is not explicitly stated. It's likely derived from expert annotations or established clinical metrics but is not detailed.
- The sample size for the training set:
- Not specified in the provided text. The document refers to "pre-trained deep learning models" for the predicate device, but gives no information on the training data for the current device.
- How the ground truth for the training set was established:
- Not specified in the provided text.
Summary of Missing Information:
The document serves as an FDA 510(k) summary, aiming to demonstrate substantial equivalence to a predicate device rather than providing a detailed clinical study report. Therefore, specific quantitative performance metrics, detailed study designs (e.g., number and qualifications of readers, adjudication methods for ground truth, specifics of MRMC studies), and training set details are not included.
(269 days)
AVIEW RT ACS
AVIEW RT ACS provides deep-learning-based auto-segmented organs and generates contours in RT-DICOM format from CT images, which can be used as initial contours for clinicians to approve and edit in the radiation oncology department for treatment planning, or in other professions where a segmented mask of organs is needed.
- a. Deep learning contouring from four body parts (Head & Neck, Breast, Abdomen, and Pelvis)
- b. Generates RT-DICOM structure of contoured organs
- c. Rule-based auto pre-processing
Receive/Send/Export medical images and DICOM data
Note that the Breast (Both right and left lung, Heart) were validated with non-contrast CT. Head & Neck (Both right and left Eyes, Brain and Mandible), Abdomen (Both right and Liver), and Pelvis (Both right and left Femur and Bladder) were validated with Contrast CT only.
The AVIEW RT ACS provides deep-learning-based auto-segmented organs and generates contours in RT-DICOM format from CT images. This software could be used by the radiation oncology department for treatment planning, or by other professions where a segmented mask of organs is needed.
- Deep learning contouring: it can automatically contour the organs-at-risk (OARs) from four body parts (Head & Neck, Breast, Abdomen, and Pelvis)
- Generates RT-DICOM structure of contoured organs
- Rule-based auto pre-processing
Receive/Send/Export medical images and DICOM data
Here's an analysis of the acceptance criteria and the study proving the device meets them, based on the provided text:
Acceptance Criteria and Device Performance
The general acceptance criterion for the AVIEW RT ACS device appears to be comparable performance to a predicate device (MIM-MRT Dosimetry) in terms of segmentation accuracy, as measured by Dice Similarity Coefficient (DSC) and 95% Hausdorff Distance (HD). While explicit numerical acceptance thresholds are not stated in the provided text (e.g., "DSC must be greater than X"), the study is structured as a comparative effectiveness study. The expectation is that the AVIEW RT ACS performance should be at least equivalent to, if not better than, the predicate device.
The study's tables (Tables 1-30) consistently show the AVIEW RT ACS achieving higher average DSC values (closer to 1, indicating better overlap) and generally lower average 95% HD values (closer to 0, indicating less maximum distance between contours), across various organs, demographic groups, and scanner parameters, compared to the predicate device.
Table of Acceptance Criteria and Reported Device Performance:
Metric / Organ (Examples) | Acceptance Criterion (Implicit) | AVIEW RT ACS Performance (Mean ± SD, [95% CI]) | Predicate Device Performance (Mean ± SD, [95% CI]) | Difference (AVIEW - Predicate) | Meets Criteria? |
---|---|---|---|---|---|
Overall DSC | Should be comparable to or better than predicate device. | (See tables below for individual organ results) | (See tables below for individual organ results) | Mostly positive | Yes |
Overall 95% HD (mm) | Should be comparable to or better than predicate device (i.e., lower HD). | (See tables below for individual organ results) | (See tables below for individual organ results) | Mostly negative (indicating better AVIEW) | Yes |
Brain DSC | Comparable to or better than predicate. | 0.97 ± 0.01 (0.97, 0.98) | 0.96 ± 0.01 (0.96, 0.96) | 0.01 | Yes |
Brain 95% HD (mm) | Comparable to or better than predicate (lower HD). | 6.92 ± 20.46 (-1.1, 14.94) | 4.61 ± 2.17 (3.76, 5.46) | 2.31 | Mixed (Higher HD for AVIEW, but wide CI) |
Heart DSC | Comparable to or better than predicate. | 0.94 ± 0.03 (0.93, 0.95) | 0.78 ± 1.20 (0.70, 8.56) | 0.16 | Yes (Significantly better) |
Heart 95% HD (mm) | Comparable to or better than predicate (lower HD). | 6.19 ± 4.21 (4.73, 7.65) | 18.90 ± 5.09 (17.14, 20.67) | -12.71 | Yes (Significantly better) |
Liver DSC | Comparable to or better than predicate. | 0.96 ± 0.01 (0.96, 0.97) | 0.87 ± 0.06 (0.85, 0.90) | 0.09 | Yes |
Liver 95% HD (mm) | Comparable to or better than predicate (lower HD). | 7.17 ± 12.07 (2.54, 11.81) | 24.62 ± 15.16 (18.79, 30.44) | -17.44 | Yes (Significantly better) |
Bladder DSC | Comparable to or better than predicate. | 0.88 ± 0.14 (0.84, 0.93) | 0.52 ± 0.26 (0.44, 0.60) | 0.36 | Yes (Significantly better) |
Bladder 95% HD (mm) | Comparable to or better than predicate (lower HD). | 10.55 ± 20.56 (3.74, 17.36) | 30.48 ± 22.76 (22.94, 38.02) | -19.93 | Yes (Significantly better) |
Note: The tables throughout the document provide specific performance metrics for individual organs and sub-groups (race, vendors, slice thickness, kernel types). The general conclusion from these tables is that the AVIEW RT ACS consistently performs as well as or better than the predicate device across most metrics and categories.
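For reference, the two segmentation metrics reported in these tables can be computed from binary masks and surface point sets roughly as follows. This is a minimal sketch (brute-force pairwise distances, practical only for small surfaces), not the submitter's actual evaluation code.

```python
import numpy as np
from scipy.spatial.distance import cdist

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two boolean 3D masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff_95(points_a, points_b):
    """95th-percentile symmetric Hausdorff distance between two surface
    point sets (N x 3 arrays of physical coordinates, e.g. in mm)."""
    d = cdist(points_a, points_b)              # all pairwise distances
    d_ab = np.percentile(d.min(axis=1), 95)    # directed A -> B
    d_ba = np.percentile(d.min(axis=0), 95)    # directed B -> A
    return max(d_ab, d_ba)
```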
Study Details for Acceptance Criteria Proof:
- Sample Size Used for the Test Set: 120 cases.
- Data Provenance: The dataset included cases from both South Korea and the USA. It was constructed with various ethnicities (White, Black, Asian, Hispanic, Latino, African, American, etc.), and from four major vendors (GE, Siemens, Toshiba, and Philips).
- Retrospective/Prospective: Not explicitly stated, but the mention of a data set constructed for validation suggests a retrospective collection.
- Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications:
- Number of Experts: 3 radiation oncology physicians.
- Qualifications: All were trained by "The Korean Society for Radiation Oncology," board-certified by the "Ministry of Health and Welfare," with a range of 9-21 years of experience in radiotherapy. The experts included attending assistant professors (n=2) and professors (n=1) from three institutions.
- Adjudication Method for the Test Set:
- The method was a sequential editing process:
- One expert manually delineated the organs.
- The segmentation results from the first expert were then sequentially edited by the other two experts.
- The first expert made corrections.
- The result was then received by another expert who finalized the gold standard.
- This can be considered a form of sequential consensus or collaborative review rather than a strict N+1 or M+N+1 method.
- If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done:
- Yes, a comparative effectiveness study was done. The study directly compares the AVIEW RT ACS against a predicate device (MIM-MRT Dosimetry).
- Effect Size of Human Readers Improvement with AI vs. Without AI Assistance:
- The study does not measure the improvement of human readers with AI assistance. Instead, it evaluates the standalone performance of the AI device against the standalone performance of a predicate AI device, both compared to expert-generated ground truth. The "human readers" (the three experts) were used solely to create the ground truth, not to evaluate their performance with or without AI assistance.
- If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done:
- Yes, a standalone performance study was done. The study compares the auto-segmentation results of the AVIEW RT ACS directly to the expert-derived "gold standard" and also compares it to the auto-segmentation of the predicate device. This is purely an algorithm-only evaluation.
- The Type of Ground Truth Used:
- Expert Consensus. The ground truth was established by three radiation oncology physicians through a sequential delineation and editing process to create a "robust gold standard."
- The Sample Size for the Training Set:
- Not specified within the provided text. The document refers only to the validation/test set.
- How the Ground Truth for the Training Set Was Established:
- Not specified within the provided text. Since the training set size and characteristics are not mentioned, neither is the method for establishing its ground truth.