510(k) Data Aggregation
(443 days)
OEB
InferRead Lung CT.AI comprises computer-assisted reading tools designed to aid the radiologist in the detection of pulmonary nodules ≥ 4 mm during the review of CT examinations of the chest in an asymptomatic population ≥ 55 years old. InferRead Lung CT.AI requires that both lungs be in the field of view. InferRead Lung CT.AI provides adjunctive information and is not intended to be used without the original CT series.
InferRead Lung CT.AI uses deep learning (DL) technology to perform nodule detection. It is a dedicated post-processing application that generates CADe marks as an overlay on original CT scans. The software can be installed in a healthcare facility or on a cloud-based platform and comprises computer-assisted reading tools designed to aid radiologists in detecting, segmenting, measuring, and localizing actionable pulmonary nodules of 4 mm or larger during the review of chest CT examinations of asymptomatic populations, with enhanced capabilities for pulmonary nodule follow-up comparison and lung analysis. InferRead Lung CT.AI provides auxiliary information and is not intended to be used if the original CT series is not available.
The provided 510(k) clearance letter and summary discuss the InferRead Lung CT.AI device, its indications for use, and a comparison to predicate devices. It also details some standalone performance tests conducted to assess the newly introduced features of the device.
Here's an analysis of the acceptance criteria and study information based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The document details performance for newly added features rather than explicitly defined "acceptance criteria" for the overall device's primary function of nodule detection. However, it states that "predetermined testing criteria" were passed and that "validation tests indicated that as required by the risk analysis, designated individuals performed all verification and validation activities and that the results demonstrated that the predetermined acceptance criteria were met."
For the newly introduced functions, specific performance metrics are reported:
| Feature Tested | Acceptance Criteria (Implied/Expected) | Reported Device Performance |
|---|---|---|
| Nodule Registration | High accuracy in matching nodule pairs between current and prior scans | Overall Nodule Match Rate: 0.970 (95% CI: 0.947-0.994). By scan interval: 0-6 months: 0.976 (95% CI: 0.911-1.0); 6-12 months: 1.000 (95% CI: N/A); 12-24 months: 0.938 (95% CI: 0.880-0.997) |
| Nodule Lobe Localization | High accuracy in identifying the correct lung lobe for detected nodules | Overall Lobe Localization Accuracy Rate: 0.957 (95% CI: 0.929-0.986) |
| Lung Lobe Segmentation | High geometric similarity between automated segmentation and ground truth | Average Dice Coefficient: 0.966 (95% CI: 0.962-0.969) |
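The Dice coefficient reported for lung lobe segmentation measures volumetric overlap between the automated mask and the reference mask. A minimal sketch of the standard definition (not the vendor's implementation, which is not disclosed):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    a = a.astype(bool)
    b = b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Toy 1-D "masks": 3 overlapping voxels, mask sizes 4 and 5
# → Dice = 2*3 / (4+5) = 0.667
auto = np.array([1, 1, 1, 1, 0, 0])
truth = np.array([0, 1, 1, 1, 1, 1])
print(round(dice(auto, truth), 3))  # → 0.667
```

In practice the same function is applied to each of the 110 3-D lobe masks and the per-lobe values are averaged.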
2. Sample Sizes Used for the Test Set and Data Provenance
- Nodule Registration Standalone Test: 98 lung cancer screening cases with 206 nodule pairs.
- Nodule Lobe Localization Standalone Test: 94 lung cancer screening scans with 188 nodules.
- Lung Lobe Segmentation Standalone Test: 22 lung cancer screening cases with 110 lung lobes.
Data Provenance: The document does not explicitly state the country of origin for the data used in these tests, nor does it specify if the data was retrospective or prospective. It refers to "lung cancer screening cases/scans," suggesting these are clinical datasets.
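The summary does not state which interval method was used, but the reported overall match rate is consistent with a simple normal-approximation (Wald) confidence interval over the 206 nodule pairs. A sketch with hypothetical counts (200 of 206 matched, assumed here to illustrate the arithmetic):

```python
import math

def wald_ci(successes: int, n: int, z: float = 1.96):
    """Normal-approximation (Wald) 95% CI for a proportion."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

# Hypothetical: 200 of 206 nodule pairs matched, consistent with the
# reported overall match rate of 0.970 (95% CI: 0.947-0.994).
p, lo, hi = wald_ci(200, 206)
print(f"{p:.3f} ({lo:.3f}-{hi:.3f})")
```

The result lands close to the reported 0.970 (0.947-0.994); small differences would be expected if the sponsor used slightly different counts or an exact/Wilson interval.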
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
The document does not explicitly state the number of experts or their qualifications who established the ground truth for the standalone performance tests.
4. Adjudication Method for the Test Set
The document does not specify the adjudication method used for establishing the ground truth for the test sets in these standalone performance evaluations.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size
The document does not indicate that a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was done to assess the improvement of human readers with AI assistance versus without AI assistance. The performance tests described are standalone evaluations of specific AI functions.
6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done
Yes, the document explicitly describes standalone performance testing for the newly added functions: "For the newly added functions, including nodule registration, nodule localization and lung lobe segmentation, we conducted standalone performance testing." The reported results (Nodule Match Rate, Lobe Localization Accuracy Rate, Dice Coefficient) are all metrics of the algorithm's performance without human interaction.
The document also states: "Regarding the performance of the AI outputs, the nodule detection and segmentation functions were consistent with the predicate product (K192880), as verified through consistency testing." This implies that the primary nodule detection and segmentation capabilities were also assessed in a standalone manner, likely by comparing the AI's output to a ground truth.
7. The Type of Ground Truth Used
The document does not explicitly state the type of ground truth used for the standalone tests (e.g., expert consensus, pathology, outcomes data). However, for metrics like "Nodule Match Rate," "Lobe Localization Accuracy Rate," and "Dice Coefficient," the ground truth would typically be established by expert radiologists or reference standards. For Dice Coefficient in segmentation, it would likely be expert-drawn segmentations.
8. The Sample Size for the Training Set
The document does not provide any information regarding the sample size of the training set used to develop the InferRead Lung CT.AI device.
9. How the Ground Truth for the Training Set Was Established
The document does not provide information on how the ground truth for the training set was established.
(184 days)
OEB
V5med Lung AI is a Computer-Aided Detection (CAD) software designed to assist radiologists in detecting pulmonary nodules (with diameter of 4-30 mm) during CT examinations of the chest for asymptomatic populations. This software provides adjunctive information to alert radiologists to regions of interest with suspected lung nodules that may otherwise be overlooked. It can be used in a concurrent read mode, where the AI analysis results are displayed alongside the original CT images during either the initial review or any subsequent reviews by the radiologist. V5med Lung AI does not replace the radiologist's critical judgment or diagnostic processes and should not be used in isolation from the original CT series.
The V5med Lung AI is a software product designed to detect nodules in the lungs. The detection model is trained using a Deep Convolutional Neural Network (CNN) based algorithm, enabling automatic detection of lung nodules ranging from 4 to 30 mm in chest CT images.
The system integrates algorithm logic and database on the same server, ensuring simplicity and ease of maintenance. It accepts chest CT images from a PACS system, Radiological Information System (RIS), or directly from a CT scanner, analyzes the images, and provides output annotations regarding lung nodules.
This document describes the regulatory acceptance criteria met by the V5med Lung AI device and the study conducted to prove its performance.
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria for the V5med Lung AI device are implicitly set by the endpoints measured in the clinical performance evaluation, which aimed to demonstrate improved radiologist performance with the AI tool compared to unaided reads. The reported device performance directly addresses these implicit criteria.
| Metric | Acceptance Criteria (Implicit) | Reported Device Performance (Unaided vs. Aided) | Difference (95% CI) | Result |
|---|---|---|---|---|
| AUC (Localization-Specific ROC) | Significant increase in AUC with aid | Unaided: 0.734; Aided: 0.830 | 0.0959 (0.0586, 0.1332) | Met (significant increase; CI entirely above 0) |
| Reading Time (seconds) | Significant decrease in reading time with aid | Unaided: 133.0; Aided: 115.9 | -17.1 (-26.7, -9.0) | Met (significant decrease; CI entirely below 0) |
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: 340 chest CT scans.
- Data Provenance:
- Country of Origin: Not explicitly stated, but all screening cases were acquired from the NLST (National Lung Screening Trial) CT arm, implying a US-based origin.
- Retrospective or Prospective: Retrospective. The study was a "retrospective, fully crossed, multi-reader multi-case (MRMC) study."
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
The document does not explicitly state the number of experts used to establish the ground truth for the test set. It mentions sixteen board-certified radiologists participated in the reader study. These radiologists were involved in reading the cases with and without the AI aid, and their performance served as the basis for evaluating the AI's effectiveness.
4. Adjudication Method for the Test Set
The document does not explicitly describe an adjudication method for establishing the ground truth for the test set. It mentions a "fully crossed, multi-reader multi-case (MRMC) study" design, where sixteen radiologists read the cases. This implies that the performance metrics (AUC, reading times) were derived from their individual interpretations, likely compared against a pre-established consensus ground truth, though the method for that consensus is not detailed.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size
- Yes, a multi-reader multi-case (MRMC) comparative effectiveness study was done.
- Effect Size of Human Readers Improvement with AI vs. Without AI Assistance:
- AUC: Radiologists using V5med Lung AI showed a 0.0959 increase in AUC (from 0.734 unaided to 0.830 aided). The 95% confidence interval for this difference was (0.0586, 0.1332), indicating a statistically significant improvement.
- Reading Times: Radiologists using V5med Lung AI showed a 17.1-second decrease in reading time (from 133.0 seconds unaided to 115.9 seconds aided). The 95% confidence interval for this difference was (-26.7, -9.0), indicating a statistically significant reduction. This translates to approximately a 13% improvement in reading time.
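The ~13% figure follows directly from the reported reading times; a quick arithmetic check:

```python
# Reported mean reading times from the MRMC study (seconds)
unaided, aided = 133.0, 115.9

diff = aided - unaided                      # negative = faster with AI
pct = 100 * (unaided - aided) / unaided     # relative reduction
print(f"{diff:.1f} s ({pct:.0f}% faster)")  # → -17.1 s (13% faster)
```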
6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done
- Yes, standalone performance testing was conducted.
- The document states: "Standalone performance testing, which included chest CT scans from lung cancer screening population and non-screening population, was conducted to validate detection accuracy of V5med Lung AI."
- It further notes: "Results showed that V5med Lung AI had similar nodule detection sensitivity compared to those of the predicate device."
7. The Type of Ground Truth Used
The document does not explicitly define the type of ground truth used for the standalone performance testing or the MRMC study's assessment of radiologist performance. However, given the context of lung nodule detection in oncology, common ground truth methods include:
- Expert Consensus: Multiple expert radiologists review cases and reach a consensus on the presence and characteristics of nodules. This is a very common method for CAD device validation.
- Pathology: Biopsy results confirming the nature of lesions, though this is often not feasible for all identified nodules, especially in large screening datasets.
- Outcomes Data: Longitudinal follow-up of patients to see if nodules grow or are confirmed to be malignant over time.
Given the NLST data source, it is highly probable that the ground truth was established through a rigorous process, likely involving expert consensus and potentially correlation with long-term follow-up from the trial, but the specific details are not provided in this excerpt.
8. The Sample Size for the Training Set
The document does not provide the sample size for the training set. It only states that the detection model was "trained using a Deep Convolutional Neural Network (CNN) based algorithm."
9. How the Ground Truth for the Training Set Was Established
The document does not describe how the ground truth for the training set was established. It only mentions the use of a Deep Convolutional Neural Network (CNN) based algorithm for training the detection model.
(86 days)
OEB
syngo.CT Lung CAD device is a computer-aided detection (CAD) tool designed to assist radiologists in the detection of solid and subsolid pulmonary nodules during review of multi-detector computed tomography (MDCT) from multivendor examinations of the chest. The software is an adjunctive tool to alert the radiologist to regions of interest (ROI) that may be otherwise overlooked.
The syngo.CT Lung CAD device may be used as a concurrent first reader followed by a full review of the case by the radiologist or as second reader after the radiologist has completed his/her initial read.
The syngo.CT Lung CAD device may also be used in "solid-only" mode, where potential (or suspected) sub-solid and/or fully calcified CAD findings are filtered out.
The software device is an algorithm which does not have its own user interface component for displaying CAD marks. The Hosting Application incorporating syngo.CT Lung CAD is responsible for implementing a user interface.
Siemens Healthcare GmbH intends to market syngo.CT Lung CAD, a medical device designed to perform CAD processing of thoracic CT examinations for the detection of solid pulmonary nodules (between 3.0 mm and 30.0 mm in average diameter) and subsolid nodules (between 5.0 mm and 30.0 mm in average diameter). The device processes images acquired with multi-detector CT scanners; 16 or more detector rows are recommended.
The syngo.CT Lung CAD device supports the full range of nodule locations (central, peripheral) and contours (round, irregular).
The syngo.CT Lung CAD sends a list of nodule candidate locations to a visualization application, such as syngo MM Oncology, or a visualization rendering component, which generates output image series with the CAD marks superimposed on the input thoracic CT images to enable the radiologist's review. syngo MM Oncology (FDA clearance K211459 and subsequent versions) is deployed on the syngo.via platform (FDA clearance K191040 and subsequent versions), which provides a common framework for various other applications implementing specific clinical workflows (not part of this clearance) to display the CAD marks. The syngo.CT Lung CAD device may be used either as a concurrent first reader, followed by a review of the case, or as a second reader only after the initial read is completed.
The provided text describes the Siemens syngo.CT Lung CAD (Version VD30) and its substantial equivalence to its predicate device (syngo.CT Lung CAD Version VD20). The primary change in VD30 is the introduction of a "solid-only" mode. The acceptance criteria and study details are primarily focused on demonstrating that the VD30 in "solid-only" mode is not inferior to VD20 in standard mode, and that VD30 in standard mode is not inferior to VD20 in standard mode. Since the document primarily focuses on demonstrating non-inferiority to a predicate device, explicit acceptance criteria values (e.g., minimum sensitivity thresholds) are not explicitly stated as numerical targets. Instead, the acceptance is based on statistical non-inferiority.
Here's a breakdown of the requested information based on the provided text:
1. A table of acceptance criteria and the reported device performance
| Acceptance Criteria (Implied for Non-Inferiority) | Reported Device Performance (Summary) |
|---|---|
| VD30 (solid-only mode) vs. VD20 (standard mode): | |
| Sensitivity of VD30 in solid-only mode is not inferior to VD20 in standard mode | Standalone accuracy testing showed that the sensitivity of VD30 in solid-only mode is not inferior to VD20 in standard mode |
| Mean number of false positives (FPs) per subject is significantly lower with VD30 in solid-only mode | The mean number of false positives per subject is significantly lower with VD30 in solid-only mode |
| The two CAD systems overlap in true positives (TPs) and FPs | Implied as part of showing non-inferiority and lower FPs |
| VD30 (standard mode) vs. VD20 (standard mode): | |
| Sensitivity of VD30 in standard mode is not inferior to VD20 in standard mode | The sensitivity of VD30 in standard mode is not inferior to VD20 in standard mode |
| Mean number of FPs per subject of VD30 in standard mode is not inferior to VD20 in standard mode | The mean number of FPs per subject of VD30 in standard mode is not inferior to VD20 in standard mode |
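The document does not disclose the statistical method behind the non-inferiority claims. A common approach is a one-sided z-test on the sensitivity difference against a predefined margin; a sketch with purely hypothetical counts and margin (none of these numbers appear in the source):

```python
import math

def noninferiority_z(x_new: int, n_new: int, x_ref: int, n_ref: int,
                     margin: float) -> float:
    """One-sided z-statistic for H0: p_new <= p_ref - margin (inferior)
    vs. H1: p_new > p_ref - margin (non-inferior)."""
    p_new, p_ref = x_new / n_new, x_ref / n_ref
    se = math.sqrt(p_new * (1 - p_new) / n_new + p_ref * (1 - p_ref) / n_ref)
    return (p_new - p_ref + margin) / se

# Hypothetical: VD30 detects 620/712 nodules, VD20 detects 630/712,
# with a 5-percentage-point non-inferiority margin.
z = noninferiority_z(620, 712, 630, 712, margin=0.05)
print(round(z, 2), "non-inferior" if z > 1.645 else "inconclusive")  # z ≈ 2.07
```

With z above the one-sided 5% critical value (1.645), the small observed deficit would still support non-inferiority under that margin.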
2. Sample size used for the test set and the data provenance
- Sample Size: 712 CT thoracic cases.
- Data Provenance: Retrospectively collected data from 3 sources:
- The UCLA study (232 cases)
- The original PMA study (145 cases)
- Additional cases (335 cases)
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
The document differentiates ground truth establishment based on the data source:
- UCLA data: Reference standard (ground truth) was determined as part of the reader study for the predicate device (K203258). The number and qualifications of experts are not explicitly stated for this subset in the provided text for VD30, but it refers to the predicate clearance.
- PMA study cases: 18 readers were used. Qualifications are not explicitly stated, but 9 of the 18 readers were needed for declaring a true nodule.
- Additional cases: 7 readers were used. Qualifications are not explicitly stated, but 4 of the 7 readers were needed for declaring a true nodule.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
The adjudication method varied based on the data source:
- PMA study cases: 9 out of 18 readers were needed for declaring a true nodule. This suggests a majority consensus from a large panel.
- Additional cases: 4 out of 7 readers were needed for declaring a true nodule. This also suggests a majority consensus.
- UCLA data: "Reference standard for the UCLA data was determined as part of the reader study (K203258)." Specific adjudication details for this subset are not provided in this document but are referenced to the predicate device's clearance.
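The two consensus rules above reduce to simple vote thresholds; a sketch (thresholds from the document, the vote pattern is hypothetical):

```python
def majority_truth(votes: list[int], threshold: int) -> bool:
    """A candidate counts as a true nodule when at least `threshold`
    readers marked it: 9 of 18 for the PMA cases, 4 of 7 for the
    additional cases."""
    return sum(votes) >= threshold

# Hypothetical 18-reader panel where 10 readers marked the candidate:
# meets the 9-of-18 rule, so it enters the reference standard.
votes = [1] * 10 + [0] * 8
print(majority_truth(votes, threshold=9))  # → True
```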
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
An MRMC comparative effectiveness study involving human readers with and without AI assistance is not explicitly described in this document. The statistical analysis performed was a standalone performance analysis to demonstrate substantial equivalence between two CAD versions (VD30 vs. VD20), focusing on the algorithm's performance metrics (sensitivity, FPs).
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done
Yes, a standalone performance analysis was done. The document states: "The standalone performance analysis was designed to demonstrate the substantial equivalence between syngo.CT Lung CAD VD30A (VD30) and the predicate device syngo.CT Lung CAD VD20."
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The ground truth was established through expert consensus/reader review.
- For PMA cases: 9 out of 18 readers' consensus.
- For additional cases: 4 out of 7 readers' consensus.
- For UCLA data: Reference standard from a reader study.
8. The sample size for the training set
The document does not explicitly state the sample size for the training set. It mentions that the algorithm is based on Convolutional Networks (CNN) and that the lung segmentation algorithm for VD30 in particular is "trained on lung CT data including comorbidities for robustness," but the specific number of cases for this training set is not provided.
9. How the ground truth for the training set was established
The document does not explicitly describe how the ground truth for the training set was established. It only states that the lung segmentation algorithm was "trained on lung CT data" and that the overall algorithm uses CNNs, implying supervised learning, which would require ground truth annotations. However, the method of obtaining these annotations is not detailed.
(267 days)
OEB
AVIEW Lung Nodule CAD is a Computer-Aided Detection (CAD) software designed to assist radiologists in the detection of pulmonary nodules (with diameter 3-20 mm) during the review of CT examinations of the chest for asymptomatic populations. AVIEW Lung Nodule CAD provides adjunctive information to alert the radiologists to regions of interest with suspected lung nodules that may otherwise be overlooked. AVIEW Lung Nodule CAD may be used as a second reader after the radiologist has completed their initial read. The algorithm has been validated using non-contrast CT images, the majority of which were acquired on Siemens SOMATOM CT series scanners; therefore, limiting device use to use with Siemens SOMATOM CT series is recommended.
The AVIEW Lung Nodule CAD is a software product that detects nodules in the lung. The lung nodule detection model was trained with a deep convolutional neural network (CNN)-based algorithm on chest CT images and automatically detects lung nodules of 3 to 20 mm. By complying with DICOM standards, the product can be linked with a Picture Archiving and Communication System (PACS) and provides a separate user interface for functions such as analyzing, identifying, storing, and transmitting quantified values related to lung nodules. The CAD's results can be displayed after the user's first read, and the user can select or de-select the marks provided by the CAD. The device's performance was validated with Siemens SOMATOM series scanners. The device is intended to be used with a cleared AVIEW platform.
Here's a breakdown of the acceptance criteria and study details for the AVIEW Lung Nodule CAD, as derived from the provided document:
Acceptance Criteria and Reported Device Performance
| Criterion (Standalone Performance) | Acceptance Criteria | Reported Device Performance |
|---|---|---|
| Sensitivity (patient level) | > 0.8 | 0.907 (0.846-0.95) |
| Sensitivity (nodule level) | > 0.8 | Not reported separately from patient level; overall sensitivity is 0.907 |
| Specificity | > 0.6 | 0.704 (0.622-0.778) |
| ROC AUC | > 0.8 | 0.961 (0.939-0.983) |
| Sensitivity at FP/scan | > 0.8 | 0.889 (0.849-0.93) at FP/scan = 0.504 |
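The reported patient-level figures are consistent with simple confusion-matrix arithmetic over the 140 nodule-positive and 142 nodule-negative test cases. A sketch with hypothetical counts chosen to reproduce them (the source does not give the raw counts):

```python
def sens_spec(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    """Patient-level sensitivity and specificity from confusion-matrix counts."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts consistent with the reported 282-case split:
# 127 of 140 nodule cases flagged; 100 of 142 nodule-free cases kept clean.
sens, spec = sens_spec(tp=127, fn=13, tn=100, fp=42)
print(f"sensitivity={sens:.3f}, specificity={spec:.3f}")
# → sensitivity=0.907, specificity=0.704
```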
Study Details
1. Acceptance Criteria and Reported Device Performance (as above)
2. Sample size used for the test set and data provenance:
- Test Set Size: 282 cases (140 cases with nodule data and 142 cases without nodule data) for the standalone study.
- Data Provenance:
* Geographically distinct US clinical sites.
* All datasets were built with images from the U.S.
* Anonymized medical data was purchased.
* Included both incidental and screening populations.
* For the Multi-Reader Multi-Case (MRMC) study, the dataset consisted of 151 Chest CTs (103 negative controls and 48 cases with one or more lung nodules).
3. Number of experts used to establish the ground truth for the test set and their qualifications:
- Number of Experts: Three (for both the MRMC study and likely for the standalone ground truth, given the consistency in expert involvement).
- Qualifications: Dedicated chest radiologists with at least ten years of experience.
4. Adjudication method for the test set:
- Not explicitly stated for the "standalone study" ground truth establishment.
- For the MRMC study, the three dedicated chest radiologists "determined the ground truth" in a blinded fashion. This implies a consensus or majority vote, but the exact method (e.g., 2+1, 3+1) is not specified. It does state "All lung nodules were segmented in 3D" which implies detailed individual expert review before ground truth finalization.
5. Multi-Reader Multi-Case (MRMC) comparative effectiveness study:
- Yes, an MRMC study was performed.
- Effect size of human readers improving with AI vs. without AI assistance:
* AUC: The point estimate difference was 0.19 (from 0.73 unassisted to 0.92 aided).
* Sensitivity: The point estimate difference was 0.23 (from 0.68 unassisted to 0.91 aided).
    * FP/scan: The point estimate difference was 0.24 (from 0.48 unassisted to 0.28 aided), indicating a reduction in false positives.
    * Reading Time: "Reading time was decreased when AVIEW Lung Nodule CAD aided radiologists."
6. Standalone (algorithm only without human-in-the-loop performance) study:
- Yes, a standalone study was performed.
- The acceptance criteria and reported performance for this study are detailed in the table above.
7. Type of ground truth used:
- Expert consensus by three dedicated chest radiologists with at least ten years of experience. For the standalone study, it is directly compared against "ground truth," which is established by these experts. For the MRMC study, the experts "determined the ground truth." The phrase "All lung nodules were segmented in 3D" suggests a thorough and detailed ground truth establishment.
8. Sample size for the training set:
- Not explicitly stated in the provided text. The document mentions the lung nodule detection model was "trained by Deep Convolution Network (CNN) based algorithm from the chest CT image," but does not provide details on the training set size.
9. How the ground truth for the training set was established:
- Not explicitly stated in the provided text.
(185 days)
OEB
ClearRead CT is comprised of computer-assisted reading tools designed to aid the radiologist in the detection and characterization of pulmonary nodules during the review of screening and surveillance (low-dose) CT examinations of the chest on a non-oncological patient population. ClearRead CT requires both lungs be in the field of view and is not intended for monitoring patients undergoing therapy for lung cancer or limited field of view CT scans. ClearRead CT provides adjunctive information and is not intended to be used without the original CT series.
ClearRead CT Compare is a post-processing application which processes a prior chest CT to determine whether a nodule detected in the current exam was present in the prior exam, using the same detection algorithm used on the current exam. ClearRead CT Compare requires both lungs to be in the field of view. ClearRead CT Compare provides adjunctive information, is not intended to be used without the original CT series, and is only invoked on those patients where a prior exam exists and a nodule is detected in the current exam. ClearRead CT Compare receives images according to the DICOM® protocol, processes the Lung CT series, and delivers the resulting information through the same DICOM network interface in conjunction with the results provided for the current exam: specifically, whether the nodule is present on the prior exam and, if so, the percent volume change between the current and prior exam along with the volume doubling time. Series inputs are limited to Computed Tomography (CT). The ClearRead CT Compare Processor processes each prior series received. The ClearRead CT Compare output is sent to a destination device that conforms to the ClearRead CT DICOM Conformance Statement, such as a storage archive. ClearRead CT Compare does not support printing or DICOM media.
ClearRead CT Compare is a product extension of our FDA-cleared and marketed ClearRead CT device (K161201). The initial device contained ClearRead CT Vessel Suppress as well as ClearRead CT Detect. ClearRead CT (the base system) includes normalization, segmentation, and characterization of nodules, and provides the following key features:
- ClearRead CT Vessel Suppress aids radiologists by suppressing normal structures in the input chest CT series.
- ClearRead CT Detect aids radiologists in the detection and characterization of nodules in the input chest CT series.
ClearRead CT Compare includes Scan Registration and Nodule Matching functions and adds the following key feature:
- ClearRead CT Compare aids radiologists in tracking nodule changes over time, providing additional characterizations per nodule, including percent nodule change and volume doubling time.
Here's a detailed breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Nodule Match Rate: minimum of 90% for each selected stratum | |
| - Solid nodule type | 0.961 (0.952, 0.978) - exceeds 90% benchmark |
| - Part-solid nodule type | 0.957 (0.942, 0.971) - exceeds 90% benchmark |
| - Ground-glass nodule type | 0.946 (0.934, 0.965) - exceeds 90% benchmark |
| - Isolated nodule location | 0.940 (0.934, 0.947) - exceeds 90% benchmark |
| - Juxta-vascular nodule location | 0.969 (0.963, 0.975) - exceeds 90% benchmark |
| - Juxta-pleural nodule location | 0.955 (0.949, 0.961) - exceeds 90% benchmark |
| Volume Doubling Time (VDT) and % change calculation accuracy | Manual and automated calculations matched in every instance |
| Vessel Suppress performance for thicker slices (3.5-5 mm) | Performance at 3.5 mm, 4.0 mm, 4.5 mm, and 5.0 mm slice thicknesses changed little relative to the 1 mm baseline, remaining well within the predefined 10% significance threshold for rejecting a test (non-contrast and contrast cases); average performance change ranged from -4.5% to 4.6% |
| Nodule registration error | Average registration error of 4.46 mm (standard deviation 2.69 mm), well within the predefined 15 mm tolerance |
| New nodule identification (real cases) | All 3 new nodules were detected and correctly identified as new |
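The summary does not give ClearRead CT Compare's exact formulas, but percent volume change and volume doubling time are conventionally computed from an exponential-growth model, VDT = t · ln(2) / ln(V_current / V_prior). A sketch of that convention:

```python
import math

def volume_doubling_time(v_prior: float, v_current: float,
                         interval_days: float) -> float:
    """VDT under exponential growth: the time for the nodule volume to
    double, given volumes measured `interval_days` apart."""
    return interval_days * math.log(2) / math.log(v_current / v_prior)

def percent_change(v_prior: float, v_current: float) -> float:
    return 100.0 * (v_current - v_prior) / v_prior

# A nodule growing from 100 mm^3 to 200 mm^3 over 365 days doubles
# exactly once, so VDT equals the scan interval.
print(volume_doubling_time(100.0, 200.0, 365))  # → 365.0
print(percent_change(100.0, 200.0))             # → 100.0
```

This is the kind of calculation the "manual computation" check in the table would verify against the automated output.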
2. Sample Size Used for the Test Set and Data Provenance
- Quantitative Nodule Matching Performance: A total of 900 nodules were used for assessment.
- Vessel Suppress Thicker Slice Evaluation: The same data previously used to assess vessel suppression performance for 1mm to 3mm slice thickness was used, extended to include 3.5mm, 4.0mm, 4.5mm, and 5.0mm.
- Clinical Performance Testing (Real Nodules): A 25-patient cohort containing 40 real nodules (42 actionable nodules identified by radiologists, with 39 having prior counterparts and 3 being new).
Data Provenance:
The document does not explicitly state the country of origin for the data. It also does not specify whether the data was retrospective or prospective for the 900 nodules or the vessel suppress evaluation. For the "clinical performance testing," the use of "real nodules" from a "25 patient cohort" suggests existing patient data, which is typically retrospective.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Quantitative Nodule Matching Performance: The document does not explicitly state how the ground truth for the 900 nodules was established, nor the number or qualifications of experts involved. It only states that "detected nodules were split into three categories based on their attenuation pattern."
- Clinical Performance Testing (Real Nodules): Ground truth for the 42 actionable nodules was established by "the radiologist." The document uses "radiologist" in the singular, implying one expert, but does not provide specific qualifications (e.g., years of experience).
4. Adjudication Method for the Test Set
The document does not describe a formal adjudication method (like 2+1 or 3+1) for establishing the ground truth of the test sets. For the "clinical performance testing," it states "42 actionable nodules were ground-truthed by the radiologist," implying a single observer established the ground truth.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
No, an MRMC comparative effectiveness study involving human readers' improvement with AI vs. without AI assistance was not explicitly described or presented in the provided text. The studies focused on the performance of the ClearRead CT Compare algorithm itself, not its impact on human reader performance.
6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done
Yes, the studies presented primarily focus on standalone (algorithm only) performance.
- The nodule matching performance tables (Tables 5.2 and 5.3) report the algorithm's match and mismatch rates.
- The VDT and % change calculation accuracy was checked against "manual computation," implying a comparison of algorithm output to a reference, not human interaction.
- The vessel suppress evaluation directly assesses the algorithm's output (residual changes).
- The "clinical performance testing" assessed the algorithm's ability to detect and match nodules and identify new ones, again without a human-in-the-loop component for the reported metrics.
The device's indication for use explicitly states it provides "adjunctive information and is not intended to be used without the original CT series," suggesting it is designed to aid radiologists, but the studies described focus on its internal performance metrics rather than its performance in an assisted reading workflow.
7. The Type of Ground Truth Used
- Quantitative Nodule Matching Performance: The type of ground truth used for the 900 nodules is not explicitly stated beyond classification by attenuation pattern.
- Volume Doubling Time and % Change Calculation: Manual computation of these values was used as ground truth.
- Vessel Suppress Performance for Thicker Slices: The ground truth for this evaluation appears to be based on a baseline performance (1mm data) and a predefined significance threshold for residual changes. The "residual analysis" implies comparing the algorithm's output to an expected or ideal output for vessel suppression.
- Clinical Performance Testing (Real Nodules): Expert radiologist assessment ("ground-truthed by the radiologist") was used to identify "actionable nodules" and their presence in prior scans.
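The VDT and percent-change ground truth mentioned above was produced by manual computation; the underlying formulas are the standard exponential-growth ones. A sketch, assuming volumes in mm³ and a scan interval in days (the example values are illustrative, not from the study):

```python
import math

def volume_doubling_time(v_prior, v_current, days_between):
    """Standard exponential-growth VDT: days for the nodule volume to double."""
    return days_between * math.log(2) / math.log(v_current / v_prior)

def percent_change(v_prior, v_current):
    """Percent volume change relative to the prior scan."""
    return 100.0 * (v_current - v_prior) / v_prior

# A nodule growing from 100 mm^3 to 200 mm^3 over 90 days doubles exactly once:
vdt = volume_doubling_time(100.0, 200.0, 90)   # -> 90.0 days
change = percent_change(100.0, 200.0)          # -> 100.0 %
```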
8. The Sample Size for the Training Set
The document does not provide information regarding the sample size used for the training set for any component of ClearRead CT or ClearRead CT Compare.
9. How the Ground Truth for the Training Set Was Established
The document does not provide information regarding how the ground truth for the training set was established, as the training set details are not mentioned.
(146 days)
OEB
The syngo.CT Lung CAD device is a Computer-Aided Detection (CAD) tool designed to assist radiologists in the detection of solid and subsolid (part-solid and ground glass) pulmonary nodules during review of multi-detector computed tomography (MDCT) from multivendor examinations of the chest. The software is an adjunctive tool to alert the radiologist to regions of interest (ROI) that may otherwise be overlooked.
The syngo.CT Lung CAD device may be used as a concurrent first reader, followed by a full review of the case by the radiologist, or as a second reader after the radiologist has completed his/her initial read.
The software device is an algorithm which does not have its own user interface component for displaying of CAD marks. The Hosting Application incorporating syngo.CT Lung CAD is responsible for implementing a user interface.
Siemens Healthcare GmbH intends to market the syngo.CT Lung CAD, a medical device designed to perform CAD processing in thoracic CT examinations for the detection of solid pulmonary nodules (between 3.0 mm and 30.0 mm) and subsolid (part-solid and ground glass) nodules (between 5.0 mm and 30.0 mm) in average diameter. The device processes images acquired with multi-detector CT scanners with 16 or more detector rows.
The syngo.CT Lung CAD device supports the full range of nodule locations (central, peripheral) and contours (round, irregular).
The syngo.CT Lung CAD sends a list of nodule candidate locations to a visualization application, such as syngo MM Oncology, or a visualization rendering component, which generates output image series with the CAD marks superimposed on the input thoracic CT images to enable the radiologist's review. syngo MM Oncology (FDA clearance K191309) is deployed on the syngo.via platform (FDA clearance K191040), which provides a common framework for various other applications implementing specific clinical workflows (not part of this clearance) to display the CAD marks. The syngo.CT Lung CAD device may be used either as a concurrent first reader, followed by a review of the case, or as a second reader only after the initial read is completed.
The subject device and predicate device have the same basic technical characteristics. This does not introduce new types of safety or effectiveness concerns, as demonstrated by the statistical analyses, the results of the reader study, and the additional evaluation results documented in the Statistical Analysis.
Here's a breakdown of the acceptance criteria and the study proving the device's performance, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance:
The document primarily focuses on demonstrating the improvement of the new VD20 version over the predicate VC30, rather than explicitly listing fixed "acceptance criteria" with numerical targets in a single table. However, based on the statistical analysis summary and the comparison tables, we can infer the performance goals and the reported outcomes:
Feature/Metric | Acceptance Criteria (Inferred) | Reported Device Performance (VD20) |
---|---|---|
Detection Target | Extension to subsolid (part-solid and ground glass) pulmonary nodules, in addition to solid nodules. | Device successfully assists in detecting solid and subsolid (part-solid and ground glass) pulmonary nodules. |
Nodule Size Range | Solid: Up to 30mm; Subsolid: Up to 30mm | Solid: ≥ 3mm and ≤ 30mm; Subsolid: ≥ 5mm and ≤ 30mm. |
Reader Workflow | Support for concurrent first reader workflow in addition to second reader. | Device supports both concurrent first reader and second reader workflows. |
Multi-vendor Compatibility | Support for multi-vendor CT scanners. | Supports Siemens, GE, Philips, and Toshiba MDCT scanners. |
Detector Rows | Recommended 16 or more detector rows. | Recommendation to use 16 or more detector rows included, matching FDA recommendation. |
Voltage | Expanded range (implied). | 100-140 kVp. |
Slice Thickness | Up to 2.5mm, with recommendation for ≤ 1.25mm for smaller nodules. | Up to and including 2.5mm; recommended that ≤ 1.25mm be used for detection of smaller nodules (e.g., 3.0mm). |
Slice Overlap | 0-50% | 0-50%. |
Kernels | Expanded range of supported kernels. | Consistent with thoracic CT protocols and patient safety guidelines. Typical kernels: Smooth, Medium, Sharp groups validated. |
Dose | Consistent with diagnostic and screening protocols. | CTDIvol |
(263 days)
OEB
Veolity is intended to:

- display a composite view of 2D cross-sections, and 3D volumes of chest CT images,
- allow comparison between new and previous acquisitions as well as abnormal thoracic regions of interest, such as pulmonary nodules,
- provide Computer-Aided Detection ("CAD") findings, which assist radiologists in the detection of solid pulmonary nodules between 4-30 mm in size in CT images with or without intravenous contrast. CAD is intended to be used as an adjunct, alerting the radiologist - after his or her initial reading of the scan - to regions of interest that may have been initially overlooked.
The system can be used with any combination of these features. Enabling is handled via licensing or configuration options.
Veolity is a medical imaging software platform that allows processing, review, and analysis of multi-dimensional digital images.
The system integrates within typical clinical workflow patterns through receiving and transferring medical images over a computer network. The software can be loaded on a standard off-the-shelf personal computer (PC). It can operate as a stand-alone workstation or in a distributed server-client configuration across a computer network.
Veolity is intended to support the radiologist in the review and analysis of chest CT data. Automated image registration facilitates the synchronous display and navigation of current and previous CT images for follow-up comparison.
The software enables the user to determine quantitative and characterizing information about nodules in the lung in a single study, or over the time course of several thoracic studies. Veolity automatically performs the measurements for segmented nodules, allowing lung nodules and measurements to be displayed. Afterwards nodule segmentation contour lines can be edited by the user manually with automatic recalculation of geometric measurements post-editing. Further, the application provides a range of interactive tools specifically designed for segmentation and volumetric analysis of findings in order to determine growth patterns and compose comparative reviews.
Veolity requires the user to identify a nodule and to determine the type of nodule in order to use the appropriate characterization tools. Additionally, the software provides an optional/licensable CAD package that analyzes the CT images to identify findings with features suggestive of solid pulmonary nodules between 4-30 mm in size. The CAD is not intended as a detection aid for either part-solid or non-solid lung nodules. The CAD is intended to be used as an adjunct, alerting the radiologist – after his or her initial reading of the scan – to regions of interest that may have been initially overlooked.
The provided text describes the MeVis Medical Solutions AG's Veolity device and its 510(k) submission (K201501). However, the document states: "N/A - No clinical testing has been conducted to demonstrate substantial equivalence." This means that detailed acceptance criteria tables, sample sizes for test sets, expert qualifications, etc., as requested in your prompt, are not explicitly provided in the document for a new clinical study.
The submission claims substantial equivalence based on the device being a combination of previously cleared predicate and reference devices. It asserts that the individual functionalities remain technically unchanged. The performance assessment for the CAD system relies on prior panel review results from the initial submission of the predicate device and a re-evaluation with a multi-center dataset designed to be comparable to the predicate device's clinical study.
Therefore, I cannot fully complete all sections of your request with specific details from this document regarding a new study demonstrating the device meets acceptance criteria. I will instead extract the information that is present and highlight the limitations.
Here's a breakdown of the requested information based on the provided text:
1. A table of acceptance criteria and the reported device performance
The document does not provide a specific table of quantitative acceptance criteria for the current submission's performance study. It states that the subject device's CAD system performance "provides equal results in terms of sensitivity and false positive rates compared to the primary predicate device."
It re-evaluated performance "in terms of sensitivity and false positive rate per case" and found it "equivalent to the primary predicate device."
- Acceptance Criteria (Implied): Equivalence in sensitivity and false positive rates per case compared to the primary predicate device (ImageChecker CT CAD Software System K043617).
- Reported Device Performance: Stated as "equivalent" to the primary predicate device's performance, which was based on its own initial submission. No specific numerical values for sensitivity or false positive rates are provided for Veolity.
2. Sample size used for the test set and the data provenance
- Test Set Sample Size: Not explicitly stated for the "re-evaluated" performance study. The text mentions it was conducted with "a multi-center dataset."
- Data Provenance: "modern and multivendor CT data." The document does not specify the country of origin or whether it was retrospective or prospective. Given it's a re-evaluation designed to be comparable to a predicate's clinical study, it's likely retrospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
Not explicitly stated for the "re-evaluated" performance study. The document refers to "panel review results" from the initial submission of the predicate device for its performance assessment. It does not provide details on the number or qualifications of experts for Veolity's re-evaluation.
4. Adjudication method for the test set
Not explicitly stated. Given the reliance on prior predicate studies, this information would likely be found in the predicate's 510(k) submission.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
- MRMC Study: The document does not describe an MRMC comparative effectiveness study where human readers' performance with and without AI assistance was evaluated for this submission.
- Effect Size: Not applicable, as no such study is described. The CAD's indication for use states it is "intended to be used as an adjunct, alerting the radiologist - after his or her initial reading of the scan - to regions of interest that may have been initially overlooked." This implies an assistive role, but no data on human-AI collaboration improvement is presented here.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
Yes, a standalone performance evaluation of the CAD algorithm itself was done. The document states: "the subject device's CAD system performance provides equal results... in terms of sensitivity and false positive rates." This implies an algorithm-only evaluation against ground truth.
7. The type of ground truth used
The document does not explicitly state the method for establishing ground truth for the re-evaluated dataset. However, given that it's comparing against "sensitivity and false positive rates" for solid pulmonary nodules, the ground truth would typically be established by expert consensus (e.g., highly experienced radiologists, often with follow-up or pathology correlation if available within the original study design of the predicate).
For the predicate device, it mentions "panel review results," which strongly suggests expert consensus.
8. The sample size for the training set
The document does not provide any information about the training set used for the Veolity CAD algorithm. It only discusses performance evaluations.
9. How the ground truth for the training set was established
Not provided in the document.
(267 days)
OEB
InferRead Lung CT.AI is comprised of computer assisted reading tools designed to aid the radiologist in the detection of pulmonary nodules during the review of CT examinations of the chest on an asymptomatic population. InferRead Lung CT.AI requires that both lungs be in the field of view. InferRead Lung CT.AI provides adjunctive information and is not intended to be used without the original CT series.
InferRead Lung CT.AI uses a Browser/Server architecture and is provided as Software as a Service (SaaS) via a URL. The system integrates the algorithm logic and the database on the same server, for simplicity of the system and convenience of system maintenance. The server is able to accept chest CT images from a PACS system, an RIS (Radiological Information System), or directly from a CT scanner, analyze the images, and provide output annotations regarding lung nodules. Users view the annotations through an existing PACS system. Dedicated servers can be located at hospitals and are directly connected to the hospital networks. The software consists of 4 modules: Image reception (Docking Toolbox), Image predictive processing (DLServer), Image storage (RePACS), and Image display (NeoViewer).
Here's a breakdown of the acceptance criteria and study details for InferRead Lung CT.AI, based on the provided document:
1. Table of Acceptance Criteria and Reported Device Performance
The document primarily focuses on a comparative effectiveness study and discusses the device's performance in comparison to unaided human reading. It doesn't explicitly list "acceptance criteria" with numerical targets in the same way a standalone performance study might. However, the objective of the clinical study serves as the de facto acceptance criteria.
Acceptance Criteria (Inferred from Study Objective) | Reported Device Performance |
---|---|
Significantly improve radiologists' nodule detection performance (AUC) | Increase in AUC (Aided - Unaided): 0.073 (95% CI: 0.020, 0.125). The document states this increase was "significant," indicating that the lower bound of the confidence interval (0.020) is above zero, satisfying the improvement criterion. |
Without significantly increasing reading time | Decrease in reading times (Aided - Unaided): -23 seconds (95% CI: -42, -3). The document states this decrease was "significant," meaning the upper bound of the confidence interval (-3) is below zero. This indicates a reduction in reading time, thus satisfying the criterion of not increasing reading time and indeed improving it. |
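The table's reasoning, that a 95% confidence interval excluding zero implies a statistically significant difference, can be expressed directly. A minimal sketch using the reported interval bounds (the bounds are from the document; the helper function is illustrative):

```python
def ci_excludes_zero(lower, upper):
    """True when the whole confidence interval lies on one side of zero."""
    return lower > 0 or upper < 0

# Reported aided-minus-unaided differences from the summary:
auc_significant = ci_excludes_zero(0.020, 0.125)   # AUC gain: interval entirely above zero
time_significant = ci_excludes_zero(-42, -3)       # Reading time: interval entirely below zero
```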
2. Sample Size and Data Provenance for Test Set
- Sample Size (Test Set): 249 CT scans.
- Data Provenance: The document does not explicitly state the country of origin. It specifies that the data included "chest CT scans from patients who underwent lung cancer screening," implying it's clinical data, and the study was "retrospective." This suggests the data was collected from existing patient records.
3. Number of Experts and Qualifications for Ground Truth
- The document mentions that a "pivotal reader study" was conducted, involving "10 board-certified radiologists." These radiologists were part of the MRMC study, where their consensus or interpretations would contribute to the ground truth.
- However, it does not explicitly state how many of these, or other, experts were specifically used to establish the definitive ground truth for the test set independent of the reader study itself. The ground truth for the reader study is the consensus of the readers, or a reference standard against which their performance is measured (see point 7).
4. Adjudication Method for the Test Set
The document describes a "fully crossed, multi-reader multi-case (MRMC) study." In such studies, all readers review all cases. While it doesn't explicitly state an adjudication method like "2+1" for establishing a separate ground truth, the MRMC setup inherently uses the collective performance of the expert readers (in both aided and unaided modes) to evaluate the device's impact. The ground truth for nodule presence/absence in the cases would have been established prior to the reader study.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
Yes, a MRMC comparative effectiveness study was done.
- Effect Size of Human Readers Improvement with AI vs. without AI:
- Nodule Detection Performance (AUC): The AUC increased by 0.073 (Aided - Unaided), with a 95% confidence interval of (0.020, 0.125). This indicates a statistically significant improvement in detection performance when radiologists used the InferRead Lung CT.AI device.
- Reading Time: Reading times decreased by 23 seconds (Aided - Unaided), with a 95% confidence interval of (-42, -3). This indicates a statistically significant reduction in reading time.
6. Standalone Performance Study (Algorithm Only)
Yes, a standalone performance study was done.
- The document states: "Standalone performance testing which included chest CT scans from patients who underwent lung cancer screening was performed to validate detection accuracy of InferRead Lung CT.AI. Results showed that InferRead Lung CT.AI had similar nodule detection sensitivity and FP/scan compared to those of the predicate device."
- This suggests a comparison against the predicate device's standalone performance, which implies quantitative performance metrics (sensitivity, false positives per scan) for the algorithm in isolation.
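Sensitivity and false positives per scan, the metrics compared against the predicate above, are computed from per-scan detection tallies. A minimal sketch under assumed inputs (the counts are illustrative; scoring a mark as a true positive in practice requires a localization criterion against the reference standard that is not shown here):

```python
def cade_metrics(per_scan):
    """per_scan: list of (true_positives, false_positives, ground_truth_nodules) per scan."""
    tp = sum(s[0] for s in per_scan)
    fp = sum(s[1] for s in per_scan)
    gt = sum(s[2] for s in per_scan)
    sensitivity = tp / gt if gt else 0.0   # fraction of reference nodules detected
    fp_per_scan = fp / len(per_scan)       # average false marks per scan
    return sensitivity, fp_per_scan

# Illustrative tallies for three scans:
sens, fps = cade_metrics([(3, 1, 4), (2, 0, 2), (1, 2, 2)])
# sens = 6/8 = 0.75, fps = 3/3 = 1.0
```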
7. Type of Ground Truth Used
- For the standalone performance study, the document mentions "detection accuracy" based on scans from lung cancer screening, but doesn't explicitly state whether the ground truth was expert consensus, pathology, or outcomes data. However, for nodule detection, expert consensus on the presence and location of nodules from expert radiologists is a common ground truth, often verified or refined.
- For the MRMC study, the ground truth for evaluating individual reader performance (from which the AUC is derived) would typically be an established reference standard (often expert consensus, sometimes supplemented by follow-up or pathology if available for some cases) created prior to the readers' evaluations.
8. Sample Size for the Training Set
The document does not provide the sample size of the training set used for developing the InferRead Lung CT.AI algorithm.
9. How Ground Truth for Training Set Was Established
The document does not provide information on how the ground truth for the training set was established.
(109 days)
OEB
The syngo.CT Lung CAD VC30 device is a Computer-Aided Detection (CAD) tool designed to assist radiologists in the detection of solid pulmonary nodules during review of multi-detector computed tomography (MDCT) examinations of the chest. The software is an adjunctive tool to alert the radiologist to regions of interest that may have been initially overlooked. The syngo.CT Lung CAD device is intended to be used as a second reader after the radiologist has completed his/her initial read.
Siemens Healthcare GmbH intends to market the syngo.CT Lung CAD which is a medical device that is designed to perform CAD processing in thoracic CT examinations for the detection of solid pulmonary nodules ≥ 3.0 mm in size. The device processes images acquired with Siemens multi-detector CT scanners with 4 or more detector rows.
The syngo.CT Lung CAD device supports the full range of nodule locations (central, peripheral) and contours (round, irregular). The detection performance of the syngo.CT Lung CAD device is optimized for nodules between 3.0 mm and 20.0 mm in size.
The syngo.CT Lung CAD sends a list of nodule candidate locations to a visualization application, such as syngo MM Oncology, or a visualization rendering component, which generates output images series with the CAD marks superimposed on the input thoracic CT images for use in a second reader mode. syngo MM Oncology (FDA clearance K191309) is implemented on the syngo.via platform (FDA clearance K191040), which provides a common framework for various other applications implementing specific clinical workflows (but are not part of this clearance) to display the CAD marks. The syngo.CT Lung CAD device is intended to be used as a second reader only after the initial read is completed.
The subject device and the predicate device have the same basic technical characteristics; however, the fundamental technology has been replaced by deep learning technology. Specifically, the predicate VC20 uses feature-based machine learning, whereas the current VC30 uses algorithms based on Convolutional Neural Networks. This does not introduce new types of safety or effectiveness concerns. In particular, as demonstrated by the statistical analysis and results of the standalone benchmark evaluations:
i. The standalone accuracy has been shown not only to be non-inferior but superior to that of the predicate device, and
ii. The marks generated by the two devices have been shown to be reasonably consistent.
This device description holds true for the subject device, syngo.CT Lung CAD, software version VC30, as well as the predicate device, syngo.CT Lung CAD, software version VC20.
Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) summary for syngo.CT Lung CAD (VC30):
Device Name: syngo.CT Lung CAD (VC30)
Intended Use: A Computer-Aided Detection (CAD) tool to assist radiologists in the detection of solid pulmonary nodules (≥ 3.0 mm) during review of multi-detector computed tomography (MDCT) examinations of the chest. It's an adjunctive tool to alert radiologists to initially overlooked regions of interest, used as a second reader after the radiologist's initial read.
1. Table of Acceptance Criteria and Reported Device Performance
The document primarily focuses on demonstrating non-inferiority and superiority to the predicate device rather than explicitly stating acceptance criteria with numerical targets for metrics like sensitivity or specificity. However, based on the conclusions regarding "standalone accuracy" and "false positive rate," we can infer the implicit criteria and the reported performance as comparative to the predicate.
Acceptance Criteria (Inferred from comparison to predicate) | Reported Device Performance (syngo.CT Lung CAD VC30) |
---|---|
Standalone accuracy (sensitivity for nodule detection) is non-inferior to predicate (syngo.CT Lung CAD VC20). | Superior to predicate (syngo.CT Lung CAD VC20). |
False positive rate is not worse than predicate (syngo.CT Lung CAD VC20). | Improved (reduced) compared to predicate (syngo.CT Lung CAD VC20). |
Consistency of marks (location and extent) with predicate (syngo.CT Lung CAD VC20). | Reasonably consistent with marks produced by predicate (syngo.CT Lung CAD VC20). |
Note: The document describes the study as a "standalone benchmark evaluation" focused on comparing VC30's performance to VC20's. Specific numerical metrics for sensitivity, specificity, or FPs are not provided in this summary, but the conclusions about superiority and reduction in FPs serve as the performance statement.
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: The document states that "The endpoints to establish meaningful and statistically relevant performance and equivalence of the device and sample size were considered and defined as part of the test protocols." However, the specific number of cases or nodules in the test set is not provided in this summary.
- Data Provenance: Not explicitly stated regarding country of origin. The document mentions "Non-clinical performance testing was performed at various levels for verification and validation of the device intended use and to ensure safety and effectiveness." It does not specify if the data was retrospective or prospective.
3. Number of Experts Used to Establish Ground Truth and Qualifications
- Number of Experts: Not specified in the provided text.
- Qualifications of Experts: Not specified in the provided text.
4. Adjudication Method for the Test Set
- Adjudication Method: Not specified in the provided text. The document refers to "ground truth" but does not detail the method by which it was established.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- MRMC Study: The document does not mention a multi-reader multi-case (MRMC) comparative effectiveness study comparing human readers with AI vs. without AI assistance. The study described is a "standalone benchmark evaluation" comparing the performance of the new AI algorithm (VC30) to the previous algorithm (VC20).
- Effect Size of Human Improvement: Not applicable, as no MRMC study is detailed here.
6. Standalone (Algorithm Only) Performance Study
- Standalone Study: Yes, a standalone study was done. The document explicitly states: "The standalone performance test proved that the standalone sensitivity of syngo.CT Lung CAD VC30 is superior to that of syngo.CT Lung CAD VC20 (predicate) and the false positive rate improved (reduced)."
7. Type of Ground Truth Used
- Type of Ground Truth: The document refers to "ground truth" for the test set, stating that it was established to define "meaningful and statistically relevant performance." However, the specific method (e.g., expert consensus, pathology, follow-up outcomes) for establishing this ground truth is not detailed in the provided summary.
8. Sample Size for the Training Set
- Sample Size for Training Set: The document does not provide the sample size used for the training set. It only mentions that the "fundamental technology has been replaced by deep learning technology," indicating a training process was involved.
9. How the Ground Truth for the Training Set Was Established
- How Ground Truth for Training Set Was Established: The document does not provide details on how the ground truth for the training set was established. It only describes the functional components of the new syngo.CT Lung CAD VC30 as using Convolutional Neural Networks (CNN) for lung segmentation, candidate generation, feature calculation, and candidate classification, which inherently require labeled training data.
(134 days)
OEB
ClearRead CT™ is comprised of computer assisted reading tools designed to aid the radiologist in the detection of pulmonary nodules during review of CT examinations of the chest on an asymptomatic population. The ClearRead CT requires both lungs be in the field of view. ClearRead CT provides adjunctive information and is not intended to be used without the original CT series.
ClearRead CT is a dedicated post-processing application that generates a secondary vessel suppressed Lung CT series with CADe marks and associated region descriptors intended to aid the radiologist in the detection of pulmonary nodules.
Here's a breakdown of the acceptance criteria and study details for the ClearRead CT device, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria | Reported Device Performance |
---|---|
Localization Receiver Operating Characteristic (LROC) AUC | ClearRead CT was found to significantly increase the AUC compared to the unaided read, indicating superior performance for detecting nodules. |
Radiologists' Interpretation Time | ClearRead CT was found to decrease read times, both with and without outliers. |
2. Sample Size Used for the Test Set and Data Provenance
The document does not explicitly state the exact sample size for the test set (number of cases). It refers to a "multi-reader multi-case (MRMC) study."
The data provenance (country of origin, retrospective/prospective) is not specified in the provided text.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
The document does not specify the number of experts used to establish ground truth or their qualifications. It mentions a "multi-reader multi-case (MRMC) study," implying multiple readers were involved in the evaluation, but not necessarily in establishing the initial ground truth.
4. Adjudication Method for the Test Set
The document does not explicitly describe the adjudication method used for the test set.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size
Yes, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was done.
- Effect Size of Human Readers Improvement with AI vs. Without AI Assistance: The study found that using ClearRead CT "significantly increase[d] the AUC" of the LROC response. It also found that ClearRead CT "decrease[d] read times with and without outliers." While a specific numerical effect size (e.g., a percentage increase in AUC or specific time reduction) is not provided in this summary, the terms "significantly increase" and "decrease" indicate a positive and measurable improvement.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
The document focuses on the multi-reader multi-case study, explicitly stating that ClearRead CT is "designed to aid the radiologist" and "provides adjunctive information and is not intended to be used without the original CT series." This implies the primary evaluation was human-in-the-loop. It also mentions that the device "generates a secondary vessel suppressed Lung CT series with CADe marks," which could be considered a standalone function, but the performance metrics provided are for the combined human-AI workflow.
7. The Type of Ground Truth Used
The document does not explicitly state the type of ground truth used (e.g., expert consensus, pathology, outcomes data).
8. The Sample Size for the Training Set
The sample size for the training set is not mentioned in the provided text.
9. How the Ground Truth for the Training Set Was Established
The document does not provide information on how the ground truth for the training set was established.