Search Results
Found 4 results
510(k) Data Aggregation
(53 days)
Shanghai United Imaging Intelligence Co., Ltd.
uAI Easy Triage ICH is a radiological computer-assisted triage and notification software device indicated for analysis of non-enhanced head CT images. The device is intended to assist hospital networks and trained radiologists in workflow triage by flagging and prioritizing studies with suspected positive findings of Intracranial Hemorrhage (ICH).
uAI Easy Triage ICH is a radiological computer-assisted triage and notification software intended to assist radiologists by flagging potential intracranial hemorrhage (ICH) in non-contrast head CT images. The Triage Software is a component of the uAI Easy Triage platform, a comprehensive medical imaging communication system designed to integrate and deploy specialized image processing applications.
The uAI Easy Triage ICH algorithm uses convolutional neural networks (CNNs) and advanced image processing to triage non-contrast CT images for suspected intracranial hemorrhage.
The uAI Easy Triage ICH alerts users to new studies with suspicious ICH findings via pop-up notifications. The software provides both active and passive notification mechanisms. Active notifications are presented as an alert icon with a count of pending cases, and an alert status bar displaying patient details, suspected findings, and the time of examination. Passive notifications are represented by an icon beside the patient's name in the list for cases with detected ICH findings. Additionally, the application offers a DICOM image preview feature for radiologists to review. This preview is strictly informational, devoid of diagnostic markers, and is not to be used for definitive diagnosis.
The uAI Easy Triage ICH embodies the core algorithmic technology that identifies image characteristics consistent with intracranial hemorrhage. The application reads DICOM files, verifies their compatibility with the prescribed acquisition protocols, executes the triage algorithm, and communicates findings in DICOM format, compatible with the uAI Easy Triage Platform.
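As a sketch of the pre-flight verification step described above, the snippet below checks a few standard DICOM attributes before running triage. The attribute names (`Modality`, `BodyPartExamined`, `ContrastBolusAgent`) are real DICOM tags, but the device's actual compatibility criteria are not disclosed in the summary, so this logic is purely hypothetical.

```python
# Hypothetical sketch of the "verify compatibility with the prescribed
# acquisition protocols" step. The real criteria are not disclosed; the
# keys are standard DICOM attribute names used for illustration only.
def is_compatible(header: dict) -> bool:
    return (
        header.get("Modality") == "CT"
        and header.get("BodyPartExamined", "").upper() in {"HEAD", "BRAIN"}
        and header.get("ContrastBolusAgent") in (None, "")  # non-enhanced only
    )

print(is_compatible({"Modality": "CT", "BodyPartExamined": "HEAD"}))   # True
print(is_compatible({"Modality": "CT", "BodyPartExamined": "CHEST"}))  # False
```

Studies failing such a check would simply be skipped rather than scored, so they never generate notifications.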
Acceptance Criteria and Study Details for uAI Easy Triage ICH
1. Acceptance Criteria and Reported Device Performance
Metric | Acceptance Criteria | Reported Device Performance (95% CI) |
---|---|---|
Sensitivity | ≥ 80% | 92% (86%-96%) |
Specificity | ≥ 80% | 95% (90%-98%) |
Time to Notification | Not explicitly stated as a numerical criterion; noted as "similar to the predicate device's time" | 41.1 seconds, mean (95% CI: 40.1, 42.1) |
Note: The document does not explicitly state a numerical acceptance criterion for "Time to Notification" but indicates that the device's performance is similar to the predicate device's. The 80% acceptance criterion for sensitivity and specificity comes from the text "exceeding the acceptance criteria of 80%."
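For illustration, the point estimates and confidence intervals above can be reproduced from 2×2 counts. The raw counts below are hypothetical (chosen to match the reported rounded percentages on the 147 positive and 148 negative studies), and the Wilson score interval is one common CI method; the filing does not state which method was used.

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple:
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Hypothetical 2x2 counts consistent with the reported rounded percentages.
tp, fn = 135, 12   # sensitivity 135/147 ~ 91.8%
tn, fp = 141, 7    # specificity 141/148 ~ 95.3%

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"sensitivity = {sensitivity:.3f}, 95% CI ~ {wilson_ci(tp, tp + fn)}")
print(f"specificity = {specificity:.3f}, 95% CI ~ {wilson_ci(tn, tn + fp)}")
```

With these counts the lower CI bounds clear the 80% acceptance threshold, mirroring the "exceeding the acceptance criteria of 80%" statement.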
2. Sample Size for Test Set and Data Provenance
- Sample Size: 295 non-contrast CT scans (studies)
- 147 positive for ICH
- 148 negative for ICH
- Data Provenance: Retrospective data obtained from different zip codes across four U.S. states.
3. Number of Experts and Qualifications for Ground Truth Establishment
- Number of Experts: 3
- Qualifications: U.S. board-certified neuroradiologists; specific years of experience are not mentioned.
4. Adjudication Method for the Test Set
- Adjudication Method: Majority read of the 3 U.S. board-certified neuroradiologists (i.e., the ground truth is the finding on which at least two of the three experts agreed).
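The majority-read approach can be sketched as a simple vote over per-study reads. The study IDs and individual reads below are hypothetical, for illustration only.

```python
from collections import Counter

def majority_ground_truth(reads: list) -> bool:
    """Ground truth by majority read: the label at least 2 of 3 experts assign."""
    votes = Counter(reads)
    return votes[True] > votes[False]

# Hypothetical neuroradiologist reads per study (True = ICH present).
studies = {
    "study_001": [True, True, False],   # 2/3 positive -> positive
    "study_002": [False, False, True],  # 1/3 positive -> negative
    "study_003": [True, True, True],    # unanimous positive
}
ground_truth = {sid: majority_ground_truth(r) for sid, r in studies.items()}
print(ground_truth)  # {'study_001': True, 'study_002': False, 'study_003': True}
```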
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- The document does not indicate that a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was done to assess human reader improvement with AI assistance. The study focuses on the standalone performance of the AI algorithm.
6. Standalone Algorithm Performance
- Yes, a standalone (algorithm only without human-in-the-loop performance) study was conducted. The reported sensitivity, specificity, and time to notification are for the uAI Easy Triage ICH software itself, compared to the expert-established ground truth.
7. Type of Ground Truth Used
- Expert Consensus: The ground truth was established by the majority read of 3 U.S.-board-certified neuroradiologists.
8. Sample Size for the Training Set
- Sample Size: 9,791 cases were collected for training and internal testing.
9. How Ground Truth for the Training Set was Established
- The ground truth for the training set was established in the form of ICH positive/negative by radiologists. Specific details on the number or qualifications of these radiologists, or the adjudication method, are not provided for the training set.
(207 days)
Shanghai United Imaging Intelligence Co., Ltd.
uAI Portal is a software solution intended to be used for viewing, manipulation, and storage of medical images. It supports interpretation of examinations within healthcare institutions. It has the following additional indications:
The Lower Extremity Vessel Analysis is intended to provide a tool for viewing, manipulating, and evaluating CTA images of lower extremities.
The Head and Neck Vessel Analysis is intended to provide a tool for viewing, manipulating, and evaluating imaging datasets acquired with head and neck CTA.
The Coronary Analysis is intended to provide a tool for viewing and manipulating imaging datasets acquired with CCTA.
The Pulmonary Artery Analysis is intended to provide a tool for viewing, manipulating, and evaluating imaging datasets acquired with CTPA.
The Aorta Analysis is intended to provide a tool for viewing and evaluating imaging datasets acquired with aorta CTA.
uAI Portal is a comprehensive software solution designed to process, review, and analyze CT studies. It can transfer images in DICOM 3.0 format over a medical imaging network or a local file system. These images can be functional or anatomical datasets, acquired at one or more time points or containing one or more time frames. Multiple display formats are supported, including VR, MIP, MPR, Probe, CPR, and SCPR. A trained, licensed physician can interpret the displayed images and the accompanying statistics as per standard practice.
uAI Portal contains the following applications:
- The Lower Extremity Vessel Analysis
- The Head and Neck Vessel Analysis
- The Coronary Analysis
- The Pulmonary Artery Analysis
- The Aorta Analysis
Here's a breakdown of the acceptance criteria and study details for the uAI Portal device, based on the provided FDA 510(k) summary:
1. Acceptance Criteria and Reported Device Performance
The device's performance was evaluated using the Dice coefficient, which measures the similarity between the algorithm's segmentation result (P) and the ground truth (G). The formula used is: $DICE = \frac{2|G \cap P|}{|G| + |P|}$.
Application | Algorithm | Acceptance Criteria (Dice) | Reported Average Dice |
---|---|---|---|
Coronary Artery | Vessels segmentation | 0.85 | 0.920 |
Coronary Artery | Heart segmentation | 0.90 | 0.980 |
Head and Neck Vessel | Head vessel segmentation | 0.85 | 0.902 |
Head and Neck Vessel | Neck vessel segmentation | 0.90 | 0.967 |
Aorta | Trunk segmentation | 0.90 | 0.946 |
Aorta | Branches segmentation | 0.80 | 0.846 |
Pulmonary Artery | Arteries segmentation | 0.85 | 0.953 |
Pulmonary Artery | Veins segmentation | 0.85 | 0.933 |
Lower Extremity Artery | Arteries segmentation | 0.80 | 0.892 |
All reported average Dice values exceed their respective acceptance criteria.
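A minimal implementation of the Dice metric defined above, applied to toy binary masks standing in for voxel-wise vessel segmentations. Treating two empty masks as Dice = 1.0 is a common convention, not something stated in the filing.

```python
import numpy as np

def dice(ground_truth: np.ndarray, prediction: np.ndarray) -> float:
    """DICE = 2|G ∩ P| / (|G| + |P|) for binary segmentation masks."""
    g = ground_truth.astype(bool)
    p = prediction.astype(bool)
    denom = g.sum() + p.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement (convention)
    return 2.0 * np.logical_and(g, p).sum() / denom

# Toy 1-D masks; real inputs would be 3-D voxel masks.
g = np.array([0, 1, 1, 1, 0, 0])
p = np.array([0, 1, 1, 0, 1, 0])
print(dice(g, p))  # 2*2 / (3+3) ≈ 0.667
```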
2. Sample Size and Data Provenance
- Test Set Sample Size: 150 images.
- Data Provenance for Test Set: Collected from the US. The data covered a variety of demographics (gender, age), equipment (SIEMENS, GE, TOSHIBA), and image characteristics (with/without artifacts, with/without anatomical variation).
- Data Provenance for Training Set: Images collected from China. The data set ensured a variety of data for different gender, age, equipment, and CT protocol.
- Retrospective/Prospective: Not explicitly stated, but the mention of "images collected from US" and "images collected from China" for testing and training datasets respectively, combined with the ground truth establishment process involving expert annotation of existing images, strongly suggests a retrospective study design.
3. Number of Experts and Qualifications for Ground Truth
- Number of Experts:
- Initial Annotation: Two (2) Chinese radiologists.
- Adjudication: One (1) American board-certified radiologist serving as adjudicator.
- Qualifications of Experts:
- Initial Annotation: Each radiologist had at least 5 years of clinical experience. They were hospital employees and independent of United Imaging.
- Adjudication: The adjudicator was an American board-certified radiologist with at least 10 years of clinical experience.
4. Adjudication Method for the Test Set
- Method: A 2+1 adjudication method was used.
- Two (2) Chinese radiologists independently annotated the vessel mask for each patient case, resulting in two sets of annotations.
- An American board-certified radiologist (the "1" in 2+1) reviewed both sets of segmented images.
- Based on their assessment, the adjudicator selected the most accurate segmentation set as the final ground truth. If necessary, they would make modifications until a satisfactory ground truth was established.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- No, an MRMC comparative effectiveness study was NOT done. The study focused on standalone algorithmic performance (Dice coefficient) against an expert-established ground truth. There is no mention of human readers assisting or of comparing reader performance with and without AI.
6. Standalone (Algorithm Only) Performance
- Yes, a standalone (algorithm only without human-in-the-loop performance) study was done. The entire "Performance Verification" section details the algorithm's performance in segmenting various anatomical structures by comparing its output (P) to the established ground truth (G) using the Dice coefficient.
7. Type of Ground Truth Used
- The ground truth used was expert consensus / adjudicated expert annotation. Specifically, two radiologists initially annotated cases, and a third, more experienced radiologist adjudicated and finalized the ground truth.
8. Sample Size for the Training Set
- The exact sample size for the training set is not specified, only that images were "collected from China".
9. How the Ground Truth for the Training Set was Established
- The document states that "Algorithm training of uAI Portal software has been conducted on images collected from China as training dataset." However, it does not explicitly detail how the ground truth for this training dataset was established. It implies that these images had associated ground truth data for the algorithm to learn from, but the process of creating that ground truth for the training set is not described in the provided text.
(157 days)
Shanghai United Imaging Intelligence Co., Ltd.
The uMR 680 system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces sagittal, transverse, coronal, and oblique cross sectional images, and that display internal anatomical structure and/or function of the head, body and extremities. These images and the physical parameters derived from the images when interpreted by a trained physician yield information that may assist the diagnosis. Contrast agents may be used depending on the region of interest of the scan.
The uMR 680 is a 1.5T superconducting magnetic resonance diagnostic device with a 70cm size patient bore. It consists of components such as magnet, RF power amplifier, RF coils, gradient power amplifier, gradient coils, patient table, spectrometer, computer, equipment cabinets, power distribution system, internal communication system, and vital signal module etc. The uMR 680 Magnetic Resonance Diagnostic Device is designed to conform to NEMA and DICOM standards.
Here's a breakdown of the acceptance criteria and study details for the DeepRecon algorithm found in the provided FDA 510(k) summary:
1. Table of Acceptance Criteria and Reported Device Performance (DeepRecon)
Evaluation Item | Acceptance Criteria | Reported Device Performance (Test Result) | Results |
---|---|---|---|
Image SNR | DeepRecon images achieve higher SNR compared to the images without DeepRecon (NADR) | NADR: 137.03; DeepRecon: 186.87 | PASS |
Image Uniformity | Uniformity difference between DeepRecon images and NADR images under 5% | 0.03% | PASS |
Image Resolution | DeepRecon images achieve 10% or higher resolution compared to the NADR images | 15.57% | PASS |
Image Contrast | Intensity difference between DeepRecon images and NADR images under 5% | 1.0% | PASS |
Structure Measurement | Measurements on NADR and DeepRecon images of same structures, measurement difference under 5% | 0% | PASS |
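The filing does not describe how image SNR was measured. The sketch below uses one common ROI-based definition (mean signal intensity divided by background noise standard deviation) on a synthetic phantom, purely to illustrate the kind of comparison the table reports; the ROI layout and noise model are assumptions.

```python
import numpy as np

def snr(image: np.ndarray, signal_roi, noise_roi) -> float:
    """ROI-based SNR: mean intensity in a signal region divided by the
    standard deviation in a background (noise-only) region."""
    return image[signal_roi].mean() / image[noise_roi].std()

# Synthetic 2-D "phantom": a bright square (signal) on a noisy background.
rng = np.random.default_rng(0)
img = rng.normal(10.0, 2.0, size=(64, 64))  # background with noise sigma = 2
img[16:48, 16:48] += 200.0                  # signal region

signal_roi = (slice(20, 44), slice(20, 44))
noise_roi = (slice(0, 12), slice(0, 12))
value = snr(img, signal_roi, noise_roi)
print(f"SNR ~ {value:.1f}")
```

Under this definition, a denoising reconstruction such as DeepRecon raises SNR mainly by shrinking the noise standard deviation in the denominator.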
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: 68 US subjects.
- Data Provenance: The test data was collected from various clinical sites in the US during separate time periods and on subjects different from the training data. The data specifically indicates demographic distributions for US subjects across various genders, age groups, ethnicities, and BMI groups.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
The document mentions that "DeepRecon images were evaluated by American Board of Radiologists certificated physicians." It does not specify the exact number of experts used, nor does it detail their years of experience. It does, however, state their qualification: American Board of Radiology-certified physicians.
4. Adjudication Method for the Test Set
The document does not explicitly state the adjudication method (e.g., 2+1, 3+1, none) used for the expert evaluation of the test set. It only mentions that "The evaluation reports from radiologists verified that DeepRecon meets the requirements of clinical diagnosis. All DeepRecon images were rated with equivalent or higher scores in terms of diagnosis quality." This suggests a qualitative review, but the specific consensus method is not detailed.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not explicitly described in the provided text in terms of quantifying human reader improvement with AI assistance. The expert evaluation focused on whether DeepRecon images met clinical diagnosis requirements and were rated equivalent or higher in quality, rather than measuring a specific effect size of AI assistance on human reader performance.
6. Standalone (Algorithm Only) Performance
Yes, a standalone (algorithm only) performance evaluation was done. The "Acceptance Criteria and Reported Device Performance" table directly shows the performance metrics (Image SNR, Uniformity, Resolution, Contrast, Structure Measurement) of the DeepRecon algorithm itself, compared to images without DeepRecon (NADR). This indicates a standalone assessment of the algorithm's output characteristics.
7. Type of Ground Truth Used for the Test Set
For the quantitative metrics (SNR, uniformity, resolution, contrast, structure measurement), the "ground truth" for comparison appears to be images without DeepRecon (NADR) as a baseline, or potentially direct measurements on those images.
For the qualitative assessment by radiologists, the ground truth was expert opinion/consensus by American Board of Radiology-certified physicians regarding clinical diagnosis quality.
8. Sample Size for the Training Set
The training set for DeepRecon consisted of data from 264 volunteers, yielding a total of 165,837 cases.
9. How the Ground Truth for the Training Set Was Established
For the training dataset, the ground truth was established by collecting multiple-averaged images with high-resolution and high SNR. These high-quality images were used as the reference against which the input images (generated by sequentially reducing the SNR and resolution of the ground-truth images) were trained. All data included for training underwent manual quality control.
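The degradation process described above (sequentially reducing the SNR and resolution of the high-quality references to create training inputs) can be sketched as follows. The Gaussian noise model and block-averaging downsampling are assumptions; the filing does not specify how SNR and resolution were actually reduced.

```python
import numpy as np

def degrade(reference: np.ndarray, noise_sigma: float, factor: int) -> np.ndarray:
    """Make a lower-SNR, lower-resolution training input from a high-quality
    reference: add Gaussian noise, then block-average (factor x factor)."""
    noisy = reference + np.random.default_rng(1).normal(0.0, noise_sigma, reference.shape)
    h, w = noisy.shape
    h, w = h - h % factor, w - w % factor  # crop to a multiple of factor
    return noisy[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# Stand-in for a high-SNR, high-resolution multiple-averaged reference image.
reference = np.random.default_rng(0).random((128, 128)) * 100.0
train_input = degrade(reference, noise_sigma=5.0, factor=2)
print(reference.shape, train_input.shape)  # (128, 128) (64, 64)
```

Each (degraded input, reference) pair then serves as one training example, with the reference acting as the ground truth the network learns to recover.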
(416 days)
Shanghai United Imaging Intelligence Co., Ltd.
uAI EasyTriage-Rib is a radiological computer-assisted triage and notification software device for analysis of CT chest images. The device is intended to assist hospital networks and trained radiologists in workflow triage by flagging and prioritizing trauma studies with suspected positive findings of multiple (3 or more) acute rib fractures.
uAI EasyTriage-Rib is a radiological computer-assisted triage and notification software device indicated for analysis of CT chest images. The device is intended to assist hospital networks and trained radiologists in workflow triage by flagging and prioritizing studies with suspected positive findings of multiple (3 or more) acute rib fractures. The device consists of the following two modules: (1) uAI EasyTriage-Rib Server; and (2) uAI EasyTriage-Rib Studylist Application, which provides the user interface in which notifications from the application are received.
The information provided describes the uAI EasyTriage-Rib device and its performance study to meet acceptance criteria for identifying multiple (3 or more) acute rib fractures in CT chest images.
Here's a breakdown of the requested information:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are implied by the reported performance metrics. The specific "acceptance criteria" are not explicitly stated as numerical targets in the provided text. However, a general statement is made: "The results show that it can detect rib fractures and reach the preset standard." Given the context of a 510(k) summary, the reported sensitivity, specificity, and AUC, along with a comparable time-to-notification to the predicate device, are the performance benchmarks that demonstrate achievement of that standard.
Performance Metric | Acceptance Criteria (Implied) | Reported Device Performance | Comments |
---|---|---|---|
Sensitivity | High | 92.7% (95% CI: 84.8%-97.3%) | Achieved high sensitivity as a crucial consideration for a time-critical condition. |
Specificity | Adequate | 84.7% (95% CI: 77.0%-90.7%) | Specificity was noted to be affected by the difficulty in distinguishing acute from chronic fractures, but considered acceptable given clinical relevance of reviewing chronic fractures. |
AUC | High | 0.939 (95% CI: 0.906, 0.972) | Indicates high discriminative power. |
Time-to-notification (Average) | Comparable to predicate device | 69.56 seconds | Comparable to predicate device (HealthVCF: 61.36 seconds), suggesting timely notifications. |
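The reported AUC can be illustrated with a minimal standalone computation: AUC equals the probability that a randomly chosen positive study receives a higher algorithm score than a randomly chosen negative one (the Mann-Whitney formulation, with ties counting half). The scores below are hypothetical.

```python
def auc_from_scores(pos_scores, neg_scores) -> float:
    """AUC via the Mann-Whitney U statistic: probability that a random
    positive case scores higher than a random negative case (ties = 0.5)."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos_scores for n in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical algorithm scores for fracture-positive and -negative studies.
pos = [0.92, 0.81, 0.88, 0.40, 0.95]
neg = [0.10, 0.35, 0.55, 0.20, 0.05]
print(auc_from_scores(pos, neg))  # 0.96
```

An AUC near 0.94, as reported, indicates that the algorithm ranks positive studies above negative ones the vast majority of the time, regardless of the specific operating threshold chosen for triage.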
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: 200 cases
- Data Provenance:
- Country of Origin: Multiple US clinical sites (explicitly stated).
- Retrospective or Prospective: Retrospective (explicitly stated).
3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts
This information is not explicitly stated in the provided text. The document mentions "trained radiologists" being involved in clinical decision-making but does not specify the number or qualifications of experts used to establish the ground truth for the test set.
4. Adjudication Method for the Test Set
This information is not explicitly stated in the provided text.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, If So, What Was the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance
A Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done with human readers. The study focused on the standalone performance of the AI algorithm and a comparison of its "time-to-notification" with a predicate device, not on how human readers improve with AI assistance.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
Yes, a standalone study was done. The reported sensitivity, specificity, and AUC are measures of the algorithm's performance in identifying the target condition without human intervention in the analysis. The device "uses an artificial intelligence algorithm to analyze images and highlight studies with suspected multiple (3 or more) acute rib fractures in a standalone application for study list prioritization or triage in parallel to ongoing standard of care."
7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.)
The document does not explicitly state the method used to establish the ground truth for the 200 test cases. It is often implied to be expert consensus by radiologists in such studies, but this is not confirmed in the text.
8. The Sample Size for the Training Set
The sample size for the training set is not provided in the text. The document only mentions that the deep learning algorithm was "trained on medical images."
9. How the Ground Truth for the Training Set Was Established
This information is not provided in the text.