Search Results
Found 6 results
510(k) Data Aggregation
(175 days)
uCT ATLAS Astound is a computed tomography x-ray system, which is intended to produce cross-sectional images of the whole body by computer reconstruction of x-ray transmission data taken at different angles and planes. uCT ATLAS Astound is applicable to head, whole body, cardiac, and vascular x-ray Computed Tomography.
uCT ATLAS Astound is intended to be used for low dose CT lung cancer screening for the early detection of lung nodules that may represent cancer. The screening must be performed within the established inclusion criteria of programs / protocols that have been approved and published by either a governmental body or professional medical society.
*Please refer to clinical literature, including the results of the National Lung Screening Trial (N Engl J Med 2011; 365:395-409) and subsequent literature, for further information.
uCT ATLAS is a computed tomography x-ray system, which is intended to produce cross-sectional images of the whole body by computer reconstruction of x-ray transmission data taken at different angles and planes. uCT ATLAS is applicable to head, whole body, cardiac, and vascular x-ray Computed Tomography.
uCT ATLAS has the capability to image a whole organ in a single rotation. Organs include, but are not limited to, the head, heart, liver, kidney, pancreas, and joints.
uCT ATLAS is intended to be used for low dose CT lung cancer screening for the early detection of lung nodules that may represent cancer. The screening must be performed within the established inclusion criteria of programs / protocols that have been approved and published by either a governmental body or professional medical society.
*Please refer to clinical literature, including the results of the National Lung Screening Trial (N Engl J Med 2011; 365:395-409) and subsequent literature, for further information.
uWS-CT-Dual Energy Analysis is a post-processing software package that accepts UIH CT images acquired using different tube voltages and/or tube currents of the same anatomical location. The various materials of an anatomical region of interest have different attenuation coefficients, which depend on the used energy. These differences provide information on the chemical composition of the scanned body materials and enable images to be generated at multiple energies within the available spectrum. uWS-CT-Dual Energy Analysis software combines images acquired with low and high energy spectra to visualize this information.
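To make the energy-dependence idea concrete, the sketch below shows a textbook-style two-material decomposition: given a voxel's CT numbers at a low- and a high-energy spectrum, solve a small linear system for the fractions of two assumed basis materials. This is a minimal illustration only; the attenuation values are invented placeholders and the vendor's actual dual-energy algorithm is not described here.

```python
import numpy as np

# Hypothetical HU "signatures" of two basis materials at the low- and
# high-kVp spectra (illustrative numbers, not measured values).
iodine_hu = np.array([400.0, 200.0])       # [low-kVp HU, high-kVp HU]
soft_tissue_hu = np.array([60.0, 50.0])

# Measured HU of one voxel at the two energies.
measured = np.array([160.0, 90.0])

# Solve measured = f_iodine * iodine_hu + f_tissue * soft_tissue_hu.
basis = np.column_stack([iodine_hu, soft_tissue_hu])
f_iodine, f_tissue = np.linalg.solve(basis, measured)
print(f"iodine-like fraction: {f_iodine:.3f}, soft-tissue-like fraction: {f_tissue:.3f}")

# A virtual image at another (assumed) energy could then be synthesized by
# re-mixing these fractions with that energy's basis-material HU values.
```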
The uCT ATLAS Astound with uWS-CT Dual Energy Analysis and uCT ATLAS with uWS-CT Dual Energy Analysis have the same intended use and the same indications for use as their most recently cleared versions (K231482). The reason for this submission is to support the following additional functions:
- CardioXphase (optimized)
- CardioBoost
- CardioCapture (optimized)
- AIIR
- Motion Freeze
- Ultra EFOV
The provided text describes a 510(k) premarket notification for a Computed Tomography X-ray System (uCT ATLAS Astound with uWS-CT-Dual Energy Analysis and uCT ATLAS with uWS-CT-Dual Energy Analysis). The submission focuses on additional software functions beyond what was previously cleared.
However, the document does not contain specific acceptance criteria, detailed study designs, or quantitative performance data to establish "proof" in the typical sense of a rigorous clinical trial with defined endpoints and statistical significance. Instead, it relies on demonstrating substantial equivalence to existing predicate devices.
The "acceptance criteria" appear to be implicit in the non-clinical and reader studies, aiming to show that the performance of the new features is "sufficient for diagnosis," "equal or better," or "can improve" compared to a baseline or predicate. No explicit numerical thresholds for metrics like sensitivity, specificity, accuracy, or effect sizes for reader improvement are provided.
Here's an analysis based on the information provided, highlighting what is present and what is missing concerning acceptance criteria and study details:
Overview of Device Performance and Acceptance Criteria
The submission does not explicitly define acceptance criteria in terms of numerical thresholds for performance metrics. Instead, it describes a "bench test" and "reader study" approach to demonstrate that the new functions do not raise new safety and effectiveness concerns and provide an equivalent or improved performance compared to the predicate/reference devices or established techniques.
The implied "acceptance criteria" are qualitative, such as:
- "passed the basic general IQ test which satisfied the requirement of IEC 61223-3-5."
- "showed better LCD comparing with FBP..."
- "showed better noise comparing with FBP."
- "showed better spatial resolution comparing with FBP..."
- "all indicators have met the verification criteria and have passed the verification." (for CardioXphase)
- "can reduce head motion artifacts." (for Motion Freeze)
- "can improve the CT number..." (for Ultra EFOV)
- "images are sufficient for diagnosis and the image quality... is equal or better than..." (for various reader studies)
- "is helpful for both artifact suppression and clinical diagnosis." (for Motion Freeze reader study)
- "can improve the accuracy of image CT numbers..." (for Ultra EFOV reader study)
- "conclude the effectiveness of CardioCapture function for reducing cardiac motion artifacts as expected."
Table of Acceptance Criteria (Implied) and Reported Device Performance
Since explicit, quantitative acceptance criteria are not provided, this table will rephrase the reported performance as the observed outcome against the implied objective.
Software Function | Implied Acceptance Criteria (Objective) | Reported Device Performance |
---|---|---|
CardioBoost | Bench Test: Meet IEC 61223-3-5 requirements; show better LCD, noise, and spatial resolution than FBP; maintain basic general IQ. Reader Study: Images sufficient for diagnosis; image quality equal or better than KARL 3D. | Bench Test: Passed the basic general IQ test (IEC 61223-3-5 satisfied); showed better LCD, noise, and spatial resolution compared to FBP at the same scanning dose. Reader Study: Confirmed CardioBoost images are sufficient for diagnosis and image quality is equal or better than KARL 3D over all evaluation aspects. |
AIIR | Bench Test: Meet IEC 61223-3-5 requirements; show better LCD, noise, and spatial resolution than FBP; maintain basic general IQ. Reader Study: Images sufficient for diagnosis; image quality equal or better than FBP. | Bench Test: Passed the basic general IQ test (IEC 61223-3-5 satisfied); showed better LCD, noise, and spatial resolution compared to FBP at the same scanning dose. Reader Study: Confirmed AIIR images are sufficient for diagnosis and image quality is equal or better than FBP over all evaluation aspects. |
CardioXphase | Bench Test (AI module): Quantitative assessment metrics (DICE, Precision, Recall) for heart mask and coronary artery mask extraction meet the verification criteria. | Bench Test (AI module): All quantitative indicators (DICE, Precision, Recall) for the heart mask and coronary artery mask extracted by the new AI module met the verification criteria and passed verification. |
Motion Freeze | Bench Test: Demonstrate effectiveness in reducing head motion artifacts. Reader Study: Images helpful for artifact suppression and clinical diagnosis. | Bench Test: Showed that Motion Freeze can reduce head motion artifacts. Reader Study: Confirmed Motion Freeze is helpful for both artifact suppression and clinical diagnosis. |
Ultra EFOV | Bench Test: Demonstrate effectiveness in improving CT value accuracy when the scanned object exceeds the scan-FOV, compared to EFOV. Reader Study: Images confirm improved accuracy of image CT numbers and homogeneity of the same tissue when the scanned object exceeds the scan-FOV. | Bench Test: Showed that Ultra EFOV can improve the CT number in cases where the scanned object exceeds the scan-FOV, compared to EFOV. Reader Study: Confirmed that images with Ultra EFOV can improve the accuracy of image CT numbers and homogeneity of the same tissue in cases where the scanned object exceeds the CT field of view. |
CardioCapture | Reader Study: Effectiveness in reducing cardiac motion artifacts as expected, with clear/continuous contours, tolerable motion artifacts, and sufficient (>=50%) diagnostic coronary segments. | Reader Study: Concluded the effectiveness of the CardioCapture function for reducing cardiac motion artifacts as expected, based on evaluation of clear/continuous contours, tolerable motion artifacts, and the number of diagnostic coronary segments (reaching at least 50% of total coronary artery segments). (Specific to AI motion correction in uCT ATLAS.) |
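For context on the CardioXphase verification metrics cited above (DICE, Precision, Recall), the sketch below shows how such mask-overlap metrics are conventionally computed for a segmentation output against an annotated reference. The toy masks are invented for illustration; the submission does not report the actual thresholds or data.

```python
import numpy as np

def overlap_metrics(pred_mask, gt_mask):
    """Dice, precision and recall between two binary masks."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    dice = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0
    precision = tp / (tp + fp) if (tp + fp) else 1.0
    recall = tp / (tp + fn) if (tp + fn) else 1.0
    return dice, precision, recall

# Toy 2D "mask" example; a real heart/coronary mask would be a 3D CT volume.
gt = np.zeros((8, 8), dtype=bool)
gt[2:6, 2:6] = True
pred = np.zeros((8, 8), dtype=bool)
pred[3:7, 2:6] = True
print(overlap_metrics(pred, gt))  # (0.75, 0.75, 0.75) for these toy masks
```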
Study Details
1. Sample Size Used for the Test Set and Data Provenance:
- Test Set Sample Size: The document does not specify the sample sizes (number of cases/studies) used for either the bench tests or the reader studies.
- Data Provenance: Not explicitly stated (e.g., country of origin, whether retrospective or prospective). The use of "clinical images" implies real patient data, but details are missing.
2. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts:
- Number of Experts: Not specified. The document mentions "readers" (plural) for the reader studies but does not state how many participated.
- Qualifications of Experts: Not specified. No details are given about their specialty (e.g., cardiologist, radiologist), experience level, or board certification.
3. Adjudication Method for the Test Set:
- Adjudication Method: Not specified. For the reader studies, it only states that images "were shown to the readers to perform a five-point scale evaluation" or "5-point scale evaluation." There's no mention of how discrepancies or disagreements among readers were handled or if a consensus ground truth was established by independent experts (e.g., 2+1, 3+1).
4. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, What Was the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance:
- MRMC Study: Reader studies were conducted comparing images reconstructed with the new AI functions (e.g., CardioBoost) to those reconstructed with traditional methods (e.g., KARL 3D, FBP). These appear to be MRMC studies in structure, as multiple readers evaluate multiple cases.
- Effect Size: No quantitative effect sizes are provided. The results are qualitative: "equal or better," "sufficient for diagnosis," "helpful." There are no reported metrics like AUC improvement, sensitivity/specificity gains, or statistical significance of differences in reader performance with and without AI assistance.
5. If a Standalone (i.e., Algorithm Only Without Human-in-the-Loop Performance) Study Was Done:
- Standalone Performance: The "bench tests" for CardioBoost, AIIR, Motion Freeze, and Ultra EFOV evaluate the algorithms' image quality metrics (IQ, LCD, noise, spatial resolution, CT value accuracy, artifact reduction) independently of human interpretation. For CardioXphase, the evaluation of the AI module's extraction accuracy (DICE, Precision, Recall) is also a standalone assessment. These can be considered standalone performance evaluations for the image reconstruction/processing algorithms.
6. The Type of Ground Truth Used (Expert Consensus, Pathology, Outcomes Data, etc.):
- Ground Truth: For the image quality bench tests (CardioBoost, AIIR, Motion Freeze, Ultra EFOV), the "ground truth" is likely defined by physical phantom measurements and adherence to engineering specifications/standards (e.g., IEC 61223-3-5, CTIQ White Paper, AAPM's report).
- For CardioXphase, the ground truth for image segmentation accuracy (heart and coronary artery masks) was "annotated results," which typically implies expert manual annotation on imaging data.
- For the reader studies, the "ground truth" is based on the subjective evaluation of "image quality aspects" by the readers, rather than an objective, clinically validated ground truth for a diagnostic endpoint (e.g., presence/absence of disease confirmed by biopsy or follow-up). The goal was to demonstrate that the image quality generated by the new features is non-inferior or improved for diagnostic purposes.
7. The Sample Size for the Training Set:
- Training Set Sample Size: The document mentions "datasets augmentation and deep learning network optimization" for CardioBoost and AIIR, and "introduction of a new deep learning based coronaries detection algorithm" for CardioXphase, and "introduces a deep learning network" for Ultra EFOV. However, the specific size of the training datasets (number of images/cases) is not provided.
8. How the Ground Truth for the Training Set Was Established:
- Training Set Ground Truth: Not explicitly stated. For deep learning models, training data ground truth is typically established by expert annotation or labels derived from existing clinical reports or imaging features. Given the context of image reconstruction and enhancement, it likely involves high-quality, potentially expert-annotated, imaging data. For instance, for CardioXphase, the ground truth for training the coronary artery detection algorithm would involve expert-labeled coronary anatomy. For features like CardioBoost and AIIR, which optimize image reconstruction, the ground truth for training might involve pairs of raw data and ideal reconstructed images, or image quality metrics derived from expert evaluations on initial datasets.
In summary, the 510(k) submission successfully demonstrates "substantial equivalence" based on qualitative assessments and performance relative to known methods. However, for a detailed "proof" with explicit acceptance criteria and quantitative performance metrics, further information beyond what is presented in this FDA clearance letter summary would be needed. This is characteristic of many 510(k) submissions, which often rely on demonstrating safety and effectiveness relative to existing predicates rather than establishing novel clinical efficacy through large-scale, quantitatively defined trials.
(263 days)
The uCT 550 is a computed tomography X-ray system intended to produce cross-sectional images of the body by computer reconstruction of X-ray transmission data taken at different angles and planes, and is indicated for the whole body, including head, neck, cardiac and vascular.
The Computed Tomography X-ray system, uCT 550, is intended to produce cross-sectional images of the patient by computer reconstruction of X-ray transmission data taken at different angles and planes. These images may be obtained either with or without contrast. The proposed device consists of a CT scan gantry (including high voltage generator, X-ray tube, collimator and detectors), patient table, operating console (including console PC, monitor and Control Box), power supply cabinet (PSC), vital signal monitoring gated control unit (VSM), 3D camera, system software, and accessories.
The provided document describes the uCT 550, a Computed Tomography (CT) X-ray system, and specifically details the performance evaluation of its DELTA (Deep Recon) algorithm.
Here's an analysis of the acceptance criteria and the study that proves the device meets them:
1. Table of Acceptance Criteria and Reported Device Performance (DELTA Algorithm)
Acceptance Criteria (Bench Testing) | Reported Device Performance (Bench Testing) |
---|---|
DELTA should pass the General IQ test according to IEC 61223-3-5. | DELTA passed the basic general IQ test, satisfying IEC 61223-3-5 requirements. |
Compared with FBP, DELTA should show better LCD and noise at the same scanning dose, and should allow a reduced scanning dose while keeping similar LCD. | DELTA showed better LCD and noise than FBP at the same scanning dose, and can reduce the scanning dose compared with FBP at the same LCD. |
Compared with FBP, DELTA should show better high-contrast spatial resolution. | DELTA showed better spatial resolution than FBP at the same scanning dose. |

Acceptance Criteria (Clinical Evaluation) | Reported Device Performance (Clinical Evaluation) |
---|---|
The mean score of the simulated low-dose DELTA image is equal to or higher than the mean score of normal-dose FBP. | Accepted (results indicate the acceptance criterion was met). |
The mean score of DELTA is better than the mean score of FBP at the same dose. | Accepted (results indicate the acceptance criterion was met). |
DELTA images should satisfy diagnosis requirements. | DELTA images satisfy diagnosis requirements. |

Overall conclusion: All study results show that the acceptance criteria were met.
2. Sample Size Used for the Test Set and Data Provenance
- Clinical Evaluation Test Set:
- Size: 30 cases (10 for abdomen, 10 for chest, 10 for head).
- Data Provenance: Not explicitly stated as retrospective or prospective, nor the country of origin. It mentions "gathering from different scanning doses and a wide range of population" for the overall clinical dataset.
- Clinical Evaluation Validation Set (used to inform the test set, but separate):
- Size: 14 cases (5 for abdomen, 4 for chest, 5 for head).
- Data Provenance: Same as above, not explicitly stated.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- The clinical evaluation mentions that "different board-certified U.S. radiologists" scored the image quality.
- The exact number of radiologists is not specified, but the qualification is "board-certified U.S. radiologists." No experience level (e.g., "10 years of experience") is provided.
4. Adjudication Method for the Test Set
- The document states that the radiologists "perform a five-point scale evaluation of both image sets on three image quality aspects: noise, image structure fidelity and overall image quality."
- It does not specify an adjudication method such as 2+1 or 3+1 for resolving discrepancies among readers. It implies that individual scores were collected and mean scores were calculated.
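To illustrate how such per-reader five-point scores could be reduced to the mean-score comparison described in the acceptance criteria, here is a minimal sketch; the scores, reader count, and the optional Wilcoxon signed-rank test are assumptions for illustration, since neither the raw data nor the statistical method is reported in the summary.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical 5-point scores: rows = cases, columns = readers.
fbp_scores = np.array([[3, 3, 4], [4, 3, 3], [3, 4, 4], [2, 3, 3]])
delta_scores = np.array([[4, 4, 4], [4, 5, 5], [4, 4, 4], [4, 3, 4]])

# Criterion from the summary: mean(DELTA) >= mean(FBP).
fbp_mean, delta_mean = fbp_scores.mean(), delta_scores.mean()
print(f"FBP mean = {fbp_mean:.2f}, DELTA mean = {delta_mean:.2f}")
print("criterion met:", delta_mean >= fbp_mean)

# One possible (assumed) significance check on per-case mean scores.
stat, p = wilcoxon(delta_scores.mean(axis=1), fbp_scores.mean(axis=1))
print(f"Wilcoxon statistic = {stat}, p = {p:.3f}")
```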
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- A reader study was performed where "different board-certified U.S. radiologists" evaluated image quality. This is inherently a multi-reader study.
- The study compared DELTA images (simulated low dose or same dose) against FBP (normal dose or same dose) to assess improvement in image quality aspects (noise, image structure fidelity, overall image quality).
- Effect Size: The document states that the "mean score of simulated low dose DELTA image is equal or higher than the mean score of normal dose FBP" and "The mean score of DELTA is better than the mean score of FBP under same dose." While it indicates a positive effect (improvement or equivalence with dose reduction), a specific quantifiable effect size (e.g., AUC difference, percentage improvement) is not provided in this summary.
6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study
- Yes, a standalone study was performed. The "Bench Testing" section (8.1) directly evaluates the DELTA algorithm's image quality metrics (LCD, noise, spatial resolution, HU value, thickness section) using phantoms, comparing it quantitatively to FBP. This is an algorithm-only evaluation.
7. Type of Ground Truth Used (DELTA Algorithm)
- Bench Testing: Phantom-based measurements (MITA CCT189, MITA CCT191, Catphan 700, water phantom) are used, effectively creating a known and reproducible "ground truth" for physical image quality metrics.
- Clinical Evaluation: Expert consensus/opinion from "board-certified U.S. radiologists" through a five-point scale evaluation of image quality aspects (noise, image structure fidelity, overall image quality). It also states that "DELTA image should satisfy diagnosis requirements," implying that clinical utility against diagnostic benchmarks is a component of the ground truth.
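As a rough illustration of the phantom-based bench metrics mentioned above (image noise and low-contrast detectability), the sketch below measures ROI standard deviation and a contrast-to-noise ratio on a synthetic uniform phantom with one low-contrast insert. The phantom values and ROI placement are invented; this is not the MITA/Catphan protocol used in the actual testing.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "phantom" slice: 0 HU background with Gaussian noise and a
# 10 HU low-contrast disk insert (all values are illustrative).
slice_hu = rng.normal(loc=0.0, scale=8.0, size=(256, 256))
yy, xx = np.mgrid[:256, :256]
insert = (yy - 128) ** 2 + (xx - 128) ** 2 < 20 ** 2
slice_hu[insert] += 10.0

roi_insert = slice_hu[118:138, 118:138]   # ROI fully inside the insert
roi_bg = slice_hu[30:50, 30:50]           # same-size background ROI

noise = roi_bg.std()                              # image noise (HU)
contrast = roi_insert.mean() - roi_bg.mean()      # low-contrast signal (HU)
print(f"noise = {noise:.2f} HU, contrast = {contrast:.2f} HU, CNR = {contrast / noise:.2f}")
```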
8. Sample Size for the Training Set (DELTA Algorithm)
- The document states for the DELTA algorithm: "The dataset was increased to 127 cases compared with the original dataset." This refers to the dataset used for the algorithm's development/training, indicating a training set size of 127 cases.
9. How the Ground Truth for the Training Set Was Established (DELTA Algorithm)
- The document does not explicitly describe how the ground truth for the training set was established. It only states that the dataset size was increased. For deep learning algorithms, ground truth for training data typically involves expert annotations or meticulously prepared reference images, often by comparing with a higher-quality imaging standard or expert-labeled findings. However, this specific detail is not present in the provided text.
(73 days)
The uMI Panvivo is a PET/CT system designed for providing anatomical and functional images. The PET provides the distribution of specific radiopharmaceuticals. CT provides diagnostic tomographic anatomical information as well as photon attenuation information for the scanned region. PET and CT scans can be performed separately. The system is intended for assessing metabolic (molecular) and physiologic functions in various parts of the body. When used with radiopharmaceuticals approved by the regulatory authority in the country of use, the uMI Panvivo system generates images depicting the distribution of these radiopharmaceuticals. The images produced by the uMI Panvivo are intended for analysis and interpretation by qualified medical professionals. They can serve as an aid in detection, evaluation, diagnosis, staging, re-staging, monitoring, and/or follow-up of abnormalities, lesions, tumors, inflammation, infection, disorders, and/or diseases, in several clinical areas such as oncology, infection and inflammation, and neurology. The images produced by the system can also be used by the physician to aid in radiotherapy treatment planning and interventional radiology procedures.
The CT system can be used for low dose CT lung cancer screening for the early detection of lung nodules that may represent cancer. The screening must be performed within the established inclusion criteria of programs / protocols that have been approved and published by either a governmental body or professional medical society.*
*Please refer to clinical literature, including the results of the National Lung Screening Trial (N Engl J Med 2011; 365:395-409) and subsequent literature, for further information.
The proposed device uMI Panvivo combines a 295 mm axial field of view (FOV) PET and 160-slice CT system to provide high quality functional and anatomical images, fast PET/CT imaging and better patient experience. The system includes PET system, CT system, patient table, power distribution unit, control and reconstruction system (host, monitor, and reconstruction computer, system software, reconstruction software), vital signal module and other accessories.
The PET system features the following specifications and technologies:
- 700 mm patient bore size.
- LYSO detector with an axial field of view (AFOV) of 295 mm and the corresponding imaging performance.
- 250 kg maximum table load capacity, allowing flexible positioning and access for all patients.
- HYPER Iterative (cleared in K193241) uses a regularized iterative reconstruction algorithm, which allows for more iterations while keeping the image noise at an acceptable level by incorporating a noise penalty term into the objective function.
- AIEFOV is an extended field of view algorithm incorporating extrapolation and deep learning (DL). In this algorithm, projection-domain extrapolation ensures normal processing of the convolution filter within the scan field of view to reduce truncation artifacts, while DL technology using polar coordinate conversion in the extended region enhances the processing efficiency of the deep networks and accelerates training and testing. Overall, AIEFOV does not affect CT value accuracy inside the SFOV and increases the accuracy of CT values in the extended region (see the sketch after this list).
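A minimal sketch of the polar coordinate conversion idea mentioned for AIEFOV: resample the annular region outside the scan FOV onto a regular radius-by-angle grid, which a network can then process efficiently. The geometry, radii, and interpolation choice are assumptions for illustration; the actual AIEFOV implementation is not described in this summary.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def annulus_to_polar(image, center, r_inner, r_outer, n_r=64, n_theta=360):
    """Resample the ring between r_inner and r_outer onto a (radius, angle) grid."""
    radii = np.linspace(r_inner, r_outer, n_r)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    r_grid, t_grid = np.meshgrid(radii, thetas, indexing="ij")
    rows = center[0] + r_grid * np.sin(t_grid)
    cols = center[1] + r_grid * np.cos(t_grid)
    # Bilinear sampling of the image at the polar grid positions.
    return map_coordinates(image, [rows, cols], order=1, mode="nearest")

# Toy example: 512x512 image whose scan FOV is a disk of radius 200 pixels.
img = np.zeros((512, 512), dtype=np.float32)
polar_ring = annulus_to_polar(img, center=(256.0, 256.0), r_inner=200.0, r_outer=255.0)
print(polar_ring.shape)  # (64, 360): the extended region unrolled as radius x angle
```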
The control and reconstruction system contains image acquisition and reconstruction, image display and post processing, data and patient management, CT dose display, networking, filming, etc.
This document is an FDA 510(k) clearance letter and summary for the uMI Panvivo PET/CT system. It does not contain specific acceptance criteria or a dedicated study section detailing how the device meets such criteria in the manner typically found for an AI/ML medical device.
The "Performance Data" section primarily focuses on non-clinical testing for compliance with standards (electrical safety, EMC, software, biocompatibility, risk management) and describes performance evaluations for specific features (HYPER Iterative and AI EFOV) rather than a comprehensive clinical study to prove general acceptance criteria.
However, I can extract the relevant information that is present and highlight what is missing.
1. Table of Acceptance Criteria and Reported Device Performance
Based on the provided text, specific quantitative acceptance criteria for image quality or clinical performance are not explicitly stated in a table format, nor are explicit numerical performance values against such criteria. The document states:
"Image performance test was conducted for uMI Panvivo to verify that the proposed device met all design specifications as it is Substantially Equivalent (SE) to the predicate device."
For the AI-specific features, it notes:
Feature | Indication/Description | Performance (as reported) |
---|---|---|
HYPER Iterative | Uses a regularized iterative reconstruction algorithm, which allows for more iterations while keeping the image noise at an acceptable level by incorporating a noise penalty term into the objective function. | Performance evaluation report for HYPER Iterative. "Sample clinical images for HYPER Iterative and AI EFOV were reviewed by U.S. board-certified radiologist. It was shown that the proposed device can generate images as intended and the image quality is sufficient for diagnostic use." |
AIEFOV (AI-based) | An extended field of view algorithm incorporating extrapolation and Deep Learning (DL). Project domain extrapolation ensures normal processing in convolution filter in scan field of view to reduce truncation artifact. DL technology using polar coordinate conversion in the extending region can enhance processing efficiency of deep networks and accelerate training test processing. Overall, AIEFOV does not affect CT values accuracy inside SFOV, and also increases the accuracy of CT values in the extended region. | Performance evaluation report for AI EFOV. "Sample clinical images for HYPER Iterative and AI EFOV were reviewed by U.S. board-certified radiologist. It was shown that the proposed device can generate images as intended and the image quality is sufficient for diagnostic use." "AIEFOV does not affect the CT values accuracy inside of SFOV, and also increases the accuracy of CT values in the extended region." |
Missing Information Regarding Acceptance Criteria and Quantified Performance:
The document does not provide specific quantitative acceptance criteria for image quality (e.g., contrast-to-noise ratio, spatial resolution, lesion detectability thresholds) or clinical outcomes. It relies on the qualitative statement that "image quality is sufficient for diagnostic use" and "met all design specifications" in comparison to a predicate device.
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: Not explicitly stated. The document mentions "Sample clinical images for HYPER Iterative and AI EFOV were reviewed." The exact number of images, cases, or patients in this "sample" is not provided.
- Data Provenance: Not explicitly stated. The document does not mention the country of origin of the data or whether the data was retrospective or prospective.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- Number of Experts: "a U.S. board-certified radiologist". This implies one radiologist, although it's possible it refers to a group and uses "radiologist" generically.
- Qualifications of Experts: "U.S. board-certified radiologist". No information on years of experience or specialization is provided.
4. Adjudication Method for the Test Set
- Adjudication Method: Not applicable or not described. With a single "U.S. board-certified radiologist" reviewing images, an adjudication method (like 2+1 or 3+1 for consensus) would not be performed. The radiologist's assessment served as the evaluation.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- MRMC Study: No, an MRMC comparative effectiveness study is not explicitly mentioned as having been done or used to demonstrate performance. The document describes a review by a single U.S. board-certified radiologist. Therefore, there is no information on the effect size of how much human readers improve with AI vs. without AI assistance.
6. Standalone Performance Study
- Standalone Performance Study: The document implies a form of standalone performance evaluation for the AI EFOV and HYPER Iterative features through "Performance evaluation report for HYPER Iterative and AI EFOV" and the review by a radiologist. However, this is presented as an evaluation of image quality generated by the device, not necessarily a quantitative standalone diagnostic performance study (e.g., sensitivity, specificity) of the AI algorithm itself in a diagnostic task. The AI EFOV is described as an algorithm that improves image quality, specifically accuracy of CT values in the extended region and reduction of truncation artifacts. The evaluation focuses on whether the generated images are "sufficient for diagnostic use" and if CT values outside the SFOV are more accurate.
7. Type of Ground Truth Used
- Type of Ground Truth: The ground truth for the review of "sample clinical images" appears to be the expert opinion of the "U.S. board-certified radiologist" that the images were "sufficient for diagnostic use." For the AIEFOV's claim of increased accuracy of CT values in the extended region, the method for establishing this accuracy (e.g., comparison to a phantom with known values or a gold standard imaging technique) is not detailed.
8. Sample Size for the Training Set
- Training Set Sample Size: Not explicitly stated. The document mentions "Deep Learning(DL) technology" for AIEFOV and says it can "accelerate training test processing," implying a training phase. However, the size of the dataset used for training the DL model is not provided.
9. How the Ground Truth for the Training Set Was Established
- Training Set Ground Truth Establishment: Not explicitly stated. While DL is mentioned, the methodology for creating the ground truth used to train the DL model for the AIEFOV feature is not described in this document.
(232 days)
uOmnispace.CT is software for viewing, manipulating, evaluating and analyzing medical images. It supports interpretation and evaluation of examinations within healthcare institutions. It has the following additions:
- The uOmnispace.CT Colon Analysis application is intended to provide the user a tool to enable easy visualization and efficient evaluation of CT volume data sets of the colon.
- The uOmnispace.CT Dental application is intended to provide the user a tool to reconstruct panoramic and paraxial views of the jaw.
- The uOmnispace.CT Lung Density Analysis application is intended to segment the lungs, lobes, and airway, providing the user quantitative parameters and structure information to evaluate the lung and airway.
- The uOmnispace.CT Lung Nodule application is intended to provide the user a tool for the review and analysis of thoracic CT images, providing quantitative and characterizing information about nodules in the lung in a single study, or over the time course of several thoracic studies.
- The uOmnispace.CT Vessel Analysis application is intended to provide a tool for viewing and evaluating CT vascular images.
- The uOmnispace.CT Brain Perfusion application is intended to calculate parameters such as CBV, CBF, etc., in order to analyze functional blood flow information about a region of interest (ROI) in the brain.
- The uOmnispace.CT Heart application is intended to segment the heart and extract the coronary artery. It also provides analysis of vascular stenosis, plaque and heart function.
- The uOmnispace.CT Calcium Scoring application is intended to identify calcifications and calculate the calcium score.
- The uOmnispace.CT Dynamic Analysis application is intended to support visualization of CT datasets over time with the 3D/4D display modes.
- The uOmnispace.CT Bone Structure Analysis application is intended to provide visualization and labels for the ribs and spine, and to support a batch function for the intervertebral disk.
- The uOmnispace.CT Liver Evaluation application is intended to provide processing and visualization for liver segmentation and vessel extraction. It also provides a tool for the user to perform liver separation and residual liver segments evaluation.
- The uOmnispace.CT Dual Energy application is a post-processing software package that accepts UIH CT images acquired using different tube voltages and/or tube currents of the same anatomical location. It is intended to provide information on the chemical composition of the scanned body materials and/or contrast agents. Additionally, it enables images to be generated at multiple energies within the available spectrum.
- The uOmnispace.CT Cardiovascular Combined Analysis is an image analysis software package for evaluating contrast-enhanced CT images. It is intended to analyze vascular and cardiac structures. It can be used for qualitative and quantitative analysis of head-neck, abdomen, and multi-body-part combined CT data, as well as TAVR (Transcatheter Aortic Valve Replacement) CT data, as input for the planning of cardiovascular procedures.
The uOmnispace.CT is post-processing software based on the uOmnispace platform for viewing, manipulating, evaluating and analyzing medical images; it can run alone or together with other advanced, commercially cleared applications.
The provided text describes the performance data for three AI/ML algorithms integrated into the uOmnispace.CT software: Spine Labeling Algorithm, Rib Labeling Algorithm, and TAVR Analysis Algorithm.
Here's a breakdown of the acceptance criteria and study details for each:
1. Spine Labeling Algorithm
Acceptance Criteria Table:
Validation Type | Acceptance Criteria | Reported Device Performance | Meets Criteria? |
---|---|---|---|
Score based on ground truth | The average score of the proposed device results is higher than 4 points. | 5.0 points | Yes |
Study Proving Device Meets Acceptance Criteria:
- Sample Size for Test Set: 120 subjects.
- Data Provenance: Retrospective, with data collected from five major CT manufacturers (GE, Philips, Siemens, Toshiba, UIH). Clinical subgroups included U.S. (90 subjects) and Asia (30 subjects) for ethnicity.
- Number of Experts for Ground Truth: At least two licensed physicians with U.S. credentials.
- Qualifications of Experts: Licensed physicians with U.S. credentials.
- Adjudication Method: Ground truth annotations were made by "well-trained annotators" using an interactive tool to set annotation points and assign anatomical labels. All ground truth was finally evaluated by two licensed physicians with U.S. credentials. This suggests a post-annotation review/adjudication by experts.
- MRMC Comparative Effectiveness Study: No, this was a standalone performance evaluation of the algorithm against established ground truth.
- Standalone Performance: Yes, the performance of the algorithm itself was evaluated based on a scoring system against ground truth.
- Type of Ground Truth Used: Expert consensus (annotators + review by licensed physicians).
- Sample Size for Training Set: Not specified, but stated that "The training data used for the training of the spine labeling algorithm is independent of the data used to test the algorithm."
- How Ground Truth for Training Set was Established: Not specified beyond the implication that a ground truth process was followed for training data as well.
2. Rib Labeling Algorithm
Acceptance Criteria Table:
Validation Type | Acceptance Criteria | Reported Device Performance | Meets Criteria? |
---|---|---|---|
Score based on ground truth | The average score of the proposed device results is higher than 4 points. | 5.0 points | Yes |
Study Proving Device Meets Acceptance Criteria:
- Sample Size for Test Set: 120 subjects.
- Data Provenance: Retrospective, with data collected from five major CT manufacturers (GE, Philips, Siemens, Toshiba, UIH). Clinical subgroups included U.S. (80 subjects) and Asia (40 subjects) for ethnicity.
- Number of Experts for Ground Truth: At least two licensed physicians with U.S. credentials.
- Qualifications of Experts: Licensed physicians with U.S. credentials.
- Adjudication Method: Ground truth annotations were made by "well-trained annotators" using an interactive tool to generate initial rib masks, which were then refined, and anatomical labels assigned. After the first round, annotators "checked each other's annotation." Finally, all ground truth was evaluated by two licensed physicians with U.S. credentials. This indicates a multi-step adjudication process.
- MRMC Comparative Effectiveness Study: No, this was a standalone performance evaluation of the algorithm against established ground truth.
- Standalone Performance: Yes, the performance of the algorithm itself was evaluated based on a scoring system against ground truth.
- Type of Ground Truth Used: Expert consensus (annotators + cross-checking + review by licensed physicians).
- Sample Size for Training Set: Not specified, but stated that "The training data used for the training of the rib labeling algorithm is independent of the data used to test the algorithm."
- How Ground Truth for Training Set was Established: Not specified beyond the implication that a ground truth process was followed for training data as well.
3. TAVR Analysis Algorithm
Acceptance Criteria Table:
Validation Type | Acceptance Criteria | Reported Device Performance | Meets Criteria? |
---|---|---|---|
Verify the consistency with ground truth (Mean Landmark Error) | The mean landmark error between the proposed device results and ground truth is less than the threshold, 1 mm. | 0.86 mm | Yes |
Subjective Scoring of doctors with U.S. professional qualifications | The average score of the evaluation criteria is higher than 2. | 3 points | Yes |
Study Proving Device Meets Acceptance Criteria:
- Sample Size for Test Set: 60 subjects.
- Data Provenance: Retrospective. Clinical subgroups included Asia (25 subjects) and U.S. (35 subjects) for ethnicity, including data from U.S. Facility 1 (25 subjects) and U.S. Facility 2 (10 subjects).
- Number of Experts for Ground Truth: At least two licensed physicians with U.S. credentials for the final evaluation of the ground truth.
- Qualifications of Experts: Licensed physicians with U.S. credentials (specifically, "two MD with the American Board of Radiology Qualification" for the subjective scoring).
- Adjudication Method: Ground truth annotations were made by "well-trained annotators." After the first round of annotation, they "checked each other's annotation." Finally, all ground truth was evaluated by two licensed physicians with U.S. credentials. This indicates a multi-step adjudication process.
- MRMC Comparative Effectiveness Study: No, this was a standalone performance evaluation of the algorithm against established ground truth and subjective expert scoring.
- Standalone Performance: Yes, the performance of the algorithm itself was evaluated based on landmark error and subjective expert scoring.
- Type of Ground Truth Used: Expert consensus (annotators + cross-checking + review by licensed physicians).
- Sample Size for Training Set: Not specified, but stated that "The training data used for the training of the post-processing algorithm is independent of the data used to test the algorithm."
- How Ground Truth for Training Set was Established: Not specified beyond the implication that a ground truth process was followed for training data as well.
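For reference, the "mean landmark error" criterion reported for the TAVR algorithm above is, in generic terms, a mean Euclidean distance between predicted and annotated landmark positions. A minimal sketch with invented coordinates (not data from the submission):

```python
import numpy as np

# Hypothetical 3D landmark coordinates in millimeters:
# rows = landmarks, columns = (x, y, z).
ground_truth_mm = np.array([[12.0, 30.5, 88.2],
                            [15.3, 28.1, 90.0],
                            [10.8, 33.0, 86.5]])
predicted_mm = np.array([[12.5, 30.2, 88.7],
                         [15.1, 28.6, 89.6],
                         [11.4, 32.6, 86.9]])

# Per-landmark Euclidean error, then the mean across landmarks.
errors = np.linalg.norm(predicted_mm - ground_truth_mm, axis=1)
print("per-landmark errors (mm):", np.round(errors, 2))
print(f"mean landmark error: {errors.mean():.2f} mm (criterion in the summary: < 1 mm)")
```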
(216 days)
The uEXPLORER is a diagnostic imaging system that combines two existing imaging modalities PET and CT. The quantitative distribution information of PET radiopharmaceuticals within the patient body measured by PET can assist healthcare providers in assessing metabolic and physiological functions. CT provides diagnostic tomographic anatomical information as well as photon attenuation for the scanned region. The accurate registration and fusion of PET and CT images provides anatomical reference for the findings in the PET images.
This system is intended to be operated by qualified healthcare professionals to assist in the detection, diagnosis, staging, restaging, treatment planning and treatment response evaluation for diseases, inflammation, infection and disorders in, but not limited to oncology, cardiology and neurology. The system maintains independent functionality of the CT device, allowing for single modality CT diagnostic imaging.
This CT system can be used for low dose CT lung cancer screening for the early detection of lung nodules that may represent cancer. The screening must be performed within the established inclusion criteria of programs / protocols that have been approved and published by either a governmental body or professional medical society.*
*Please refer to clinical literature, including the results of the National Lung Screening Trial (N Engl J Med 2011; 365:395-409) and subsequent literature, for further information.
The proposed device uEXPLORER combines a 194 cm axial field of view (AFOV) PET and multi-slice CT system to provide high quality functional and anatomical images, fast PET/CT imaging and better patient experience. The system includes PET gantry, CT gantry, patient table, power supply cabinet, console and reconstruction system, chiller, vital signal module.
The uEXPLORER has been previously cleared by FDA via K182938. The main modifications to the uEXPLORER (K182938) in this submission are the addition of HYPER Iterative, HYPER DLR, Digital Gating, remote assistance, and CT system modifications.
Details about the modifications are listed as below:
- HYPER Iterative (cleared in K193241) uses a regularized iterative reconstruction algorithm, which allows for more iterations while keeping the image noise at an acceptable level by incorporating a noise penalty term into the objective function (a generic sketch of this kind of penalized objective follows this list).
- HYPER DLR (cleared in K193210) uses a deep learning technique to produce a better SNR (signal-to-noise ratio).
- Digital Gating (cleared in K193241) uses a motion correction method to provide better alternatives for reducing motion effects without sacrificing image statistics.
- Remote assistance.
- PET recon matrix: 1024×1024.
- TG-66 compliant flat tabletop.
- Update the performance according to the NEMA NU 2-2018 standard.
- Update the operation system.
- CT system modification: add Low Dose CT Lung Cancer Screening, Auto ALARA kVp, Organ-Based Auto ALARA mA, EasyRange, Injector Linkage, Shuttle Perfusion, Online MPR and Dual Energy analysis function. All functions have been cleared via K230162.
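In generic terms, the "noise penalty term in the objective function" described for HYPER Iterative corresponds to a regularized (penalized) reconstruction objective. The sketch below is only a textbook-style illustration of that idea on a made-up linear system solved by gradient descent; it is not the vendor's algorithm, and a real scanner would use a far more elaborate forward model and penalty.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear forward model: measurements b = A @ x_true + noise.
n_pix, n_meas = 64, 96
A = rng.normal(size=(n_meas, n_pix))
x_true = rng.normal(size=n_pix)
b = A @ x_true + 0.05 * rng.normal(size=n_meas)

beta = 0.5      # strength of the (assumed) noise penalty
step = 1e-3     # gradient-descent step size
x = np.zeros(n_pix)

# Minimize ||A x - b||^2 + beta * ||x||^2: the penalty keeps noise in check
# while allowing many iterations, which is the idea the summary describes.
for _ in range(2000):
    grad = 2 * A.T @ (A @ x - b) + 2 * beta * x
    x -= step * grad

print("relative reconstruction error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```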
This document appears to be a 510(k) Premarket Notification from Shanghai United Imaging Healthcare Co., Ltd. for their uEXPLORER device.
Here's an analysis of the provided text to extract information about the acceptance criteria and study that proves the device meets them:
Crucial Observation: The document explicitly states: "No Clinical Study is included in this submission." This means that the information typically found in an FDA submission regarding "acceptance criteria" through a clinical performance study (like an MRMC study or standalone performance) is not present here. Instead, the substantial equivalence relies on non-clinical testing and comparison to predicate devices, particularly regarding modifications to previously cleared components.
Therefore, many of the requested points below cannot be fully answered as they pertain to clinical or human-in-the-loop performance studies that were not conducted or provided in this submission for the specific device being reviewed.
However, I can extract information related to the "non-clinical testing" and the rationale for substantial equivalence.
Acceptance Criteria and Device Performance (Based on Non-Clinical Testing and Substantial Equivalence Rationale):
Given the statement "No Clinical Study is included in this submission," the acceptance criteria are primarily related to non-clinical performance, safety, and functionality demonstrating equivalence to predicate devices and adherence to relevant standards. The "reported device performance" is essentially that it met these non-clinical criteria and maintained safety/effectiveness equivalent to the predicate.
1. Table of acceptance criteria and the reported device performance:
Acceptance Criteria Category | Specific Criteria (Implied from document) | Reported Device Performance (Implied from document) |
---|---|---|
Functional Equivalence | Maintains same basic operating principles/fundamental technologies as predicate. | "The uEXPLORER employs the same basic operating principles and fundamental technologies... The differences above between the proposed device and predicate device do not affect the intended use, technology characteristics, safety and effectiveness." |
Indications for Use Equivalence | Has similar indications for use as predicate. | "The uEXPLORER has ... the similar indications for use as the predicate device." (Indications for Use are listed in detail in section 6 of the document, matching the predicate's intent) |
Physical/Technical Specifications | Key specifications (e.g., gantry bore, scintillator, axial FOV, maximum table load) remain equivalent to predicate device. | Confirmed: Gantry bore (760mm), Scintillator material (LYSO), Number of detector rings (672), Axial FOV (194 cm), Gantry bore (76 cm for PET), Maximum table load (250 kg) are identical to the predicate (K182938). |
Addition of New Features (Non-Clinical Validation) | New features (HYPER Iterative, HYPER DLR, Digital Gating, CT system modifications) are either identical to previously cleared devices or validated through non-clinical testing. | HYPER Iterative: "has been cleared in K193241"; "uses a regularized iterative reconstruction algorithm, which allows for more iterations while keeping the image noise at an acceptable level by incorporating a noise penalty term into the objective function" (implies non-clinical validation of this algorithm in a prior submission). HYPER DLR: "has been cleared in K193210"; "uses a deep learning technique to produce better SNR" (implies non-clinical validation of this algorithm in a prior submission). Digital Gating: "has been cleared in K193241"; "uses motion correction method..." (implies non-clinical validation in a prior submission). CT system modification: "All functions have been cleared via K230162" (implies non-clinical validation of these functions in prior submissions); non-clinical tests were conducted for "Algorithm and Image performance." |
Safety - Electrical Safety & EMC | Conformance to relevant electrical safety and electromagnetic compatibility (EMC) standards. | Claims conformance to: ANSI AAMI ES60601-1, IEC 60601-1-2, IEC 60601-2-44, IEC 60601-1-3, IEC 60825-1. (Implies positive test results against these standards). |
Safety - Software | Conformance to software development and cybersecurity standards. | Claims conformance to: IEC 60601-1-6 (Usability), IEC 62304 (Software life cycle processes), NEMA PS 3.1-3.20 (DICOM), FDA Guidance for Software Contained in Medical Devices, FDA Guidance for Cybersecurity. (Implies software development and testing followed these standards). |
Safety - Biocompatibility | Conformance to biocompatibility standards for patient contact materials (if applicable, which for a large imaging system is less direct but still relevant for patient tables/touch points). | Claims conformance to: ISO 10993-1, ISO 10993-5, ISO 10993-10. (Implies positive results for relevant components). |
Performance - PET | Conformance to PET performance measurement standards. | Claims conformance to: NEMA NU 2-2018 (Performance Measurements of Positron Emission Tomographs). "Update the performance according to the NEMA NU 2-2018 standard." (Implies the device meets or exceeds the specifications in this standard). |
Risk Management | Application of risk management processes. | Claims conformance to: ISO 14971: 2019 (Application of risk management to medical devices). (Implies risks were identified, assessed, and mitigated). |
Quality System | Compliance with Quality System Regulation. | Claims conformance to: 21 CFR Part 820 Quality System Regulation. (This is a general requirement for all medical device manufacturers). |
Radiological Health | Compliance with radiological health regulations. | Claims conformance to: Code of Federal Regulations, Title 21, Subchapter J - Radiological Health. (This is a general requirement for X-ray emitting devices). |
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
- Sample Size for Test Set: Not applicable in the context of clinical data. For non-clinical performance and algorithm testing, the "sample size" would refer to the types and number of phantoms/datasets used. The document states "Algorithm and Image performance tests were conducted," but does not specify the number or nature of these test sets.
- Data Provenance: Not specified for any test data. The company is based in China.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)
- Not applicable, as no clinical study with expert ground truth establishment was conducted or presented in this submission.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
- Not applicable, as no clinical study requiring adjudication was conducted or presented.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
- No, an MRMC comparative effectiveness study was explicitly NOT done. The submission states: "No Clinical Study is included in this submission." The new features (HYPER Iterative, HYPER DLR, Digital Gating, and CT modifications) had "been cleared" in other predicate devices via non-clinical performance evaluations, not human reader studies.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Yes, in essence, standalone performance validation of the algorithms was done, but as part of prior submissions for the predicate components. The document states "Algorithm and Image performance tests were conducted for the uEXPLORER during the product development." The key new features, HYPER Iterative, HYPER DLR, and Digital Gating, as well as the CT system modifications, are explicitly stated as having been "cleared" in previous 510(k) submissions (K193241, K193210, K230162). This implies their standalone performance was evaluated and accepted in those prior submissions through non-clinical means (e.g., phantom studies, image quality metrics like SNR, spatial resolution, noise reduction). The details of those prior standalone studies are not provided here, but the current submission leverages their previous clearance.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- For the non-clinical "Algorithm and Image performance tests," the ground truth would typically be established based on well-defined physical phantoms with known properties or simulated data, rather than expert consensus, pathology, or outcomes data, which are associated with clinical studies. The specific details are not provided.
8. The sample size for the training set
- Not applicable directly to this submission. The algorithms (HYPER DLR being deep learning) would have had training data, but those details pertain to their original development and previous clearances (K193210, K193241), not this particular 510(k) submission.
9. How the ground truth for the training set was established
- Not applicable directly to this submission. This information would be found in the documentation for the previous 510(k) clearances for the HYPER DLR and Digital Gating algorithms if they involved supervised learning that required established ground truth. Typically, for medical imaging algorithms, this could involve large datasets with expertly annotated images, but no specifics are in this document.
(190 days)
uCT ATLAS Astound is a computed tomography x-ray system, which is intended to produce cross-sectional images of the whole body by computer reconstruction of x-ray transmission data taken at different angles and planes. uCT ATLAS Astound is applicable to head, whole body, cardiac, and vascular x-ray Computed Tomography.
uCT ATLAS Astound is intended to be used for low dose CT lung cancer screening for the early detection of lung nodules that may represent cancer. The screening must be performed within the established inclusion criteria of programs / protocols that have been approved and published by either a governmental body or professional medical society.*
*Please refer to clinical literature, including the results of the National Lung Screening Trial (N Engl J Med 2011; 365:395-409) and subsequent literature, for further information.
uWS-CT-Dual Energy Analysis is a post-processing software package that accepts UIH CT images acquired using different tube voltages and/or tube currents of the same anatomical location. The various materials of an anatomical region of interest have different attenuation coefficients, which depend on the used energy. These differences provide information on the chemical composition of the scanned body materials and enable images to be generated at multiple energies within the available spectrum. uWS-CT-Dual Energy Analysis software combines images acquired with low and high energy spectra to visualize this information.
uCT ATLAS is a computed tomography x-ray system, which is intended to produce cross-sectional images of the whole body by computer reconstruction of x-ray transmission data taken at different angles and planes. uCT ATLAS is applicable to head, whole body, cardiac, and vascular x-ray Computed Tomography.
uCT ATLAS has the capability to image a whole organ in a single rotation. Organs include, but are not limited to, the head, heart, liver, kidney, pancreas, and joints.
uCT ATLAS is intended to be used for low dose CT lung cancer screening for the early detection of lung nodules that may represent cancer. The screening must be performed within the established inclusion criteria of protocols that have been approved and published by either a governmental body or professional medical society.
*Please refer to clinical literature, including the results of the National Lung Screening Trial (N Engl J Med 2011; 365:395-409) and subsequent literature, for further information.
uWS-CT-Dual Energy Analysis is a post-processing software package that accepts UIH CT images acquired using different tube voltages and/or tube currents of the same anatomical location. The various materials of an anatomical region of interest have different attenuation coefficients, which depend on the used energy. These differences provide information on the chemical composition of the scanned body materials and enable images to be generated at multiple energies within the available spectrum. uWS-CT-Dual Energy Analysis software combines images acquired with low and high energy spectra to visualize this information.
The proposed device CT system with uWS-CT-Dual Energy Analysis includes image acquisition hardware, image acquisition, reconstruction and dual energy analysis software, and associated accessories.
The proposed CT system is designed to use less radiation dose. Further, the fast scanning capability benefits the clinical applications, especially for cardiac imaging, dynamic whole organ imaging and fast body and vascular imaging.
The computer system delivered with the CT scanner is able to run post processing applications optionally.
The Computed Tomography System family scanners referenced in this submission are comparable in indications for use and are substantially equivalent to the predicate devices in design, materials, functionality, technology, and energy source. The reason for this submission is to support the following additional function:
CT intervention provides real-time CT fluoroscopy at 12 IPS with in-room view and in-room X-ray control. It allows the user to adjust the scan parameters during operation, and scan modes can be switched according to the technician's operational requirements. It also supports entry path planning based on 2D and 3D images.
The CT guided intervention will be applicable for the UIH qualified CT systems. This indication will also be applicable for future qualified UIH CT systems.
The provided text does not contain sufficient information to identify the acceptance criteria or the study that proves the device meets them. The document is primarily a 510(k) premarket notification letter from the FDA, outlining regulatory compliance and substantial equivalence to predicate devices, and includes device descriptions, indications for use, and a comparison of technological characteristics.
It mentions "Non-clinical testing including dosimetry and image performance tests were conducted...to verify that the proposed device met all design specifications," and lists relevant standards and guidance documents. It also states "The features described in this premarket submission are supported with the results of the testing mentioned above, the proposed device was found to have a safety and effectiveness profile that is similar to the predicate device." However, it does not provide:
- A table of specific acceptance criteria and reported device performance.
- Details about the study's design, such as sample size, data provenance, number or qualifications of experts, or adjudication methods for ground truth creation.
- Information about multi-reader multi-case (MRMC) comparative effectiveness studies or standalone algorithm performance.
- The type of ground truth used, or the sample size and ground truth establishment methods for a training set.
Therefore, the requested table and the specific study details cannot be populated from the given input.