Search Results
Found 19 results
510(k) Data Aggregation
(175 days)
uCT ATLAS Astound is a computed tomography x-ray system, which is intended to produce cross-sectional images of the whole body by computer reconstruction of x-ray transmission data taken at different angles and planes. uCT ATLAS Astound is applicable to head, whole body, cardiac, and vascular x-ray Computed Tomography.
uCT ATLAS Astound is intended to be used for low dose CT lung cancer screening for the early detection of lung nodules that may represent cancer. The screening must be performed within the established inclusion criteria of programs / protocols that have been approved and published by either a governmental body or professional medical society.
*Please refer to clinical literature, including the results of the National Lung Screening Trial (N Engl J Med 2011; 365:395-409) and subsequent literature, for further information.
uCT ATLAS is a computed tomography x-ray system, which is intended to produce cross-sectional images of the whole body by computer reconstruction of x-ray transmission data taken at different angles and planes. uCT ATLAS is applicable to head, whole body, cardiac, and vascular x-ray Computed Tomography.
uCT ATLAS has the capability to image a whole organ in a single rotation. Organs include, but are not limited to, the head, heart, liver, kidney, pancreas, and joints.
uCT ATLAS is intended to be used for low dose CT lung cancer screening for the early detection of lung nodules that may represent cancer. The screening must be performed within the established inclusion criteria of programs / protocols that have been approved and published by either a governmental body or professional medical society.
*Please refer to clinical literature, including the results of the National Lung Screening Trial (N Engl J Med 2011; 365:395-409) and subsequent literature, for further information.
uWS-CT-Dual Energy Analysis is a post-processing software package that accepts UIH CT images acquired using different tube voltages and/or tube currents of the same anatomical location. The various materials of an anatomical region of interest have different attenuation coefficients, which depend on the used energy. These differences provide information on the chemical composition of the scanned body materials and enable images to be generated at multiple energies within the available spectrum. uWS-CT-Dual Energy Analysis software combines images acquired with low and high energy spectra to visualize this information.
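The submission does not disclose the decomposition mathematics behind uWS-CT-Dual Energy Analysis. For orientation only, here is a minimal sketch of the textbook image-domain two-material decomposition that dual-energy packages are generally built on; the coefficient values, function names, and basis-material choice (water/iodine) are illustrative assumptions, not UIH's implementation.

```python
import numpy as np

# Illustrative basis-material linear attenuation coefficients (cm^-1) at the
# two effective energies; real values depend on the tube spectra and vendor
# calibration, neither of which is disclosed in the submission.
MU = np.array([[0.28, 0.35],   # low-kVp:  [water, iodine]
               [0.21, 0.20]])  # high-kVp: [water, iodine]

def two_material_decomposition(mu_low, mu_high):
    """Solve mu(E) = a*mu_water(E) + b*mu_iodine(E) per voxel at both
    energies, yielding water-like (a) and iodine-like (b) basis maps."""
    stacked = np.stack([mu_low.ravel(), mu_high.ravel()])  # shape (2, N)
    a, b = np.linalg.solve(MU, stacked)                    # rows of (2, N)
    return a.reshape(mu_low.shape), b.reshape(mu_low.shape)

def virtual_monoenergetic(a, b, mu_water_E, mu_iodine_E):
    """Recombine the basis maps into a synthetic image at a chosen energy."""
    return a * mu_water_E + b * mu_iodine_E
```

In practice, vendors refine this linear model with calibration, regularization, and projection-domain variants, so the sketch should be read as the underlying idea rather than the cleared algorithm.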
The uCT ATLAS Astound with uWS-CT-Dual Energy Analysis and the uCT ATLAS with uWS-CT-Dual Energy Analysis include the same intended use and the same indications for use as their recently cleared versions (K231482). The reason for this submission is to support the following additional functions:
- CardioXphase (optimized)
- CardioBoost
- CardioCapture (optimized)
- AIIR
- Motion Freeze
- Ultra EFOV
The provided text describes a 510(k) premarket notification for a Computed Tomography X-ray System (uCT ATLAS Astound with uWS-CT-Dual Energy Analysis and uCT ATLAS with uWS-CT-Dual Energy Analysis). The submission focuses on additional software functions beyond what was previously cleared.
However, the document does not contain specific acceptance criteria, detailed study designs, or quantitative performance data to establish "proof" in the typical sense of a rigorous clinical trial with defined endpoints and statistical significance. Instead, it relies on demonstrating substantial equivalence to existing predicate devices.
The "acceptance criteria" appear to be implicit in the non-clinical and reader studies, aiming to show that the performance of the new features is "sufficient for diagnosis," "equal or better," or "can improve" compared to a baseline or predicate. No explicit numerical thresholds for metrics like sensitivity, specificity, accuracy, or effect sizes for reader improvement are provided.
Here's an analysis based on the information provided, highlighting what is present and what is missing concerning acceptance criteria and study details:
Overview of Device Performance and Acceptance Criteria
The submission does not explicitly define acceptance criteria in terms of numerical thresholds for performance metrics. Instead, it describes a "bench test" and "reader study" approach to demonstrate that the new functions do not raise new safety and effectiveness concerns and provide an equivalent or improved performance compared to the predicate/reference devices or established techniques.
The implied "acceptance criteria" are qualitative, such as:
- "passed the basic general IQ test which satisfied the requirement of IEC 61223-3-5."
- "showed better LCD comparing with FBP..."
- "showed better noise comparing with FBP."
- "showed better spatial resolution comparing with FBP..."
- "all indicators have met the verification criteria and have passed the verification." (for CardioXphase)
- "can reduce head motion artifacts." (for Motion Freeze)
- "can improve the CT number..." (for Ultra EFOV)
- "images are sufficient for diagnosis and the image quality... is equal or better than..." (for various reader studies)
- "is helpful for both artifact suppression and clinical diagnosis." (for Motion Freeze reader study)
- "can improve the accuracy of image CT numbers..." (for Ultra EFOV reader study)
- "conclude the effectiveness of CardioCapture function for reducing cardiac motion artifacts as expected."
Table of Acceptance Criteria (Implied) and Reported Device Performance
Since explicit, quantitative acceptance criteria are not provided, this table will rephrase the reported performance as the observed outcome against the implied objective.
| Software Function | Implied Acceptance Criteria (Objective) | Reported Device Performance |
|---|---|---|
| CardioBoost | Bench Test: Meet IEC 61223-3-5 requirements; show better LCD, noise, and spatial resolution than FBP; maintain basic general IQ. Reader Study: Images sufficient for diagnosis; image quality equal or better than KARL 3D. | Bench Test: Passed basic general IQ test (IEC 61223-3-5 satisfied). Showed better LCD, noise, and spatial resolution compared to FBP at same scanning dose. Reader Study: Confirmed CardioBoost images are sufficient for diagnosis and image quality is equal or better than KARL 3D over all evaluation aspects. |
| AIIR | Bench Test: Meet IEC 61223-3-5 requirements; show better LCD, noise, and spatial resolution than FBP; maintain basic general IQ. Reader Study: Images sufficient for diagnosis; image quality equal or better than FBP. | Bench Test: Passed basic general IQ test (IEC 61223-3-5 satisfied). Showed better LCD, noise, and spatial resolution compared to FBP at same scanning dose. Reader Study: Confirmed AIIR images are sufficient for diagnosis and image quality is equal or better than FBP over all evaluation aspects. |
| CardioXphase | Bench Test (AI module): Quantitative assessment metrics (DICE, Precision, Recall) for heart mask and coronary artery mask extraction meet verification criteria. | Bench Test (AI module): All quantitative indicators (DICE, Precision, Recall) for heart mask and coronary artery mask extracted by the new AI module have met the verification criteria and passed verification. |
| Motion Freeze | Bench Test: Demonstrate effectiveness in reducing head motion artifacts. Reader Study: Images helpful for artifact suppression and clinical diagnosis. | Bench Test: Showed that Motion Freeze can reduce head motion artifacts. Reader Study: Confirmed Motion Freeze is helpful for both artifact suppression and clinical diagnosis. |
| Ultra EFOV | Bench Test: Demonstrate effectiveness in improving CT value accuracy when scanned object exceeds scan-FOV compared to EFOV. Reader Study: Images confirm improved accuracy of image CT numbers and homogeneity of same tissue when scanned object exceeds scan-FOV. | Bench Test: Showed that Ultra EFOV can improve the CT number in cases where the scanned object exceeds the CT field of scan-FOV, compared to EFOV. Reader Study: Confirmed that images with Ultra EFOV can improve the accuracy of image CT numbers and homogeneity of same tissue, in cases where the scanned object exceeds the CT field of view. |
| CardioCapture | Reader Study: Effectiveness in reducing cardiac motion artifacts as expected, with clear/continuous contours, tolerable motion artifacts, and sufficient diagnostic (≥50%) coronary segments. | Reader Study: Concluded the effectiveness of CardioCapture function for reducing cardiac motion artifacts as expected, based on evaluation of clear/continuous contours, tolerable motion artifacts, and number of diagnostic coronary segments (reaching at least 50% of total coronary artery segments). (Specific to AI motion correction in uCT ATLAS). |
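For reference, the DICE, Precision, and Recall figures cited in the CardioXphase row above are standard overlap metrics between a predicted segmentation mask and the annotated reference mask. A minimal sketch of their conventional definitions follows; the submission does not disclose its pass thresholds or tooling.

```python
import numpy as np

def mask_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    """Overlap metrics for binary masks, e.g. a predicted heart or
    coronary-artery mask vs. the annotated reference (assumes both non-empty)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = (pred & gt).sum()    # voxels labeled by both
    fp = (pred & ~gt).sum()   # predicted but not annotated
    fn = (~pred & gt).sum()   # annotated but missed
    return {
        "dice": 2 * tp / (2 * tp + fp + fn),
        "precision": tp / (tp + fp),
        "recall": tp / (tp + fn),
    }
```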
Study Details
- Sample Size Used for the Test Set and Data Provenance:
- Test Set Sample Size: The document does not specify the sample sizes (number of cases/studies) used for either the bench tests or the reader studies.
- Data Provenance: Not explicitly stated (e.g., country of origin, whether retrospective or prospective). The use of "clinical images" implies real patient data, but details are missing.
- Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts:
- Number of Experts: Not specified. The document mentions "readers" (plural) for the reader studies but does not state how many participated.
- Qualifications of Experts: Not specified. No details are given about their specialty (e.g., cardiologist, radiologist), experience level, or board certification.
- Adjudication Method for the Test Set:
- Adjudication Method: Not specified. For the reader studies, it only states that images "were shown to the readers to perform a five-point scale evaluation" or "5-point scale evaluation." There's no mention of how discrepancies or disagreements among readers were handled or if a consensus ground truth was established by independent experts (e.g., 2+1, 3+1).
- If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and If So, the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance:
- MRMC Study: Reader studies were conducted comparing images reconstructed with the new AI functions (e.g., CardioBoost) to those reconstructed with traditional methods (e.g., KARL 3D, FBP). These appear to be MRMC studies in structure, as multiple readers evaluate multiple cases.
- Effect Size: No quantitative effect sizes are provided. The results are qualitative: "equal or better," "sufficient for diagnosis," "helpful." There are no reported metrics like AUC improvement, sensitivity/specificity gains, or statistical significance of differences in reader performance with and without AI assistance.
- If a Standalone (i.e., Algorithm Only Without Human-in-the-Loop Performance) Was Done:
- Standalone Performance: The "bench tests" for CardioBoost, AIIR, Motion Freeze, and Ultra EFOV evaluate the algorithms' image quality metrics (IQ, LCD, noise, spatial resolution, CT value accuracy, artifact reduction) independently of human interpretation. For CardioXphase, the evaluation of the AI module's extraction accuracy (DICE, Precision, Recall) is also a standalone assessment. These can be considered standalone performance evaluations for the image reconstruction/processing algorithms.
- The Type of Ground Truth Used (Expert Consensus, Pathology, Outcomes Data, etc.):
- Ground Truth: For the image quality bench tests (CardioBoost, AIIR, Motion Freeze, Ultra EFOV), the "ground truth" is likely defined by physical phantom measurements and adherence to engineering specifications/standards (e.g., IEC 61223-3-5, CTIQ White Paper, AAPM's report); see the noise/CNR sketch after this list.
- For CardioXphase, the ground truth for image segmentation accuracy (heart and coronary artery masks) was "annotated results," which typically implies expert manual annotation on imaging data.
- For the reader studies, the "ground truth" is based on the subjective evaluation of "image quality aspects" by the readers, rather than an objective, clinically validated ground truth for a diagnostic endpoint (e.g., presence/absence of disease confirmed by biopsy or follow-up). The goal was to demonstrate that the image quality generated by the new features is non-inferior or improved for diagnostic purposes.
- The Sample Size for the Training Set:
- Training Set Sample Size: The document mentions "datasets augmentation and deep learning network optimization" for CardioBoost and AIIR, and "introduction of a new deep learning based coronaries detection algorithm" for CardioXphase, and "introduces a deep learning network" for Ultra EFOV. However, the specific size of the training datasets (number of images/cases) is not provided.
- How the Ground Truth for the Training Set Was Established:
- Training Set Ground Truth: Not explicitly stated. For deep learning models, training data ground truth is typically established by expert annotation or labels derived from existing clinical reports or imaging features. Given the context of image reconstruction and enhancement, it likely involves high-quality, potentially expert-annotated, imaging data. For instance, for CardioXphase, the ground truth for training the coronary artery detection algorithm would involve expert-labeled coronary anatomy. For features like CardioBoost and AIIR, which optimize image reconstruction, the ground truth for training might involve pairs of raw data and ideal reconstructed images, or image quality metrics derived from expert evaluations on initial datasets.
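As flagged in item 7 above, the bench-test ground truth rests on phantom measurements. The submission reports only qualitative pass statements, but phantom noise and low-contrast performance are conventionally quantified from region-of-interest (ROI) statistics, roughly as sketched below. ROI positions, sizes, and any pass thresholds are assumptions here; IEC 61223-3-5 prescribes its own specific procedures.

```python
import numpy as np

def roi_stats(image, center, radius):
    """Mean and standard deviation inside a circular ROI of a phantom slice."""
    yy, xx = np.indices(image.shape)
    mask = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    return image[mask].mean(), image[mask].std()

def noise_and_cnr(image, roi_object, roi_background, radius=20):
    """Noise = HU standard deviation in a uniform region; contrast-to-noise
    ratio = object/background HU difference over background noise."""
    mean_obj, _ = roi_stats(image, roi_object, radius)
    mean_bkg, std_bkg = roi_stats(image, roi_background, radius)
    return std_bkg, abs(mean_obj - mean_bkg) / std_bkg
```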
In summary, the 510(k) submission successfully demonstrates "substantial equivalence" based on qualitative assessments and performance relative to known methods. However, for a detailed "proof" with explicit acceptance criteria and quantitative performance metrics, further information beyond what is presented in this FDA clearance letter summary would be needed. This is characteristic of many 510(k) submissions, which often rely on demonstrating safety and effectiveness relative to existing predicates rather than establishing novel clinical efficacy through large-scale, quantitatively defined trials.
(180 days)
The system is used to perform image guidance in diagnostic, intervention and surgical procedures. Procedures that can be performed with the system include cardiac angiography, neuro-angiography, vascular and non-vascular angiography, rotational angiography and whole body radiographic/fluoroscopic procedures.
The uAngio AVIVA CX is an angiographic X-ray system that generates X-rays through the X-ray tube, receives the signal through the flat panel detector and presents the image after D/A conversion and image post-processing.
The uAngio AVIVA CX is designed to provide intelligent, safe, and precise image guidance in cardiac, neuro, oncology, peripheral interventional, and surgical procedures.
The main components of the uAngio AVIVA CX include a C-arm stand, patient table, generator, X-ray tube, flat panel detector, collimator, grid, monitors, control module, control panel, foot switch, hand switch, V-box, and intercom.
The main software characteristics of the uAngio AVIVA CX include patient registration, patient administration, 2D&3D image viewing and post-processing, data import/archiving, filming, camera-assisted recognition function (uSpace), and voice control function (uLingo).
The provided FDA 510(k) clearance letter and summary for the uAngio AVIVA CX device do not contain detailed information about specific acceptance criteria for performance metrics (e.g., sensitivity, specificity, accuracy) typically associated with AI-driven diagnostic tools. Instead, the document focuses on non-clinical testing to ensure the device meets safety and operational specifications, as well as satisfactory image quality for clinical use.
However, based on the non-clinical testing section, we can infer some performance criteria and how the device reportedly meets them.
Here's an attempt to extract and synthesize the information provided, keeping in mind the limitations of the input regarding specific "acceptance criteria" as they would apply to an AI-driven image interpretation system:
1. Table of Acceptance Criteria and Reported Device Performance
Since this is a fluoroscopic X-ray system, the "acceptance criteria" are related to technical performance and image quality rather than diagnostic AI algorithm metrics like sensitivity or specificity.
| Feature/Metric | Acceptance Criterion (Implied/Stated) | Reported Device Performance |
|---|---|---|
| Cone Beam CT (CBCT) | ||
| C-arm positioning accuracy | Acceptable and repeatable position error. | "The position error of the C-arm is acceptable and the position is repeatable." |
| CBCT imaging performance | Fulfills requirements for in-plane uniformity, spatial resolution, reconstruction section thickness, noise, contrast to noise ratio, artifact analyses. | "The imaging performance of CBCT fulfills the requirements." |
| Radiation dose (CBCT) | CTDI of CBCT protocols fulfills the requirement. | "CTDI of CBCT protocols fulfills the requirement." |
| Fusion (Image Registration Algorithm) | ||
| Accuracy of Image Registration Algorithm (mTRE) | Mean target registration error (mTRE) less than the pixel diagonal distance. | "All the mTREs of the test cases are smaller than the voxel diagonal distance, which can achieve sub-pixel accuracy registration. Image Registration Algorithm is able to meet the verification criteria, the verification result is passed." |
| DSA (Digital Subtraction Angiography) | ||
| Dynamic range (submillimeter vessel visibility) | Submillimeter vessel simulation component is visible across all copper step wedges of the subtracted image. | "submillimeter sized blood vessels can be seen in all copper step wedges" (for four typical groups: Body, Head, Vascular, Pediatric). |
| Contrast sensitivity (low millimeter vessel visibility) | Low millimeter vascular simulation component should be visible in a copper step wedge of sufficient thickness in the subtraction image. | "the thickness of blood vessels visible in sufficiently thick copper step wedges meets the requirements." |
| 3D Roadmap (Neuro Registration Algorithm) | ||
| Accuracy of neuro registration (mTRE) | A precision of at least 1 mm (mTRE < 1mm). | "The registration results of neuro registration algorithm is that the mTRE is less than 1mm. The neuro registration was able to meet a precision of at least 1 mm, the verification was passed." |
| uChase (Stitching Algorithm) | ||
| Accuracy of Image Registration Algorithm in Stitching (mTRE) | Mean target registration error (mTRE) less than the pixel diagonal distance. | "All the mTREs of the test cases are smaller than the pixel diagonal distance" |
| Clinical utility of stitching results | Average score from clinical specialists higher than 2. | "the average score of stitching results is 2.95, which is higher than 2. The Stitching Algorithm was able to meet the verification criteria and the verification of the algorithm was passed." |
| uSpace (Camera-assisted recognition) | ||
| Human key point detection accuracy | Required accuracy for not deviating from the range of the acquisition field of view. | "Human key point detection achieves the required accuracy for not deviating from the range of the acquisition field of view." |
| Collision detection rate | Specified detection rate. | "Collision detection achieves the specified detection rate." |
| Auto SID adjustment accuracy | Required accuracy for maintaining a distance from detector to the patient during C-arm rotation or patient table lifting. | "Auto SID adjustment achieves the required accuracy for maintaining a distance from detector to the patient during C-arm rotation or patient table lifting." |
| Radiation safety (different SID) | Tested within allowable adjustment range; meets requirements. | "The radiation safety test results of the uAngio AVIVA CX under different SID. [...] meet the requirements." |
| Image quality (different SID) | Meets requirements based on clinical evaluation and objective physical characteristics. | "The image quality evaluation was tested through the clinical evaluation and objective physical characteristics. According to the acceptable criteria in the regulations, the image quality test results of the uAngio AVIVA CX under different SID meet the requirements." |
| uLingo (Voice control) | ||
| Wake-up algorithm recognition rate (SNR ≥ 15 dB(A)) | ≥95% accuracy rate. | "The recognition rate of voice wake-up is tested as below: When the environmental signal-to-noise ratio (SNR) is ≥15 dB(A), the wake-up accuracy rate reaches ≥95%" |
| Wake-up algorithm recognition rate (SNR = 10 dB(A)) | ≥85% accuracy rate. | "When environmental SNR is 10 dB(A), the wake-up accuracy rate reaches ≥85%." |
| Voice command recognition rate (SNR ≥ 15 dB(A)) | ≥95% success rate for each command. | "When the environmental SNR is ≥15 dB(A), the recognition success rate for each command reaches ≥95%" |
| Voice command recognition rate (SNR = 10 dB(A)) | ≥85% success rate for each command. | "When the environmental SNR is 10 dB(A), the recognition success rate for each command reaches ≥85%" |
| Clinical Image Evaluation | Image quality fulfills the needs for diagnostic, intervention, and surgical procedures (score ≥ 3 on a five-point scale for spatial detail, contrast-noise performance, clinical motion, and clinical features of interest). | "Each image received a score of ≥ 3 and received a PASS result, indicating that image quality fulfills the need for diagnostic, intervention, and surgical procedures." |
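Several rows above gate registration accuracy on the mean target registration error (mTRE) falling below the pixel or voxel diagonal. The summary does not give the computation, but mTRE is conventionally the average Euclidean distance between corresponding target points after registration; a minimal sketch under that assumption:

```python
import numpy as np

def mtre(registered_pts: np.ndarray, reference_pts: np.ndarray) -> float:
    """Mean target registration error: mean Euclidean distance between
    corresponding landmarks after registration (N x 3 arrays, in mm)."""
    return float(np.linalg.norm(registered_pts - reference_pts, axis=1).mean())

def subvoxel_pass(mtre_mm: float, voxel_spacing_mm) -> bool:
    """The acceptance gate quoted above: mTRE below the voxel diagonal.
    For example, 0.5 mm isotropic voxels give a diagonal of about 0.87 mm."""
    return mtre_mm < float(np.linalg.norm(np.asarray(voxel_spacing_mm)))
```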
2. Sample size(s) used for the test set and the data provenance
- CBCT, Fusion, DSA, 3D Roadmap, uChase (Stitching Algorithm): These refer to various "test cases" or "test data." The exact number of images/cases is not specified for most, but:
- Fusion: "Each group has three sets of test data." (Groups refer to DSA mask registered with DSA mask, CBCT, CT, CTA, and MR).
- DSA: Four typical groups (Body, Head, Vascular, Pediatric) were tested, with "each group applying two sets of test data."
- 3D Roadmap: "It has eight sets of test data in the group."
- uChase: Refers to "test cases," specific number not listed.
- uLingo (Voice control): "For U.S. Participants, 18 talkers were invited to record the voice commands for verification."
- Clinical Image Evaluation: "Sample images were obtained from hospitals to meet the following requirements: contained images of the head, cardiac, body, and extremities with different acquisition protocols representative of US clinical use and populations. Typical cases have been selected such as Internal carotid artery angioplasty and stenting, internal carotid artery stenosis and aneurysm, PCI, radiofrequency ablation, TACE, TIPS, lower extremity arterial angioplasty and so on." The document does not specify the exact number of images or cases in this clinical image evaluation.
- Data Provenance:
- uLingo: "U.S. Participants."
- Clinical Image Evaluation: "obtained from hospitals," "representative of US clinical use and populations." This suggests retrospective data from US clinical settings. Other technical tests do not explicitly state data provenance but imply internal testing environments.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- uChase (Stitching Algorithm): "average score from clinical specialists." The number of specialists and their specific qualifications are not provided, only stating "clinical specialists."
- Clinical Image Evaluation: "[images] evaluated by an ADR certified radiologist." This indicates one expert. The qualification specified is "ADR certified radiologist."
4. Adjudication method for the test set
- uChase (Stitching Algorithm): Ground truth seems to be implicitly established by "clinical specialists" providing an "average score." The method of combining scores (e.g., majority vote, consensus) or a formal adjudication process is not detailed.
- Clinical Image Evaluation: "evaluated by an ADR certified radiologist." This implies a single reader assessment rather than an adjudication process involving multiple experts for consensus.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
The document does not describe a multi-reader, multi-case (MRMC) comparative effectiveness study involving human readers with and without AI assistance for device functionality like image interpretation. The AI components mentioned (uSpace for auto-positioning, uLingo for voice control, image quality algorithms) are primarily aimed at workflow enhancement and physical control of the device, not direct diagnostic interpretation or assistive reading. Therefore, no effect size of human reader improvement with AI assistance is provided.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
Yes, various standalone algorithm performances were evaluated:
- CBCT imaging performance metrics.
- Fusion algorithm accuracy (mTRE).
- DSA algorithm performance (vessel visibility).
- 3D Roadmap algorithm accuracy (mTRE).
- uChase Stitching Algorithm accuracy (mTRE).
- uSpace performance (human key point detection, collision detection, Auto SID adjustment).
- uLingo algorithm performance (wake-up and command recognition rates).
These evaluations assess the algorithms' intrinsic performance without direct human-in-the-loop diagnostic interpretation.
7. The type of ground truth used
- CBCT, Fusion, DSA, 3D Roadmap, uChase (Stitching Algorithm): For these technical measurements, the ground truth is established through physical phantoms (Geometric phantom, Catphan700 phantom, chest phantom, head phantom, CTDI dosimetry phantom, body phantom, digital subtraction angiography phantom with compensation test step wedge) and objective quantitative metrics (e.g., mTRE, pixel diagonal distance, stated requirements for vessel visibility, accuracy thresholds).
- uSpace (Camera-assisted recognition): Inferred to be based on objective measurements of physical distance, field of view, and collision events.
- uLingo (Voice control): Based on objective recording and analysis of recognition rates against known voice commands and environmental conditions.
- Clinical Image Evaluation: "image quality was evaluated by an ADR certified radiologist using a five-point scale," indicating expert opinion/consensus on image quality, rather than pathology or outcomes data.
8. The sample size for the training set
The document does not specify the sample size for the training set for any of the machine learning components (uSpace, uLingo, image registration algorithms). It only mentions "machine learning methods" are used.
9. How the ground truth for the training set was established
The document does not describe how the ground truth for the training set was established for any of the machine learning components. It only mentions the "machine learning methods" are used for uSpace and uLingo but provides no details on their training data or labels.
(27 days)
The uMI Panvivo is a PET/CT system designed for providing anatomical and functional images. The PET provides the distribution of specific radiopharmaceuticals. CT provides diagnostic tomographic anatomical information as well as photon attenuation information for the scanned region. PET and CT scans can be performed separately. The system is intended for assessing metabolic (molecular) and physiologic functions in various parts of the body. When used with radiopharmaceuticals approved by the regulatory authority in the country of use, the uMI Panvivo system generates images depicting the distribution of these radiopharmaceuticals. The images produced by the uMI Panvivo are intended for analysis and interpretation by qualified medical professionals. They can serve as an aid in detection, localization, evaluation, diagnosis, staging, re-staging, monitoring, and/or follow-up of abnormalities, lesions, tumors, inflammation, infection, organ function, disorders, and/or diseases, in several clinical areas such as oncology, infection and inflammation, neurology. The images produced by the system can also be used by the physician to aid in radiotherapy treatment planning and interventional radiology procedures.
The CT system can be used for low dose CT lung cancer screening for the early detection of lung nodules that may represent cancer. The screening must be performed within the established inclusion criteria of programs / protocols that have been approved and published by either a governmental body or professional medical society.
*Please refer to clinical literature, including the results of the National Lung Screening Trial (N Engl J Med 2011; 365:395-409) and subsequent literature, for further information.
The proposed device uMI Panvivo combines a 235/295 mm axial field of view (FOV) PET and a 160-slice CT system to provide high quality functional and anatomical images, fast PET/CT imaging and a better patient experience. The system includes the PET system, CT system, patient table, power distribution unit, control and reconstruction system (host, monitor, and reconstruction computer, system software, reconstruction software), vital signal module and other accessories.
The uMI Panvivo was previously cleared by the FDA via K241596. The modification in this submission is the addition of a new model: the uMI Panvivo (K241596) is designed with scalable PET rings, and the new uMI Panvivo S scales down to 80 PET rings compared with the uMI Panvivo's 100 PET rings.
This document does not contain the detailed acceptance criteria and study information that would typically be found in an FDA Summary of Safety and Effectiveness Data (SSED) report or a more comprehensive clinical study report. The provided text is a 510(k) Summary, which primarily focuses on demonstrating substantial equivalence to a predicate device rather than providing extensive details on novel performance studies for acceptance criteria.
However, based on the limited information available in the "Performance Verification" section on page 8, I can infer some points related to image quality and the type of evaluation performed.
Here's a breakdown of what can and cannot be extracted from the provided text according to your requested categories:
1. A table of acceptance criteria and the reported device performance
The document states:
"A Sample clinical images were reviewed by U.S. board-certified radiologist. It was shown that the proposed device can generate images as intended and the image quality is sufficient for diagnostic use."
This implies that the acceptance criteria for image quality were met, as determined by a qualified professional. However, the specific quantitative acceptance criteria (e.g., minimum spatial resolution, signal-to-noise ratio, contrast-to-noise ratio, lesion detection sensitivity/specificity targets) and the reported device performance against these specific criteria are not detailed in this summary.
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
The document mentions "Sample clinical images." The exact sample size of images or cases used in this review is not specified. The data provenance (country of origin, retrospective/prospective nature) is also not mentioned.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)
The document states that images were "reviewed by U.S. board-certified radiologist."
- Number of experts: Singular ("radiologist") suggests one radiologist, but it could also implicitly mean "radiologists" as a group of experts. The exact number is unclear.
- Qualifications: "U.S. board-certified radiologist" is a qualification. Specific experience (e.g., "10 years of experience") is not provided.
- Role in ground truth: Based on the text, the radiologist(s) reviewed images to confirm "image quality is sufficient for diagnostic use." This implies they evaluated the image quality itself, rather than strictly establishing a ground truth for a diagnostic task (e.g., confirming presence/absence of a lesion against a gold standard).
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
Since the number of experts is unclear or potentially singular, and the nature of the review was for "image quality is sufficient for diagnostic use," an explicit adjudication method like 2+1 or 3+1 is not mentioned and likely not applied in the traditional sense for a diagnostic ground truth.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
The document does not describe an MRMC comparative effectiveness study, nor does it mention AI assistance. The device is described as a PET/CT system, and the performance verification mentions evaluation of image quality by a radiologist. This is not an MRMC study comparing human readers with and without AI assistance.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
This section describes a PET/CT imaging system, not an AI algorithm. Therefore, the concept of "standalone (algorithm only)" performance does not directly apply to the described device in this context. The study described is a human evaluation of the device's output (images).
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The verification states that images were reviewed by a radiologist to determine if "image quality is sufficient for diagnostic use." This implies a subjective expert evaluation of image quality rather than a definitive "ground truth" established by pathology, clinical outcomes, or expert consensus for a diagnostic task. The ground truth here is essentially the radiologist's assessment of image diagnostic sufficiency.
8. The sample size for the training set
The document does not describe any machine learning or AI components that would require a "training set." It focuses on verification of a hardware imaging system. Therefore, a sample size for a training set is not applicable and not mentioned.
9. How the ground truth for the training set was established
As there is no mention of a training set, this information is not applicable and not provided.
Summary of what is available and what is missing:
The provided 510(k) Summary focuses on demonstrating substantial equivalence of the uMI Panvivo with a new model (uMI Panvivo S) to its predicate device (uMI Panvivo K241596). The "Performance Verification" section mentions a review of sample clinical images by a U.S. board-certified radiologist to confirm that the device generates images as intended and that the image quality is sufficient for diagnostic use. This is a very high-level statement and lacks the quantitative details typically associated with detailed acceptance criteria and study results. The document does not provide specifics on:
- Quantitative acceptance criteria for image quality or diagnostic performance.
- Specific device performance metrics against these criteria.
- The exact sample size of images/cases.
- The data provenance (country, retrospective/prospective).
- The precise number of experts or their detailed experience.
- Any formal adjudication method for ground truth.
- MRMC studies for AI assistance or standalone algorithm performance.
- Details on how "ground truth" was established beyond general expert review of image sufficiency.
- Training set information, as it's not an AI/ML device per se in this context.
(48 days)
uDR 380i Pro is a mobile digital radiography device intended to acquire X-ray images of the human anatomy for medical diagnosis. uDR 380i Pro can be used on both adult and pediatric patients by a qualified and trained operator. This device is not intended for mammography.
uDR 380i Pro is a diagnostic mobile x-ray system utilizing digital radiography (DR) technology. It can be moved to different environments for an examination, such as the emergency room, ICU, and ward. It mainly consists of a lifting column - telescopic cantilever frame system, system motion assembly, X-ray system (high voltage generator, x-ray tube, collimator and wireless flat panel detectors which have been cleared in K230175), power supply system and software for acquiring and processing the clinical images.
The provided text is a 510(k) summary for the uDR 380i Pro mobile X-ray system. This document primarily focuses on establishing substantial equivalence to a predicate device (K222339) and does not contain detailed information about acceptance criteria or a comprehensive study demonstrating the device's performance against specific acceptance criteria.
The key change in this 510(k) submission is the addition of new flat panel detectors (CXDI-710C and CXDI-810C) and associated control software (CXDI Control Software NE). The document explicitly states: "The modifications performed on the uDR 380i Pro (K222339) in this submission are due to the addition of flat panel detectors, including CXDI-710C and CXDI-810C, and CXDI Control Software NE which were cleared in K230175." and "The device software is unchanged from the predicate device, except for the addition of CXDI Control Software NE."
Therefore, the performance data provided is primarily to demonstrate that these new components do not adversely affect the safety and effectiveness or alter the fundamental scientific technology of the device compared to the predicate.
Here's an analysis of the provided information concerning acceptance criteria and study details:
1. A table of acceptance criteria and the reported device performance:
The document does not present a formal table of "acceptance criteria" for the device's overall performance. Instead, it compares the technological characteristics of the proposed device to the predicate device in Table 1: Comparison of Technology Characteristics. This table implicitly defines the acceptance (or "sameness") criteria based on the predicate device's specifications.
| ITEM | Predicate Device: uDR 380i Pro (K222339) | Proposed Device: uDR 380i Pro | Remark |
|---|---|---|---|
| Product Code | IZL | IZL | Same |
| Regulation No. | 21 CFR 892.1720 | 21 CFR 892.1720 | Same |
| Class | II | II | Same |
| Indications Use | Mobile digital radiography device for X-ray images of human anatomy for medical diagnosis for adult and pediatric patients. Not for mammography. | Mobile digital radiography device for X-ray images of human anatomy for medical diagnosis for adult and pediatric patients. Not for mammography. | Same |
| Specifications (Selected) | |||
| DQE (Flat Panel Detector) | Typical: 58% @3uGy,0.5lp/mm | Typical: 0.58±10% @3uGy,0.5lp/mm (for AR-C3543W&AR-B2735W), Typical: 0.58±10% @2.5uGy,0.5lp/mm (for CXDI-710C & CXDI-810C) | Note 1: DQE Performance is similar. When operated under the intended use, it did not raise new safety and effectiveness concerns. |
| Disk Size | 500GB | ≥500GB | Note 2: The disk size of the proposed device is a range value, which is only a descriptive update, however the disk size can satisfy its intended use. So it did not raise new safety and effectiveness concerns. |
Acceptance is generally implied if the new device's specifications are "Same" or the differences are justified as not raising new safety/effectiveness concerns (as indicated by "Note 1" and "Note 2").
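For context on the DQE row in the table above: detective quantum efficiency at spatial frequency f is conventionally defined for digital X-ray detectors (per the IEC 62220-1 family of standards) as

```latex
\mathrm{DQE}(f) \;=\; \frac{\mathrm{SNR}_{\mathrm{out}}^{2}(f)}{\mathrm{SNR}_{\mathrm{in}}^{2}(f)}
\;=\; \frac{\mathrm{MTF}^{2}(f)}{q \cdot \mathrm{NNPS}(f)},
```

where q is the incident photon fluence and NNPS is the noise power spectrum normalized by the squared large-area signal. Read this way, "0.58 @2.5uGy, 0.5lp/mm" states the fraction of incident quanta the detector effectively uses at that exposure and frequency, which is why the two detector generations can be compared on a single number.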
2. Sample size used for the test set and the data provenance:
- Test Set Sample Size: The document states: "Sample image of Head, chest, abdomen, spine, pelvis, upper extremity and lower extremity were provided with a board certified radiologist to evaluate the image quality in this submission." It does not specify the exact number of images or cases in this sample set. It's described as "Sample image," implying a representative, but not quantitatively defined, set.
- Data Provenance: The document does not explicitly state the country of origin or whether the data was retrospective or prospective. Given that it's a 510(k) submission for a Chinese manufacturer (Shanghai United Imaging Healthcare Co.,Ltd.), the data could be from China, but this is not confirmed.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Number of Experts: "a board certified radiologist" - This indicates one expert.
- Qualifications: "board certified radiologist"
4. Adjudication method for the test set:
- Adjudication Method: "Each image was reviewed with a statement indicating that image quality are sufficient for clinical diagnosis." This suggests a single-reader review without an explicit adjudication process involving multiple readers. It does not mention a 2+1, 3+1, or similar multi-reader adjudication.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance:
- No MRMC Study: The document does not describe a multi-reader multi-case (MRMC) comparative effectiveness study. There is no mention of AI assistance or human readers improving with AI vs. without AI. The device is a mobile X-ray system, and the changes relate to its hardware (detectors) and basic control software, not an AI-powered diagnostic tool requiring such a study for a 510(k).
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:
- Not Applicable in the traditional sense: This device is an X-ray imaging system, not an AI algorithm for diagnosis. The "performance data" provided ("Clinical Image Evaluation") is about the quality of the images produced, which are then interpreted by a human. It's not a standalone diagnostic algorithm.
7. The type of ground truth used:
- Expert Consensus (single expert, effectively): The "ground truth" for image quality assessment was established by a single "board certified radiologist" who determined if the "image quality [is] sufficient for clinical diagnosis." This is effectively expert opinion/assessment rather than a gold standard like pathology or long-term outcomes data.
8. The sample size for the training set:
- The document does not specify a sample size for a training set. This is generally because the submission is for a conventional imaging device with new detectors, rather than an AI/Machine Learning device that requires explicit training data and validation sets. The "software" mentioned (CXDI Control Software NE) is for detector control and image acquisition/processing, not a deep learning algorithm.
9. How the ground truth for the training set was established:
- Not applicable/Not provided: As no training set is mentioned in the context of AI/ML, there is no discussion of how ground truth for such a set was established. The "Clinical Image Evaluation" section focuses on verification of image quality for the new detectors.
(88 days)
The uMI Panorama is a diagnostic imaging system that combines two existing imaging modalities PET and CT. The quantitative distribution information of PET radiopharmaceuticals within the patient body measured by PET can assist healthcare providers in assessing metabolic and physiological functions. CT provides diagnostic tomographic anatomical information as well as photon attenuation information for the scanned region. The accurate registration and fusion of PET and CT images provides anatomical reference for the findings in the PET images.
This system is intended to be operated by qualified healthcare professionals to assist in the detection, localization, diagnosis, staging, restaging, treatment planning and treatment response evaluation for diseases, inflammation, infection and disorders in, but not limited to oncology, cardiology and neurology. The system maintains independent functionality of the CT device, allowing for single modality CT diagnostic imaging.
This CT system can be used for low dose CT lung cancer screening for the early detection of lung nodules that may represent cancer. The screening must be performed within the established inclusion criteria of programs / protocols that have been approved and published by either a governmental body or professional medical society.
The proposed device uMI Panorama GS combines a 148 cm axial field of view (FOV) PET and multi-slice CT system to provide high quality functional and anatomical images, fast PET/CT imaging and better patient experience. The system includes PET gantry, CT gantry, patient table, power supply cabinet, console and reconstruction system, chiller, vital signal module.
The uMI Panorama GS was previously cleared by FDA via K231572. The main modifications performed on the uMI Panorama GS (K231572) in this submission are the algorithm update of AIIR and the addition of HYPER Iterative, uExcel DPR, RMC, AIEFOV, Motion Management, CT-less AC, uKinetics, Retrospective Respiratory-gated Scan, uExcel Unity and uExcel iQC.
The provided text describes the performance data for the uMI Panorama device, focusing on the AIEFOV algorithm. Here's a breakdown based on your request:
Acceptance Criteria and Reported Device Performance for AIEFOV Algorithm
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Bench Tests: | Bench Tests: |
| 1. AI EFOV shall improve the accuracy of the CT value, and improve the accuracy and uniformity of the PET image SUV, by performing attenuation correction with CT generated by the AIEFOV algorithm when the scanned object exceeds the CT field of view. | Bench tests showed that performing attenuation correction with AIEFOV can improve the CT number and the accuracy of SUV in cases where the scanned object exceeds the CT scan-FOV. |
| 2. AI EFOV shall yield a consistent CT value and PET image SUV when performing attenuation correction with CT generated by the AIEFOV algorithm when the scanned object does not exceed the CT field of view. | Meanwhile, when the scanned object did not exceed the CT scan-FOV, either AIEFOV or EFOV resulted in a consistent SUV and CT number. |
| Clinical Evaluation: | Clinical Evaluation: |
| Image quality of PET images attenuated with AIEFOV should provide sufficient diagnostic confidence, with blind comparison regarding image Artifacts and homogeneity of same tissue by qualified clinical experts. | Clinical evaluation concluded the image quality of PET attenuated with AIEFOV provides sufficient diagnostic confidence. (Implied that artifacts and homogeneity were acceptable, as confidence was sufficient). |
| Overall Summary: Performing attenuation correction with AIEFOV-generated CT can improve the accuracy of image SUV in cases where the scanned object exceeds the CT field of view. | Based on the bench tests and clinical evaluation, performing attenuation correction with AIEFOV-generated CT can improve the accuracy of image SUV in cases where the scanned object exceeds the CT field of view. |
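Because the criteria above are stated in terms of SUV accuracy, the standard body-weight SUV definition (not spelled out in the submission) is useful context:

```latex
\mathrm{SUV}_{\mathrm{bw}} \;=\; \frac{C_{\mathrm{img}}\ [\mathrm{kBq/mL}]}{A_{\mathrm{inj}}\ [\mathrm{kBq}]\ /\ W\ [\mathrm{g}]}
```

Here C_img is the attenuation-corrected activity concentration in the image, A_inj the injected activity (decay-corrected to scan time), and W the patient's body weight. Anatomy truncated out of the CT scan-FOV is missing from the attenuation map, which biases C_img; this is the mechanism by which the AIEFOV-extended CT can improve SUV accuracy.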
Study Details Proving Device Meets Acceptance Criteria:
- Sample Size and Data Provenance for Test Set:
- Test Set Sample Size: 9303 images from 13 patients.
- Data Provenance: Not explicitly stated regarding country of origin, but the data are described as "clinical images" scanned on the uMI Panorama GS. The study appears to be either retrospective or a controlled prospective validation study.
- Patient Characteristics (N=13):
- Age: 62 ± 14 years (range: 35-79)
- Sex: 7 male, 6 female
- BMI: 25.0 ± 3.5 kg/m² (range: 21.2-31.4)
- Injected activity: 0.10 ± 0.01 mCi/kg (range: 0.04-0.11)
- Number of Experts and Qualifications for Ground Truth for Test Set:
- Number of Experts: Two (2)
- Qualifications: "American Board qualified clinical experts"
- Adjudication Method for Test Set:
- The experts performed a "blind comparison" regarding image Artifacts, homogeneity of same tissue, and diagnostic confidence in PET images. Details of how disagreements were resolved (e.g., 2+1, 3+1, or if consensus was required) are not specified.
- Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:
- Not explicitly stated as a formal MRMC study comparing human readers with AI vs. without AI assistance. The clinical evaluation involved two experts reviewing images generated with AIEFOV for diagnostic confidence, rather than a comparative trial measuring improvement in human reader performance aided by AI. Therefore, an effect size of human reader improvement with AI vs. without AI assistance is not provided.
- Standalone (Algorithm-Only) Performance:
- Yes, the "Bench tests" portion of the performance evaluation appears to assess the algorithm's performance directly on quantitative metrics (CT value, SUV accuracy and uniformity) using phantoms and patient studies in different truncation situations. The clinical evaluation also assessed the quality of images produced by the algorithm, implying a standalone assessment of its output for diagnostic confidence.
- Type of Ground Truth Used:
- For bench tests: Quantitative measurements from phantom scans and potentially patient studies where the "true" CT values and SUV could be established or inferred relative to known conditions (e.g., non-truncated scans serving as reference).
- For clinical evaluation: Expert consensus/assessment by "American Board qualified clinical experts" regarding subjective image quality metrics (artifacts, homogeneity, diagnostic confidence).
- Sample Size for Training Set:
- The training data for the AIEFOV algorithm contained 506,476 images.
- How Ground Truth for Training Set Was Established:
- "All data were manually quality controlled before included for training." This suggests a process of human review and verification to ensure the accuracy and suitability of the training images. Further details on the specific criteria or expert involvement for this manual QC are not provided.
- It is explicitly stated that "The training dataset used for the training of AIEFOV algorithm was independent of the dataset used to test the algorithm."
(59 days)
The uMR 570 system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces sagittal, transverse, coronal, and oblique cross sectional images, and that display internal anatomical structure and/or function of the head, body and extremities.
These images and the physical parameters derived from the images, when interpreted by a trained physician, yield information that may assist the diagnosis. Contrast agents may be used depending on the region of interest of the scan.
The uMR 570 is a 1.5T superconducting magnetic resonance diagnostic device with a 70cm size patient bore. It consists of components such as magnet, RF power amplifier, RF coils, gradient power amplifier, gradient coils, patient table, spectrometer, computer, equipment cabinets, power distribution system, internal communication system, and vital signal module etc. The uMR 570 Magnetic Resonance Diagnostic Device is designed to conform to NEMA and DICOM standards.
The modification performed on the uMR 570 (K201540) in this submission is due to the following changes that include:
- (1) Addition and modification of pulse sequences
- (2) Addition of MR imaging processing methods: Inline T2 mapping, Cardiac T1 mapping, Cardiac T2 mapping, Cardiac T2* mapping, Flow Quantification, Arterial Spin Labeling (3D ASL). (See the per-voxel fitting sketch after this list.)
- (3) Addition of Spectroscopy Sequences and Post Processing Features: SVS MRS (Liver), Prostate MRS, SVS MRS (Breast).
- (4) Addition of New function: Implant mode, Remote Assistance.
- (5) Addition of New Workflow Features: EasyScan.
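The summary gives no algorithmic detail for the mapping methods in item (2). As a generic illustration of what per-voxel parametric mapping involves, the sketch below fits a mono-exponential decay S(TE) = S0·exp(-TE/T2) to a multi-echo series by linear regression on the log-signal; this is a textbook approach, not necessarily the inline uMR implementation, and all names are illustrative.

```python
import numpy as np

def fit_t2_map(images: np.ndarray, echo_times_ms: np.ndarray) -> np.ndarray:
    """Fit S(TE) = S0 * exp(-TE / T2) per voxel via least squares on the
    log-signal; `images` has shape (n_echoes, H, W), TEs in milliseconds."""
    n_te = echo_times_ms.size
    logs = np.log(np.clip(images.reshape(n_te, -1), 1e-6, None))  # (n_te, N)
    # Design matrix for log S = log S0 - TE / T2, solved for all voxels at once.
    design = np.stack([np.ones(n_te), -echo_times_ms], axis=1)    # (n_te, 2)
    coef, *_ = np.linalg.lstsq(design, logs, rcond=None)          # (2, N)
    inv_t2 = np.clip(coef[1], 1e-6, None)                         # 1/T2 per voxel
    return (1.0 / inv_t2).reshape(images.shape[1:])               # T2 map in ms
```

T1 and T2* mapping follow the same pattern with their own signal models (e.g., inversion recovery for T1), which is why these features are grouped as imaging processing methods rather than new hardware.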
This FDA 510(k) summary for the uMR 570 Magnetic Resonance Diagnostic Device details hardware and software modifications compared to a predicate device (K201540). The document asserts substantial equivalence without providing detailed acceptance criteria or study results for novel features.
Here's the breakdown of the requested information based on the provided text:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly state acceptance criteria in a quantitative manner for most of the new features. It generally confirms that the device "performs as expected" and is "substantially equivalent" to the predicate. For the specific item where a standard is mentioned:
| Acceptance Criteria (from predicate/standard alignment) | Reported Device Performance |
|---|---|
| NEMA MS 14-2019 (RF coil heating) | "perform as well as predicate devices." |
For other features, the acceptance criterion is implicitly that the features function as intended and yield diagnostically useful information, consistent with the foundational technology of MRI.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document does not provide information on the sample size used for the test set or the data provenance (country of origin, retrospective/prospective).
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
This information is not provided in the document. The general statement is that "physical parameters derived from the images when interpreted by a trained physician yield information that may assist the diagnosis."
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
The document does not specify any adjudication method for establishing ground truth or evaluating test results.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
The document does not mention any multi-reader multi-case (MRMC) comparative effectiveness study. The device itself is a diagnostic imaging system, not an AI-assisted interpretation tool, so this type of study would not be applicable in this context. The enhancements are primarily in imaging sequences and post-processing capabilities.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
This is an MRI system, not an AI algorithm for standalone diagnosis. Therefore, a standalone (algorithm only) performance study as typically understood for AI devices would not be relevant or performed. The performance evaluation is for the imaging capabilities and derived parameters.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The document does not explicitly state the type of ground truth used. For an MRI system, the "ground truth" for image quality and quantitative parameter accuracy would typically involve phantom studies, comparison to established imaging techniques, and potentially correlation with other diagnostic findings or clinical outcomes, but this is not detailed. The phrase "physical parameters derived from the images when interpreted by a trained physician yield information that may assist the diagnosis" implies clinical utility rather than a specific ground truth mechanism for validation.
8. The sample size for the training set
This information is not applicable as the device is an MRI diagnostic system and not an AI/ML model that would typically have a "training set" in the context of machine learning. The "training" for the system refers to its engineering and calibration.
9. How the ground truth for the training set was established
This information is not applicable for the reasons stated above (not an AI/ML model with a "training set").
Summary of the Study that Proves the Device Meets Acceptance Criteria:
The provided 510(k) summary indicates that non-clinical testing and a clinical performance evaluation were conducted.
- Non-clinical testing: This included evaluation against NEMA MS 14-2019 for RF coil heating, which demonstrated the proposed device performs "as well as predicate devices."
- Clinical performance evaluation: This was done for new features such as Flow Quantification, 3D ASL, Inline T2 Mapping, Cardiac T1/T2/T2* Mapping, MARS+, MultiBand, QScan, and Silicon-Only Imaging.
- Performance testing for Spectroscopy: Specific tests were performed for Prostate MRS, SVS MRS (Breast), and SVS MRS (Liver).
- System Validation: Reports for Implant mode, Remote Assistance, and EasyScan were also provided.
The conclusion states that "The test results demonstrated that the device performs as expected and thus, it is substantial equivalent to the predicate devices to which it has been compared." This implies that the performance was deemed acceptable in comparison to the previously cleared predicate device, without detailing specific metrics or thresholds for acceptance for each new feature beyond the general statement of "as expected" and "substantial equivalence."
(265 days)
The uMR Omega system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces sagittal, transverse, coronal, and oblique cross sectional images, and that display internal anatomical structure and/or function of the head, body and extremities.
These images and the physical parameters derived from the images when interpreted by a trained physician yield information that may assist the diagnosis. Contrast agents may be used depending on the region of interest of the scan.
uWS-MR is a software solution intended to be used for viewing, manipulation, and storage of medical images. It supports interpretation and evaluation of examinations within healthcare institutions. It has the following additional indications:
The MR Stitching is intended to create full-format images from overlapping MR volume data sets acquired at multiple stages.
The Dynamic application is intended to provide a general post-processing tool for time course studies.
The Image Fusion application is intended to combine two different image series so that the displayed anatomical structures match in both series.
MRS (MR Spectroscopy) is intended to evaluate the molecule constitution and spatial distribution of cell metabolism. It provides a set of tools to view, process, and analyze the complex MRS data. This application supports the analysis for both SVS (Single Voxel Spectroscopy) and CSI (Chemical Shift Imaging) data.
The MAPs application is intended to provide a number of arithmetic and statistical functions for evaluating dynamic processes and images. These functions are applied to the grayscale values of medical images.
The MR Breast Evaluation application provides the user a tool to calculate parameter maps from contrast-enhanced timecourse images.
The Brain Perfusion application is intended to allow visualization of the dynamic susceptibility time series of MR datasets.
MR Vessel Analysis is intended to provide a tool for viewing and evaluating MR vascular images.
The Inner view application is intended to perform a virtual camera view through hollow structures (cavities), such as vessels.
The DCE analysis is intended to view, manipulate, and evaluate dynamic contrast-enhanced MRI images.
The United Neuro is intended to view, manipulate, and evaluate MR neurological images.
The MR Cardiac Analysis application is intended to be used for viewing, post-processing and quantitative evaluation of cardiac magnetic resonance data.
The uMR Omega is a 3.0T superconducting magnetic resonance diagnostic device with a 75cm size patient bore. It consists of components such as magnet, RF power amplifier, RF coils, gradient power amplifier, gradient coils, patient table, spectrometer, computer, equipment cabinets, power distribution system, internal communication system, and vital signal module etc. The uMR Omega Magnetic Resonance Diagnostic Device is designed to conform to NEMA and DICOM standards.
uWS-MR is a comprehensive software solution designed to process, review and analyze MR (Magnetic Resonance Imaging) studies. It can be used as a stand-alone SaMD or a post-processing application option for cleared UIH (Shanghai United Imaging Healthcare Co., Ltd.) MR Scanners.
The uMR 780 is a 3.0T superconducting magnetic resonance diagnostic device with a 65cm size patient bore. It consists of components such as magnet, RF power amplifier, RF coils, gradient power amplifier, gradient coils, patient table, spectrometer, computer, equipment cabinets, power distribution system, internal communication system, and vital signal module etc. The uMR 780 Magnetic Resonance Diagnostic Device is designed to conform to NEMA and DICOM standards.
The document describes the performance testing for the "DeepRecon" feature, an artificial intelligence (AI)-assisted image processing algorithm, of the uMR Omega with uWS-MR-MRS device.
Here's a breakdown of the requested information:
1. A table of acceptance criteria and the reported device performance
| Evaluation Item | Acceptance Criteria | Reported Device Performance (Test Result) | Results |
|---|---|---|---|
| Image SNR | DeepRecon images achieve higher SNR compared to the images without DeepRecon (NADR) | NADR: 209.41±1.08, DeepRecon: 302.48±0.78 | PASS |
| Image uniformity | Uniformity difference between DeepRecon images and NADR images under 5% | 0.15% | PASS |
| Image contrast | Intensity difference between DeepRecon images and NADR images under 5% | 0.9% | PASS |
| Structure Measurements | Measurements on NADR and DeepRecon images of same structures, measurement difference under 5% | 0% | PASS |
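For context, here is a minimal sketch of how ROI-based metrics like those in the table above might be computed. The ROI choices and images below are hypothetical; the document does not describe the actual test protocol:

```python
# Minimal sketch of ROI-based image metrics: SNR from signal and noise ROIs,
# and percent differences for uniformity/contrast comparisons.
import numpy as np

def roi_snr(img: np.ndarray, signal_roi, noise_roi) -> float:
    """SNR as mean signal in one ROI over noise standard deviation in another."""
    return float(img[signal_roi].mean() / img[noise_roi].std())

def pct_diff(a: float, b: float) -> float:
    """Percent difference of a relative to b (used for uniformity/contrast)."""
    return abs(a - b) / b * 100.0

# Hypothetical ROIs and synthetic images:
sig, bg = (slice(40, 80), slice(40, 80)), (slice(0, 16), slice(0, 16))
rng = np.random.default_rng(1)
nadr = 100.0 + rng.normal(0.0, 5.0, (128, 128))       # conventional recon
deeprecon = 100.0 + rng.normal(0.0, 3.0, (128, 128))  # lower-noise recon

print("SNR (NADR):      ", roi_snr(nadr, sig, bg))
print("SNR (DeepRecon): ", roi_snr(deeprecon, sig, bg))
print("Intensity diff %:", pct_diff(deeprecon[sig].mean(), nadr[sig].mean()))
```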
2. Sample size used for the test set and the data provenance
- Sample Size for Test Set: 77 US subjects.
- Data Provenance: The testing data was collected from various clinical sites in the US, ensuring diverse demographic distributions covering various genders, age groups, ethnicities, and BMI groups. The data was collected during separated time periods and on subjects different from the training data, making it completely independent and having no overlap with the training data.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Number of Experts: Not explicitly stated, but the document mentions "American Board of Radiologists certificated physicians" evaluated the DeepRecon images. This implies a group of such experts.
- Qualifications of Experts: American Board of Radiologists certificated physicians.
4. Adjudication method for the test set
- The document states that "All DeepRecon images were rated with equivalent or higher scores in terms of diagnosis quality" by the radiologists. This suggests a consensus or rating process, but the specific adjudication method (e.g., majority vote, sequential review) is not detailed.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs without AI assistance
- The document implies a human-in-the-loop evaluation as "DeepRecon images were evaluated by American Board of Radiologists certificated physicians," and they "verified that DeepRecon meets the requirements of clinical diagnosis." It also states "All DeepRecon images were rated with equivalent or higher scores in terms of diagnosis quality." However, this is not explicitly described as a formal MRMC comparative effectiveness study designed to quantify human reader improvement with vs. without AI assistance. The focus seems to be on the diagnostic quality of the DeepRecon images themselves. No specific effect size is provided for human reader improvement with AI assistance.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Yes, a standalone (algorithm only) performance evaluation was conducted based on objective metrics, namely Image SNR, Image Uniformity, Image Contrast, and Structure Measurements, as detailed in the acceptance criteria table above. The radiologist evaluation appears to be a subsequent step to confirm clinical utility.
7. The type of ground truth used
- For the objective performance metrics (SNR, uniformity, contrast, structure measurements), the ground truth for comparison appears to be the images "without DeepRecon (NADR)".
- For the expert evaluation, the ground truth is implicitly based on the expert consensus of the American Board of Radiologists certificated physicians regarding the diagnostic quality of the images.
- For the training data ground truth (see point 9), it was established using "multiple-averaged images with high-resolution and high SNR."
8. The sample size for the training set
- The training data for DeepRecon was collected from 264 volunteers.
9. How the ground truth for the training set was established
- The ground truth for the training dataset was established by collecting "multiple-averaged images with high-resolution and high SNR" from each subject. The input images for training were then generated by sequentially reducing the SNR and resolution of these high-quality ground-truth images. All data used for training underwent manual quality control.
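A minimal sketch of the paired-data scheme described above, in which a degraded input is synthesized from a high-SNR, high-resolution target. The degradation operators here (simple subsampling plus Gaussian noise) are assumptions, as the document does not specify how SNR and resolution were reduced:

```python
# Minimal sketch of synthesizing (input, target) training pairs by degrading
# a high-quality ground-truth image.
import numpy as np

def degrade(target: np.ndarray, downsample: int = 2,
            noise_sigma: float = 5.0, seed: int = 0) -> np.ndarray:
    """Return a lower-resolution, noisier version of a ground-truth image."""
    rng = np.random.default_rng(seed)
    low_res = target[::downsample, ::downsample]  # crude resolution reduction
    return low_res + rng.normal(0.0, noise_sigma, low_res.shape)

# Hypothetical training pair: (network input, network target)
target = np.random.default_rng(2).normal(300.0, 10.0, (256, 256))
pair = (degrade(target), target)
```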
(116 days)
HYPER AiR is an image processing function intended to be used by radiologists and nuclear medicine physicians to reduce noise and improve contrast of fluorodeoxyglucose (FDG) PET images.
HYPER AiR is a software-only device. HYPER AiR is an image reconstruction technique which incorporates pre-trained neural networks in the iterative reconstruction process to control image noise and contrast. It is intended to be implemented on previously cleared PET/CT devices uMI 550 (K193241) and uMI 780 (K172143). HYPER AiR serves as an alternative to the existing image reconstruction algorithms available on the predicate devices.
The provided text describes the 510(k) summary for the HYPER AiR device, a software-only image processing function for FDG PET images. Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the information provided:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are implicitly defined by the performance tests and clinical image evaluation described. The device's performance is reported in terms of improvement over the conventional OSEM (Ordered Subset Expectation Maximization) algorithm.
| Acceptance Criteria (Implicit) | Reported Device Performance |
|---|---|
| Non-Clinical (Bench Testing): | |
| Performance on noise reduction improvement | HYPER AiR can improve image contrast while suppressing background noise. |
| Performance on image contrast improvement | HYPER AiR can improve image contrast while suppressing background noise. |
| Performance on contrast to noise ratio improvement | Bench testing reported an improvement in contrast-to-noise ratio over OSEM. |
| Clinical Image Evaluation: | |
| Better image contrast compared to OSEM | HYPER AiR produces images with better image contrast than OSEM. |
| Lower image noise compared to OSEM | HYPER AiR produces images with lower image noise than OSEM. |
| Image quality sufficient for clinical diagnosis | The image quality was sufficient for clinical diagnosis. |
| Overall similar performance to predicate devices (for substantial equivalence) | Based on comparison and analysis, the proposed device has similar performance and equivalent safety and effectiveness to the predicate devices. The differences do not affect the indications for use, safety, or effectiveness. |
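As background for the contrast-to-noise-ratio criterion, here is a minimal sketch of one common CNR definition. The ROI-based formulation is an assumption, since the summary does not define the metric:

```python
# Minimal sketch of a contrast-to-noise-ratio (CNR) measurement between a
# lesion ROI and a background ROI in a reconstructed image.
import numpy as np

def cnr(img: np.ndarray, lesion_roi, background_roi) -> float:
    """CNR = (mean lesion - mean background) / background standard deviation."""
    return float((img[lesion_roi].mean() - img[background_roi].mean())
                 / img[background_roi].std())

# Hypothetical comparison on the same raw data reconstructed two ways:
#   cnr(hyper_air_img, lesion, bg) > cnr(osem_img, lesion, bg)
```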
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: The exact number of cases or images in the test set for the clinical image evaluation is not specified. It only states "The clinical image evaluation was performed by comparing HYPER AiR with OSEM."
- Data Provenance: The raw datasets used for evaluation were "obtained on UIH's uMI 780 and uMI 550," which are devices from United Imaging Healthcare. The country of origin of this data is not explicitly stated, but given the sponsor's location (Shanghai, China), it can be inferred that the data likely originated from China. The data was retrospective, as it involved applying two different reconstruction algorithms (HYPER AiR and OSEM) to identical raw datasets that had already been obtained.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- Number of Experts: "Each image was read by three board-certified nuclear medicine physicians."
- Qualifications of Experts: "board-certified nuclear medicine physicians." No specific years of experience are mentioned.
4. Adjudication Method for the Test Set
The adjudication method is not explicitly stated. It says "Each image was read by three board-certified nuclear medicine physicians who provided an assessment of image contrast, image noise and image quality." It does not describe how discrepancies among the three readers were resolved or if a consensus mechanism was used.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, and Effect Size
- MRMC Study: A comparative evaluation was done with human readers comparing HYPER AiR reconstructed images to OSEM reconstructed images. This is akin to an MRMC study if the multiple readers evaluated the same cases under both conditions.
- Effect Size: The document states that "HYPER AiR produces images with better image contrast and lower image noise than OSEM while the image quality was sufficient for clinical diagnosis." However, a quantitative effect size (e.g., statistical significance of improvement, specific metrics like AUC difference, or reader confidence scores) is not provided in this summary. It's a qualitative statement of improvement. The study focuses on the standalone performance of the image processing rather than human readers improving with AI assistance vs without, although improved image quality implies potential for human improvement.
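For reference, when an MRMC study is performed, the effect size is often expressed as the reader-averaged difference in AUC between aided and unaided reads. A minimal, hypothetical sketch of that computation (no such analysis appears in this summary):

```python
# Hypothetical sketch of an MRMC effect size as the reader-averaged AUC
# difference between aided and unaided reading conditions.
import numpy as np
from sklearn.metrics import roc_auc_score

def reader_avg_auc_delta(truth, unaided_scores, aided_scores):
    """truth: case labels; *_scores: one score vector per reader per condition."""
    deltas = [roc_auc_score(truth, aided) - roc_auc_score(truth, unaided)
              for unaided, aided in zip(unaided_scores, aided_scores)]
    return float(np.mean(deltas))
```

A formal MRMC analysis would additionally model reader and case variability (e.g., Obuchowski-Rockette or Dorfman-Berbaum-Metz methods) rather than reporting a simple mean difference.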
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done
Yes, standalone performance was evaluated through "Engineering bench testing" where the "evaluation and analysis used the identical raw datasets obtained on UIH's uMI 780 and uMI 550, and then applies both HYPER AiR and OSEM to do image reconstruction. The resultant images were then compared for: Performance on noise reduction, Performance on image contrast, Performance on contrast to noise ratio." The aim was to show HYPER AiR's intrinsic ability to improve image characteristics compared to OSEM.
7. The Type of Ground Truth Used
The ground truth used for the evaluation was expert consensus/reader assessment by three board-certified nuclear medicine physicians for the clinical image evaluation. For the non-clinical bench testing, the "ground truth" was essentially the quantitative improvement in objective image metrics (noise reduction, contrast, CNR) based on the algorithm's output compared to OSEM. This is not a "true" clinical ground truth like pathology, but rather a technical performance measure.
8. The Sample Size for the Training Set
The document does not specify the sample size used for the training set of the neural networks integrated into HYPER AiR. It only mentions "pre-trained neural networks."
9. How the Ground Truth for the Training Set was Established
The document does not provide information on how the ground truth for the training set was established. It merely states that HYPER AiR "incorporates pre-trained neural networks in the iteration reconstruction process to control image noise and contrast."
(57 days)
HYPER Focus can be used to correct respiratory motion in PET images. Relative to non-corrected images, HYPER Focus can reduce respiratory motion effects and thus improve the measurement accuracy of SUV and lesion volume.
HYPER Focus is a software-only device. It is intended to be implemented on previously cleared PET/CT devices uMI 550 (K193241) and uMI 780 (K172143). HYPER Focus serves as an additional function for the uMI 550 and uMI 780 to carry out respiratory correction. It uses a respiratory motion correction technique, non-rigid image registration, similar to that of the predicate device.
The provided text describes the regulatory clearance of a medical device called "HYPER Focus" (K210418), a software-only device designed to correct respiratory motion in PET images.
Based on the information provided, here's a breakdown of the acceptance criteria and the study that proves the device meets them:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly state a table of quantifiable acceptance criteria with corresponding performance metrics like a typical clinical study report would. Instead, the acceptance criteria are implicitly tied to the device's ability to achieve "substantial equivalence" to a predicate device (GE Q.Freeze software, K113408) in terms of its ability to reduce respiratory motion effects and improve the accuracy of SUV and lesion volume.
The reported device performance is described qualitatively as:
| Acceptance Criterion (Implicit) | Reported Device Performance/Conclusion |
|---|---|
| Reduce respiratory motion effects in PET images. | "HYPER Focus can reduce respiratory motion effects..." |
| Improve the measurement accuracy of SUV (Standardized Uptake Value). | "...and thus improve the measurement accuracy of SUV..." |
| Improve the measurement accuracy of lesion volume. | "...and lesion volume." |
| Technological characteristics equivalent to predicate device's respiratory motion correction function. | "HYPER Focus has the equivalent technological characteristic to the function of respiratory motion correction of predicate device." "Both devices are based on non-rigid image registration technique." "HYPER Focus also utilizes 100% of the acquired data counts, similar to the predicate device." |
| No new restrictions on use compared to predicate. | "...and does not introduce any new restrictions on use." |
| As safe and effective as the predicate. | "HYPER Focus is as safe and effective as the predicate." "HYPER Focus is substantially equivalent as safe as the legally marketed predicate device." "Design verification, along with bench testing demonstrates that HYPER Focus is substantially equivalent as effective as the legally marketed predicate device." |
| Software documentation and cybersecurity conformance. | "Software documentation for a Moderate Level of Concern software per FDA Guidance Document... is included as a part of this submission." "Cybersecurity information in accordance with guidance document... is included in this submission." |
| Risk analysis completed and risk control implemented. | "The risk analysis was completed and risk control was implemented to mitigate identified hazards." |
| All software specifications met acceptance criteria. | "The testing results show that all the software specifications have met the acceptance criteria." |
| Verification and validation testing acceptable to support substantial equivalence. | "Verification and validation testing of the proposed device was found acceptable to support the claim of substantial equivalence." |
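For context, the standardized uptake value cited in the criteria above is conventionally defined by the body-weight normalization below. This is the standard textbook formula, provided for orientation; it is not quoted from the submission:

```latex
\mathrm{SUV} = \frac{c_{\text{tissue}}\ \left[\mathrm{kBq/mL}\right]}
                    {\text{injected activity}\ \left[\mathrm{kBq}\right] \,/\, \text{body weight}\ \left[\mathrm{g}\right]}
```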
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Sample Size: The document does not specify a numerical sample size for the test set. It mentions "identical raw datasets obtained from UIH's uMI 780 (K172143) and uMI 550 (K193241)." This suggests that existing datasets were used, but the quantity of these datasets or individual patient cases is not provided.
- Data Provenance: The data was obtained from UIH's uMI 780 and uMI 550 devices. Given that the company, Shanghai United Imaging Healthcare Co., Ltd., is based in China, it is highly probable that the data originated from China. The document does not explicitly state whether the data was retrospective or prospective, but given they are "identical raw datasets obtained" and "existing data" for bench testing, it strongly implies retrospective use of previously acquired patient data.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
This information is not provided in the document. The document describes "engineering bench testing" and "performance verification" using "identical raw datasets," which suggests a technical analysis rather than an expert-read clinical study to establish ground truth for the test set.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
This information is not provided in the document. The study described appears to be a technical bench test comparing reconstructed images with and without motion correction, rather than a reader study requiring adjudication.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs without AI assistance
No, an MRMC comparative effectiveness study involving human readers is not described in this document. The study focuses on the device's quantitative performance (SUV and lesion volume accuracy) and its ability to reduce motion effects in comparison to non-corrected images, and on demonstrating substantial equivalence to a predicate device.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
Yes, a standalone performance evaluation was done. The "Performance Verification" section states: "Engineering bench testing was performed to support substantial equivalence and product performance claims. The evaluation and analysis used the identical raw datasets obtained from UIH's uMI 780 (K172143) and uMI 550 (K193241), and then respectively performed image reconstruction with/without HYPER Focus." This indicates that the algorithm's performance was assessed independently of human interpretation.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
The document primarily refers to "bench testing" and "analysis" of quantitative metrics like SUV (Standardized Uptake Value) and sphere/lesion volume accuracy. The "ground truth" for this type of test is typically based on:
- Known physical properties of phantoms: For sphere volume and potentially SUV accuracy, phantom studies with known dimensions and activity concentrations are commonly used. While not explicitly stated, "bench test" often implies phantom studies.
- Comparison to "ideal" or "reference" motion-corrected images: The document states a comparison "in comparison with no motion correction." This implies an assessment against a baseline reference, where the ground truth is the improved accuracy obtained by the algorithm. For motion correction, perfect motion-free images are the ideal ground truth, which is often approximated or modeled.
- The document implies that the "ground truth" for proving efficacy is the demonstrated improvement in SUV and lesion volume accuracy and reduction of motion effects when HYPER Focus is applied, compared to images without motion correction.
The document does not suggest the use of expert consensus, pathology, or outcomes data as a ground truth for this particular submission, which is focused on validating the technical performance of motion correction software for PET images in the context of substantial equivalence.
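To make the phantom-based notion of ground truth concrete, here is a minimal sketch of a volume-recovery check against a sphere of known physical volume. The fixed-percentage-threshold segmentation is an assumed convention, not a method stated in the document:

```python
# Minimal sketch of a phantom volume-recovery check: segment a sphere with a
# fixed-percentage intensity threshold and compare against the known volume.
import numpy as np

def measured_volume_ml(img: np.ndarray, voxel_ml: float,
                       threshold_frac: float = 0.5) -> float:
    """Volume of voxels at or above threshold_frac * max intensity, in mL."""
    mask = img >= threshold_frac * img.max()
    return float(mask.sum() * voxel_ml)

def volume_error_pct(measured_ml: float, true_ml: float) -> float:
    """Signed percent error of the measured volume against the known volume."""
    return (measured_ml - true_ml) / true_ml * 100.0

# Hypothetical check: a known 11.5 mL phantom sphere, measured on images
# reconstructed with and without motion correction; a smaller absolute error
# after correction would support the accuracy claim.
```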
8. The sample size for the training set
The document does not provide any information about the sample size used for the training set of the HYPER Focus algorithm.
9. How the ground truth for the training set was established
The document does not provide any information about how the ground truth for the training set was established. It focuses solely on the performance verification (testing) of the final algorithm.
(27 days)
The uMR 570 system is indicated for use as a magnetic resonance diagnostic device (MRDD) that produces sagittal, transverse, coronal, and oblique cross sectional images, and that display internal anatomical structure and/or function of the head, body and extremities.
These images and the physical parameters derived from the images when interpreted by a trained physician yield information that may assist the diagnosis. Contrast agents may be used depending on the region of the scan.
The uMR 570 is a 1.5T superconducting magnetic resonance diagnostic device with a 70cm size patient bore. It consists of components such as magnet, RF power amplifier, RF coils, gradient power amplifier, gradient coils, patient table, spectrometer, computer, equipment cabinets, power distribution system, internal communication system, and vital signal module etc. The uMR 570 Magnetic Resonance Diagnostic Device is designed to conform to NEMA and DICOM standards.
The predicate device is uMR 570 (K200024), and the modification to the predicate device in this submission is the addition of a new pulse sequence, gre_senc_spiral, which also exists in the reference devices Philips Ingenia, Ingenia CX, Ingenia Elition, and Ingenia Ambition MR Systems (K183063).
The modification performed on the predicate uMR 570 (K200024) in this submission is due to the addition of the new pulse sequence, gre_senc_spiral. The modification, which does not affect the intended use nor alter the fundamental scientific technology of the device, is as follows:
Introduce gre_senc_spiral as a new pulse sequence
The provided text is a 510(k) Summary for the uMR 570 Magnetic Resonance Diagnostic Device. It focuses on demonstrating substantial equivalence to a predicate device, primarily due to the addition of a new pulse sequence called "gre_senc_spiral."
Here's an analysis of the acceptance criteria and study information based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance:
The document does not explicitly state quantitative acceptance criteria or detailed device performance metrics for the new pulse sequence. Instead, it focuses on qualitative comparisons and safety compliance. The claim is that the device "performs as expected" and is "substantially equivalent" to the predicate.
| Acceptance Criteria Category | Acceptance Criteria (Not explicitly stated quantitatively for the new feature, but implied by "substantial equivalence" and safety compliance) | Reported Device Performance (as stated or implied) |
|---|---|---|
| For new pulse sequence (gre_senc_spiral): | Intended use remains the same as predicate. | The modification (gre_senc_spiral) "does not affect the intended use nor alters the fundamental scientific technology of the device." Images are stored in DICOM format and suitable for 3rd party strain analysis. |
| | Does not introduce new potential hazards or safety risks. | "It does not introduce new potential hazards or safety risks." Tested for Electrical Safety (Comply with ES60601-1), EMC (Comply with IEC60601-1-2), Max SAR (Comply with IEC 60601-2-33), Max dB/dt (Comply with IEC 60601-2-33), Biocompatibility (ISO 10993-5, ISO 10993-10). |
| Overall Device: | Maintain technological characteristics similar to the predicate. | "has the same technological characteristics" as the predicate uMR 570 (K200024) in terms of magnet, gradient, RF system, coils, patient table, and safety, except for the new pulse sequence. |
| | Performs as expected. | "The test results demonstrated that the device performs as expected." |
2. Sample size used for the test set and the data provenance:
- Test Set Sample Size: Not explicitly stated. The "Performance Evaluation Report SENC A" is mentioned but no details about its contents or the number of cases/patients used are provided.
- Data Provenance: Not explicitly stated. Given it's a performance evaluation for a 510(k) submission, it's likely internal company data, but the country of origin and whether it was retrospective or prospective are not mentioned.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- This information is not provided in the document. The statement mentions "images... when interpreted by a trained physician," implying physician involvement in diagnosis, but not specifically for establishing ground truth of a test set for the device's performance evaluation.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- This information is not provided in the document.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs without AI assistance:
- No MRMC comparative effectiveness study was done. The device itself is a Magnetic Resonance Diagnostic Device (MRDD), an imaging system, not an AI-based interpretation tool. The new pulse sequence (gre_senc_spiral) is for acquiring SENC cardiac images, which are then "processed by 3rd party software for strain analysis and report." This submission is for the imaging device and the new acquisition sequence, not for the AI analysis of these images.
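For context, the strain that such third-party tools report is conventionally the fractional length change of a myocardial segment. This standard definition is provided for orientation only and is not taken from the submission:

```latex
\varepsilon = \frac{L - L_{0}}{L_{0}}
```

where $L_{0}$ is the segment length at a reference phase (typically end-diastole) and $L$ is its length at a later cardiac phase.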
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:
- Not applicable / Not done for this device as an AI algorithm. This submission is for an MRI system and a new pulse sequence. The "3rd party software for strain analysis" is separate and not part of this 510(k) submission.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- This information is not provided for the performance evaluation of the gre_senc_spiral sequence. The purpose of this 510(k) is to demonstrate that the new sequence produces images that can be interpreted like those from an equivalent predicate device, and that it is safe; it is not about evaluating diagnostic accuracy against a specific ground truth.
8. The sample size for the training set:
- Not applicable. This submission is for a hardware (MRI system) and a new pulse sequence (software component for acquisition). There is no mention of an AI algorithm being trained by the manufacturer for this device, nor a training set for the performance testing mentioned.
9. How the ground truth for the training set was established:
- Not applicable. As above, there is no mention of a training set or AI algorithm training within this submission.