510(k) Data Aggregation
(128 days)
The Photonova Spectra, Photonova Spectra Select system is a silicon-based spectral photon counting detector X-ray Computed Tomography scanner.
The system is intended to produce cross-sectional images of the body by computer reconstruction of x-ray transmission projection data from the same axial plane taken at different angles.
The system acquires multi-energy data in every scan and natively generates high resolution monochromatic images and material density maps to facilitate visualizing and analyzing information about anatomical and pathological structures.
The system is indicated for head, whole body, cardiac, and vascular CT applications. The system is indicated for patients of all ages. The images can be post-processed to produce additional imaging planes or analysis results.
The system is indicated for lung cancer screening for patients meeting the established inclusion criteria of programs/protocols that have been published by either a governmental body or professional medical society.*
*Please refer to clinical literature, including the results of the National Lung Screening Trial (N Engl J Med 2011;365:395-409) and subsequent literature, for further information.
Photonova Spectra is the next iteration of the predicate, the Revolution Apex platform (K213715), introducing a new Deep Silicon (dSi) photon counting detector for CT imaging. Photonova Spectra aims to realize an improvement in both spatial resolution and spectral imaging performance relative to traditional Energy Integrating Detector (EID) systems for diagnostic CT. With photon-counting detectors that can better discriminate energies, spectral CT imaging can natively provide valuable information about tissue composition and material density without the need for active filtration or kVp modulation by performing material decomposition directly from native multi-energy data.
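The material decomposition described above can be illustrated with a toy linearized model. The sketch below is not the device's algorithm: the sensitivity matrix, bin count, and noise level are invented for illustration, and real photon-counting reconstruction operates on raw count statistics rather than a simple linear fit.

```python
import numpy as np

# Illustrative two-basis material decomposition from multi-energy bin data.
# In a linearized model, the expected log-attenuation in each energy bin is a
# linear combination of basis-material path lengths (e.g., water and iodine).
# The sensitivity matrix A below is made up for illustration only.
A = np.array([
    [0.20, 1.80],   # bin 1: water, iodine effective attenuation (cm^-1)
    [0.19, 1.10],   # bin 2
    [0.18, 0.60],   # bin 3
    [0.17, 0.35],   # bin 4
])  # 4 energy bins x 2 basis materials

def decompose(log_atten, A):
    """Least-squares estimate of basis-material path lengths from
    per-bin log-attenuation measurements."""
    t, *_ = np.linalg.lstsq(A, log_atten, rcond=None)
    return t

# Simulate a ray through 20 cm of water and 0.05 cm of iodine-equivalent material
t_true = np.array([20.0, 0.05])
measured = A @ t_true + np.random.default_rng(0).normal(0, 1e-3, size=4)
t_est = decompose(measured, A)
print(t_est)  # close to [20.0, 0.05]
```

With more energy bins than basis materials, the least-squares step is overdetermined, which is one reason binned photon-counting data lends itself to native material decomposition.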
The Photonova Spectra system is an ultra-premium multi-slice CT scanning system comprising a gantry, a detector, an x-ray tube, a power distribution unit (PDU), a table, a system cabinet, a scanner desktop computer and user interface, and associated accessories. It is designed as a volumetric CT scanner to provide advanced imaging capability for a range of clinical applications.
Compared to the predicate Revolution Apex, the key differences of the Photonova Spectra System consist of a Deep Silicon (dSi) X-ray detector capable of directly converting X-ray photons to electrical signals, advanced detector data acquisition hardware for managing and processing of large volumes of data, advanced computer hardware and an enhanced image chain for generating High Definition (HD) Spectral and Ultra High Definition (UHD) image series.
The Photonova Spectra image chain is developed to calibrate, pre-process, reconstruct, and post-process images for use in medical imaging applications. Customized for photon counting detection physics and capability, Photonova Spectra does not require the user to choose between single kV and dual energy acquisition modes. With Photonova Spectra, all acquisitions are spectral with 8 energy bins over the full high-resolution detector, and the data are stored in real time on the rotating side as the acquisition completes over the full scan sequence.
The system will be offered in two model configurations, with either an 80 mm or a 40 mm dSi detector, commercialized as Photonova Spectra and Photonova Spectra Select, respectively. The detector size is the key differentiator, but all core technology and functionality are identical.
The provided FDA 510(k) clearance letter and summary for the Photonova Spectra CT System do not contain detailed information about specific acceptance criteria for device performance, nor the full study design typically expected to support such an analysis. The document focuses on regulatory compliance, technological characteristics compared to a predicate, and a general overview of verification and validation testing.
However, based on the information provided, we can infer some aspects and present them to the best of our ability, while noting the missing details.
Missing Information:
- Specific quantitative acceptance criteria: The document describes the types of tests performed (e.g., image quality metrics, LCD studies) but does not provide numerical thresholds that the device had to meet.
- Specific quantitative reported device performance: While it states "substantial equivalence of image quality was demonstrated," it doesn't provide the actual measured values for metrics like CT number accuracy, resolution, or noise texture.
- Detailed sample size for the test set: It mentions a reader study of sample clinical cases covering a wide range of clinical scenarios, but no specific number of cases.
- Data provenance for the test set: The document does not specify the country of origin of the data for the reader study's test set or whether it was retrospective or prospective.
- Detailed qualifications of experts for ground truth: It states "US board-certified Radiologists" but doesn't specify years of experience or subspecialty.
- Adjudication method for the test set.
- Effect size for MRMC study: It implies a reader study was done to compare DL levels, but doesn't quantify improvement with AI assistance.
- Sample size for the training set.
- How ground truth for the training set was established.
Acceptance Criteria and Study for Photonova Spectra CT System
Given the limitations of the provided document, the following is constructed based on the available information and educated inferences regarding CT system clearances.
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criterion (Inferred from regulatory context) | Reported Device Performance (Inferred from document) |
|---|---|
| Image Quality (various metrics, e.g., low contrast detectability, spatial resolution, noise power spectrum, CT number accuracy, water accuracy, mean CT number over spectral tasks) | "Substantial equivalence of image quality was demonstrated for the system's DL baseline level of denoising with FBP-based reconstruction." "Evaluated using standard IQ, QA, ACR, and anthropomorphic pediatric phantoms." |
| Diagnostic Interpretability | "No reader identified any added, removed, or reduced diagnostic information in any DLIR setting, and all pathologies were consistently visualized across all DL reconstructions." |
| Safety and Effectiveness | "Photonova Spectra is safe and effective for its intended use." (Conclusion of reader study) "No new questions of safety or effectiveness, hazards, unexpected results, or adverse effects stemming from the changes to the predicate." |
| Compliance with Standards | "In compliance with AAMI/ANSI ES 60601-1 and IEC60601-1 Ed. 3.2 and its associated collateral and particular standards, 21 CFR Subchapter J, and NEMA standards XR 25, and XR 28." |
| Low Contrast Detectability (LCD) | "LCD studies were conducted incorporating a model observer approach." (Outcome implies acceptable performance) |
| Dose Performance | "Dose performance evaluation using well established metrics and methods." (Outcome implies acceptable performance) |
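The "model observer approach" cited for the LCD studies is not specified further. One common family is matched-filter observers; as a hedged illustration, the sketch below implements a simple non-prewhitening observer on synthetic images (the disk size, contrast, and white-noise model are our assumptions, not device data).

```python
import numpy as np

rng = np.random.default_rng(1)

def npw_dprime(present, absent, template):
    """Non-prewhitening matched-filter observer: apply the signal template
    to each image and compute the detectability index d' from the two
    distributions of test statistics."""
    t_p = np.tensordot(present, template, axes=2)   # one scalar per image
    t_a = np.tensordot(absent, template, axes=2)
    return (t_p.mean() - t_a.mean()) / np.sqrt(0.5 * (t_p.var() + t_a.var()))

# Synthetic low-contrast disk in white noise (illustrative parameters)
n, radius, contrast, sigma = 64, 6, 0.5, 1.0
y, x = np.mgrid[:n, :n]
disk = contrast * (((x - n // 2) ** 2 + (y - n // 2) ** 2) <= radius ** 2)

absent = rng.normal(0, sigma, (200, n, n))
present = absent + disk                 # signal-known-exactly paradigm
dprime = npw_dprime(present, absent, disk)
print(round(dprime, 2))
```

In practice, LCD acceptance is often expressed as a minimum d′ (or equivalent detectable contrast) at a given dose; the summary does not disclose the thresholds used here.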
2. Sample Size Used for the Test Set and Data Provenance
The document describes "a reader study of sample clinical [cases] covering a wide range of clinical scenarios, including Neuro, Body, and Cardiac/Chest." It also mentions "challenging cases from the above-mentioned reader study."
- Sample Size: Not explicitly stated (e.g., number of cases or images).
- Data Provenance: Not explicitly stated (e.g., country of origin, retrospective or prospective).
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
The document mentions: "Images were evaluated by US board-certified Radiologists."
- Number of Experts: Not explicitly stated.
- Qualifications of Experts: US board-certified Radiologists. Specific years of experience or subspecialty (e.g., Neuroradiologist, Cardiothoracic Radiologist) are not provided.
4. Adjudication Method for the Test Set
The document does not explicitly state the adjudication method used for the reader study.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Its Effect Size
A reader study was conducted to compare various levels of user-prescribed denoising. It implies a comparative evaluation between the proposed DL reconstruction and FBP-based reconstruction.
- MRMC Study: Yes, a comparative clinical evaluation of challenging cases was performed by "US board certified Radiologists."
- Effect Size: Not quantified. The qualitative finding was: "No reader identified any added, removed, or reduced diagnostic information in any DLIR setting, and all pathologies were consistently visualized across all DL reconstructions." This suggests that the diagnostic interpretability was maintained, implying no negative effect and potential maintenance or improvement in visualization where denoising was effective, though specific metrics of improvement are not provided.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
Yes, extensive standalone performance testing was done, referred to as "Image Performance Testing (Verification)" and "Summary of Non-Clinical Testing."
- This included "evaluation of a comprehensive set of image quality metrics" and "acquisitions at varying dose levels and phantom sizes."
- Metrics like "CT number, water accuracy, mean CT number over a range of spectral tasks, in-plane resolution, cross-plane resolution and noise texture (as measured by the noise power spectrum)" were assessed.
- "Low contrast detectability (LCD) studies were conducted incorporating a model observer approach."
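The noise power spectrum mentioned above has a standard estimator. As a sketch (pixel size, ROI count, and the white-noise model are illustrative, not device values), the 2D NPS can be computed from an ensemble of noise-only ROIs:

```python
import numpy as np

def nps_2d(noise_rois, pixel_mm):
    """Estimate the 2D noise power spectrum (NPS) from an ensemble of
    noise-only ROIs: NPS(f) = (dx*dy / (Nx*Ny)) * <|DFT(roi)|^2>."""
    rois = noise_rois - noise_rois.mean(axis=(1, 2), keepdims=True)  # detrend
    n = rois.shape[1]
    spectra = np.abs(np.fft.fft2(rois)) ** 2   # FFT over the last two axes
    return (pixel_mm ** 2 / n ** 2) * spectra.mean(axis=0)

rng = np.random.default_rng(2)
sigma, n, dx = 10.0, 64, 0.5                   # HU, pixels, mm (illustrative)
rois = rng.normal(0, sigma, (100, n, n))
nps = nps_2d(rois, dx)

# Parseval check: integrating the NPS over frequency recovers the noise variance
df = 1.0 / (n * dx)
var_from_nps = nps.sum() * df ** 2
print(var_from_nps)  # approximately sigma^2 = 100
```

Real CT noise is spatially correlated, so the measured NPS shape (not just its integral) is what characterizes "noise texture" as referenced in the summary.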
7. The Type of Ground Truth Used
Based on the description of the studies:
- For standalone (non-clinical) testing: Phantoms (standard IQ, QA, ACR, anthropomorphic pediatric phantoms) and model observer approaches for objective metrics.
- For clinical (reader) testing: Expert consensus/interpretation by US board-certified Radiologists was used to determine diagnostic utility and whether pathologies were consistently visualized across different reconstructions.
8. The Sample Size for the Training Set
The document states that the "proposed TrueFidelity DL for PCCT is intended for routine clinical use and based on the same framework and training methodology as the reference devices (DLIR and DLIR-GSI)." However, the specific sample size for the training set (e.g., number of images, patients) for the Photonova Spectra's TrueFidelity DL for PCCT is not provided.
9. How the Ground Truth for the Training Set Was Established
The document does not explicitly state how the ground truth for the training set was established for the TrueFidelity DL for PCCT, beyond mentioning it uses the "same framework and training methodology" as previously cleared DLIR products. Typically, for deep learning reconstructions in CT, the "ground truth" during training refers to high-quality, often low-noise or high-dose, reference images from which the algorithm learns to denoise or reconstruct lower-quality/lower-dose inputs. These reference images are usually generated from the CT scanner itself (e.g., by repeating scans at very high doses or using iterative reconstruction techniques to establish a cleaner image for comparison). Specific details are not provided.
(72 days)
The SIGNA™ Bolt system is a whole body magnetic resonance scanner designed to support high resolution, high signal-to-noise ratio, and short scan times. It is indicated for use as a diagnostic imaging device to produce axial, sagittal, coronal, and oblique images, spectroscopic images, parametric maps, and/or spectra, dynamic images of the structures and/or functions of the entire body, including, but not limited to, head, neck, TMJ, spine, breast, heart, abdomen, pelvis, joints, prostate, blood vessels, and musculoskeletal regions of the body. Depending on the region of interest being imaged, contrast agents may be used.
The images produced by the SIGNA™ Bolt system reflect the spatial distribution or molecular environment of nuclei exhibiting magnetic resonance. These images and/or spectra when interpreted by a trained physician yield information that may assist in diagnosis.
SIGNA™ Bolt is a whole body magnetic resonance scanner designed to support high resolution, high signal-to-noise ratio, and short scan times, and is designed for improved patient comfort and workflow. The system features a 3.0T superconducting magnet with a 70 cm bore size and can image in the sagittal, coronal, axial, oblique, and double oblique planes, using various pulse sequences, imaging techniques and reconstruction algorithms. SIGNA™ Bolt is designed to conform to NEMA DICOM standards.
The SIGNA™ Bolt system will be offered as two commercial configurations with the following features:
- Magnet: 3.0T superconducting magnet with a wide (70 cm) bore size and active shielding
- Maximum Gradient Strength: 80 mT/m (SuperXG Gradient), 65 mT/m (SuperXF Gradient)
- Maximum Slew Rate: 200 T/m/s (SuperXG Gradient and SuperXF Gradient)
- RF Transmit: A liquid cooled In-Scan-Room RF transmit architecture with a peak power capability of 36 kW and 3.0T Platform Body Coil
- RF Receive Chain: 162 Ch available (SuperXG Gradient), 130 Ch available (SuperXF Gradient)
- Patient Table: Detachable SIGNA One Patient Table with embedded 3.0T AIR PA XL coil and up to four 32-channel high density auto-coil sensing connection ports
- Power Rating: 113 kVA (SuperXG Gradient), 90 kVA (SuperXF Gradient)
- Software: Software platform featuring various productivity enhancement features, designed to improve workflow and reduce scan time
- AIRx (previously cleared in K183231) – AI-based automated slice prescription tool now extended with new deep learning models for spine and prostate imaging
- SIGNA One Camera – Real-time AI-enabled image guidance that assists with automated patient positioning
- Gating Options: Wired, wireless, and contactless physiological gating options
This document outlines the acceptance criteria and supporting studies for the SIGNA™ Bolt device, based on the provided FDA 510(k) clearance letter.
Key Features and AI/ML Components of SIGNA™ Bolt:
The SIGNA™ Bolt system includes several AI/Machine Learning components:
- AIRx: An AI-based automated slice prescription tool, previously cleared for brain and knee imaging (K183231), now extended with new deep learning models for spine and prostate imaging.
- SIGNA One Camera: Real-time AI-enabled image guidance that assists with automated patient positioning.
- Contactless Gating: This feature leverages underlying physiological signal detection that might involve advanced signal processing or AI techniques, though the document primarily describes its functional outcome.
Acceptance Criteria and Reported Device Performance
The following table summarizes the acceptance criteria and reported performance for the AI/ML components of the SIGNA™ Bolt device:
| Feature/Component | Acceptance Criteria | Reported Device Performance |
|---|---|---|
| SIGNA One Camera | Landmark Inference Accuracy: 90% successful detection of camera-predicted anatomical landmarks compared to ground truth annotations. | Landmark Inference Accuracy: Achieved up to 99% successful detection across all evaluated anatomical regions. |
| | Landmark Acceptance (with obstructions): 95% success rate. | Landmark Acceptance (with obstructions): Achieved 97% success rate. |
| AIRx Spine | All deep learning models met their predefined acceptance criteria (specific criteria not detailed, but implied to be related to accuracy, variability reduction, and successful adaptation to spinal curvatures and complex scan setups). | Model Performance: All models met their predefined acceptance criteria. |
| | Reduced scan prescription times and minimized inter-operator variability compared to manual workflows. | Demonstrated reduced scan prescription times and minimized inter-operator variability (confirmed by SSIM analysis and visual comparisons). Successfully adapted prescriptions to patient-specific spinal curvatures and automated Pars Interarticularis and Cervical Foramina scans. |
| AIRx Prostate | All deep learning models met predefined acceptance criteria (specific criteria not detailed, but implied to be related to accuracy and robustness to variations in anatomy, pathology, and implants). | Model Performance: All models met predefined acceptance criteria, confirming robustness to variations in anatomy, pathology, and presence of implants. |
| Contactless Gating | Accurately detecting and displaying respiratory and peripheral cardiac waveforms without physical accessories. Supporting use of these waveforms for triggering MR acquisitions across multiple anatomical regions. | Verified and validated to accurately detect and display respiratory and peripheral cardiac waveforms without physical accessories. Supports use of these waveforms for triggering MR acquisitions across multiple anatomical regions (meeting performance specifications). |
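The summary does not define what counts as "successful" landmark detection. A plausible formulation, shown purely as an assumption, scores a prediction successful when it lies within a distance tolerance of the ground-truth annotation:

```python
import numpy as np

def landmark_success_rate(pred_mm, gt_mm, tol_mm=10.0):
    """Fraction of predicted landmark coordinates lying within tol_mm of
    the ground-truth annotation. The tolerance value is an assumption,
    not taken from the 510(k) summary."""
    dist = np.linalg.norm(pred_mm - gt_mm, axis=1)
    return float((dist <= tol_mm).mean())

# Illustrative data: 3D landmark coordinates in MR system space (mm)
rng = np.random.default_rng(3)
gt = rng.uniform(-200, 200, (50, 3))
pred = gt + rng.normal(0, 3.0, (50, 3))   # ~3 mm per-axis prediction error
rate = landmark_success_rate(pred, gt)
print(rate)
```

Under this reading, the 90%/99% figures in the table would be this success rate computed over the evaluated anatomical regions.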
Study Details for AI/ML Components:
1. SIGNA One Camera
- Sample Size for Test Set: Data collected from 80 volunteers.
- Data Provenance: US and China (to ensure diverse datasets).
- Number of Experts & Qualifications for Ground Truth: Not explicitly stated for this component. Ground truth is described as "MR system coordinates of the camera-predicted anatomical landmarks against ground truth annotations," suggesting a technical or measurement-based ground truth rather than expert reads.
- Adjudication Method: Not specified.
- MRMC Comparative Effectiveness Study: Yes, a "time on task study" was conducted with 11 MR Scan Operators comparing the AI-powered workflow to conventional laser landmarking.
- Effect Size: The camera workflow "consistently enabled faster setup times for landmarking." A specific quantitative improvement (e.g., percent reduction in time) is not provided in the text.
- Standalone Performance: Yes, "Accuracy was evaluated by comparing the MR system coordinates of the camera-predicted anatomical landmarks against ground truth annotations." This indicates an algorithm-only evaluation.
- Type of Ground Truth: MR system coordinates.
- Sample Size for Training Set: Not explicitly stated, but the test dataset was "entirely separate from the training and validation datasets."
- Ground Truth for Training Set: Not specified, but likely established in a similar manner to the test set (MR system coordinates or similar technical measurements).
2. AIRx Spine
- Sample Size for Test Set: 376 subjects.
- Data Provenance: Multiple clinical sites and internal GE HealthCare sites.
- Number of Experts & Qualifications for Ground Truth: Not explicitly stated. Ground truth is implied to be established for "accurate multi-slice, multi-angle prescriptions."
- Adjudication Method: Not specified.
- MRMC Comparative Effectiveness Study: Yes, "Comparative studies demonstrated that AIRx Spine reduced scan prescription times compared to manual workflows and minimized inter-operator variability."
- Effect Size: "Reduced scan prescription times" and "minimized inter-operator variability" (confirmed by Structural Similarity Index (SSIM) analysis and visual comparisons). Specific quantitative improvement is not provided.
- Standalone Performance: Yes, "Performance testing was conducted on the AIRx Spine deep learning models," indicating an algorithm-only evaluation.
- Type of Ground Truth: Not explicitly stated but implied to be based on accurate anatomical prescriptions suitable for diagnostic imaging. SSIM analysis and visual comparisons suggest a comparison against an ideal or expert-defined prescription.
- Sample Size for Training Set: Not explicitly stated, but the test dataset was "held separate from training and validation data."
- Ground Truth for Training Set: Not specified, but likely established to enable the model to learn "patient-specific spinal curvatures" and "accurate multi-slice, multi-angle prescriptions."
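The SSIM analysis cited for AIRx Spine is not detailed in the summary. For reference, a global (single-window) SSIM between two images can be sketched as below, using the standard constants; the device may well use a windowed variant.

```python
import numpy as np

def ssim_global(x, y, data_range=1.0, k1=0.01, k2=0.03):
    """Global structural similarity index (single-window variant of
    Wang et al.'s SSIM) between two same-sized images."""
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(4)
a = rng.random((64, 64))
s_same = ssim_global(a, a)                                   # identical -> 1.0
s_noisy = ssim_global(a, np.clip(a + rng.normal(0, 0.2, a.shape), 0, 1))
print(s_same, s_noisy)
```

Comparing AI-generated and operator-drawn prescription overlays with such a metric is one way "minimized inter-operator variability" could be quantified, though the summary does not say this is how GE HealthCare did it.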
3. AIRx Prostate
- Sample Size for Test Set: 785 exams.
- Data Provenance: Clinical sites in the US and Europe.
- Number of Experts & Qualifications for Ground Truth: Not explicitly stated.
- Adjudication Method: Not specified.
- MRMC Comparative Effectiveness Study: Not explicitly mentioned for this specific feature in the provided text.
- Standalone Performance: Yes, "Performance testing was conducted on the six deep learning models that comprise the AIRx Prostate feature," evaluating automated prostate scan plane prescription, indicating an algorithm-only evaluation.
- Type of Ground Truth: Not explicitly stated but implied to be based on accurate anatomical prescriptions for the prostate, using SSFSE localizer images.
- Sample Size for Training Set: Not explicitly stated, but the test dataset was "kept separate from the training and validation data."
- Ground Truth for Training Set: Not specified, but likely established to enable the model to learn "automated prostate scan plane prescription."
4. Contactless Gating
- Sample Size for Test Set: Not explicitly stated for this particular feature's performance validation.
- Data Provenance: Not specified.
- Number of Experts & Qualifications for Ground Truth: Not specified.
- Adjudication Method: Not specified.
- MRMC Comparative Effectiveness Study: Not mentioned.
- Standalone Performance: Yes, "Verification and validation testing confirmed that the contactless gating feature meets its performance specifications by accurately detecting and displaying respiratory and peripheral cardiac waveforms," indicating a system performance evaluation.
- Type of Ground Truth: Underlying physiological waveforms (respiratory and cardiac).
- Sample Size for Training Set: Not specified.
- Ground Truth for Training Set: Not specified, but likely established from physiological signal data.
Overall Conclusion from Performance Testing:
GE HealthCare concludes that the SIGNA™ Bolt is as safe and effective, with performance substantially equivalent to the predicate device, based on the nonclinical testing, including extensive software verification and validation, as well as specific performance evaluations for its new AI-enabled features. No clinical studies were required to support substantial equivalence.
(71 days)
The SIGNA™ Sprint Select is a whole body magnetic resonance scanner designed to support high resolution, high signal-to-noise ratio, and short scan times. It is indicated for use as a diagnostic imaging device to produce axial, sagittal, coronal, and oblique images, spectroscopic images, parametric maps, and/or spectra, dynamic images of the structures and/or functions of the entire body, including, but not limited to, head, neck, TMJ, spine, breast, heart, abdomen, pelvis, joints, prostate, blood vessels, and musculoskeletal regions of the body. Depending on the region of interest being imaged, contrast agents may be used.
The images produced by SIGNA™ Sprint Select reflect the spatial distribution or molecular environment of nuclei exhibiting magnetic resonance. These images and/or spectra when interpreted by a trained physician yield information that may assist in diagnosis.
SIGNA™ Sprint Select is a whole-body magnetic resonance scanner designed to support high resolution, high signal-to-noise ratio, and short scan time. The system uses a combination of time-varying magnet fields (Gradients) and RF transmissions to obtain information regarding the density and position of elements exhibiting magnetic resonance. The system can image in the sagittal, coronal, axial, oblique, and double oblique planes, using various pulse sequences, imaging techniques and reconstruction algorithms. The system features a 1.5T superconducting magnet with 70cm bore size. The system is designed to conform to NEMA DICOM standards (Digital Imaging and Communications in Medicine).
N/A — no acceptance criteria or performance study information is provided for this device entry.
(146 days)
AIR Recon DL is a deep learning based reconstruction technique that is available for use on GE HealthCare 1.5T, 3.0T, and 7.0T MR systems. AIR Recon DL reduces noise and ringing (truncation artifacts) in MR images, which can be used to reduce scan time and improve image quality. AIR Recon DL is intended for use with all anatomies, and for patients of all ages. Depending on the anatomy of interest being imaged, contrast agents may be used.
AIR Recon DL is a software feature intended for use with GE HealthCare MR systems. It is a deep learning-based reconstruction technique that removes noise and ringing (truncation) artifacts from MR images. AIR Recon DL is an optional feature that is integrated into the MR system software and activated through purchasable software option keys. AIR Recon DL has been previously cleared for use with 2D Cartesian, 3D Cartesian, and PROPELLER imaging sequences.
The proposed device is a modified version of AIR Recon DL that includes a new deep-learning phase correction algorithm for applications that create multiple intermediate images and combine them, such as Diffusion Weighted Imaging where multiple NEX images are collected and combined. This enhancement is an optional feature that is integrated into the MR system software and activated through an additional purchasable software option key (separate from the software option keys of the predicate device).
This document describes the acceptance criteria and the studies conducted to prove the performance of the AIR Recon DL device, as presented in the FDA 510(k) clearance letter.
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria Category | Specific Metric/Description | Acceptance Criteria Details | Reported Device Performance |
|---|---|---|---|
| Nonclinical Testing | DLPC Model: Accuracy of Phase Correction | Provides more accurate phase correction | Demonstrates more accurate phase correction |
| | DLPC Model: Impact on Noise Floor | Effectively reduce signal bias | Effectively reduces signal bias and lowers the noise floor |
| | PC-ARDL Model: SNR | Improve SNR | Improves SNR |
| | PC-ARDL Model: Image Sharpness | Improve image sharpness | Improves image sharpness |
| | PC-ARDL Model: Low Contrast Detectability | Improve low contrast detectability | Does not adversely impact retention of low contrast features |
| | Overall Image Quality/Safety/Performance | No adverse impacts to image quality, safety, or performance | No adverse impacts to image quality, safety, or performance identified |
| In-Vivo Performance Testing | DLPC & PC-ARDL: ADC Accuracy (Diffusion Imaging) | Accurate and unbiased ADC values, especially at higher b-values | Achieved accurate and unbiased ADC values across all b-values tested (whereas predicate showed significant reductions) |
| | DLPC & PC-ARDL: Low-Contrast Detectability | Retention of low-contrast features | Significant improvement in contrast-to-noise ratio, "not adversely impacting the retention of low contrast features" |
| Quantitative Post Processing | ADC Measurement Repeatability | Similar repeatability to conventional methods | Coefficient of variability for ADC values closely matched those generated with product reconstruction |
| | Effectiveness of Phase Correction (Real/Imaginary Channels) | Signal primarily in the real channel, noise only in the imaginary channel | For DLPC, all signal was in the real channel; the imaginary channel contained noise only (outperforming conventional methods) |
| Clinical Image Quality Study | Diagnostic Quality | Excellent diagnostic quality without loss of diagnostic quality, even in challenging situations | Produces images of excellent diagnostic quality, delivering overall exceptional image quality across all organ systems, even in challenging situations |
2. Sample Size Used for the Test Set and Data Provenance
- Nonclinical Testing:
- Phantom testing was conducted for the DLPC and PC-ARDL models. No specific sample size (number of phantom scans) is provided, but it implies a sufficient number for evaluation.
- In-Vivo Performance Testing:
- ADC Accuracy: Diffusion-weighted brain images were acquired at 1.5T with b-values = 50, 400, 800, 1200 s/mm². The number of subjects is not explicitly stated, but it's referred to as "diffusion images" and "diffusion-weighted brain images."
- Low-Contrast Detectability: Raw data from 4 diffusion-weighted brain scans were used.
- Quantitative Post Processing (Repeatability Study):
- 6 volunteers were recruited. 2 volunteers scanned on a 1.5T scanner, 4 on a 3T scanner.
- Scanned anatomical regions included brain, spine, abdomen, pelvis, and breast.
- Each sequence was repeated 4 times.
- Data Provenance: The document states "in-vivo data" and "volunteer scanning was performed simulating routine clinical workflows." This suggests prospective scanning of human subjects, likely in a controlled environment. The country of origin is not specified, but given the FDA submission, it's likely U.S. or international data meeting U.S. standards. The statement "previously acquired de-identified cases" for the Clinical Image Quality Study refers to retrospective data for that specific study, but the volunteer scanning for repeatability appears prospective.
- Clinical Image Quality Study:
- 34 datasets of previously acquired de-identified cases.
- Data Provenance: "previously acquired de-identified cases" indicates retrospective data. The country of origin is not specified.
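The ADC values discussed in these studies follow the standard mono-exponential diffusion model, S(b) = S0·exp(−b·ADC). A log-linear least-squares fit over the b-values listed above can be sketched as follows (the ADC value used is illustrative, chosen near that of free water as found in the lateral ventricles):

```python
import numpy as np

def fit_adc(b_values, signals):
    """Log-linear least-squares fit of the mono-exponential diffusion model
    S(b) = S0 * exp(-b * ADC); returns ADC in mm^2/s."""
    slope, _intercept = np.polyfit(np.asarray(b_values, float),
                                   np.log(signals), 1)
    return -slope

# b-values from the summary (s/mm^2); ADC and S0 below are illustrative
b = [50, 400, 800, 1200]
adc_true = 3.0e-3                  # ~free water, as in the lateral ventricles
s0 = 1000.0
signals = s0 * np.exp(-np.array(b) * adc_true)
print(fit_adc(b, signals))         # recovers ~3.0e-3
```

The noise-floor bias discussed in the summary matters here: magnitude noise adds a positive offset to low signals at high b-values, which flattens the log-signal curve and biases ADC downward unless the bias is removed first.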
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Nonclinical Testing: Ground truth established through phantom measurements and expected physical properties (e.g., signal bias, noise floor). No human experts involved in establishing ground truth here.
- In-Vivo Performance Testing:
- ADC Accuracy: "Average ADC values were measured from regions of interest in the lateral ventricles." This implies expert selection of ROIs, but the number of experts is not specified. The ground truth for ADC is the expected isotropic Gaussian diffusion in these regions.
- Low-Contrast Detectability: "The contrast ratio and contrast-to-noise ratio for each of the inserts were measured." This is a quantitative measure, not explicitly relying on expert consensus for ground truth on detectability, but rather on the known properties of the inserted synthetic objects.
- Quantitative Post Processing:
- ADC Repeatability: Ground truth for repeatability is based on quantitative measurements and statistical analysis (coefficient of variability). ROI placement would typically be done by an expert, but the number is not specified.
- Phase Correction Effectiveness: Ground truth is based on the theoretical expectation of signal distribution in real/imaginary channels after ideal phase correction.
- Clinical Image Quality Study:
- One (1) U.S. Board Certified Radiologist was used.
- Qualifications: "U.S. Board Certified Radiologist." No explicit number of years of experience is stated, but Board Certification indicates a high level of expertise.
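The real/imaginary-channel criterion for phase correction can be illustrated with a toy example: after ideal phase correction, the complex image is rotated so the signal lies entirely in the real channel and the imaginary channel holds only noise. The device's DLPC handles spatially varying phase; the constant-phase simplification below is ours.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy complex MR image: real-valued signal with an unknown constant phase
signal = np.full((32, 32), 100.0)
phase = 0.7                                    # radians, unknown to the method
z = signal * np.exp(1j * phase) + (rng.normal(0, 1, (32, 32))
                                   + 1j * rng.normal(0, 1, (32, 32)))

# Estimate the phase from the data and rotate it out
phase_est = np.angle(z.mean())
z_corr = z * np.exp(-1j * phase_est)

# After correction, signal energy concentrates in the real channel
real_energy = np.sum(z_corr.real ** 2)
imag_energy = np.sum(z_corr.imag ** 2)
print(real_energy > 100 * imag_energy)
```

This is the property the repeatability study checked quantitatively: with effective phase correction, the imaginary channel can be discarded without losing signal, avoiding the magnitude-combination noise floor.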
4. Adjudication Method for the Test Set
- Nonclinical/Phantom Testing: No explicit adjudication method described beyond passing defined acceptance criteria for quantitative metrics.
- In-Vivo Performance Testing: Quantitative measurements (ADC values, contrast ratios, CNR) were used. Paired t-tests were conducted, which is a statistical comparison method, not an adjudication process as typically defined for expert readings.
- Quantitative Post Processing: Quantitative measurements and statistical analysis (coefficient of variability, comparison of real/imaginary channels).
- Clinical Image Quality Study: A single U.S. Board Certified Radiologist made the assessment. There is no stated adjudication method described, implying a single-reader assessment for clinical image quality.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- An MRMC comparative effectiveness study was not explicitly described as a formal study design in the provided text.
- The "Clinical Image Quality Study" involved only one radiologist, so it does not qualify as an MRMC study.
- There is no reported effect size for how much human readers improve with AI vs. without AI assistance. Rather, the study focused on the standalone diagnostic quality of the AI-reconstructed images.
6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study
- Yes, performance was evaluated in a standalone manner.
- Nonclinical Testing: Phantom studies directly evaluate the algorithm's output against known physical properties and defined metrics.
- In-Vivo Performance Testing: ADC accuracy and low-contrast detectability were measured directly from the reconstructed images, which is a standalone evaluation of the algorithm's quantitative output.
- Quantitative Post Processing: Repeatability and effectiveness of phase correction in real/imaginary channels are algorithm-centric evaluations.
- Even the clinical image quality study, while involving a human reader, assessed the standalone output of the algorithm (AIR Recon DL with Phase Correction) for diagnostic quality.
7. Type of Ground Truth Used
- Expert Consensus: Not explicitly stated as the primary ground truth for quantitative metrics, but one radiologist's assessment served as the primary clinical ground truth.
- Pathology: Not used as ground truth in the provided study descriptions. While some datasets "included pathological features such as prostate cancer... hepatocellular carcinoma," the assessment by the radiologist was on "diagnostic quality" of the images, not a comparison against pathology reports for definitive disease identification.
- Outcomes Data: Not used as ground truth.
- Other:
- Physical Properties/Known Standards: For phantom testing (e.g., signal bias, noise floor, SNR, sharpness), and for theoretical expectations of ADC values in specific regions (lateral ventricles).
- Known Synthetic Inserts: For low-contrast detectability.
- Theoretical Expectations: For phase correction effectiveness (signal in real, noise in imaginary).
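The phase-correction criterion — signal in the real channel, noise in the imaginary channel — can be illustrated with a small simulation. This is an assumed setup for demonstration, not the vendor's algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical complex voxels: a real-valued signal with a smoothly varying
# background phase, plus complex Gaussian noise.
signal = np.linspace(50.0, 100.0, 256)
phase = 0.3 * np.linspace(0.0, np.pi, 256)  # unknown background phase
noise = rng.normal(0, 1, 256) + 1j * rng.normal(0, 1, 256)
voxels = signal * np.exp(1j * phase) + noise

# Ideal phase correction rotates the background phase away (here we cheat
# and reuse the true phase), so the signal lands in the real channel.
corrected = voxels * np.exp(-1j * phase)

print(np.abs(corrected.real).mean())  # large: signal plus noise
print(np.abs(corrected.imag).mean())  # small: noise only
```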
8. Sample Size for the Training Set
- The document does not provide any specific sample size for the training set used for the deep learning models (DLPC and PC-ARDL). It only states that the models are "deep learning-based."
9. How the Ground Truth for the Training Set Was Established
- The document does not provide any information on how the ground truth for the training set was established. It only describes the testing of the final, trained models.
(128 days)
The SIGNA™ Sprint is a whole body magnetic resonance scanner designed to support high resolution, high signal-to-noise ratio, and short scan times. It is indicated for use as a diagnostic imaging device to produce axial, sagittal, coronal, and oblique images, spectroscopic images, parametric maps, and/or spectra, dynamic images of the structures and/or functions of the entire body, including, but not limited to, head, neck, TMJ, spine, breast, heart, abdomen, pelvis, joints, prostate, blood vessels, and musculoskeletal regions of the body. Depending on the region of interest being imaged, contrast agents may be used.
The images produced by SIGNA™ Sprint reflect the spatial distribution or molecular environment of nuclei exhibiting magnetic resonance. These images and/or spectra when interpreted by a trained physician yield information that may assist in diagnosis.
SIGNA™ Sprint is a whole-body magnetic resonance scanner designed to support high resolution, high signal-to-noise ratio, and short scan time. The system uses a combination of time-varying magnet fields (Gradients) and RF transmissions to obtain information regarding the density and position of elements exhibiting magnetic resonance. The system can image in the sagittal, coronal, axial, oblique, and double oblique planes, using various pulse sequences, imaging techniques and reconstruction algorithms. The system features a 1.5T superconducting magnet with 70cm bore size. The system is designed to conform to NEMA DICOM standards (Digital Imaging and Communications in Medicine).
Key aspects of the system design:
- Uses the same magnet as a conventional whole-body 1.5T system, with integral active shielding and a zero boil-off cryostat.
- A gradient coil that achieves up to 65 mT/m peak gradient amplitude and 200 T/m/s peak slew rate.
- An embedded body coil that improves thermal performance and enhances intra-bore visibility.
- A newly designed 1.5T AIR Posterior Array.
- A detachable patient table.
- Platform software with various PSDs and applications, including AI features (AIRx™, AIR™ Recon DL, and Sonic DL™).
The provided text is a 510(k) clearance letter and summary for a new MRI device, SIGNA™ Sprint. It states explicitly that no clinical studies were required to support substantial equivalence. Therefore, the information requested regarding acceptance criteria, study details, sample sizes, ground truth definitions, expert qualifications, and MRMC studies is not available in this document.
The document highlights the device's technical equivalence to a predicate device (SIGNA™ Premier) and reference devices (SIGNA™ Artist, SIGNA™ Champion) and relies on non-clinical tests and sample clinical images to demonstrate acceptable diagnostic performance.
Here's a breakdown of what can be extracted from the document regarding testing, and why other requested information is absent:
1. A table of acceptance criteria and the reported device performance
- Acceptance Criteria (Implicit): The document states that the device's performance is demonstrated through "bench testing and clinical testing that show the image quality performance of SIGNA™ Sprint compared to the predicate device." It also mentions "acceptable diagnostic image performance... in accordance with the FDA Guidance 'Submission of Premarket Notifications for Magnetic Resonance Diagnostic Devices' issued on October 10, 2023."
- Specific quantitative acceptance criteria (e.g., minimum SNR, CNR, spatial resolution thresholds) are not explicitly stated in this document.
- Reported Device Performance: "The images produced by SIGNA™ Sprint reflect the spatial distribution or molecular environment of nuclei exhibiting magnetic resonance. These images and/or spectra when interpreted by a trained physician yield information that may assist in diagnosis."
- No specific quantitative performance metrics (e.g., sensitivity, specificity, accuracy, or detailed image quality scores) are provided in this regulatory summary. The statement "The image quality of the SIGNA™ Sprint is substantially equivalent to that of the predicate device" is the primary performance claim.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Test Set Sample Size: Not applicable/Not provided. The document explicitly states: "The subject of this premarket submission, the SIGNA™ Sprint, did not require clinical studies to support substantial equivalence."
- Data Provenance: Not applicable/Not provided for a formal clinical test set. The document only mentions "Sample clinical images have been included in this submission," but does not specify their origin or nature beyond being "sample."
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Not applicable. Since no formal clinical study was conducted for substantial equivalence, there was no "test set" requiring ground truth established by experts in the context of an effectiveness study. The "interpretation by a trained physician" is mentioned in the Indications for Use, which is general to MR diagnostics, not specific to a study.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Not applicable. No clinical test set requiring adjudication was conducted for substantial equivalence.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of how much human readers improve with AI vs. without AI assistance
- No. The document explicitly states: "The subject of this premarket submission, the SIGNA™ Sprint, did not require clinical studies to support substantial equivalence." While the device incorporates AI features cleared in other submissions (AIRx™, AIR™ Recon DL, Sonic DL™), this specific 510(k) for the SIGNA™ Sprint system itself does not include an MRMC study or an assessment of human reader improvement with these integrated AI features. The focus is on the substantial equivalence of the overall MR system.
6. If a standalone study (i.e., algorithm only, without human-in-the-loop performance) was done
- No, not for the SIGNA™ Sprint as a whole system. This 510(k) is for the MR scanner itself, not for a standalone algorithm. Any standalone performance for the integrated AI features (AIRx™, AIR™ Recon DL, Sonic DL™) would have been part of their respective clearance submissions (K183231, K202238, K223523), not this one.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- Not applicable. No formal clinical study requiring ground truth was conducted for this submission.
8. The sample size for the training set
- Not applicable/Not provided. This submission is for the SIGNA™ Sprint MR system itself, not a new AI algorithm requiring a training set developed for this specific submission. The AI features mentioned (AIRx™, AIR™ Recon DL, Sonic DL™) were cleared in previous 510(k)s and would have had their own training and validation processes.
9. How the ground truth for the training set was established
- Not applicable/Not provided. As explained in point 8, this submission does not detail the training of new AI algorithms.
(126 days)
The system is intended to produce cross-sectional images of the body by computer reconstruction of x-ray transmission projection data from the same axial plane taken at different angles. The system may acquire data using Axial, Cine, Helical, Cardiac, and Gated CT scan techniques from patients of all ages. These images may be obtained either with or without contrast. This device may include signal analysis and display equipment, patient and equipment supports, components and accessories.
This device may include data and image processing to produce images in a variety of trans-axial and reformatted planes. Further, the images can be post processed to produce additional imaging planes or analysis results.
The system is indicated for head, whole body, cardiac, and vascular X-ray Computed Tomography applications.
The device output is a valuable medical tool for the diagnosis of disease, trauma, or abnormality and for planning, guiding, and monitoring therapy.
If the spectral imaging option is included on the system, the system can acquire CT images using different kV levels of the same anatomical region of a patient in a single rotation from a single source. The differences in the energy dependence of the attenuation coefficient of the different materials provide information about the chemical composition of body materials. This approach enables images to be generated at energies selected from the available spectrum to visualize and analyze information about anatomical and pathological structures.
GSI provides information of the chemical composition of renal calculi by calculation and graphical display of the spectrum of effective atomic number. GSI Kidney stone characterization provides additional information to aid in the characterization of uric acid versus non-uric acid stones. It is intended to be used as an adjunct to current standard methods for evaluating stone etiology and composition.
The CT system is indicated for low dose CT for lung cancer screening. The screening must be performed within the established inclusion criteria of programs/ protocols that have been approved and published by either a governmental body or professional medical society.
This proposed device Revolution Vibe is a general purpose, premium multi-slice CT Scanning system consisting of a gantry, table, system cabinet, scanner desktop, power distribution unit, and associated accessories. It has been optimized for cardiac performance while still delivering exceptional imaging quality across the entire body.
Revolution Vibe is a modified dual-energy CT system based on its predicate device, Revolution Apex Elite (K213715). Compared to the predicate, the most notable change in Revolution Vibe is a modified detector design, together with corresponding software changes, optimized for cardiac imaging; like the predicate, it can image the whole heart in a single rotation.
Revolution Vibe offers accessible whole-heart coverage and full cardiac capability while delivering outstanding routine head and body imaging. Its detector uses GEHC's Gemstone scintillator, with 256 x 0.625 mm rows providing up to 16 cm of coverage in the Z direction within a 32 cm scan field of view, and 64 x 0.625 mm rows providing up to 4 cm of coverage in the Z direction within a 50 cm scan field of view. The available gantry rotation speeds are 0.23, 0.28, 0.35, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, and 1.0 seconds per rotation.
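As a quick sanity check, the stated Z-coverage figures follow directly from row count times row width:

```python
# Z-coverage = detector rows x row width (0.625 mm per row), converted to cm.
def z_coverage_cm(rows, row_width_mm=0.625):
    return rows * row_width_mm / 10.0

print(z_coverage_cm(256))  # 16.0 (cm, within the 32 cm scan field of view)
print(z_coverage_cm(64))   # 4.0 (cm, within the 50 cm scan field of view)
```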
Revolution Vibe inherits virtually all of the key technologies from the predicate, such as: high tube current (mA) output, 80 cm bore size with Whisper Drive, Deep Learning Image Reconstruction for noise reduction (DLIR K183202/K213999, GSI DLIR K201745), ASIR-V iterative recon, enhanced Extended Field of View (EFOV) reconstruction MaxFOV 2 (K203617), rotation speed as fast as 0.23 second/rot (K213715), spectral imaging capability enabled by ultrafast kilovoltage (kV) switching (K163213), and ECG-less cardiac (K233750). It also includes the AI-enabled Auto ROI feature, integrated into the existing SmartPrep workflow to automatically predict the baseline and monitor the ROI. As such, Revolution Vibe carries over virtually all features and functionality of the predicate device Revolution Apex Elite (K213715).
This CT system can be used for low dose lung cancer screening in high risk populations*.
The provided FDA 510(k) clearance letter and summary for the Revolution Vibe CT system does not include detailed acceptance criteria or a comprehensive study report to fully characterize the device's performance against specific metrics. The information focuses more on the equivalence to a predicate device and general safety/effectiveness.
However, based on the text, we can infer some aspects related to the Auto ROI feature, which is the only part of the device described with specific performance testing details.
Here's an attempt to extract and describe the available information, with clear indications of what is not provided in the document.
Acceptance Criteria and Device Performance for Auto ROI
The document mentions specific performance testing for the "Auto ROI" feature, which utilizes AI. For other aspects of the Revolution Vibe CT system, the submission relies on demonstrating substantial equivalence to the predicate device (Revolution Apex Elite) through engineering design V&V, bench testing, and a clinical reader study focused on overall image utility, rather than specific quantitative performance metrics meeting predefined acceptance criteria for the entire system.
1. Table of Acceptance Criteria and Reported Device Performance (Specific to Auto ROI)
| Feature/Metric | Acceptance Criteria (Implicit) | Reported Device Performance |
|---|---|---|
| Auto ROI Success Rate | "exceeding the pre-established acceptance criteria" | Testing resulted in "success rates exceeding the pre-established acceptance criteria." (Specific numerical value not provided) |
Note: The document does not provide the explicit numerical value for the "pre-established acceptance criteria" or the actual "success rate" achieved for the Auto ROI feature.
2. Sample Size and Data Provenance for the Test Set (Specific to Auto ROI)
- Sample Size: 1341 clinical images
- Data Provenance: "real clinical practice" (Specific country of origin not mentioned). The images were used for "Auto ROI performance" testing, which implies retrospective analysis of existing clinical data.
3. Number of Experts and Qualifications to Establish Ground Truth (Specific to Auto ROI)
- Number of Experts: Not specified for the Auto ROI ground truth establishment.
- Qualifications of Experts: Not specified for the Auto ROI ground truth establishment.
Note: The document mentions 3 readers for the overall clinical reader study (see point 5), but this is for evaluating the diagnostic utility and image quality of the CT system and not explicitly for establishing ground truth for the Auto ROI feature.
4. Adjudication Method for the Test Set (Specific to Auto ROI)
- Adjudication Method: Not specified for the Auto ROI test set.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Was an MRMC study done? Yes, a "clinical reader study of sample clinical data" was carried out. It is described as a "blinded, retrospective clinical reader study."
- Effect Size of Human Reader Improvement with AI vs. without AI Assistance: The document states the purpose of this reader study was to validate that "Revolution Vibe are of diagnostic utility and is safe and effective for its intended use." It does not report an effect size or a direct comparison of human readers' performance with and without AI assistance (specifically for the Auto ROI feature). The study evaluated the CT system's overall image quality and clinical utility; a comparative effectiveness study of the AI's impact on human performance is not described.
- Details of MRMC Study:
- Number of Cases: 30 CT cardiac exams
- Number of Readers: 3
- Reader Qualifications: US board-certified in Radiology with more than 5 years' experience in CT cardiac imaging.
- Exams Covered: "wide range of cardiac clinical scenarios."
- Reader Task: "Readers were asked to provide evaluation of image quality and the clinical utility."
6. Standalone (Algorithm Only) Performance
- Was a standalone study done? Yes, for the "Auto ROI" feature, performance was tested "using 1341 clinical images from real clinical practice," and "the tests results in success rates exceeding the pre-established acceptance criteria." This implies an algorithm-only evaluation of the Auto ROI's ability to successfully identify and monitor ROI.
7. Type of Ground Truth Used (Specific to Auto ROI)
- Type of Ground Truth: Not explicitly stated for the Auto ROI. Given the "success rates" metric, it likely involved a comparison against a predefined "true" ROI determined by human experts or a gold standard method. It's plausible that this was established by expert consensus or reference standards.
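One plausible way such a success rate could be computed — purely an assumption, since the document does not define the metric — is a Dice-overlap threshold of the predicted ROI against an expert-drawn reference ROI:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice overlap between two binary ROI masks."""
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    total = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / total if total else 1.0

def success_rate(predicted, reference, threshold=0.7):
    """Fraction of cases whose predicted ROI meets the Dice threshold."""
    scores = [dice(p, r) for p, r in zip(predicted, reference)]
    return sum(s >= threshold for s in scores) / len(scores)

# Toy example: one near-perfect prediction, one poor one.
ref = np.zeros((10, 10), bool)
ref[2:6, 2:6] = True            # reference ROI (16 pixels)
good = ref.copy()               # Dice = 1.0
bad = np.zeros((10, 10), bool)
bad[4:8, 4:8] = True            # Dice = 0.25
print(success_rate([good, bad], [ref, ref]))  # 0.5
```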
8. Sample Size for the Training Set
- Sample Size: Not provided in the document.
9. How Ground Truth for the Training Set Was Established
- Ground Truth Establishment: Not provided in the document.
In summary, the provided documentation focuses on demonstrating substantial equivalence of the Revolution Vibe CT system to its predicate, Revolution Apex Elite, rather than providing detailed, quantitative performance metrics against specific acceptance criteria for all features. The "Auto ROI" feature is the only component where specific performance testing (standalone) is briefly mentioned, but key details like numerical acceptance criteria, actual success rates, and ground truth methodology for training datasets are not disclosed. The human reader study was for general validation of diagnostic utility, not a comparative effectiveness study of AI assistance.
(190 days)
Sonic DL is a Deep Learning based reconstruction technique that is available for use on GE HealthCare 1.5T, 3.0T, and 7.0T MR systems. Sonic DL reconstructs MR images from highly under-sampled data, and thereby enables highly accelerated acquisitions. Sonic DL is intended for imaging patients of all ages. Sonic DL is not limited by anatomy and can be used for 2D cardiac cine imaging and 3D Cartesian imaging using fast spin echo and gradient echo sequences. Depending on the region of interest, contrast agents may be used.
Sonic DL is a software feature intended for use with GE HealthCare MR systems. It includes a deep learning based reconstruction algorithm that enables highly accelerated acquisitions by reconstructing MR images from highly under-sampled data. Sonic DL is an optional feature that is integrated into the MR system software and activated through purchasable software option keys.
Here's a breakdown of the acceptance criteria and the study details for Sonic DL, based on the provided FDA 510(k) clearance letter:
Acceptance Criteria and Device Performance for Sonic DL
1. Table of Acceptance Criteria and Reported Device Performance
The document doesn't explicitly list specific quantitative "acceptance criteria" against which a single "reported device performance" is measured in a pass/fail manner for all aspects. Instead, it presents various performance metrics and qualitative assessments that demonstrate the device's acceptable performance compared to existing standards and its stated claims. For the sake of clarity, I've summarized the implied acceptance thresholds or comparative findings from the quantitative studies and the qualitative assessments.
| Metric/Criterion | Acceptance Criteria (Implied/Comparative) | Reported Device Performance (Sonic DL) |
|---|---|---|
| Non-Clinical Testing (Sonic DL 3D) | ||
| Peak-Signal-to-Noise (PSNR) | Equal to or above 30dB | Equal to or above 30dB at all acceleration factors (up to 12) |
| Structural Similarity Index Measure (SSIM) | Equal to or above 0.8 | Equal to or above 0.8 at all acceleration factors (up to 12) |
| Resolution | Preservation of resolution grid structure and resolution | Preserved resolution grid structure and resolution |
| Medium/High Contrast Detectability | Retained compared to conventional methods | Retained at all accelerations; comparable or better than conventional methods |
| Low Contrast Detectability | Non-inferior to more modestly accelerated conventional reconstruction methods at recommended acceleration rates | Maintained at lower acceleration factors; non-inferior at recommended rates (e.g., 8x Sonic DL 3D ~ 4x parallel imaging; 12x Sonic DL 3D ~ 8x parallel imaging) |
| Model Stability (Hallucination) | Low risk of hallucination; dataset integrity preserved | Low risk of hallucination; dataset integrity preserved across all cases |
| Clinical Testing (Sonic DL 3D) - Quantitative Post Processing | ||
| Volumetric Measurements (Brain Tissues) - Relative MAE 95% CI | Less than 5% for most regions (brain tissues) | Less than 5% for most regions |
| Volumetric Measurements (HOS) - Relative MAE 95% CI | Less than 3% for Hippocampal Occupancy Score (HOS) | Less than 3% for HOS |
| Intra-class Correlation Coefficient (ICC) | Exceeded 0.75 across all comparisons | Exceeded 0.75 across all comparisons |
| Clinical Testing (Sonic DL 3D) - Clinical Evaluation Studies (Likert-score) | ||
| Diagnostic Quality | Images are of diagnostic quality | Sonic DL 3D images are of diagnostic quality (across all anatomies, field strengths, and acceleration factors investigated) |
| Pathology Retention | Pathology seen in comparator images can be accurately retained | Pathology seen in ARC + HyperSense images can be accurately retained |
| Decline with Acceleration | Retain diagnostic quality overall despite decline | Scores gradually declined with increasing acceleration factors yet retained diagnostic quality overall |
| Clinical Claims | ||
| Scan Time Reduction | Substantial reduction in scan time | Yields substantial reduction in scan time |
| Diagnostic Image Quality | Preservation of diagnostic image quality | Preserves diagnostic image quality |
| Acceleration Factors | Up to 12x | Provides acceleration factors up to 12 |
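The PSNR criterion in the table follows the standard definition; below is a minimal sketch against a fully sampled reference using synthetic data (SSIM would typically be computed analogously, e.g. with `skimage.metrics.structural_similarity`):

```python
import numpy as np

def psnr(reference, test, data_range=1.0):
    """Peak signal-to-noise ratio (dB) of `test` vs. a fully sampled reference."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(test, float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

rng = np.random.default_rng(2)
reference = rng.random((64, 64))                           # "fully sampled" image
reconstructed = reference + rng.normal(0, 0.01, (64, 64))  # "accelerated" recon

print(psnr(reference, reconstructed) >= 30.0)  # True: meets the 30 dB criterion
```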
2. Sample Size Used for the Test Set and Data Provenance
- Quantitative Post Processing Test Set:
- Sample Size: 15 fully-sampled datasets.
- Data Provenance: Retrospective, acquired at GE HealthCare in Waukesha, USA, from 1.5T, 3.0T, and 7.0T scanners.
- Clinical Evaluation Studies (Likert-score based):
- Study 1 (Brain, Spine, Extremities):
- Number of image series evaluated: 120 de-identified cases.
- Number of unique subjects: 54 subjects (48 patients, 6 healthy volunteers).
- Age range: 11-80 years.
- Gender: 26 Male, 28 Female.
- Pathology: Mixture of small, large, focal, diffuse, hyper- and hypo-intense lesions.
- Contrast: Used in a subset as clinically indicated.
- Data Provenance: Retrospective and prospective (implied by "obtained from clinical sites and from healthy volunteers scanned at GE HealthCare facilities"). Data collected from 7 sites (4 in United States, 3 outside of United States).
- Study 2 (Brain):
- Number of additional cases: 120 cases.
- Number of unique subjects: From 30 fully-sampled acquisitions.
- Data Provenance: Retrospective, collected internally at GE HealthCare, 1.5T, 3.0T, and 7.0T field strengths.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
- Quantitative Post Processing: The "ground truth" here is the fully sampled data and the quantitative measurements derived from it. No human experts are explicitly mentioned for establishing this computational 'ground truth'.
- Clinical Evaluation Studies:
- Study 1: 3 radiologists. Their specific qualifications (e.g., years of experience, subspecialty) are not provided in the document.
- Study 2: 3 radiologists. Their specific qualifications are not provided in the document.
4. Adjudication Method for the Test Set
The document does not explicitly state a formal adjudication method (like 2+1 or 3+1). For the clinical evaluation studies, it mentions that "three radiologists were asked to evaluate the diagnostic quality of images" and "radiologists were also asked to comment on the presence of any pathology." This suggests individual assessments were either aggregated, or findings were considered concordant if a majority agreed, but a specific arbitration or adjudication process for disagreements is not detailed.
5. Multi Reader Multi Case (MRMC) Comparative Effectiveness Study
- Was an MRMC study done? Yes, the two Likert-score based clinical studies involved multiple readers (3 radiologists) evaluating multiple cases (120 de-identified cases in Study 1 and 120 additional cases in Study 2) for comparative effectiveness against ARC + HyperSense.
- Effect size of human readers improvement with AI vs. without AI assistance: The document states that "Sonic DL 3D images are of diagnostic quality while yielding a substantial reduction in the scan time compared to ARC + HyperSense images." It also noted that "pathology seen in the ARC + HyperSense images can be accurately retained in Sonic DL 3D images." However, it does not quantify the effect size of how much human readers improve with AI assistance (Sonic DL) versus without it. Instead, the studies aim to demonstrate non-inferiority or comparable diagnostic quality despite acceleration. There's no performance gain stated explicitly for the human reader in terms of diagnostic accuracy or confidence when using Sonic DL images compared to conventional images; rather, the benefit is in maintaining diagnostic quality with faster acquisition.
6. Standalone (Algorithm Only) Performance Study
- Was a standalone study done? Yes, extensive non-clinical testing was performed as a standalone assessment of the algorithmic performance. This included evaluations using:
- Digital reference objects (DROs) and MR scans of physical ACR phantoms to measure PSNR, RMSE, SSIM, resolution, and low contrast detectability.
- A task-based study using a convolutional neural network ideal observer (CNN-IO) to quantify low contrast detectability.
- Reconstruction of in vivo datasets with unseen data inserted to assess model stability and hallucination risk.
These studies directly evaluated the algorithm's output metrics and behavior independently of human interpretation in a clinical workflow, making them standalone performance assessments.
7. Type of Ground Truth Used
- Non-Clinical Testing:
- Quantitative Metrics (PSNR, RMSE, SSIM, Resolution, Contrast Detectability): Fully sampled data was used as the reference "ground truth" against which the under-sampled and reconstructed Sonic DL 3D images were compared.
- Model Stability (Hallucination): The "ground truth" was the original in vivo datasets before inserting previously unseen data, allowing for evaluation of whether the algorithm introduced artifacts or hallucinations.
- Quantitative Post Processing (Clinical Testing):
- Fully sampled data sets were used as the reference for comparison of volumetric measurements with Sonic DL 3D and ARC + HyperSense images.
- Clinical Evaluation Studies (Likert-score based):
- The implied "ground truth" was the diagnostic quality and presence/absence of pathology as assessed by the conventional ARC + HyperSense images, which were considered the clinical standard for comparison. The radiologists were essentially comparing Sonic DL images against the standard of care images without a separate, absolute ground truth like pathology for every lesion.
8. Sample Size for the Training Set
The document does not specify the sample size used for training the Sonic DL 3D deep learning model. It only mentions that Sonic DL is a "Deep Learning based reconstruction technique" and includes a "deep learning convolutional neural network."
9. How the Ground Truth for the Training Set Was Established
The document does not describe how the ground truth for the training set was established. It is standard practice for supervised deep learning models like Sonic DL to be trained on pairs of under-sampled and corresponding fully-sampled or high-quality (e.g., conventionally reconstructed) images, where the high-quality image serves as the 'ground truth' for the network to learn to reconstruct from the under-sampled data. However, the specifics of this process (e.g., data types, annotation, expert involvement) are not mentioned in this document.
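A generic version of that supervised setup — retrospectively under-sampling k-space to create input/target training pairs — might look like the following. This is an illustrative sketch under common-practice assumptions, not GE HealthCare's documented pipeline:

```python
import numpy as np

def make_training_pair(image, acceleration=4, center_lines=8):
    """Return (zero-filled under-sampled input, fully sampled target).

    The fully sampled image is the training target; the network input is
    simulated by masking phase-encode lines in k-space.
    """
    n = image.shape[0]
    kspace = np.fft.fftshift(np.fft.fft2(image))
    mask = np.zeros(n, bool)
    mask[::acceleration] = True                        # keep every Nth line
    mask[n // 2 - center_lines // 2 : n // 2 + center_lines // 2] = True  # k-space center
    undersampled = kspace * mask[:, None]
    zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(undersampled)))
    return zero_filled, image

rng = np.random.default_rng(3)
target_img = rng.random((32, 32))
model_input, target = make_training_pair(target_img)
print(model_input.shape, target is target_img)
```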
(34 days)
The SIGNA Prime Elite is a whole body magnetic resonance scanner designed to support high signal-to-noise ratio, and short scan times. It is indicated for use as a diagnostic imaging device to produce axial, sagittal, coronal, and oblique images, spectroscopic images, parametric maps, and/or spectra, dynamic images of the structures and/or functions of the entire body, including, but not limited to, head, neck, heart, abdomen, pelvis, joints, prostate, blood vessels, and musculoskeletal regions of the body.
Depending on the region of interest being imaged, contrast agents may be used. The images produced by SIGNA Prime Elite reflect the spatial distribution or molecular environment of nuclei exhibiting magnetic resonance.
These images and/or spectra when interpreted by a trained physician yield information that may assist in diagnosis.
SIGNA Prime Elite is a whole body magnetic resonance scanner designed to support high resolution, high signal-to-noise ratio, and short scan time. The system uses a combination of time-varying magnet fields (Gradients) and RF transmissions to obtain information regarding the density and position of elements exhibiting magnetic resonance. The system can image in the sagittal, coronal, axial, oblique, and double oblique planes, using various pulse sequences, imaging techniques and reconstruction algorithms. The system features a 1.5T superconducting magnet with 60 cm bore size. The system is designed to conform to NEMA DICOM standards (Digital Imaging and Communications in Medicine).
The document does not provide a table of acceptance criteria and reported device performance. It focuses on demonstrating substantial equivalence to a predicate device rather than presenting specific quantitative performance metrics against pre-defined acceptance criteria.
Here's an analysis of the provided information concerning acceptance criteria and study details:
1. A table of acceptance criteria and the reported device performance
The document does not contain a specific table outlining quantitative acceptance criteria and corresponding reported device performance metrics. It indicates that the SIGNA Prime Elite's image quality performance was compared to the predicate device through bench and clinical testing and found to be "substantially equivalent."
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document states: "The subject of this premarket submission, the SIGNA Prime Elite, did not require clinical studies to support substantial equivalence. Sample clinical images have been included in this submission."
Therefore, there is no formal test set sample size mentioned for a specific clinical performance study. The "sample clinical images" are used to demonstrate acceptable diagnostic image performance, but details about their sample size, provenance (country of origin), or whether they are retrospective or prospective are not provided.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
Since no formal clinical study with a defined test set and ground truth establishment is described, this information is not provided. The document mentions that images "when interpreted by a trained physician yield information that may assist in diagnosis," but it doesn't specify expert review for a test set.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
As there is no formal test set described for a clinical performance study, an adjudication method is not applicable and not mentioned.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
The document states that the SIGNA Prime Elite is a "whole body magnetic resonance scanner." There is no mention of AI assistance or a multi-reader multi-case (MRMC) comparative effectiveness study comparing human readers with and without AI assistance. The submission is for an imaging device, not an AI-powered diagnostic tool.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
This question is not applicable as the device is a magnetic resonance scanner, not an algorithm that operates standalone without human interpretation.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
Given that clinical studies were not required and only "sample clinical images" were included to demonstrate acceptable diagnostic image performance, there is no mention of a formally established ground truth type (e.g., expert consensus, pathology, outcomes data) for a test set. The images are expected to be interpreted by a "trained physician."
8. The sample size for the training set
The document describes the SIGNA Prime Elite as a new 1.5T MR system. It is a hardware device with associated software, not a machine learning model. Therefore, the concept of a "training set" in the context of machine learning is not applicable and not mentioned.
9. How the ground truth for the training set was established
As the device is an MR scanner and not an AI/ML model requiring a training set, this information is not applicable and not provided.
(167 days)
SIGNA MAGNUS system is a head-only magnetic resonance scanner designed to support high resolution, high signal-to-noise ratio, diffusion-weighted imaging, and short scan times. SIGNA MAGNUS is indicated for use as a diagnostic imaging device to produce axial, sagittal, coronal, and oblique images, parametric maps, and/or spectra, dynamic images of the structures and/or functions of the head, neck, TMJ, and limited cervical spine on patients 6 years of age and older. Depending on the region of interest being imaged, contrast agents may be used.
The images produced by SIGNA MAGNUS reflect the spatial distribution or molecular environment of nuclei exhibiting magnetic resonance. These images and/or spectra, when interpreted by a trained physician, yield information that may assist in diagnosis.
SIGNA MAGNUS is a 3.0T high-performance magnetic resonance imaging system designed to support imaging of the head, neck, TMJ and limited cervical spine. The system supports scanning in axial, coronal, sagittal, oblique, and double oblique planes using a variety of pulse sequences, imaging techniques, acceleration methods, and reconstruction algorithms. The system can be delivered as a new system installation, or as an upgrade to an existing compatible whole-body 3.0T MR system.
Key aspects of the system design:
• An asymmetrically designed, head-only gradient coil that achieves up to 300 mT/m peak gradient amplitude and 750 T/m/s peak slew rate.
• A graduated patient bore size, starting at 74 cm at the entry and narrowing to 37 cm at isocenter.
• Uses the same magnet as a conventional whole-body 3.0T system, with integral active shielding and a zero boil-off cryostat.
• Can be installed as a new system or upgraded from an existing compatible whole-body 3.0T MR system.
• A dockable mobile patient table.
• Oscillating Diffusion Encoding (ODEN) - a spectral diffusion technique that uses a sinusoidal diffusion gradient waveform.
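The document gives no implementation details for ODEN, but the diffusion weighting produced by any gradient waveform, sinusoidal or otherwise, follows the standard definition b = γ² ∫ q(t)² dt, where q(t) is the running integral of the gradient. The sketch below evaluates that integral numerically for an illustrative sine waveform; the amplitude, frequency, and duration are assumptions for demonstration, not vendor specifications.

```python
import numpy as np

GAMMA = 2 * np.pi * 42.58e6  # 1H gyromagnetic ratio (rad/s/T)

def b_value(gradient, dt):
    """Numerically evaluate b = gamma^2 * integral(q(t)^2) dt,
    where q(t) is the cumulative integral of the gradient waveform.
    This generic definition applies to oscillating (e.g., sinusoidal)
    diffusion-encoding waveforms as well as conventional pulsed ones."""
    q = np.cumsum(gradient) * dt           # running gradient integral (T*s/m)
    return GAMMA**2 * np.sum(q**2) * dt    # b-value in s/m^2

# Illustrative parameters: 50 mT/m sine at 50 Hz, applied for 40 ms.
dt = 1e-6                                   # 1 us time step
t = np.arange(0, 40e-3, dt)
g = 0.050 * np.sin(2 * np.pi * 50 * t)      # gradient waveform (T/m)
b = b_value(g, dt) * 1e-6                   # convert s/m^2 -> s/mm^2
```

For a sine wave over whole periods the closed form is b = 1.5 γ² (G/ω)² T, so the numerical result can be sanity-checked analytically; the oscillation frequency sets the diffusion time probed, which is the "spectral" aspect of such techniques.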
The provided text is a 510(k) Summary for the GE Healthcare SIGNA MAGNUS, a magnetic resonance diagnostic device. The summary indicates that this device did not require clinical studies to support substantial equivalence. Therefore, there are no acceptance criteria, performance metrics, sample sizes for test/training sets, expert qualifications, or other details related to clinical performance studies to report for this device, as these were not performed or deemed necessary by the manufacturer for this submission.
The document states:
"The subject of this premarket submission, SIGNA MAGNUS did not require clinical studies to support substantial equivalence. Sample clinical images have been included in this submission. The sample clinical images demonstrate acceptable diagnostic image performance of SIGNA MAGNUS in accordance with the FDA Guidance 'Submission of Premarket Notifications fo[r] […]' dated October 10, 2023. The image quality of SIGNA MAGNUS is substantially equivalent to that of the predicate device." (The guidance title is truncated in the source document.)
This means that the substantial equivalence was primarily based on non-clinical tests and a comparison of technological characteristics and indications for use with a predicate device (SIGNA Premier (K193282)). The non-clinical tests focused on safety and performance in compliance with various voluntary standards (e.g., ANSI AAMI ES60601-1, IEC 60601-1-2, IEC 60601-2-33, IEC 62304, IEC 60601-1-6, IEC 62366-1, IEC 62464-1, ISO 10993-1, NEMA MS, NEMA PS3 DICOM).
Due to the absence of a clinical study, the requested information cannot be extracted from the provided text.
(132 days)
ECG-less Cardiac streamlines patient preparation by enabling an alternative acquisition of cardiac CT images for general cardiac assessment without the need of a patient-attached ECG monitor. ECG-less Cardiac is for adults only.
ECG-less Cardiac is a software device that is an additional, optional cardiac scan mode that can be used on the Revolution Apex Elite, Revolution Apex, and Revolution CT with Apex edition systems. There is no change to the predicate device hardware to support the subject device. Currently, the available cardiac scan modes on the Revolution CT Family are Cardiac Axial and Cardiac Helical, which makes use of an ECG signal to physiologically trigger the cardiac acquisitions and/or to retrospectively gate the reconstruction.
ECG-less Cardiac is a third cardiac scan mode that introduces the ability to acquire cardiac images without the need of a patient-attached ECG monitor. Hence, an ECG signal from the patient is not utilized for this scan mode. The ECG-less Cardiac workflow leverages the full-heart coverage capability of 160 mm configurations, fast gantry speeds (0.28 and 0.23 s/rot), and existing cardiac software options of SmartPhase and SnapShot Freeze 2 (K183161) to acquire images that are suitable for coronary and cardiac functional assessment.
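The cited gantry speeds translate directly into the temporal resolution available to a single-source half-scan reconstruction, which needs roughly half a rotation (180° plus the detector fan angle) of projection data. The sketch below computes that approximation; the fan-angle parameter and the rotation-time-over-two simplification are generic CT relationships, not figures taken from this submission.

```python
def halfscan_temporal_resolution(rotation_time_s, fan_angle_deg=0.0):
    """Approximate single-source half-scan temporal resolution: the time
    needed to cover 180 degrees plus the detector fan angle. With
    fan_angle_deg=0 this reduces to rotation_time / 2."""
    return rotation_time_s * (180.0 + fan_angle_deg) / 360.0

# Gantry speeds cited for the Revolution CT family in this summary.
for rot in (0.28, 0.23):
    ms = halfscan_temporal_resolution(rot) * 1e3
    print(f"{rot} s/rot -> ~{ms:.0f} ms temporal resolution")
```

At 0.28 s/rot this gives roughly 140 ms (and about 115 ms at 0.23 s/rot) before any fan-angle penalty, which is why fast rotation matters for imaging the heart without prospective ECG triggering.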
The ECG-less cardiac feature allows the user to acquire a cardiac CT scan without the need to complete the steps associated with utilizing an ECG monitor, such as attaching ECG electrodes to the patient, checking electrode impedance, and confirming an ECG trace is displayed on the operator console, thus optimizing the workflow.
ECG-less Cardiac may be best utilized in examinations where excluding the ECG connection would streamline the patient examination, including loading and unloading of the patient. This may result in an improved workflow for certain clinical presentations. ECG-less Cardiac may also increase access to cardiac assessment for patients from whom it is difficult to obtain an ECG signal. Circumstances where the subject device is expected to increase cardiac access include scenarios where a trauma patient already has a diagnostic ECG and/or other instrumentation attached, making it difficult to attach ECG leads for a gated scan, and situations where it is challenging to obtain a usable ECG signal, such as a patient's T-wave triggering the scan or the R-peak being difficult to detect.
Here's a summary of the acceptance criteria and the study proving the device meets them, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are implicitly derived from the study's conclusions, focusing on diagnostic utility and image quality. No specific quantitative thresholds for acceptance are explicitly stated in the document beyond "interpretable without a significant motion artifact penalty" and "of diagnostic utility."
| Acceptance Criteria (Inferred) | Reported Device Performance |
|---|---|
| Diagnostic Utility | ECG-less Cardiac acquisitions were consistently rated as interpretable and of diagnostic utility by board-certified radiologists who specialize in cardiac imaging. |
| Image Quality (Motion Artifact) | Images generated from ECG-less Cardiac acquisitions were consistently rated as interpretable without a significant motion artifact penalty. |
| Equivalence to ECG-gated "ground truth" | Engineering bench testing showed that ECG-less Cardiac scan acquisitions can produce images that are equivalent to an ECG-gated "ground truth" nominal phase location. |
| Safety & Effectiveness | The device is deemed safe and effective for its intended use based on non-clinical testing and the clinical reader study. |
2. Sample Size Used for the Test Set and the Data Provenance
- Sample Size for Test Set: The document does not explicitly state the exact number of cases or images included in the reader study (test set). It refers to "a reader study of sample clinical data" and "prospectively collected clinical data."
- Data Provenance: The data were prospectively collected clinical data from patients undergoing a routine cardiac exam. The country of origin is not specified.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
- Number of Experts: Three experts were used.
- Qualifications of Experts: They were board-certified radiologists who specialize in cardiac imaging. The document does not specify their years of experience.
4. Adjudication Method for the Test Set
The adjudication method is not explicitly stated. The document mentions that each image was "read by three board certified radiologists who specialize in cardiac imaging who provided an assessment of image quality." This suggests independent readings, but it does not detail a consensus or adjudication process (e.g., 2+1, 3+1).
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance
- No, an MRMC comparative effectiveness study involving human readers improving with AI vs. without AI assistance was not conducted or reported.
- The study was a reader study where experts assessed images generated by the ECG-less Cardiac system. The primary goal was to validate the diagnostic utility and image quality of the ECG-less Cardiac acquisitions themselves, not to assess human reader performance with or without an AI assist feature.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
Yes, a form of standalone performance was assessed in the engineering bench testing. This testing "assessed how simulated ECG-less Cardiac scan conditions performed against an ECG-gated 'ground truth' nominal phase location." This component evaluated the algorithm's ability to generate images comparable to traditional ECG-gated acquisitions without human interpretation being the primary focus.
7. The Type of Ground Truth Used
- For the engineering bench testing, the ground truth was an ECG-gated "ground truth" nominal phase location. This implies a comparison to a known, established reference standard for cardiac imaging synchronization.
- For the clinical reader study, the ground truth was effectively the expert consensus/assessment of the three board-certified radiologists regarding the interpretability, motion artifact, and diagnostic utility of the ECG-less images. There is no mention of pathology or outcomes data being used as ground truth for this part of the study.
8. The Sample Size for the Training Set
The document does not provide any information regarding the sample size used for the training set of the ECG-less Cardiac software.
9. How the Ground Truth for the Training Set Was Established
The document does not provide any information on how the ground truth for the training set was established. It only discusses the testing (validation) phase.