Search Results
Found 92 results
510(k) Data Aggregation
The Diagnostic Ultrasound System Aplio i900 Model TUS-AI900, Aplio i800 Model TUS-AI800, and Aplio i700 Model TUS-AI700 are indicated for the visualization of structures and dynamic processes within the human body using ultrasound and to provide image information for diagnosis in the following clinical applications: fetal, abdominal, intra-operative (abdominal), pediatric, small organs (thyroid, breast and testicle), trans-vaginal, trans-rectal, neonatal cephalic, adult cephalic, cardiac (both adult and pediatric), peripheral vascular, transesophageal, musculo-skeletal (both conventional and superficial), laparoscopic and thoracic/pleural. This system provides high-quality ultrasound images in the following modes: B mode, M mode, Continuous Wave Doppler, Color Doppler, Pulsed Wave Doppler, Power Doppler and Combination Doppler, as well as Speckle-tracking, Tissue Harmonic Imaging, Combined Modes, Shear Wave Elastography, and Acoustic attenuation mapping. This system is suitable for use in hospital and clinical settings by physicians or appropriately trained healthcare professionals.
In addition to the aforementioned indications for use, when EUS transducers GF-UCT180 and BF-UC190F are connected, the Aplio i800 Model TUS-AI800/E3 provides image information for diagnosis of the upper gastrointestinal tract and surrounding organs, airways, tracheobronchial tree and esophagus.
The Aplio i900 Model TUS-AI900, Aplio i800 Model TUS-AI800 and Aplio i700 Model TUS-AI700, V9.0 are mobile diagnostic ultrasound systems. These systems are Track 3 devices that employ a wide array of probes, including flat linear array, convex array, and sector array probes, with frequencies ranging from approximately 2 MHz to 33 MHz.
This FDA 510(k) clearance letter details the substantial equivalence of the Aplio i900, Aplio i800, and Aplio i700 Software V9.0 Diagnostic Ultrasound System to its predicate device. The information provided specifically focuses on the validation of new and improved features, with particular attention to the 3rd Harmonic Imaging (3-HI), a new deep learning (DL) enabled filtering process.
Acceptance Criteria and Device Performance for 3rd Harmonic Imaging (3-HI)
| Acceptance Criteria Category | Specific Criteria | Reported Device Performance (3-HI) | Study Details to Support Performance |
|---|---|---|---|
| Clinical Improvement | Spatial Resolution: Demonstrate improvement relative to conventional 2nd harmonic imaging. Contrast Resolution: Demonstrate improvement relative to conventional 2nd harmonic imaging. Artifact Suppression: Demonstrate improvement relative to conventional 2nd harmonic imaging. | Scores for 3-HI were higher than the middle score of 3 (on a 5-point ordinal scale) for spatial resolution, contrast resolution, and artifact suppression, as rated by radiologists in a blinded observer study. | Test Set Size: 30 patients Data Provenance: U.S. clinical site, previously acquired data (retrospective). Ground Truth: Clinical images with representative abdominal organs, anatomical structures, and focal pathologies. Experts: Three (3) U.S. board-certified radiologists. Adjudication Method: Blinded observer study (comparison to images without 3-HI). MRMC Study: Yes, human readers (radiologists) compared images with and without 3-HI. The effect size is indicated by "scores for 3-HI were higher than the middle score of 3". |
| Phantom Study Objectives | Lateral Resolution: Demonstrate capability to visualize abdominal images better than conventional 2nd harmonic imaging. Axial Resolution: Demonstrate capability to visualize abdominal images better than conventional 2nd harmonic imaging. Slice Resolution: Demonstrate capability to visualize abdominal images better than conventional 2nd harmonic imaging. Contrast-to-Noise Ratio (CNR): Demonstrate capability to visualize abdominal images better than conventional 2nd harmonic imaging. Reverberation Artifact Suppression: Demonstrate capability to visualize abdominal images better than conventional 2nd harmonic imaging. Frequency Spectra: Demonstrate capability to visualize abdominal images better than conventional 2nd harmonic imaging. | All prespecified performance criteria were achieved. The phantom studies demonstrated the capability of 3-HI to visualize abdominal images better than conventional 2nd harmonic imaging across all specified metrics. | Test Set Size: Not explicitly stated for each metric but "five abdominal phantoms with various physical properties". Data Provenance: Phantom data. Ground Truth: Controlled phantom targets with varying depths, sizes, and contrasts. Experts: Not applicable (objective measurements). Adjudication Method: Not applicable (objective measurements compared to prespecified criteria). |
Detailed Study Information
1. Acceptance Criteria and Reported Device Performance
(See table above)
2. Sample Size Used for the Test Set and Data Provenance
- 3rd Harmonic Imaging (3-HI) Clinical Evaluation:
- Sample Size: 30 patients.
- Data Provenance: Previously acquired data from a U.S. clinical site (retrospective). Patients were selected to ensure diverse demographic characteristics representative of the intended U.S. patient population, including a wide range of body mass indices (18.5-36.3 kg/m²), roughly equal numbers of males and females, and ages ranging from 23 to 89 years.
- 3rd Harmonic Imaging (3-HI) Phantom Study:
- Sample Size: Five abdominal phantoms.
- Data Provenance: Phantom data.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- 3rd Harmonic Imaging (3-HI) Clinical Evaluation:
- Number of Experts: Three (3).
- Qualifications: U.S. board-certified radiologists.
4. Adjudication Method for the Test Set
- 3rd Harmonic Imaging (3-HI) Clinical Evaluation:
- Adjudication Method: Blinded observer study. The three radiologists compared images with 3-HI to images without 3-HI (predicate functionality) using a 5-point ordinal scale. The median score was then compared with the middle score of 3.
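The summary does not name the exact statistical test used to compare the scores against the midpoint of 3. A common choice for this kind of one-sample ordinal comparison is an exact sign test; the sketch below uses illustrative ratings, not the study's data.

```python
from math import comb

def sign_test_above_midpoint(scores, midpoint=3):
    """One-sided exact sign test: are scores systematically above `midpoint`?

    Ties (scores equal to the midpoint) are discarded, as is standard for
    the sign test. Returns (n_above, n_used, p_value).
    """
    above = sum(1 for s in scores if s > midpoint)
    below = sum(1 for s in scores if s < midpoint)
    n = above + below
    # Exact upper-tail probability P(X >= above) for X ~ Binomial(n, 0.5)
    p = sum(comb(n, k) for k in range(above, n + 1)) / 2 ** n
    return above, n, p

# Illustrative ratings on a 5-point ordinal scale (not the study's data)
ratings = [4, 4, 5, 3, 4, 5, 4, 3, 4, 5, 4, 4]
above, n, p = sign_test_above_midpoint(ratings)
print(above, n, round(p, 4))
```

A significant result here only says the scores sit above the scale midpoint; it does not quantify how far above, which is consistent with how the summary reports the outcome.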
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Yes, an MRMC-like comparative effectiveness study was done for 3-HI's clinical evaluation.
- Effect Size: The statistical analysis demonstrated that scores for 3-HI were higher than the middle score of 3 for spatial resolution, contrast resolution, and artifact suppression. This indicates that human readers (radiologists) rated images with 3-HI as improved compared to those without.
6. Standalone (Algorithm Only Without Human-in-the-Loop Performance)
- Yes, a standalone study was performed for 3-HI in the phantom study. The phantom studies objectively examined lateral and axial resolution, slice resolution, contrast-to-noise ratio (CNR), reverberation artifact suppression, and frequency spectra without human interpretation.
7. The Type of Ground Truth Used
- 3rd Harmonic Imaging (3-HI) Clinical Evaluation:
- Type of Ground Truth: Expert consensus (from the three board-certified radiologists) on image quality metrics (spatial resolution, contrast resolution, artifact suppression) through a blinded comparison against predicate functionality. The initial selection of patient images included "representative focal pathologies" suggesting clinical relevance in the images themselves.
- 3rd Harmonic Imaging (3-HI) Phantom Study:
- Type of Ground Truth: Objective measurements against known physical properties and targets within the phantoms.
8. The Sample Size for the Training Set (for 3-HI)
- The document explicitly states that "the validation data set [30 patients] was entirely independent of the data set used to train the algorithm during its development." However, the actual sample size for the training set is not provided in the given text.
9. How the Ground Truth for the Training Set Was Established (for 3-HI)
- This information is not provided in the given text, beyond the statement that the algorithm was "locked upon completion of development" and had "no post-market, continuous learning capability."
This device is a digital radiography/fluoroscopy system used in a diagnostic and interventional angiography configuration. The system is indicated for use in diagnostic and angiographic procedures for blood vessels in the heart, brain, abdomen and lower extremities.
αEvolve Imaging is an imaging chain intended for adults, with Artificial Intelligence Denoising (AID), designed to reduce noise in real-time fluoroscopic images, and a signal enhancement algorithm, Multi Frequency Processing (MFP).
The Alphenix, INFX-8000V/B, INFX-8000V/S, V9.6 with αEvolve Imaging, is an interventional X-ray system with a floor mounted C-arm as its main configuration. An optional ceiling mounted C-arm is available to provide a bi-plane configuration where required. Additional units include a patient table, X-ray high-voltage generator and a digital radiography system. The C-arms can be configured with designated X-ray detectors and supporting hardware (e.g. X-ray tube and diagnostic X-ray beam limiting device). The Alphenix, INFX-8000V/B, INFX-8000V/S, V9.6 with αEvolve Imaging includes αEvolve Imaging, an imaging chain intended for adults, with Artificial Intelligence Denoising (AID), designed to reduce noise in real-time fluoroscopic images, and a signal enhancement algorithm, Multi Frequency Processing (MFP).
Here's an analysis of the acceptance criteria and the study proving the device meets them, based solely on the provided FDA 510(k) summary:
Overview of the Device and its New Feature:
The device is the Alphenix, INFX-8000V/B, INFX-8000V/S, V9.6 with αEvolve Imaging. It's an interventional X-ray system. The new feature, αEvolve Imaging, includes Artificial Intelligence Denoising (AID) to reduce noise in real-time fluoroscopic images and a signal enhancement algorithm, Multi Frequency Processing (MFP). The primary claim appears to be improved image quality (noise reduction, sharpness, contrast, etc.) compared to the previous version's (V9.5) "super noise reduction filter (SNRF)."
1. Table of Acceptance Criteria and Reported Device Performance
The 510(k) summary does not explicitly state "acceptance criteria" with numerical thresholds for each test. Instead, it describes various performance evaluations and their successful outcomes. For the clinical study, the success criteria are clearly defined.
| Acceptance Criteria (Inferred/Stated) | Reported Device Performance |
|---|---|
| Bench Testing (Image Quality) | |
| 1. Change in Image Level, Noise & Structure: AID to be better at preserving mean image intensity, improved denoising, and image structure preservation compared to SNRF. | AID determined to be better at preserving mean image intensity and suggested to have improved denoising and image structure preservation (using Student's t-test). |
| 2. Signal-to-Variance Ratio (SVR) and Signal-to-Noise Ratio (SNR): AID to show improved ability to preserve image signal while decreasing image noise compared to SNRF. | AID determined to have improved ability to preserve image signal while decreasing image noise (using Student's t-test). |
| 3. Modulation Transfer Function (MTF): Improved performance for low-to-mid frequencies and similar high-frequency region compared to SNRF. | Results showed improved performance for low-to-mid frequencies in all test cases, and the high-frequency region of the MTF curve was similar for AID and SNRF in the majority of cases (using Student's t-test). |
| 4. Robustness to Detector Defects: Detector defects to be sufficiently obvious to inform clinician of service need, and image quality outside the defect area to remain visually unaffected, facilitating procedure completion. | Detector defects were sufficiently obvious, and image quality outside the area of the detector defect remained visually unaffected, facilitating sufficient image quality to finish the procedure. |
| 5. Normalized Noise Power Spectrum (NNPS): AID to have smaller noise magnitude in the frequency range of ~0.1 cycles/mm to 1.4 cycles/mm, with negligible differences above 1.4 cycles/mm. | AID had a smaller noise magnitude in the frequency range of ~0.1 cycles/mm to 1.4 cycles/mm. Noise magnitudes above 1.4 cycles/mm were very small and differences considered negligible. |
| 6. Image Lag Measurement: AID to perform better in reducing image lag compared to SNRF. | AID determined to perform better in reducing image lag (using Student's t-test). |
| 7. Contrast-to-Noise Ratio (CNR) of Low Contrast Object: AID to show significantly higher CNR for low-contrast elements compared to SNRF. | AID had a significantly higher CNR than images processed with SNRF for all elements and test cases (using Student's t-test). |
| 8. Contrast-to-Noise Ratio (CNR) of High Contrast Object: AID to show significantly higher CNR for high-contrast objects (guidewire, vessels) compared to SNRF. | AID had a significantly higher vessel and guidewire CNR than images processed with SNRF for all test cases (using Student's t-test). |
| Clinical Study (Reader Study) | |
| Overall Preference (Binomial Test): Image sequences denoised by AID chosen significantly more than 50% of the time over SNRF. | The Binomial test found that image sequences denoised by AID were chosen significantly more than 50% of the time (indicating overall preference). |
| Individual Image Quality Metrics (Wilcoxon Signed Rank Test): Mean score of AID images significantly higher than SNRF for sharpness, contrast, confidence, noise, and absence of image artifacts. | The mean score of AID imaging chain images was significantly higher than that of the SNRF imaging chain for sharpness, contrast, confidence, noise, and the absence of image artifacts. |
| Generalizability: Algorithm to demonstrate equivalent or improved performance compared to the predicate with diverse clinical data. | Concluded that the subject algorithm demonstrated equivalent or improved performance, compared to the predicate device, as demonstrated by the results of the above testing. |
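The Wilcoxon signed rank test cited for the per-metric reader scores can be sketched in a few lines using the standard normal approximation (adequate for the moderate sample sizes typical of reader studies). The paired scores below are illustrative, not the study's data.

```python
from math import erf, sqrt

def wilcoxon_signed_rank(x, y):
    """One-sided Wilcoxon signed-rank test (normal approximation),
    H1: x tends to score higher than y. Zero differences are dropped;
    tied |differences| receive average ranks. Returns (W_plus, p_value).
    """
    diffs = [a - b for a, b in zip(x, y) if a != b]
    n = len(diffs)
    # Rank |d| with average ranks assigned to ties
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # average of rank positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    mean = n * (n + 1) / 4
    sd = sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mean) / sd
    p = 1 - 0.5 * (1 + erf(z / sqrt(2)))  # one-sided upper-tail p-value
    return w_plus, p

# Illustrative paired reader scores (AID vs. SNRF), not the study's data
aid  = [4, 5, 4, 4, 3, 5, 4, 4, 5, 4]
snrf = [3, 4, 4, 3, 3, 4, 3, 4, 4, 3]
w, p = wilcoxon_signed_rank(aid, snrf)
print(w, round(p, 4))
```

Production analyses would typically use an exact distribution or a tie-corrected variance; this sketch keeps only the core mechanics.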
2. Sample Size Used for the Test Set and Data Provenance
The 510(k) summary provides the following information about the clinical test set:
- Clinical Dataset Source: Patient image sequences were acquired from three hospitals:
- Memorial Hermann Hospital (Houston, Texas, USA)
- Waikato Hospital (Hamilton, New Zealand)
- Saiseikai Kumamoto Hospital (Kumamoto, Japan)
- Data Provenance: The study used previously acquired patient image sequences for a side-by-side comparison. The summary does not specify whether the original acquisition was prospective, but the evaluation of pre-existing sequences makes this a retrospective study for the purpose of algorithm evaluation.
- Sample Size: The exact number of patient image sequences or cases used in the clinical test set is not specified in the provided document. It only mentions that the sequences were split into four BMI subgroups.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
- Number of Experts: The document states the clinical comparison was "reviewed by United States board-certified interventional cardiologists." The exact number of cardiologists is not specified.
- Qualifications: "United States board-certified interventional cardiologists." No mention of years of experience or other specific qualifications is provided.
4. Adjudication Method for the Test Set
The document describes a "side-by-side comparison" reviewed by experts in the clinical performance testing section. For the overall preference and individual image quality metrics, statistical tests (Wilcoxon signed rank test and Binomial test) were used. This implies that the experts rated or expressed preference for both AID and SNRF images, and these individual ratings/preferences were then aggregated and analyzed.
The exact adjudication method (e.g., 2+1, 3+1 consensus) for establishing a ground truth or a final decision on image quality aspects is not explicitly stated. It seems each expert provided their assessment, and these assessments were then statistically analyzed for superiority rather than reaching a consensus for each image pair.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study and Effect Size of AI Assistance
- MRMC Study: Yes, a type of MRMC comparative study was conducted. The clinical performance testing involved multiple readers (US board-certified interventional cardiologists) evaluating multiple cases (patient image sequences).
- Effect Size of Human Readers' Improvement with AI Assistance: The study directly compared AID-processed images to SNRF-processed images in a side-by-side fashion. It doesn't measure how much humans improve with AI assistance in a diagnostic task (e.g., how much their accuracy or confidence improves when using AI vs. not using AI). Instead, it measures the perceived improvement in image quality of the AI-processed images when evaluated by human readers.
- The study determined: "the mean score of the AID imaging chain images was significantly higher than that of the SNRF imaging chain with regard to sharpness, contrast, confidence, noise, and the absence of image artifacts."
- And for overall preference, "the Binomial test found that the image sequences denoised by AID were chosen significantly more than 50% of the time."
This indicates a statistically significant preference for and higher perceived image quality in AID-processed images by readers. However, it does not quantify diagnostic performance improvement with AI assistance, as it wasn't a study of diagnostic accuracy but rather image quality assessment. The "confidence" metric might hint at improved reader confidence using AID images, but it's not a direct measure of diagnostic effectiveness.
6. Standalone (Algorithm Only Without Human-in-the-Loop) Performance Study
Yes, extensive standalone performance testing of the AID algorithm was conducted through "Performance Testing – Bench" and "Image Quality Evaluations." This involved objective metrics and phantom studies without human subjective assessment.
Examples include:
- Change in Image Level, Noise and Structure
- Signal-to-Variance Ratio (SVR) and Signal-to-Noise Ratio (SNR)
- Modulation Transfer Function (MTF)
- Robustness to Detector Defects (visual comparison, but the algorithm's output is purely standalone)
- Normalized Noise Power Spectrum (NNPS)
- Image Lag Measurement
- Contrast-to-Noise Ratio of a Low Contrast Object
- Contrast-to-Noise Ratio of a High Contrast Object
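The CNR metrics listed above follow the conventional definition: the absolute difference between object and background mean intensities, divided by the background standard deviation. The exact ROI placement and normalization used in the 510(k) bench tests are not disclosed, so this is a minimal sketch with hypothetical pixel values.

```python
from statistics import mean, stdev

def cnr(object_roi, background_roi):
    """Contrast-to-noise ratio: |mean(object) - mean(background)| / std(background).

    This is the textbook definition; vendor bench tests may differ in ROI
    selection and noise estimation, which the summary does not specify.
    """
    return abs(mean(object_roi) - mean(background_roi)) / stdev(background_roi)

# Hypothetical pixel intensities from a low-contrast phantom element
# and a nearby background region
obj = [112, 115, 110, 113, 114, 111, 116, 112]
bg  = [100, 102, 98, 101, 99, 103, 97, 100]
print(round(cnr(obj, bg), 2))
```

Comparing this quantity between AID- and SNRF-processed images of the same phantom is what the paired Student's t-tests in the table operate on.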
7. The Type of Ground Truth Used
- For Bench Testing: The ground truth for bench tests was primarily established through physical phantoms and objective image quality metrics. For example, the anthropomorphic chest phantom, low-contrast phantom, and flat field fluoroscopic images provided known characteristics against which AID and SNRF performance were measured using statistical tests.
- For Clinical Study: The ground truth for the clinical reader study was established by expert opinion/subjective evaluation (preference and scores for sharpness, contrast, noise, confidence, absence of artifacts) from "United States board-certified interventional cardiologists." There is no mention of a more objective ground truth like pathology or outcomes data for the clinical image evaluation.
8. The Sample Size for the Training Set
The document does not provide any information about the sample size used for the training set of the Artificial Intelligence Denoising (AID) algorithm.
9. How the Ground Truth for the Training Set was Established
The document does not provide any information about how the ground truth for the training set was established. It describes the AID as "Artificial Intelligence Denoising (AID) designed to reduce noise," implying a machine learning approach, but details on its training are missing from this summary.
Self-Propelled CT Scan Base Kit, CGBA-035A:
The movable gantry base unit allows the Aquilion ONE (TSX-308A) system to be installed in the same procedure room as the INFX-8000C system, enabling coordinated clinical use within a shared workspace. This configuration provides longitudinal positioning along the z-axis for image acquisition.
Alphenix, INFX-8000C/B, INFX-8000C/S, V9.6 with Calculated DAP:
This device is a digital radiography/fluoroscopy system used in a diagnostic and interventional angiography configuration. The system is indicated for use in diagnostic and angiographic procedures for blood vessels in the heart, brain, abdomen and lower extremities. The Calculated Dose Area Product (DAP) feature provides an alternative method for determining dose metrics without the use of a physical area dosimeter. This function estimates the cumulative reference air kerma, reference air kerma rate, and cumulative dose area product based on system parameters, including X-ray exposure settings, beam hardening filter configuration, beam limiting device position, and region of interest (ROI) filter status. The calculation method is calibration-dependent, with accuracy contingent upon periodic calibration against reference measurements.
The Alphenix 4DCT is composed of the INFX-8000C interventional angiography system and the dynamic volume CT system, Aquilion ONE, TSX-308A. This combination enables patient access and efficient workflow for interventional procedures. Self-Propelled CT Scan Base Kit, CGBA-035A, is an optional kit intended to be used in conjunction with an Aquilion ONE / INFX-8000C based IVR-CT system. This device is attached to the Aquilion ONE CT gantry to support longitudinal movement and allow image acquisition in the z-direction (Z-axis), both axial and helical. When this option is installed, the standard CT patient couch is replaced with the fixed catheterization table utilized by the interventional x-ray system, INFX-8000C. The Self-Propelled CT Scan Base Kit, CGBA-035A, will be used as part of an Aquilion ONE / INFX-8000C based IVR-CT system. Please note, the intended uses of the Aquilion ONE CT System and the INFX-8000C Interventional X-Ray System remain the same. There have been no modifications made to the imaging chains in these FDA cleared devices and the base system software remains the same. Since both systems will be installed in the same room and to prevent interference during use, system interlocks have been incorporated into the systems.
The Alphenix, INFX-8000C/B, INFX-8000C/S, V9.6 with Calculated DAP, is an interventional x-ray system with a ceiling suspended C-arm as its main configuration. Additional units include a patient table, x-ray high-voltage generator and a digital radiography system. The C-arms can be configured with designated x-ray detectors and supporting hardware (e.g. x-ray tube and diagnostic x-ray beam limiting device). The INFX-8000C system incorporates a Calculated Dose Area Product (DAP) feature, which provides an alternative method for determining dose metrics without the use of a physical area dosimeter. This function estimates the cumulative reference air kerma, reference air kerma rate, and cumulative dose area product based on system parameters, including X-ray exposure settings, beam hardening filter configuration, beam limiting device position, and region of interest (ROI) filter status. The calculation method is calibration-dependent, with accuracy contingent upon periodic calibration against reference measurements.
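The summary does not disclose Canon's actual calculation or calibration procedure, but the physics behind a calculated DAP can be sketched: dose area product is air kerma multiplied by the beam's cross-sectional area in the same plane, and because kerma falls off as 1/d² while the beam area grows as d², the product is (to first order) independent of where along the beam it is evaluated. That distance invariance is what makes estimating DAP from generator settings and collimator position practical. The numbers below are hypothetical.

```python
def dap_from_kerma(air_kerma_mGy, field_w_cm, field_h_cm):
    """Dose area product (mGy*cm^2) as air kerma times the beam's
    cross-sectional area, both taken in the same plane.

    A real calculated-DAP feature additionally folds in beam hardening
    filters, ROI filter status, and periodic calibration factors, which
    are not modeled in this sketch.
    """
    return air_kerma_mGy * field_w_cm * field_h_cm

# Hypothetical exposure: 2.0 mGy reference air kerma over a 20 cm x 20 cm field
print(dap_from_kerma(2.0, 20.0, 20.0))  # prints 800.0
```

Doubling the distance quarters the kerma but quadruples the field area, so the two calls below give the same DAP, illustrating the invariance described above.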
Vantage Fortian/Orian 1.5T systems are indicated for use as a diagnostic imaging modality that produces cross-sectional transaxial, coronal, sagittal, and oblique images that display anatomic structures of the head or body. Additionally, this system is capable of non-contrast enhanced imaging, such as MRA.
MRI (magnetic resonance imaging) images correspond to the spatial distribution of protons (hydrogen nuclei) that exhibit nuclear magnetic resonance (NMR). The NMR properties of body tissues and fluids are:
- Proton density (PD) (also called hydrogen density)
- Spin-lattice relaxation time (T1)
- Spin-spin relaxation time (T2)
- Flow dynamics
- Chemical Shift
Depending on the region of interest, contrast agents may be used. When interpreted by a trained physician, these images yield information that can be useful in diagnosis.
The Vantage Fortian (Model MRT-1550/WK, WM, WO, WQ)/Vantage Orian (Model MRT-1550/U3, U4, U7, U8) is a 1.5 Tesla Magnetic Resonance Imaging (MRI) System. These Vantage Fortian/Orian models use a short (1.4 m), lightweight (4.1 t) magnet. They include the Canon Pianissimo™ Sigma and Pianissimo Zen technology (scan noise reduction technology). The design of the gradient coil and the whole-body coil of these Vantage Fortian/Orian models provides a maximum field of view of 55 x 55 x 50 cm and includes the standard (STD) gradient system.
The Vantage Orian (Model MRT-1550/UC, UD, UG, UH, UK, UL, UO, UP) is a 1.5 Tesla Magnetic Resonance Imaging (MRI) System. The Vantage Orian models use a short (1.4 m), lightweight (4.1 t) magnet. They include the Canon Pianissimo™ and Pianissimo Zen technology (scan noise reduction technology). The design of the gradient coil and the whole-body coil of these Vantage Orian models provides a maximum field of view of 55 x 55 x 50 cm. The Model MRT-1550/UC, UD, UG, UH, UK, UL, UO, UP includes the XGO gradient system.
This system is based upon the technology and materials of previously marketed Canon Medical Systems MRI systems and is intended to acquire and display cross-sectional transaxial, coronal, sagittal, and oblique images of anatomic structures of the head or body. The Vantage Fortian/Orian MRI System is comparable to the current 1.5T Vantage Fortian/Orian MRI System (K240238), cleared April 12, 2024, with the following modifications.
Acceptance Criteria and Study for Canon Medical Systems Vantage Fortian/Orian 1.5T with AiCE Reconstruction Processing Unit for MR
This document outlines the acceptance criteria and the study conducted to demonstrate that the Canon Medical Systems Vantage Fortian/Orian 1.5T with AiCE Reconstruction Processing Unit for MR (V10.0) device meets these criteria, specifically focusing on the new features: 4D Flow, Zoom DWI, and PIQE.
The provided text focuses on the updates in V10.0 of the device, which primarily include software enhancements: 4D Flow, Zoom DWI, and an extended Precise IQ Engine (PIQE). The acceptance criteria and testing are described for these specific additions.
1. Table of Acceptance Criteria and Reported Device Performance
The general acceptance criterion for all new features appears to be demonstrating clinical acceptability and performance that is either equivalent to or better than conventional methods, maintaining image quality, and confirming intended functionality. Specific quantitative acceptance criteria are not explicitly detailed in the provided document beyond qualitative assessments and comparative statements.
| Feature | Acceptance Criteria (Implied from testing) | Reported Device Performance |
|---|---|---|
| 4D Flow | Phantom velocity measurements, with and without PIQE, should meet the acceptance criteria for known flow values. Images in volunteers should demonstrate velocity streamlines consistent with physiological flow. | The testing confirmed that the flow velocity of the 4D Flow sequence met the acceptance criteria. Images in volunteers demonstrated velocity streamlines. |
| Zoom DWI | Effective suppression of wraparound artifacts in the PE direction. Reduction of image distortion level when setting a smaller PE-FOV. Accurate measurement of ADC values. | Testing confirmed that Zoom DWI is effective for suppressing wraparound artifacts in the PE direction; setting a smaller PE-FOV in Zoom DWI scan can reduce the image distortion level; and the ADC values can be measured accurately. |
| PIQE (Bench Testing) | Generate higher in-plane matrix images from low matrix images. Mitigate ringing artifacts. Maintain similar or better contrast and SNR compared to standard clinical techniques. Achieve sharper edges. | Bench testing demonstrated that PIQE generates images with sharper edges while mitigating the smoothing and ringing effects and maintaining similar or better contrast and SNR compared to standard clinical techniques (zero-padding interpolation and typical clinical filters). |
| PIQE (Clinical Image Review) | Images reconstructed with PIQE should be scored clinically acceptable or better by radiologists/cardiologists across various categories (ringing, sharpness, SNR, overall image quality (IQ), and feature conspicuity). PIQE should generate higher spatial in-plane resolution images from lower resolution images (e.g., triple matrix dimensions, 9x factor). PIQE should contribute to ringing artifact reduction, denoising, and increased sharpness. PIQE should be able to accelerate scanning by reducing acquisition matrix while maintaining clinical matrix size and image quality. PIQE benefits should be obtainable on regular clinical protocols without requiring acquisition parameter adjustment. Reviewer agreement should be strong. | The resulting reconstructions were scored on average at, or above, clinically acceptable. Exhibiting a strong agreement at the "good" and "very good" level in the IQ metrics, the Reviewers' scoring confirmed all the specific criteria listed (higher spatial resolution, ringing reduction, denoising, sharpness, acceleration, and applicability to regular protocols). |
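One of the Zoom DWI criteria above is accurate ADC measurement. ADC is conventionally estimated from the mono-exponential diffusion model S_b = S_0 · exp(-b · ADC); a minimal two-point estimate can be sketched as follows, using simulated signals rather than the study's phantom data.

```python
from math import log, exp

def adc_two_point(s0, sb, b):
    """ADC (mm^2/s) from the mono-exponential model S_b = S_0 * exp(-b * ADC),
    using one b=0 measurement and one diffusion-weighted measurement.

    Clinical protocols typically fit several b-values by least squares;
    this is the minimal two-point version of the same model.
    """
    return log(s0 / sb) / b

# Simulated signals: b = 1000 s/mm^2, generated from a true ADC of 1.0e-3 mm^2/s
s0 = 500.0
sb = s0 * exp(-1000 * 1.0e-3)
print(round(adc_two_point(s0, sb, 1000), 6))
```

A phantom with known diffusivity provides the ground-truth ADC against which such estimates are checked, which is how "ADC values can be measured accurately" is verifiable without human readers.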
2. Sample Size Used for the Test Set and Data Provenance
- 4D Flow & Zoom DWI: Evaluated utilizing phantom images and "representative volunteer images." Specific numbers for volunteers are not provided.
- PIQE Clinical Image Review Study:
- Subjects: A total of 75 unique subjects.
- Scans: Comprising a total of 399 scans.
- Reconstructions: Each scan was reconstructed multiple ways with or without PIQE, totaling 1197 reconstructions for scoring.
- Data Provenance: Subjects were from two sites in USA and Japan. The study states that although the dataset includes subjects from outside the USA, the population is expected to be representative of the intended US population due to PIQE being an image post-processing algorithm that is not disease-specific and not dependent on factors like population variation or body habitus.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
- PIQE Clinical Image Review Study:
- Number of Experts: 14 USA board-certified radiologists/cardiologists.
- Distribution: 3 experts per anatomy (Body, Breast, Cardiac, Musculoskeletal (MSK), and Neuro).
- Qualifications: "USA board-certified radiologists/cardiologists." Specific years of experience are not mentioned.
4. Adjudication Method for the Test Set
- PIQE Clinical Image Review Study: The study describes a randomized, blinded clinical image review study. Images reconstructed with either the conventional method or the new PIQE method were randomized and blinded to the reviewers. Reviewers scored the images independently using a modified 5-point Likert scale. Analytical methods used included Gwet's Agreement Coefficient for reviewer agreement and Generalized Estimating Equations (GEE) for differences between reconstruction techniques, implying a statistical assessment of agreement and comparison across reviewers rather than a simple consensus adjudication method (e.g., 2+1, 3+1).
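The summary names Gwet's Agreement Coefficient but not the exact variant (basic AC1 vs. the weighted AC2 often preferred for ordinal Likert data), nor how the 3-reviewers-per-anatomy design was handled. As an illustration only, the two-rater AC1 can be sketched as follows, with hypothetical reviewer scores.

```python
def gwet_ac1(r1, r2, categories):
    """Gwet's AC1 chance-corrected agreement for two raters.

    pa is the observed proportion of items on which the raters agree;
    pe is chance agreement computed from the mean category propensities
    pi_k as (1 / (q - 1)) * sum_k pi_k * (1 - pi_k), where q is the
    number of categories. AC1 = (pa - pe) / (1 - pe).
    """
    n = len(r1)
    q = len(categories)
    pa = sum(1 for a, b in zip(r1, r2) if a == b) / n
    pe = 0.0
    for c in categories:
        pi = (r1.count(c) + r2.count(c)) / (2 * n)
        pe += pi * (1 - pi) / (q - 1)
    return (pa - pe) / (1 - pe)

# Hypothetical 5-point Likert scores from two reviewers on ten reconstructions
rev1 = [4, 4, 5, 3, 4, 5, 4, 4, 3, 5]
rev2 = [4, 4, 5, 4, 4, 5, 4, 4, 3, 4]
print(round(gwet_ac1(rev1, rev2, [1, 2, 3, 4, 5]), 3))
```

Unlike Cohen's kappa, AC1's chance term does not collapse when one category dominates, which is why it is a common choice for image-quality studies where most scores cluster at "good" or better.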
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Yes, an MRMC comparative effectiveness study was done for PIQE.
- Effect Size of Human Readers' Improvement with AI vs. Without AI Assistance: The document states that "the Reviewers' scoring confirmed that: (a) PIQE generates higher spatial in-plane resolution images from lower resolution images (with the ability to triple the matrix dimensions in both in-plane directions, i.e. a factor of 9x); (b) PIQE contributes to ringing artifact reduction, denoising and increased sharpness; (c) PIQE is able to accelerate scanning by reducing the acquisition matrix only, while maintaining clinical matrix size and image quality; and (d) PIQE benefits can be obtained on regular clinical protocols without requiring acquisition parameter adjustment."
- While the document reports positive outcomes ("scored on average at, or above, clinically acceptable"; "strong agreement at the 'good' and 'very good' level"), it does not provide a quantitative effect size (e.g., AUC difference or percentage improvement in diagnostic accuracy) for human readers with PIQE assistance versus without it. The focus is on the quality of PIQE-reconstructed images as perceived by experts, rather than on diagnostic accuracy or reader performance metrics; the document confirms only that performance is "similar or better" compared to conventional methods.
6. Standalone (Algorithm Only) Performance Study
- Yes, standalone performance was conducted for PIQE and other features.
- 4D Flow and Zoom DWI: Evaluated using phantom images, which represents standalone, objective measurement of the algorithm's performance against known physical properties.
- PIQE: Bench testing was performed on typical clinical images to evaluate metrics like Edge Slope Width (sharpness), Ringing Variable Mean (ringing artifacts), Signal-to-Noise ratio (SNR), and Contrast Ratio. This is an algorithmic-only evaluation against predefined metrics, without direct human interpretation as part of the performance metric.
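The submission does not describe how these bench metrics were implemented. As a hedged illustration only, SNR- and contrast-ratio-style measurements are commonly derived from region-of-interest (ROI) statistics along the following lines; the function names and ROI conventions are assumptions.

```python
import numpy as np

def roi_snr(image, signal_roi, background_roi):
    """SNR as mean signal over background standard deviation
    (one common convention; the actual definition used is not disclosed)."""
    signal = image[signal_roi].mean()
    background = image[background_roi]
    return signal / background.std()

def contrast_ratio(image, roi_a, roi_b):
    """Ratio of mean intensities between two regions of interest."""
    return image[roi_a].mean() / image[roi_b].mean()

# Illustrative synthetic image: a bright patch next to a noisy background.
img = np.array([[40.0, 40.0, 8.0, 12.0],
                [40.0, 40.0, 8.0, 12.0]])
sig = (slice(0, 2), slice(0, 2))   # bright region
bg = (slice(0, 2), slice(2, 4))    # background region
print(roi_snr(img, sig, bg), contrast_ratio(img, sig, bg))
```

Comparing such metrics before and after a filter quantifies whether the filter preserved signal and contrast while suppressing noise.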
7. Type of Ground Truth Used
- 4D Flow & Zoom DWI:
- Phantom Studies: Known physical values (e.g., known flow values for velocity measurement, known distortion levels, known ADC values).
- PIQE:
- Bench Testing: Quantitative imaging metrics derived from the images themselves (Edge Slope Width, Ringing Variable Mean, SNR, Contrast Ratio) are used to assess the impact of the algorithm. No external ground truth (like pathology) is explicitly mentioned here, as the focus is on image quality enhancement.
- Clinical Image Review Study: Expert consensus/opinion (modified 5-point Likert scale scores from 14 board-certified radiologists/cardiologists) was used as the ground truth for image quality, sharpness, ringing, SNR, and feature conspicuity, compared against images reconstructed with conventional methods. No pathology or outcomes data is mentioned as ground truth.
8. Sample Size for the Training Set
The document explicitly states that the 75 unique subjects used in the PIQE clinical image review study were "separate from the training data sets." However, it does not specify the sample size for the training set used for the PIQE deep learning model.
9. How the Ground Truth for the Training Set Was Established
The document does not provide information on how the ground truth for the training set for PIQE was established. It only mentions that the study test data sets were separate from the training data sets.
(232 days)
This device is indicated to acquire and display cross-sectional volumes of the whole body (abdomen, pelvis, chest, extremities, and head) of adult patients.
TSX-501R has the capability to provide volume sets. These volume sets can be used to perform specialized studies, using indicated software/hardware, by a trained and qualified physician.
CT Scanner TSX-501R/1 V11.1 employs a next-generation X-ray detector unit (photon counting detector unit), which allows images to be obtained based on X-rays with different energy levels. This device captures cross sectional volume data sets used to perform specialized studies, using indicated software/hardware, by a trained and qualified physician. This system is based upon the technology and materials of previously marketed Canon CT systems.
Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided 510(k) clearance letter.
It's important to note that a 510(k) summary typically doesn't provide the full, granular detail of a clinical study report. The information often indicates what was tested and the conclusion, but less about the specific methodologies, statistical thresholds for acceptance, or detailed performance metrics.
Understanding the Context: 510(k) Clearance
This document is a 510(k) clearance letter for a new CT scanner (CT Scanner TSX-501R/1 V11.1). The primary goal of a 510(k) submission is to demonstrate "substantial equivalence" to a legally marketed predicate device, not necessarily to prove absolute safety and effectiveness through extensive new clinical trials (which is more typical for a PMA - Premarket Approval). Therefore, the "acceptance criteria" and "study" described here are geared towards demonstrating this equivalence.
The core technology difference is the shift from an Energy Integrating Detector (EID) in the predicate to a Photon Counting Detector in the new device. The testing focuses on ensuring this new detector performs equivalently or better in terms of image quality and safety.
Acceptance Criteria and Reported Device Performance
Given the nature of a 510(k) for a CT scanner's hardware update (new detector), the "acceptance criteria" are implicitly tied to demonstrating equivalent or improved image quality and safety compared to the predicate device. The performance is assessed through bench testing with phantoms and review of clinical images.
Table of Acceptance Criteria and Reported Device Performance:
| Category | Acceptance Criteria (Implicit) | Reported Device Performance (as stated in the summary) |
|---|---|---|
| Objective Image Quality Performance (using phantoms) | Equivalent or improved performance compared to the predicate device regarding: Contrast-to-Noise Ratio (CNR); CT Number Accuracy; Uniformity; Pulse Pileup; Slice Sensitivity Profile (SSPz); Modulation Transfer Function (MTF); Standard Deviation of Noise and Pulse Pileup; Noise Power Spectra (NPS); Low Contrast Detectability (LCD) | "It was concluded that the subject device demonstrated equivalent or improved performance, compared to the predicate device, as demonstrated by the results of the above testing." |
| Fundamental Properties of the Photon Counting Detector (using phantoms) | Effectiveness and performance equivalent to expectations or to the predicate device for: detector resolution and noise properties (MTF and DQE); artifact analysis; count rate vs. current curve; pulse pileup / maximum count rate; lag/residual signal levels; stability over time; bad pixel map | "These bench studies utilized phantom data and achieved results demonstrative of equivalent performance in comparison with the predicate device." |
| Clinical Image Quality (Human Review) | Reconstructed images using the subject device are of diagnostic quality. | "It was confirmed that the reconstructed images using the subject device were of diagnostic quality." |
| Safety & Standards Conformance | Conformance to relevant electrical, radiation, software, and cybersecurity standards and regulations. | "This device is in conformance with the applicable parts of the following standards [list provided]... Additionally, this device complies with all applicable requirements of the radiation safety performance standards..." |
| Risk Analysis & Verification/Validation | Established specifications for the device have been met, and risks are adequately managed. | "Risk analysis and verification/validation activities conducted through bench testing demonstrate that the established specifications for the device have been met." |
| Software Documentation & Cybersecurity | Adherence to FDA guidance documents for software functions and cybersecurity. | "Software Documentation for a Basic Documentation Level... is included... Cybersecurity documentation... was included..." |
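Several of the tabulated metrics, notably MTF, are classically estimated from phantom edge images. The submission does not disclose the actual method used; as an illustrative sketch under that caveat, an edge-based MTF estimate differentiates the edge spread function (ESF) to get the line spread function (LSF) and takes its Fourier transform:

```python
import numpy as np

def edge_mtf(esf, spacing=1.0):
    """Estimate an MTF curve from a 1-D edge spread function.

    esf: intensity profile across an edge (low plateau to high plateau).
    spacing: sample pitch, e.g. in mm; frequencies come out in cycles/mm.
    Steps: differentiate (ESF -> LSF), taper with a Hanning window to
    suppress noise at the tails, FFT, and normalize so MTF(0) = 1.
    """
    lsf = np.gradient(np.asarray(esf, dtype=float), spacing)
    lsf = lsf * np.hanning(len(lsf))
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]
    freqs = np.fft.rfftfreq(len(lsf), d=spacing)
    return freqs, mtf
```

A sharper system keeps the MTF closer to 1 out to higher spatial frequencies; comparing curves between the subject and predicate devices is one way to support an "equivalent or improved" claim.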
Study Details:
- Sample Size Used for the Test Set and Data Provenance:
- Test Set (Clinical Images): The specific number of clinical images/cases reviewed is not provided. The text states "Representative chest, abdomen, brain and MSK diagnostic images." This implies a selection of images from various body regions.
- Data Provenance: The document does not specify the country of origin for the clinical images. It also does not explicitly state whether the data was retrospective or prospective, though for a 510(k) supporting equivalence, retrospective data collection for image review is common.
- Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:
- Number of Experts: The document states "reviewed by American Board-Certified Radiologists." The specific number is not provided.
- Qualifications: "American Board-Certified Radiologists." This indicates a high level of qualification and experience in medical imaging interpretation.
- Adjudication Method for the Test Set:
- The document does not specify an adjudication method (like 2+1 or 3+1) for the clinical image review. It simply states they were "reviewed by American Board-Certified Radiologists" and "it was confirmed that the reconstructed images using the subject device were of diagnostic quality." This implies a consensus or individual assessment of diagnostic quality, but the process is not detailed.
- Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:
- Was it done? No, a formal MRMC comparative effectiveness study demonstrating how human readers improve with AI vs. without AI assistance was not conducted or described for this submission. This makes sense as the device is a CT scanner itself, not an AI-assisted diagnostic software. The clinical image review was to confirm diagnostic quality of the images produced by the new scanner, not to assess reader performance with or without an AI helper.
- Standalone (Algorithm Only) Performance:
- Was it done? Yes, in a sense. The "bench testing" focusing on Objective Image Quality Evaluations and Fundamental Properties of the Photon Counting Detector can be considered "standalone" performance for the device's imaging capabilities. These tests used phantoms and measured technical specifications without human interpretation as the primary endpoint. The device's stated function is to acquire and display images, so its "standalone" performance is its ability to produce good images.
- Type of Ground Truth Used:
- Bench Testing (Phantoms): The ground truth is the physical properties of the phantoms and the expected performance characteristics based on established physics and engineering principles (e.g., a known object size for MTF, known density for CT number accuracy).
- Clinical Images: The ground truth for confirming "diagnostic quality" is expert consensus/opinion from American Board-Certified Radiologists. It's an assessment of whether the image contains sufficient information and clarity for diagnostic purposes, not necessarily a comparison to a biopsy or long-term outcome.
- Sample Size for the Training Set:
- The document does not mention a training set in the context of typical AI/machine learning development. This device is a CT scanner hardware system, not an AI diagnostic algorithm that learns from training data. Therefore, the concept of a "training set" as it relates to AI models is not applicable here.
- How Ground Truth for the Training Set Was Established:
- As stated above, the concept of a "training set" as applied to AI/machine learning development does not directly apply to this CT scanner hardware submission. The device's performance is based on its physical design and engineering, not on learning from a large dataset.
(238 days)
The Diagnostic Ultrasound System Aplio i900 Model TUS-AI900, Aplio i800 Model TUS-AI800, and Aplio i700 Model TUS-AI700 are indicated for the visualization of structures and dynamic processes within the human body using ultrasound and to provide image information for diagnosis in the following clinical applications: fetal, abdominal, intra-operative (abdominal), pediatric, small organs (thyroid, breast and testicle), trans-vaginal, trans-rectal, neonatal cephalic, adult cephalic, cardiac (both adult and pediatric), peripheral vascular, transesophageal, musculo-skeletal (both conventional and superficial), laparoscopic and thoracic/pleural. This system provides high-quality ultrasound images in the following modes: B mode, M mode, Continuous Wave, Color Doppler, Pulsed Wave Doppler, Power Doppler and Combination Doppler, as well as Speckle-tracking, Tissue Harmonic Imaging, Combined Modes, Shear wave, Elastography, and Acoustic attenuation mapping. This system is suitable for use in hospital and clinical settings by physicians or legally qualified persons who have received the appropriate training.
In addition to the aforementioned indications for use, when EUS transducer GF-UCT180 and BF-UC190F are connected, Aplio i800 Model TUS-AI800/E3 provides image information for diagnosis of the upper gastrointestinal tract and surrounding organs, airways, tracheobronchial tree and esophagus.
The Aplio i900 Model TUS-AI900, Aplio i800 Model TUS-AI800 and Aplio i700 Model TUS-AI700, V8.5 are mobile diagnostic ultrasound systems. These systems are Track 3 devices that employ a wide array of probes including flat linear array, convex, and sector array with frequency ranges between approximately 2MHz to 33MHz.
Based on the provided FDA 510(k) clearance letter for the Canon Medical Systems Aplio i900/i800/i700 Diagnostic Ultrasound System, Software V8.5, the document does not contain the detailed information required to describe specific acceptance criteria and the study that proves the device meets those criteria.
This document is a regulatory clearance letter, which affirms that the device is substantially equivalent to a previously cleared predicate device. It confirms that the manufacturer has submitted evidence to demonstrate this equivalence and that the FDA has found it acceptable. However, it does not explicitly provide the specific performance metrics (like sensitivity, specificity, accuracy, or specific measurements) that would typically constitute "acceptance criteria" for a novel AI/software feature, nor does it detail a study that would "prove" these criteria were met in the way one might expect for a new algorithmic claim.
The document states:
- "Risk Analysis and verification and validation activities demonstrate that the established specifications for these devices have been met."
- "Additional performance testing included in the submission was conducted in order to demonstrate that the requirements for the new transducers and improved software functionality were met."
- "The results of all these studies demonstrate that the subject devices meet established specifications and perform as intended and in accordance with labeling."
This indicates that internal testing was performed, and specifications were met, but the specifics of these tests, the acceptance criteria, and the raw performance data are not disclosed in this public clearance letter.
Therefore, I cannot fill in the requested table and answer many of the specific questions.
Here's an attempt to answer what can be inferred or directly stated from the provided text, with explicit notes about what is not present:
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Not provided in the document. | The document generally states that "established specifications for these devices have been met" and that the device "perform[s] as intended and in accordance with labeling." Specific performance metrics (e.g., accuracy, sensitivity, specific measurement tolerances) for any software feature or the overall system are not reported in this document. |
2. Sample size used for the test set and the data provenance
- Sample Size for Test Set: Not provided in the document.
- Data Provenance (e.g., country of origin, retrospective or prospective): Not provided in the document.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Number of Experts: Not provided in the document.
- Qualifications of Experts: Not provided in the document.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Adjudication Method: Not provided in the document.
5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done, and the effect size of how much human readers improve with AI vs without AI assistance
- MRMC Study: Not indicated or described in the document. The document focuses on modifications to existing software functionalities and new transducers, implying an update to an existing ultrasound system rather than a new AI-assisted diagnostic aid with a human-in-the-loop study.
- Effect Size: Not applicable as an MRMC study is not mentioned.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Standalone Performance Study: Not explicitly described or detailed in the document. The submission mentions "improved software functionality" but does not present standalone performance metrics for any specific algorithm.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Type of Ground Truth: Not provided in the document.
8. The sample size for the training set
- Sample Size for Training Set: Not provided in the document. The document discusses software improvements and migrations of existing functionalities, implying iterative development and validation rather than a new de novo AI model requiring a distinct training set description in this context.
9. How the ground truth for the training set was established
- Ground Truth Establishment for Training Set: Not provided in the document.
Summary of what the document does provide regarding safety and effectiveness:
The clearance letter primarily establishes that the updated device (Aplio i900/i800/i700 Diagnostic Ultrasound System, Software V8.5) is substantially equivalent to its predicate device (Aplio i900/i800/i700, Diagnostic Ultrasound System, V8.1, K233195). This means the FDA has determined that the modifications (new transducers and software workflow/image quality improvements) do not raise new questions of safety or effectiveness and that the device performs as intended.
The safety assessment notes:
- Design and manufacturing under Quality System Regulations (21 CFR § 820) and ISO 13485 Standards.
- Conformance with applicable standards: ANSI AAMI ES60601-1, IEC 60601-1-2, IEC 60601-2-37, IEC 62304, IEC 62359, and ISO 10993-1.
- Risk Analysis, verification, and validation activities were performed.
- Cybersecurity documentation was included per FDA guidance.
- Software documentation appropriate for the Basic Documentation Level was included per FDA guidance.
In essence, while the document confirms that testing was done and specifications were met to achieve substantial equivalence, it does not disclose the granular details of those tests, the specific performance metrics (acceptance criteria), or the methodologies (like sample sizes, expert qualifications, or ground truth establishment) that would be present in a detailed performance study report.
(84 days)
UltraExtend NX CUW-U001S Ultrasound Image Analysis Program is designed to allow the user to observe images and perform analysis based on examination data acquired using the following diagnostic ultrasound systems; TUS-AI900, TUS-AI800, and TUS-AI700.
This system is suitable for use in hospital and clinical settings by physicians or legally qualified persons who have received the appropriate training.
The UltraExtend NX, V2.0 is designed to allow the user to observe images and perform analysis based on examination data acquired using the Aplio i900/i800/i700 diagnostic ultrasound systems. RAW only or data saved in Image + RAW should be used for UltraExtend NX.
The FDA 510(k) clearance letter for the UltraExtend NX CUW-U001S V2.0 Ultrasound Image Analysis Program indicates that the device has integrated AI/ML-based functionality (2D Wall Motion Tracking with Full-assist function for left ventricle (LV) and Auto EF with Full-assist function for LV) that was previously cleared with a reference device (K223017). The submission states that "these studies utilized a representative subset of the clinical data acquired for the original performance testing of these features; additionally these studies applied the same acceptance criteria to evaluate the performance of these features compared to the same ground truth as utilized in the original performance evaluation of these features with the reference device."
Unfortunately, the provided text does not contain the specific acceptance criteria or detailed results of the performance testing for these AI/ML features. It only states that the features "perform as intended when integrated into the subject device, and with substantial equivalence as with the reference device."
Therefore, I cannot provide a table of acceptance criteria and reported device performance or many of the specific details requested in your prompt based solely on the provided document. The document refers to the original performance testing of the reference device (K223017) for these details.
However, I can extract and infer information about the study design to the extent possible:
Here's what can be inferred from the provided text, and what cannot be determined:
Acceptance Criteria and Device Performance
- The document states that the same acceptance criteria as the original performance testing for the reference device (K223017) were applied.
- Cannot Determine: The specific numerical acceptance criteria (e.g., specific accuracy, sensitivity, specificity thresholds) or the reported device performance metrics (e.g., actual accuracy, sensitivity, specificity values) are not provided in this document.
Study Information
| Information Type | Details from Document |
|---|---|
| 1. Acceptance Criteria & Reported Performance | Acceptance Criteria: "applied the same acceptance criteria to evaluate the performance of these features compared to the same ground truth as utilized in the original performance evaluation of these features with the reference device." Reported Performance: "The results of this testing demonstrate that both features perform as intended when integrated into the subject device, and with substantial equivalence as with the reference device." No specific numerical criteria or performance values are provided. |
| 2. Sample Size (Test Set) & Data Provenance | Sample Size: "a representative subset of the clinical data acquired for the original performance testing of these features" The exact number of cases/samples in this subset is not specified. Data Provenance: "clinical data" Country of origin (likely global, given the company's international presence but not explicitly stated), and whether retrospective or prospective is not explicitly stated for the test set, but "acquired" suggests previously collected. |
| 3. Number & Qualifications of Experts | Cannot determine. The document does not specify the number or qualifications of experts used for establishing the ground truth or for any readouts. |
| 4. Adjudication Method (Test Set) | Cannot determine. The method used for adjudicating expert opinions to establish ground truth (e.g., 2+1, 3+1) is not provided. |
| 5. MRMC Comparative Effectiveness Study | Not an MRMC Study. The testing described is not a multi-reader multi-case comparative effectiveness study comparing human readers with and without AI assistance. It is focused on demonstrating the embedded AI/ML features perform as intended and substantially equivalent to their performance in the previous device. There's no mention of human reader efficacy improvement. |
| 6. Standalone Performance (Algorithm Only) | Yes, indirectly. The performance evaluation of the AI/ML-based functionality (2D Wall Motion Tracking with Full-assist function for left ventricle and Auto EF with Full-assist function for left ventricle) within the UltraExtend NX device is focused on how the integrated features perform, compared to the ground truth. While it's integrated into a user-facing product, the "Full-assist function" implies an algorithmic component being evaluated against a ground truth. The submission confirms "the results of this testing demonstrate that both features perform as intended when integrated into the subject device". |
| 7. Type of Ground Truth Used | "the same ground truth as utilized in the original performance evaluation of these features with the reference device." No further specifics on the nature of the ground truth (e.g., expert consensus, pathology, follow-up outcomes) are provided. |
| 8. Sample Size (Training Set) | Cannot determine. The document does not provide any information about the training set size for the AI/ML models. It only discusses the test set used for the validation of the integrated features. |
| 9. How Ground Truth for Training Set Established | Cannot determine. Given that the training set details are not provided, how its ground truth was established is also not present in this document. |
Summary of missing information:
To fully answer your prompt, you would need to consult the original 510(k) submission for the reference device (K223017), Aplio i900/i800/i700 Diagnostic Ultrasound System, Software Version 7.0, as that is where the detailed performance data, acceptance criteria, and ground truth establishment methodology for the AI/ML features would have been submitted and evaluated by the FDA. The current document (K250328) focuses on demonstrating that these already cleared AI/ML features maintain their performance when integrated into a new workstation.
(75 days)
Vantage Galan 3T systems are indicated for use as a diagnostic imaging modality that produces cross-sectional transaxial, coronal, sagittal, and oblique images that display anatomic structures of the head or body. Additionally, this system is capable of non-contrast enhanced imaging, such as MRA.
MRI (magnetic resonance imaging) images correspond to the spatial distribution of protons (hydrogen nuclei) that exhibit nuclear magnetic resonance (NMR). The NMR properties of body tissues and fluids are:
- Proton density (PD) (also called hydrogen density)
- Spin-lattice relaxation time (T1)
- Spin-spin relaxation time (T2)
- Flow dynamics
- Chemical shift
Depending on the region of interest, contrast agents may be used. When interpreted by a trained physician, these images yield information that can be useful in diagnosis.
The Vantage Galan (Model MRT-3020) is a 3 Tesla Magnetic Resonance Imaging (MRI) System, previously cleared under K241496. This system is based upon the technology and materials of previously marketed Canon Medical Systems products and is intended to acquire and display cross-sectional transaxial, coronal, sagittal, and oblique images of anatomic structures of the head or body.
This document describes a 510(k) premarket notification for the Vantage Galan 3T, MRT-3020, V10.0 with AiCE Reconstruction Processing Unit for MR. This submission concerns a modification to an already cleared device, primarily involving the addition of a standard gradient system and the extension of the Precise IQ Engine (PIQE) to new scan families, weightings, and anatomical areas.
Here's a breakdown of the requested information based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly present a table of quantitative acceptance criteria for PIQE performance. Instead, it describes acceptance in qualitative terms based on expert review.
| Metric/Category | Acceptance Criteria (Implicit) | Reported Device Performance (PIQE) |
|---|---|---|
| Image Quality Metrics (Bench Testing) | Improvement in sharpness, mitigation of ringing, maintenance/improvement of SNR and contrast compared to standard techniques. | Generates images with sharper edges, mitigates smoothing and ringing effects, maintains similar or better contrast and SNR compared to zero-padding interpolation and typical clinical filters. |
| Clinical Image Review (Likert Scale) | Scored "at or above, clinically acceptable" on average. Strong agreement at "good" and "very good" level for all IQ metrics. | All reconstructions scored on average at, or above, clinically acceptable. Exhibited strong agreement at the "good" and "very good" level for all IQ metrics (ringing, sharpness, SNR, overall IQ, feature conspicuity). |
| Functionality | Generate higher spatial in-plane resolution from lower resolution images (up to 9x factor). Reduce ringing artifacts, denoise, and increase sharpness. Accelerate scanning by reducing acquisition matrix while maintaining clinical matrix size and image quality. Obtain benefits on regular clinical protocols without requiring acquisition parameter adjustment. | PIQE achieves these functionalities as confirmed by expert review and technical description. |
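The "clinically acceptable on average" criterion in the table above can be checked with a simple summary of the Likert scores. The sketch below is illustrative only; the threshold of 3 on the 5-point scale and the score convention are assumptions, not values stated in the submission.

```python
from statistics import mean

def likert_summary(scores, acceptable=3):
    """Summarize modified 5-point Likert scores for one image-quality metric.

    scores: flat list of reviewer scores (1 = worst, 5 = best is an
    assumed convention; the submission does not define the scale anchors).
    Returns (mean score, fraction of scores at or above 'acceptable').
    """
    avg = mean(scores)
    frac_acceptable = sum(s >= acceptable for s in scores) / len(scores)
    return avg, frac_acceptable

# Hypothetical scores from three reviewers over several cases:
avg, frac = likert_summary([4, 5, 3, 4, 4])
```

A criterion such as "scored on average at, or above, clinically acceptable" would then translate to `avg >= acceptable` per metric and anatomy.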
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: 106 unique subjects.
- Data Provenance: Two sites in the USA and one in Japan. This data is described as "separate from the training data sets." The document states that the multinational study population is expected to be representative of the intended US population for PIQE, as PIQE is an image post-processing algorithm not disease-specific or dependent on acquisition parameters that might be affected by population variation. Comparisons were internal (each subject as its own control).
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
- Number of Experts: 14 USA board-certified radiologists and cardiologists (3 reviewers per anatomy).
- Qualifications: "USA board-certified radiologists and cardiologists." Specific experience levels (e.g., years of experience) are not provided.
4. Adjudication Method for the Test Set
The document describes a scoring process by multiple reviewers but does not specify a formal adjudication method (e.g., 2+1, 3+1). It states: "scored by 3 reviewers per anatomy in various clinically-relevant categories... Reviewer scoring data was analyzed for reviewer agreement and differences between reconstruction techniques using Gwet's Agreement Coefficient and Generalized Estimating Equations, respectively." This suggests that the scores from the three reviewers were aggregated and analyzed statistically, rather than undergoing a consensus or tie-breaking adjudication process for each individual case.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, What was the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance
- MRMC Study: Yes, a multi-site, randomized, blinded clinical image review study was conducted.
- Effect Size (AI-assisted vs. without AI assistance): This was not an AI-assisted reader study. The study compared images reconstructed with the conventional method (matrix expansion with Fine Reconstruction and typical clinical filter) against images reconstructed with PIQE. The purpose was to evaluate the image quality produced by PIQE, not to assess reader performance with or without AI assistance. Therefore, no effect size on human reader improvement with AI assistance is reported. The study aimed to demonstrate that PIQE-reconstructed images are clinically acceptable and offer benefits like sharpness and ringing reduction.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
Yes, a standalone performance evaluation of the PIQE algorithm was conducted through "bench testing." This involved evaluating metrics like Edge Slope Width, Ringing Variable Mean, Signal-to-Noise ratio, and Contrast Change Ratio on typical clinical images from various anatomical regions. This bench testing demonstrated that PIQE "generates images with sharper edges while mitigating the smoothing and ringing effects and maintaining similar or better contrast and SNR."
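The document does not define how Edge Slope Width was measured. One common sharpness convention, shown here purely as an assumption-laden sketch, is the distance over which a 1-D edge profile rises from 10% to 90% of its range (smaller width = sharper edge):

```python
import numpy as np

def edge_slope_width(profile, spacing=1.0, lo=0.1, hi=0.9):
    """10%-90% rise distance of a monotonically increasing 1-D edge profile.

    spacing: sample pitch (e.g. mm). The 10/90 thresholds and linear
    interpolation between samples are assumed conventions, not the
    submission's definition.
    """
    p = np.asarray(profile, dtype=float)
    p = (p - p.min()) / (p.max() - p.min())   # normalize edge to [0, 1]
    x = np.arange(len(p)) * spacing
    x_lo = np.interp(lo, p, x)                # position where edge crosses 10%
    x_hi = np.interp(hi, p, x)                # position where edge crosses 90%
    return x_hi - x_lo
```

Under this convention, a super-resolution reconstruction like PIQE would be expected to yield a smaller edge slope width than the conventional reconstruction of the same edge.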
7. The Type of Ground Truth Used
- For Bench Testing: The "ground truth" implicitly referred to established quantitative image quality metrics (Edge Slope Width, Ringing Variable Mean, Signal-to-Noise ratio, and Contrast Change Ratio) and comparisons against conventional reconstruction methods.
- For Clinical Image Review Study: The "ground truth" was established by expert consensus/evaluation, where 14 board-certified radiologists and cardiologists scored images on various clinically-relevant categories (ringing, sharpness, SNR, overall IQ, and feature conspicuity) using a modified 5-point Likert scale.
8. The Sample Size for the Training Set
The document states that "106 unique subjects... from two sites in USA and one in Japan... were scanned... to provide the test data sets (separate from the training data sets)." The sample size for the training set itself is not provided in the document.
9. How the Ground Truth for the Training Set Was Established
The document does not provide information on how the ground truth for the training set was established, as details about the training set itself are omitted.
This device is indicated to acquire and display cross sectional volumes of the whole body, to include the head, with the capability to image whole organs in a single rotation. Whole organs include but are not limited to brain, heart, pancreas, etc. The Aquilion ONE has the capability to provide volume sets of the entire organ. These volume sets can be used to perform specialized studies, using indicated software/hardware, of the whole organ by a trained and qualified physician.
FIRST is an iterative reconstruction algorithm intended to reduce exposure dose and improve high contrast spatial resolution for abdomen, pelvis, chest, cardiac, extremities and head applications.
AiCE is a noise reduction algorithm that improves image quality and reduces image noise by employing Deep Convolutional Network methods for abdomen, pelvis, lung, cardiac, extremities, head, and inner ear applications.
The spectral imaging feature allows the system to acquire two nearly simultaneous CT images of an anatomical location using distinct tube voltages and/or tube currents by rapid kV switching. The X-ray dose will be the sum of the dose at each respective tube voltage and current in a rotation. Information regarding the material composition of various organs, tissues, and contrast materials may be gained from the differences in X-ray attenuation between these distinct energies. When used by a qualified physician, a potential application is to determine the course of treatment.
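The physics described here, inferring material composition from attenuation measured at two energies, is commonly formalized as basis-material decomposition: a small linear system solved per voxel. A toy sketch with made-up attenuation coefficients (not Canon's calibrated values or actual algorithm):

```python
import numpy as np

# Hypothetical linear attenuation coefficients (1/cm) of two basis
# materials at the two tube voltages; real coefficients are
# energy-dependent and calibrated per scanner -- these are invented.
MU = np.array([[0.28, 0.20],   # low kV:  [water-like, iodine-like]
               [0.22, 0.10]])  # high kV: [water-like, iodine-like]

def decompose(mu_low, mu_high):
    """Solve the 2x2 system MU @ fractions = measured attenuation,
    yielding per-voxel basis-material contributions."""
    return np.linalg.solve(MU, np.array([mu_low, mu_high]))

# One voxel's attenuation measured at both energies
water_like, iodine_like = decompose(0.30, 0.20)
```

The larger the separation between the two spectra, the better conditioned this system is, which is why rapid kV switching (rather than a single spectrum) enables material differentiation.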
PIQE* is a Deep Learning Reconstruction method designed to enhance spatial resolution. By incorporating noise reduction into the Deep Convolutional Network (DCNN), it is possible to achieve both spatial resolution improvement and noise reduction for cardiac, abdomen and pelvis, and lung applications, in comparison to FBP and hybrid iterative reconstruction.
CLEAR Motion is a Deep Learning Reconstruction (DLR) method designed to reduce motion artifacts. A Deep Convolutional Network (DCNN) is used to estimate the patient's motion. This information is used in the reconstruction process to obtain lung images with fewer motion artifacts.
Aquilion ONE (TSX-308A/3) V1.5 is a whole body multi-slice helical CT scanner, consisting of a gantry, couch and a console used for data processing and display. This device captures cross sectional volume data sets used to perform specialized studies, using indicated software/hardware, by a trained and qualified physician. This system is based upon the technology and materials of previously marketed Canon CT systems.
Here's a breakdown of the acceptance criteria and study information based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance:
The document doesn't explicitly state quantitative acceptance criteria in a dedicated section. However, it implicitly defines performance through comparisons to a predicate device and statements about image quality.
| Feature / Study Focus | Acceptance Criteria (Implicit) | Reported Device Performance |
|---|---|---|
| PIQE Lung Image Quality (Phantom Study) | Equivalent or improved performance compared to predicate (TSX-306A Aquilion Prism) based on CNR, CT Number Accuracy, Uniformity, SSP, MTF, SD of NPS, LCD. | Concluded that the subject device demonstrated equivalent or improved performance, compared to the predicate device, as demonstrated by the results of the above testing. (Testing included Contrast-to-Noise Ratios, CT Number Accuracy, Uniformity, Slice Sensitivity Profile, Modulation Transfer Function, Standard Deviation of Noise Power Spectra, and Low Contrast Detectability.) |
| PIQE Body Image Quality (Phantom Study) | Equivalent or improved performance compared to predicate (TSX-306A Aquilion Prism) based on CNR, CT Number Accuracy, Uniformity, SSP, MTF, SD of NPS, LCD. | Concluded that the subject device demonstrated equivalent or improved performance, compared to the predicate device, as demonstrated by the results of the above testing. (Testing included Contrast-to-Noise Ratios, CT Number Accuracy, Uniformity, Slice Sensitivity Profile, Modulation Transfer Function, Standard Deviation of Noise Power Spectra, and Low Contrast Detectability.) |
| Spectral Cardiac Image Quality (Phantom Study) | Equivalent or improved performance compared to predicate (TSX-306A Aquilion Prism) based on CNR, CT Number Accuracy, Uniformity, SSP, MTF, SD of NPS, LCD. | Concluded that the subject device demonstrated equivalent or improved performance, compared to the predicate device, as demonstrated by the results of the above testing. (Testing included Contrast-to-Noise Ratios, CT Number Accuracy, Uniformity, Slice Sensitivity Profile, Modulation Transfer Function, Standard Deviation of Noise Power Spectra, and Low Contrast Detectability.) |
| CLEAR Motion Performance (Phantom Study) | Performed as intended, significantly reducing motion artifacts and maintaining CT Numbers compared to standard reconstructed images without CLEAR Motion. | Conclusions from these studies demonstrated that CLEAR Motion performed as intended, in that motion artifacts were significantly reduced and CT Numbers were maintained, compared to standard reconstructed images in which CLEAR Motion was not applied. (Evaluated using a water phantom and a thoracic dynamic phantom at 12 BPM, reconstructed with AIDR3D, AiCE and/or FBP with and without CLEAR Motion applied.) |
| Clinical Image Quality with Subject Device | Images of diagnostic quality. | ...it was confirmed that the reconstructed images using the subject device were of diagnostic quality. |
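Several of the phantom metrics in the table have simple operational definitions; the contrast-to-noise ratio, for example, divides the insert-vs-background contrast by the background noise. A toy CNR computation on a synthetic low-contrast phantom (illustrative only; the submission's phantom protocols and exact formulas are not detailed in the document):

```python
import numpy as np

def cnr(image, target_mask, background_mask):
    """Contrast-to-noise ratio: insert-vs-background contrast divided
    by the background noise (standard deviation)."""
    target = image[target_mask]
    background = image[background_mask]
    return abs(target.mean() - background.mean()) / background.std()

# Synthetic phantom slice: noisy uniform background with a +10 HU
# square insert (all numbers invented for illustration)
rng = np.random.default_rng(0)
img = rng.normal(0.0, 5.0, size=(64, 64))
img[24:40, 24:40] += 10.0

tmask = np.zeros((64, 64), dtype=bool)
tmask[24:40, 24:40] = True
value = cnr(img, tmask, ~tmask)   # ~2.0: 10 HU contrast over ~5 HU noise
```

A reconstruction that lowers noise while preserving contrast, as claimed for the deep-learning methods above, raises CNR directly through the denominator of this ratio.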
2. Sample Size Used for the Test Set and Data Provenance:
The document mentions the use of "phantoms" for image quality evaluations and "clinical images" for performance testing.
- Phantom Studies:
- Sample Size: Not explicitly stated, but multiple phantoms were used (e.g., water phantom, thoracic dynamic phantom). The exact number of scans or reconstructed images from these phantoms is not provided.
- Data Provenance: Not explicitly stated, but phantom studies typically involve controlled, non-clinical data generation.
- Clinical Image Evaluations:
- Sample Size: Not explicitly stated; "Representative body, cardiac, chest, head, and extremity diagnostic images" were used. The exact number of cases is not provided.
- Data Provenance: Implied to be retrospective clinical data, as they are "obtained using the subject device" and "reviewed by American Board-Certified Radiologists." No country of origin is specified.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts:
- Number of Experts: Not explicitly stated for each specific evaluation. For clinical image evaluation, it states "American Board-Certified Radiologists" (plural), indicating more than one.
- Qualifications of Experts: "American Board-Certified Radiologists." No specific years of experience are mentioned.
4. Adjudication Method for the Test Set:
- Clinical Images: For the clinical image quality evaluation, it states "reviewed by American Board-Certified Radiologists." It doesn't specify an adjudication method (e.g., 2+1, 3+1, none). It implies a consensus or individual assessment to confirm diagnostic quality.
- Phantom Studies: Phantoms have inherent, objective ground truth based on their design and known properties, so expert adjudication isn't typically applicable in the same way.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:
- No, an MRMC comparative effectiveness study was not explicitly described in the provided text. The document focuses on showing substantial equivalence through phantom studies and a general statement about diagnostic quality of clinical images, rather than a comparative study of human readers with and without AI assistance to quantify improvement.
6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study:
- Yes, standalone performance was evaluated. The "Image Quality Evaluations" and "CLEAR Motion Evaluations" using phantoms are examples of standalone performance testing. These tests assess the device's algorithms (PIQE, CLEAR Motion) directly against objective metrics or by comparing reconstructed images for specific features (e.g., noise reduction, motion artifact reduction) without human intervention in the diagnostic process.
7. Type of Ground Truth Used:
- For Phantom Studies (PIQE, CLEAR Motion): Objective ground truth derived from the known physical properties and design of the phantoms (e.g., known image metrics, controlled motion patterns).
- For Clinical Image Quality: Expert consensus/review by "American Board-Certified Radiologists" to confirm "diagnostic quality."
8. Sample Size for the Training Set:
- Not provided. The document describes the device, its features (some of which use Deep Learning Reconstruction), and details of performance testing. It does not include information about the size or nature of the training data used for the AI algorithms (AiCE, PIQE, CLEAR Motion).
9. How the Ground Truth for the Training Set was Established:
- Not provided. As the training set details are absent, the method for establishing its ground truth is also not mentioned.