510(k) Data Aggregation
(203 days)
GE Medical Systems SCS
CardIQ Suite is a non-invasive software application designed to provide an optimized application to analyze cardiovascular anatomy and pathology based on 2D or 3D CT cardiac non contrast and angiography DICOM data from acquisitions of the heart. It provides capabilities for the visualization and measurement of vessels and visualization of chamber mobility. CardIQ Suite also aids in diagnosis and determination of treatment paths for cardiovascular diseases, including coronary artery disease, functional parameters of the heart, heart structures and follow-up for stent placement, bypasses and plaque imaging. CardIQ Suite provides calcium scoring, a non-invasive software application that can be used with non-contrasted cardiac images to evaluate calcified plaques in the coronary arteries, heart valves and great vessels such as the aorta. The clinician can use the information provided by calcium scoring to monitor the progression/regression of calcium in coronary arteries over time, and this information may aid the clinician in their determination of the prognosis of cardiac disease. CardIQ Suite also provides an estimate of the volume of heart fat for informational use.
CardIQ Suite is a non-invasive software application designed to work with DICOM CT data acquisitions of the heart. It is a collection of tools that provide capabilities for generating measurements both automatically and manually, displaying images and associated measurements in an easy-to-read format and tools for exporting images and measurements in a variety of formats.
CardIQ Suite provides an integrated workflow to seamlessly review calcium scoring and coronary CT angiography (CCTA) data. Calcium Scoring has a fully automatic capability which will detect calcifications within the coronary arteries, label the coronary arteries according to regional territories and generate a total and per-territory calcium score based on the AJ 130 and Volume scoring methods. Interactive tools allow editing of both the auto-scored coronary lesions and other calcified lesions, such as those of the aortic valve and mitral valve, as well as other general cardiac structures. Calcium scoring results can be compared with two percentile guide databases to better understand a patient's risk percentile based on age, gender, and ethnicity. Additionally, for these non-contrasted exams, the heart fat estimation automatically estimates the volume of tissue within the heart that constitutes adipose tissue, typically between –200 and –30 Hounsfield Units.
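The heart fat estimation described above amounts to thresholding the segmented heart volume on a Hounsfield Unit range and converting the voxel count to a volume. A minimal sketch under that reading, assuming a precomputed binary heart mask and a known voxel volume (both hypothetical inputs; the actual CardIQ Suite implementation is not disclosed):

```python
import numpy as np

def estimate_heart_fat_volume(hu_volume, heart_mask, voxel_volume_mm3,
                              hu_low=-200.0, hu_high=-30.0):
    """Count voxels inside the heart mask whose HU value falls in the
    adipose-tissue range, and convert the count to a volume in mm^3."""
    fat = heart_mask & (hu_volume >= hu_low) & (hu_volume <= hu_high)
    return fat.sum() * voxel_volume_mm3
```

The –200/–30 HU bounds come from the text; a real implementation would also flag cases where heart segmentation failed, since the document notes that the fat estimate inherits any segmentation inaccuracy.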
Calcium Scoring results can be exported as DICOM SR, batch axial SCPT, or a PDF report to assist with integration into structured reporting templates. Images can be saved and exported for sharing with referring physicians, incorporating into reports and archiving as part of the CT examination.
The Multi-Planar Reformat (MPR) Cardiac Review and Coronary Review steps provide an interactive toolset for review of cardiac exams. Coronary CTA datasets can be reviewed utilizing the double oblique angles to visually track the path of the coronary arteries as well as to view the common cardiac chamber orientations. Cine capability for multi-phase data may be useful for visualization of cardiac structures in motion such as chambers, valves and arteries; automatic tracking and labeling allow a comprehensive analysis of the coronaries. Vessel lumen diameter is calculated, and the minimum lumen diameter computed is shown in color along the lumen profile.
Distance measurement and ROI tools are available for quantitative evaluation of the anatomy. Vascular findings of interest can be identified and annotated by the user, and measurements can be calculated for centerline distances, cross-sectional diameter and area, and lumen minimum diameter.
Let's break down the acceptance criteria and study details for the CardIQ Suite device based on the provided FDA 510(k) clearance letter.
1. Table of Acceptance Criteria and Reported Device Performance
The document provides specific acceptance criteria and performance results for the novel or modified algorithms introduced in the CardIQ Suite.
Feature/Algorithm Tested | Acceptance Criteria | Reported Device Performance |
---|---|---|
New Heart Segmentation (non-contrast CT exams) | More than 90% of exams successfully segmented. | Met the acceptance criteria of more than 90% of the exams that are successfully segmented. |
New Heart Fat Volume Estimate (non-deep learning) | Average Dice score $\ge$ 90%. | Average Dice score is greater than or equal to 90%. (Note: Under or over estimation may occur due to inaccurate heart segmentation). |
New Lumen Diameter Quantification (non-deep learning) | Mean absolute difference between estimated diameters and reference device (CardIQ Xpress 2.0) diameters lower than the mean voxel size. | The mean absolute difference is lower than the mean voxel size, demonstrating sufficient agreement for lumen quantification. |
Modified Coronary Centerline Tracking | Performance is enhanced when compared to the predicate device. | Proven that the performance of these algorithms is enhanced when compared to the predicate device. |
Modified Coronary Centerline Labeling | Performance is enhanced when compared to the predicate device. | Proven that the performance of these algorithms is enhanced when compared to the predicate device. |
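The Dice score used as the acceptance criterion for the heart fat estimate is a standard overlap metric between a candidate segmentation and a reference segmentation. A minimal sketch of how such a score is conventionally computed (illustrative only, not GE's validation code):

```python
import numpy as np

def dice_score(pred, ref):
    """Dice similarity coefficient between two binary masks:
    2 * |pred AND ref| / (|pred| + |ref|), in [0, 1]."""
    pred = np.asarray(pred).astype(bool)
    ref = np.asarray(ref).astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    # Two empty masks agree perfectly by convention.
    return 2.0 * intersection / denom if denom else 1.0
```

An average Dice of 0.90 or higher over the test exams would satisfy the stated criterion.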
2. Sample Sizes Used for the Test Set and Data Provenance
- Heart Segmentation (non-contrast CT exams): 111 CT exams
- Heart Fat Volume Estimate: 111 CT exams
- Lumen Diameter Quantification: 94 CT exams with a total of 353 narrowings across all available test sets.
- Coronary Centerline Tracking and Labeling: "a database of retrospective CT exams." (Specific number not provided for this particular test, but it is part of the overall bench testing.)
Data Provenance: The document states that the CT exams used for bench testing were "collected from different clinical sites, with a variety of acquisition parameters, and pathologies." It also notes that this database is "retrospective." The country of origin is not explicitly stated in the provided text.
3. Number of Experts Used to Establish Ground Truth and Qualifications
The document does not explicitly state the number of experts used or their specific qualifications (e.g., radiologist with 10 years of experience) for establishing the ground truth for the test sets. The tests are described as "bench testing" and comparisons to a "reference device" (CardIQ Xpress 2.0) or to an expectation of "successfully segmented."
4. Adjudication Method for the Test Set
The document does not explicitly describe an adjudication method (e.g., 2+1, 3+1). The performance is reported based on comparisons to a reference device or meeting a quantitative metric (e.g., Dice score, successful segmentation percentage, mean absolute difference).
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
The document does not mention or describe that a multi-reader multi-case (MRMC) comparative effectiveness study was done. The focus is on the performance of the algorithms themselves ('bench testing') and their enhancement compared to predicates, rather than human reader improvement with AI assistance.
6. Standalone (Algorithm Only Without Human-in-the-Loop) Performance
Yes, the studies described are standalone performance evaluations of the algorithms. They are referred to as "bench testing" and evaluate the device's algorithms directly against defined metrics or a reference device, without involving human readers in a diagnostic setting for performance comparison.
7. Type of Ground Truth Used
The type of ground truth used varies based on the specific test:
- Heart Segmentation (non-contrast CT exams) & Heart Fat Volume Estimate: The ground truth for these appears to be implicitly established by what constitutes "successfully segmented" or against which the "Dice score" is calculated. A "predefined HU threshold" is mentioned for heart fat, suggesting a quantitative, rule-based ground truth related to Hounsfield Units within segmented regions.
- Lumen Diameter Quantification: The ground truth for this was established by comparison to diameters from the reference device, CardIQ Xpress 2.0 (K073138).
- Coronary Centerline Tracking and Labeling: The ground truth for evaluating enhancement compared to the predicate is not explicitly defined but would likely involve some form of expert consensus or highly accurate manual delineation, which is then used to assess the "enhancement" of the new algorithm.
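The lumen diameter acceptance criterion from section 1, a mean absolute difference below the mean voxel size, can be expressed as a simple bench check against the reference device's diameters. The function below is an illustrative sketch, not the actual test harness:

```python
import numpy as np

def lumen_agreement_check(d_new, d_ref, voxel_sizes_mm):
    """Pass if the mean absolute difference between the subject device's
    diameter estimates and the reference device's diameters is below the
    mean voxel size of the exams. Returns (passed, mean_abs_diff)."""
    mad = np.abs(np.asarray(d_new) - np.asarray(d_ref)).mean()
    return mad < np.mean(voxel_sizes_mm), mad
```

Sub-voxel agreement is a reasonable bar here because the reference diameters themselves are only resolved to the voxel grid.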
8. Sample Size for the Training Set
The document does not provide the sample size for the training set. It only mentions that the "new deep learning algorithm for heart segmentation of non-contrasted exams uses the same model as the previous existing heart segmentation algorithm for contrasted exams, however now the input is changed, and the model is trained and tested with the non-contrasted exams." Similarly for coronary tracking, it states the deep learning algorithm was "retrained to a finer resolution." However, no specific training set sizes are given.
9. How the Ground Truth for the Training Set Was Established
The document does not explicitly state how the ground truth for the training set was established. It is noted that the models were "trained," which implies the existence of a ground truth for the training data, but the methodology for its establishment (e.g., expert annotation, semi-automated methods) is not described in the provided text.
(146 days)
GE Medical Systems SCS
VersaViewer is a medical diagnosis software supporting 2D, 3D and 4D medical images series for their processing and analysis through customizable layouts allowing multimodality review. It streamlines standard and advanced medical imaging analysis by providing a suite of measurements capabilities. It is designed for use by trained healthcare professionals and is intended to assist the physician in diagnosis, who is responsible for making all final patient management decisions.
VersaViewer is not intended for the displaying of digital mammography images for diagnosis.
VersaViewer is a software application for processing and analysis of 2D, 3D and 4D medical imaging data. The application provides an adaptive layout to display selected series and a common radiology toolset to perform measurements. It aims to enable the review of medical imaging acquisitions for which a dedicated advanced visualization application is not required.
VersaViewer has the following functionalities:
- Reconstruct and display 2D, 3D and 4D medical images from multiple modalities.
- Display relevant series in an adaptive layout based on user selection.
- Access and dynamically load series of interest through embedded Series Selector.
- Allow selection of different image rendering modes such as Volume Rendering, MIP (maximum intensity projection)/MinIP (minimum intensity projection)/Average, MPR (multiplanar reformation) and Oblique.
- Basic image review tools including paging, WW/WL adjustment and zoom. 3D volumes can be visualized in adjustable multi-oblique planes.
- Set of annotation, measurement, and segmentation tools.
- A dedicated panel collects findings as they are deposited on the images and enables the user to manage them.
- Images and findings export options.
VersaViewer also includes One View as an optional feature.
One View provides reformatted views to assist radiologists in interpreting various types of spectral exams by projecting GSI (Gemstone Spectral Imaging) material decomposition images over monochromatic and color overlay.
Here's an analysis of the acceptance criteria and study detailed in the provided FDA 510(k) submission for VersaViewer, structured as requested:
Acceptance Criteria and Device Performance for VersaViewer
The provided 510(k) summary for VersaViewer indicates that the device's performance was evaluated through engineering bench testing for its newly introduced deep learning algorithm, specifically for automated segmentation. While explicit, numerically stated acceptance criteria and corresponding reported device performance values are not provided in the document, the conclusion states:
"The result of the algorithm validation showed that the algorithm successfully passed the defined acceptance criteria."
This implies that internal, pre-defined acceptance criteria were established and met. Without the actual criteria and performance metrics, a definitive table cannot be generated. However, based on the text, we can infer the nature of the evaluation.
Inferred Acceptance Criteria & Reported Performance:
Acceptance Criteria (Inferred) | Reported Device Performance |
---|---|
Accuracy/Robustness of Automated Segmentation for six body parts (lung, liver, bone, aorta, heart, entire body) using the deep learning algorithm. | "The algorithm successfully passed the defined acceptance criteria." (Implies satisfactory accuracy and robustness as per internal thresholds.) |
Successful Design Verification and Validation for all functionalities, including image rendering, manipulation, measurement, and export. | "VersaViewer has successfully completed the design control testing per GE HealthCare's quality system." |
 | "All the testing and results did not raise new or different questions of safety and effectiveness other than those already associated with predicate devices." |
Compliance with DICOM Standards | "The proposed device complies with NEMA PS 3.1 - 3.20 (2023) Digital Imaging and Communications in Medicine (DICOM) Set (Radiology) standard." |
1. Sample Size for Test Set and Data Provenance
- Sample Size for Test Set: The document states "a database of retrospective CT exams." It does not specify the exact number of exams or cases in this database.
- Data Provenance: The data used was from "retrospective CT exams." The country of origin is not explicitly mentioned; GE Medical Systems SCS (the submitter) is based in France, and GE Medical Systems, LLC (the reference device manufacturer) is in the US, so the data may have multinational origins. It is common for a global company's 510(k) data to be sourced from multiple regions, but this is not confirmed here.
2. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
The document does not explicitly state the number of experts used or their qualifications for establishing ground truth for the test set of the deep learning algorithm. It only mentions that the deep learning algorithm's segmentation was compared "to legacy segmentation algorithms" during verification and validation. This suggests a comparison against existing, validated methods rather than a direct human expert ground truth adjudication for the algorithm's performance.
3. Adjudication Method for the Test Set
The document does not specify an adjudication method (e.g., 2+1, 3+1) for establishing ground truth for the deep learning algorithm's performance on the test set. The validation primarily involved comparison to "legacy segmentation algorithms," implying a technical comparison rather than an expert consensus process for newly established ground truth for each case.
4. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No mention of a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was made in the provided document. Therefore, there is no information about the effect size of how much human readers improve with AI vs. without AI assistance. The VersaViewer is described as a medical diagnosis software with tools for processing and analysis, assisting the physician, but not primarily as an AI-assistance tool for diagnostic improvement (though its One View feature uses deep learning for segmentation).
5. Standalone Performance Study (Algorithm Only Without Human-in-the-Loop)
Yes, a standalone performance study was implicitly done for the deep learning algorithm. The document states:
"Engineering has performed bench testing for the newly introduced deep learning algorithm in the subject device for automated segmentation of six body parts as lung, liver, bone, aorta, heart and body (entire body) using a database of retrospective CT exams."
This describes the evaluation of the algorithm's performance in isolation.
6. Type of Ground Truth Used
For the deep learning segmentation algorithm, the ground truth was established by comparison to legacy segmentation algorithms. The text states: "...the deep learning algorithm employed in the subject device have been successfully verified and validated, through comparison to legacy segmentation algorithms."
7. Sample Size for the Training Set
The document does not provide the sample size used for the training set of the deep learning algorithm. It only refers to a "database of retrospective CT exams" used for bench testing of the algorithm, which is typically the test set or validation set.
8. How the Ground Truth for the Training Set Was Established
The document does not describe how the ground truth for the training set was established. It only mentions the comparison to "legacy segmentation algorithms" for the verification and validation (i.e., testing) phase. The methods for annotating or establishing ground truth for the data used to train the deep learning model are not detailed.
(111 days)
GE Medical Systems SCS
3DXR is the AW application which is intended to perform the three-dimensional (3D) reconstruction computation of any images acquired with a 3D Acquisition mode of the X-ray interventional system for visualization under Volume Viewer. The 3D Acquisition modes are intended for imaging vessels, bones and soft tissues as well as other internal body structures. The 3D reconstructed Volume assist the physician in diagnosis, surgical planning, Interventional procedures and treatment follow-up.
3DXR is a post-processing software-only application, runs on Advantage Workstation (AW) platform [K110834], and performs 3D reconstruction for the CBCT 3D acquisition images (input) acquired from the fixed interventional X-ray system [K181403, K232344]. The reconstructed 3D volume (output) is visualized under Volume Viewer application [K041521]. The proposed 3DXR is a modification from the predicate 3DXR [included and cleared in K181403]. A new option, called CleaRecon DL, based on Deep-Learning (DL) technology, is added in the proposed subject 3DXR application.
Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:
1. A table of acceptance criteria and the reported device performance
Acceptance Criteria (from Engineering Bench Testing) | Reported Device Performance |
---|---|
Quantitative (Image Analysis) | |
Reduction of Mean Absolute Error (MAE) between images with and without CleaRecon DL | A statistically significant reduction in MAE was observed between the two samples. |
Increase of Structural Similarity Index Measure (SSIM) between images with and without CleaRecon DL | A statistically significant increase in SSIM was observed between the two samples. |
Reduction of MAE (phantoms) | Reduction of MAE was observed. |
Reduction of Standard Deviation (SD) (phantoms) | Reduction of SD was observed. |
Qualitative (Clinical Evaluation) | |
CleaRecon DL removes streaks artifacts and does not introduce other artifacts | Clinicians confirmed that CleaRecon DL removes streaks artifacts and, in 489 reviews, did not identify any structure or pattern that has been hidden or artificially created by CleaRecon DL when compared to the original reconstruction. |
CleaRecon DL provides a clearer image and impacts confidence in image interpretation | In 98% of the cases, the CBCT images reconstructed with CleaRecon DL option are evaluated as clearer than the conventional CBCT images. Clinicians assessed how it impacts their confidence in image interpretation. (Specific quantitative impact on confidence not provided, but generally positive due to clearer images.) |
CleaRecon DL does not bring artificial structures and/or hide important anatomical structures | Within 489 reviews, clinicians did not identify any structure or pattern that has been hidden or artificially created by CleaRecon DL when compared to the original reconstruction. |
No degradation of image quality or other concerns related to safety and performance (overall) | Engineering bench testing passed predefined acceptance criteria, demonstrated performance, and "no degradation of image quality, nor other concerns related to safety and performance were observed." Clinical evaluation results "met the predefined acceptance criteria and substantiated the performance claims." |
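MAE and SSIM, the quantitative bench metrics in the table above, are standard image-comparison measures. A simplified sketch follows; note the SSIM here is computed over a single global window, whereas the bench testing would more plausibly have used a sliding-window implementation (an assumption, since the document gives no detail):

```python
import numpy as np

def mae(a, b):
    """Mean absolute error between two images."""
    return np.abs(np.asarray(a, dtype=float) - np.asarray(b, dtype=float)).mean()

def global_ssim(a, b, data_range=1.0):
    """Structural similarity computed once over the whole image
    (production code would use a windowed implementation)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    c1 = (0.01 * data_range) ** 2   # standard SSIM stabilizing constants
    c2 = (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))
```

Lower MAE and higher SSIM between CleaRecon DL output and an artifact-free reference correspond to the improvements reported in the table.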
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Engineering Bench Testing:
- Test Set 1 (Image Pairs): Two samples (number not specified beyond "two samples").
- Test Set 2 (Phantoms): Not specified beyond "phantoms."
- Clinical Image Quality Evaluation (Retrospective):
- Sample Size: 110 independent exams, each from a unique patient.
- Data Provenance: Retrospectively collected from 13 clinical sites.
- 80 patients from the US
- 26 patients from France
- 4 patients from Japan
- Patient Population: Adult patients (pediatrics excluded) undergoing interventional procedures. No inclusion criteria restricted age (within the adult range) or sex/gender (except for prostate cases).
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Engineering Bench Testing: Not applicable; ground truth was intrinsic (images with and without applying CleaRecon DL, or phantoms with reference).
- Clinical Image Quality Evaluation:
- Number of Experts: 13 clinicians.
- Qualifications: "Clinicians" (specific specialties or years of experience are not mentioned, but their role implies expertise in image interpretation for interventional procedures).
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Clinical Image Quality Evaluation: Each of the 110 exams (with and without CleaRecon DL) was compared/evaluated at least 3 times independently by the recruited clinicians, yielding 490 pairs of clinicians' evaluations. This indicates a multi-reader, independent review with subsequent aggregation of results, rather than a formal consensus-based adjudication like 2+1 or 3+1 for individual cases, as the "489 reviews" and "98% of cases" figures reflect aggregated findings.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
- A multi-reader multi-case (MRMC)-style study was implicitly conducted as part of the clinical image quality evaluation: 13 clinicians reviewed 110 cases, and each case was reviewed at least 3 times independently, so both the multi-reader and multi-case aspects were present.
- Effect Size: The study focused on the impact of the image quality on interpretation, rather than a direct measure of human reader performance improvement in diagnostic accuracy or efficiency with and without AI assistance. The results indicated:
- "In 98% of the cases, the CBCT images reconstructed with CleaRecon DL option are evaluated as clearer than the conventional CBCT images."
- Clinicians were asked to assess "how it impacts their confidence in image interpretation," but the specific effect size or improvement in confidence wasn't quantified.
- No hidden or artificially created structures were identified, indicating perceived safety and reliability.
- Therefore, while it showed a significant improvement in perceived image clarity, it did not provide a quantitative effect size for human reader improvement (e.g., in AUC, sensitivity, or specificity) with AI assistance.
6. If a standalone (i.e. algorithm only without human-in-the loop performance) was done
- Yes, a standalone performance evaluation of the algorithm was done as part of the "Engineering bench testing." This involved:
- Testing on a segregated test dataset of image pairs (with and without CleaRecon DL) where Mean Absolute Error (MAE) and Structural Similarity Index Measure (SSIM) were computed.
- Testing on phantoms where MAE and Standard Deviation (SD) were computed relative to a reference (without simulated artifacts).
- These tests directly assessed the algorithm's capability to reduce artifacts and improve image metrics without human interaction.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- Engineering Bench Testing:
- For the image pair comparison, the ground truth was essentially the "ideal" or "less artifact-laden" image derived from the paired comparison. This is a form of reference-based comparison where the output without the artifact or with the artifact corrected is the standard.
- For phantoms, the ground truth was the known characteristics of the phantom (e.g., absence of artifacts in the reference image).
- Clinical Image Quality Evaluation:
- The ground truth was established through expert evaluation/consensus (13 clinicians evaluating side-by-side images). However, it was focused on subjective image quality and the presence/absence of artifacts, rather than ground truth for a specific diagnosis or outcome.
8. The sample size for the training set
- The text states: "The CleaRecon DL algorithm was trained and qualified using pairs of images with and without streak artifacts." However, the specific sample size of the training set is not provided.
9. How the ground truth for the training set was established
- The ground truth for the training set was established using "pairs of images with and without streak artifacts." This implies that for each image with streak artifacts, there was a corresponding reference image without such artifacts, which allowed the algorithm to learn how to remove them. The method by which these "pairs of images" and their respective "with/without streak artifacts" labels were generated or confirmed is not detailed. It could involve expert labeling, simulation, or other methods.
(254 days)
GE Medical Systems SCS
CardIQ Suite is a non-invasive software application designed to provide an optimized application to analyze cardiovascular anatomy and pathology based on 2D or 3D CT cardiac non contrast and angiography DICOM data from acquisitions of the heart. It provides capabilities for the visualization and measurement of vessels and visualization of chamber mobility. CardIQ Suite also aids in diagnosis and determination of treatment paths for cardiovascular diseases, including coronary artery disease, functional parameters of the heart, heart structures and follow-up for stent placement, bypasses and plaque imaging.
CardIQ Suite provides calcium scoring, a non-invasive software application that can be used with non-contrasted cardiac images to evaluate calcified plaques in the coronary arteries, heart valves and great vessels such as the aorta. The clinician can use the information provided by calcium scoring to monitor the progression of calcium in coronary arteries over time, and this information may aid the clinician in their determination of the prognosis of cardiac disease.
CardIQ Suite is a non-invasive software application designed to work with DICOM CT data acquisitions of the heart. It is a collection of tools that provide capabilities for generating measurements both automatically and manually, displaying images and associated measurements in an easy-to-read format and tools for exporting images and measurements in a variety of formats.
CardIQ Suite provides an integrated workflow to seamlessly review calcium scoring and coronary CT angiography (CCTA) data. Calcium Scoring has the capability to automatically segment and label the calcifications within the coronary arteries, and then automatically compute a total and per territory calcium score. The calcium segmentation/labeling is using a new deep learning algorithm. The calcium scoring is based on the standard Agatston/Janowitz 130 (AJ 130) and Volume scoring methods for the segmented calcific regions. The software also provides the users a manual calcium scoring capability that allows them to edit (add/delete or update) auto scored lesions. It also allows the user to manually score calcific lesions within coronary arteries, aorta, aortic valve and mitral valve as well as other general cardiac structures. Calcium scoring offers quantitative results in the AJ 130 score, Volume and Adaptive Volume scoring methods.
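The AJ 130 (Agatston) method referenced above is a published standard: each lesion's per-slice area in mm² is weighted by a factor determined by its peak attenuation (1 for 130–199 HU, 2 for 200–299, 3 for 300–399, 4 for 400 HU and above), and the weighted areas are summed. A minimal sketch of that standard method, not GE's implementation:

```python
def agatston_weight(peak_hu):
    """Density weighting factor from the lesion's peak HU on a slice,
    per the standard Agatston/Janowitz 130 method."""
    if peak_hu >= 400:
        return 4
    if peak_hu >= 300:
        return 3
    if peak_hu >= 200:
        return 2
    if peak_hu >= 130:
        return 1
    return 0  # below the 130 HU detection threshold

def agatston_score(lesion_slices):
    """Sum area (mm^2) x weight over each lesion's per-slice components.
    `lesion_slices` is a list of (area_mm2, peak_hu) tuples."""
    return sum(area * agatston_weight(peak) for area, peak in lesion_slices)
```

The Volume scoring method mentioned alongside it instead sums the calcified voxel volume directly, without the density weighting.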
Calcium Scoring results can be exported as DICOM SR to assist with integration into structured reporting templates. Images can be saved and exported for sharing with referring physicians, incorporating into reports and archiving as part of the CT examination.
The Multi-Planar Reformat (MPR) Cardiac Review and Coronary Review steps provide an interactive toolset for review of cardiac exams. Coronary CTA datasets can be reviewed utilizing the double oblique angles to visually track the path of the coronary arteries as well as to view the common cardiac chamber orientations. Cine capability for multi-phase data may be useful for visualization of cardiac structures in motion such as chambers, valves and arteries; automatic tracking and labeling allow a comprehensive analysis of the coronaries. Distance measurement and ROI tools are available for quantitative evaluation of the anatomy.
Based on the provided text, here is a description of the acceptance criteria and the study that proves the device meets them:
Device: CardIQ Suite (K233731)
Functionality being assessed: Automated Heart Segmentation, Coronary Tree Segmentation, Coronary Centerline Tracking, and Coronary Artery Labeling (all utilizing new deep learning algorithms).
1. Table of Acceptance Criteria and Reported Device Performance
Feature / Metric | Acceptance Criteria | Reported Device Performance |
---|---|---|
Automated Outputs Acceptability (Reader Study) | Acceptable by readers for greater than 90% of exams which had good image quality (based on Likert Scales and Additional Grading Scales). | The automated outputs provided by the Heart Segmentation, Coronary Tree Segmentation, Coronary Centerline tracking and Coronary Labeling algorithms incorporated in the subject device CardIQ Suite were scored to be acceptable by the readers for greater than 90% of the exams which had good image quality. |
Algorithm Validation (Bench Testing) | Algorithm successfully passes the defined acceptance criteria (specific criteria not detailed in the provided text, but implied for each of the four new deep learning algorithms: heart segmentation, coronary segmentation, coronary centerline tracking, and coronary labeling). | The result of the algorithm validation showed that the algorithm successfully passed the defined acceptance criteria. |
2. Sample size used for the test set and the data provenance
- Test Set (Reader Study): A "sample of clinical CT images" was used. The exact number of cases is not specified.
- Test Set (Bench Testing): A "database of retrospective CT exams" was used. The exact number of cases is not specified.
- Data Provenance: The text does not explicitly state the country of origin. The bench testing data is described as "representative of the clinical scenarios where CardIQ Suite is intended to be used," suggesting it covers relevant acquisition protocols and clinical indicators. Both studies are retrospective ("retrospective CT exams" for bench testing and "sample of clinical CT images" for the reader study implying pre-existing data).
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Number of Experts: A "reader study evaluation was performed," indicating multiple readers. The exact number is not explicitly stated.
- Qualifications of Experts: The text refers to them as "readers." Their specific qualifications (e.g., "radiologist with 10 years of experience") are not detailed.
4. Adjudication method for the test set
The reader study used "Likert Scales and Additional Grading Scales" for evaluation. The specific adjudication method (e.g., 2+1 or 3+1 consensus) for establishing a definitive ground truth or resolving discrepancies among readers is not detailed in the provided text. Outputs were "scored to be acceptable by the readers," which suggests individual reader ratings, or possibly a simple majority or threshold, rather than formal consensus adjudication.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
Yes, a reader study was performed, which shares features of an MRMC design in that multiple readers evaluated multiple cases.
The study did not directly measure human reader improvement with AI vs. without AI assistance quantitatively (e.g., AUC increase). Instead, it focused on the acceptability of the AI-generated outputs.
However, the conclusion states a perceived improvement in workflow efficiency: "Based on the reader study evaluation, we conclude that the automation of Heart Segmentation, Coronary Tree Segmentation, Coronary Centerline Tracking and Coronary Artery Labeling provides an improvement in workflow efficiency when compared to the predicate and reference devices wherein these functionalities were performed manually by the user or using traditional algorithms."
The effect size (quantification of improvement) in terms of reader diagnostic performance is not provided, only the qualitative statement about workflow efficiency and the acceptability of the AI's outputs.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done
Yes. A standalone evaluation of the algorithms was performed: "Engineering has performed bench testing for the four newly introduced deep learning algorithms... The result of the algorithm validation showed that the algorithm successfully passed the defined acceptance criteria." This bench testing assesses the algorithms' outputs against predefined criteria, independent of human interaction.
7. The type of ground truth used
- For the Reader Study: The ground truth for evaluating the acceptability of the automated outputs was based on the "scores" given by human "readers" using Likert Scales and Additional Grading Scales. This is a form of expert consensus/reader assessment of the AI's output quality.
- For the Bench Testing for Algorithm Validation: The text states "the algorithm successfully passed the defined acceptance criteria". While the exact nature of this "defined acceptance criteria" is not specified, it would typically involve comparing algorithm output to a reference standard, which could be expert-annotated ground truth, a pre-established gold standard, or other quantitative metrics. The document does not specify if it was pathology, outcomes data, or expert consensus. It likely involved expert-derived annotations or quantitative metrics on the retrospective CT exams.
8. The sample size for the training set
The sample size for the training set is not provided in the given text.
9. How the ground truth for the training set was established
The method for establishing ground truth for the training set is not provided in the given text.
(66 days)
GE Medical Systems SCS
BreView is a tool to aid clinicians with the review of multi-parametric breast magnetic resonance (MR) images. The combination of acquired images, reconstructed images, annotations, and measurements performed by the clinician are intended to provide the referring physician with clinically relevant information that may aid in diagnosis and treatment planning.
BreView is a dedicated advanced visualization review and post-processing tool that facilitates optimized post-processing of MR breast data review and assessment workflows for radiologists - including organizing images and composing reports. Adding the techniques of automatic motion correction and subtraction improves the review process. Software functionalities include:
- Ability to load 2D, 3D and 4D MR datasets as well as DICOM Secondary Captures (SCPT)
- Optimized display of multi-parametric images within a dedicated layout
- Display customization: ability to choose layout, orientation, rendering mode
- Guided workflows for reviewing and processing MR breast exams
- Automated motion correction and/or subtraction of multi-phase datasets
- Multi-planar reformats and maximum intensity projections (MIPs)
- Semi-automated segmentation and measurement tools
- Graph view for multi-phase datasets
- Save and export capabilities through DICOM Secondary Captures
- Data export in the form of a summary table
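The subtraction and MIP functionalities listed above can be sketched generically in a few lines; this is an illustration on synthetic data, not GE's implementation:

```python
import numpy as np

# Illustrative sketch only (not GE's implementation): subtraction of
# multi-phase volumes and a maximum intensity projection (MIP), two of the
# listed BreView functionalities, demonstrated on synthetic data.
rng = np.random.default_rng(0)
pre = rng.normal(100.0, 5.0, size=(16, 64, 64))   # pre-contrast volume (z, y, x)
post = pre.copy()
post[8, 30:34, 30:34] += 80.0                     # simulated enhancing region

subtraction = post - pre                          # keeps only the enhancement
mip = subtraction.max(axis=0)                     # axial MIP of the subtraction

print(subtraction.max(), mip.shape)
```

On real data, subtraction is typically applied after motion correction so that residual patient motion does not masquerade as enhancement, which matches the document's pairing of "motion correction and/or subtraction."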
The provided text does not contain detailed information about specific acceptance criteria or a comprehensive study that proves the device meets acceptance criteria. The document is primarily a 510(k) premarket notification, focusing on demonstrating substantial equivalence to a predicate device rather than presenting a detailed performance study with specific acceptance metrics.
However, based on the type of information typically expected for such a submission and the limited details provided, I can infer some aspects and highlight what is missing.
Here's an analysis based on the provided text, structured to address your questions as much as possible, while also noting where information is explicitly not present:
Context: The device, "BreView," is a medical diagnostic software intended to aid clinicians in reviewing multi-parametric breast MR images. Its core functionalities include image organization, display customization, automated motion correction and/or subtraction, multi-planar reformats, semi-automated segmentation and measurement tools, graph view, and data export. It is marketed as a Class II device.
1. Table of Acceptance Criteria and Reported Device Performance
Critique: The document does not provide a specific table of acceptance criteria with corresponding performance metrics. It mentions "Performance testing (Verification, Validation)" and "Bench testing" but lacks quantifiable results against predefined acceptance thresholds.
However, we can infer some criteria from the "Comparison" table against the predicate device, although these are qualitative comparisons rather than quantitative performance data.
Feature/Functionality | Acceptance Criteria (Inferred/Generic) | Reported Device Performance (as described in the document) |
---|---|---|
Indications for Use | Equivalent or narrower than predicate. | "Equivalent. Both the predicate and proposed device are intended to display and process multi-parametric MR images... BreView is intended for breast exams only, thus BreView has a narrower indication for use than READY View." |
Computational Platform | Equivalent to predicate (AW Server). | "Equivalent. Both applications are server based and accessible through connected computer." |
Compatible MR Series | DICOM compliant, supporting 2D, 3D, 4D. | "Identical. DICOM compliant MR series including 2D, 3D and 4D MR series." Explicitly lists types of MR exams (T1 DCE, T1, T2, DWI, ADC maps). |
Non-rigid registration (NRR) | Identical to predicate's method for motion correction. | "Identical. The non-rigid registration uses the same method as that of Integrated Registration (K093234) for the registration of MR series." "Non-Rigid Registration algorithm was tested and found to be successfully implemented." |
Image Subtraction | Equivalent functionality to predicate, with added flexibility. | "Equivalent. Both proposed and predicate devices enable image subtraction although the proposed device allows the user to apply image subtraction without NRR registration while the predicate only allows subtraction after registration." |
Multi-planar reformats & MIP | Equivalent rendering methods to predicate. | "Equivalent. Imaging Fabric rendering methods are used in cleared applications like CardIQ Suite (K213725)." |
Smart Brush Tool (Segmentation) | Equivalent or better semi-automated segmentation. | "Equivalent, both the predicate and proposed device include semi-automated contouring tools... End user evaluation of the brush revealed that its behavior was equivalent or better to that of AutoContour (VV Apps K041521)." |
Graph View | Equivalent capability for temporal variation display. | "Equivalent. Both applications have the capability to display in a graphical form the temporal variation of pixel values in a temporal series." |
Data Export | Equivalent DICOM Secondary Capture export. | "Equivalent. Both the predicate and proposed device have capabilities to export the information displayed in the application." |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: Not specified. The document mentions "Bench testing was conducted on a database of exams representative of the clinical scenarios where BreView is intended to be used, with consideration of acquisition protocols and clinical indicators." However, no numerical sample size (e.g., number of cases, number of images) is provided for this test set.
- Data Provenance: Not specified. There is no mention of the country of origin of the data or whether the data was retrospective or prospective.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- Number of Experts: Not specified.
- Qualifications of Experts: Not specified. The document mentions "End user evaluation of the brush," implying radiologists or similar clinical professionals, but their specific qualifications or number are not detailed.
4. Adjudication Method for the Test Set
- Adjudication Method: None stated. The document does not describe any specific adjudication method (e.g., 2+1, 3+1 consensus) used for establishing ground truth or evaluating performance.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- MRMC Study Conducted: No. The document does not describe a formal MRMC study where human readers' performance with and without AI assistance was evaluated. The evaluation appears to be primarily focused on the device's technological equivalence rather than its impact on human reader effectiveness.
- Effect Size of Improvement: Not applicable, as no MRMC study was described.
6. Standalone (Algorithm Only Without Human-in-the-Loop) Performance
- Standalone Performance Evaluation: Implied, but not explicitly quantified with metrics. The statement, "BreView Non-Rigid Registration algorithm was tested and found to be successfully implemented," suggests a standalone evaluation of this specific algorithm. However, no quantitative performance metrics (e.g., accuracy, precision, dice score for segmentation, registration error) are provided for any standalone algorithm. The "equivalence or better" statement for the Smart Brush tool is a comparative qualitative assessment based on "End user evaluation," not a formal standalone study result.
7. Type of Ground Truth Used
- Type of Ground Truth: The document does not explicitly define how ground truth was established for "bench testing." For functionalities like non-rigid registration and image subtraction, the "ground truth" might be implied by successful implementation against a known good algorithm (the predicate's method). For the "Smart Brush tool," the ground truth seems to be based on expert (end-user) evaluation comparing its behavior to the predicate's tool. There's no mention of pathology, clinical outcomes data, or a formal consensus process.
8. Sample Size for the Training Set
- Sample Size for Training Set: Not specified. The document only mentions "Bench testing was conducted on a database of exams representative of the clinical scenarios," which refers to a test set, not a training set. Given that this is a 510(k) submission and the device is presented as applying "the same fundamental scientific technology as its predicate device," it is possible that new training data (in the context of deep learning) was not a primary focus of this submission, if the core algorithms are established or rule-based.
9. How Ground Truth for the Training Set Was Established
- Ground Truth Establishment for Training Set: Not applicable / Not specified. Since no training set or related machine learning components requiring explicit ground truth labeling were discussed in detail, this information is not provided.
(68 days)
GE Medical Systems SCS
The angiographic X-ray systems are indicated for use for patients from newborn to geriatric in generating fluoroscopic and rotational images of human anatomy for cardiovascular, vascular and nonvascular, diagnostic and interventional procedures.
Additionally, with the OR table, the angiographic X-ray systems are indicated for use in generating fluoroscopic and rotational images of human anatomy for image-guided surgical procedures.
The OR table is suitable for interventional and surgical procedures.
The intended use and indications for use are unchanged from the predicate device.
GE HealthCare IGS interventional x-ray systems are designed to perform monoplane fluoroscopic x-ray examinations to provide the imaging information needed to perform minimally invasive interventional X-Ray imaging procedures (standard configuration).
Additionally, in the OR configuration (with an OR Table), these systems allow surgery and X-ray image guided surgical procedures to be performed in a hybrid Operating Room.
Allia IGS 3, Allia IGS 5, Allia IGS 7, Allia IGS 7 OR are monoplane GE HealthCare IGS interventional X-Ray system product models. Each product model is designed with a set of components that are combined into different configurations for providing specialized interventional x-ray systems. GE HealthCare IGS interventional x-ray system consists of a C-arm positioner, an x-ray table or the interface to the radiologic table, an x-ray tube assembly, an x-ray power unit with its exposure control unit, an x-ray imaging chain (including a digital detector and an image processing unit).
Allia IGS 5 is a monoplane system (C-arm positioner with L-shaped gantry) and is proposed in IGS 530 configuration with a square digital detector of 31cm (also called 30 cm configuration) or in IGS 520 configuration with a square digital detector of 20.5cm (also called 20 cm configuration). These product configurations are available in Standard configuration with Omega V table or Omega IV table (IGS 520 configuration only); or in OR configuration with Innova-IQ GE HealthCare OR table.
Allia IGS 3 is a monoplane system (C-arm positioner with L-shaped gantry) and is proposed with a square digital detector of 31cm. This product is available in Standard configuration with Omega V table.

Allia IGS 7 is a monoplane system (C-arm with mobile AGV gantry) and is proposed in IGS 730 configuration with a square 31cm digital detector (also called 30 cm configuration). This product configuration is described in sub-configurations: Standard or OR. The Innova-IQ table in the OR configuration is the GE HealthCare OR table.

Allia IGS 7 OR is a monoplane system (C-arm positioner with mobile AGV gantry) and is proposed in IGS 730 OR configuration with a square 31cm digital detector configuration. This is the product model compatible with a qualified configuration of the Maquet Magnus OR table system, and it is provided with a table integration kit. The Magnus OR table configuration compatible with Allia IGS 7 OR includes flat table top configurations for interventional X-ray imaging and surgery procedures, plus an optional universal table top (table top with articulated joints) to enable expansion to surgical procedures requiring advanced patient positioning with X-ray imaging capabilities. A set of Magnus OR table accessories is included in the compatible configuration.
The purpose of this Premarket Notification is for the re-design of the x-ray source assembly.
Allia IGS 3, Allia IGS 5 in IGS 520 and IGS 530 configurations, Allia IGS 7 in IGS 730 configuration, and Allia IGS 7 OR in IGS 730 OR configuration will be improved and will be marketed as Allia IGS 3 Pulse, Allia IGS 5 Pulse, Allia IGS 7 Pulse, and Allia IGS 7 OR Pulse. All these four models may also be designated with the marketing name Allia IGS Pulse.
The provided text does not contain specific acceptance criteria and detailed device performance results relevant to AI/ML software. The document is an FDA 510(k) clearance letter for an interventional fluoroscopic x-ray system (Allia IGS 3, IGS 5, IGS 7, IGS 7 OR). It describes changes to hardware components (e.g., x-ray tube, cooling unit, generator), lists applicable regulatory standards, and generally states that the device successfully completed verification and validation testing, including risk management, software V&V, image quality, and dose performance using standard IQ metrics and QA phantoms. It also mentions "additional engineering bench testing... to substantiate the quantitative performance claims related to the new x-ray source assembly" and "testing... to demonstrate the overall imaging performance... using a wide variety of anthropomorphic phantoms."
However, none of this information is presented in the format of specific acceptance criteria (e.g., sensitivity, specificity, accuracy thresholds) for an AI/ML algorithm or detailed device performance metrics against those criteria. Furthermore, there is no mention of a study involving human readers, comparative effectiveness studies with or without AI, or the collection of ground truth data for a specific diagnostic task relevant to an AI/ML component.
Therefore, I cannot fulfill your request to describe acceptance criteria and the study that proves the device meets them in the context of AI/ML software performance based on the provided text. The document refers to the overall system's compliance with safety and performance standards for an X-ray system, not the performance of an AI/ML component.
(241 days)
GE Medical Systems SCS
Spine Auto Views is a non-invasive image analysis software package which may be used in conjunction with CT images to aid in the automatic generation of anatomically focused multi-planar reformats and automatically export results to predetermined DICOM destinations.
Spine Auto Views assists clinicians by providing anatomically focused reformats of the spine, with the ability to apply anatomical labels of the vertebral bodies and intervertebral disc spaces.
Spine Auto Views may be used for multiple care areas and is not specific to any disease state. It can be utilized for the review of various types of exams including trauma, oncology, and routine body.
Spine Auto Views is a software analysis package designed to generate batch reformats and apply labels to the spine. It is intended to streamline the process of generating clinically relevant batch reformat outputs that are requested for many CT exam types.
Spine Auto Views can generate, automatically, patient specific, anatomically focused spine reformats. Spine Auto Views brings a state-of-the-art deep learning algorithm that generates oblique axial reformats, appropriately angled through each disc space without the need for a user interface and human interaction. 3D curved coronal and curved sagittal views of the spine as well as traditional reformat planes can all be generated with Spine Auto Views, no manual interaction required. Vertebral bodies and disc spaces can be labeled, and all series networked to desired DICOM destination(s), ready to read. The automated reformats may help in providing a consistent output of anatomically orientated images, labeled, and presented to the interpreting physician ready to read.
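The idea of an oblique axial reformat angled through a disc space can be illustrated generically. Everything below, the `oblique_slice` helper and its geometry, is an assumption for illustration, not the Spine Auto Views algorithm:

```python
import numpy as np
from scipy import ndimage

# Illustrative sketch (an assumption, not the Spine Auto Views algorithm):
# given a disc-plane normal produced by some detector, an oblique reformat
# can be resampled with scipy's map_coordinates.
def oblique_slice(volume, centre, normal, size=32, spacing=1.0):
    """Sample a size x size plane orthogonal to `normal`, centred at `centre`."""
    normal = np.asarray(normal, float)
    normal /= np.linalg.norm(normal)
    u = np.cross(normal, [1.0, 0.0, 0.0])         # first in-plane basis vector
    if np.linalg.norm(u) < 1e-6:                  # normal was parallel to x
        u = np.cross(normal, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(normal, u)                       # second in-plane basis vector
    ii, jj = np.meshgrid(np.arange(size) - size / 2,
                         np.arange(size) - size / 2, indexing="ij")
    pts = (np.asarray(centre, float)[:, None, None]
           + spacing * (ii[None] * u[:, None, None] + jj[None] * v[:, None, None]))
    return ndimage.map_coordinates(volume, pts, order=1, mode="nearest")

# Toy volume whose voxel value equals its axis-0 index: a plane with normal
# (1, 0, 0) through index 16 should sample the constant value 16.
vol = np.tile(np.arange(32.0)[:, None, None], (1, 32, 32))
sl = oblique_slice(vol, centre=(16, 16, 16), normal=(1, 0, 0), size=8)
print(sl.mean())
```

In a real pipeline the normal would come from the disc-detection step the document describes, one reformat per disc space, batched to the DICOM destination.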
Here's an analysis of the acceptance criteria and study proving the device meets them, based on the provided text:
Acceptance Criteria and Device Performance Study for Spine Auto Views
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria Definition | Reported Device Performance (Spine Auto Views) |
---|---|
Algorithm Capability to Automatically Detect Intervertebral Discs: Successful detection of position and orientation of intervertebral discs. | Passed. The algorithm successfully passed the defined acceptance criteria for automatically detecting the position and orientation of intervertebral discs using a database of retrospective CT exams. |
User Acceptance of Oblique Axial Reformats: Reader acceptance of automatically generated oblique axial reformats. | Greater than 95% of the time for all readers. The reader evaluation concluded that Spine Auto Views oblique axial reformats generated user-acceptable results over 95% of the time for all readers. |
User Acceptance of Curved Coronal and Curved Sagittal Reformats: Reader assessment of automatically generated curved coronal and curved sagittal batch reformats compared to standard views. | Assessed by readers. The batch reformats of curved coronal and curved sagittal views were assessed by readers by comparing them with corresponding standard coronal and sagittal views. (Specific success rate not quantified in the provided text, but implied as satisfactory given the overall conclusion). |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: A database of retrospective CT exams for algorithm validation and a sample of clinical CT images for the reader study. The exact number of exams/images for each is not specified in the document.
- Data Provenance: The document states that the database of exams for algorithm validation was "representative of the clinical scenarios where Spine Auto Views is intended to be used, with consideration of acquisition parameters and patient characteristics." No specific country of origin is mentioned, but the mention of "retrospective CT exams" confirms the nature of the data.
3. Number of Experts and Qualifications for Ground Truth
The document does not explicitly state the number of experts or their specific qualifications (e.g., "radiologist with 10 years of experience") used to establish the ground truth for the test set.
4. Adjudication Method for the Test Set
The document does not specify an adjudication method (e.g., 2+1, 3+1). For the reader study, it indicates that readers assessed the reformats, but it doesn't detail how discrepancies or consensus were handled if multiple readers were involved in rating the same case.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Was an MRMC study done? A reader study was performed to assess user acceptance, which is a form of MRMC study in that multiple readers evaluated cases. However, it was not explicitly a "comparative effectiveness study" with and without AI assistance as described in the prompt.
- Effect Size (AI vs. No AI): The study focused on the acceptance of the AI-generated reformats, implying a comparison against traditional manual generation (which would be "without AI assistance" for reformats). The text states that "Spine Auto Views oblique axial reformats generates user acceptable results greater than 95% of the time for all readers." This indicates a high level of acceptance for the AI-generated images, suggesting a positive effect compared to the traditional manual process, which the AI aims to streamline and automate. However, a quantitative effect size in terms of clinical improvement or efficiency gain compared to a "without AI" baseline is not provided. The device's primary benefit is automation ("no manual interaction required"), implying an improvement in workflow efficiency and consistency over manual methods.
6. Standalone (Algorithm Only) Performance
- Was standalone performance done? Yes, an engineering validation of the Spine Auto Views algorithm's capability to automatically detect the position and orientation of intervertebral discs was performed as a standalone assessment. This evaluated the algorithm's performance independent of human interaction or interpretation in a clinical setting.
7. Type of Ground Truth Used
- Algorithm Validation: The ground truth for the algorithm validation (disc detection) seems to have been established through a reference standard derived from the "database of retrospective CT exams." While the exact method (e.g., manual expert annotation, pathology correlation) is not explicitly stated, it implies a reliable, established truth against which the algorithm's detections were compared.
- Reader Study: For the reader study, the "user acceptable results" and comparison with "corresponding standard coronal and standard sagittal views" suggest a ground truth based on expert consensus/clinical acceptability by the evaluating readers.
8. Sample Size for the Training Set
The document does not specify the sample size for the training set used for the deep learning algorithm. It only mentions a "database of retrospective CT exams" for validation.
9. How Ground Truth for Training Set Was Established
The document does not provide details on how the ground truth for the training set was established. It only refers to the validation set and its ground truth.
(120 days)
GE Medical Systems SCS
FlightPlan for Embolization is a post processing software package that helps the analysis of 3D X-ray angiography images. Its output is intended to be used by physicians as an adjunct means to help visualize vasculature during the planning phase of embolization procedures. FlightPlan for Embolization is not intended to be used during therapy delivery.
The output includes segmented vasculature, and selective display of proximal vessels from a reference point determined by the user. User-defined data from the 3D X-ray angiography images may be exported for use during the guidance phase of the procedure. The injection points should be confirmed independently of FlightPlan for Embolization prior to therapy delivery.
FlightPlan for Embolization is a post-processing, software-only application using 3D X-ray angiography images (CBCT) as input. It helps clinicians visualize vasculature to aid in the planning of endovascular embolization procedures throughout the body.
A new option, called AI Segmentation, was developed from the modifications to the predicate device, GE HealthCare's FlightPlan for Embolization [K193261]. It includes two new algorithms. This AI Segmentation option is what triggered this 510(k) submission.
The software process 3D X-ray angiography images (CBCT) acquired from GE HealthCare's interventional X-ray system [K181403], operates on GEHC's Advantage Workstation (AW) [K110834] platform and AW Server (AWS) [K081985] platform, and is an extension to the GE HealthCare's Volume Viewer application [K041521].
FlightPlan for Embolization is intended to be used during the planning phase of embolization procedures.
The primary features/functions of the proposed software are:
- Semi-automatic segmentation of vasculature from a starting point determined by the user, when AI Segmentation option is not activated;
- Automatic segmentation of vasculature powered by a deep-learning algorithm, when AI Segmentation option is activated;
- Automatic definition of the root point powered by a deep-learning algorithm, when AI Segmentation option is activated;
- Selective display (Live Tracking) of proximal vessels from a point determined by the user's cursor;
- Ability to segment part of the selected vasculature;
- Ability to mark points of interest (POI) to store cursor position(s);
- Save results and optionally export them to other applications such as GEHC's Vision Applications [K092639] for 3D road-mapping.
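The Live Tracking feature above can be modeled generically as a walk from the branch under the cursor back to the root of a vessel tree. The tree, the branch names, and the `proximal_vessels` helper below are hypothetical, not GE's data model:

```python
# Illustrative sketch (an assumption, not GE's implementation): "selective
# display of proximal vessels from a point" modeled as walking from the
# branch under the cursor back to the root of a vessel-tree graph built
# from the segmentation.
parent = {                      # hypothetical toy vessel tree: child -> parent
    "root": None,
    "hepatic": "root",
    "segmental": "hepatic",
    "subsegmental": "segmental",
    "accessory": "hepatic",
}

def proximal_vessels(branch):
    """Return the branches from the root down to `branch`, proximal-first."""
    path = []
    while branch is not None:
        path.append(branch)
        branch = parent[branch]
    return list(reversed(path))

print(proximal_vessels("subsegmental"))
# ['root', 'hepatic', 'segmental', 'subsegmental']
```

Highlighting exactly this root-to-cursor path is what makes the display "selective": distal branches and sibling branches are suppressed while the feeding route stays visible.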
Here's a breakdown of the acceptance criteria and the study details for the GE Medical Systems SCS's FlightPlan for Embolization device, based on the provided text:
Acceptance Criteria and Device Performance
Feature / Algorithm | Acceptance Criteria | Reported Device Performance |
---|---|---|
Vessel Extraction | 90% success rate | 93.7% success rate |
Root Definition | 90% success rate | 95.2% success rate |
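These figures can be sanity-checked: with n = 207, the reported percentages correspond to roughly 194 and 197 successes (back-calculated assumptions; the submission reports only the rates). The sketch below recomputes the rates against the 90% criterion and adds a Wilson lower confidence bound, a stricter framing than the point-estimate criterion the document states:

```python
import math

# Hedged sketch: checking the reported success rates against the 90% criterion.
# The document gives n = 207 and the percentages; the success counts below
# (194, 197) are back-calculated assumptions, not stated in the submission.
def wilson_lower(successes, n, z=1.96):
    """Wilson score 95% lower confidence bound for a binomial proportion."""
    p = successes / n
    centre = p + z**2 / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (centre - margin) / (1 + z**2 / n)

n = 207
for name, successes in [("vessel extraction", 194), ("root definition", 197)]:
    rate = successes / n
    assert rate > 0.90  # the stated acceptance criterion (point estimate)
    print(f"{name}: {rate:.1%}, Wilson 95% lower bound {wilson_lower(successes, n):.1%}")
```

Note that the submission's criterion is on the point estimate only; whether a lower confidence bound also clears 90% is a separate, stricter question that the document does not address.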
Study Details
1. Sample Size Used for the Test Set and Data Provenance:
- Test Set Sample Size: 207 contrast-injected CBCT scans, each from a unique patient.
- Data Provenance: The scans were acquired during the planning of embolization procedures from GE HealthCare's interventional X-ray system. The text indicates that these were from "clinical sites" and were "representative of the intended population" but does not specify countries of origin. The study appears to be retrospective, using existing scans.
2. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications:
- Vessel Extraction: 3 board-certified radiologists.
- Root Definition: 1 GEHC advanced application specialist.
3. Adjudication Method for the Test Set:
- Vessel Extraction: Consensus of 3 board-certified radiologists. (Implies a qualitative agreement, not a specific quantitative method like 2+1).
- Root Definition: The acceptable area was manually defined by the annotator (the GEHC advanced application specialist).
4. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done:
- No, an MRMC comparative effectiveness study was not explicitly described in terms of human readers improving with AI vs. without AI assistance. The non-clinical testing focused on the algorithms' performance against ground truth and the clinical assessment used a Likert scale to evaluate the proposed device with the AI option, rather than a direct comparison of human reader performance with and without AI.
5. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done:
- Yes, a standalone performance evaluation was conducted for both the vessel extraction and root definition algorithms. The reported success rates of 93.7% and 95.2% are measures of the algorithms' performance against established ground truth without a human in the loop for the primary performance metrics.
6. The Type of Ground Truth Used:
- Vessel Extraction: Expert consensus (3 board-certified radiologists).
- Root Definition: Manual definition by an expert (GEHC advanced application specialist), defining an "acceptable area."
7. The Sample Size for the Training Set:
- The document states that "contrast injected CBCT scans acquired from GE HealthCare's interventional X-ray system [K181403] were used for designing and qualifying the algorithms." However, it does not specify the sample size for the training set. It only mentions that a test set of 207 scans was "reserved, segregated, and used to evaluate both algorithms."
8. How the Ground Truth for the Training Set Was Established:
- The document does not explicitly state how the ground truth for the training set was established. It only details the ground truth establishment for the test set.
(60 days)
GE Medical Systems SCS
DynamicIQ enables visualization and quantification of PET tracer pharmacokinetics based on whole body dynamic PET images. PET tracer pharmacokinetics include the physiological parameters of tracer uptake rate and total blood distribution volume (Vd), which allow for analyzing tracer accumulation over time, providing additional information that may help in the evaluation of SUV measurements on PET static images. The output of DynamicIQ is intended to be used by appropriately trained healthcare professionals as adjunct information for the review, analysis, and communication of PET static images for diagnosis, staging, treatment planning and monitoring. The parametric images should always be considered in addition to the conventional static PET images, which are the primary source to assist with diagnosis.
DynamicIQ is a post-processing application for visualizing and quantifying dynamic and static PET DICOM series. The software provides FDG tracer pharmacokinetics by generating parametric images from dynamic PET scans. The parametric images include the physiological parameters of tracer uptake rate (Ki), metabolic rate of FDG, and total blood distribution volume (Vd). In addition to FDG tracer pharmacokinetics, the software also performs the conventional review, analysis and communication of PET static images, CT and MR images.
DynamicIQ assists with the clinical workflow by providing adjunct information on FDG tracer pharmacokinetics along with conventional PET static images for cross reference. This adjunct information also helps with analyzing images that could show variability of quantitative measurements due to differences in uptake time, patient body size, and blood glucose levels, leading to better characterization of tracer uptake than SUV alone.
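The submission does not describe the estimation method, but parametric Ki and Vd values of the kind described are commonly obtained with Patlak graphical analysis: the tissue-to-plasma activity ratio is regressed against the normalized integrated plasma activity, and the slope of the linear phase gives the uptake rate while the intercept gives the distribution volume. A minimal sketch on synthetic curves (all values illustrative, not from the device):

```python
import numpy as np

def patlak_fit(t, cp, ct, t_star=20.0):
    """Estimate Ki (slope) and Vd (intercept) from a Patlak plot.

    t      : frame mid-times (min)
    cp     : plasma input function C_p(t)
    ct     : tissue time-activity curve C_t(t)
    t_star : start of the linear (steady-state) phase
    """
    # Cumulative integral of the input function (trapezoidal rule)
    int_cp = np.concatenate(([0.0], np.cumsum(np.diff(t) * (cp[1:] + cp[:-1]) / 2)))
    x = int_cp / cp          # "Patlak time"
    y = ct / cp              # tissue-to-plasma ratio
    mask = t >= t_star       # fit only the linear tail
    ki, vd = np.polyfit(x[mask], y[mask], 1)
    return ki, vd

# Synthetic check: build curves from a known Ki and Vd, then recover them.
t = np.linspace(0.5, 60, 120)
cp = 10.0 * np.exp(-0.05 * t) + 1.0           # toy plasma curve
int_cp = np.concatenate(([0.0], np.cumsum(np.diff(t) * (cp[1:] + cp[:-1]) / 2)))
ki_true, vd_true = 0.03, 0.5
ct = ki_true * int_cp + vd_true * cp          # irreversible-uptake limit
ki, vd = patlak_fit(t, cp, ct)
print(round(ki, 3), round(vd, 3))             # -> 0.03 0.5
```

In voxel-wise use, the same fit would be repeated per voxel to produce the Ki and Vd parametric images the document describes.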
The provided text describes the regulatory submission for DynamicIQ, a post-processing and visualization software for PET images. It mentions a study for performance testing but does not provide specific details on the acceptance criteria, reported performance values, or the study design.
Therefore, I cannot fulfill your request for:
- A table of acceptance criteria and reported device performance.
- Sample sizes for test set, data provenance, number of experts for ground truth, expert qualifications, or adjudication methods for the test set.
- Information on a multi-reader multi-case (MRMC) comparative effectiveness study or standalone performance.
- Specifics on the type of ground truth used.
- Sample size or ground truth establishment for the training set.
The document states that "DynamicIQ has successfully completed the design control testing per GE's quality system. It was designed and will be manufactured under the Quality System Regulations of 21CFR 820 and ISO 13485. No additional hazards were identified, and no unexpected test results were observed." It also mentions "Performance testing (Verification, Validation)" and "Safety Testing (Verification)" were applied. This confirms that studies were conducted to ensure the device meets its design requirements and safety standards, but no specific details of those studies are included in this FDA 510(k) summary.
(47 days)
GE Medical Systems SCS
Vision 2, EVARVision, TrackVision 2 and HeartVision 2 software applications are intended to enable users to load 3D datasets and overlay and register in real time these 3D datasets with radioscopic or radiographic images of the same anatomy in order to support catheter/device guidance during interventional procedures.
Structures of interest and estimated dimensions can be overlaid on the radioscopic or radiographic images. Image processing can be applied to enhance the display of such images. This information is intended to assist the physician during interventional procedures.
The Stereo 3D option enables physicians to visualize and localize needles, points, and segments on a 3D model/space using a stereotactic reconstruction of radioscopic or radiographic images at a significantly lower dose than use of a full cone beam CT acquisition. This information is intended to assist the physician during interventional procedures.
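Stereotactic reconstruction of this kind is conventionally a triangulation problem: a point identified in two radiographic views with known geometry determines a single 3D position. A minimal sketch using the standard linear (DLT) triangulation method (projection matrices and values are illustrative, not the device's calibration):

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Recover a 3D point from its projections in two views (linear DLT):
    stack the rows u*P[2]-P[0] and v*P[2]-P[1] for each view and take the
    null vector of the resulting homogeneous system via SVD.

    P1, P2   : 3x4 projection matrices of the two views
    uv1, uv2 : (u, v) pixel coordinates of the point in each view
    """
    A = np.array([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]          # dehomogenize

def proj(P, X):
    """Project a 3D point with a 3x4 projection matrix."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two toy views: a reference camera and one translated along x.
K = np.diag([1000.0, 1000.0, 1.0])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-100.0], [0.0], [0.0]])])
X_true = np.array([10.0, 20.0, 500.0])
X_hat = triangulate(P1, P2, proj(P1, X_true), proj(P2, X_true))
print(np.allclose(X_hat, X_true))  # -> True
```

Two radioscopic views at different gantry angles thus suffice to localize a needle tip in 3D, which is why the dose can stay far below a full cone beam CT acquisition.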
Vision Applications are a group of software applications called Vision 2, EVARVision, TrackVision 2 and HeartVision 2 that share the same core functionalities to target different clinical procedures.
Vision Applications load 3D datasets previously acquired from an acquisition modality (CT, MR or CBCT) and prepared with Volume Viewer application [K041521]. They overlay and register in real-time these 3D datasets with the 2D X-ray live images acquired from the GE Interventional X-ray system [K181403] (called IGS X-ray system in the rest of the document) to help support localization and guidance of catheters / devices during interventional procedures, in conjunction with primary images, native live 2D X-ray images.
Vision Applications help physicians to perform interventional procedures by providing enhanced image quality and additional 3D information instead of 2D X-ray live images alone.
Vision Applications operate on GE's Advantage Workstation (AW) [K110834] platform and communicates with the IGS X-ray system [K181403] for receiving the live X-ray images.
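The 2D/3D overlay described above amounts to applying a rigid registration to the 3D dataset and projecting it through the C-arm's cone-beam geometry onto the live image plane. A minimal pinhole-model sketch (geometry and parameter values are illustrative assumptions, not GE's implementation):

```python
import numpy as np

def project(points_3d, R, t, focal, center):
    """Project 3D points (N x 3, volume coordinates) onto a 2D detector
    modeled as a pinhole camera.

    R, t   : rigid registration (rotation, translation) into the camera frame
    focal  : source-to-detector focal length, in pixels
    center : (u0, v0) principal point on the detector
    """
    cam = points_3d @ R.T + t                 # rigid transform
    u = focal * cam[:, 0] / cam[:, 2] + center[0]
    v = focal * cam[:, 1] / cam[:, 2] + center[1]
    return np.stack([u, v], axis=1)

# A point on the optical axis lands on the principal point.
R = np.eye(3)
t = np.array([0.0, 0.0, 1000.0])              # 1 m from the source
pts = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
uv = project(pts, R, t, focal=1200.0, center=(512.0, 512.0))
print(uv[0])                                  # -> [512. 512.]
```

Registration (automatic, manual, or bi-view, as listed below) is the process of finding R and t so the projected 3D structures line up with the live 2D X-ray anatomy.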
The subject device, Vision Applications, was developed through modifications to the primary predicate device, Innova Vision Applications [K092639], including the addition of a new optional feature, "Digital Pen". The Digital Pen option, which triggered this 510(k), was adapted from the reference device, GE's IGS X-ray systems [K181403], where it appears under the name Stenosis Analysis. The Vision Applications also include the Stereo 3D option feature [K152352, secondary predicate].
The primary features/functionalities of the Vision Applications are:
- Digital Pen.
- Overlay of 2D/3D images.
- Reception and display of live 2D images and related information.
- Loading of 3D datasets.
- Review mode.
- Film/Sequence/photo store.
- Display controls for visualization of images: including Zoom/Roam, Rendering, Planning data display, Annotation display, Virtual Collimation, ECG Display, Calcification Visualization Enhancement, display adjustment tools.
- Automatic Registration: including A priori registration and Registration based on Augmented Calibration.
- Manual Registration.
- Bi-view registration.
- User Interface: control from AW and from Tableside.
- 2D Modes.
- Send Angles: including EVAR Angles, Progress View/Bull's eye.
- Stereo 3D.
Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on your provided text:
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria (What was measured) | Reported Device Performance (How the device performed) |
| --- | --- |
| Dimension estimation accuracy of Digital Pen option | Test results met the predefined acceptance criteria. (No specific numerical metrics are provided in the text.) |
| Design Control Testing (overall software quality) | Successfully completed per GE's quality system; no additional hazards or unexpected test results observed. |
| Compliance with NEMA PS 3.1 - 3.20 DICOM Standard | The proposed device complies with this standard. |
| Development under Quality System Regulations | Designed and manufactured under 21 CFR 820 and ISO 13485. |
| Software Development Lifecycle Adherence | Adhered to Requirements Definition, Risk Analysis, Technical Design Reviews, Formal Design Reviews, and the Software Development Lifecycle. |
| Performance Testing (Verification, Validation) | Successfully completed. |
| System Testing (Verification, Validation) | Successfully completed. |
| Software Level of Concern | Moderate level of concern. |
| Safety and Effectiveness (compared to predicates) | No new questions of safety and effectiveness other than those already associated with the predicate devices. |
2. Sample Size for Test Set and Data Provenance
- Sample Size: The text states "a phantom with series of known dimension." It does not provide a specific numerical sample size for the test set.
- Data Provenance: The data was generated through "Engineering bench testing" using a "phantom," indicating it was produced prospectively in a controlled bench environment rather than drawn from patient clinical data. The country of origin is not explicitly stated for this test, but the submitter is GE Medical Systems SCS, located in Buc, France.
3. Number of Experts and Qualifications for Ground Truth
- Number of Experts: The document does not mention the use of experts to establish ground truth for this engineering bench testing. The ground truth for the dimension estimation was based on a "phantom with series of known dimension," implying a pre-defined, objective standard rather than expert consensus on images.
4. Adjudication Method for the Test Set
- Adjudication Method: Not applicable. The ground truth was based on a phantom with known dimensions, not on human interpretation requiring adjudication.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- MRMC Study: No, an MRMC comparative effectiveness study was not performed. The text explicitly states, "The subject of this premarket submission, Vision Applications, did not require clinical studies to support substantial equivalence."
6. Standalone (Algorithm Only) Performance Study
- Standalone Study: Yes, the testing described appears to be a standalone performance study of the "Digital Pen option" specifically focusing on its "dimension estimation accuracy." This was performed using "engineering bench testing" with a "phantom," meaning the algorithm's performance was evaluated against known physical dimensions without human-in-the-loop assistance during the measurement evaluation. The overall device, however, is intended to assist human operators.
7. Type of Ground Truth Used
- Type of Ground Truth: For the "Digital Pen option" dimension estimation testing, the ground truth was objective, pre-defined physical dimensions of a phantom ("a phantom with series of known dimension").
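Verification of an accuracy claim of this kind can be sketched as a comparison of the tool's measurements against the phantom's known dimensions, with a pass/fail tolerance. The values and the 0.5 mm tolerance below are hypothetical illustrations, not figures from the submission:

```python
import numpy as np

def dimension_errors(measured_mm, known_mm):
    """Absolute and relative errors of measured vs. known phantom dimensions."""
    measured = np.asarray(measured_mm, dtype=float)
    known = np.asarray(known_mm, dtype=float)
    abs_err = np.abs(measured - known)
    rel_err = abs_err / known
    return abs_err, rel_err

# Hypothetical bench data: three phantom features of known size.
known = [5.0, 10.0, 20.0]            # ground-truth dimensions (mm)
measured = [5.1, 9.9, 20.2]          # tool measurements (mm)
abs_err, rel_err = dimension_errors(measured, known)
# Example pass/fail rule (illustrative, not from the submission):
print(bool(np.all(abs_err <= 0.5)))  # -> True
```

Because the phantom's dimensions are manufactured to a known specification, this kind of ground truth is objective and needs no expert adjudication, which matches sections 3 and 4 above.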
8. Sample Size for Training Set
- Sample Size: The document does not provide information about the sample size of a training set. This is common when a device is a modification of a predicate and relies primarily on engineering changes and validation rather than a newly trained AI model.
9. How Ground Truth for Training Set was Established
- How Ground Truth was Established: The document does not provide information on how ground truth was established for a training set. This suggests that the development likely leveraged existing algorithms or established principles from the predicate devices and the reference device's "Stenosis Analysis" functionality, rather than requiring a new, extensive labeled training dataset for novel AI model development.