Search Results
Found 640 results
510(k) Data Aggregation
(106 days)
JAK
CT:VQ software is a non-invasive image post-processing technology, using CT lung images to provide clinical decision support for thoracic disease diagnosis and management in adult patients. It utilizes two non-contrast chest CT studies to quantify and visualize ventilation and perfusion.
Quantification and visualizations are provided as DICOM images. CT:VQ may be used when Radiologists, Pulmonologists, and/or Nuclear Medicine Physicians need a better understanding of a patient's lung function and/or respiratory condition.
CT:VQ is a Software as a Medical Device (SaMD) technology, which can be used in the analysis of a paired (inspiratory/expiratory) non-contrast Chest CT. It is designed to measure regional ventilation (V) and regional perfusion (Q) in the lungs.
The Device provides visualization and quantification to aid in the assessment of thoracic diseases. These regional measures are derived from the lung tissue displacement, the lung volume change, and the Hounsfield Units of the paired (inspiratory/expiratory) chest CT.
The Device outputs DICOM images containing the ventilation output and perfusion output, consisting of a series of image slices generated with the same slice spacing as the expiration CT. In each slice, the intensity value of each voxel represents the ventilation or perfusion value, respectively, at that spatial location. An additional information sheet is also generated containing quantitative data, such as lung volume.
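The summary does not show how these voxel-wise outputs would be consumed downstream, but the arithmetic is straightforward: stored voxel values are scaled to physical units (in practice via the DICOM rescale slope/intercept, read with a library such as pydicom) and lung volume follows from the voxel spacing. The sketch below is purely illustrative; all names and numbers are assumptions, not taken from the 510(k) summary.

```python
import numpy as np

def ventilation_stats(slices, slope, intercept, spacing_mm, lung_mask):
    """Convert stored voxel values to ventilation units and estimate lung volume.

    slices           : (nz, ny, nx) array of stored (integer) voxel values
    slope, intercept : assumed DICOM RescaleSlope / RescaleIntercept values
    spacing_mm       : (dz, dy, dx) voxel spacing in millimetres
    lung_mask        : boolean array, True inside the segmented lung
    """
    ventilation = slices.astype(np.float64) * slope + intercept
    voxel_vol_ml = np.prod(spacing_mm) / 1000.0              # mm^3 -> mL
    lung_volume_l = lung_mask.sum() * voxel_vol_ml / 1000.0  # mL -> L
    mean_vent = ventilation[lung_mask].mean() if lung_mask.any() else float("nan")
    return lung_volume_l, mean_vent
```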
Here's a breakdown of the acceptance criteria and the study details for the CT:VQ device, based on the provided FDA 510(k) summary:
Acceptance Criteria and Device Performance
The acceptance criteria for CT:VQ are implicitly demonstrated through its strong performance in clinical studies, showing agreement with established gold standards. While explicit numerical acceptance criteria are not provided in a table format within the summary, the narrative describes the goals of the study:
- Consistency/Agreement with Nuclear Medicine Imaging (SPECT/CT): The device's regional ventilation and perfusion measurements should align well with SPECT/CT findings.
- Correlation with Pulmonary Function Tests (PFTs): CT:VQ metrics should statistically correlate with standard PFTs like DLCO and FEV1/FVC ratio.
- Interpretability and Clinical Actionability: The outputs should be clear, understandable, and useful for clinicians.
- Safety and Effectiveness Profile: The device should have a safety and effectiveness profile similar to the primary predicate device.
Table of Acceptance Criteria and Reported Device Performance (as inferred from the text):
| Acceptance Criterion (Inferred) | Reported Device Performance |
| --- | --- |
| Strong regional agreement with SPECT VQ (Ventilation) | CT:VQ showed strong regional agreement with SPECT VQ across lobar distributions of ventilation. In the Reader Performance Study, clinicians consistently rated CT:VQ outputs as having good to excellent agreement with SPECT across all lung regions. |
| Strong regional agreement with SPECT VQ (Perfusion) | CT:VQ showed strong regional agreement with SPECT VQ across lobar distributions of perfusion. In the Reader Performance Study, clinicians consistently rated CT:VQ outputs as having good to excellent agreement with SPECT across all lung regions. |
| Correlation with Gas Transfer Impairment (DLCO) | Quantitative perfusion heterogeneity metrics derived from CT:VQ demonstrated stronger associations with gas transfer impairment (DLCO) than those derived from SPECT, suggesting improved physiological sensitivity. There was a statistically significant correlation between the CT:VQ and PFT outputs. |
| Correlation with Airway Obstruction (FEV1 and FEV1/FVC % predicted) | Ventilation heterogeneity metrics from CT:VQ correlated well with FEV1 and FEV1/FVC % predicted. There was a statistically significant correlation between the CT:VQ and PFT outputs. |
| Interpretability and Clinical Actionability by Intended Users | The Reader Performance Study affirmed that CT:VQ outputs are interpretable and clinically actionable by intended users. |
| Inter-reader variability similar to SPECT | Inter-reader variability was not significantly different for CT:VQ than for SPECT. |
| Feasibility of generating reliable and consistent data | The clinical studies successfully demonstrated the feasibility of generating valid data that is reliable and consistent with Nuclear Medicine Ventilation imaging results. |
| Safety and effectiveness profile similar to predicate device | Based on the clinical performance, CT:VQ was found to have a safety and effectiveness profile that is similar to the primary predicate device. It also demonstrated the capability to provide information without contrast agents (unlike some alternative perfusion methods). |
| Robustness across various CT inputs | Verification testing demonstrated that the Device was robust within acceptable performance limits across the entire range of inputs (CT scanners, institutions, varying lung volumes, image properties affecting voxel size and SNR). Specific performance limits are not quantified in the summary, but the general claim of robustness is made. |
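The "statistically significant correlation" claims above can, in principle, be verified with a Pearson correlation plus a significance test. The sketch below is a generic illustration (Pearson r with a permutation p-value on synthetic data), not the submission's actual statistical method, which is not described.

```python
import numpy as np

def pearson_with_permutation_p(x, y, n_perm=2000, seed=0):
    """Pearson correlation between two metric vectors, with a
    two-sided permutation p-value as a distribution-free check."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    r = np.corrcoef(x, y)[0, 1]
    rng = np.random.default_rng(seed)
    perm_r = np.array([np.corrcoef(x, rng.permutation(y))[0, 1]
                       for _ in range(n_perm)])
    # +1 in numerator/denominator avoids a p-value of exactly zero
    p = (np.sum(np.abs(perm_r) >= abs(r)) + 1) / (n_perm + 1)
    return r, p
```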
Study Details
Here's a breakdown of the specific information requested about the studies:
1. Sample sizes used for the test set and the data provenance:
- Test Set Sample Sizes:
- Reader Performance Study: n=77
- Standalone Performance Assessment: n=58 (a subset of the overall clinical studies data)
- Data Provenance:
- Country of Origin: Not explicitly stated, but the submission is from 4DMedical Limited in Australia, and the FDA clearance is in the US. The description mentions "clinically-acquired data included paired chest CTs acquired on CT scanners across a range of manufacturers and models and at different institutions, across a diverse range of patients." This suggests multi-institutional data, potentially from various geographic locations, but this is not confirmed.
- Retrospective or Prospective: Not explicitly stated whether the studies were retrospective or prospective. The description "clinical studies were also conducted to demonstrate the safety and efficacy... in the context of clinical care" and comparing with "gold-standard and best practice measures for respiratory diagnosis" often implies retrospective analysis of existing data combined with prospective data collection, but this is not definitive in the text.
2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Number of Experts: Not explicitly stated for establishing ground truth, although for the Reader Performance Study, "clinicians with expertise in thoracic imaging and pulmonary care" were involved in rating the outputs. The implication is that these experts, along with SPECT/CT and PFT results, contributed to the ground truth.
- Qualifications of Experts: "Clinicians with expertise in thoracic imaging and pulmonary care." No specific number of years of experience or board certifications (e.g., radiologist with 10 years of experience) is provided.
3. Adjudication method (e.g. 2+1, 3+1, none) for the test set:
- Adjudication Method: Not explicitly stated. The summary mentions "Inter-reader variability was not significantly different for CT:VQ than for SPECT," which implies multiple readers, but the method for resolving discrepancies or establishing a final ground truth from multiple readers is not detailed.
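Claims such as "inter-reader variability was not significantly different" are commonly quantified with an intraclass correlation coefficient; whether 4DMedical used an ICC (or which form) is not stated in the summary. A minimal ICC(2,1) sketch (two-way random effects, absolute agreement, single rater), shown only to make the concept concrete:

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1) for an (n subjects x k raters) matrix of scores."""
    X = np.asarray(ratings, float)
    n, k = X.shape
    grand = X.mean()
    row_means = X.mean(axis=1)   # per-subject means
    col_means = X.mean(axis=0)   # per-rater means
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # subjects MS
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # raters MS
    mse = np.sum((X - row_means[:, None] - col_means[None, :] + grand) ** 2) \
        / ((n - 1) * (k - 1))                              # residual MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```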
4. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- MRMC Study: A "Reader Performance Study" was conducted with n=77 cases, involving "clinicians with expertise in thoracic imaging and pulmonary care." This aligns with the characteristics of an MRMC study.
- Effect Size of Human Reader Improvement with AI vs. without AI assistance: The summary does not provide an effect size or direct comparison of human reader performance with CT:VQ assistance versus without it. The study focused on assessing:
- Agreement between CT:VQ outputs and SPECT.
- Interpretability and clinical actionability of CT:VQ outputs.
- Inter-reader variability of CT:VQ vs. SPECT.
It does not quantify an improvement in reader accuracy or efficiency due to AI assistance.
5. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done:
- Standalone Performance: Yes, a "Standalone Performance Assessment" was performed with a subset of 58 cases. The findings indicated strong regional agreement between CT:VQ and SPECT VQ measurements and stronger associations of CT:VQ perfusion metrics with DLCO compared to SPECT.
6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Type of Ground Truth: A combination of established clinical diagnostics was used:
- Nuclear Medicine Imaging (Single photon emission computed tomography, SPECT/CT): Used as a "gold-standard and best practice measure" for regional ventilation and perfusion.
- Pulmonary Function Tests (PFTs): Specifically Diffusing capacity of the lung for carbon monoxide (DLCO) and FEV1/FVC ratio, used to correlate with CT:VQ outputs.
- Clinical Diagnosis/Findings: Implied through "Case Studies further illustrated key advantages of CT:VQ... successfully replicated the diagnostic findings of SPECT."
7. The sample size for the training set:
- Training Set Sample Size: Not explicitly stated in the provided text. The summary only mentions the sample sizes for the clinical validation studies (test sets).
8. How the ground truth for the training set was established:
- Training Set Ground Truth Establishment: Not explicitly stated how the ground truth for the training set was established, as the training set size and characteristics are not detailed. Typically, it would involve similar rigorous processes (e.g., expert annotation, gold-standard imaging modalities, clinical outcomes) as the test set, but this information is absent in this document.
(126 days)
JAK
The system is intended to produce cross-sectional images of the body by computer reconstruction of x-ray transmission projection data from the same axial plane taken at different angles. The system may acquire data using Axial, Cine, Helical, Cardiac, and Gated CT scan techniques from patients of all ages. These images may be obtained either with or without contrast. This device may include signal analysis and display equipment, patient and equipment supports, components and accessories.
This device may include data and image processing to produce images in a variety of trans-axial and reformatted planes. Further, the images can be post processed to produce additional imaging planes or analysis results.
The system is indicated for head, whole body, cardiac, and vascular X-ray Computed Tomography applications.
The device output is a valuable medical tool for the diagnosis of disease, trauma, or abnormality and for planning, guiding, and monitoring therapy.
If the spectral imaging option is included on the system, the system can acquire CT images using different kV levels of the same anatomical region of a patient in a single rotation from a single source. The differences in the energy dependence of the attenuation coefficient of the different materials provide information about the chemical composition of body materials. This approach enables images to be generated at energies selected from the available spectrum to visualize and analyze information about anatomical and pathological structures.
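The energy-dependence principle described above is usually formalized as a basis-material decomposition: measured attenuation at two energies yields a 2x2 linear system in two basis-material densities (e.g., water and iodine). The coefficients below are made-up placeholders for illustration, not vendor or reference values.

```python
import numpy as np

# Hypothetical linear attenuation coefficients (1/cm) for two basis
# materials at a low and a high effective energy. Illustrative only.
A = np.array([[0.227, 4.20],    # low energy:  [water, iodine]
              [0.184, 1.10]])   # high energy: [water, iodine]

def decompose(mu_low, mu_high):
    """Solve the 2x2 basis-material system A @ c = mu for the
    equivalent water / iodine density coefficients c."""
    return np.linalg.solve(A, np.array([mu_low, mu_high]))
```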
GSI provides information of the chemical composition of renal calculi by calculation and graphical display of the spectrum of effective atomic number. GSI Kidney stone characterization provides additional information to aid in the characterization of uric acid versus non-uric acid stones. It is intended to be used as an adjunct to current standard methods for evaluating stone etiology and composition.
The CT system is indicated for low dose CT for lung cancer screening. The screening must be performed within the established inclusion criteria of programs/ protocols that have been approved and published by either a governmental body or professional medical society.
This proposed device Revolution Vibe is a general purpose, premium multi-slice CT Scanning system consisting of a gantry, table, system cabinet, scanner desktop, power distribution unit, and associated accessories. It has been optimized for cardiac performance while still delivering exceptional imaging quality across the entire body.
Revolution Vibe is a modified dual energy CT system based on its predicate device, Revolution Apex Elite (K213715). Compared to the predicate, the most notable change in Revolution Vibe is the modified detector design, together with corresponding software changes, which is optimized for cardiac imaging and provides the capability to image the whole heart in a single rotation, the same as the predicate.
Revolution Vibe is an accessible, whole-heart-coverage, full-cardiac-capability CT scanner that also delivers outstanding routine head and body imaging. The detector of Revolution Vibe uses the same GEHC Gemstone scintillator, with 256 x 0.625 mm rows providing up to 16 cm of coverage in the Z direction within a 32 cm scan field of view, and 64 x 0.625 mm rows providing up to 4 cm of coverage in the Z direction within a 50 cm scan field of view. The available gantry rotation speeds are 0.23, 0.28, 0.35, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, and 1.0 seconds per rotation.
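The quoted Z-axis coverage figures follow directly from the number of detector rows times the per-row width; a trivial check:

```python
def z_coverage_cm(rows, row_width_mm):
    """Z-axis coverage = number of detector rows x per-row width (mm -> cm)."""
    return rows * row_width_mm / 10.0

# 256 rows of 0.625 mm give 16 cm; 64 rows of 0.625 mm give 4 cm.
```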
Revolution Vibe inherits virtually all of the key technologies from the predicate, such as: high tube current (mA) output, 80 cm bore size with Whisper Drive, Deep Learning Image Reconstruction for noise reduction (DLIR K183202/K213999, GSI DLIR K201745), ASIR-V iterative recon, enhanced Extended Field of View (EFOV) reconstruction MaxFOV 2 (K203617), rotation speed as fast as 0.23 second/rot (K213715), and spectral imaging capability enabled by ultrafast kilovoltage (kV) switching (K163213), as well as ECG-less cardiac (K233750). It also includes Auto ROI, an AI-enabled feature integrated within the existing SmartPrep workflow that automatically predicts the baseline and monitors the ROI. As such, the Revolution Vibe carries over virtually all features and functionalities of the predicate device Revolution Apex Elite (K213715).
This CT system can be used for low dose lung cancer screening in high risk populations*.
The provided FDA 510(k) clearance letter and summary for the Revolution Vibe CT system does not include detailed acceptance criteria or a comprehensive study report to fully characterize the device's performance against specific metrics. The information focuses more on the equivalence to a predicate device and general safety/effectiveness.
However, based on the text, we can infer some aspects related to the Auto ROI feature, which is the only part of the device described with specific performance testing details.
Here's an attempt to extract and describe the available information, with clear indications of what is not provided in the document.
Acceptance Criteria and Device Performance for Auto ROI
The document mentions specific performance testing for the "Auto ROI" feature, which utilizes AI. For other aspects of the Revolution Vibe CT system, the submission relies on demonstrating substantial equivalence to the predicate device (Revolution Apex Elite) through engineering design V&V, bench testing, and a clinical reader study focused on overall image utility, rather than specific quantitative performance metrics meeting predefined acceptance criteria for the entire system.
1. Table of Acceptance Criteria and Reported Device Performance (Specific to Auto ROI)
| Feature/Metric | Acceptance Criteria (Implicit) | Reported Device Performance |
| --- | --- | --- |
| Auto ROI Success Rate | "exceeding the pre-established acceptance criteria" | Testing resulted in "success rates exceeding the pre-established acceptance criteria." (Specific numerical value not provided) |
Note: The document does not provide the explicit numerical value for the "pre-established acceptance criteria" or the actual "success rate" achieved for the Auto ROI feature.
2. Sample Size and Data Provenance for the Test Set (Specific to Auto ROI)
- Sample Size: 1341 clinical images
- Data Provenance: "real clinical practice" (Specific country of origin not mentioned). The images were used for "Auto ROI performance" testing, which implies retrospective analysis of existing clinical data.
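The summary never defines what counts as an Auto ROI "success." One common convention, used here purely as an assumption rather than GE's published criterion, is intersection-over-union against a reference ROI above a fixed threshold:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x0, y0, x1, y1)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def success_rate(predicted, reference, threshold=0.5):
    """Fraction of cases whose predicted ROI overlaps the reference ROI
    with IoU >= threshold (the 0.5 threshold is an assumed convention)."""
    hits = sum(iou(p, r) >= threshold for p, r in zip(predicted, reference))
    return hits / len(reference)
```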
3. Number of Experts and Qualifications to Establish Ground Truth (Specific to Auto ROI)
- Number of Experts: Not specified for the Auto ROI ground truth establishment.
- Qualifications of Experts: Not specified for the Auto ROI ground truth establishment.
Note: The document mentions 3 readers for the overall clinical reader study (see point 5), but this is for evaluating the diagnostic utility and image quality of the CT system and not explicitly for establishing ground truth for the Auto ROI feature.
4. Adjudication Method for the Test Set (Specific to Auto ROI)
- Adjudication Method: Not specified for the Auto ROI test set.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Was an MRMC study done? Yes, a "clinical reader study of sample clinical data" was carried out. It is described as a "blinded, retrospective clinical reader study."
- Effect Size of Human Reader Improvement with AI vs. without AI assistance: The document states the purpose of this reader study was to validate that "Revolution Vibe are of diagnostic utility and is safe and effective for its intended use." It does not report an effect size or direct comparison of human readers' performance with and without AI assistance (specifically for the Auto ROI feature in the context of reader performance). The study evaluated the CT system's overall image quality and clinical utility, possibly with Auto ROI integrated into that evaluation, but a comparative effectiveness study of the AI's impact on human performance is not described.
- Details of MRMC Study:
- Number of Cases: 30 CT cardiac exams
- Number of Readers: 3
- Reader Qualifications: US board-certified in Radiology with more than 5 years' experience in CT cardiac imaging.
- Exams Covered: "wide range of cardiac clinical scenarios."
- Reader Task: "Readers were asked to provide evaluation of image quality and the clinical utility."
6. Standalone (Algorithm Only) Performance
- Was a standalone study done? Yes; for the "Auto ROI" feature, performance was tested "using 1341 clinical images from real clinical practice," and the test results showed "success rates exceeding the pre-established acceptance criteria." This implies an algorithm-only evaluation of the Auto ROI's ability to successfully identify and monitor the ROI.
7. Type of Ground Truth Used (Specific to Auto ROI)
- Type of Ground Truth: Not explicitly stated for the Auto ROI. Given the "success rates" metric, it likely involved a comparison against a predefined "true" ROI determined by human experts or a gold standard method. It's plausible that this was established by expert consensus or reference standards.
8. Sample Size for the Training Set
- Sample Size: Not provided in the document.
9. How Ground Truth for the Training Set Was Established
- Ground Truth Establishment: Not provided in the document.
In summary, the provided documentation focuses on demonstrating substantial equivalence of the Revolution Vibe CT system to its predicate, Revolution Apex Elite, rather than providing detailed, quantitative performance metrics against specific acceptance criteria for all features. The "Auto ROI" feature is the only component where specific performance testing (standalone) is briefly mentioned, but key details like numerical acceptance criteria, actual success rates, and ground truth methodology for training datasets are not disclosed. The human reader study was for general validation of diagnostic utility, not a comparative effectiveness study of AI assistance.
(115 days)
JAK
This computed tomography system is intended to generate and process cross-sectional images of patients by computer reconstruction of x-ray transmission data.
The images delivered by the system can be used by a trained staff as an aid in diagnosis, treatment and radiation therapy planning as well as for diagnostic and therapeutic interventions.
This CT system can be used for low dose lung cancer screening in high risk populations*.
*As defined by professional medical societies. Please refer to clinical literature, including the results of the National Lung Screening Trial (N Engl J Med 2011; 365:395-409) and subsequent literature, for further information.
Siemens intends to update the software version syngo CT VB20 (update) for the following NAEOTOM Alpha class CT systems:
Dual Source NAEOTOM CT scanner systems:
- NAEOTOM Alpha (trade name ex-factory CT systems: NAEOTOM Alpha.Peak; trade name installed base CT systems with SW upgrade only: NAEOTOM Alpha)
For simplicity, the product name of NAEOTOM Alpha will be used throughout this submission instead of the trade name NAEOTOM Alpha.Peak.
- NAEOTOM Alpha.Pro
Single Source NAEOTOM CT scanner system:
- NAEOTOM Alpha.Prime
The subject devices NAEOTOM Alpha (trade name ex-factory CT systems: NAEOTOM Alpha.Peak) and NAEOTOM Alpha.Pro with software version SOMARIS/10 syngo CT VB20 (update) are Computed Tomography X-ray systems which feature two continuously rotating tube-detector systems, denominated as the A- and B-systems, respectively (dual source NAEOTOM CT scanner system).
The subject device NAEOTOM Alpha.Prime with software version SOMARIS/10 syngo CT VB20 (update) is a Computed Tomography X-ray system which features one continuously rotating tube-detector system, denominated as the A-system (single source NAEOTOM CT scanner system).
The detectors' function is based on photon-counting technology.
In this submission, the above-mentioned CT scanner systems are jointly referred to as subject devices by "NAEOTOM Alpha class CT scanner systems".
The NAEOTOM Alpha class CT scanner systems with SOMARIS/10 syngo CT VB20 (update) produce CT images in DICOM format, which can be used by trained staff for post-processing applications commercially distributed by Siemens and other vendors. The CT images can be used by a trained staff as an aid in diagnosis, treatment and radiation therapy planning as well as for diagnostic and therapeutic interventions. The radiation therapy planning support includes, but is not limited to, Brachytherapy, Particle Therapy including Proton Therapy, External Beam Radiation Therapy, and Surgery. The computer system delivered with the CT scanner is able to run optional post-processing applications.
Only trained and qualified users, certified in accordance with country-specific regulations (for example, physicians, radiologists, or technologists), are authorized to operate the system. The user must have the necessary U.S. qualifications in order to diagnose or treat the patient with the use of the images delivered by the system.
The platform software for the NAEOTOM Alpha class CT scanner systems is syngo CT VB20 (update) (SOMARIS/10 syngo CT VB20 (update)). It is a command-based program used for patient management, data management, X-ray scan control, image reconstruction, and image archive/evaluation. The software platform provides plugin software interfaces that allow for the use of specific commercially available post-processing software algorithms in an unmodified form from the cleared stand-alone post-processing version.
Software version syngo CT VB20 (update) (SOMARIS/10 syngo CT VB20 (update)) shall support additional software features compared to the software version of the predicate devices NAEOTOM Alpha class CT systems with syngo CT VB20 (SOMARIS/10 syngo CT VB20) cleared in K243523.
Software version SOMARIS/10 syngo CT VB20 (update) will be offered ex-factory and as optional upgrade for the existing NAEOTOM Alpha class systems.
The bundle approach is feasible for this submission since the subject devices have similar technological characteristics, software operating platform, and supported software characteristics. All subject devices will support previously cleared software and hardware features in addition to the applicable modifications as described within this submission. The intended use remains unchanged compared to the predicate devices.
The provided document describes the acceptance criteria and a study that proves the device meets those criteria for the NAEOTOM Alpha CT Scanner Systems. However, the document primarily focuses on demonstrating substantial equivalence to a predicate device and safety and effectiveness based on non-clinical testing and adherence to standards, rather than detailing a specific clinical performance study with defined acceptance criteria for a diagnostic aid.
Here's a breakdown of the requested information based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not provide a specific table of acceptance criteria with corresponding performance metrics in the way one would typically find for a diagnostic AI device (e.g., sensitivity, specificity, AUC). Instead, it states that:
- Acceptance Criteria for Software: "The test specification and acceptance criteria are related to the corresponding requirements." and "The test results show that all of the software specifications have met the acceptance criteria."
- Acceptance Criteria for Features: "Test results show that the subject devices...is comparable to the predicate devices in terms of technological characteristics and safety and effectiveness and therefore are substantially equivalent to the predicate devices."
- Performance Claim: "The conclusions drawn from the non-clinical and clinical tests demonstrate that the subject devices are as safe, as effective, and perform as well as or better than the predicate devices."
The closest the document comes to defining and reporting on "performance criteria" for a specific feature, beyond basic safety and technical functionality, are for the HD FoV 5.0 and ZeeFree RT algorithms.
| Acceptance Criteria (Implied) | Reported Device Performance |
| --- | --- |
| HD FoV 5.0 algorithm: As safe and effective as HD FoV 4.0. | Bench test results comparing it to HD FoV 4.0 based on physical and anthropomorphic phantoms. Performance was also evaluated by board-approved radio-oncologists and medical physicists via a retrospective blinded rater study. No specific metrics (e.g., image quality scores, diagnostic accuracy) are provided in this summary. |
| ZeeFree RT reconstruction: No relevant errors in CT values and noise in a homogeneous water phantom. | Bench test results show it "does not affect CT values and noise levels in a homogenous water phantom outside of stack-transition areas compared to the non-corrected standard reconstruction." |
| No relevant errors in CT values in phantoms with tissue-equivalent inserts (even with metals and iMAR). | Bench test results show it "introduces no relevant errors in terms of CT values measured in a phantom with tissue-equivalent inserts, even in the presence of metals and in combination with the iMAR algorithm." |
| No relevant geometrical distortions in a static torso phantom. | Bench test results show it "introduces no relevant geometrical distortions in a static torso phantom." |
| No relevant deteriorations of position or shape in a dynamic thorax phantom (spherical shape with various breathing motions). | Bench test results show it "introduces no relevant deteriorations of the position or shape of a dynamic thorax phantom when moving a spherical shape according to regular, irregular, and patient breathing motion." Also states it "can be successfully applied to phantom data if derived from a suitable motion phantom demonstrating its correct technical function on the tested device." |
| Successfully applied to 4D respiratory-gated images (Direct i4D). | Bench test results show it "can successfully be applied to 4D respiratory-gated sequence images (Direct i4D)." |
| Enables optional reconstruction of stack artifact-corrected images which reduce misalignment artifacts where present in standard images. | Bench test results show it "enables the optional reconstruction of stack artefact corrected images, which reduce the strength of misalignment artefacts, if such stack alignment artefacts are identified in non-corrected standard images." |
| Does not introduce relevant new artifacts not present in non-corrected standard reconstruction. | Bench test results show it "does not introduce relevant new artefacts, which were previously not present in the non-corrected standard reconstruction." Also states it "does not introduce new artifacts, which were previously not present in the non-corrected standard reconstruction, even in presence of metals." |
| Independent from physical detector width of acquired data. | Bench test results show it "is independent from the physical detector width of the acquired data." |
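A criterion like "no relevant errors in CT values and noise" in a water phantom can be expressed as a tolerance check on ROI mean HU and noise between the corrected and standard reconstructions. The tolerances below are illustrative assumptions only; the submission's actual acceptance limits are not disclosed.

```python
import numpy as np

def phantom_check(corrected_roi, standard_roi,
                  max_mean_diff_hu=2.0, max_noise_ratio=1.05):
    """Compare a corrected reconstruction against the standard one inside
    a water-phantom ROI. Tolerances are made-up placeholders, not the
    submission's (undisclosed) acceptance limits."""
    mean_diff = abs(np.mean(corrected_roi) - np.mean(standard_roi))
    noise_ratio = np.std(corrected_roi) / np.std(standard_roi)
    return bool(mean_diff <= max_mean_diff_hu and
                noise_ratio <= max_noise_ratio)
```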
2. Sample Size Used for the Test Set and Data Provenance
The document mentions "physical and anthropomorphic phantoms" for HD FoV 5.0, and a "homogeneous water phantom," a "phantom with tissue-equivalent inserts," and a "dynamic thorax phantom" for ZeeFree RT. It also refers to "retrospective blinded rater studies of respiratory 4D CT examinations performed at two institutions" for ZeeFree RT, but it does not specify the sample size (number of cases/patients) or the country of origin for these examination datasets. Both rater studies are described as retrospective; whether any prospective data were collected is not stated.
3. Number of Experts and Qualifications for Ground Truth
For the HD FoV 5.0 and ZeeFree RT rater studies, the experts were "board-approved radio-oncologists and medical physicists." The number of experts is not specified, nor is their specific years of experience.
4. Adjudication Method for the Test Set
The document explicitly states "retrospective blinded rater study" for HD FoV 5.0 and ZeeFree RT. However, it does not specify the adjudication method (e.g., 2+1, 3+1, none) if there were multiple raters and disagreements.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
The document states that for HD FoV 5.0 and ZeeFree RT, "the performance of the algorithm was evaluated by board-approved radio-oncologists and medical physicists by means of retrospective blinded rater study." This indicates a reader study, which is often a component of an MRMC study.
However, the study described does not appear to be comparing human readers with AI assistance vs. without AI assistance. Instead, for HD FoV 5.0, it's comparing the new algorithm's results to its predecessor, HD FoV 4.0. For ZeeFree RT, it's comparing the reconstruction to "Standard reconstruction" and assessing if it introduces errors or new artifacts. It's an evaluation of the algorithm's output, not necessarily a direct measure of human reader improvement with AI assistance. Therefore, no effect size for human reader improvement with AI vs. without AI assistance is reported because this specific type of comparative effectiveness study was not described.
6. Standalone (Algorithm Only) Performance Study
Yes, standalone (algorithm only) performance was conducted. The bench testing described for both HD FoV 5.0 and ZeeFree RT involves detailed evaluations of the algorithms' outputs using phantoms and comparing them to established standards or previous versions. For example, for ZeeFree RT, the bench test objectives include demonstrating that it "introduces no relevant errors in terms of CT values and noise levels measured in a homogeneous water phantom" and "does not introduce relevant new artefacts." This is an assessment of the algorithm's direct output.
7. Type of Ground Truth Used
The ground truth used primarily appears to be:
- Phantom-based measurements: For HD FoV 5.0 (physical and anthropomorphic phantoms) and ZeeFree RT (homogeneous water phantom, tissue-equivalent inserts, static torso phantom, dynamic thorax phantom). These phantoms have known properties which serve as ground truth for evaluating image quality metrics.
- Expert Consensus/Interpretation: For HD FoV 5.0 and ZeeFree RT, it involved "board-approved radio-oncologists and medical physicists" in "retrospective blinded rater studies." This suggests the experts' interpretations (potentially comparing image features or diagnostic quality) formed a part of the ground truth or served as the primary evaluation method. The text doesn't specify if there was a pre-established "true" diagnosis or condition for these clinical cases, or if the experts were rating image quality or agreement with a reference standard.
8. Sample Size for the Training Set
The document does not specify the sample size for the training set for any of the algorithms or software features. This document is a 510(k) summary, which generally focuses on justification for substantial equivalence rather than detailed algorithm development specifics.
9. How the Ground Truth for the Training Set Was Established
The document does not describe how the ground truth for the training set was established, as it does not provide information about the training set itself.
(116 days)
JAK
The Marie Imaging System is indicated for the acquisition of CT images and the precise positioning of human patients to facilitate delivery of external beam radiation when integrated with a separate therapy treatment delivery device.
The Marie System is intended to acquire CT images and enable the precise positioning of human patients to facilitate delivery of external beam radiation when integrated with a separate therapy treatment delivery device.
The Marie System is intended to be used by healthcare professionals to image patients in an upright position rather than conventional supine treatments, to enable precise treatment planning and patient positioning for radiotherapy.
Specifically, it is intended to:
- Image the patient to provide image-guided radiation therapy
- Image the patient to acquire images for the purpose of treatment planning
- Immobilize patients in an upright position for upright radiotherapy.
The Marie System comprises two major sub-systems: a computed tomography (CT) imaging system that performs pretreatment imaging and treatment simulation in an upright position, and a beam-agnostic patient positioning system that supports the patient in that position.
The Marie Imaging System is used with compatible devices for treatment delivery and patient immobilization. The positioning system provides six degrees of freedom of motion to achieve the desired posture for each cancer site and accurate, reproducible patient setups, while the imaging system acquires helical scans by translating the patient vertically and rotating them.
The provided text solely describes the Leo Cancer Care Marie System as a Computed Tomography X-ray System with its features, safety, and performance details. It outlines the regulatory clearance (FDA 510(k)) based on substantial equivalence to a predicate device (P-ARTIS K160611). It describes the device's characteristics and the non-clinical tests performed to demonstrate its performance and functionality against design and risk management requirements.
Crucially, the provided text does not contain any information about acceptance criteria for AI performance, clinical study design, sample sizes for test or training sets, ground truth establishment using experts, or any MRMC comparative effectiveness studies. The document is a 510(k) clearance summary for a medical device (a CT imaging and patient positioning system), not for an AI diagnostic or assistive software. The "performance testing" section refers to hardware and software system performance, not AI model performance.
Therefore, I cannot answer most of your questions based on the provided input. The questions you've asked are typically relevant for AI/ML-based medical devices or diagnostics that involve image interpretation and require rigorous validation against human expert performance. The Marie System, as described, is a physical imaging system for patient positioning and CT image acquisition, not an AI software.
If this were an AI device, the answers would need to be extracted from sections detailing clinical performance studies, AI model validation, or similar. Since those sections are absent, I am unable to provide the requested information.
(107 days)
JAK
This computed tomography system is intended to generate and process cross-sectional images of patients by computer reconstruction of X-ray transmission data.
The images delivered by the system can be used by a trained staff as an aid in diagnosis, treatment, and radiation therapy planning as well as for diagnostic and therapeutic interventions.
This CT system can be used for low dose lung cancer screening in high risk populations*.
*As defined by professional medical societies. Please refer to clinical literature, including the results of the National Lung Screening Trial (N Engl J Med 2011; 365:395-409) and subsequent literature, for further information.
Siemens intends to market a new software version, SOMARIS/10 syngo CT VB20 for the following SOMATOM Computed Tomography (CT) Scanner Systems:
a) Single Source CT Scanner systems (SOMATOM go. Platform):
- SOMATOM go.Now
- SOMATOM go.Up
- SOMATOM go.All
- SOMATOM go.Top
- SOMATOM go.Sim
- SOMATOM go.Open Pro
In this submission, the above listed CT scanner systems are jointly referred to as subject devices by "SOMATOM go. Platform" CT scanner systems.
b) Dual Source CT Scanner system:
- SOMATOM Pro.Pulse
The above listed subject devices with SOMARIS/10 syngo CT VB20 are Computed Tomography X-ray Systems which feature one (Single Source) or two (Dual Source) continuously rotating tube-detector system and function according to the fan beam principle. The SOMATOM go. Platform and the SOMATOM Pro.Pulse with software SOMARIS/10 syngo CT VB20 produce CT images in DICOM format, which can be used by trained staff for software applications, e.g. post-processing applications, commercially distributed by Siemens Healthcare and other vendors as an aid in diagnosis, treatment preparation and therapy planning support (including, but not limited to, Brachytherapy, Particle including Proton Therapy, External Beam Radiation Therapy, Surgery). The computer system delivered with the CT scanner is able to run optional post processing applications.
The provided FDA 510(k) Clearance Letter for the SOMATOM CT Systems focuses heavily on establishing substantial equivalence to predicate devices through comparisons of technological characteristics, hardware, and software. It generally asserts that the device has met performance criteria through verification and validation testing, but it does not provide a detailed "Acceptance Criteria Table" with specific quantitative metrics and reported device performance. Similarly, it describes the types of studies performed (e.g., bench testing, retrospective blinded rater study), but it lacks the specific details requested regarding sample sizes, data provenance, expert qualifications, and effect sizes that would typically be found in a detailed study report.
Therefore, I will extract and synthesize the information that is available in the document and explicitly state where the requested information is not provided.
Understanding the Device and its Changes
The devices under review are Siemens SOMATOM CT Systems (SOMATOM go.Now, SOMATOM go.Up, SOMATOM go.All, SOMATOM go.Top, SOMATOM go.Sim, SOMATOM go.Open Pro, and SOMATOM Pro.Pulse) with a new software version, SOMARIS/10 syngo CT VB20. This new software version builds upon the previous VB10 version cleared in K233650 and K232206.
The submission focuses on modifications and new features introduced with VB20, including:
- Eco Power Mode: New feature for reduced energy consumption during idle times (not supported on go.Now and go.Up).
- Oncology Exchange: New feature for transferring prescription information from ARIA Oncology Information System.
- myExam Contrast: New feature for exchanging contrast injection parameters.
- FAST 3D Camera/FAST Integrated Workflow: Modifications including retrained algorithms, collision indication, and Centerline/Grid Overlay.
- FAST Planning: Extended to detect additional body regions.
- myExam Companion (myExam Compass/myExam Cockpit): Clinical decision trees now available for child protocols.
- HD FoV 5.0: New extended field of view reconstruction algorithm (for go.Sim and go.Open Pro only).
- CT guided intervention – myAblation Guide interface: New interface.
- Flex 4D Spiral: Modifications regarding dynamic tube current modulation.
- ZeeFree RT: New stack artifact reduced reconstruction for respiratory-related examinations (for go.Open Pro only).
- DirectDensity: Modified to include stopping-power ratio (Kernel St).
- DirectLaser: Patient Marking workflow improvement.
- Respiratory Motion management - Open Online Interface: New interface for respiratory gating.
- DirectSetup Notes: Enabled for certain SOMATOM go. Platform systems.
The core argument for clearance is substantial equivalence to predicate devices. This means that, despite modifications, the device is as safe and effective as a legally marketed device (the predicates).
1. Table of Acceptance Criteria and Reported Device Performance
The provided document does not contain a specific table of quantitative acceptance criteria with corresponding reported device performance values. Instead, it describes general acceptance criteria related to verification and validation tests and then provides qualitative statements about the test results demonstrating comparability or improvement over predicate devices.
Here's a summary of the described performance evaluations:
Feature/Metric | Acceptance Criteria (Qualitative) | Reported Device Performance (Qualitative) |
---|---|---|
Overall | Meet acceptance criteria for all software specifications. Enable safe and effective integration. Perform as intended in specified use conditions. | "All software specifications have met the acceptance criteria." "Verification and validation support the claims of substantial equivalence." "Perform(s) as intended in the specified use conditions." "As safe, as effective, and perform as well as or better than the predicate devices." |
FAST 3D Camera Accuracy (Isocentering, Range, Direction) | Comparable or better accuracy to predicate device for adults; extend support to adolescents. | "Overall, the subject devices with syngo CT VB20 delivers comparable or improved accuracy to the predicate devices with syngo CT VB10 predicate device for adults and extends the support to adolescents." |
FAST Planning Correctness | High fraction (percentage) of ranges calculated correctly and without needing change. Meets interactive requirements (fast calculation time). | "For more than 90% of the ranges no editing action was necessary to cover standard ranges." "For more than 95%, the speed of the algorithm was sufficient." |
HD FoV 5.0 Performance (vs. HD FoV 4.0) | As safe and effective as HD FoV 4.0. | "Results obtained with the new HD FoV 5.0 algorithm are compared with its predecessor, the HD FoV 4.0 algorithm, based on physical and anthropomorphic phantoms...This comparison is conducted to demonstrate that the HD FoV 5.0 algorithm is as safe and effective as the HD FoV 4.0 algorithm." (No quantitative metrics provided in this document excerpt regarding this comparison's outcome). |
Flex 4D Spiral Functionality & Image Quality | Proper function and acceptable image quality. | "The performed bench test report describes the technical background of Flex 4D Spiral and its functionalities with SOMATOM CT scanners, demonstrate the proper function of those, and assess the image quality of Flex 4D Spiral." (No quantitative metrics provided) |
ZeeFree RT Reconstruction Performance | No relevant errors in CT values and noise in homogeneous phantoms. No relevant errors in CT values in tissue-equivalent phantoms. No relevant geometrical distortions in static phantoms. No relevant deteriorations of position/shape in dynamic phantoms. No relevant new artifacts. Maintain performance with iMAR. Independent of detector width. | "introduces no relevant errors in terms of CT values and noise levels measured in a homogeneous water phantom" "introduces no relevant errors in terms of CT values measured in a phantom with tissue-equivalent inserts, even in the presence of metals and in combination with the iMAR algorithm" "introduces no relevant geometrical distortions in a static torso phantom" "introduces no relevant deteriorations of the position or shape of a dynamic thorax phantom" "does not introduce relevant new artefacts" "can be successfully applied in combination with metal artifact correction (iMAR)" "is independent from the physical detector width" |
DirectDensity Performance (iBHC variants) | Reduced dependence on tube voltage and filtration for non-water-like tissues. Image values aligned with material properties. | "reduced dependence on tube voltage and filtration compared to the corresponding quantitative kernel (Qr) with iBHC Bone for non-water-like tissues, such as adipose and bone." "generate image value closely aligned with the respective material properties." "has been validated." |
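For context on the DirectDensity row above: conventional radiotherapy planning converts CT numbers to relative electron density (or stopping-power ratio) through a scanner- and tube-voltage-specific piecewise-linear calibration curve, and DirectDensity's claim is that its reconstructed image values align with material properties more directly. The sketch below shows only the generic conventional lookup step; the `CALIBRATION` pairs are hypothetical illustrations, not a real scanner calibration or Siemens' method.

```python
from bisect import bisect_right

# Hypothetical (HU, relative electron density) calibration pairs --
# real curves are measured per scanner and per tube voltage.
CALIBRATION = [(-1000, 0.00), (-100, 0.93), (0, 1.00), (100, 1.07), (1500, 1.85)]

def hu_to_red(hu, table=CALIBRATION):
    """Piecewise-linear interpolation of relative electron density from a CT number.

    Values outside the calibrated range are clamped to the end points.
    """
    hus = [h for h, _ in table]
    reds = [r for _, r in table]
    if hu <= hus[0]:
        return reds[0]
    if hu >= hus[-1]:
        return reds[-1]
    i = bisect_right(hus, hu)  # index of first calibration point above hu
    h0, h1, r0, r1 = hus[i - 1], hus[i], reds[i - 1], reds[i]
    return r0 + (r1 - r0) * (hu - h0) / (h1 - h0)
```

Water (0 HU) maps to 1.00 by construction of the table; a feature like DirectDensity aims to make such voltage-dependent curves unnecessary by reconstructing density-aligned images in the first place.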
2. Sample Sizes Used for the Test Set and Data Provenance
The document provides very limited, qualitative information:
- FAST 3D Camera: Optimized using "additional data from adults and adolescence patients." No specific number of patients or images mentioned.
- FAST Planning: Evaluated on "patient data." No specific number of patients or images mentioned.
- HD FoV 5.0: Evaluated with "physical and anthropomorphic phantoms."
- Flex 4D Spiral: No specific sample size or data type mentioned for performance assessment.
- ZeeFree RT: Evaluated with "homogeneous water phantom," "phantom with tissue-equivalent inserts," "static torso phantom," and "dynamic thorax phantom." Also, "retrospective blinded rater studies of respiratory 4D CT examinations performed at two institutions." No specific number of phantoms, images per phantom, or patient cases mentioned.
- DirectDensity: Evaluated on "SOMATOM CT scanner models." No specific sample size or data type mentioned.
Data Provenance:
- Country of Origin: Not specified for the patient data used for algorithm optimization/validation.
- Retrospective or Prospective:
- FAST 3D Camera: Implied retrospective as it uses "additional data."
- FAST Planning: Implied retrospective as it uses "patient data."
- HD FoV 5.0: Retrospective for the blinded rater study.
- ZeeFree RT: Retrospective for the blinded rater study of clinical cases. The phantom tests are by nature not retrospective/prospective.
3. Number of Experts and Qualifications for Ground Truth
- HD FoV 5.0: "board-approved radio-oncologists and medical physicists." The number of experts is not specified.
- ZeeFree RT: "board-approved radio-oncologists and medical physicists." The number of experts is not specified.
For other tests, ground truth appears to be established by phantom measurements or internal engineering verification, rather than human expert reads validating clinical ground truth.
4. Adjudication Method for the Test Set
The document mentions "retrospective blinded rater study" for HD FoV 5.0 and ZeeFree RT. However, it does not specify the adjudication method used (e.g., 2+1, 3+1, none) for these studies. It only states they were "blinded."
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Retrospective blinded rater studies were described for both HD FoV 5.0 and ZeeFree RT, but the document does not characterize them as formal MRMC comparative effectiveness studies.
- Effect Size: The document does not report specific effect sizes (e.g., how much human readers improve with AI vs. without AI assistance). It only states that the purpose of the comparison was to "demonstrate that the HD FoV 5.0 algorithm is as safe and effective as the HD FoV 4.0 algorithm" and for ZeeFree RT that it "enables the optional reconstruction of stack artefact corrected images, which reduce the strength of misalignment artefacts." This implies an assessment of non-inferiority or improvement in image quality, but specific quantitative results for reader performance are not provided in this excerpt.
6. Standalone (Algorithm Only) Performance
The document describes tests for several algorithms (FAST 3D Camera, FAST Planning, HD FoV 5.0, Flex 4D Spiral, ZeeFree RT, DirectDensity) using phantoms and "patient data." These evaluations seem to be focused on the algorithm's performance in generating images or calculations, independent of human interpretation in some cases (e.g., accuracy of FAST 3D Camera, correctness percentage of FAST Planning).
However, it does not explicitly use the term "standalone performance" to differentiate these from human-in-the-loop assessments. The mention of "retrospective blinded rater studies" for HD FoV 5.0 and ZeeFree RT indicates a human-in-the-loop component for that specific evaluation, but the phantom testing mentioned alongside them would be considered standalone.
7. Type of Ground Truth Used
- Phantom Data: For HD FoV 5.0, Flex 4D Spiral, ZeeFree RT, and DirectDensity, physical and/or anthropomorphic phantoms were used, implying the ground truth is precisely known physical characteristics or pre-defined phantom configurations.
- Expert Consensus/Reads: For HD FoV 5.0 and ZeeFree RT, board-approved radio-oncologists and medical physicists performed retrospective blinded rater studies, implying their interpretations/ratings served as a form of ground truth or evaluation metric. It's not explicitly stated if this was against a clinical gold standard (e.g., pathology) or if it was a comparative assessment of image quality and clinical utility.
- Internal Verification: For FAST 3D Camera, FAST Planning, accuracy was assessed, likely against internal system metrics or pre-defined ideal outcomes.
8. Sample Size for the Training Set
The document does not provide any specific information about the sample size used for training the algorithms (e.g., for FAST 3D Camera, FAST Planning, HD FoV 5.0, ZeeFree RT). It only states that FAST 3D Camera was "optimized using additional data" and FAST Planning's algorithm had "product development, validation, and verification on patient data."
9. How the Ground Truth for the Training Set Was Established
The document does not provide any specific information on how the ground truth for the training set was established. It only mentions the data types used for validation/verification (phantoms, patient data from two institutions, expert raters).
(115 days)
JAK
The Philips iCT CT systems is a Computed Tomography X-Ray System intended to produce images of the head and body by computer reconstruction of x-ray transmission data taken at different angles and planes. These devices may include signal analysis and display equipment, patient and equipment supports, components and accessories. The iCT is indicated for head, whole body, cardiac and vascular X-ray Computed Tomography applications in patients of all ages.
These scanners are intended to be used for diagnostic imaging and for low dose CT lung cancer screening for the early detection of lung nodules that may represent cancer*. The screening must be performed within the established inclusion criteria of programs / protocols that have been approved and published by either a governmental body or professional medical society.
*Please refer to clinical literature, including the results of the National Lung Screening Trial (N Engl J Med 2011; 365:395-409) and subsequent literature, for further information.
The Philips iCT CT System is a whole-body computed tomography (CT) X-ray system designed for diagnostic imaging. It features a continuously rotating X-ray tube and multi-slice detector gantry, enabling the acquisition of X-ray transmission data from multiple angles and planes. The system reconstructs these data into cross-sectional images using advanced image reconstruction algorithms, supporting a wide range of clinical applications.
The system consists of a gantry, which houses the rotating X-ray tube, detector array, and key imaging subsystems; a patient support couch, which moves the patient through the gantry bore in synchronization with the scan and is available in multiple configurations; an operator console, which serves as the primary user interface for system controls, image processing, and data management; and a Data Measurement System (DMS), which captures X-ray attenuation data to support high-quality image reconstruction.
The provided FDA 510(k) clearance letter for the Philips iCT CT System (K250648) focuses on demonstrating substantial equivalence to a predicate device (K162838) based on hardware and software enhancements.
However, there is no information within this document that describes specific acceptance criteria in terms of algorithm performance metrics (e.g., sensitivity, specificity, AUC) for an AI/ML-driven diagnostic task, nor does it detail a study proving the device meets such criteria in a clinical context.
The document primarily addresses:
- Physical and technical characteristics of the CT system (e.g., spatial resolution, low contrast resolution, noise, scan speeds).
- Safety and performance of system modifications (e.g., OS upgrade, cybersecurity enhancements, new phantom kit) through non-clinical verification and validation activities.
- Substantial equivalence to a predicate device based on these engineering and system-level tests.
The mention of "low dose CT lung cancer screening for the early detection of lung nodules that may represent cancer" refers to a general indication for the CT system itself, not a specific AI/ML diagnostic algorithm for nodule detection or characterization within the system. The note to "refer to clinical literature, including the results of the National Lung Screening Trial" further supports that the clinical efficacy of CT for lung screening is established and not being re-proven by this submission for a new AI feature.
Therefore, I cannot populate the requested table or answer the specific questions about AI/ML study design directly from the provided text, as this information is not present. The document focuses on the CT scanner as the device, not a specific AI-powered diagnostic algorithm within it that would require the detailed studies outlined in your request.
If the "Philips iCT CT System" were to include an AI component with an explicit diagnostic function beyond general image acquisition and display, the FDA submission would typically contain a dedicated section on its performance evaluation, including the types of studies you are asking about. This document does not describe such an AI component or its associated clinical performance study.
(232 days)
JAK
This device is indicated to acquire and display cross-sectional volumes of the whole body (abdomen, pelvis, chest, extremities, and head) of adult patients.
TSX-501R has the capability to provide volume sets. These volume sets can be used to perform specialized studies, using indicated software/hardware, by a trained and qualified physician.
CT Scanner TSX-501R/1 V11.1 employs a next-generation X-ray detector unit (photon counting detector unit), which allows images to be obtained based on X-rays with different energy levels. This device captures cross sectional volume data sets used to perform specialized studies, using indicated software/hardware, by a trained and qualified physician. This system is based upon the technology and materials of previously marketed Canon CT systems.
Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided 510(k) clearance letter.
It's important to note that a 510(k) summary typically doesn't provide the full, granular detail of a clinical study report. The information often indicates what was tested and the conclusion, but less about the specific methodologies, statistical thresholds for acceptance, or detailed performance metrics.
Understanding the Context: 510(k) Clearance
This document is a 510(k) clearance letter for a new CT scanner (CT Scanner TSX-501R/1 V11.1). The primary goal of a 510(k) submission is to demonstrate "substantial equivalence" to a legally marketed predicate device, not necessarily to prove absolute safety and effectiveness through extensive new clinical trials (which is more typical of a PMA, or Premarket Approval). Therefore, the "acceptance criteria" and "study" described here are geared towards demonstrating this equivalence.
The core technology difference is the shift from an Energy Integrating Detector (EID) in the predicate to a Photon Counting Detector in the new device. The testing focuses on ensuring this new detector performs equivalently or better in terms of image quality and safety.
Acceptance Criteria and Reported Device Performance
Given the nature of a 510(k) for a CT scanner's hardware update (new detector), the "acceptance criteria" are implicitly tied to demonstrating equivalent or improved image quality and safety compared to the predicate device. The performance is assessed through bench testing with phantoms and review of clinical images.
Table of Acceptance Criteria and Reported Device Performance:
Category | Acceptance Criteria (Implicit) | Reported Device Performance (as stated in the summary) |
---|---|---|
Objective Image Quality Performance (using phantoms) | Equivalent or improved performance compared to the predicate device regarding: contrast-to-noise ratio (CNR); CT number accuracy; uniformity; pulse pile-up; slice sensitivity profile (SSPz); modulation transfer function (MTF); standard deviation of noise and pulse pile-up; noise power spectra (NPS); low contrast detectability (LCD) | "It was concluded that the subject device demonstrated equivalent or improved performance, compared to the predicate device, as demonstrated by the results of the above testing." |
Fundamental Properties of the Photon Counting Detector (using phantoms) | Effectiveness and performance equivalent to expectations or to the predicate device for: detector resolution and noise properties (MTF and DQE); artifact analysis; count rate vs. current curve; pulse pileup or maximum count rate; lag/residual signal levels; stability over time; bad pixel map | "These bench studies utilized phantom data and achieved results demonstrative of equivalent performance in comparison with the predicate device." |
Clinical Image Quality (Human Review) | Reconstructed images using the subject device are of diagnostic quality. | "It was confirmed that the reconstructed images using the subject device were of diagnostic quality." |
Safety & Standards Conformance | Conformance to relevant electrical, radiation, software, and cybersecurity standards and regulations. | "This device is in conformance with the applicable parts of the following standards [list provided]... Additionally, this device complies with all applicable requirements of the radiation safety performance standards..." |
Risk Analysis & Verification/Validation | Established specifications for the device have been met, and risks are adequately managed. | "Risk analysis and verification/validation activities conducted through bench testing demonstrate that the established specifications for the device have been met." |
Software Documentation & Cybersecurity | Adherence to FDA guidance documents for software functions and cybersecurity. | "Software Documentation for a Basic Documentation Level... is included... Cybersecurity documentation... was included..." |
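Several of the phantom metrics in the table above reduce to simple region-of-interest (ROI) statistics. The sketch below shows one common definition of contrast-to-noise ratio and of CT number accuracy; conventions vary (some divide by pooled noise, for example), so treat this as illustrative rather than the vendor's formula.

```python
import statistics

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio: |mean(signal) - mean(background)| / sd(background).

    Both arguments are flat lists of HU values sampled from phantom ROIs.
    """
    contrast = abs(statistics.mean(signal_roi) - statistics.mean(background_roi))
    return contrast / statistics.pstdev(background_roi)

def ct_number_error(roi, nominal_hu):
    """CT number accuracy: deviation of the measured ROI mean from the
    phantom insert's nominal HU value."""
    return statistics.mean(roi) - nominal_hu
```

Acceptance in a submission like this one is typically framed as such metrics being equivalent to or better than the predicate's values under matched scan conditions.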
Study Details:
2. Sample Size Used for the Test Set and Data Provenance
- Test Set (Clinical Images): The specific number of clinical images/cases reviewed is not provided. The text states "Representative chest, abdomen, brain and MSK diagnostic images." This implies a selection of images from various body regions.
- Data Provenance: The document does not specify the country of origin for the clinical images. It also does not explicitly state whether the data was retrospective or prospective, though for a 510(k) supporting equivalence, retrospective data collection for image review is common.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- Number of Experts: The document states "reviewed by American Board-Certified Radiologists." The specific number is not provided.
- Qualifications: "American Board-Certified Radiologists." This indicates a high level of qualification and experience in medical imaging interpretation.
4. Adjudication Method for the Test Set
- The document does not specify an adjudication method (like 2+1 or 3+1) for the clinical image review. It simply states they were "reviewed by American Board-Certified Radiologists" and "it was confirmed that the reconstructed images using the subject device were of diagnostic quality." This implies a consensus or individual assessment of diagnostic quality, but the process is not detailed.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Was it done? No, a formal MRMC comparative effectiveness study demonstrating how human readers improve with AI vs. without AI assistance was not conducted or described for this submission. This makes sense as the device is a CT scanner itself, not an AI-assisted diagnostic software. The clinical image review was to confirm diagnostic quality of the images produced by the new scanner, not to assess reader performance with or without an AI helper.
6. Standalone (Algorithm Only) Performance
- Was it done? Yes, in a sense. The "bench testing" focusing on Objective Image Quality Evaluations and Fundamental Properties of the Photon Counting Detector can be considered "standalone" performance for the device's imaging capabilities. These tests used phantoms and measured technical specifications without human interpretation as the primary endpoint. The device's stated function is to acquire and display images, so its "standalone" performance is its ability to produce good images.
7. Type of Ground Truth Used
- Bench Testing (Phantoms): The ground truth is the physical properties of the phantoms and the expected performance characteristics based on established physics and engineering principles (e.g., a known object size for MTF, known density for CT number accuracy).
- Clinical Images: The ground truth for confirming "diagnostic quality" is expert consensus/opinion from American Board-Certified Radiologists. It's an assessment of whether the image contains sufficient information and clarity for diagnostic purposes, not necessarily a comparison to a biopsy or long-term outcome.
8. Sample Size for the Training Set
- The document does not mention a training set in the context of typical AI/machine learning development. This device is a CT scanner hardware system, not an AI diagnostic algorithm that learns from training data. Therefore, the concept of a "training set" as it relates to AI models is not applicable here.
9. How Ground Truth for the Training Set Was Established
- As stated above, the concept of a "training set" as applied to AI/machine learning development does not directly apply to this CT scanner hardware submission. The device's performance is based on its physical design and engineering, not on learning from a large dataset.
(203 days)
JAK
CardIQ Suite is a non-invasive software application designed to provide an optimized application to analyze cardiovascular anatomy and pathology based on 2D or 3D CT cardiac non-contrast and angiography DICOM data from acquisitions of the heart. It provides capabilities for the visualization and measurement of vessels and visualization of chamber mobility. CardIQ Suite also aids in diagnosis and determination of treatment paths for cardiovascular diseases, including coronary artery disease, functional parameters of the heart, heart structures, and follow-up for stent placement, bypasses, and plaque imaging. CardIQ Suite provides calcium scoring, a non-invasive software application that can be used with non-contrasted cardiac images to evaluate calcified plaques in the coronary arteries, heart valves, and great vessels such as the aorta. The clinician can use the information provided by calcium scoring to monitor the progression/regression of calcium in coronary arteries over time, and this information may aid the clinician in their determination of the prognosis of cardiac disease. CardIQ Suite also provides an estimate of the volume of heart fat for informational use.
CardIQ Suite is a non-invasive software application designed to work with DICOM CT data acquisitions of the heart. It is a collection of tools that provide capabilities for generating measurements both automatically and manually, displaying images and associated measurements in an easy-to-read format and tools for exporting images and measurements in a variety of formats.
CardIQ Suite provides an integrated workflow to seamlessly review calcium scoring and coronary CT angiography (CCTA) data. Calcium Scoring has a fully automatic capability that detects calcifications within the coronary arteries, labels the coronary arteries according to regional territories, and generates a total and per-territory calcium score based on the AJ 130 and Volume scoring methods. Interactive tools allow editing of both the auto-scored coronary lesions and other calcified lesions, such as the aortic valve, mitral valve, and other general cardiac structures. Calcium scoring results can be compared with two percentile guide databases to better understand a patient's percentage of risk based on age, gender, and ethnicity. Additionally, for these non-contrasted exams, the heart fat estimation automatically estimates values within the heart that constitute adipose tissue, typically between –200 and –30 Hounsfield Units.
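The heart fat estimate described above amounts to thresholding attenuation inside the heart segmentation and converting the voxel count to a volume. A minimal sketch, assuming a binary heart mask and known voxel spacing; the function name and toy data are illustrative, not part of CardIQ Suite:

```python
# Hedged sketch of an HU-threshold fat estimate: count voxels inside a
# heart mask whose attenuation falls in the adipose range (about -200 to
# -30 HU) and convert to millilitres. Mask and spacing are synthetic.
import numpy as np

FAT_HU_RANGE = (-200.0, -30.0)  # adipose tissue range cited in the text

def heart_fat_volume_ml(hu_volume, heart_mask, voxel_spacing_mm):
    """Estimate adipose volume (mL) inside a segmented heart."""
    lo, hi = FAT_HU_RANGE
    fat = heart_mask & (hu_volume >= lo) & (hu_volume <= hi)
    voxel_ml = float(np.prod(voxel_spacing_mm)) / 1000.0  # mm^3 -> mL
    return int(fat.sum()) * voxel_ml

# Toy example: a 10x10x10 mask with a known number of "fat" voxels.
hu = np.full((10, 10, 10), 40.0)   # soft tissue
hu[:5] = -100.0                    # 500 voxels in the fat range
mask = np.ones_like(hu, dtype=bool)
volume_ml = heart_fat_volume_ml(hu, mask, voxel_spacing_mm=(1.0, 1.0, 1.0))
# 500 voxels x 1 mm^3 = 0.5 mL
```

Note how this also explains the summary's caveat: any error in the heart mask propagates directly into the fat volume, since only voxels inside the mask are counted.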
Calcium Scoring results can be exported as DICOM SR, batch axial SCPT, or a PDF report to assist with integration into structured reporting templates. Images can be saved and exported for sharing with referring physicians, incorporating into reports and archiving as part of the CT examination.
The Multi-Planar Reformat (MPR) Cardiac Review and Coronary Review steps provide an interactive toolset for review of cardiac exams. Coronary CTA datasets can be reviewed utilizing double oblique angles to visually track the path of the coronary arteries as well as to view the common cardiac chamber orientations. Cine capability for multi-phase data may be useful for visualization of cardiac structures in motion, such as chambers, valves, and arteries; automatic tracking and labeling allow a comprehensive analysis of the coronaries. Vessel lumen diameter is calculated, and the minimum lumen diameter computed is shown in color along the lumen profile.
Distance measurement and ROI tools are available for quantitative evaluation of the anatomy. Vascular findings of interest can be identified and annotated by the user, and measurements can be calculated for centerline distances, cross-sectional diameter and area, and lumen minimum diameter.
Let's break down the acceptance criteria and study details for the CardIQ Suite device based on the provided FDA 510(k) clearance letter.
1. Table of Acceptance Criteria and Reported Device Performance
The document provides specific acceptance criteria and performance results for the novel or modified algorithms introduced in the CardIQ Suite.
Feature/Algorithm Tested | Acceptance Criteria | Reported Device Performance |
---|---|---|
New Heart Segmentation (non-contrast CT exams) | More than 90% of exams successfully segmented. | Met the acceptance criterion: more than 90% of exams were successfully segmented. |
New Heart Fat Volume Estimate (non-deep learning) | Average Dice score $\ge$ 90%. | Average Dice score is greater than or equal to 90%. (Note: under- or over-estimation may occur due to inaccurate heart segmentation.) |
New Lumen Diameter Quantification (non-deep learning) | Mean absolute difference between estimated diameters and reference device (CardIQ Xpress 2.0) diameters lower than the mean voxel size. | The mean absolute difference is lower than the mean voxel size, demonstrating sufficient agreement for lumen quantification. |
Modified Coronary Centerline Tracking | Performance is enhanced when compared to the predicate device. | Proven that the performance of these algorithms is enhanced when compared to the predicate device. |
Modified Coronary Centerline Labeling | Performance is enhanced when compared to the predicate device. | Proven that the performance of these algorithms is enhanced when compared to the predicate device. |
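The Dice criterion in the table above can be made concrete with a short sketch. The masks here are toy arrays standing in for the algorithm's segmentation and the reference segmentation; nothing below comes from the actual validation data.

```python
# Hedged sketch of the Dice overlap metric used as the fat-estimate
# acceptance criterion (average Dice >= 90% across the test set).
import numpy as np

def dice_score(pred, ref):
    """Dice = 2|A∩B| / (|A| + |B|); returns a value in [0, 1]."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, ref).sum() / denom

pred = np.zeros((8, 8), dtype=bool); pred[2:6, 2:6] = True  # 16 voxels
ref = np.zeros((8, 8), dtype=bool);  ref[3:7, 2:6] = True   # 16 voxels
score = dice_score(pred, ref)  # overlap 12 voxels -> 24/32 = 0.75
```

In the submission the reported figure is the average of such per-exam scores over the 111-exam test set, which is why a single badly segmented heart can coexist with a passing average.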
2. Sample Sizes Used for the Test Set and Data Provenance
- Heart Segmentation (non-contrast CT exams): 111 CT exams
- Heart Fat Volume Estimate: 111 CT exams
- Lumen Diameter Quantification: 94 CT exams with a total of 353 narrowings across all available test sets.
- Coronary Centerline Tracking and Labeling: "a database of retrospective CT exams." (Specific number not provided for this particular test, but it is part of the overall bench testing.)
Data Provenance: The document states that the CT exams used for bench testing were "collected from different clinical sites, with a variety of acquisition parameters, and pathologies." It also notes that this database is "retrospective." The country of origin is not explicitly stated in the provided text.
3. Number of Experts Used to Establish Ground Truth and Qualifications
The document does not explicitly state the number of experts used or their specific qualifications (e.g., radiologist with 10 years of experience) for establishing the ground truth for the test sets. The tests are described as "bench testing" and comparisons to a "reference device" (CardIQ Xpress 2.0) or to an expectation of "successfully segmented."
4. Adjudication Method for the Test Set
The document does not explicitly describe an adjudication method (e.g., 2+1, 3+1). The performance is reported based on comparisons to a reference device or meeting a quantitative metric (e.g., Dice score, successful segmentation percentage, mean absolute difference).
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
The document does not mention or describe that a multi-reader multi-case (MRMC) comparative effectiveness study was done. The focus is on the performance of the algorithms themselves ('bench testing') and their enhancement compared to predicates, rather than human reader improvement with AI assistance.
6. Standalone (Algorithm Only Without Human-in-the-Loop) Performance
Yes, the studies described are standalone performance evaluations of the algorithms. They are referred to as "bench testing" and evaluate the device's algorithms directly against defined metrics or a reference device, without involving human readers in a diagnostic setting for performance comparison.
7. Type of Ground Truth Used
The type of ground truth used varies based on the specific test:
- Heart Segmentation (non-contrast CT exams) & Heart Fat Volume Estimate: The ground truth for these appears to be implicitly established by what constitutes "successfully segmented" or against which the "Dice score" is calculated. A "predefined HU threshold" is mentioned for heart fat, suggesting a quantitative, rule-based ground truth related to Hounsfield Units within segmented regions.
- Lumen Diameter Quantification: The ground truth for this was established by comparison to diameters from the reference device, CardIQ Xpress 2.0 (K073138).
- Coronary Centerline Tracking and Labeling: The ground truth for evaluating enhancement compared to the predicate is not explicitly defined but would likely involve some form of expert consensus or highly accurate manual delineation, which is then used to assess the "enhancement" of the new algorithm.
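The lumen-diameter criterion above (mean absolute difference between estimated and reference diameters below the mean voxel size) reduces to a simple comparison. The diameters and voxel size below are illustrative, not values from the submission.

```python
# Hedged sketch of the lumen-diameter agreement check against the
# reference device's diameters. All numbers are illustrative.
import numpy as np

def lumen_agreement(est_mm, ref_mm, mean_voxel_size_mm):
    """Return (mean absolute difference in mm, pass/fail vs voxel size)."""
    mad = float(np.mean(np.abs(np.asarray(est_mm) - np.asarray(ref_mm))))
    return mad, mad < mean_voxel_size_mm

estimated = [2.8, 3.1, 1.9, 2.4]  # device diameters (mm), illustrative
reference = [3.0, 3.0, 2.0, 2.5]  # reference-device diameters (mm)
mad, ok = lumen_agreement(estimated, reference, mean_voxel_size_mm=0.6)
# differences: 0.2, 0.1, 0.1, 0.1 -> MAD = 0.125 mm < 0.6 mm
```

Tying the tolerance to the voxel size is a reasonable way to say "agreement within the resolution of the data," which is how the summary frames the result.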
8. Sample Size for the Training Set
The document does not provide the sample size for the training set. It only mentions that the "new deep learning algorithm for heart segmentation of non-contrasted exams uses the same model as the previous existing heart segmentation algorithm for contrasted exams, however now the input is changed, and the model is trained and tested with the non-contrasted exams." Similarly for coronary tracking, it states the deep learning algorithm was "retrained to a finer resolution." However, no specific training set sizes are given.
9. How the Ground Truth for the Training Set Was Established
The document does not explicitly state how the ground truth for the training set was established. It is noted that the models were "trained," which implies the existence of a ground truth for the training data, but the methodology for its establishment (e.g., expert annotation, semi-automated methods) is not described in the provided text.
(99 days)
JAK
The SCENARIA View system is indicated to acquire axial volumes of the whole body including the head. Images can be acquired in axial, helical, or dynamic modes. The SCENARIA View system can also be used for interventional needle guidance. Volume datasets acquired by a SCENARIA View system can be post-processed in the SCENARIA View system to provide additional information. Post-processing capabilities of the SCENARIA View software include multi-planar reconstruction (MPR), and volume rendering. Volume datasets acquired by a SCENARIA View system can be transferred to external devices via a DICOM standard interface.
The Low Dose CT Lung Cancer Screening Option for the SCENARIA View system is indicated for using low dose CT for lung cancer screening. The screening must be conducted with the established program criteria and protocols that have been approved and published by a governmental body, a professional medical society, and/or FUJIFILM Corporation.
The SCENARIA View system is intended for general populations.
The subject device SCENARIA View is a multi-slice CT system consisting of a gantry, operator's workstation, patient table, high-frequency X-ray generator, and accessories. The system performance is similar to that of the predicate device.
The SCENARIA View system uses 128-slice CT technology, where the X-ray tube and detector assemblies are mounted on a frame that rotates continuously around the patient using slip ring technology. The solid-state detector assembly design collects up to 64 slices of data simultaneously. The X-ray sub-system features a high frequency generator, X-ray tube, and collimation system that produces a fan beam X-ray output. The system can operate in a helical (spiral) scan mode where the patient table moves during scanning. As the X-ray tube/detector assembly rotates around the patient, data is collected at multiple angles.
The collected data is then reconstructed into cross-sectional images by a high-speed reconstruction sub-system. The images are displayed on a Computer Workstation, stored, printed, and archived as required. The workstation is based on current PC technology using the Windows™ operating system.
Compared to the predicate device referenced within this submission, the subject devices support the following modifications:
- New features
- AutoPose is an AI-based function that recognizes a specific body part in an image of localization scan and then automatically sets the scan range and the image reconstruction range.
- RemoteRecon is a function for setting image reconstruction parameters that runs on an external personal computer (PC) connected to the CT system.
- Modified features
- The maximum load capacity of the patient table has been increased from 250 kg to 300 kg.
- Motion corrected reconstruction is an image reconstruction feature that reduces motion artifacts. The feature has been modified to include applicability for chest examinations, which are non-gated scans.
- AutoPositioning is a feature that assists in positioning the patient using camera images. The feature has been modified to include 12 additional body parts (Head and Neck, Neck, C-spine, Heart, Chest-Abdomen, Chest-Upper Abdomen, Abdomen-Pelvis, Abdomen, Pelvis, T-spine, L-spine, T-L-spine), in addition to the 2 body parts (Head, Chest) of the predicate device, with scanogram ranges displayed according to the selected protocol.
The provided FDA 510(k) Clearance Letter for SCENARIA View Phase 5.0 primarily focuses on demonstrating substantial equivalence to a predicate device (SCENARIA View 4.2). The document outlines non-clinical and some clinical tests, but it does not present a formal "acceptance criteria" table with specific quantitative metrics for the device performance of the new AI features (AutoPose, Body Still Shot) in the same way one might find for a novel AI/CADe device.
The "acceptance criteria" for this submission appear to be centered around workflow improvement and sufficient image quality when compared to manual or predicate methods, rather than hard quantitative performance targets. The study designs are more akin to usability studies and qualitative image reviews.
Here's an attempt to extract and interpret the information based on your requested structure, acknowledging the limitations of the provided text in terms of explicit acceptance criteria and standalone performance metrics for the AI components.
Acceptance Criteria and Device Performance for SCENARIA View Phase 5.0 (AI Components)
The provided document describes the acceptance criteria and study results for the new features AutoPose and Body Still Shot introduced in the SCENARIA View Phase 5.0 system. The acceptance criteria are largely qualitative, focusing on workflow improvement and sufficient image quality.
1. Table of Acceptance Criteria and Reported Device Performance
Feature/Metric | Acceptance Criteria (Implied) | Reported Device Performance |
---|---|---|
AutoPose | Reduce the number of steps in the scan range setting procedure compared to conventional manual operation. | All evaluated cases (across all regions) showed a reduction in the number of steps compared to manual scan range setting. Max manual adjustment steps (if needed) remained equivalent. |
Body Still Shot | Able to obtain images of sufficient quality with reduced motion artifacts. | Images reconstructed with and without Body Still Shot were reviewed, and the function was evaluated to be able to obtain images of sufficient quality. (No specific quantitative metric for "sufficient quality" is provided, implying qualitative assessment). |
Note on "Acceptance Criteria": The document does not explicitly list quantitative acceptance criteria in a table format for the AI features. The criteria listed above are inferred from the Methods and Results sections of the non-clinical and clinical tests described. The primary goal was to demonstrate workflow efficiency (AutoPose) and qualitative image improvement (Body Still Shot).
2. Sample Sizes and Data Provenance
- AutoPose (Clinical Test):
- Total Cases: 50 (Head), 50 (Neck), 52 (Chest), 54 (Heart), 52 (Abdomen), 52 (Abdomen-Pelvis), 50 (Chest-Abdomen), 24 (T-L-Spine), 50 (C-Spine), 50 (T-Spine), 50 (L-Spine).
- Note: The table layout in the original document makes it unclear whether Chest-Upper Abdomen had cases listed; the entry is empty, so it is assumed to contribute no cases (Chest-Abdomen is taken as the 50 listed).
- Total Sum (if all distinct): 50 + 50 + 52 + 54 + 52 + 52 + 50 + 50 + 50 + 50 + 24 = 534 cases.
- Data Provenance: Clinical sites in the USA.
- Retrospective/Prospective: Not explicitly stated, but the nature of evaluating steps in a procedure suggests it was likely a prospective workflow evaluation with certified technologists.
- Body Still Shot (Clinical Test):
- Total Cases: Not specified.
- Data Provenance: Not specified (only mentions "Japanese M.D." reviewers).
- Retrospective/Prospective: Not specified.
- Training Set Sample Size:
- Not disclosed in the provided document. The document primarily details the validation/test set.
3. Number of Experts and Qualifications for Ground Truth
- AutoPose (Clinical Test):
- Number of Experts: Not explicitly stated how many "certified radiological technologists" performed the evaluations, only that they were certified.
- Qualifications: "certified radiological technologists." No specific years of experience or other details are provided.
- Role in Ground Truth: Their assessment of the number of steps and the "expected position" for manual adjustment served as the comparison for AutoPose's performance.
- Body Still Shot (Clinical Test):
- Number of Experts: Not explicitly stated how many "Japanese M.D." (Medical Doctors) reviewed the images, only that they were "Japanese M.D."
- Qualifications: "Japanese M.D." No specific specialty (e.g., radiologist), years of experience, or other details are provided.
- Role in Ground Truth: Their qualitative review ("evaluated to be able to obtain images of sufficient quality") served as the ground truth for image quality.
4. Adjudication Method for the Test Set
- AutoPose: Not explicitly stated. The results imply a direct comparison of workflow steps, but it's not mentioned if multiple technologists evaluated the same cases or how discrepancies were handled.
- Body Still Shot: Not explicitly stated how reviews were conducted if multiple M.D.s were involved (e.g., consensus, majority vote).
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Was it done?: No, a formal MRMC comparative effectiveness study demonstrating how human readers improve with AI vs. without AI assistance was not performed or reported for either AutoPose or Body Still Shot.
- Effect Size: Not applicable, as no such study was presented. The studies were focused on workflow efficiency (AutoPose) and qualitative image quality (Body Still Shot).
6. Standalone (Algorithm Only) Performance
- Was it done?: The document does not describe a standalone AI performance study (e.g., precision, recall, F1-score for AutoPose's pose recognition; or a quantitative image quality metric for Body Still Shot). The evaluation of AutoPose was focused on the workflow impact, and Body Still Shot on perceived image quality by human reviewers.
7. Type of Ground Truth Used
- AutoPose:
- For Scan Range Setting: The "ground truth" or reference for the AutoPose evaluation was the manual scan range setting process and the expected optimal scan position as determined by certified radiological technologists. The metric was a reduction in the number of workflow steps.
- Body Still Shot:
- For Image Quality: The ground truth for image quality was based on the qualitative assessment and review by "Japanese M.D." to determine "sufficient quality." This is essentially expert consensus on image usability.
8. Sample Size for the Training Set
- The document does not provide information on the sample size used for training the AI models (AutoPose and Body Still Shot).
9. How Ground Truth for the Training Set was Established
- The document does not provide information on how the ground truth for the training set was established for the AI models.