Syngo Carbon Clinicals is intended to provide advanced visualization tools to prepare and process medical images for the evaluation, manipulation, and communication of clinical data acquired by medical imaging modalities (for example, CT, MR).
OrthoMatic Spine provides the means to perform musculoskeletal measurements of the whole spine, in particular spine curve angle measurements.
The TimeLens provides the means to compare a region of interest between multiple time points.
The software package is designed to support technicians and physicians in qualitative and quantitative measurements and in the analysis of clinical data that was acquired by medical imaging modalities.
An interface shall enable the connection between the Syngo Carbon Clinicals software package and the interconnected software solution for viewing, manipulation, communication, and storage of medical images.
Syngo Carbon Clinicals is a software-only medical device that provides dedicated advanced imaging tools for diagnostic reading. These tools can be called up via standard interfaces from any native/syngo-based viewing application (hosting application) that is part of the SYNGO medical device portfolio. They help prepare and process medical images for the evaluation, manipulation, and communication of clinical data acquired by medical imaging modalities (e.g., MR, CT).
Deployment Scenario: Syngo Carbon Clinicals is a plug-in that can be added to any SYNGO-based hosting application (for example, Syngo Carbon Space or syngo.via). The hosting application (native/syngo platform-based software) is not described within this 510(k) submission. The hosting device decides which tools from Syngo Carbon Clinicals are used; it does not need to host all of them, and a desired subset of the provided tools can be used. Tools can be enabled or disabled through licenses.
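As context for this plug-in model, here is a purely illustrative sketch of license-gated tool selection on the host side; the names and interface are hypothetical, not Siemens' actual API:

```python
from dataclasses import dataclass

@dataclass
class PluginTool:
    name: str
    licensed: bool  # per the description above, licenses enable/disable tools

# Hypothetical registry of tools exposed by the plug-in.
REGISTRY = [
    PluginTool("OrthoMatic Spine", licensed=True),
    PluginTool("TimeLens", licensed=False),
]

def tools_for_host(requested: set) -> list:
    """The hosting application selects a desired subset; unlicensed or
    unrequested tools are simply not offered."""
    return [t.name for t in REGISTRY if t.name in requested and t.licensed]

print(tools_for_host({"OrthoMatic Spine", "TimeLens"}))  # ['OrthoMatic Spine']
```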
When preparing the radiologist's reading workflow on a dedicated workplace or workstation, Syngo Carbon Clinicals can be called to generate additional results or renderings according to the user needs using the tools available.
This document describes performance evaluation for two specific tools within Syngo Carbon Clinicals (VA41): OrthoMatic Spine and TimeLens.
1. Table of Acceptance Criteria and Reported Device Performance
| Feature/Tool | Acceptance Criteria | Reported Device Performance |
|---|---|---|
| OrthoMatic Spine | Algorithm's measurement deviations for major spinal measurements (Cobb angles, thoracic kyphosis angle, lumbar lordosis angle, coronal balance, and sagittal vertical alignment) must fall within the range of inter-reader variability. | Cumulative Distribution Functions (CDFs) demonstrated that the algorithm's measurement deviations fell within the range of inter-reader variability for the major Cobb angle, thoracic kyphosis angle, lumbar lordosis angle, coronal balance, and sagittal vertical alignment. This indicates the algorithm replicates average rater performance and meets clinical reliability acceptance criteria. |
| TimeLens | Not specified, as a reader study/bench test was not required given its nature as a simple workflow enhancement algorithm. | No specific quantitative performance metrics are provided, as clinical performance evaluation methods (reader studies) were deemed unnecessary. The tool is described as a "simple workflow enhancement algorithm". |
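For context on the measurements named in the table: a Cobb angle is conventionally the angle between the endplate lines of the two most tilted vertebrae of a curve. The sketch below shows only that geometry, with hypothetical landmark coordinates; the document does not describe OrthoMatic Spine's internal computation:

```python
import numpy as np

def cobb_angle(upper_endplate, lower_endplate):
    """Angle in degrees between two endplate lines, each given as a pair
    of (x, y) landmark points in image coordinates."""
    def unit(p, q):
        v = np.asarray(q, float) - np.asarray(p, float)
        return v / np.linalg.norm(v)
    u = unit(*upper_endplate)
    w = unit(*lower_endplate)
    cos_theta = abs(np.dot(u, w))  # orientation-independent line angle
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Hypothetical endplate landmarks (left point, right point):
upper = ((100.0, 200.0), (180.0, 188.0))
lower = ((104.0, 420.0), (182.0, 444.0))
print(f"Cobb angle: {cobb_angle(upper, lower):.1f} degrees")
```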
2. Sample Size Used for the Test Set and Data Provenance
- OrthoMatic Spine:
- Test Set Sample Size: 150 spine X-ray images (75 frontal views, 75 lateral views) were used in a reader study.
- Data Provenance: The document states that the main dataset for training includes data from USA, Germany, Ukraine, Austria, and Canada. While this specifies the training data provenance, the provenance of the specific 150 images used for the reader study (test set) is not explicitly segregated or stated here. The study involved US board-certified radiologists, implying the test set images are relevant to US clinical practice.
- Retrospective/Prospective: Not explicitly stated, but the description of "collected" images and patients with various spinal conditions suggests a retrospective collection of existing exams.
- TimeLens: No specific test set details are provided, as a reader study/bench test was not required.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
- OrthoMatic Spine:
- Number of Experts: Five US board-certified radiologists.
- Qualifications: US board-certified radiologists. No specific years of experience are mentioned.
- Ground Truth for Reader Study: The "mean values obtained from the radiologists' assessments" for the 150 spine X-ray images served as the reference for comparison against the algorithm's output.
- TimeLens: Not applicable, as no reader study was conducted.
4. Adjudication Method for the Test Set
- OrthoMatic Spine: The algorithm's output was assessed against the mean values obtained from the five radiologists' assessments. This implies a form of consensus or average from multiple readers rather than a strict 2+1 or 3+1 adjudication.
- TimeLens: Not applicable.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and If So, the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance
- OrthoMatic Spine: A reader study was performed, which is a type of MRMC study. However, this was a standalone performance evaluation of the algorithm against human reader consensus, not a comparative effectiveness study with and without AI assistance for human readers. Therefore, there is no reported "effect size of how much human readers improve with AI vs without AI assistance." The study aimed to show the algorithm replicates average human rater performance.
- TimeLens: Not applicable.
6. If a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Evaluation Was Done
- OrthoMatic Spine: Yes, a standalone performance evaluation of the OrthoMatic Spine algorithm (without human-in-the-loop assistance) was conducted. The algorithm's measurements were compared against the mean values derived from five human radiologists (see the sketch after this list).
- TimeLens: The description suggests the TimeLens tool itself is a "simple workflow enhancement algorithm" and its performance was evaluated through non-clinical verification and validation activities rather than a specific standalone clinical study with an AI algorithm providing measurements.
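The deviation-versus-inter-reader-variability comparison described above can be sketched as follows. The data here are synthetic, and the acceptance rule shown is one plausible reading of "deviations fall within the range of inter-reader variability" (the algorithm's deviation CDF dominating the readers' CDF), not the submission's exact statistical method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the study: 150 cases, 5 readers, one angle each.
readers = rng.normal(loc=40.0, scale=3.0, size=(150, 5))
algorithm = readers.mean(axis=1) + rng.normal(0.0, 1.5, size=150)

reference = readers.mean(axis=1)                           # mean-of-readers reference
algo_dev = np.abs(algorithm - reference)                   # algorithm deviation per case
reader_dev = np.abs(readers - reference[:, None]).ravel()  # inter-reader deviations

# Empirical CDFs: fraction of deviations at or below each threshold.
thresholds = np.linspace(0.0, 10.0, 101)
cdf_algo = np.array([(algo_dev <= t).mean() for t in thresholds])
cdf_reader = np.array([(reader_dev <= t).mean() for t in thresholds])

# Acceptance (assumed): at every threshold the algorithm accumulates at least
# as much probability mass as the readers, i.e., its deviations are no larger.
print("within inter-reader variability:", bool((cdf_algo >= cdf_reader).all()))
```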
7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.)
- OrthoMatic Spine:
- For the reader study (test set performance evaluation): Expert consensus (mean of five US board-certified radiologists' measurements) was used to assess the algorithm's performance.
- For the training set: The initial annotations were performed by trained non-radiologists and then reviewed by board-certified radiologists. This can be considered a form of expert-verified annotation.
- TimeLens: Not specified, as no clinical ground truth assessment was required.
8. The Sample Size for the Training Set
- OrthoMatic Spine:
- Number of Individual Patients (Training Data): 6,135 unique patients.
- Number of Images (Training Data): A total of 23,464 images were collected within the entire dataset, which was split 60% for training, 20% for validation, and 20% for model selection. The training set therefore comprises approximately 60% of both the patient and image counts: roughly 3,681 patients and 14,078 images (see the arithmetic sketch after this list).
- TimeLens: Not specified.
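The arithmetic behind these approximations is just the 60/20/20 split applied to the reported totals:

```python
patients, images = 6_135, 23_464
for name, frac in {"training": 0.60, "validation": 0.20, "model selection": 0.20}.items():
    print(f"{name}: ~{round(patients * frac):,} patients, ~{round(images * frac):,} images")
# training: ~3,681 patients, ~14,078 images
```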
9. How the Ground Truth for the Training Set Was Established
- OrthoMatic Spine: Most images in the dataset (used for training, validation, and model selection) were annotated using a dedicated annotation tool (Darwin, V7 Labs) by a US-based medical data labeling company (Cogito Tech LLC). Initial annotations were performed by trained non-radiologists and subsequently reviewed by board-certified radiologists. This process was guided by written guidelines and automated workflows to ensure quality and consistency, with annotations including vertebral landmarks and key vertebrae (C7, L1, S1).
- TimeLens: Not specified.
The ADVIA Centaur Anti-Thyroid Peroxidase II (aTPOII) assay is for in vitro diagnostic use in the quantitative measurement of autoantibodies against thyroid peroxidase in human serum and plasma (EDTA and lithium heparin) using the ADVIA Centaur XP system.
Anti-thyroid peroxidase (aTPO) measurements are used, in conjunction with a clinical assessment, as an aid in the diagnosis of autoimmune thyroiditis and/or Graves' disease.
The ADVIA Centaur Anti-Thyroid Peroxidase II (aTPOII) consists of:
- aTPOII ReadyPack® primary reagent pack (Lite Reagent, Solid Phase)
- aTPOII CAL
Devices sold separately for use with the ADVIA Centaur® Anti-Thyroid Peroxidase II (aTPOII) assay are:
- ADVIA Centaur aTPOII MCM (MCM 1, MCM 2–4)
- ADVIA Centaur aTPOII QC
- ADVIA Centaur aTPOII DIL ReadyPack ancillary reagent pack
syngo.CT Dual Energy is designed to operate with CT images based on two different X-ray spectra.
The various materials of an anatomical region of interest have different attenuation coefficients, which depend on the used energy. These differences provide information on the chemical composition of the scanned body materials. syngo.CT Dual Energy combines images acquired with low and high energy spectra to visualize this information. Depending on the region of interest, contrast agents may be used.
The general functionality of the syngo.CT Dual Energy application is as follows:
- Bone Marrow ²⁾
- Bone Removal ¹⁾
- Brain Hemorrhage ¹⁾
- Gout Evaluation ¹⁾
- Hard Plaques ¹⁾
- Heart PBV
- Kidney Stones ¹⁾ ²⁾ ³⁾
- Liver VNC ¹⁾
- Lung Mono ¹⁾
- Lung Perfusion ¹⁾
- Lung Vessels ¹⁾
- Monoenergetic ¹⁾ ²⁾
- Monoenergetic Plus ¹⁾ ²⁾
- Optimum Contrast ¹⁾ ²⁾
- Rho/Z ¹⁾ ²⁾
- SPP (Spectral Post-Processing Format) ¹⁾ ²⁾
- SPR (Stopping Power Ratio) ¹⁾ ²⁾
- Virtual Non-Calcium (VNCa) ¹⁾ ²⁾
- Virtual Unenhanced ¹⁾
The availability of each feature depends on the Dual Energy scan mode.
¹⁾ This functionality supports data from Siemens Healthineers Photon-Counting CT scanners acquired in QuantumPlus modes.
²⁾ This functionality supports data from Siemens Healthineers Photon-Counting CT scanners acquired in QuantumPeak modes.
³⁾ Kidney Stones is designed to support the visualization of the chemical composition of kidney stones and especially the differentiation between uric acid and non-uric acid stones. For full identification of the kidney stone, additional clinical information should be considered such as patient history and urine testing. Only a well-trained radiologist can make the final diagnosis upon consideration of all available information. The accuracy of identification is decreased in obese patients.
Dual energy offers functions for qualitative and quantitative post-processing evaluations. syngo.CT Dual Energy is a post-processing application consisting of several post-processing application classes that can be used to improve the visualization of the chemical composition of various energy dependent materials in the human body when compared to single energy CT. Depending on the organ of interest, the user can select and modify different application classes or parameters and algorithms.
Different body regions require specific tools that allow the correct evaluation of data sets. syngo.CT Dual Energy provides a range of application classes that meet the requirements of each evaluation type. The different application classes for the subject device can be combined into one workflow.
The product is intended for use in patients at least 21 years old.
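As background for the Monoenergetic application classes listed above: dual-energy post-processing derives energy-dependent images from the low- and high-spectrum acquisitions. A common textbook simplification is an image-domain weighted blend, sketched below with hypothetical HU values and weight; the actual Monoenergetic/Monoenergetic Plus algorithms are not described in this summary:

```python
import numpy as np

def virtual_monoenergetic(img_low, img_high, w):
    """Image-domain linear blend of low- and high-spectrum CT images (HU).
    The weight w is chosen per target keV (assumed simplification)."""
    return w * img_low + (1.0 - w) * img_high

# Hypothetical 2x2 HU patches from an iodine-enhanced region:
img_low = np.array([[320.0, 310.0], [305.0, 315.0]])   # low-energy spectrum
img_high = np.array([[180.0, 175.0], [170.0, 178.0]])  # high-energy spectrum
print(virtual_monoenergetic(img_low, img_high, w=0.6))
```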
AI-Rad Companion Prostate MR is indicated for the processing and annotation of DICOM MR prostate images acquired in adult male populations that demonstrate indications of oncological abnormalities in the prostate.
The AI-Rad Companion Prostate MR software aims to support the radiologist and provides the following functionality:
• Viewing, analyzing, and evaluating prostate MR images, including DCE, ADC, T2, and DWI
• Acting as the hosting application for, and providing the interface to, the external Prostate MR AI plug-in device
• Accept/reject/edit the results generated by the plug-in software Prostate MR AI
AI-Rad Companion Prostate MR is a diagnostic aid in the interpretation of prostate MRI examinations acquired according to the PI-RADS standard.
AI-Rad Companion Prostate MR provides quantitative and qualitative information based on bi or multiparametric prostate MR DICOM images. It displays information on the segmented gland, prostate volume, and segmented lesions along with their classifications. This information can be used to support the reading and reporting of prostate MR studies, as well as the planning of prostate biopsies in the case of ultrasound guided MR-US fusion biopsies of the prostate gland.
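The document does not state how the displayed prostate volume is computed, but for a segmentation-based tool the natural computation is voxel counting scaled by voxel size. A minimal sketch with a synthetic mask and hypothetical spacing:

```python
import numpy as np

def segmentation_volume_ml(mask, spacing_mm=(0.5, 0.5, 3.0)):
    """Volume (mL) of a binary mask; spacing values here are hypothetical,
    in the range typical of T2-weighted prostate MR."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return int(mask.sum()) * voxel_mm3 / 1000.0  # mm^3 -> mL

# Synthetic mask of ~53,000 voxels (~40 mL at this spacing):
rng = np.random.default_rng(1)
mask = rng.random((24, 128, 128)) < 0.136
print(f"{segmentation_volume_ml(mask):.1f} mL")
```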
The primary features of AI-Rad Companion Prostate MR include:
• Display of automatic segmentation and volume of the prostate gland, as well as display of automatic segmentation, quantification, and classification of lesions
• Manual adjustment of gland and lesion segmentation and editing of lesion scores, diameter, and localization of the automatically generated lesions
• Marking of new lesions
• Export of results as RTSS format for import into supporting ultrasound or fusion biopsy planning systems
Based on the provided FDA 510(k) clearance letter for AI-Rad Companion Prostate MR (K252608), there is no specific study described that proves the device meets predefined acceptance criteria for performance metrics (e.g., sensitivity, specificity, accuracy). The document primarily focuses on demonstrating substantial equivalence to a predicate device (AI-Rad Companion Prostate MR K193283) and adherence to non-clinical verification and validation standards for software development and risk management.
The document explicitly states: "No clinical tests were conducted to test the performance and functionality of the modifications introduced within AI-Rad Companion Prostate MR."
Therefore, a table of acceptance criteria and reported device performance, information about sample sizes, expert ground truth establishment, adjudication methods, multi-reader multi-case studies, standalone performance, and training set details are not available in this document as no clinical performance study for the modified device was performed.
The document emphasizes that modifications and improvements were verified and validated through non-clinical tests (software verification and validation, unit, system, and integration tests), which demonstrated conformity to industry standards and the predicate device's existing safety and effectiveness.
Here’s a breakdown of what is stated in the document regarding testing:
1. A table of acceptance criteria and the reported device performance:
- Not provided. The document does not include a table of specific clinical acceptance criteria (e.g., target sensitivity or specificity values) or reported device performance metrics against such criteria. The focus is on demonstrating that software enhancements do not adversely affect safety and effectiveness, assuming the predicate device's performance was already acceptable.
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective):
- Not provided. Since no clinical performance study was conducted for this specific submission, details on test set sample sizes and data provenance are not presented.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience):
- Not applicable. As no clinical study is reported, this information is not available.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:
- Not applicable.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- Not done. The document explicitly states "No clinical tests were conducted."
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:
- Not explicitly stated for the modified device. While the device description mentions automatic segmentation and classification, the overall context emphasizes a "diagnostic aid" that "aims to support the radiologist" and has functionality to "Accept/reject/edit the results generated by the plug-in software Prostate MR AI." This suggests an interactive workflow where standalone performance is not the primary claim for this particular submission. The separate product, "Prostate MR AI (K241770)," which performs the core AI tasks, is likely where standalone performance would be detailed, but not in this document.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Not applicable for this submission, as no new clinical performance study is detailed for the modified device. The original predicate device's performance would have relied on a ground truth, but that information is not part of this document.
8. The sample size for the training set:
- Not provided. Since this submission is for an updated version of an already cleared device and no new clinical performance study is detailed, the training set size for the underlying AI model (likely part of K241770 or the predicate K193283) is not included here.
9. How the ground truth for the training set was established:
- Not provided. This information would typically be detailed in the original submission for the AI algorithm (likely K241770 or K193283), not in this update focused on software enhancements and substantial equivalence.
In summary, the provided document focuses on demonstrating that the enhancements and modifications to the AI-Rad Companion Prostate MR do not adversely affect the safety and effectiveness of the existing predicate device. It relies on non-clinical software verification and validation, and substantial equivalence arguments, rather than presenting a de novo clinical performance study with new acceptance criteria and results.
The ACUSON Sequoia and Sequoia Select ultrasound imaging systems are intended to provide images of, or signals from, inside the body by an appropriately trained healthcare professional in a clinical setting for the following applications: Fetal, Abdominal, Pediatric, Neonatal Cephalic, Small Parts, OB/GYN (useful for visualization of the ovaries, follicles, uterus and other pelvic structures), Cardiac, Transesophageal, Pelvic, Vascular, Adult Cephalic, Musculoskeletal and Peripheral Vascular applications.
The system supports the Ultrasonically-Derived Fat Fraction (UDFF) measurement tool to report an index that can be useful as an aid to a physician managing adult and pediatric patients with hepatic steatosis.
The system also provides the ability to measure anatomical structures for fetal, abdominal, pediatric, small organ, cardiac, transrectal, transvaginal, peripheral vessel, musculoskeletal and calculation packages that provide information to the clinician that may be used adjunctively with other medical data obtained by a physician for clinical diagnosis purposes.
The ACUSON Origin and Origin ICE ultrasound imaging systems are intended to provide images of, or signals from, inside the body by an appropriately trained healthcare professional in a clinical setting for the following applications: Fetal, Abdominal, Pediatric, OB/GYN (useful for visualization of the ovaries, follicles, uterus and other pelvic structures), Cardiac, Transesophageal, Intracardiac, Vascular, Adult Cephalic, and Peripheral Vascular applications.
The catheter is intended for intracardiac and intra-luminal visualization of cardiac and great vessel anatomy and physiology as well as visualization of other devices in the heart of adult and pediatric patients. The catheter is intended for imaging guidance only, not treatment delivery, during cardiac interventional percutaneous procedures.
The system also provides the ability to measure anatomical structures for fetal, abdominal, pediatric, cardiac, peripheral vessel, and calculation packages that provide information to the clinician that may be used adjunctively with other medical data obtained by a physician for clinical diagnosis purposes.
The ACUSON Sequoia, Sequoia Select, Origin, and Origin ICE Diagnostic Ultrasound Systems (software version VC10) are multi-purpose, mobile, software-controlled diagnostic ultrasound systems with an on-screen display of thermal and mechanical indices related to potential bio-effect mechanisms. These systems transmit, receive, and process ultrasound echo data (distance and intensity information about body tissue) in various modes of operation and display it as ultrasound images, anatomical and quantitative measurements, calculations, and analyses of the human body and fluid flow. These ultrasound systems use a variety of transducers to provide imaging in all standard acquisition modes and also have comprehensive networking and DICOM capabilities.
The provided FDA 510(k) clearance letter and summary discuss the ACUSON Sequoia, Sequoia Select, Origin, and Origin ICE Diagnostic Ultrasound Systems. This document indicates a submission for software feature enhancements and workflow improvements, including an "AI Measure and AI Assist workflow efficiency feature" and "Liver Elastography optimization."
Here's an analysis of the acceptance criteria and the study information provided:
Acceptance Criteria and Reported Device Performance
The submission focuses on enhancements to existing cleared devices rather than a de novo AI device. Therefore, the "acceptance criteria" discussed are primarily related to the performance of the Liver Elastography optimization using phantom testing.
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Liver Elastography Optimization: The system's performance in measuring stiffness within calibrated elasticity phantoms for pSWE, Auto pSWE, and 2D SWE modes must meet manufacturer's accuracy and variability criteria. | The verification results for Liver Elastography optimization using calibrated elasticity phantoms met the acceptance criteria for accuracy and variability. Specific numerical values for accuracy and variability are not provided in this document. |
| Software Feature Enhancements and Workflow Improvements (including AI Measure and AI Assist): The modifications should not raise new or different questions of safety and effectiveness, and the features should continue to meet their intended use. | "All pre-determined acceptance criteria were met." The document states that the modifications do not raise new or different questions of safety and effectiveness, and the devices continue to meet their intended use. Specific performance metrics for the AI Measure and AI Assist features themselves are not detailed as quantitative acceptance criteria in this document. |
| General Device Safety and Effectiveness: Compliance with relevant medical device standards (e.g., IEC 60601 series, ISO 10993-1, IEC 62304, ISO 13485) and FDA guidance. | The device complies with a comprehensive list of international and FDA standards, and non-clinical verification testing addressed system-level requirements, design specifications, and risk control measures. |
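The summary reports only that accuracy and variability criteria were met, without the underlying numbers. For illustration, phantom studies commonly quantify accuracy as percent bias against the certified stiffness and variability as the coefficient of variation across repeated measurements; the definitions and readings below are assumptions, not Siemens' stated criteria:

```python
import numpy as np

def phantom_metrics(measured_kpa, certified_kpa):
    """Percent bias vs. the certified value and coefficient of variation
    across repeats (assumed metric definitions)."""
    m = np.asarray(measured_kpa, float)
    bias_pct = 100.0 * (m.mean() - certified_kpa) / certified_kpa
    cv_pct = 100.0 * m.std(ddof=1) / m.mean()
    return bias_pct, cv_pct

# Hypothetical repeated SWE readings on one calibrated phantom:
bias, cv = phantom_metrics([10.2, 9.8, 10.5, 10.1, 9.9, 10.3], certified_kpa=10.0)
print(f"bias: {bias:+.1f}%  CV: {cv:.1f}%")
```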
Study Details for Liver Elastography Optimization (SWE Performance Testing)
The primary study mentioned in the document for performance evaluation is related to the Liver Elastography optimization.
- Sample Size Used for the Test Set and the Data Provenance:
- Test Set: Calibrated elasticity phantoms. The specific number of phantoms used is not stated beyond "calibrated elasticity phantoms."
- Data Provenance: Not explicitly stated, but implies laboratory testing using commercially available or manufacturer-certified phantoms. Transducers listed were DAX, 5C1, 9C2, 4V1, and 10L4.
- Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of those Experts:
- Ground Truth Establishment: The ground truth for the test set (phantom stiffness) was established by the phantom manufacturer, as they were "calibrated elasticity phantoms certified by the phantom manufacturer."
- Number/Qualifications of Experts: The document does not specify the number or qualifications of experts involved in the phantom's certification process or in the actual testing of the Siemens device. The testing appears to be objective, relying on the calibrated properties of the phantoms.
- Adjudication Method for the Test Set:
- Adjudication Method: Not applicable. Phantom testing typically relies on quantitative measurements against known phantom properties, not human adjudication of results.
- If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done:
- MRMC Study: No, an MRMC comparative effectiveness study was not conducted according to this document. The submission focuses on technical enhancements and phantom validation for elastography, and system safety/effectiveness.
- If a Standalone (i.e., algorithm-only, without human-in-the-loop) evaluation was done:
- Standalone Performance: The "SWE Performance Testing" with phantoms could be considered a form of standalone performance assessment as it evaluates the device's measurement capabilities against a known standard. However, the AI Measure and AI Assist features are described as "workflow efficiency features" where measurements are "automatically launched" after classification, implying an interaction with a human user rather than a fully standalone diagnostic output. No specific standalone performance metrics for the AI Measure/Assist components are provided.
- The Type of Ground Truth Used:
- Ground Truth: For the elastography testing, the ground truth was the known stiffness values of the calibrated elasticity phantoms.
- The Sample Size for the Training Set:
- Training Set Sample Size: The document does not provide information about a training set size for the AI Measure and AI Assist features or the elastography optimization. This type of 510(k) submission typically focuses on validation and verification of changes to an already cleared product, rather than detailing the initial development or training data for AI algorithms.
- How the Ground Truth for the Training Set Was Established:
- Training Set Ground Truth: Not applicable, as information on a specific training set is not provided in this document.
Summary regarding AI components:
While the document mentions "AI Measure" and "AI Assist" as workflow efficiency features (e.g., launching relevant measurements after cardiac view classification), it does not provide detailed performance metrics, test set sizes, ground truth establishment, or clinical study information specifically for these AI components. The 510(k) emphasizes that these are "software feature enhancements and workflow improvements" that, along with other changes, do not raise new questions of safety and effectiveness, leading to substantial equivalence with the predicate device. The only detailed "performance testing" described is for the Liver Elastography optimization using phantoms. This suggests that the AI features themselves might have been validated through internal software verification and validation activities that are not detailed in this public summary, or their impact on diagnostic performance was considered incremental and not requiring specific clinical comparative studies for this particular submission.
The intended use of the device YSIO X.pree is to visualize anatomical structures of human beings by converting an X-ray pattern into a visible image.
The device is a digital X-ray system to generate X-ray images from the whole body including the skull, chest, abdomen, and extremities. The acquired images support medical professionals to make diagnostic and/or therapeutic decisions.
YSIO X.pree is not for mammography examinations.
The YSIO X.pree is a radiography X-ray system. It is designed as a modular system with components such as a ceiling suspension with an X-ray tube, Bucky wall stand, Bucky table, X-ray generator, portable wireless, and fixed integrated detectors that may be combined into different configurations to meet specific customer needs.
The following modifications have been made to the cleared predicate device:
- Updated generator
- Updated collimator
- Updated patient table
- Updated Bucky Wall Stand
- New X.wi-D 24 portable wireless detector
- New virtual AEC selection
- New status indicator lights
The provided 510(k) clearance letter and summary for the YSIO X.pree device (K250738) indicate that the device is substantially equivalent to a predicate device (K233543). The submission primarily focuses on hardware and minor software updates, asserting that these changes do not impact the device's fundamental safety and effectiveness.
However, the provided text does not contain the detailed information typically found in a clinical study report regarding acceptance criteria, sample sizes, ground truth establishment, or expert adjudication for an AI-enabled medical device. This submission appears to be for a conventional X-ray system with some "AI-based" features like auto-cropping and auto-collimation, which are presented as functionalities that assist the user rather than standalone diagnostic algorithms requiring extensive efficacy studies for regulatory clearance.
Based on the provided document, here's an attempt to answer your questions, highlighting where information is absent or inferred:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state quantitative acceptance criteria in terms of performance metrics (e.g., sensitivity, specificity, or image quality scores) with corresponding reported device performance values for the AI features. The "acceptance" appears to be qualitative and based on demonstrating equivalence to the predicate device and satisfactory usability/image quality.
If we infer acceptance criteria from the "Summary of Clinical Tests" and "Conclusion as to Substantial Equivalence," the criteria seem to be:
| Acceptance Criteria (Inferred) | Reported Device Performance (as stated in document) |
|---|---|
| Overall System: Intended use met, clinical needs covered, stability, usability, performance, and image quality are satisfactory. | "The clinical test results stated that the system's intended use was met, and the clinical needs were covered." |
| New Wireless Detector (X.wi-D24): Images acquired are of adequate radiographic quality and sufficiently acceptable for radiographic usage. | "All images acquired with the new detector were adequate and considered to be of adequate radiographic quality." and "All images acquired with the new detector were sufficiently acceptable for radiographic usage." |
| Substantial Equivalence: Safety and effectiveness are not affected by changes. | "The subject device's technological characteristics are same as the predicate device, with modifications to hardware and software features that do not impact the safety and effectiveness of the device." and "The YSIO X.pree, the subject of this 510(k), is similar to the predicate device. The operating environment is the same, and the changes do not affect safety and effectiveness." |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: Not explicitly stated as a number of cases or images. The "Customer Use Test (CUT)" was performed at two university hospitals.
- Data Provenance: The Customer Use Test (CUT) was performed at "Universitätsklinikum Augsburg" in Augsburg, Germany, and "Klinikum rechts der Isar, Technische Universität München" in Munich, Germany. The document states "clinical image quality evaluation by a US board-certified radiologist" for the new detector, implying that the images themselves might have originated from the German sites but were reviewed by a US expert. The study design appears to be prospective in the sense that the new device was evaluated in a clinical setting in use rather than historical data being analyzed.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications of Experts
- Number of Experts: For the overall system testing (CUT), it's not specified how many clinicians/radiologists were involved in assessing "usability," "performance," and "image quality." For the new wireless detector (X.wi-D24), it states "a US board-certified radiologist."
- Qualifications of Experts: For the new wireless detector's image quality evaluation, the expert was a "US board-certified radiologist." No specific experience level (e.g., years of experience) is provided.
4. Adjudication Method for the Test Set
No explicit adjudication method (e.g., 2+1, 3+1 consensus) is described for the clinical evaluation or image quality assessment. The review of the new detector was done by a single US board-certified radiologist, not multiple independent readers with adjudication.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, and what was the effect size of how much human readers improve with AI vs. without AI assistance.
- MRMC Study: No MRMC comparative effectiveness study is described where human readers' performance with and without AI assistance was evaluated. The AI features mentioned (Auto Cropping, Auto Thorax Collimation, Auto Long-Leg/Full-Spine collimation) appear to be automatic workflow enhancements rather than diagnostic AI intended to directly influence reader diagnostic accuracy.
- Effect Size: Not applicable, as no such study was conducted or reported.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done.
The document does not describe any standalone performance metrics for the AI-based features (Auto Cropping, Auto Collimation). These features seem to be integrated into the device's operation to assist the user, rather than providing a diagnostic output that would typically be evaluated in a standalone study. The performance of these AI functions would likely be assessed as part of the overall "usability" and "performance" checks.
7. The Type of Ground Truth Used
- For the overall system and the new detector, the "ground truth" seems to be expert opinion/consensus (qualitative clinical assessment) on the system's performance, usability, and the adequacy of image quality for radiographic use. There is no mention of pathology, outcomes data, or other definitive "true" states related to findings on the images.
8. The Sample Size for the Training Set
The document does not provide any information about a training set size for the AI-based auto-cropping and auto-collimation features. This is typical for 510(k) submissions of X-ray systems where such AI features are considered ancillary workflow tools rather than primary diagnostic aids.
9. How the Ground Truth for the Training Set was Established
Since no training set information is provided, there is no information on how ground truth was established for any training data.
In summary: The 510(k) submission for the YSIO X.pree focuses on demonstrating substantial equivalence for an updated X-ray system. The "AI-based" features appear to be workflow automation tools that were assessed as part of general system usability and image quality in a "Customer Use Test" and a limited clinical image quality evaluation for the new detector. It does not contain the rigorous quantitative performance evaluation data for AI software as might be seen for a diagnostic AI algorithm that requires a detailed clinical study for clearance.
The Siemens PET/CT systems are combined X-Ray Computed Tomography (CT) and Positron Emission Tomography (PET) scanners that provide registration and fusion of high resolution physiologic and anatomic information.
The CT component produces cross-sectional images of the body by computer reconstruction of X-Ray transmission data from either the same axial plane taken at different angles or spiral planes taken at different angles. The PET subsystem images and measures the distribution of PET radiopharmaceuticals in humans for the purpose of determining various metabolic (molecular) and physiologic functions within the human body and utilizes the CT for fast attenuation correction maps for PET studies and precise anatomical reference for the fused PET and CT images.
The system maintains independent functionality of the CT and PET devices, allowing for single modality CT and/or PET diagnostic imaging.
These systems are intended to be utilized by appropriately trained health care professionals to aid in detecting, localizing, diagnosing, staging and restaging of lesions, tumors, disease and organ function for the evaluation of diseases and disorders such as, but not limited to, cardiovascular disease, neurological disorders and cancer. The images produced by the system can also be used by the physician to aid in radiotherapy treatment planning and interventional radiology procedures.
This system can be used for low dose lung cancer screening in high risk populations.*
*As defined by professional medical societies. Please refer to clinical literature, including the results of the National Lung Screening Trial (N Engl J Med 2011; 365:395-409) and subsequent literature, for further information.
Biograph Trinion PET/CT systems are combined multi-slice X-Ray Computed Tomography and Positron Emission Tomography scanners. This system is designed for whole body oncology, neurology and cardiology examinations. Biograph Trinion PET/CT systems provide registration and fusion of high-resolution metabolic and anatomic information from the two major components of each system (PET and CT). Additional components of the system include a patient handling system and acquisition and processing workstations with associated software.
Biograph Trinion VK20 software is a command-based program used for patient management, data management, scan control, image reconstruction and image archival and evaluation. All images conform to DICOM imaging format requirements.
Biograph PET/CT systems, which are the subject of this application, are substantially equivalent to the commercially available Biograph Trinion VK10 family of PET/CT systems (K233677). Differences compared to the commercially available Biograph Trinion systems include:
- The commercially available SOMATOM go.All and go.Top systems with VB10 (K233650) software have been incorporated into the Biograph Trinion VK20 systems, including commercially available CT features.
- Additional PET axial field of view (FoV) systems allowing for more scalability.
- Additional patient communication and comfort features.
- PET respiratory gating with an external gating device has been implemented.
The Biograph Trinion models may also use the names Biograph Mission, Biograph Wonder, Biograph Ambition and Biograph Devotion for marketing purposes.
The provided FDA 510(k) clearance letter for the Biograph Trinion PET/CT system primarily focuses on demonstrating substantial equivalence to a predicate device and adherence to recognized performance standards. It indicates that "all performance testing met the predetermined acceptance values," but does not provide specific numerical acceptance criteria or reported device performance for an AI/algorithm component, nor does it detail a study proving the device meets AI-specific acceptance criteria. The context suggests the "performance testing" refers to general PET/CT system performance, not AI-driven diagnostic assistance.
Therefore, many of the requested details, particularly those related to a standalone AI algorithm's performance, human-in-the-loop studies, dataset characteristics (sample size, provenance), and ground truth establishment methods for an AI component, are not available in the provided text.
Based on the information available in the document, here's what can be extracted and inferred, with explicit notes where information is missing or not applicable in the context of an AI study.
Acceptance Criteria and Reported Device Performance
The document states that "all performance testing met the predetermined acceptance values." However, it does not specify what those acceptance values were or the precise reported performance metrics beyond this general statement. The tests conducted were primarily related to the physical performance of the PET/CT system as per NEMA NU 2:2024 and NEMA XR 25:2019 standards, not specifically an AI component for diagnostic aid.
Table of Acceptance Criteria and Reported Device Performance (Based on available information for the PET/CT system):
| Performance Metric (PET/CT system) | Acceptance Criteria (Stated as "predetermined acceptance values") | Reported Device Performance |
|---|---|---|
| Spatial Resolution | Met acceptance values | Met acceptance values |
| Scatter Fraction, Count Losses, and Randoms | Met acceptance values | Met acceptance values |
| Sensitivity | Met acceptance values | Met acceptance values |
| Accuracy: Corrections for Count Losses and Randoms | Met acceptance values | Met acceptance values |
| Image Quality, Accuracy of Corrections | Met acceptance values | Met acceptance values |
| Time-of-Flight Resolution | Met acceptance values | Met acceptance values |
| PET-CT Coregistration Accuracy | Met acceptance values | Met acceptance values |
| No AI-specific performance metrics detailed | Not specified in document | Not specified in document |
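For reference, two of the NEMA NU 2 quantities behind the table rows above are defined from the true (T), scattered (S), and random (R) coincidence rates; these are the standard's definitions, not values from this submission:

```latex
\mathrm{SF} = \frac{S}{S + T},
\qquad
\mathrm{NECR} = \frac{T^{2}}{T + S + R}
```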
Study Details (Focusing on AI-related aspects where applicable, and general system testing otherwise)
- Sample size used for the test set and the data provenance:
- For System Performance (NEMA tests): The document does not specify a "test set" in terms of patient data. NEMA tests typically involve phantom studies rather than patient data. Thus, sample size and data provenance are not applicable in the traditional sense for these tests.
- For AI Component: The document does not provide any information on a test set (patient cases, images) or data provenance (e.g., country of origin, retrospective/prospective) for validating an AI component for diagnostic assistance. The descriptions are entirely about the physical PET/CT system.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- For System Performance: Ground truth for NEMA tests is established by physical measurements and calibration standards, not human experts.
- For AI Component: This information is not provided in the document as there's no mention of an AI-driven diagnostic aid requiring expert-established ground truth.
- Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- For System Performance: Not applicable.
- For AI Component: This information is not provided in the document.
- If a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- The document does not indicate that an MRMC study was performed for an AI component. The focus is on the substantial equivalence of the PET/CT hardware and software to a predicate device, and compliance with performance standards for the imaging system itself.
- If a standalone (i.e., algorithm-only, without human-in-the-loop) evaluation was done:
- The document does not detail any standalone algorithm performance testing. The performance testing described is for the integrated PET/CT system's physical and functional characteristics.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- For System Performance: Ground truth for NEMA tests involves physical phantoms and established measurement protocols.
- For AI Component: This information is not provided in the document.
- The sample size for the training set:
- This information is not provided in the document, as there is no mention of an AI model that undergoes a separate training process requiring a distinct training set.
- How the ground truth for the training set was established:
- This information is not provided in the document, as there is no mention of an AI model's training set.
Summary of Device and Performance Information from Document:
The provided 510(k) clearance letter for the Biograph Trinion is for a PET/CT imaging system, not an AI-based diagnostic software. The "performance testing" described in the document pertains to the physical and functional aspects of the PET/CT scanner (e.g., spatial resolution, sensitivity, image quality) as measured against industry standards (NEMA NU 2:2024). The clearance is based on proving substantial equivalence to a predicate device and adherence to these well-established performance standards for imaging hardware.
Therefore, the detailed questions regarding AI acceptance criteria, AI test set characteristics, human-in-the-loop studies, and AI ground truth establishment are not addressed in this document because the device being cleared is the imaging system itself, not an AI software component for image analysis or diagnostic support. The document implies that the system can be used for certain clinical applications (like lung cancer screening), but it doesn't describe an automated AI system within the device that requires separate clinical validation with reader studies or large patient datasets.
The Cios Spin is a mobile X-ray system designed to provide X-ray imaging of the anatomical structures of patients during clinical applications. Clinical applications may include but are not limited to interventional fluoroscopic, gastro-intestinal, endoscopic, urologic, pain management, orthopedic, neurologic, vascular, cardiac, critical care, and emergency room procedures. The patient population may include pediatric patients.
The Cios Spin (VA31A) mobile fluoroscopic C-arm X-ray System is designed for the surgical environment. The Cios Spin provides comprehensive image acquisition modes to support orthopedic and vascular procedures. The system consists of two major components:
a. The C-arm, with the X-ray source on one side and the flat panel detector on the opposite side. The C-arm can be angulated in both planes and can be lifted vertically, shifted to the side, and moved forward/backward by an operator.
b. The image display station, a moveable trolley carrying the image processing and storage system, image display, and documentation. Both units are connected to each other with a cable.
The following modifications were made to the predicate device, the Cios Spin Mobile X-ray System, cleared under Premarket Notification K210054 on February 5, 2021. Siemens Medical Solutions USA, Inc. submits this Traditional 510(k) to request clearance for the subject device, Cios Spin (VA31A). The following modifications are incorporated in the predicate device to create the subject device:
- Software updated from VA30 to VA31A to support the below software features
A. Updated Retina 3D for optional enlarged 3D Volume of 25cm x 25cm x 16cm
B. Introduction of NaviLink 3D Lite
C. Universal Navigation Interface (UNI)
D. Updated InstantLink with Extended NXS Interface
- Updated collimator
- Updated FLC imaging system PC with new PC hardware
- Updated AppHost PC with a high-performance graphics card
- New Eaton UPS 5P 850i G2 as successor of the UPS 5P 850i due to obsolescence
Based on the provided FDA 510(k) clearance letter for the Siemens Cios Spin (VA31A), here's an analysis of the acceptance criteria and the study proving the device meets them:
Important Note: The provided document is a 510(k) summary, which often summarizes testing without providing granular details on study design, sample sizes, and ground truth establishment to the same extent as a full clinical study report. Therefore, some information requested (e.g., specific number of experts for ground truth, adjudication methods) may not be explicitly stated in this summary. The focus of this 510(k) is primarily on demonstrating substantial equivalence to a predicate device, especially for software and hardware modifications, rather than a de novo effectiveness study.
Acceptance Criteria and Reported Device Performance
The 510(k) summary primarily focuses on demonstrating that the modifications to the Cios Spin (VA31A) do not introduce new safety or effectiveness concerns compared to its predicate device (Cios Spin VA30) and a reference device (CIARTIC Move VB10A) that incorporates some of the new features. The acceptance criteria are implicitly tied to meeting various industry standards and demonstrating functionality and safety through non-clinical performance testing.
Table 1: Acceptance Criteria and Reported Device Performance
| Acceptance Criteria Category | Specific Criteria (Implicit/Explicit from Text) | Reported Device Performance / Evidence |
|---|---|---|
| Software Functionality | Software specifications met acceptance criteria as stated in test plans. | "All test results met all acceptance criteria." |
| Enlarged Volume Field of View (Retina 3D) | Functionality and performance of new 25cm x 25cm x 16cm 3D volume. | "A non-clinical test 'Enlarged Volume Field of View' testing were conducted." The feature was cleared in the CIARTIC Move (K233748), implying its performance was previously validated. |
| NaviLink 3D Lite Functions | Functionality and performance of the new navigation interface. | Part of software updates VA31A; "All test results met all acceptance criteria." |
| Universal Navigation Interface (UNI) | Functionality and performance of UNI. | Part of software updates VA31A; "All test results met all acceptance criteria." UNI was present in the reference device CIARTIC Move (K233748). |
| InstantLink with Extended NXS Interface | Functionality and performance of updated interface. | Part of software updates VA31A; "All test results met all acceptance criteria." |
| Electrical Safety | Compliance with IEC 60601-1, IEC 60601-2-43, IEC 60601-2-54. | "The system complies with the IEC 60601-1, IEC 60601-2-43, and IEC 60601-2-54 standards for safety." |
| Electromagnetic Compatibility (EMC) | Compliance with IEC 60601-1-2. | "The system complies with... the IEC 60601-1-2 standard for EMC." |
| Human Factors/Usability | Device is safe and effective for intended users, uses, and environments. Human factors addressed. | "The Human Factor Usability Validation showed that Human factors are addressed in the system test according to the operator's manual and in clinical use tests with customer reports and feedback forms." |
| Risk Mitigation | Identified hazards are controlled; risk analysis completed. | "The Risk analysis was completed, and risk control was implemented to mitigate identified hazards." |
| Overall Safety & Effectiveness | No new issues of safety or effectiveness introduced by modifications. | "Results of all conducted testing and clinical assessments were found acceptable and do not raise any new issues of safety or effectiveness." |
| Compliance with Standards/Regulations | Adherence to various 21 CFR regulations and standards (e.g., ISO 14971, IEC 62304). | Extensive list of complied standards, including 21 CFR sections 1020.30, 1020.32, and specific IEC/ISO standards mentioned in Section 9. |
Study Details Proving Device Meets Acceptance Criteria
The study described is primarily a non-clinical performance testing and software verification and validation effort rather than a traditional clinical trial.
- Sample sizes used for the test set and data provenance:
- Test Set Sample Size: Not explicitly stated as a "sample size" in the context of patients or images for performance evaluation. The testing described is "Unit, Subsystem, and System Integration testing" and "software verification and regression testing." This type of testing uses a diverse set of test cases designed to cover functionality, performance, and safety requirements. For the "Enlarged Volume Field of View," it's a non-clinical test, likely using phantoms or simulated data.
- Data Provenance: Not applicable in terms of patient data provenance for the non-clinical and software testing described. This is bench testing and software validation. Customer reports and feedback forms are mentioned for human factors, but specific details on their origin (country, etc.) are not provided. The manufacturing site is Kemnath, Germany.
- Number of experts used to establish the ground truth for the test set and qualifications of those experts:
- Not explicitly stated. For non-clinical performance and software testing, "ground truth" is typically established by engineering specifications, known correct outputs for given inputs, and compliance with industry standards. If clinical use tests involved subjective evaluation, the number and qualifications of experts are not detailed, but they are implied to be "healthcare professionals" (operators are "adequately trained").
- Adjudication method for the test set:
- Not applicable/Not explicitly stated. For software and bench testing, adjudication usually refers to a process of resolving discrepancies in ratings or measurements. Given the nature of this submission (software/hardware modifications and non-clinical testing), formal clinical adjudication methods (like 2+1, 3+1 for image reviews) are not described as part of the primary evidence. Acceptance is based on test cases meeting predefined engineering requirements.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- No. An MRMC study was not conducted. This 510(k) is for a mobile X-ray system with software and hardware updates, not an AI-assisted diagnostic device where evaluating human reader performance with and without AI would be relevant. The "AI" mentioned (Retina 3D, NaviLink 3D) refers to advanced imaging/navigation features, not machine learning for diagnostic interpretation.
- If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:
- Yes, implicitly. The "non-clinical test 'Enlarged Volume Field of View' testing" and other "Unit, Subsystem, and System Integration testing" for functionality and performance are essentially standalone tests of the device's components and software without immediate human interpretation in a diagnostic loop. The acceptance criteria for these tests refer to technical performance endpoints, not diagnostic accuracy.
- The type of ground truth used:
- Engineering Specifications and Standard Compliance: For the performance and safety testing, the "ground truth" is adherence to predefined engineering requirements (e.g., image dimensions, system response times, electrical safety limits) and compliance with national and international industry standards (e.g., IEC 60601 series, ISO 14971, NEMA PS 3.1).
- For the Human Factors Usability Validation, "customer reports and feedback forms" serve as a form of "ground truth" regarding user experience and usability.
- The sample size for the training set:
- Not applicable. This submission describes modifications to an X-ray imaging system, not the development of a machine learning algorithm that requires a separate training set. The existing software (VA30) was updated to VA31A. The "training" for the software itself would have occurred during its initial development, not for this specific 510(k) submission.
- How the ground truth for the training set was established:
- Not applicable. As above, this information is not relevant to this specific 510(k) submission, as it focuses on modifications to an existing device rather than the development of a new AI/ML algorithm requiring a training set and its associated ground truth.
This computed tomography system is intended to generate and process cross-sectional images of patients by computer reconstruction of x-ray transmission data.
The images delivered by the system can be used by a trained staff as an aid in diagnosis, treatment and radiation therapy planning as well as for diagnostic and therapeutic interventions.
This CT system can be used for low dose lung cancer screening in high risk populations*.
*As defined by professional medical societies. Please refer to clinical literature, including the results of the National Lung Screening Trial (N Engl J Med 2011; 365:395-409) and subsequent literature, for further information.
Siemens intends to introduce the updated software version syngo CT VB20 (update) for the following NAEOTOM Alpha class CT systems:
Dual Source NAEOTOM CT scanner systems:
- NAEOTOM Alpha (trade name ex-factory CT systems: NAEOTOM Alpha.Peak; trade name installed base CT systems with SW upgrade only: NAEOTOM Alpha)
For simplicity, the product name of NAEOTOM Alpha will be used throughout this submission instead of the trade name NAEOTOM Alpha.Peak.
- NAEOTOM Alpha.Pro
Single Source NAEOTOM CT scanner system:
- NAEOTOM Alpha.Prime
The subject devices NAEOTOM Alpha (trade name ex-factory CT systems: NAEOTOM Alpha.Peak) and NAEOTOM Alpha.Pro with software version SOMARIS/10 syngo CT VB20 (update) are Computed Tomography X-ray systems which feature two continuously rotating tube-detector systems, denominated as A- and B-systems respectively (dual source NAEOTOM CT scanner system).
The subject device NAEOTOM Alpha.Prime with software version SOMARIS/10 syngo CT VB20 (update) is a Computed Tomography X-ray system which features one continuously rotating tube-detector system, denominated as the A-system (single source NAEOTOM CT scanner system).
The detectors' function is based on photon-counting technology.
In this submission, the above-mentioned CT scanner systems are jointly referred to as the subject devices, or the "NAEOTOM Alpha class CT scanner systems".
The NAEOTOM Alpha class CT scanner systems with SOMARIS/10 syngo CT VB20 (update) produce CT images in DICOM format, which can be used by trained staff for post-processing applications commercially distributed by Siemens and other vendors. The CT images can be used by trained staff as an aid in diagnosis, treatment and radiation therapy planning as well as for diagnostic and therapeutic interventions. The radiation therapy planning support includes, but is not limited to, Brachytherapy, Particle Therapy including Proton Therapy, External Beam Radiation Therapy, and Surgery. The computer system delivered with the CT scanner is able to run optional post-processing applications.
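Because the output is standard DICOM, downstream post-processing tools typically begin by reading a slice and rescaling the stored pixel data to Hounsfield units via the standard RescaleSlope and RescaleIntercept attributes. A minimal sketch using the open-source pydicom library (not part of this submission; the file path is illustrative):

```python
# Minimal sketch: read one CT DICOM slice and convert its pixel data to
# Hounsfield units (HU) using the standard rescale attributes.
import numpy as np
import pydicom

def load_slice_hu(path: str) -> np.ndarray:
    """Return the slice's pixel data rescaled to HU."""
    ds = pydicom.dcmread(path)
    slope = float(getattr(ds, "RescaleSlope", 1.0))
    intercept = float(getattr(ds, "RescaleIntercept", 0.0))
    return ds.pixel_array.astype(np.float64) * slope + intercept

# Illustrative usage:
# hu = load_slice_hu("ct_slice_0001.dcm")
# print(hu.min(), hu.max())
```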
Only trained and qualified users certified in accordance with country-specific regulations (for example, physicians, radiologists, or technologists) are authorized to operate the system. The user must have the necessary U.S. qualifications in order to diagnose or treat the patient with the use of the images delivered by the system.
The platform software for the NAEOTOM Alpha class CT scanner systems is syngo CT VB20 (update) (SOMARIS/10 syngo CT VB20 (update)). It is a command-based program used for patient management, data management, X-ray scan control, image reconstruction, and image archive/evaluation. The software platform provides plugin software interfaces that allow for the use of specific commercially available post-processing software algorithms in an unmodified form from the cleared stand-alone post-processing version.
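Conceptually, such a plugin interface lets the host application invoke post-processing algorithms through a stable contract while the algorithms themselves remain unmodified. The following Python sketch illustrates the general pattern only; the class and method names are hypothetical and do not describe the actual syngo interfaces.

```python
# Conceptual host/plugin pattern; all names are hypothetical and do not
# describe the actual syngo plugin interfaces.
from abc import ABC, abstractmethod

class PostProcessingPlugin(ABC):
    """Contract each post-processing algorithm implements, unmodified."""

    @abstractmethod
    def process(self, image: list) -> list:
        """Return a processed version of the input image."""

class IdentityPlugin(PostProcessingPlugin):
    """Trivial example plugin that returns the image unchanged."""

    def process(self, image: list) -> list:
        return image

class HostApplication:
    """Host that runs registered plugins through the common interface."""

    def __init__(self) -> None:
        self._plugins = []

    def register(self, plugin: PostProcessingPlugin) -> None:
        self._plugins.append(plugin)

    def run(self, image: list) -> list:
        for plugin in self._plugins:
            image = plugin.process(image)
        return image
```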
Software version syngo CT VB20 (update) (SOMARIS/10 syngo CT VB20 (update)) shall support additional software features compared to the software version of the predicate devices NAEOTOM Alpha class CT systems with syngo CT VB20 (SOMARIS/10 syngo CT VB20) cleared in K243523.
Software version SOMARIS/10 syngo CT VB20 (update) will be offered ex-factory and as an optional upgrade for existing NAEOTOM Alpha class systems.
The bundle approach is feasible for this submission since the subject devices share similar technological characteristics, the same software operating platform, and the same supported software features. All subject devices will support previously cleared software and hardware features in addition to the applicable modifications as described within this submission. The intended use remains unchanged compared to the predicate devices.
The provided document describes the acceptance criteria and a study that proves the device meets those criteria for the NAEOTOM Alpha CT Scanner Systems. However, the document primarily focuses on demonstrating substantial equivalence to a predicate device and safety and effectiveness based on non-clinical testing and adherence to standards, rather than detailing a specific clinical performance study with defined acceptance criteria for a diagnostic aid.
Here's a breakdown of the requested information based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not provide a specific table of acceptance criteria with corresponding performance metrics in the way one would typically find for a diagnostic AI device (e.g., sensitivity, specificity, AUC). Instead, it states that:
- Acceptance Criteria for Software: "The test specification and acceptance criteria are related to the corresponding requirements." and "The test results show that all of the software specifications have met the acceptance criteria."
- Acceptance Criteria for Features: "Test results show that the subject devices...is comparable to the predicate devices in terms of technological characteristics and safety and effectiveness and therefore are substantially equivalent to the predicate devices."
- Performance Claim: "The conclusions drawn from the non-clinical and clinical tests demonstrate that the subject devices are as safe, as effective, and perform as well as or better than the predicate devices."
The closest the document comes to defining and reporting on "performance criteria" for a specific feature, beyond basic safety and technical functionality, is for the HD FoV 5.0 and ZeeFree RT algorithms (a minimal sketch of the water-phantom comparison appears after the table below).
| Acceptance Criteria (Implied) | Reported Device Performance |
|---|---|
| HD FoV 5.0 algorithm: As safe and effective as HD FoV 4.0. | HD FoV 5.0 algorithm: Bench test results comparing it to HD FoV 4.0 based on physical and anthropomorphic phantoms. Performance was also evaluated by board-approved radio-oncologists and medical physicists via a retrospective blinded rater study. No specific metrics (e.g., image quality scores, diagnostic accuracy) are provided in this summary. |
| ZeeFree RT reconstruction: | ZeeFree RT reconstruction: |
| - No relevant errors in CT values and noise in homogeneous water phantom. | - Bench test results show it "does not affect CT values and noise levels in a homogenous water phantom outside of stack-transition areas compared to the non-corrected standard reconstruction." |
| - No relevant errors in CT values in phantoms with tissue-equivalent inserts (even with metals and iMAR). | - Bench test results show it "introduces no relevant errors in terms of CT values measured in a phantom with tissue-equivalent inserts, even in the presence of metals and in combination with the iMAR algorithm." |
| - No relevant geometrical distortions in a static torso phantom. | - Bench test results show it "introduces no relevant geometrical distortions in a static torso phantom." |
| - No relevant deteriorations of position or shape in a dynamic thorax phantom (spherical shape with various breathing motions). | - Bench test results show it "introduces no relevant deteriorations of the position or shape of a dynamic thorax phantom when moving a spherical shape according to regular, irregular, and patient breathing motion." Also states it "can be successfully applied to phantom data if derived from a suitable motion phantom demonstrating its correct technical function on the tested device." |
| - Successfully applied to 4D respiratory-gated images (Direct i4D). | - Bench test results show it "can successfully be applied to 4D respiratory-gated sequence images (Direct i4D)." |
| - Enables optional reconstruction of stack artifact-corrected images which reduce misalignment artifacts where present in standard images. | - Bench test results show it "enables the optional reconstruction of stack artefact corrected images, which reduce the strength of misalignment artefacts, if such stack alignment artefacts are identified in non-corrected standard images." |
| - Does not introduce relevant new artifacts not present in non-corrected standard reconstruction. | - Bench test results show it "does not introduce relevant new artefacts, which were previously not present in the non-corrected standard reconstruction." Also states it "does not introduce new artifacts, which were previously not present in the non-corrected standard reconstruction, even in presence of metals." |
| - Independent from physical detector width of acquired data. | - Bench test results show it "is independent from the physical detector width of the acquired data." |
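As a concrete illustration of the water-phantom criterion above, a bench comparison of mean CT value and noise between a corrected and a non-corrected reconstruction can be sketched as follows. This is a hypothetical sketch only; the ROI geometry and the tolerances (`hu_tol`, `noise_tol`) are assumptions, not values from the submission.

```python
# Hypothetical sketch of the water-phantom bench comparison: compare mean CT
# value and noise (standard deviation) inside a circular ROI between a
# corrected and a non-corrected reconstruction. Tolerances are assumptions.
import numpy as np

def roi_stats(image_hu: np.ndarray, center: tuple, radius: int):
    """Mean and standard deviation of HU values inside a circular ROI."""
    yy, xx = np.ogrid[:image_hu.shape[0], :image_hu.shape[1]]
    mask = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    roi = image_hu[mask]
    return roi.mean(), roi.std()

def comparable(standard_hu: np.ndarray, corrected_hu: np.ndarray,
               center=(256, 256), radius=50,
               hu_tol=1.0, noise_tol=0.5) -> bool:
    """True if ROI mean and noise differ by less than the assumed tolerances."""
    mean_s, std_s = roi_stats(standard_hu, center, radius)
    mean_c, std_c = roi_stats(corrected_hu, center, radius)
    return abs(mean_c - mean_s) < hu_tol and abs(std_c - std_s) < noise_tol
```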
2. Sample Size Used for the Test Set and Data Provenance
The document mentions "physical and anthropomorphic phantoms" for HD FoV 5.0, and a "homogeneous water phantom," a "phantom with tissue-equivalent inserts," and a "dynamic thorax phantom" for ZeeFree RT. It also refers to "retrospective blinded rater studies of respiratory 4D CT examinations performed at two institutions" for ZeeFree RT, but does not specify the sample size (number of cases/patients) or the country of origin of these real-world examination datasets. The ZeeFree RT rater study is explicitly stated to be retrospective; the provenance of the HD FoV 5.0 rater study is not stated separately, though the phrase "retrospective blinded rater study" implies it was retrospective as well.
3. Number of Experts and Qualifications for Ground Truth
For the HD FoV 5.0 and ZeeFree RT rater studies, the experts were "board-approved radio-oncologists and medical physicists." The number of experts is not specified, nor is their specific years of experience.
4. Adjudication Method for the Test Set
The document explicitly states "retrospective blinded rater study" for HD FoV 5.0 and ZeeFree RT. However, it does not specify the adjudication method (e.g., 2+1, 3+1, none) if there were multiple raters and disagreements.
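For context, in a "2+1" scheme two primary raters read each case and a third rater adjudicates only the disagreements. The sketch below illustrates that rule in general terms; it does not describe this submission's study, which leaves the method unspecified.

```python
# Illustrative "2+1" adjudication rule: two primary raters score each case;
# a third rater decides only when the first two disagree. This is a generic
# illustration, not a description of the study in this submission.
def adjudicate_2_plus_1(rating_a, rating_b, rating_adjudicator):
    """Return the adjudicated rating for a single case."""
    if rating_a == rating_b:
        return rating_a
    return rating_adjudicator

# Example: the primary raters disagree, so the adjudicator decides.
assert adjudicate_2_plus_1("positive", "negative", "positive") == "positive"
```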
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
The document states that for HD FoV 5.0 and ZeeFree RT, "the performance of the algorithm was evaluated by board-approved radio-oncologists and medical physicists by means of retrospective blinded rater study." This indicates a reader study, which is often a component of an MRMC study.
However, the study described does not appear to compare human readers with AI assistance vs. without AI assistance. Instead, for HD FoV 5.0, it compares the new algorithm's results to its predecessor, HD FoV 4.0. For ZeeFree RT, it compares the reconstruction to "Standard reconstruction" and assesses whether it introduces errors or new artifacts. It's an evaluation of the algorithm's output, not a direct measure of human reader improvement with AI assistance. Therefore, no effect size for human reader improvement with AI vs. without AI assistance is reported, because this specific type of comparative effectiveness study was not described.
6. Standalone (Algorithm Only) Performance Study
Yes, standalone (algorithm only) performance was conducted. The bench testing described for both HD FoV 5.0 and ZeeFree RT involves detailed evaluations of the algorithms' outputs using phantoms and comparing them to established standards or previous versions. For example, for ZeeFree RT, the bench test objectives include demonstrating that it "introduces no relevant errors in terms of CT values and noise levels measured in a homogeneous water phantom" and "does not introduce relevant new artefacts." This is an assessment of the algorithm's direct output.
7. Type of Ground Truth Used
The ground truth used primarily appears to be:
- Phantom-based measurements: For HD FoV 5.0 (physical and anthropomorphic phantoms) and ZeeFree RT (homogeneous water phantom, tissue-equivalent inserts, static torso phantom, dynamic thorax phantom). These phantoms have known properties, which serve as ground truth for evaluating image quality metrics (a sketch of such a nominal-value comparison follows this list).
- Expert Consensus/Interpretation: For HD FoV 5.0 and ZeeFree RT, it involved "board-approved radio-oncologists and medical physicists" in "retrospective blinded rater studies." This suggests the experts' interpretations (potentially comparing image features or diagnostic quality) formed a part of the ground truth or served as the primary evaluation method. The text doesn't specify if there was a pre-established "true" diagnosis or condition for these clinical cases, or if the experts were rating image quality or agreement with a reference standard.
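For the phantom-based case, "known properties" means each tissue-equivalent insert has a nominal CT value against which measured means can be compared. A hypothetical sketch of that comparison follows; the nominal values and tolerance are illustrative assumptions, not values from the submission.

```python
# Hypothetical sketch of phantom-based ground truth: compare the measured
# mean HU of each tissue-equivalent insert against its nominal (known) value.
# Nominal values and the tolerance below are illustrative assumptions.
NOMINAL_HU = {"water": 0.0, "lung": -800.0, "bone": 1000.0}
HU_TOLERANCE = 10.0  # assumed acceptance tolerance in HU

def insert_deviations(measured_means: dict) -> dict:
    """Deviation from the nominal HU value for each named insert."""
    return {name: measured_means[name] - nominal
            for name, nominal in NOMINAL_HU.items()}

def all_within_tolerance(measured_means: dict) -> bool:
    return all(abs(dev) <= HU_TOLERANCE
               for dev in insert_deviations(measured_means).values())

# Illustrative usage with made-up measurements:
print(all_within_tolerance({"water": 0.4, "lung": -803.0, "bone": 996.0}))
```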
8. Sample Size for the Training Set
The document does not specify the sample size for the training set for any of the algorithms or software features. This document is a 510(k) summary, which generally focuses on justification for substantial equivalence rather than detailed algorithm development specifics.
9. How the Ground Truth for the Training Set Was Established
The document does not describe how the ground truth for the training set was established, as it does not provide information about the training set itself.