510(k) Data Aggregation
(45 days)
syngo.via MI Workflows; Scenium; syngo MBF
syngo.via molecular imaging (MI) workflows comprise medical diagnostic applications for viewing, manipulation, quantification, analysis and comparison of medical images from single or multiple imaging modalities with one or more time-points. These workflows support functional data, such as positron emission tomography (PET) or nuclear medicine (NM), as well as anatomical datasets, such as computed tomography (CT) or magnetic resonance (MR). syngo.via MI workflows can perform harmonization of SUV (PET) across different PET systems or different PET reconstruction methods.
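Since the workflows quantify PET uptake as SUV, a brief sketch of the standard body-weight SUV computation may help; the function name and the numeric values below are illustrative only and are not taken from the submission (decay correction and other corrections are omitted).

```python
def suv_bw(concentration_kbq_ml: float, injected_dose_mbq: float,
           body_weight_kg: float) -> float:
    """Body-weight standardized uptake value (SUVbw).

    SUVbw = tissue activity concentration / (injected dose / body weight).
    The kBq/mL vs. MBq/kg units cancel because 1 MBq/kg = 1 kBq/g, which is
    about 1 kBq/mL assuming a tissue density of ~1 g/mL.
    """
    return concentration_kbq_ml * body_weight_kg / injected_dose_mbq

# Illustrative: a 70 kg patient injected with 370 MBq whose tissue
# concentration equals the whole-body average yields an SUV of about 1.0.
print(suv_bw(5.3, 370.0, 70.0))  # ≈ 1.00
```

Harmonization across scanners or reconstructions then amounts to making these SUV values comparable, for example by filtering images to a common effective resolution before measurement.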
syngo.via MI workflows are intended to be utilized by appropriately trained health care professionals to aid in the management of diseases, including those associated with oncology, cardiology, neurology, and organ function. The images and results produced by the syngo.via MI workflows can also be used by the physician to aid in radiotherapy treatment planning.
syngo.via MI Workflows (including Scenium and syngo MBF applications) is a multi-modality post-processing software only medical device intended to aid in the management of diseases, including those associated with oncology, cardiology, neurology, and organ function. The syngo.via MI Workflows applications are part of a larger syngo.via client/server system which is intended to be installed on common IT hardware. The hardware itself is not seen as part of the syngo.via MI Workflows medical device.
The syngo.via MI Workflows software addresses the needs of the following typical users of the product:
- Reading Physician / Radiologist – Reading physicians are doctors who are trained in interpreting patient scans from PET, SPECT and other modality scanners. They are highly detail oriented and analyze the acquired images for abnormalities, enabling ordering physicians to accurately diagnose and treat scanned patients. Reading physicians serve as a liaison between the ordering physician and the technologists, working closely with both.
- Technologist – Nuclear medicine technologists operate nuclear medicine scanners such as PET and SPECT to produce images of specific areas and states of a patient's anatomy by administering radiopharmaceuticals to patients orally or via injection. In addition to administering the scan, the technologist must properly select the scan protocol, keep the patient calm and relaxed, monitor the patient's physical health during the protocol and evaluate the quality of the images. Technologists work very closely with physicians, providing them with quality-checked scan images.
The software has been designed to integrate the clinical workflow for the above users into a server-based system that is consistent in design and look with the base syngo.via platform and other syngo.via software applications. This ensures a similar look and feel for radiologists that may review multiple types of studies from imaging modalities other than Molecular Imaging, such as MR.
The syngo.via MI workflows software supports integration through DICOM transfers of positron emission tomography (PET) or nuclear medicine (NM) data, as well as anatomical datasets, such as computed tomography (CT) or magnetic resonance (MR).
Although data is automatically imported into the server based on predefined configurations through the hospital IT system, data can also be manually imported from external media, including CD, external mass storage devices, etc.
The Siemens syngo.via platform and the applications that reside on it, including syngo.via MI Workflows, are distributed via electronic medium. The Instructions for Use is also delivered via electronic medium.
syngo.via MI Workflows includes 2 workflows (syngo.MM Oncology and syngo.MI General) as well as the Scenium neurology software application and the syngo MBF cardiology software application which are launched from the OpenApps framework within the MI General workflow.
Here's a breakdown of the acceptance criteria and study details for the syngo.via MI Workflows, Scenium, and syngo MBF devices:
Acceptance Criteria and Reported Device Performance
For Lung and Lung Lobe Segmentation:
Acceptance Criteria Category | Specific Criteria | Reported Device Performance (Subject Device vs. Predicate) |
---|---|---|
New Organs | Average Dice coefficient per organ > 0.8 OR Average Symmetric Surface Distance (ASSD) per organ = predicate. | The average Dice coefficient for the 20 subjects was higher for each lobe in the subject device than in the predicate device, though the difference did not exceed +0.03 for any lobe. |
For PERCIST Liver Reference Region Placement (Binary Liver Mask, input to the algorithm):
Acceptance Criteria Category | Specific Criteria | Reported Device Performance |
---|---|---|
New/Existing Organs | Average Dice coefficient > 0.8 OR Average Symmetric Surface Distance (ASSD) |
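The two segmentation metrics named in the acceptance criteria can be stated concretely. A minimal pure-Python sketch on toy 2-D voxel and surface-point sets (real evaluations run on 3-D masks; the example data are made up):

```python
import math

def dice(a: set, b: set) -> float:
    """Dice coefficient between two voxel sets: 2|A∩B| / (|A| + |B|)."""
    return 2 * len(a & b) / (len(a) + len(b))

def assd(surface_a, surface_b) -> float:
    """Average Symmetric Surface Distance: the mean of the average nearest
    distance from A to B and the average nearest distance from B to A."""
    def avg_min(src, dst):
        return sum(min(math.dist(p, q) for q in dst) for p in src) / len(src)
    return (avg_min(surface_a, surface_b) + avg_min(surface_b, surface_a)) / 2

# Toy example: the algorithm mask misses one voxel of the ground truth.
gt   = {(0, 0), (0, 1), (1, 0), (1, 1)}
algo = {(0, 0), (0, 1), (1, 0)}
print(dice(gt, algo))                            # 6/7 ≈ 0.857, passing > 0.8
print(assd([(0, 0), (1, 0)], [(0, 0), (2, 0)]))  # 0.5
```

The "per organ > 0.8" criterion above is then simply a threshold on the first value, with ASSD as the alternative pass condition.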
(29 days)
syngo.via MI Workflows; Scenium; syngo MBF
syngo.via molecular imaging (MI) workflows comprise medical diagnostic applications for viewing, manipulation, quantification, analysis and comparison of medical images from single or multiple imaging modalities with one or more time-points. These workflows support functional data, such as positron emission tomography (PET) or nuclear medicine (NM), as well as anatomical datasets, such as computed tomography (CT) or magnetic resonance (MR). syngo.via MI workflows can perform harmonization of SUV (PET) across different PET systems or different PET reconstruction methods.
syngo.via MI workflows are intended to be utilized by appropriately trained health care professionals to aid in the management of diseases, including those associated with oncology, cardiology, neurology, and organ function. The images and results produced by the syngo.via MI workflows can also be used by the physician to aid in radiotherapy treatment planning.
syngo.via MI Workflows (including Scenium and syngo MBF applications) is a multi-modality post-processing software only medical device intended to aid in the management of diseases, including those associated with oncology, cardiology, neurology, and organ function. The syngo.via MI Workflows applications are part of a larger syngo.via client/server system which is intended to be installed on common IT hardware. The hardware itself is not seen as part of the syngo.via MI Workflows medical device.
The syngo.via MI Workflows software addresses the needs of the following typical users of the product:
- Reading Physician / Radiologist – Reading physicians are doctors who are trained in interpreting patient scans from PET, SPECT and other modality scanners. They are highly detail oriented and analyze the acquired images for abnormalities, enabling ordering physicians to accurately diagnose and treat scanned patients. Reading physicians serve as a liaison between the ordering physician and the technologists, working closely with both.
- Technologist – Nuclear medicine technologists operate nuclear medicine scanners such as PET and SPECT to produce images of specific areas and states of a patient's anatomy by administering radiopharmaceuticals to patients orally or via injection. In addition to administering the scan, the technologist must properly select the scan protocol, keep the patient calm and relaxed, monitor the patient's physical health during the protocol and evaluate the quality of the images. Technologists work very closely with physicians, providing them with quality-checked scan images.
The software has been designed to integrate the clinical workflow for the above users into a server-based system that is consistent in design and look with the base syngo.via platform and other syngo.via software applications. This ensures a similar look and feel for radiologists that may review multiple types of studies from imaging modalities other than Molecular Imaging, such as MR.
The syngo.via MI workflows software supports integration through DICOM transfers of positron emission tomography (PET) or nuclear medicine (NM) data, as well as anatomical datasets, such as computed tomography (CT) or magnetic resonance (MR).
Although data is automatically imported into the server based on predefined configurations through the hospital IT system, data can also be manually imported from external media, including CD, external mass storage devices, etc.
The Siemens syngo.via platform and the applications that reside on it, including syngo.via MI Workflows, are distributed via electronic medium. The Instructions for Use is also delivered via electronic medium.
syngo.via MI Workflows includes 2 workflows (syngo.MM Oncology and syngo.MI General) as well as the Scenium neurology software application and the syngo MBF cardiology software application which are launched from the OpenApps framework within the MI General workflow.
Here's a breakdown of the acceptance criteria and study information for the Siemens syngo.via MI Workflows, including Scenium, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
Feature | Acceptance Criteria | Reported Device Performance |
---|---|---|
Scenium Centiloid Score Calibration (Florbetapir) | Strong agreement with standard method | R² = 0.97 |
Scenium Centiloid Score Calibration (Florbetaben) | Strong agreement with standard method | R² = 0.98 |
Scenium Centiloid Score Calibration (Flutemetamol) | Strong agreement with standard method | R² = 0.95 |
Scenium Centiloid Score Validation (Amyvid™) vs. ADNI CL | Strong agreement with ADNI CL values | SceniumCL = 1.044 × ADNI CL – 0.712; R² = 0.97 |
Scenium Centiloid Score Validation (Neuraceq™) vs. ADNI CL | Strong agreement with ADNI CL values | SceniumCL = 1.095 × ADNI CL – 7.241; R² = 0.98 |
Scenium Centiloid Score (Amyloid PET) Agreement with Visual Reading | Excellent agreement with visual-based classification | Area Under ROC Curve = 0.9872 (optimal CL cut-off value of 26, sensitivity 92.0%, specificity 96.3%) |
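The calibration and validation equations above are ordinary least-squares fits of Scenium Centiloid values against reference Centiloid values, with R² quantifying agreement. A self-contained sketch using synthetic, exactly linear data (the real fits used GAAIN/ADNI images; the data points below are fabricated to reproduce the reported Amyvid slope and intercept):

```python
def linear_fit(x, y):
    """Ordinary least-squares fit y ≈ slope*x + intercept.

    Returns (slope, intercept, r_squared)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - slope * xi - intercept) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1 - ss_res / ss_tot

# Synthetic reference CL values; Scenium CL values generated exactly from the
# reported Amyvid equation, so the fit recovers slope 1.044, intercept -0.712.
adni_cl = [0.0, 10.0, 25.0, 50.0, 75.0, 100.0]
scenium_cl = [1.044 * x - 0.712 for x in adni_cl]
slope, intercept, r2 = linear_fit(adni_cl, scenium_cl)
print(slope, intercept, r2)
```

With noise-free data the fit recovers the generating line and R² = 1; the reported R² values of 0.95 to 0.98 reflect residual scatter in the real image-derived measurements.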
2. Sample Size and Data Provenance for Test Set
- Calibration Data:
- Sample Size: Not explicitly stated, but "calibration of PET images and their corresponding SUVr and CL reference data were obtained from the GAAIN website." This implies a sufficiently large dataset for method calibration.
- Provenance: GAAIN website (Global Alzheimer's Association Interactive Network) – likely a multinational, retrospective dataset of clinical trial data.
- Validation Data (ADNI):
- Sample Size: Not explicitly stated, but "two independent datasets" were used for validation against ADNI CL values for florbetaben. ADNI (Alzheimer's Disease Neuroimaging Initiative) is a large, multi-center, prospective observational study primarily conducted in North America.
- Provenance: ADNI (Alzheimer's Disease Neuroimaging Initiative), likely primarily from the USA and Canada. Prospective given the nature of ADNI.
- Validation Data (Visual Reading Agreement):
- Sample Size: 162 patients (69 females, 93 males)
- Provenance: Retrospective review of patients with Mild Cognitive Impairment (MCI) who underwent A-PET. The specific country of origin is not mentioned.
3. Number of Experts and Qualifications for Ground Truth (Test Set)
- Calibration Data (GAAIN): The "standard method" for Centiloid scale calculation (Klunk et al.²) implies a consensus-derived or established expert-validated process. The number and specific qualifications of experts involved in the original GAAIN data curation are not detailed but are assumed to be highly qualified specialists in PET imaging and Alzheimer's research.
- Validation Data (ADNI): The ADNI Centiloid values are established through rigorous, expert-driven protocols. The text states "ADNI CL values," implying the ground truth was derived from the ADNI project's established methods, which involve numerous qualified experts in neurology, radiology, and nuclear medicine.
- Validation Data (Visual Reading Agreement): Patients were classified as "negative" by consensus. The number and specific qualifications of experts involved in this consensus are not explicitly stated, but it would typically involve experienced nuclear medicine physicians or radiologists specializing in neuroimaging.
4. Adjudication Method for the Test Set
- Calibration Data (GAAIN): Not explicitly stated, but the "standard method" for Centiloid score calculation suggests an established, perhaps algorithmic, adjudication or consensus process applied to the raw data.
- Validation Data (ADNI): Not explicitly stated, but the ADNI's established protocols for data analysis and Centiloid score determination would inherently involve robust, multi-expert consensus or adjudicated methods.
- Validation Data (Visual Reading Agreement): Patients were "classified as 'negative' by consensus." This indicates that multiple experts reviewed the images and reached an agreement on the classification. The specific method (e.g., 2+1, 3+1) is not provided.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- No MRMC comparative effectiveness study was done comparing human readers with AI assistance vs. without AI assistance. The study primarily focuses on validating the device's output (Centiloid scores) against established standards and visual interpretations, not on human workflow improvement with AI.
6. Standalone (Algorithm Only) Performance
- Yes, standalone performance was done for the Scenium Centiloid scoring feature. The studies directly compare Scenium's calculated Centiloid scores (algorithm output) against "standard method" values (from GAAIN) and ADNI CL values. The agreement with visual reading also assesses the algorithm's standalone diagnostic accuracy in classifying patients.
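Standalone classification performance at a Centiloid cut-off reduces to counting confusion-matrix cells. A sketch with made-up scores and labels (1 = amyloid-positive by visual read); only the cut-off of 26 comes from the text above, everything else is hypothetical:

```python
def sens_spec(scores, labels, cutoff):
    """Sensitivity and specificity of the rule 'positive if score >= cutoff'."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= cutoff and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s < cutoff and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < cutoff and y == 0)
    fp = sum(1 for s, y in zip(scores, labels) if s >= cutoff and y == 0)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical Centiloid scores and visual-read labels, for illustration only.
scores = [10, 30, 40, 5, 28, 15, 27]
labels = [0, 1, 1, 0, 1, 0, 0]
sensitivity, specificity = sens_spec(scores, labels, cutoff=26)
print(sensitivity, specificity)  # 1.0 0.75
```

Sweeping the cut-off over all observed scores and plotting sensitivity against (1 − specificity) yields the ROC curve whose area (0.9872) is reported above.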
7. Type of Ground Truth Used
- Expert Consensus / Established Methodology:
- For the calibration, the ground truth was the "standard method" of Centiloid estimation as prescribed in Klunk et al.², using reference data from GAAIN, which is an established, expert-driven consortium.
- For validation, it involved "ADNI CL values," which are considered an established ground truth in Alzheimer's research.
- For the visual reading agreement, the ground truth was "visual-based classification" determined by expert consensus.
8. Sample Size for the Training Set
- The text does not explicitly mention a "training set" for the Scenium Centiloid scoring algorithm. The process described is a "calibration" using data from GAAIN to derive transformation equations, and then "validation" on independent datasets. It's possible the calibration data acts as a form of training/development set.
- Calibration Data (GAAIN): Sample size not explicitly stated for the "calibration analysis" dataset.
9. How the Ground Truth for the Training Set was Established
- As a "training set" isn't explicitly defined, we refer to the calibration process. The ground truth for the calibration (or equations derivation) was established using "calibration of PET images and their corresponding SUVr and CL reference data obtained from the GAAIN website." This reference data itself would have been established through rigorous scientific methods and likely expert consensus within the GAAIN consortium, adhering to the "level-2 calibration analysis prescribed in Klunk et al.²" to ensure a standardized and reliable ground truth.
(146 days)
syngo.via MI Workflows; Scenium; syngo MBF
syngo.via molecular imaging (MI) workflows comprise medical diagnostic applications for viewing, manipulation, quantification, analysis and comparison of medical images from single or multiple imaging modalities with one or more time-points. These workflows support functional data, such as positron emission tomography (PET) or nuclear medicine (NM), as well as anatomical datasets, such as computed tomography (CT) or magnetic resonance (MR). syngo.via MI workflows can perform harmonization of SUV (PET) across different PET systems or different PET reconstruction methods.
syngo.via MI workflows are intended to be utilized by appropriately trained health care professionals to aid in the management of diseases, including those associated with oncology, cardiology, neurology, and organ function. The images and results produced by the syngo.via MI workflows can also be used by the physician to aid in radiotherapy treatment planning.
syngo.via MI Workflows (including Scenium and syngo MBF applications) is a multi-modality post-processing software only medical device intended to aid in the management of diseases, including those associated with oncology, cardiology, neurology, and organ function. The syngo.via MI Workflows applications are part of a larger syngo.via client/server system which is intended to be installed on common IT hardware. The hardware itself is not seen as part of the syngo.via MI Workflows medical device.
The syngo.via MI Workflows software addresses the needs of the following typical users of the product:
- Reading Physician / Radiologist – Reading physicians are doctors who are trained in interpreting patient scans from PET, SPECT and other modality scanners. They are highly detail oriented and analyze the acquired images for abnormalities, enabling ordering physicians to accurately diagnose and treat scanned patients. Reading physicians serve as a liaison between the ordering physician and the technologists, working closely with both.
- Technologist – Nuclear medicine technologists operate nuclear medicine scanners such as PET and SPECT to produce images of specific areas and states of a patient's anatomy by administering radiopharmaceuticals to patients orally or via injection. In addition to administering the scan, the technologist must properly select the scan protocol, keep the patient calm and relaxed, monitor the patient's physical health during the protocol and evaluate the quality of the images. Technologists work very closely with physicians, providing them with quality-checked scan images.
The software has been designed to integrate the clinical workflow for the above users into a server-based system that is consistent in design and look with the base syngo.via platform and other syngo.via software applications. This ensures a similar look and feel for radiologists that may review multiple types of studies from imaging modalities other than Molecular Imaging, such as MR.
The syngo.via MI workflows software supports integration through DICOM transfers of positron emission tomography (PET) or nuclear medicine (NM) data, as well as anatomical datasets, such as computed tomography (CT) or magnetic resonance (MR).
Although data is automatically imported into the server based on predefined configurations through the hospital IT system, data can also be manually imported from external media, including CD, external mass storage devices, etc.
The Siemens syngo.via platform and the applications that reside on it, including syngo.via MI Workflows, are distributed via electronic medium. The Instructions for Use is also delivered via electronic medium.
syngo.via MI Workflows includes 2 workflows (syngo.MM Oncology and syngo.MI General) as well as the Scenium neurology software application and the syngo MBF cardiology software application which are launched from the OpenApps framework within the MI General workflow.
Here's a breakdown of the acceptance criteria and the study information for the Syngo.via MI Workflows, Scenium, and Syngo MBF device, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The document primarily focuses on two areas of performance evaluation: Organ Segmentation and Tau Workflow Support. The acceptance criteria for Organ Segmentation are explicitly stated, while for Tau Workflow, the criteria are implied through correlation and agreement with existing methods.
Feature | Acceptance Criteria | Reported Device Performance |
---|---|---|
Organ Segmentation | All organs must meet criteria for either the average DICE coefficient or the average symmetric surface distance (ASSD: average surface distance between algorithm result and manual ground truth annotation). | All organs met criteria for either the average DICE coefficient or the ASSD. (Specific numerical values for DICE or ASSD are not provided in this summary). |
Tau Workflow Support (SUVRs) | Good correlations and agreement with an MR-based method and MR-based segmentations for SUVRs calculated on individual and composite Braak VOIs using the new pipeline and masks. | Comparisons showed good correlations and agreement between the two sets of values (new pipeline vs. MR-based method) on more than 700 flortaucipir images from ADNI. |
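The SUVR values compared in the tau evaluation are simple ratios of mean uptake: the mean over a target VOI divided by the mean over a reference region. A toy sketch with hypothetical voxel values (the choice of a cerebellar reference here is an assumption for illustration; the actual reference region follows the cited pipeline):

```python
def suvr(target_voxels, reference_voxels):
    """Standardized uptake value ratio: mean(target VOI) / mean(reference)."""
    mean = lambda vals: sum(vals) / len(vals)
    return mean(target_voxels) / mean(reference_voxels)

# Hypothetical uptake values for a Braak-stage VOI and a reference region.
braak_voi = [1.7, 1.9, 1.8, 1.8]
cerebellum = [1.2, 1.2, 1.2]
print(suvr(braak_voi, cerebellum))  # ≈ 1.5
```

The agreement evaluation then compares SUVRs computed this way from the new CT-based pipeline against SUVRs from the MR-based method on the same images.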
2. Sample Size Used for the Test Set and Data Provenance
- Organ Segmentation: Not explicitly stated. The algorithm used was "originally cleared within syngo.via RT Image Suite (K201444) and carried into the reference predicate device (syngo.via RT Image Suite, K220783)." This suggests the data provenance for this algorithm was tied to those previous clearances. The document implies the segmentation quality was assessed, but the specific test set size for this current submission is not provided.
- Tau Workflow Support: "more than 700 flortaucipir images from ADNI".
- Data Provenance (Tau Workflow): "ADNI" (Alzheimer's Disease Neuroimaging Initiative). This is a prospective, multi-center, North American study. The exact countries of origin of the individual images are not specified but ADNI is a U.S. led initiative with international participation.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
- Organ Segmentation: For manual ground truth annotation, the number of experts and their qualifications are not specified.
- Tau Workflow Support: The ground truth for the "MR-based method and MR-based segmentations" used for comparison is from existing methods mentioned in the references. The number and qualifications of experts involved in establishing this historical ground truth are not specified in this document.
4. Adjudication Method for the Test Set
- Organ Segmentation: An adjudication method is not explicitly stated. The process involved "comparing a manually annotated ground truth with the algorithm result." It's common for a single expert or a consensus of experts to establish manual ground truth, but the method for resolving discrepancies or reaching consensus is not detailed here.
- Tau Workflow Support: An adjudication method is not explicitly stated. The comparison was made between the device's calculated SUVRs and those from an "MR-based method and MR-based segmentations." This implies a comparison against a pre-established or validated method rather than a multi-reader adjudication specifically for this study.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- No MRMC comparative effectiveness study was done for this submission. The document explicitly states: "Clinical testing was not conducted for this submission." The evaluations focused on standalone performance and agreement with existing methods.
6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Performance
- Yes, standalone performance was done.
- Organ Segmentation: The segmentation algorithm's performance (DICE coefficient and ASSD) was assessed by comparing its output directly against manually annotated ground truth. This is a standalone evaluation.
- Tau Workflow Support: The "SUVRs calculated on individual and composite Braak VOIs using our pipeline and our masks" were compared to an "MR-based method." This directly assesses the algorithm's standalone quantification capabilities.
7. Type of Ground Truth Used
- Organ Segmentation: Expert consensus (manual annotation) is implied ("manually annotated ground truth").
- Tau Workflow Support: Reference method (MR-based method and MR-based segmentations) and potentially expert consensus that established those reference methods. The references provided suggest established research pipelines for flortaucipir processing and ADNI publications, which would typically involve expert interpretation and validation.
8. Sample Size for the Training Set
- Organ Segmentation: The document states the algorithm is the "same algorithm originally cleared within syngo.via RT Image Suite (K201444) and carried into the reference predicate device (syngo.via RT Image Suite, K220783)." The training set size for this re-used algorithm is not specified in this document, but would have been part of the original clearance.
- Tau Workflow Support: The training set size for the tau quantification workflow is not specified.
9. How the Ground Truth for the Training Set Was Established
- Organ Segmentation: The method for establishing ground truth for the training set of the deep-learning algorithm is not specified in this document. Given it's a deep-learning algorithm, it would typically involve expert-labeled data, but the details are not provided.
- Tau Workflow Support: The method for establishing ground truth for the training set (if applicable) for the tau quantification workflow is not specified. It mentions using the AAL atlas as a basis for defining Braak regions, which is a pre-existing anatomical atlas.
(30 days)
syngo.via MI Workflows, Scenium, syngo MBF
syngo.via molecular imaging (MI) workflows comprise medical diagnostic applications for viewing, manipulation, quantification, analysis and comparison of medical images from single or multiple imaging modalities with one or more time-points. These workflows support functional data, such as positron emission tomography (PET) or nuclear medicine (NM), as well as anatomical datasets, such as computed tomography (CT) or magnetic resonance (MR). syngo.via MI workflows can perform harmonization of SUV (PET) across different PET systems or different PET reconstruction methods.
syngo.via MI workflows are intended to be utilized by appropriately trained health care professionals to aid in the management of diseases, including those associated with oncology, neurology, and organ function. The images and results produced by the syngo.via MI workflows can also be used by the physician to aid in radiotherapy treatment planning.
syngo.via MI Workflows is a multi-modality post-processing software only medical device, which is intended to be installed on common IT hardware. This hardware must fulfill the defined requirements. The hardware itself is not seen as part of the medical device.
The Siemens syngo.via platform (K191040) and the applications that reside on it are distributed via electronic medium. The Instructions for Use is also delivered via electronic medium.
syngo.via molecular imaging (MI) workflows comprise medical diagnostic applications for viewing, manipulation, quantification, analysis and comparison of medical images from single or multiple imaging modalities with one or more time-points. These workflows support functional data, such as positron emission tomography (PET) or nuclear medicine (NM), as well as anatomical datasets, such as computed tomography (CT) or magnetic resonance (MR).
syngo.via MI Workflows enable visualization of information that would otherwise have to be visually compared disjointedly. syngo.via MI workflows are intended to be utilized by appropriately trained health care professionals to aid in the management of diseases, including those associated with oncology, cardiology, neurology, and organ function. The images and results produced by the syngo.via MI workflows can also be used by the physician to aid in radiotherapy treatment planning.
Scenium assists in the display and analysis of images within the MI Neurology workflow of syngo.via MI Workflows. This software enables visualization and appropriate rendering of multimodality data, providing a number of features which enable the user to process acquired image data.
Scenium consists of four workflows:
- Database Comparison
- Striatal Analysis
- Cortical Analysis
- Subtraction
The Scenium workflows are used to assist the clinician with the visual evaluation, assessment and quantification of pathologies, such as dementia (i.e., Alzheimer's), movement disorders (i.e., Parkinson's) and seizure analysis (i.e., Epilepsy).
syngo MBF is a software only product intended for visualization, assessment and quantification of medical images, specifically providing quantitative blood flow measurements of PET images. The software is launched from the OpenApps Framework within the MI Cardiology workflow within syngo.via MI Workflows. The application supports dynamic Rubidium-PET and dynamic Ammonia-PET images. The application provides visualization and measurement tools for qualitative and quantitative visualization and assessment of the input data. It provides automatic and manual tools to orient and segment the myocardium. The software calculates measurements of myocardial blood flow, and provides tools, such as a database comparison workflow, for the clinician to assess these results.
The provided text describes modifications to the syngo.via MI Workflows software (specifically VB60A, Scenium VE40A, and syngo MBF VB30A versions) and asserts their substantial equivalence to a predicate device (syngo.via MI Workflows VB50A, Scenium VE30A, and syngo MBF VB20A, K201195). However, it does not contain a detailed description of acceptance criteria or a specific study proving the device meets those criteria in the typical sense of a clinical or performance validation study with quantitative metrics, expert adjudication, or MRMC data.
Instead, the document focuses on:
- Regulatory Compliance: Adherence to FDA regulations (21 CFR 892.2050, 21 CFR Part 807.87(h)), recognized standards (ISO 14971, EN ISO 13485, IEC 62304, NEMA PS 3.1-3.20, IEC 62366-1, ISO 15223-1), and cybersecurity guidelines.
- Functional Equivalence: Stating that the new features do not alter the existent technological characteristics or raise new issues of safety and effectiveness compared to the predicate device.
- Verification and Validation (V&V): A general statement that "Verification and Validation activities have been successfully performed on the software package, including assurance that functions work as designed, performance requirements and specifications have been met, and that all hazard mitigations have been fully implemented. All testing has met the predetermined acceptance values."
Without specific performance metrics and a detailed study design provided in the given text, it is not possible to fully populate all components of your request. I will extract what information is present and indicate where information is Not Provided (NP).
Acceptance Criteria and Device Performance (as inferred from the document)
The document broadly states that "All testing has met the predetermined acceptance values." However, it does not explicitly define these "predetermined acceptance values" in a quantitative table. The primary acceptance criteria appear to be substantial equivalence, functional correctness, and adherence to safety and quality standards.
Acceptance Criteria (Inferred) | Reported Device Performance |
---|---|
Functional correctness of new features: | "functions work as designed" |
- Updated syngo.CT LungCAD Integration | (Implied: Integrated correctly) |
- Visualization of 4D data in all layouts | (Implied: Works as intended) |
- FAST Ranges Enhancements | (Implied: Enhanced as intended) |
- Auto Layout Improvements | (Implied: Improved as intended) |
- Gaussian filtering of PET Data | (Implied: Works correctly) |
- Interactive Spectral Imaging | (Implied: Works correctly) |
- Usability Improvements | (Implied: Improved as intended) |
- OpenApps framework for ISAs (Cedars, Corridor 4DM, syngo MBF) | (Implied: Framework supports ISAs) |
- Spill-Over Factors (within syngo MBF) | (Implied: Implemented and works) |
- Automatic window/level for each frame (within syngo MBF) | (Implied: Works correctly) |
- Global Time Activity Curve (within syngo MBF) | (Implied: Works correctly) |
- Calibrated I123-FP-CIT normal databases in Striatal Analysis | (Implied: Databases accurate and integrated) |
Meet performance requirements and specifications | "performance requirements and specifications have been met" |
Implement all hazard mitigations (ISO 14971) | "all hazard mitigations have been fully implemented" |
Cybersecurity controls | "has specific cybersecurity controls to prevent unauthorized access, modifications, misuse or denial of use" |
Compliance with relevant standards and regulations | "adheres to recognized and established industry standards," compliance with 21 CFR 820 |
Not raise new issues of safety and effectiveness | "do not raise any new issues of safety and effectiveness as compared to the predicate device." |
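One of the listed features, Gaussian filtering of PET data, is a standard post-reconstruction smoothing step. The sketch below illustrates the general technique using scipy; the function name, kernel width, and voxel sizes are illustrative assumptions, not Siemens' implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_pet(volume, fwhm_mm, voxel_size_mm):
    """Post-reconstruction Gaussian smoothing of a PET volume.

    fwhm_mm and voxel_size_mm are illustrative parameters; the
    FWHM -> sigma conversion uses the standard 2*sqrt(2*ln 2) factor
    (~2.3548), with sigma expressed in voxels per axis.
    """
    sigma_vox = (fwhm_mm / 2.3548) / np.asarray(voxel_size_mm, dtype=float)
    return gaussian_filter(volume, sigma=sigma_vox)

# Example: 5 mm FWHM smoothing on 2 mm isotropic voxels; an impulse
# spreads out but its total counts are preserved.
vol = np.zeros((32, 32, 32))
vol[16, 16, 16] = 1.0
smoothed = smooth_pet(vol, fwhm_mm=5.0, voxel_size_mm=(2.0, 2.0, 2.0))
```

Because the kernel is normalized, total activity in the volume is conserved (away from boundaries), which is one simple technical-correctness check of the kind such V&V could use.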
Study Details:
Sample sizes used for the test set and the data provenance:
- Test Set Sample Size: NP (Not provided in the document. The document refers to "Verification and Validation activities" and "All testing" but does not specify the number of cases or datasets used for these tests.)
- Data Provenance: NP (Not provided. It is not stated where the data for testing originated from, e.g., country of origin, or if it was retrospective or prospective data.)
Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Number of Experts: NP (Given the nature of the modifications described – mainly functional additions and improvements to existing workflows – it's unlikely a traditional "ground truth" for disease diagnosis was established for this specific submission beyond ensuring the software performs its intended technical functions. If expert review was part of the V&V, it is not detailed.)
- Qualifications of Experts: NP
Adjudication method (e.g. 2+1, 3+1, none) for the test set:
- Adjudication Method: NP (This type of adjudication is typically for establishing diagnostic ground truth, which is not the focus of the described V&V for these software updates.)
If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- MRMC Study: No. The document describes software workflow updates for viewing, manipulation, quantification, and analysis of medical images. It does not introduce an "AI" component intended to directly assist or change clinical decision-making in a way that would necessitate an MRMC study demonstrating improved human reader performance. The software is a tool for professionals, not an AI diagnostic assistant.
If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done:
- Standalone Performance: The V&V activities would have included testing of the software's algorithms and functions in a standalone manner to ensure they work as designed and meet specifications. However, specific metrics (e.g., accuracy, sensitivity, specificity for automated tasks) are NP for any specific algorithm. The "syngo MBF" module, for instance, calculates quantitative blood flow measurements, and its accuracy would have been part of the V&V, but no specific performance statistics are provided.
The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Type of Ground Truth: For the nature of these software updates, the "ground truth" would likely be technical correctness and adherence to algorithmic specifications rather than clinical outcomes or pathology. For example, ensuring that a 4D visualization works as intended, or that quantitative measurements (e.g., SUV harmonization, blood flow measurements) are mathematically correct and consistent with reference values or established methodologies. Detailed information about exactly how this "ground truth" was established (e.g., through phantom studies, simulations, or comparison with established clinical software/manual calculations) is NP.
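To illustrate what "mathematically correct" means for a quantitative output like SUV, here is a minimal sketch of the standard body-weight SUV definition. The function name and the 370 MBq / 74 kg numbers are invented for illustration, decay-corrected inputs are assumed, and this is not Siemens' implementation:

```python
def suv_bw(activity_conc_bq_ml, injected_dose_bq, body_weight_kg):
    """Body-weight SUV: tissue activity concentration divided by the
    injected dose per gram of body weight (assumes decay-corrected
    inputs and a tissue density of 1 g/mL)."""
    return activity_conc_bq_ml / (injected_dose_bq / (body_weight_kg * 1000.0))

# Sanity check of the kind technical V&V could use: 370 MBq injected
# into a 74 kg patient, distributed perfectly uniformly, must give
# SUV = 1 everywhere by construction.
uniform_conc = 370e6 / (74 * 1000.0)   # Bq/mL under the uniform assumption
suv_uniform = suv_bw(uniform_conc, injected_dose_bq=370e6, body_weight_kg=74)
```

Checks like this (closed-form identities, phantom data with known answers) are the kind of reference against which "technical correctness" can be verified without clinical ground truth.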
The sample size for the training set:
- Training Set Sample Size: NP. The document does not mention training sets, which implies that the updates are not based on a machine learning model that would require a distinct training phase. These are described as functional additions and improvements to existing software, not new AI/ML algorithms.
How the ground truth for the training set was established:
- Training Set Ground Truth: NP, as no specific training set for (ML/AI) models is implied.
(66 days)
syngo.via MI Workflows VB40A, Scenium
syngo.via MI Workflows are medical diagnostic applications for viewing, manipulation, 3D-visualization and comparison of medical images from multiple imaging modalities and/or multiple time-points. The application supports functional data, such as PET or SPECT, as well as anatomical datasets, such as CT or MR.
syngo.via MI Workflows enable visualization of information that would otherwise have to be visually compared disjointedly. syngo.via MI Workflows provide analytical tools to help the user assess and document changes in morphological or functional activity at diagnostic and therapy follow-up examinations. syngo.via MI Workflows can perform harmonization of SUV (PET) across different PET systems or different reconstruction methods.
syngo.via MI workflows support the interpretation of examinations and follow up documentation of findings within healthcare institutions, for example, in Radiology, Nuclear Medicine and Cardiology environments.
Note: The clinician retains the ultimate responsibility for making the pertinent diagnosis based on their standard practices and visual comparison of the separate unregistered images. syngo.via MI Workflows are a complement to these standard procedures.
The Scenium display and analysis software has been developed to aid the Clinician in the assessment and quantification of pathologies taken from PET and SPECT scans.
The software is deployed via medical imaging workplaces and is organized as a series of workflows which are specific to use with particular drug and disease combinations.
The software aids in the assessment of human brain scans enabling automated analysis through quantification of mean pixel values located within standard regions of interest. It facilitates comparison with existing scans derived from FDGPET, amyloid-PET, and SPECT studies, calculation of uptake ratios between regions of interest, and subtraction between two functional scans.
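The quantification steps described here (mean pixel values within standard regions of interest, uptake ratios between regions, and subtraction of two functional scans) reduce to simple array operations once the images are registered. A minimal sketch under that assumption; the shapes, masks, and function names below are illustrative, not the Scenium implementation:

```python
import numpy as np

def roi_mean(volume, mask):
    """Mean voxel value inside a binary region-of-interest mask."""
    return float(volume[mask].mean())

def uptake_ratio(volume, target_mask, reference_mask):
    """Ratio of mean uptake in a target ROI to a reference ROI
    (e.g., a striatal region vs. a reference region)."""
    return roi_mean(volume, target_mask) / roi_mean(volume, reference_mask)

# Subtraction of two registered functional scans (e.g., ictal minus
# interictal in seizure analysis) is voxel-wise arithmetic:
rng = np.random.default_rng(0)
scan_a = rng.random((16, 16, 16))
scan_b = rng.random((16, 16, 16))
difference = scan_a - scan_b

# Two illustrative cubic ROIs:
target = np.zeros((16, 16, 16), dtype=bool); target[4:8, 4:8, 4:8] = True
reference = np.zeros((16, 16, 16), dtype=bool); reference[10:14, 10:14, 10:14] = True
ratio = uptake_ratio(scan_a, target, reference)
```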
syngo.via MI Workflows is a software-only medical device which will be delivered on CD-ROM / DVD to be installed onto the commercially available Siemens syngo.via software platform (K191040) by trained service personnel.
syngo.via MI Workflows is a medical diagnostic application for viewing, manipulation, 3D-visualization and comparison of medical images from multiple imaging modalities and/or multiple time-points. The application supports functional data, such as PET or SPECT, as well as anatomical datasets, such as CT or MR. The images can be viewed in a number of output formats including MIP and volume rendering.
syngo.via MI Workflows enable visualization of information that would otherwise have to be visually compared disjointedly. syngo.via MI Workflows provide analytical tools to help the user assess and document changes in morphological or functional activity at diagnostic and therapy follow-up examinations. They additionally support the interpretation and evaluation of examinations and follow-up documentation of findings within healthcare institutions, for example, in Radiology (Oncology), Nuclear Medicine and Cardiology environments.
Scenium display and analysis software sits within the MI Neurology workflow within syngo.via MI Workflows. This software enables visualization and appropriate rendering of multimodality data, providing a number of features which enable the user to process acquired image data.
Scenium consists of four workflows:
- Database Comparison
- Striatal Analysis
- Cortical Analysis
- Subtraction
These workflows are used to assist the clinician with the visual evaluation, assessment and quantification of pathologies, such as dementia (i.e., Alzheimer's), movement disorders (i.e., Parkinson's) and seizure analysis (i.e., Epilepsy).
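Database Comparison workflows of this kind commonly express a patient's regional uptake as a z-score against a normal database, flagging regions that deviate beyond a threshold. A minimal sketch under that assumption; the function, values, and the |z| >= 2 threshold are invented for illustration and are not the Scenium algorithm:

```python
import numpy as np

def z_scores(patient_roi_means, normal_means, normal_stds):
    """Regional z-scores: how many standard deviations the patient's
    ROI means lie from a normal database (assumes the patient scan
    has undergone the same spatial normalization and intensity
    scaling as the database)."""
    p = np.asarray(patient_roi_means, dtype=float)
    mu = np.asarray(normal_means, dtype=float)
    sd = np.asarray(normal_stds, dtype=float)
    return (p - mu) / sd

# Illustrative values: the first region sits 2 SD below the normal
# mean and would be flagged by a typical |z| >= 2 threshold; the
# second is exactly at the normal mean.
z = z_scores([0.6, 1.0], normal_means=[1.0, 1.0], normal_stds=[0.2, 0.1])
```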
The modifications to the syngo.via MI Workflows and Scenium (MI Neurology) software (K173897 and K173597) include the following new features:
Workflow | Workflow-specific Features |
---|---|
MI Oncology | Interactive Trending |
 | Hybrid VRT / MIP ranges |
 | Spine and Rib labelling |
MI Neurology (Scenium) | Factory Normals Database for DaTscan™ |
 | Export Subtraction and Z-score Images as DICOM |
 | Z-score Image Overlay and Threshold Improvements |
MI Reading / SPECT Processing | Renal Enhancements (extrapolation of T1/2) |
 | Integrate Image Registration Activity |
MI Cardiology | No updates / changes to third-party applications within MI Cardiology or workflow functionality. |
The provided text does not contain detailed information about specific acceptance criteria or a dedicated study proving the device meets those criteria. The document is a 510(k) summary for the syngo.via MI Workflows VB40A, Scenium device, primarily focusing on demonstrating substantial equivalence to a predicate device.
However, it does mention that "Verification and Validation activities have been successfully performed on the software package, including assurance that functions work as designed, performance requirements and specifications have been met, and that all hazard mitigations have been fully implemented. All testing has met the predetermined acceptance values." This generally indicates that internal testing was conducted against defined acceptance criteria, but these criteria and the detailed results are not explicitly stated.
Therefore, many of the requested details cannot be extracted from the provided text.
Here's what can be gathered, with limitations:
1. A table of acceptance criteria and the reported device performance
- Not explicitly provided. The document states "All testing has met the predetermined acceptance values," but does not list specific acceptance criteria or the quantitative performance metrics achieved.
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
- Not provided. The document does not discuss the sample size or provenance of data used for any performance testing.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)
- Not provided. This information is absent from the text.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
- Not provided. The document does not describe any adjudication methods.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
- Not provided. The document does not mention any MRMC comparative effectiveness studies or the impact of the device on human reader performance. The device is described as a diagnostic application for viewing, manipulation, 3D-visualization, and comparison, and its role is to "complement these standard procedures," but no specific reader studies are detailed.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done
- Not explicitly stated as a standalone study in the context of clinical performance. The device itself is software-only, meaning its "standalone" functionality is its core operation. However, the document does not present specific performance metrics that would be associated with a standalone algorithm performance study (e.g., sensitivity, specificity for a particular pathology detection). It focuses on the software's ability to view, manipulate, and analyze images, and that "All testing has met the predetermined acceptance values," implying functional and performance testing, but not necessarily a clinical standalone performance study in the way AI algorithms are often evaluated for diagnostic accuracy.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Not provided. The document does not specify how ground truth was established for any testing.
8. The sample size for the training set
- Not provided. The document does not discuss any training sets, suggesting that this device might not involve a machine learning model that requires a distinct training phase in the traditional sense, or at least that information is not part of this 510(k) summary. Given the device's description as an "Image Processing Software" that provides "analytical tools," it's more likely rule-based or using established algorithms rather than a deep learning model requiring extensive training data.
9. How the ground truth for the training set was established
- Not applicable / Not provided. As no training set information is given, this question cannot be answered.
In summary, the provided FDA 510(k) summary focuses on demonstrating substantial equivalence to predicate devices and adherence to quality systems and standards (ISO 14971, IEC 62304), rather than detailing specific clinical performance studies with acceptance criteria, ground truth, or reader study results. The statement about "All testing has met the predetermined acceptance values" is a general confirmation of internal verification and validation.
(134 days)
Scenium VE20 Software
The Scenium display and analysis software has been developed to aid the clinician in the assessment and quantification of pathologies taken from PET and SPECT scans.
The software is deployed via medical imaging workplaces and is organized as a series of workflows which are specific to use with particular drug and disease combinations.
The software aids in the assessment of human brain scans enabling automated analysis through quantification of mean pixel values located within standard regions of interest. It facilitates comparison with existing databases of normal patients and normal parameters derived from these databases (from FDG-PET, amyloid-PET, and SPECT studies), calculation of uptake ratios between regions of interest, and subtraction between two functional scans.
Scenium VE10 display and analysis software enables visualization and appropriate rendering of multimodality data, providing a number of features which enable the user to process acquired image data.
Scenium VE10 consists of three workflows:
- Database Comparison
- Ratio Analysis
- Subtraction
These workflows are used to assist the clinician with the visual evaluation, assessment and quantification of pathologies, such as dementia (i.e., Alzheimer's), movement disorders (i.e., Parkinson's) and seizure analysis (i.e., Epilepsy).
The modifications made to the Scenium VE10 software (K162339) to create the Scenium VE20 software include:
- The ability to create and support normal databases in the Striatal Analysis workflow
- DaTSCAN-SPECT normals database
- Improvements related to the analysis screen for reporting in Striatal Analysis
In addition, workflow structures changed within the VE20 release. Previously, the three workflows (Database Comparison, Ratio Analysis, and Subtraction) encompassed the Scenium software. Ratio Analysis has since been split into two separate workflows. Now, the following four workflows exist within Scenium VE20:
- Database Comparison
- Striatal Analysis
- Cortical Analysis
- Subtraction
These changes are based on current commercially available software features and do not change the technological characteristics of the device.
Scenium VE20 analysis software is intended to be run on commercially available software platforms such as the Siemens syngo.MI Workflow software platform (K150843).
The provided text is a 510(k) summary for the Scenium VE20 device, which is an image processing software for PET and SPECT scans. It primarily focuses on demonstrating substantial equivalence to a predicate device (Scenium VE10) rather than presenting a detailed study with specific acceptance criteria and performance metrics for the VE20 device itself.
Therefore, the document does not contain the detailed information required to fill out all sections of your request about acceptance criteria and a specific study proving the device meets them. The text states that "All testing has met the predetermined acceptance values" but does not elaborate on what those values were or what performance metrics were used to determine them for the Scenium VE20 specifically.
Here's what can be extracted and inferred, along with the information that is explicitly missing:
Acceptance Criteria and Reported Device Performance
Acceptance Criteria | Reported Device Performance | Comments |
---|---|---|
Specific quantitative acceptance criteria for Scenium VE20 performance (e.g., accuracy, precision, sensitivity, specificity for pathology detection/quantification) | Not explicitly stated in the document. The document broadly states "All testing has met the predetermined acceptance values." This suggests internal performance criteria were met, but the details are not provided for the public record. | The document implies that the modifications to VE20 (support for DaTSCAN-SPECT normals database and analysis screen improvements) did not change the fundamental technological characteristics or intended use. Therefore, any performance met by VE10 would implicitly be carried over, but no specific performance numbers for VE20 are given. |
Qualitative functional requirements (e.g., proper execution of workflows, accurate display of multimodality data) | "Verification and Validation activities have been successfully performed on the software package, including assurance that functions work as designed, performance requirements and specifications have been met..." | This confirms the software's functional integrity. |
Safety and Effectiveness (e.g., no new issues of safety and effectiveness compared to predicate) | "There have been no changes that raise any new issues of safety and effectiveness as compared to the predicate device." | This is a regulatory acceptance criterion for substantial equivalence. |
Compliance with standards (e.g., ISO 14971, ISO 13485, IEC 62304) | "Risk Management has been ensured via risk analyses in compliance with ISO 14971:2012... Siemens Medical Solutions USA, Inc. adheres to recognized and established industry standards for development including ISO 13485 and IEC 62304." | This indicates compliance with recognized standards. |
Study Details Provided
The document refers to "Verification and Validation activities" and "All testing" but does not describe a specific clinical or technical study designed to prove the device meets explicit acceptance criteria in the way a clinical trial or performance study would. Instead, it relies on demonstrating equivalence to an already cleared device.
Sample size used for the test set and the data provenance:
- Not explicitly stated for Scenium VE20. The document does not describe a separate clinical test set or its sample size.
- Data provenance: Not mentioned.
Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Not applicable / Not stated. No specific external "test set" and associated ground truth establishment process is described in this document for the Scenium VE20.
Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- Not applicable / Not stated. No specific external "test set" and adjudication process is described.
If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- No, an MRMC comparative effectiveness study was not explicitly mentioned or described as part of this 510(k) submission. The device (Scenium VE20) is primarily an image processing software for quantification and comparison, aiding clinicians, but the submission doesn't present it as an AI assistant in the typical sense that would necessitate an MRMC study comparing human performance with and without its aid. The improvements are related to expanding database and analysis capabilities.
If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done:
- Not explicitly described. The document states "Verification and Validation activities have been successfully performed on the software package, including assurance that functions work as designed, performance requirements and specifications have been met..." This implies internal testing of the algorithm's functions, but details of a formal "standalone" performance study are not provided. The device "aids the clinician" and its output facilitates "comparison," implying human interpretation remains central.
The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Not explicitly stated. Given the nature of the software (quantification and comparison with normal databases), the ground truth for the underlying databases (FDG-PET, amyloid-PET, and SPECT studies, DaTSCAN-SPECT normals database) would likely have been established through clinical diagnosis, expert consensus, or follow-up outcomes, but this is not detailed in the context of VE20's "testing."
The sample size for the training set:
- Not explicitly stated for the Scenium VE20's development. The document mentions the "ability to create and support normal databases in the Striatal Analysis workflow," including a "DaTSCAN-SPECT normals database." The size and composition of these "normal databases" are not specified as "training sets" for a deep learning model, as the document does not state the use of AI/deep learning for image interpretation; this is more consistent with statistical comparison against pre-existing normal population data.
How the ground truth for the training set was established:
- Not explicitly stated for the Scenium VE20's development. For the normal databases, the ground truth would inherently be "normal patients" (as described in the text "existing databases of normal patients"). How the "normal" status was confirmed for these patients in these databases is not elaborated upon in this submission.
In summary, this 510(k) submission focuses on demonstrating substantial equivalence to a predicate device by highlighting that the modifications do not alter the fundamental technological characteristics or intended use, and that internal verification and validation activities confirmed the software functions as designed and met its requirements. It does not provide the kind of detailed performance study and acceptance criteria that might be found in submissions for novel AI-powered diagnostic devices.
(30 days)
Scenium
The Scenium display and analysis software has been developed to aid the Clinician in the assessment and quantification of pathologies taken from PET and SPECT scans.
The software is deployed via medical imaging workplaces and is organized as a series of workflows which are specific to use with particular drug and disease combinations.
The software aids in the assessment of human brain scans enabling automated analysis through quantification of mean pixel values located within standard regions of interest. It facilitates comparison with existing scans derived from FDG-PET, amyloid-PET, and SPECT studies, calculation of uptake ratios between regions of interest, and subtraction between two functional scans.
Scenium VD20 display and analysis software enables visualization and appropriate rendering of multimodality data, providing a number of features which enable the user to process acquired image data.
Scenium VD20 consists of three workflows:
- Database Comparison
- Ratio Analysis
- Subtraction
These workflows are used to assist the clinician with the visual evaluation, assessment and quantification of pathologies, such as dementia (i.e., Alzheimer's), movement disorders (i.e., Parkinson's) and seizure analysis (i.e., Epilepsy).
The modifications made to the Scenium VD20 software (K150192) to create the Scenium VE10 software include:
- Enabled comparative quantification between Scenium workflows
- User interface and usability improvements to improve consistency with the syngo.via platform
- Addition of new normal databases to the Database Comparison workflow:
  - SPECT HMPAO (Chang AC) normal database
  - PET Florbetaben normal database
- Stereotactic Surface Projection (SSP) for Amyloid uptake images can now be viewed
These changes are based on current commercially available software features and do not change the technological characteristics of the device.
Scenium VE10 analysis software is intended to be run on commercially available software platforms such as the Siemens syngo.MI Workflow software platform (K150843).
The provided text describes the regulatory clearance for the "Scenium VE10 Software" to be marketed as substantially equivalent to a predicate device. While it mentions "Performance Testing / Safety and Effectiveness" and that "All testing has met the predetermined acceptance values," it does not provide the specific details regarding the acceptance criteria, the study design, or the results that would allow for a comprehensive answer to your request.
Specifically, the document states:
- "Verification and Validation activities have been successfully performed on the software package, including assurance that functions work as designed, performance requirements and specifications have been met, and that all hazard mitigations have been fully implemented. All testing has met the predetermined acceptance values."
This is a general statement about the completion of testing and meeting acceptance values, but it lacks the actual criteria and reported performance.
Therefore, I cannot provide the detailed information you requested, such as:
- A table of acceptance criteria and the reported device performance: The document does not specify these.
- Sample size used for the test set and the data provenance: Not mentioned.
- Number of experts used to establish the ground truth and their qualifications: Not mentioned.
- Adjudication method for the test set: Not mentioned.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and the effect size: Not mentioned, and the primary purpose of the software is described as aiding clinicians in assessment and quantification, implying it's a tool for human readers, but no MRMC study details are provided.
- If a standalone (algorithm only) performance was done: Not explicitly stated with outcomes. The software performs "automated analysis through quantification of mean pixel values located within standard regions of interest," which implies standalone processing, but performance metrics for this are not given.
- The type of ground truth used: Not mentioned.
- The sample size for the training set: Not mentioned.
- How the ground truth for the training set was established: Not mentioned.
The document focuses on the regulatory submission process and the determination of substantial equivalence to a predicate device (Scenium VD20 software, K150192) based on minor changes to the software (enabled comparative quantification, UI improvements, addition of new normal databases, and viewing SSP for Amyloid uptake images). It implies that the underlying technology and functionalities for aiding interpretation were already established with the predicate device.
To answer your questions, one would typically need access to the full 510(k) submission, which would contain the detailed studies and performance data. This document is a summary of the FDA's decision, not the full submission itself.
(60 days)
Scenium
The Scenium display and analysis software has been developed to aid the Clinician in the assessment and quantification of pathologies taken from PET and SPECT scans.
The software is deployed via medical imaging workplaces and is organized as a series of workflows which are specific to use with particular drug and disease combinations.
The software aids in the assessment of human brain scans enabling automated analysis through quantification of mean pixel values located within standard regions of interest. It facilitates comparison with existing scans derived from FDG-PET, amyloid-PET, and SPECT studies, calculation of uptake ratios between regions of interest, and subtraction between two functional scans.
Scenium VD10 display and analysis software enables visualization and appropriate rendering of multimodality data, providing a number of features which enable the user to process acquired image data.
Scenium VD10 consists of three workflows:
- Database Comparison
- Ratio Analysis
- Subtraction
These workflows are used to assist the clinician with the visual evaluation, assessment and quantification of pathologies with different imaging agents, such as using Amyloid imaging agents for dementia and Alzheimer's Disease, DaTSCAN (I-123) for Parkinson's Disease and FDG-PET for epileptic seizures.
The modifications made to the Scenium VD10 software (K133654) to create the Scenium VD20 software include:
- Customized databases can now be imported and exported in the Database Comparison workflow.
- Three new FDG databases normalized to the region of the cerebellum, in addition to whole brain, were added to the Database Comparison workflow.
- Deformable Registration for Amyloid PET has been integrated in the Ratio Analysis workflow with a different algorithm.
- Launch performance improvements decreased the time taken to display patient data in the Ratio Analysis workflow.
This change is based on current commercially available software features and does not change the technological characteristics of the device.
Scenium VD20 Analysis software is intended to be run on commercially available software platforms such as the Siemens syngo.MI Workflow software platform (K133644) or commercially available Siemens scanners (e.g., Symbia Intevo (K142006), Biograph mCT (K123737)).
This document is a 510(k) premarket notification for the Scenium VD20 software. It claims substantial equivalence to the previously cleared Scenium VD10 software (K133654). The provided text does not include a detailed study proving the device meets specific acceptance criteria with reported performance metrics, expert qualifications, or detailed study methodologies. Instead, it focuses on demonstrating that the modifications introduced in Scenium VD20 do not alter the fundamental technological characteristics or indications for use compared to the predicate device, and thus do not raise new safety and effectiveness concerns.
Therefore, many of the requested details about acceptance criteria and a specific study are not present in this regulatory document.
However, based on the information provided, we can infer some aspects and highlight what is missing:
1. Table of Acceptance Criteria and Reported Device Performance:
The document states: "All testing has met the predetermined acceptance values." This is a general statement and does not provide specific quantitative acceptance criteria or reported performance values.
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Not explicitly defined as specific performance metrics (e.g., accuracy, sensitivity, specificity, processing time for new features). The general criterion is that new features function as designed and raise no new safety/effectiveness concerns. | "All testing has met the predetermined acceptance values." |
| Improved launch performance: reduced time to display patient data in the Ratio Analysis workflow. | An improvement is indicated, but no quantitative performance (e.g., time reduced from X to Y seconds) is provided. |
| Functionality of new features (customized database import/export, new FDG databases, deformable registration for amyloid PET using a different algorithm). | "Functions work as designed"; "performance requirements and specifications have been met." No specific quantitative metrics are given for these new functionalities. |
2. Sample Size Used for the Test Set and Data Provenance:
- Sample Size for Test Set: Not specified. The document mentions "Verification and Validation activities" and "All testing," but does not detail the size of the dataset used for these tests.
- Data Provenance: Not specified. There is no mention of the country of origin of the data or whether it was retrospective or prospective.
3. Number of Experts Used to Establish Ground Truth and Their Qualifications:
- Number of Experts: Not specified.
- Qualifications of Experts: Not specified. The document focuses on software changes and their implications, not on clinical validation involving human readers or expert consensus on a test set.
4. Adjudication Method:
- Not specified. Since details about expert review or test sets are missing, an adjudication method is also not mentioned.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:
- No, an MRMC comparative effectiveness study is not mentioned or described. The documentation focuses on demonstrating substantial equivalence based on technical modifications and internal verification/validation, not on comparing human reader performance with and without AI assistance for the new features.
6. Standalone (Algorithm Only) Performance Study:
- No specific standalone performance study with detailed metrics (e.g., sensitivity, specificity, accuracy) is presented in this document. The verification and validation activities likely included internal testing of the algorithms, but explicit results demonstrating standalone performance are absent. The device itself is described as "display and analysis software" intended to "aid the Clinician," implying it is a tool for human-in-the-loop use rather than a fully autonomous diagnostic algorithm.
7. Type of Ground Truth Used:
- Not specified. Given the focus on software modifications and features, the "ground truth" for the verification and validation tests would likely relate to the expected outputs of the algorithms, internal consistency, and comparison with previous versions, rather than a clinical ground truth like pathology or long-term outcomes data.
8. Sample Size for the Training Set:
- Not applicable / Not specified. This device is described as "display and analysis software" and the modifications are primarily related to database functionalities, workflow integration, and an algorithmic change for deformable registration. There is no indication that this update involved a machine learning model that required a specific "training set" in the conventional sense of deep learning or AI model development. The "new FDG databases" are likely pre-computed reference datasets, not training data for a learning algorithm within Scenium VD20 itself.
9. How the Ground Truth for the Training Set Was Established:
- Not applicable / Not specified. As there's no mention of a traditional training set for a new machine learning algorithm, there's no information on how a ground truth for such a set would have been established.
Summary of what the document focuses on instead:
The document primarily provides evidence for substantial equivalence to a predicate device (Scenium VD10, K133654) by stating that:
- The modifications (customized database import/export, new FDG databases, deformable registration algorithm change, improved launch performance) do not change the technological characteristics of the device.
- There are no differences in the Indications for Use.
- No new issues of safety and effectiveness are raised by the modifications.
- Risk Management (ISO 14971:2012) and adherence to industry standards (ISO 13485, IEC 62304) were followed.
- Verification and Validation activities were successfully performed, and "All testing has met the predetermined acceptance values." This last point is the closest to an "acceptance criterion" statement, though it lacks specificity.
(93 days)
SCENIUM
The Scenium display and analysis software has been developed to aid the Clinician in the assessment and quantification of pathologies taken from PET and SPECT scans.
The software is deployed via medical imaging workplaces and is organized as a series of workflows which are specific to use with particular drug and disease combinations.
The software aids in the assessment of human brain scans enabling automated analysis through quantification of mean pixel values located within standard regions of interest. It facilitates comparison with existing scans derived from FDG-PET, amyloid-PET, and SPECT studies, calculation of uptake ratios between regions of interest, and subtraction between two functional scans.
Scenium VD10 display and analysis software enables visualization and appropriate rendering of multimodality data, providing a number of features which enable the user to process acquired image data.
Scenium VD10 consists of three main workflows: Database Comparison Workflow, Ratio Analysis Workflow, and Subtraction Workflow. These workflows assist the clinician with the visual evaluation, assessment, and quantification of pathologies such as dementia (e.g., Alzheimer's disease), movement disorders (e.g., Parkinson's disease), and seizure disorders (e.g., epilepsy).
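The Database Comparison workflow described above rests on a simple idea: compute the mean pixel value within each standard region of interest and compare it against a normal reference database, typically as a z-score. The sketch below illustrates that arithmetic only; it is not Scenium's actual implementation, and all function and parameter names are hypothetical.

```python
import numpy as np

def regional_z_scores(patient_img, roi_masks, normal_means, normal_stds):
    """Compare regional mean uptake in a patient scan against a normal
    database, returning one z-score per region of interest.

    patient_img  : 3-D array of voxel intensities (e.g., an FDG-PET volume)
    roi_masks    : dict mapping region name -> boolean mask array
    normal_means : dict mapping region name -> mean uptake in normal subjects
    normal_stds  : dict mapping region name -> standard deviation in normals
    """
    z_scores = {}
    for region, mask in roi_masks.items():
        # Mean pixel value within the standard region of interest.
        patient_mean = patient_img[mask].mean()
        z_scores[region] = (patient_mean - normal_means[region]) / normal_stds[region]
    return z_scores
```

In practice a strongly negative z-score in, say, the posterior cingulate could flag hypometabolism for the reading physician to review; a real product would also intensity-normalize the scan and spatially register it to a template before any regional statistics are computed.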
The provided text states that "Performance and functional testing were conducted for Scenium VD10, and all performance requirements and specifications were met." However, it does not provide specific acceptance criteria or detailed results of the study that proves the device meets these criteria. It primarily focuses on the device description, indications for use, and a comparison to predicate devices for substantial equivalence.
Therefore, many of the requested details about acceptance criteria, study design, sample sizes, ground truth establishment, and MRMC studies are not available in the given document.
Here's a breakdown of what can be extracted and what information is missing:
1. Table of acceptance criteria and the reported device performance
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Not specified | "All performance requirements and specifications were met." |
Explanation: The document generally states that performance and functional testing were conducted and that all requirements were met, but it does not list the specific acceptance criteria (e.g., sensitivity, specificity, accuracy thresholds) or provide quantitative performance metrics for the device against these criteria.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Sample size for the test set: Not specified.
- Data provenance: Not specified (country of origin, retrospective or prospective).
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Number of experts: Not specified.
- Qualifications of experts: Not specified.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Adjudication method: Not specified.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI versus without AI assistance
- MRMC study: Not mentioned or described. The document focuses on the device's ability to "aid the Clinician," suggesting it's a tool for assistance, but it does not quantify human reader improvement.
- Effect size: Not specified.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
- Standalone performance: Not explicitly stated as a separate study with specific metrics. The device is described as "aids the Clinician" and "enabling automated analysis," which implies standalone functionality, but a dedicated standalone performance study with results is not detailed.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Type of ground truth: Not specified.
8. The sample size for the training set
- Sample size for training set: Not specified.
9. How the ground truth for the training set was established
- Ground truth for training set: Not specified.
Summary of available information regarding the study:
The document mentions that "Performance and functional testing were conducted for Scenium VD10, and all performance requirements and specifications were met." It also states that "risk management is ensured via a risk analysis in compliance with ISO 14971:2007 and ISO 14971:2012 to identify and provide mitigation to potential hazards."
However, this summary does not contain the detailed study plan, methodology, or results that would typically be present to demonstrate how the device meets specific acceptance criteria in terms of accuracy, sensitivity, specificity, etc., with human-read or pathology-confirmed ground truth. The information provided is high-level and confirms that internal testing was done and deemed successful by the manufacturer, but lacks the granular details requested.
(34 days)
SCENIUM
The Scenium display and analysis software has been developed to aid the Clinician in the assessment and quantification of pathologies taken from PET and SPECT scans.
The software is deployed via medical imaging workplaces and is organized as a series of workflows which are specific to use with particular drug and disease combinations.
The software aids in the assessment of human brain scans enabling automated analysis through quantification of mean pixel values located within standard regions of interest. It facilitates comparison with existing scans derived from FDG-PET, amyloid-PET, and SPECT studies and calculation of uptake ratios between regions of interest.
Scenium 3.0 display and analysis software enables visualization and appropriate rendering of multimodality data, providing a number of features which enable the user to process the acquired image data. Scenium 3.0 is post processing and does not control the scanning features of the system.
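The "calculation of uptake ratios between regions of interest" mentioned in the indications above amounts to dividing the mean uptake in a target region by the mean uptake in a reference region. A minimal sketch of that computation follows; the names are illustrative and the example regions are assumptions, not Scenium's API.

```python
import numpy as np

def uptake_ratio(img, target_mask, reference_mask):
    """Ratio of mean uptake in a target ROI to mean uptake in a reference
    ROI, e.g., cortex vs. cerebellum in an amyloid-PET study or striatum
    vs. occipital cortex in a DaT-SPECT study.

    img            : array of voxel intensities
    target_mask    : boolean mask selecting the target region
    reference_mask : boolean mask selecting the reference region
    """
    target_mean = img[target_mask].mean()
    reference_mean = img[reference_mask].mean()
    return target_mean / reference_mean
```

The same regional-mean machinery underlies the subtraction described earlier (e.g., ictal minus interictal SPECT), which is computed voxel-wise on two registered, intensity-normalized scans rather than on regional means.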
The provided text is a 510(k) summary for the Scenium 3.0 device. A 510(k) submission generally establishes substantial equivalence to a predicate device and does not typically involve the presentation of detailed acceptance criteria or extensive clinical studies in the same way a PMA submission might.
Based on the provided document, here's what can be inferred about acceptance criteria and studies:
1. Table of Acceptance Criteria and Reported Device Performance:
The document lists no specific quantitative acceptance criteria or detailed device performance metrics in a tabular format. The submission focuses on demonstrating substantial equivalence to predicate devices (Scenium 2.0, NeuroTrans3D, Brass) rather than proving performance against predefined clinical thresholds.
2. Sample Size Used for the Test Set and Data Provenance:
The document does not specify any sample size used for a test set or the provenance of any data used for testing (e.g., country of origin, retrospective or prospective). This type of information would typically be detailed in a separate clinical or performance evaluation report, which is not part of this 510(k) summary.
3. Number of Experts Used to Establish Ground Truth and Qualifications:
The document does not mention the use of experts to establish ground truth for a test set or their qualifications. The submission is based on the functional and technological equivalence of the Scenium 3.0 software to predicate devices for aiding clinicians in assessment and quantification, not on a new clinical claim requiring expert-adjudicated ground truth.
4. Adjudication Method for the Test Set:
No adjudication method (e.g., 2+1, 3+1, none) is mentioned because the document does not describe a clinical test set that would require such a method.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:
The document does not describe an MRMC comparative effectiveness study or indicate an effect size for human readers improving with or without AI assistance. This type of study is more common for devices making claims about improving diagnostic accuracy or efficiency with AI. Scenium 3.0 is described as post-processing software that aids in analysis and quantification, implying it's a tool for clinicians rather than an AI-driven diagnostic system in the conventional sense that would undergo MRMC.
6. Standalone (Algorithm Only) Performance Study:
The document does not describe a standalone performance study of the algorithm without human-in-the-loop performance. The device is presented as a tool to "aid the Clinician" and "facilitates comparison," indicating it's designed to be used in conjunction with human interpretation.
7. Type of Ground Truth Used:
The document does not specify the type of ground truth used as it does not detail a study that would necessitate it. The focus is on the software's ability to "enable automated analysis through quantification of mean pixel values located within standard regions of interest" and "facilitate comparison with existing scans," suggesting its functionality is related to data processing and visualization rather than a direct diagnostic output requiring a definitive ground truth.
8. Sample Size for the Training Set:
The document does not provide information on the sample size for a training set. This is because the submission emphasizes the software's functionality and its substantial equivalence to predicate devices, not the development and validation of a new machine learning algorithm that would require a distinct training set.
9. How Ground Truth for the Training Set Was Established:
As no training set is mentioned, there is no information on how its ground truth was established.
Summary of Device and Evidence:
The Scenium 3.0 is a Picture Archiving and Communication System (PACS) and emission computed tomography system software. It's designed to aid clinicians in the assessment and quantification of pathologies from PET and SPECT brain scans by enabling automated analysis through quantification of mean pixel values in regions of interest and facilitating comparison with existing scans.
The study presented in the 510(k) summary is a demonstration of substantial equivalence to legally marketed predicate devices (Scenium 2.0, NeuroTrans3D, Brass). The evidence provided focuses on:
- Indications for Use: The Scenium 3.0's indications are similar to those of the predicate devices, aiding clinicians in assessment and quantification of PET/SPECT brain scans.
- Technological Characteristics: The software is described as similar in uses and applications to the predicate devices, all assisting clinicians with visual evaluation, assessment, and quantification.
- Safety and Effectiveness: The device is stated to be designed and manufactured under Quality System Regulations (21 CFR 820) and complies with relevant standards (21 CFR 892.1200, 21 CFR 892.2050, ISO 14971, ISO 62304).
The acceptance criteria for a 510(k) submission like this are implicit: demonstrating that the new device is as safe and effective as a legally marketed predicate device, and does not raise new questions of safety or effectiveness. The study's "acceptance" is the FDA's clearance, indicating that Siemens Medical Solutions USA, Inc. successfully demonstrated substantial equivalence. Quantitative performance data, clinical study details, and expert adjudication are generally not required for this type of submission unless the new device introduces novel technology or makes new clinical claims that fall outside the scope of predicate devices.