Search Results
Found 3 results
510(k) Data Aggregation
(198 days)
AI-Rad Companion Organs RT is a post-processing software intended to automatically contour predefined structures in DICOM CT and MR images using deep-learning-based algorithms.
Contours that are generated by AI-Rad Companion Organs RT may be used as input for clinical workflows including external beam radiation therapy treatment planning. AI-Rad Companion Organs RT must be used in conjunction with appropriate software such as Treatment Planning Systems and Interactive Contouring applications, to review, edit, and accept contours generated by AI-Rad Companion Organs RT.
The output of AI-Rad Companion Organs RT is intended to be used by trained medical professionals.
The software is not intended to automatically detect or contour lesions.
AI-Rad Companion Organs RT provides automatic segmentation of pre-defined structures such as Organs-at-risk (OAR) from CT or MR medical series, prior to dosimetry planning in radiation therapy. AI-Rad Companion Organs RT is not intended to be used as a standalone diagnostic device and is not a clinical decision-making software.
CT or MR series of images serve as input for AI-Rad Companion Organs RT and are acquired as part of a typical scanner acquisition. Once processed by the AI algorithms, the generated contours in DICOM-RTSTRUCT format are reviewed in a confirmation window, allowing the clinical user to confirm or reject the contours before they are sent to the target system. Optionally, the user may choose to transfer the contours directly to a configurable DICOM node (e.g., the TPS, which is the standard location for the planning of radiation therapy).
The output of AI-Rad Companion Organs RT must be reviewed and, where necessary, edited with appropriate software before accepting generated contours as input to treatment planning steps. The output of AI-Rad Companion Organs RT is intended to be used by qualified medical professionals. The qualified medical professional can perform a complementary manual editing of the contours or add any new contours in the TPS (or any other interactive contouring application supporting DICOM-RT objects) as part of the routine clinical workflow.
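Because the device's output is standard DICOM-RTSTRUCT, the generated structures can also be inspected programmatically before review in a TPS. Below is a minimal sketch using pydicom; the file name is hypothetical and this is purely illustrative, not part of the cleared workflow.

```python
# A minimal sketch (illustrative only) of inspecting a DICOM-RTSTRUCT file
# such as the one this device produces. Requires pydicom; the path is hypothetical.
import pydicom

ds = pydicom.dcmread("rtstruct_organs_rt.dcm")  # hypothetical output file

# Each StructureSetROI item names one auto-generated structure (e.g., an OAR).
for roi in ds.StructureSetROISequence:
    print(roi.ROINumber, roi.ROIName)

# Contour geometry lives in ROIContourSequence; each item references an ROI
# by number and holds per-slice contour point data.
n_slices = sum(len(getattr(rc, "ContourSequence", [])) for rc in ds.ROIContourSequence)
print(f"Total contour slices across all structures: {n_slices}")
```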
Here's a summary of the acceptance criteria and the study that proves the device meets them, based on the provided FDA 510(k) summary for AI-Rad Companion Organs RT:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria and reported performance are detailed for both MR and CT contouring algorithms.
MR Contouring Algorithm Performance
| Validation Testing Subject | Acceptance Criteria | Reported Device Performance (Average) |
|---|---|---|
| MR Contouring Organs | The average segmentation accuracy (Dice value) of all subject device organs should be equivalent or better than the overall segmentation accuracy of the predicate device. The overall fail rate for each organ/anatomical structure is smaller than 15%. | Dice [%]: 85.75% (95% CI: [82.85, 87.58]); ASSD [mm]: 1.25 (95% CI: [0.95, 2.02]); Fail [%]: 2.75% |
| (Compared to Reference Device MRCAT Pelvis (K182888)) | | AI-Rad Companion Organs RT VA50, all organs: 86% (83-88); AI-Rad Companion Organs RT VA50, common organs: 82% (78-84); MRCAT Pelvis (K182888), all organs: 77% (75-79) |
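For readers unfamiliar with the two metrics in the table, the sketch below shows one common way to compute Dice and ASSD for a pair of binary segmentation masks with NumPy/SciPy. This is an illustrative implementation, not the manufacturer's validation code; it assumes non-empty masks and a known voxel spacing.

```python
# Illustrative Dice and ASSD implementations for 3D binary masks.
import numpy as np
from scipy import ndimage

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def _surface(mask: np.ndarray) -> np.ndarray:
    """Boundary voxels: the mask minus its binary erosion."""
    m = mask.astype(bool)
    return m & ~ndimage.binary_erosion(m)

def assd(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """Average symmetric surface distance (mm, given voxel spacing in mm).
    Assumes both masks are non-empty."""
    sa, sb = _surface(a), _surface(b)
    # Distance from every voxel to the nearest surface voxel of the other mask.
    d_to_b = ndimage.distance_transform_edt(~sb, sampling=spacing)
    d_to_a = ndimage.distance_transform_edt(~sa, sampling=spacing)
    return float(np.concatenate([d_to_b[sa], d_to_a[sb]]).mean())
```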
CT Contouring Algorithm Performance
| Validation Testing Subject | Acceptance Criteria | Reported Device Performance (Average) |
|---|---|---|
| Organs in Predicate Device | All the organs segmented in the predicate device are also segmented in the subject device. The average (AVG) Dice score difference between the subject and predicate device is smaller than 3%. | The document states the subject device was "equivalent or had better performance than the predicate device," implicitly meeting this criterion, but does not give a specific numerical difference. |
| New Organs for Subject Device | A baseline value is defined by applying an error margin to the reference value: 5% for Dice and 0.1 mm for ASSD. The subject device must score better than this baseline in the selected reference metric (a worked example follows the table). | Regional averages: Head & Neck: Dice 76.5%; Head & Neck lymph nodes: Dice 69.2%; Thorax: Dice 82.1%; Abdomen: Dice 88.3%; Pelvis: Dice 84.0% |
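A minimal sketch of that pass/fail rule, with illustrative numbers that are not values from the submission. Treating lower ASSD as better is an assumption based on the metric's definition (it is a distance error).

```python
# Illustrative baseline check for newly added organs; numbers are made up.
def passes_new_organ_criterion(subject, reference, metric="dice"):
    if metric == "dice":
        baseline = reference - 0.05   # 5% error margin subtracted
        return subject > baseline     # higher Dice is better
    if metric == "assd":
        baseline = reference + 0.1    # 0.1 mm margin added
        return subject < baseline     # lower ASSD is better (assumption)
    raise ValueError(metric)

print(passes_new_organ_criterion(0.821, 0.84, "dice"))  # True: 0.821 > 0.79
print(passes_new_organ_criterion(1.30, 1.25, "assd"))   # True: 1.30 < 1.35
```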
2. Sample Sizes Used for the Test Set and Data Provenance
- MR Contouring Algorithm Test Set:
- Sample Size: N = 66
- Data Provenance: Retrospective study, data from multiple clinical sites across North America & Europe. The document further breaks this down for different sequences:
- T1 Dixon W: 30 datasets (USA: 15, EU: 15)
- T2 W TSE: 36 datasets (USA: 25, EU: 11)
- Manufacturer: All Siemens Healthineers scanners.
- CT Contouring Algorithm Test Set:
- Sample Size: N = 414
- Data Provenance: Retrospective study, data from multiple clinical sites across North America, South America, Asia, Australia, and Europe. This dataset is distributed across three cohorts:
- Cohort A: 73 datasets (Germany: 14, Brazil: 59) - Siemens scanners only
- Cohort B: 40 datasets (Canada: 40) - GE: 18, Philips: 22 scanners
- Cohort C: 301 datasets (NA: 165, EU: 44, Asia: 33, SA: 19, Australia: 28, Unknown: 12) - Siemens: 53, GE: 59, Philips: 119, Varian: 44, Others: 26 scanners
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- The ground truth annotations were "drawn manually by a team of experienced annotators mentored by radiologists or radiation oncologists."
- "Additionally, a quality assessment including review and correction of each annotation was done by a board-certified radiation oncologist using validated medical image annotation tools."
- The exact number of individual annotators or experts is not specified beyond "a team" and "a board-certified radiation oncologist." Their specific experience level (e.g., "10 years of experience") is not given beyond "experienced" and "board-certified."
4. Adjudication Method for the Test Set
- The document implies a review/adjudication process: initial annotations by the team of experienced annotators were reviewed and, where necessary, corrected by a senior expert ("a quality assessment including review and correction of each annotation was done by a board-certified radiation oncologist"). The specific number of reviewers per case (e.g., 2+1, 3+1) is not explicitly stated.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done
- No, the document does not describe a Multi-Reader Multi-Case (MRMC) comparative effectiveness study evaluating how much human readers improve with AI vs. without AI assistance. The validation studies focused on the standalone performance of the algorithm against expert-defined ground truth.
6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Yes, the performance validation described in section 10 ("Performance Software Validation") is a standalone (algorithm only) performance study. The metrics (Dice, ASSD, Fail Rate) compare the algorithm's output directly to the established ground truth. The device produces contours that must be reviewed and edited by trained medical professionals, but the validation tests the AI's direct output.
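The standalone averages quoted earlier come with 95% confidence intervals. The summary does not say how those intervals were computed; one common approach for per-case metrics is a nonparametric bootstrap over test cases, sketched here with stand-in data.

```python
# Illustrative bootstrap CI for a mean Dice score; the per-case values below
# are synthetic stand-ins (the submission does not report per-case data).
import numpy as np

rng = np.random.default_rng(0)
per_case_dice = rng.normal(0.86, 0.05, size=66).clip(0.0, 1.0)  # N=66, like the MR test set

# Resample cases with replacement and collect the mean of each resample.
boot_means = np.array([
    rng.choice(per_case_dice, size=per_case_dice.size, replace=True).mean()
    for _ in range(10_000)
])
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"mean Dice = {per_case_dice.mean():.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
```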
7. The Type of Ground Truth Used
- The ground truth used was expert consensus/manual annotation. It was established by "manual annotation" by "experienced annotators mentored by radiologists or radiation oncologists" and subsequently reviewed and corrected by a "board-certified radiation oncologist." Annotation protocols followed NRG/RTOG guidelines.
8. The Sample Size for the Training Set
- MR Contouring Algorithm Training Set:
- T1 VIBE/Dixon W: 219 datasets
- T2 W TSE: 225 datasets
- Prostate (T2W): 960 datasets
- CT Contouring Algorithm Training Set: The training dataset sizes vary per organ group:
- Cochlea: 215
- Thyroid: 293
- Constrictor Muscles: 335
- Chest Wall: 48
- LN Supraclavicular, Axilla Levels, Internal Mammaries: 228
- Duodenum, Bowels, Sigmoid: 332
- Stomach: 371
- Pancreas: 369
- Pulmonary Artery, Vena Cava, Trachea, Spinal Canal, Proximal Bronchus: 113
- Ventricles & Atriums: 706
- Descending Coronary Artery: 252
- Penile Bulb: 854
- Uterus: 381
9. How the Ground Truth for the Training Set Was Established
- For both training and validation data, the ground truth annotations were established using the "Standard Annotation Process." This involved:
- Annotation protocols defined following NRG/RTOG guidelines.
- Manual annotations drawn by a team of experienced annotators mentored by radiologists or radiation oncologists using an internal annotation tool.
- A quality assessment including review and correction of each annotation by a board-certified radiation oncologist using validated medical image annotation tools.
- The document explicitly states that the "training data used for the training of the algorithm is independent of the data used to test the algorithm."
(77 days)
Syngo Carbon Clinicals is intended to provide advanced visualization tools to prepare and process the medical image for evaluation, manipulation and communication of clinical data that was acquired by the medical imaging modalities (for example, CT, MR, etc.)
The software package is designed to support technicians and physicians in qualitative and quantitative measurements and in the analysis of clinical data that was acquired by medical imaging modalities.
An interface shall enable the connection between the Syngo Carbon Clinicals software package and the interconnected software solution for viewing, manipulation, communication, and storage of medical images.
Syngo Carbon Clinicals is a software-only medical device that provides dedicated advanced imaging tools for diagnostic reading. These tools can be called up via standard interfaces by any native/syngo-based viewing application (hosting application) that is part of the SYNGO medical device portfolio. They help prepare and process the medical image for evaluation, manipulation, and communication of clinical data that was acquired by medical imaging modalities (e.g., MR, CT, etc.).
Deployment Scenario: Syngo Carbon Clinicals is a plug-in that can be added to any SYNGO-based hosting application (for example, Syngo Carbon Space, syngo.via, etc.). The hosting application (native/syngo platform-based software) is not described within this 510(k) submission. The hosting device decides which tools from Syngo Carbon Clinicals are used; it does not need to host all of them, and a desired subset of the provided tools can be enabled or disabled through licenses.
The provided text is a 510(k) summary for Syngo Carbon Clinicals (K232856). It focuses on demonstrating substantial equivalence to a predicate device through comparison of technological characteristics and non-clinical performance testing. The document does not describe acceptance criteria for specific device performance metrics or a study that definitively proves the device meets those criteria through clinical trials or quantitative bench testing with specific reported performance values.
Instead, it relies heavily on evaluating the fit-for-use of algorithms (Lesion Quantification and Organ Segmentation) that were previously studied and cleared as part of predicate or reference devices, and ensuring their integration into the new device without modification to the core algorithms. The non-clinical performance testing for Syngo Carbon Clinicals focuses on verification and validation of changes/integrations, and conformance to relevant standards.
Therefore, many of the requested details about acceptance criteria and reported device performance cannot be extracted directly from this document. However, I can provide information based on what is available.
Acceptance Criteria and Study for Syngo Carbon Clinicals
Based on the provided 510(k) summary, formal acceptance criteria with specific reported performance metrics for the Syngo Carbon Clinicals device itself are not explicitly detailed in a comparative table against a clinical study's results. The submission primarily relies on the equivalency of its components to previously cleared devices and non-clinical verification and validation.
The "study" proving the device meets acceptance criteria is fundamentally a non-clinical performance testing, verification, and validation process, along with an evaluation of previously cleared algorithms from predicate/reference devices for "fit for use" in the subject device.
Here's a breakdown of the requested information based on the document:
1. Table of Acceptance Criteria and Reported Device Performance
As mentioned, a direct table of specific numerical acceptance criteria and a corresponding reported device performance from a clinical study is not present. The document describes acceptance in terms of:
| Feature/Algorithm | Acceptance Criteria (Implicit) | Reported Device Performance |
|---|---|---|
| Lesion Quantification Algorithm | "Fit for use" in the subject device, with design mitigations for drawbacks/limitations identified in previous studies of the predicate device. | "The results of phantom and reader studies conducted on the Lesion Quantification Algorithm, in the predicate device, were evaluated for fit for use in the subject device and it was concluded that the Algorithm can be integrated in the subject device with few design mitigations to overcome the drawbacks/limitations specified in these studies. These design mitigations were validated by non-Clinical performance testing and were found acceptable." (No new specific performance metrics are reported for Syngo Carbon Clinicals, but rather acceptance of the mitigations.) |
| Organ Segmentation Algorithm | "Fit for use" in the subject device without any modifications, based on previous studies of the reference device. | "The results of phantom and reader studies conducted on the Organ Segmentation Algorithm, in the reference device, were evaluated for fit for use in the subject device. And it was concluded that the Algorithm can be integrated in the subject device without any modifications." (No new specific performance metrics are reported for Syngo Carbon Clinicals.) |
| Overall Device Functionality | Conformance to specifications, safety, and effectiveness comparable to the predicate device. | "Results of all conducted testing were found acceptable in supporting the claim of substantial equivalence." (General statement, no specific performance metrics.) Consistent with "Moderate Level of Concern" software. |
| Software Verification & Validation | All software specifications met the acceptance criteria. | "The testing results support that all the software specifications have met the acceptance criteria." (General statement.) |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: Not explicitly stated for specific test sets in this document for Syngo Carbon Clinicals. The evaluations of the Lesion Quantification and Organ Segmentation algorithms refer to "phantom and reader studies" from their respective predicate/reference devices, but details on the sample sizes of those original studies are not provided here.
- Data Provenance: Not specified. The original "phantom and reader studies" for the algorithms were likely internal to the manufacturers or collaborators, but this document does not detail their origin (e.g., country, specific institutions). The text indicates these were retrospective studies (referring to prior evaluations).
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of those Experts
- Number of Experts: Not specified. The document mentions "reader studies" were conducted for the predicate/reference devices' algorithms, implying involvement of human readers/experts, but the number is not stated.
- Qualifications of Experts: Not specified. It can be inferred that these would be "trained medical professionals" as per the intended user for the device, but specific qualifications (e.g., radiologist with X years of experience) are not provided.
4. Adjudication Method for the Test Set
- Adjudication Method: Not specified for the historical "reader studies" referenced. This document does not detail the methodology for establishing ground truth or resolving discrepancies among readers if multiple readers were involved.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
- MRMC Comparative Effectiveness Study: The document itself states, "No clinical studies were carried out for the product, all performance testing was conducted in a non-clinical fashion as part of verification and validation activities of the medical device." Therefore, no MRMC comparative effectiveness study for human readers with and without AI assistance for Syngo Carbon Clinicals was performed or reported in this submission. The device is a set of advanced visualization tools, not an AI-assisted diagnostic aid that directly impacts reader performance in a comparative study mentioned here.
6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Standalone Performance: The core algorithms (Lesion Quantification and Organ Segmentation) were evaluated in "phantom and reader studies" as part of their previous clearances (predicate/reference devices). While specific standalone numerical performance metrics for these algorithms (e.g., sensitivity, specificity, accuracy) are not reported in this document, the mention of "phantom" studies suggests a standalone evaluation component. The current submission, however, evaluates these previously cleared algorithms for "fit for use" within the new Syngo Carbon Clinicals device, implying their standalone performance was considered acceptable from their original clearances.
7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.)
- Type of Ground Truth: Not explicitly detailed. The referenced "phantom and reader studies" imply that for phantoms, the ground truth would be known (e.g., physical measurements), and for reader studies, it would likely involve expert consensus or established clinical benchmarks. However, the exact method for establishing ground truth in those original studies is not provided here.
8. The Sample Size for the Training Set
- Sample Size for Training Set: Not specified in this 510(k) summary. The document mentions that the deep learning algorithm for organ segmentation was "cleared as part of the reference device syngo.via RT Image suite (K220783)." This implies that any training data for this algorithm would have been part of the K220783 submission, not detailed here for Syngo Carbon Clinicals.
9. How the Ground Truth for the Training Set was Established
- Ground Truth for Training Set: Not specified in this 510(k) summary. As with the training set size, this information would have been part of the original K220783 submission for the organ segmentation algorithm and is not detailed in this document.
(146 days)
syngo.via molecular imaging (MI) workflows comprise medical diagnostic applications for viewing, manipulation, quantification, analysis and comparison of medical images from single or multiple imaging modalities with one or more time-points. These workflows support functional data, such as positron emission tomography (PET) or nuclear medicine (NM), as well as anatomical datasets, such as computed tomography (CT) or magnetic resonance (MR). syngo.via MI workflows can perform harmonization of SUV (PET) across different PET systems or different PET reconstruction methods.
syngo.via MI workflows are intended to be utilized by appropriately trained health care professionals to aid in the management of diseases, including those associated with oncology, cardiology, neurology, and organ function. The images and results produced by the syngo.via MI workflows can also be used by the physician to aid in radiotherapy treatment planning.
syngo.via MI Workflows (including Scenium and syngo MBF applications) is a multi-modality postprocessing software only medical device intended to aid in the management of diseases, including those associated with oncology, cardiology, neurology, and organ function. The syngo.via MI Workflows applications are part of a larger syngo.via client/server system which is intended to be installed on common IT hardware. The hardware itself is not seen as part of the syngo.via MI Workflows medical device.
The syngo.via MI Workflows software addresses the needs of the following typical users of the product:
- . Reading Physician / Radiologist – Reading physicians are doctors who are trained in interpreting patient scans from PET, SPECT and other modality scanners. They are highly detail oriented and analyze the acquired images for abnormalities, enabling ordering physicians to accurately diagnose and treat scanned patients. Reading physicians serve as a liaison between the ordering physician and the technologists, working closely with both.
- . Technologist – Nuclear medicine technologists operate nuclear medicine scanners such as PET and SPECT to produce images of specific areas and states of a patient's anatomy by administering radiopharmaceuticals to patients orally or via injection. In addition to administering the scan, the technologist must properly select the scan protocol, keep the patient calm and relaxed, monitor the patient's physical health during the protocol and evaluate the quality of the images. Technologists work very closely with physicians, providing them with quality-checked scan images.
The software has been designed to integrate the clinical workflow for the above users into a server-based system that is consistent in design and look with the base syngo.via platform and other syngo.via software applications. This ensures a similar look and feel for radiologists who may review multiple types of studies from imaging modalities other than Molecular Imaging, such as MR.
syngo.via MI workflows software supports integration through DICOM of functional data, such as positron emission tomography (PET) or nuclear medicine (NM) data, as well as anatomical datasets, such as computed tomography (CT) or magnetic resonance (MR).
Although data is automatically imported into the server based on predefined configurations through the hospital IT system, data can also be manually imported from external media, including CD, external mass storage devices, etc.
The Siemens syngo.via platform and the applications that reside on it, including syngo.via MI Workflows, are distributed via electronic medium. The Instructions for Use is also delivered via electronic medium.
syngo.via MI Workflows includes two workflows (syngo.MM Oncology and syngo.MI General) as well as the Scenium neurology software application and the syngo MBF cardiology software application, which are launched from the OpenApps framework within the MI General workflow.
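As background to the SUV harmonization capability noted in the indications for use: the body-weight SUV that such harmonization operates on is computed from three acquisition quantities. The sketch below shows only the standard SUVbw formula; it is not the device's harmonization method, which is not detailed in this summary.

```python
# A minimal sketch of the standard body-weight SUV formula (background only;
# not the device's SUV harmonization method).
def suv_bw(activity_bq_per_ml: float, injected_dose_bq: float, weight_kg: float) -> float:
    """SUVbw = tissue activity concentration / (injected dose / body weight).
    Weight is converted to grams so the ratio is unitless under the usual
    ~1 g/mL tissue density assumption."""
    return activity_bq_per_ml / (injected_dose_bq / (weight_kg * 1000.0))

# Illustrative numbers: 5 kBq/mL uptake, 370 MBq injected, 70 kg patient.
print(f"SUVbw = {suv_bw(5_000.0, 370e6, 70.0):.2f}")  # ≈ 0.95
```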
Here's a breakdown of the acceptance criteria and the study information for the Syngo.via MI Workflows, Scenium, and Syngo MBF device, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The document primarily focuses on two areas of performance evaluation: Organ Segmentation and Tau Workflow Support. The acceptance criteria for Organ Segmentation are explicitly stated, while for Tau Workflow, the criteria are implied through correlation and agreement with existing methods.
| Feature | Acceptance Criteria | Reported Device Performance |
|---|---|---|
| Organ Segmentation | All organs must meet criteria for either the average DICE coefficient or the average symmetric surface distance (ASSD: average surface distance between the algorithm result and the manual ground truth annotation). | All organs met criteria for either the average DICE coefficient or the ASSD. (Specific numerical values for DICE or ASSD are not provided in this summary.) |
| Tau Workflow Support (SUVRs) | Good correlations and agreement with an MR-based method and MR-based segmentations for SUVRs calculated on individual and composite Braak VOIs using the new pipeline and masks. | Comparisons showed good correlations and agreement between the two sets of values (new pipeline vs. MR-based method) on more than 700 flortaucipir images from ADNI. |
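For context on the tau workflow row above: an SUVR is the ratio of mean uptake in a target VOI to mean uptake in a reference region. The sketch below is illustrative only; the cerebellar reference is an assumption (a common choice for flortaucipir), and this is not the cleared pipeline.

```python
# Illustrative SUVR computation over a target VOI and a reference region.
import numpy as np

def suvr(pet: np.ndarray, target_mask: np.ndarray, reference_mask: np.ndarray) -> float:
    """SUVR = mean uptake in the target VOI / mean uptake in the reference
    region (cerebellar gray is a common flortaucipir choice; assumption here)."""
    t = pet[target_mask.astype(bool)].mean()
    r = pet[reference_mask.astype(bool)].mean()
    return float(t / r)

# Illustrative usage with random data standing in for a PET volume and
# Braak-composite / reference masks.
rng = np.random.default_rng(1)
pet = rng.random((64, 64, 64))
braak_voi = np.zeros_like(pet, dtype=bool); braak_voi[20:30, 20:30, 20:30] = True
cereb_ref = np.zeros_like(pet, dtype=bool); cereb_ref[40:50, 40:50, 10:20] = True
print(f"SUVR = {suvr(pet, braak_voi, cereb_ref):.2f}")
```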
2. Sample Size Used for the Test Set and Data Provenance
- Organ Segmentation: Not explicitly stated. The algorithm used was "originally cleared within syngo.via RT Image Suite (K201444) and carried into the reference predicate device (syngo.via RT Image Suite, K220783)." This suggests the data provenance for this algorithm was tied to those previous clearances. The document implies the segmentation quality was assessed, but the specific test set size for this current submission is not provided.
- Tau Workflow Support: "more than 700 flortaucipir images from ADNI".
- Data Provenance (Tau Workflow): "ADNI" (Alzheimer's Disease Neuroimaging Initiative). This is a prospective, multi-center, North American study. The exact countries of origin of the individual images are not specified but ADNI is a U.S. led initiative with international participation.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
- Organ Segmentation: For manual ground truth annotation, the number of experts and their qualifications are not specified.
- Tau Workflow Support: The ground truth for the "MR-based method and MR-based segmentations" used for comparison is from existing methods mentioned in the references. The number and qualifications of experts involved in establishing this historical ground truth are not specified in this document.
4. Adjudication Method for the Test Set
- Organ Segmentation: An adjudication method is not explicitly stated. The process involved "comparing a manually annotated ground truth with the algorithm result." It's common for a single expert or a consensus of experts to establish manual ground truth, but the method for resolving discrepancies or reaching consensus is not detailed here.
- Tau Workflow Support: An adjudication method is not explicitly stated. The comparison was made between the device's calculated SUVRs and those from an "MR-based method and MR-based segmentations." This implies a comparison against a pre-established or validated method rather than a multi-reader adjudication specifically for this study.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- No MRMC comparative effectiveness study was done for this submission. The document explicitly states: "Clinical testing was not conducted for this submission." The evaluations focused on standalone performance and agreement with existing methods.
6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Performance
- Yes, standalone performance was done.
- Organ Segmentation: The segmentation algorithm's performance (DICE coefficient and ASSD) was assessed by comparing its output directly against manually annotated ground truth. This is a standalone evaluation.
- Tau Workflow Support: The "SUVRs calculated on individual and composite Braak VOIs using our pipeline and our masks" were compared to an "MR-based method." This directly assesses the algorithm's standalone quantification capabilities.
7. Type of Ground Truth Used
- Organ Segmentation: Expert consensus (manual annotation) is implied ("manually annotated ground truth").
- Tau Workflow Support: Reference method (MR-based method and MR-based segmentations) and potentially expert consensus that established those reference methods. The references provided suggest established research pipelines for flortaucipir processing and ADNI publications, which would typically involve expert interpretation and validation.
8. Sample Size for the Training Set
- Organ Segmentation: The document states the algorithm is the "same algorithm originally cleared within syngo.via RT Image Suite (K201444) and carried into the reference predicate device (syngo.via RT Image Suite, K220783)." The training set size for this re-used algorithm is not specified in this document, but would have been part of the original clearance.
- Tau Workflow Support: The training set size for the tau quantification workflow is not specified.
9. How the Ground Truth for the Training Set Was Established
- Organ Segmentation: The method for establishing ground truth for the training set of the deep-learning algorithm is not specified in this document. Given it's a deep-learning algorithm, it would typically involve expert-labeled data, but the details are not provided.
- Tau Workflow Support: The method for establishing ground truth for the training set (if applicable) for the tau quantification workflow is not specified. It mentions using the AAL atlas as a basis for defining Braak regions, which is a pre-existing anatomical atlas.