Voxel Dosimetry (00859873006226)
Voxel Dosimetry is a software application for nuclear medicine. Based on user input, Voxel Dosimetry calculates a volumetric map of the absorbed radiation dose distribution at the voxel level for patients who have been administered radioisotopes. Voxel Dosimetry presents the results to the user, and the results can be stored for future analysis.
Voxel Dosimetry is intended for patients of any age and gender undergoing radionuclide therapy.
Voxel Dosimetry is intended only for calculating dose for FDA-approved radiopharmaceuticals. Voxel Dosimetry should not be used to deviate from approved product dosing and administration instructions. Refer to the product's prescribing information for instructions.
Voxel Dosimetry is a standalone software application designed to assist the user in absorbed dose calculations at the voxel level, using a single volumetric image or a time series of images acquired after the treatment dose is administered to the patient.
Voxel Dosimetry can also perform absorbed dose calculations at the organ (VOI) level for the right and left kidneys, right and left lungs, liver, and spleen, using deep-learning-based semi-automatic segmentation. The organ segmentation results are always displayed overlaid on the CT and functional images for the user to review, and all or part of an organ region can be edited manually. The intended workflow is that the user reviews and corrects the segmentation before approving the final result.
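The summary does not state how the semi-automatic segmentation was scored against the manual reference. Purely as an illustration of how such a comparison is commonly quantified, the following sketch computes a Dice similarity coefficient between two hypothetical voxel masks (all names and values are assumptions, not from the document):

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity between two boolean voxel masks (1.0 = identical)."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy example: two overlapping cubic "organ" masks on a 32^3 voxel grid
auto = np.zeros((32, 32, 32), dtype=bool)    # stand-in automatic result
manual = np.zeros_like(auto)                 # stand-in manual reference
auto[8:24, 8:24, 8:24] = True
manual[10:26, 8:24, 8:24] = True
print(dice_coefficient(auto, manual))  # → 0.875
```

A threshold on this metric (e.g., a minimum Dice value per organ) is one plausible form the unstated acceptance criteria could take.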
The provided FDA 510(k) clearance letter and summary for Voxel Dosimetry (K243919) describe performance data to support substantial equivalence. While it states that algorithms perform as expected and meet acceptance criteria, it lacks specific details on the acceptance criteria themselves and the full experimental setup. The information is high-level and does not provide the granular data typically found in a full study report.
Based on the provided text, the following describes the acceptance criteria and the study intended to demonstrate that the device meets them, with explicit notes on what information is not present in the document.
Acceptance Criteria and Device Performance Study for Voxel Dosimetry
The Voxel Dosimetry software (K243919) underwent non-clinical performance evaluation of its algorithms to demonstrate that added features perform as expected and meet pre-set acceptance criteria, thereby supporting the safety and substantial equivalence of the device.
1. Table of Acceptance Criteria and Reported Device Performance
The FDA 510(k) summary lists several new features and states that testing showed the results met acceptance criteria. However, the specific quantitative acceptance criteria (e.g., a minimum Dice coefficient or a maximum mean dose error) are not explicitly stated in the provided document, and the numerical results for the reported device performance are likewise not provided. The document only indicates that the results "meet the acceptance criteria."
| Feature Tested | Acceptance Criteria (Quantified) | Reported Device Performance (Quantified) |
|---|---|---|
| Non-rigid alignment (CT studies) | Not specified in document | Met acceptance criteria (compared to manual method) |
| Semi-automatic organ segmentation (deep learning) | Not specified in document | Met acceptance criteria (compared to manual segmentation) |
| Lesion (region of interest) segmentation reproducibility | Not specified in document | Met acceptance criteria |
| Single time point studies (time-activity curve integration) | Not specified in document | Met acceptance criteria (compared to scientific computing language) |
| Dose calculation implementation (GPU vs. CPU) | Not specified in document | Met acceptance criteria |
| Organ-based dose calculation | Not specified in document | Met acceptance criteria (compared to state-of-the-art device) |
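The document also does not describe how agreement between the GPU and CPU dose implementations was measured. As an illustration only, a voxel-wise consistency check between two dose maps might look like the following sketch; the function name, tolerance, and data are hypothetical:

```python
import numpy as np

def dose_maps_agree(dose_a, dose_b, rel_tol=0.01, abs_floor=1e-3):
    """Check that two absorbed-dose maps (Gy) agree voxel-wise within a
    relative tolerance; voxels below abs_floor Gy are scaled to the floor
    to avoid dividing by near-zero doses."""
    a = np.asarray(dose_a, dtype=float)
    b = np.asarray(dose_b, dtype=float)
    scale = np.maximum(np.abs(a), abs_floor)
    max_rel = float(np.max(np.abs(a - b) / scale))
    return max_rel <= rel_tol, max_rel

# Toy example: a reference "CPU" map and a "GPU" map with small numeric drift
rng = np.random.default_rng(0)
cpu = rng.uniform(0.5, 5.0, size=(16, 16, 16))
gpu = cpu * (1 + rng.uniform(-0.002, 0.002, cpu.shape))
ok, worst = dose_maps_agree(cpu, gpu, rel_tol=0.01)
print(ok)  # True: worst-case deviation stays well under the 1% tolerance
```

An acceptance criterion of this shape (maximum voxel-wise relative deviation below a preset bound) would be one natural way to formalize "met acceptance criteria" for the GPU vs. CPU comparison.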
2. Sample Sizes Used for the Test Set and Data Provenance
The document does not specify the sample sizes used for the test sets in any of the performance evaluations. It also does not provide information on data provenance (e.g., the country of origin of the data, or whether the data were retrospective or prospective).
3. Number of Experts and Qualifications for Ground Truth Establishment
For the "manual segmentation" and "manual method" comparisons, human experts were presumably involved in establishing the ground truth. However, the document does not specify the number of experts or their qualifications (e.g., radiologist with X years of experience).
4. Adjudication Method for the Test Set
The document does not provide any information regarding the adjudication method used (e.g., 2+1, 3+1, none) for the test set ground truth establishment.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
The document does not mention if a multi-reader multi-case (MRMC) comparative effectiveness study was conducted to evaluate how much human readers improve with AI vs. without AI assistance. The performance evaluations described are primarily focused on the algorithmic performance and comparisons to existing methods or internal consistency.
6. Standalone (Algorithm Only) Performance
Yes, standalone performance was evaluated for the algorithms. The text explicitly states, "Non-clinical performance testing for added features shows that the algorithms perform as expected and results were within pre-set acceptance criteria." The comparisons mentioned (e.g., against manual segmentation, scientific computing language, state-of-the-art devices) indicate an assessment of the algorithm's output directly.
7. Type of Ground Truth Used
The types of ground truth used, as inferred from the text, include:
- Manual method/manual segmentation: For non-rigid alignment and semi-automatic organ segmentation, the device's output was compared against manual methods, implying human-derived ground truth.
- Scientific computing language: For single time point studies, integration results were compared to those from "a scientific computing language widely referenced in medical publications," which serves as a computational reference standard.
- State-of-the-art devices: For organ-based dose calculation, the device's results were compared to those from a "state-of-the-art device," essentially using an established, clinically validated device as ground truth.
- Internal consistency/reproducibility: For lesion segmentation reproducibility and the GPU vs. CPU comparison, the assessment relied on reproducibility and internal consistency rather than an external ground truth.
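The single-time-point evaluation concerns time-activity curve integration. The document does not describe the actual method, but as a hypothetical sketch of what such a computation involves, cumulated activity can be estimated by trapezoidal integration over the sampled points plus a mono-exponential tail beyond the last sample (all sample times and activities below are invented):

```python
import numpy as np

def cumulated_activity(times_h, activities_mbq):
    """Time-integrated activity (MBq·h): trapezoid rule over the measured
    points plus an exponential tail fitted to the last two samples."""
    t = np.asarray(times_h, dtype=float)
    a = np.asarray(activities_mbq, dtype=float)
    # Trapezoidal area over the sampled interval
    area = float(np.sum((a[:-1] + a[1:]) / 2.0 * np.diff(t)))
    # Effective decay constant from the last two samples
    lam = np.log(a[-2] / a[-1]) / (t[-1] - t[-2])
    tail = float(a[-1] / lam)  # analytic integral of the exponential tail
    return area + tail

# Toy example: activities (MBq) sampled at 1, 24, 48, and 96 h post-administration
print(round(cumulated_activity([1, 24, 48, 96], [100.0, 60.0, 35.0, 12.0]), 1))
# ≈ 4646.1 MBq·h
```

Comparing such an integral against the output of an independent scientific computing environment, as the summary describes, directly tests the numerical implementation rather than any clinical judgment.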
8. Sample Size for the Training Set
The document does not provide any information about the sample size used for the training set for the deep learning-based semi-automatic organ segmentation.
9. How Ground Truth for the Training Set Was Established
The document does not specify how the ground truth for the training set was established for the deep learning model. It only mentions that the semi-automatic organ segmentation was "using deep learning" and was "tested against manual segmentation" for the test set.