510(k) Data Aggregation (164 days)
Opulus™ Lymphoma Precision is a software device that uses a machine learning-based algorithm to automate segmentation and visualization of lesions along with automation of measurement of total metabolic tumor volume within whole-body FDG-PET/CT scans of patients with FDG-avid lymphomas.
Opulus™ Lymphoma Precision is used to assist trained interpreting physicians with visualization of suspected lesions and calculation of total volume of all lesions in a body. This information can be used in addition to the standard of care image interpretation of FDG-PET/CT scans. Opulus™ Lymphoma Precision annotated images can be reviewed by an appropriately trained physician.
The algorithm is assistive and requires review by a radiologist, who makes the final decision on FDG-PET/CT image interpretation.
Opulus™ Lymphoma Precision is an assistive tool which can be used by physicians to automate the labor intensive task of quantifying disease burden in whole-body FDG-PET/CT scans of patients already diagnosed with FDG-avid lymphomas. It does so by using a machine learning methodology to localize and segment FDG-PET activity ('hot-spots' on FDG-PET scans) of lymphoma lesions within a PET/CT image. Opulus™ Lymphoma Precision does not screen for or diagnose lymphoma. It is intended for patients already diagnosed with FDG-avid lymphoma.
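The summary does not describe the algorithm's internals. As a rough, hypothetical illustration of what "hot-spot" segmentation means, the sketch below applies a fixed SUV ≥ 4.0 cutoff, a conventional thresholding rule from the TMTV literature, not the device's machine learning method; the function name and threshold are assumptions, not taken from the submission.

```python
import numpy as np

def threshold_hotspots(suv: np.ndarray, suv_threshold: float = 4.0) -> np.ndarray:
    """Binary hot-spot mask from an SUV volume using a fixed SUV cutoff.

    This is a conventional fixed-threshold baseline, NOT the device's
    machine-learning segmentation method.
    """
    return suv >= suv_threshold

# Synthetic SUV volume: low background with one avid 2x2x2 focus
suv = np.full((4, 4, 4), 1.5)
suv[1:3, 1:3, 1:3] = 6.0
print(int(threshold_hotspots(suv).sum()))  # 8 voxels above threshold
```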
The following key functionalities are performed by the algorithm to accomplish the intended use:
- localization and segmentation of lesions
- visualization of lymphoma-related tumor lesions
- quantification of Total Metabolic Tumor Volume (TMTV)
Opulus™ Lymphoma Precision aids the efficiency of medical professionals by automatically generating tumor boundary Regions of Interest (ROIs) and quantifying TMTV, which is a tedious task when performed manually. The physician has the option to accept/reject the output generated by the device. The user does not have the ability to modify the device output.
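To make the TMTV quantity concrete: given a binary lesion mask and the scan's voxel spacing, TMTV is simply the summed volume of all segmented voxels. A minimal sketch (function and variable names are hypothetical, not from the device):

```python
import numpy as np

def total_metabolic_tumor_volume(mask: np.ndarray, voxel_spacing_mm: tuple) -> float:
    """Return TMTV in cm^3 from a binary lesion mask and voxel spacing in mm."""
    voxel_volume_mm3 = float(np.prod(voxel_spacing_mm))  # mm^3 per voxel
    tmtv_mm3 = int(mask.astype(bool).sum()) * voxel_volume_mm3
    return tmtv_mm3 / 1000.0  # mm^3 -> cm^3

# Example: a 10x10x10-voxel lesion on a 2x2x2 mm grid -> 8000 mm^3 = 8 cm^3
mask = np.zeros((64, 64, 64), dtype=bool)
mask[20:30, 20:30, 20:30] = True
print(total_metabolic_tumor_volume(mask, (2.0, 2.0, 2.0)))  # 8.0
```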
Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) clearance letter for Opulus™ Lymphoma Precision:
1. Table of Acceptance Criteria and Reported Device Performance
The provided document does not explicitly present a table of "acceptance criteria" with pass/fail thresholds. However, it does state the objectives of the performance validation study and the results that demonstrate the device's agreement and accuracy. We'll infer the implicit acceptance criteria from these objectives.
| Performance Metric | Acceptance Criteria (Implicit) | Reported Device Performance |
|---|---|---|
| Agreement of TMTV (cubic root) | Demonstrate an acceptable difference between aTMTV (algorithm) and mTMTV (manual ground truth). | Mean difference: -0.20 cm (95% CI: -0.50, 0.10) for TMTV < 2.5 cm; mean difference: -0.23 cm (95% CI: -0.38, -0.09) for TMTV ≥ 2.5 cm |
| Accuracy in lesion segmentation (Dice Similarity Coefficient, DSC) | Demonstrate accuracy in lesion segmentation by comparing algorithm-generated contours against ground truth contours. | Mean DSC score: 0.70 (95% CI: 0.66, 0.73) |
Note: The document states the objectives were to "demonstrate agreement" and "accuracy," implying that the reported performance values were deemed acceptable by the FDA for clearance. Specific numerical thresholds for acceptance were not explicitly presented in this document.
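Both reported metrics are standard and easy to reproduce on synthetic masks. The sketch below computes a Dice similarity coefficient and the cube-root TMTV difference (the scale on which agreement is reported in the table above); all names and data are illustrative, not from the submission:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * intersection / denom if denom else 1.0

def cubic_root_difference(atmtv_cm3: float, mtmtv_cm3: float) -> float:
    """Difference of cube roots (in cm) between algorithm and manual TMTV."""
    return atmtv_cm3 ** (1 / 3) - mtmtv_cm3 ** (1 / 3)

# Two synthetic 4x4x4-voxel lesions, offset by one voxel along the first axis
a = np.zeros((8, 8, 8), dtype=bool); a[2:6, 2:6, 2:6] = True
b = np.zeros((8, 8, 8), dtype=bool); b[3:7, 2:6, 2:6] = True
print(round(dice(a, b), 3))                         # 0.75
print(round(cubic_root_difference(8.0, 27.0), 2))   # -1.0 (2 cm - 3 cm)
```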
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: 182 unique patients' FDG-PET/CT scans.
- Data Provenance:
- Country of Origin: Multiple geographical locations across the U.S., Canada, Europe, Australia, and Taiwan.
- Retrospective/Prospective: Not explicitly stated, but the description of "fully independent dataset" collected from "various gender, ethnicity and age" and "scanners from different manufacturers and models" suggests a retrospective collection of existing scans.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- Number of Experts: Three radiologists/nuclear medicine physicians were used for each ground truth establishment. The document also mentions a "pool of nine radiologists" from which these three were randomly selected.
- Qualifications of Experts: Radiologists/nuclear medicine physicians with expertise in interpreting PET/CT scans from patients with FDG-avid lymphoma.
4. Adjudication Method for the Test Set
The ground truth for each scan was based on the "independent input from three radiologists randomly selected from a pool of nine radiologists." This suggests a multi-reader consensus approach, but the exact adjudication mechanism (e.g., 2+1, 3+1, or predefined rules for resolving disagreement) is not specified in the document.
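As an illustration of one plausible consensus rule (the document does not specify which was used), a voxel-wise 2-of-3 majority vote over three reader masks could look like:

```python
import numpy as np

def majority_vote(masks: list) -> np.ndarray:
    """Voxel-wise majority vote over an odd number of binary reader masks.

    A hypothetical consensus rule; the actual adjudication method used for
    this device's ground truth is not described in the 510(k) summary.
    """
    stacked = np.stack([np.asarray(m, dtype=bool) for m in masks])
    return stacked.sum(axis=0) > (len(masks) // 2)

# Three readers' annotations for four voxels
r1 = np.array([1, 1, 0, 0], dtype=bool)
r2 = np.array([1, 0, 1, 0], dtype=bool)
r3 = np.array([1, 1, 1, 0], dtype=bool)
print(majority_vote([r1, r2, r3]).astype(int))  # [1 1 1 0]
```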
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done
- Was it done?: No, an MRMC comparative effectiveness study that specifically measures the improvement of human readers with AI vs. without AI assistance was not described in this 510(k) summary. The study focused on the algorithm's performance against expert-established ground truth, not on human-in-the-loop performance.
- Effect Size: Not applicable, as an MRMC comparative effectiveness study was not presented.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Study was Done
- Was it done?: Yes. The performance evaluation describes the algorithm's output (aTMTV and segmentation contours) being compared directly against the established ground truth (mTMTV and ground truth contours). This is a standalone performance assessment.
- Performance Metrics: Mean difference in TMTV (cubic root) and Dice Similarity Coefficient (DSC).
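The summary reports 95% confidence intervals for the mean TMTV difference but does not say how they were computed. One common nonparametric choice is a percentile bootstrap; the sketch below assumes that method, with synthetic paired differences (none of these values are from the study):

```python
import numpy as np

def bootstrap_mean_ci(diffs, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile-bootstrap CI for the mean of paired differences.

    A common nonparametric approach; whether the study used it is unknown.
    """
    rng = np.random.default_rng(seed)
    diffs = np.asarray(diffs, dtype=float)
    boot_means = np.array([
        rng.choice(diffs, size=diffs.size, replace=True).mean()
        for _ in range(n_boot)
    ])
    lo, hi = np.quantile(boot_means, [alpha / 2, 1 - alpha / 2])
    return diffs.mean(), (lo, hi)

# Synthetic per-scan differences (algorithm minus manual, cube-root cm scale)
diffs = [-0.3, -0.1, -0.25, 0.05, -0.2, -0.35, -0.15, -0.1]
mean, (lo, hi) = bootstrap_mean_ci(diffs)
print(round(mean, 3), round(lo, 3), round(hi, 3))
```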
7. The Type of Ground Truth Used
- Type of Ground Truth: Expert consensus, established by three radiologists/nuclear medicine physicians with expertise in interpreting PET/CT scans from patients with FDG-avid lymphoma.
8. The Sample Size for the Training Set
- Sample Size: Not explicitly stated. The document mentions that the performance validation study data was "a fully independent dataset from the training and characterization datasets," but it does not provide the specific sample size for the training set itself.
9. How the Ground Truth for the Training Set was Established
- How Established: Not explicitly detailed for the training set. The document states that the performance validation dataset "was not available to the algorithm developers during the algorithm training," implying the training data also had ground truth established, but the method for the training set's ground truth is not described in this summary. It's common practice for training data ground truth to also be established by experts, possibly with varying levels of adjudication or annotation protocols compared to the test set, but this information is absent in the provided text.