TumorSight Viz is intended to be used in the visualization and analysis of breast magnetic resonance imaging (MRI) studies for patients with biopsy-proven early-stage or locally advanced breast cancer. TumorSight Viz supports evaluation of dynamic MR data acquired from breast studies during contrast administration. TumorSight Viz performs processing functions (such as image registration, subtractions, measurements, 3D renderings, and reformats).
TumorSight Viz also includes user-configurable features for visualizing findings in breast MRI studies. Patient management decisions should not be made based solely on the results of TumorSight Viz.
TumorSight Viz is an image processing system designed to assist in the visualization and analysis of breast DCE-MRI studies.
TumorSight Viz reads DICOM magnetic resonance images, processes them, and displays the results in the TumorSight web application.
Available features support:
- Visualization (standard image viewing tools, MIPs, and reformats)
- Analysis (registration, subtractions, kinetic curves, parametric image maps, segmentation, and 3D volume rendering)
- Communication and storage (DICOM import, retrieval, and study storage)
The TumorSight system consists of proprietary software developed by SimBioSys, Inc. hosted on a cloud-based platform and accessed on an off-the-shelf computer.
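As a rough illustration only (not SimBioSys's actual pipeline), the DICOM import step described above can be sketched with the open-source `pydicom` library; the directory path, function name, and slice-sorting convention here are assumptions for the example:

```python
# Illustrative sketch of a DICOM series import (hypothetical; not vendor code).
from pathlib import Path

import numpy as np
import pydicom


def load_mr_series(series_dir: str) -> np.ndarray:
    """Load one MR series into a (slices, rows, cols) float array."""
    datasets = [pydicom.dcmread(p) for p in Path(series_dir).glob("*.dcm")]
    # Sort slices by the through-plane component of ImagePositionPatient
    # (assumes a consistent scan axis; real pipelines derive the axis from
    # the ImageOrientationPatient tag).
    datasets.sort(key=lambda ds: float(ds.ImagePositionPatient[2]))
    return np.stack([ds.pixel_array for ds in datasets]).astype(np.float32)


volume = load_mr_series("/path/to/series")  # hypothetical path
print(volume.shape)
```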
Here's a breakdown of the acceptance criteria and study details for the TumorSight Viz device, based on the provided document:
1. Acceptance Criteria and Reported Device Performance
The acceptance criteria are implicit rather than stated as numeric thresholds: the device's measurement error should be comparable to the variability observed among expert radiologists and to the predicate device's error. The study aims to demonstrate that the error in measurements produced by TumorSight Viz is consistent with inter-radiologist variability.
The table below summarizes the performance metrics from the validation testing, which serves as the reported device performance against the implicit acceptance criterion of being comparable to inter-radiologist variability.
| Measurement Description | Units | Acceptance Criteria (Implicit: Comparable to Inter-Radiologist Variability) | Reported Device Performance (Mean Abs. Error ± Std. Dev.) |
|---|---|---|---|
| Tumor Volume (n=184) | cubic centimeters (cc) | Error consistent with inter-radiologist variability (NA for direct comparison) | 5.22 ± 15.58 |
| Tumor-to-breast volume ratio (n=184) | % | Error consistent with inter-radiologist variability (NA for direct comparison) | 0.51 ± 1.48 |
| Tumor longest dimension (n=202) | centimeters (cm) | Error consistent with inter-radiologist variability | 1.60 ± 1.93 |
| Tumor-to-nipple distance (n=200) | centimeters (cm) | Error consistent with inter-radiologist variability | 1.20 ± 1.37 |
| Tumor-to-skin distance (n=202) | centimeters (cm) | Error consistent with inter-radiologist variability | 0.63 ± 0.61 |
| Tumor-to-chest distance (n=202) | centimeters (cm) | Error consistent with inter-radiologist variability | 0.91 ± 1.14 |
| Tumor center of mass (n=184) | centimeters (cm) | Error consistent with inter-radiologist variability (NA for direct comparison) | 0.72 ± 1.42 |
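The "Mean Abs. Error ± Std. Dev." figures above are standard paired-error statistics. A minimal sketch of how such figures can be computed from paired device and ground-truth measurements follows; the numbers below are made up for illustration, not taken from the submission:

```python
import numpy as np


def mae_with_std(device: np.ndarray, truth: np.ndarray) -> tuple[float, float]:
    """Mean and standard deviation of the absolute per-case error."""
    abs_err = np.abs(device - truth)
    return float(abs_err.mean()), float(abs_err.std())


# Hypothetical tumor-volume measurements (cc) for three cases.
device_cc = np.array([12.4, 3.1, 48.0])
truth_cc = np.array([10.9, 2.8, 55.2])
mean_err, std_err = mae_with_std(device_cc, truth_cc)
print(f"{mean_err:.2f} cc ± {std_err:.2f} cc")
```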
Segmentation Accuracy:
| Performance Measurement | Metric | Acceptance Criteria (Implicit: Adequate for intended use) | Reported Device Performance (Mean ± Std. Dev.) |
|---|---|---|---|
| Tumor segmentation (n=184) | Volumetric Dice | Adequate for intended use | 0.75 ± 0.24 |
| Tumor segmentation (n=184) | Surface Dice | Adequate for intended use | 0.88 ± 0.24 |
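For reference, volumetric Dice is 2|A∩B| / (|A| + |B|) over the voxel masks, and surface Dice counts the fraction of each mask's surface lying within a tolerance of the other's surface. A minimal sketch of both metrics using `numpy` and `scipy` is below; the document does not state the tolerance used, so the one-voxel default here is an assumption, and the masks are assumed non-empty:

```python
import numpy as np
from scipy import ndimage


def volumetric_dice(a: np.ndarray, b: np.ndarray) -> float:
    """Volumetric Dice: 2*|A∩B| / (|A| + |B|). Assumes non-empty masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())


def _surface(mask: np.ndarray) -> np.ndarray:
    """Surface voxels: the mask minus its binary erosion."""
    return mask & ~ndimage.binary_erosion(mask)


def surface_dice(a: np.ndarray, b: np.ndarray, tol_voxels: float = 1.0) -> float:
    """Fraction of surface voxels within `tol_voxels` of the other surface."""
    sa, sb = _surface(a.astype(bool)), _surface(b.astype(bool))
    # Euclidean distance from every voxel to the nearest surface voxel of the
    # other mask (distance is 0 on that surface itself).
    dist_to_b = ndimage.distance_transform_edt(~sb)
    dist_to_a = ndimage.distance_transform_edt(~sa)
    close = (dist_to_b[sa] <= tol_voxels).sum() + (dist_to_a[sb] <= tol_voxels).sum()
    return close / (sa.sum() + sb.sum())
```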
Comparison to Predicate Device and Inter-Radiologist Variability:
| Performance Measurement | N | Metric | Predicate/TumorSight Viz (Mean ± Std. Dev.) | TumorSight Viz/Ground Truth (Mean ± Std. Dev.) | Predicate/Ground Truth (Mean ± Std. Dev.) | Inter-radiologist Variability (Mean ± Std. Dev.) |
|---|---|---|---|---|---|---|
| Longest Dimension | 197 | Abs. Distance Error | 1.33 cm ± 1.80 cm | 1.59 cm ± 1.93 cm | 1.27 cm ± 1.34 cm | 1.30 cm ± 1.34 cm |
| Tumor to Skin | 197 | Abs. Distance Error | 0.24 cm ± 0.39 cm | 0.61 cm ± 0.60 cm | 0.55 cm ± 0.48 cm | 0.51 cm ± 0.48 cm |
| Tumor to Chest | 197 | Abs. Distance Error | 0.64 cm ± 1.13 cm | 0.89 cm ± 1.12 cm | 0.69 cm ± 0.88 cm | 0.97 cm ± 1.16 cm |
| Tumor to Nipple | 195 | Abs. Distance Error | 0.89 cm ± 1.03 cm | 1.15 cm ± 1.30 cm | 1.01 cm ± 1.23 cm | 1.03 cm ± 1.30 cm |
| Tumor Volume | 197 | Abs. Volume Error | 4.42 cc ± 11.03 cc | 5.22 cc ± 15.58 cc | 6.50 cc ± 21.40 cc | NA |
The study concludes that "all tests met the acceptance criteria, demonstrating adequate performance for our intended use," and that the "differences in error between the mean absolute errors (MAE) for the predicate and subject device are clinically acceptable because they are on the order of one to two voxels for the mean voxel size in the dataset. These differences are clinically insignificant."
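As a back-of-the-envelope check of that voxel-scale reasoning (the mean voxel size of the dataset is not given in this summary, so the 1.5 mm figure below is an assumption):

```python
# Hypothetical voxel size; the summary does not report the dataset's mean.
voxel_mm = 1.5

# Longest-dimension MAE: subject device (1.59 cm) vs. predicate (1.33 cm),
# taken from the comparison table above.
diff_mm = (1.59 - 1.33) * 10
print(f"MAE difference ≈ {diff_mm:.1f} mm ≈ {diff_mm / voxel_mm:.1f} voxels")
```

Under that assumed voxel size the difference is about 1.7 voxels, which would be consistent with the quoted "one to two voxels" rationale.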
2. Sample Size and Data Provenance
- Test Set (Validation Dataset) Sample Size: 216 patients, corresponding to 217 samples (when accounting for bilateral disease).
- Data Provenance:
- Country of Origin: United States (data from more than seven clinical sites).
- Retrospective/Prospective: Not explicitly stated, but the description of data collection and ground-truth review suggests a retrospective design: the data were "obtained" and "collected," implying pre-existing studies.
3. Number of Experts and Qualifications for Ground Truth
- Number of Experts: Three (3)
- Qualifications: U.S. board-certified radiologists. No specific years of experience are mentioned.
4. Adjudication Method for the Test Set
- Method: Majority Consensus (2+1). For each case, two radiologists independently reviewed measurements and segmentation appropriateness. "In cases where the two radiologists did not agree on whether the segmentation was appropriate, a third radiologist provided an additional opinion and established a ground truth by majority consensus."
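The 2+1 flow reduces to simple majority logic. A toy sketch follows; the function and its signature are illustrative, inferred from the description above rather than taken from the submission:

```python
from typing import Optional


def adjudicate(reader1: bool, reader2: bool, reader3: Optional[bool] = None) -> bool:
    """Consensus 'segmentation appropriate?' label via 2+1 adjudication.

    Two readers review independently; if they disagree, a third reader's
    opinion decides by majority (the third vote matches one of the first two).
    """
    if reader1 == reader2:
        return reader1
    if reader3 is None:
        raise ValueError("Readers disagree: a third opinion is required")
    return reader3


print(adjudicate(True, False, reader3=True))  # -> True
```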
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Was it done? No, an MRMC study comparing human readers with and without AI assistance was not performed as described in the document. The study primarily focused on the standalone performance of the AI algorithm (TumorSight Viz) and its comparison to the predicate device, with ground truth established by expert radiologists. It did compare the device's measurements to inter-radiologist variability, but not in a human-in-the-loop setup.
- Effect Size: Not applicable, as an MRMC comparative effectiveness study was not performed.
6. Standalone (Algorithm Only) Performance
- Was it done? Yes. The performance metrics listed in the tables (Mean Absolute Error, Volumetric Dice, Surface Dice) are indicators of the standalone performance of the TumorSight Viz algorithm against the established ground truth.
7. Type of Ground Truth Used
- Type: Expert consensus. The ground truth was established by three (3) U.S. board-certified radiologists through a defined review and adjudication process (majority consensus).
- For measurements: Radiologists measured various characteristics including longest dimensions and tumor to landmark distances.
- For segmentation: Radiologists reviewed and deemed the candidate segmentation "appropriate."
8. Sample Size for the Training Set
- Training Dataset: 676 samples.
- Tuning Dataset: 240 samples.
- Total Patients for Training and Tuning: 833 patients (corresponding to 916 samples total for training and tuning).
9. How the Ground Truth for the Training Set was Established
The document states that the training and tuning data were used to "train and tune the device," but it does not explicitly describe how the ground truth for this training data was established. It only details the ground truth establishment for the validation dataset. It is common for deep learning models to require labeled data for training, but the process for obtaining these labels for the training set is not provided here.