Search Results
Found 3 results
510(k) Data Aggregation
(29 days)
TumorSight Viz
TumorSight Viz is intended to be used in the visualization and analysis of breast magnetic resonance imaging (MRI) studies for patients with biopsy proven early-stage or locally advanced breast cancer. TumorSight Viz supports evaluation of dynamic MR data acquired from breast studies during contrast administration. TumorSight Viz performs processing functions (such as image registration, subtractions, measurements, 3D renderings, and reformats).
TumorSight Viz also includes user-configurable features for visualizing and analyzing findings in breast MRI studies. Patient management decisions should not be made based solely on the results of TumorSight Viz.
TumorSight Viz is an image processing system designed to assist in the visualization and analysis of breast DCE-MRI studies.
TumorSight reads DICOM magnetic resonance images. TumorSight processes and displays the results on the TumorSight web application.
Available features support:
- Visualization (standard image viewing tools, MIPs, and reformats)
- Analysis (registration, subtractions, kinetic curves, parametric image maps, segmentation and 3D volume rendering)
The TumorSight system consists of proprietary software developed by SimBioSys, Inc. hosted on a cloud-based platform and accessed on an off-the-shelf computer.
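The document does not disclose implementation details, but two of the processing functions it names, subtraction images and maximum intensity projections (MIPs), are standard array operations on DICOM series. A minimal illustrative sketch assuming pydicom and numpy, with hypothetical series paths, and assuming the pre- and post-contrast series are already registered:

```python
import numpy as np
import pydicom
from pathlib import Path

def load_series(series_dir):
    """Read one MR series into a (slices, rows, cols) volume, ordered by
    InstanceNumber. Assumes single-frame DICOM files on disk."""
    slices = [pydicom.dcmread(p) for p in Path(series_dir).glob("*.dcm")]
    slices.sort(key=lambda ds: int(ds.InstanceNumber))
    return np.stack([ds.pixel_array.astype(np.float32) for ds in slices])

# Subtraction image: first post-contrast minus pre-contrast volume.
pre = load_series("series_pre/")      # hypothetical paths
post = load_series("series_post1/")
subtraction = post - pre

# Maximum intensity projection (MIP) of the subtraction along the slice axis.
mip = subtraction.max(axis=0)
```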
Here's a breakdown of the acceptance criteria and the study details for the TumorSight Viz device, based on the provided FDA 510(k) clearance letter:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are implicitly defined by the reported performance metrics: the device's performance is deemed "adequate" and "clinically acceptable" if its measurement error is similar to inter-radiologist variability or if the differences in error are clinically insignificant.
Measurement Description | Units | Acceptance Criterion (Implicit) | Reported Device Performance (Mean Abs. Error ± Std. Dev.) |
---|---|---|---|
Tumor Volume (n=218) | cubic centimeters (cc) | Similar to inter-radiologist variability | 5.2 ± 12.5 |
Tumor-to-breast volume ratio (n=218) | % | Clinically acceptable | 0.4 ± 1.2 |
Tumor longest dimension (n=242) | centimeters (cm) | Similar to inter-radiologist variability (e.g., 1.02 cm ± 1.33 cm) | 1.32 ± 1.65 |
Tumor-to-nipple distance (n=241) | centimeters (cm) | Similar to inter-radiologist variability (e.g., 0.88 cm ± 1.12 cm) | 1.17 ± 1.55 |
Tumor-to-skin distance (n=242) | centimeters (cm) | Similar to inter-radiologist variability (e.g., 0.42 cm ± 0.45 cm) | 0.60 ± 0.52 |
Tumor-to-chest distance (n=242) | centimeters (cm) | Similar to inter-radiologist variability (e.g., 0.79 cm ± 1.14 cm) | 0.86 ± 1.22 |
Tumor center of mass (n=218) | centimeters (cm) | Clinically acceptable | 0.60 ± 1.47 |
Segmentation Accuracy:

Performance Measurement | Metric | Acceptance Criterion (Implicit) | Reported Device Performance (Mean ± Std. Dev.) |
---|---|---|---|
Tumor segmentation (n=218) | Volumetric Dice | High agreement with reference standard | 0.76 ± 0.26 |
Tumor segmentation (n=218) | Surface Dice | High agreement with reference standard (particularly for 3D rendering) | 0.92 ± 0.21 |
The document states: "We found that all tests met the acceptance criteria, demonstrating adequate performance for our intended use." This indicates that the reported performance metrics were considered acceptable by the regulatory body. For measurements where inter-radiologist variability is provided (e.g., longest dimension, tumor-to-skin), the device's error is compared to this variability. For other metrics, the acceptance is based on demonstrating "adequate performance," implying that the reported values themselves were within a predefined acceptable range.
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: 266 patients (corresponding to 267 samples, accounting for bilateral disease).
- Data Provenance:
- Country of Origin: U.S.
- Retrospective/Prospective: Not explicitly stated. However, the phrasing "DCE-MRI were obtained from... patients" and the establishment of ground truth by after-the-fact image review suggest retrospective data collection; the note that "All patients had pathologically confirmed invasive, early stage or locally advanced breast cancer" likewise points to pre-existing patient data.
- Clinical Sites: More than eight (8) clinical sites in the U.S.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Number of Experts: Three (3) U.S. Board Certified radiologists.
- Qualifications: U.S. Board Certified radiologists. (No specific experience in years is mentioned, but Board Certification implies a high level of expertise.)
4. Adjudication Method for the Test Set
- Adjudication Method: 2+1 (as described in the document; a plain-logic sketch follows this list).
- For each case, two radiologists independently measured various characteristics and determined if the candidate segmentation was appropriate.
- In cases of disagreement between the first two radiologists ("did not agree on whether the segmentation was appropriate"), a third radiologist provided an additional opinion, and the ground truth was established by majority consensus.
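As an illustration of the 2+1 logic described above (not the sponsor's actual tooling), the rule reduces to a few lines:

```python
def adjudicate(reader1_ok, reader2_ok, reader3_ok=None):
    """2+1 adjudication: two independent reads; a third read breaks ties,
    and the ground-truth label is the majority opinion."""
    if reader1_ok == reader2_ok:
        return reader1_ok
    if reader3_ok is None:
        raise ValueError("Readers disagree; a third opinion is required.")
    return sum((reader1_ok, reader2_ok, reader3_ok)) >= 2
```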
5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study Was Done
The document does not describe an MRMC comparative effectiveness study where human readers' performance with and without AI assistance is directly measured and compared.
Instead, it compares the device's performance to:
- Ground Truth: Radiologist consensus measurements.
- Predicate Device: Its own previous version.
- Inter-radiologist Variability: The inherent variability between human expert readers.
Therefore, no effect size of how much human readers improve with AI vs. without AI assistance is provided, as this type of MRMC study was not detailed.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
Yes, a standalone performance study was done. The sections titled "Performance Tests" and the tables detailing "Validation Testing (Mean Abs. Error ± Std. Dev.)" describe the algorithm's performance in comparison to the established ground truth. This is a standalone evaluation, as it assesses the device's output intrinsically against expert-derived truth without measuring human interaction or improvement. The statement "The measurements generated from the device result directly from the segmentation methodology and are an inferred reflection of the performance of the deep learning algorithm" supports this.
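For illustration, the standalone metrics reported here reduce to simple arithmetic over per-case measurements. The sketch below uses synthetic stand-in data (all values hypothetical) to show the device-vs-truth MAE and the inter-radiologist comparison side by side:

```python
import numpy as np

rng = np.random.default_rng(0)
ground_truth = rng.uniform(1.0, 8.0, size=242)            # hypothetical longest dimensions (cm)
device = ground_truth + rng.normal(0.0, 1.5, size=242)    # hypothetical device measurements
reader1 = ground_truth + rng.normal(0.0, 1.0, size=242)   # hypothetical independent reads
reader2 = ground_truth + rng.normal(0.0, 1.0, size=242)

def mae(pred, ref):
    """Mean absolute error ± standard deviation across cases."""
    err = np.abs(np.asarray(pred, dtype=float) - np.asarray(ref, dtype=float))
    return err.mean(), err.std()

device_vs_truth = mae(device, ground_truth)   # analogous to the 1.32 ± 1.65 row
inter_reader = mae(reader1, reader2)          # analogous to the 1.02 ± 1.33 comparison
```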
7. The Type of Ground Truth Used
- Type of Ground Truth: Expert consensus. Patient eligibility (biopsy-proven, early-stage or locally advanced breast cancer) rested on pathology, but the ground truth for the measurements and segmentation appropriateness evaluated in the study was established by a consensus of U.S. Board Certified radiologists.
8. The Sample Size for the Training Set
- Sample Size for Training Set: One thousand one hundred fifty-six (1156) patients/samples.
9. How the Ground Truth for the Training Set Was Established
The document states: "DCE-MRI were obtained from one thousand one hundred fifty-six (1156) patients from more than fifteen (15) clinical sites in the U.S. for use in training and tuning the device."
However, the document does not explicitly detail how the ground truth for this training set was established. It describes the ground truth establishment method only for the validation dataset (by three U.S. Board Certified radiologists with 2+1 adjudication). For training data, it is common practice to use similar rigorous methods for labeling, but the specifics are not provided in this excerpt.
(25 days)
TumorSight Viz
TumorSight Viz is intended to be used in the visualization and analysis of breast magnetic resonance imaging (MRI) studies for patients with biopsy proven early-stage or locally advanced breast cancer. TumorSight Viz supports evaluation of dynamic MR data acquired from breast studies during contrast administration. TumorSight Viz performs processing functions (such as image registration, subtractions, measurements, 3D renderings, and reformats).
TumorSight Viz also includes user-configurable features for visualizing findings in breast MRI studies. Patient management decisions should not be made based solely on the results of TumorSight Viz.
TumorSight Viz is an image processing system designed to assist in the visualization and analysis of breast DCE-MRI studies.
TumorSight reads DICOM magnetic resonance images. TumorSight processes and displays the results on the TumorSight web application.
Available features support:
- Visualization (standard image viewing tools, MIPs, and reformats)
- Analysis (registration, subtractions, kinetic curves, parametric image maps, segmentation and 3D volume rendering; see the kinetic-curve sketch after this description)
- Communication and storage (DICOM import, retrieval, and study storage)
The TumorSight system consists of proprietary software developed by SimBioSys, Inc. hosted on a cloud-based platform and accessed on an off-the-shelf computer.
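Among the analysis features, kinetic curves have a well-known formulation in breast DCE-MRI: percent enhancement of a voxel or ROI over its pre-contrast baseline, with the delayed phase classified as persistent, plateau, or washout. A generic sketch (the ±10-point band is an illustrative threshold, not a documented device parameter):

```python
import numpy as np

def enhancement_curve(signal):
    """Percent enhancement over the pre-contrast baseline:
    (S_t - S_0) / S_0 * 100 for each time point."""
    s = np.asarray(signal, dtype=float)
    return (s - s[0]) / s[0] * 100.0

def classify_kinetics(curve, band=10.0):
    """Delayed-phase change relative to the first post-contrast point,
    in percentage points: rising = persistent, falling = washout."""
    change = curve[-1] - curve[1]
    if change > band:
        return "persistent"
    if change < -band:
        return "washout"
    return "plateau"

curve = enhancement_curve([100, 210, 205, 185])  # hypothetical ROI mean signals
print(classify_kinetics(curve))                  # "washout" — late signal fell
```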
Here's a breakdown of the acceptance criteria and study details for the TumorSight Viz device, based on the provided document:
1. Acceptance Criteria and Reported Device Performance
The acceptance criteria implicitly relate to the device's performance in comparison to expert variability and the predicate device. The study aims to demonstrate that the error in measurements produced by TumorSight Viz is consistent with the variability observed among expert radiologists.
The table below summarizes the performance metrics from the validation testing, which serves as the reported device performance against the implicit acceptance criterion of being comparable to inter-radiologist variability.
Measurement Description | Units | Acceptance Criteria (Implicit: Comparable to Inter-Radiologist Variability) | Reported Device Performance (Mean Abs. Error ± Std. Dev.) |
---|---|---|---|
Tumor Volume (n=184) | cubic centimeters (cc) | Error consistent with inter-radiologist variability (NA for direct comparison) | 5.22 ± 15.58 |
Tumor-to-breast volume ratio (n=184) | % | Error consistent with inter-radiologist variability (NA for direct comparison) | 0.51 ± 1.48 |
Tumor longest dimension (n=202) | centimeters (cm) | Error consistent with inter-radiologist variability | 1.60 ± 1.93 |
Tumor-to-nipple distance (n=200) | centimeters (cm) | Error consistent with inter-radiologist variability | 1.20 ± 1.37 |
Tumor-to-skin distance (n=202) | centimeters (cm) | Error consistent with inter-radiologist variability | 0.63 ± 0.61 |
Tumor-to-chest distance (n=202) | centimeters (cm) | Error consistent with inter-radiologist variability | 0.91 ± 1.14 |
Tumor center of mass (n=184) | centimeters (cm) | Error consistent with inter-radiologist variability (NA for direct comparison) | 0.72 ± 1.42 |
Segmentation Accuracy:
Performance Measurement | Metric | Acceptance Criteria (Implicit: Adequate for intended use) | Reported Device Performance (Mean ± Std. Dev.) |
---|---|---|---|
Tumor segmentation (n=184) | Volumetric Dice | Adequate for intended use | 0.75 ± 0.24 |
Tumor segmentation (n=184) | Surface Dice | Adequate for intended use | 0.88 ± 0.24 |
Comparison to Predicate Device and Inter-Radiologist Variability:
Performance Measurement | N | Metric | Predicate/TumorSight Viz (Mean ± Std. Dev.) | TumorSight Viz/Ground Truth (Mean ± Std. Dev.) | Predicate/Ground Truth (Mean ± Std. Dev.) | Inter-radiologist Variability (Mean ± Std. Dev.) |
---|---|---|---|---|---|---|
Longest Dimension | 197 | Abs. Distance Error | 1.33 cm ± 1.80 cm | 1.59 cm ± 1.93 cm | 1.27 cm ± 1.34 cm | 1.30 cm ± 1.34 cm |
Tumor to Skin | 197 | Abs. Distance Error | 0.24 cm ± 0.39 cm | 0.61 cm ± 0.60 cm | 0.55 cm ± 0.48 cm | 0.51 cm ± 0.48 cm |
Tumor to Chest | 197 | Abs. Distance Error | 0.64 cm ± 1.13 cm | 0.89 cm ± 1.12 cm | 0.69 cm ± 0.88 cm | 0.97 cm ± 1.16 cm |
Tumor to Nipple | 195 | Abs. Distance Error | 0.89 cm ± 1.03 cm | 1.15 cm ± 1.30 cm | 1.01 cm ± 1.23 cm | 1.03 cm ± 1.30 cm |
Tumor Volume | 197 | Abs. Volume Error | 4.42 cc ± 11.03 cc | 5.22 cc ± 15.58 cc | 6.50 cc ± 21.40 cc | NA |
The study concludes that "all tests met the acceptance criteria, demonstrating adequate performance for our intended use," and that the "differences in error between the mean absolute errors (MAE) for the predicate and subject device are clinically acceptable because they are on the order of one to two voxels for the mean voxel size in the dataset. These differences are clinically insignificant."
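The "one to two voxels" claim can be sanity-checked with simple arithmetic. The mean voxel size of the dataset is not given in the document, so the spacing below is purely illustrative; the MAE values come from the longest-dimension row of the comparison table:

```python
# Longest-dimension MAE from the table: predicate 1.33 cm, subject 1.59 cm
mae_delta_mm = (1.59 - 1.33) * 10.0   # 2.6 mm difference in mean absolute error

# Assumed (not stated) breast-MRI voxel spacing in mm: in-plane x, y, slice
voxel_mm = (0.8, 0.8, 1.6)

print(mae_delta_mm / max(voxel_mm))   # ~1.6 voxels at the coarsest spacing
```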
2. Sample Size and Data Provenance
- Test Set (Validation Dataset) Sample Size: 216 patients, corresponding to 217 samples (when accounting for bilateral disease).
- Data Provenance:
- Country of Origin: U.S. (from more than 7 clinical sites).
- Retrospective/Prospective: Not explicitly stated, but the description of data collection and review for ground truth suggests it was retrospective. The data was "obtained" and "collected," implying pre-existing data.
3. Number of Experts and Qualifications for Ground Truth
- Number of Experts: Three (3)
- Qualifications: U.S. Board Certified radiologists. No specific years of experience are mentioned.
4. Adjudication Method for the Test Set
- Method: Majority Consensus (2+1). For each case, two radiologists independently reviewed measurements and segmentation appropriateness. "In cases where the two radiologists did not agree on whether the segmentation was appropriate, a third radiologist provided an additional opinion and established a ground truth by majority consensus."
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Was it done? No, an MRMC study comparing human readers with and without AI assistance was not performed as described in the document. The study primarily focused on the standalone performance of the AI algorithm (TumorSight Viz) and its comparison to the predicate device, with ground truth established by expert radiologists. It did compare the device's measurements to inter-radiologist variability, but not in a human-in-the-loop setup.
- Effect Size: Not applicable, as an MRMC comparative effectiveness study was not performed.
6. Standalone (Algorithm Only) Performance
- Was it done? Yes. The performance metrics listed in the tables (Mean Absolute Error, Volumetric Dice, Surface Dice) are indicators of the standalone performance of the TumorSight Viz algorithm against the established ground truth.
7. Type of Ground Truth Used
- Type: Expert Consensus. The ground truth was established by three (3) U.S. Board Certified radiologists through a defined review and adjudication process (majority consensus).
- For measurements: Radiologists measured various characteristics including longest dimensions and tumor to landmark distances.
- For segmentation: Radiologists reviewed and deemed the candidate segmentation "appropriate."
8. Sample Size for the Training Set
- Training Dataset: 676 samples.
- Tuning Dataset: 240 samples.
- Total Patients for Training and Tuning: 833 patients (corresponding to 916 samples total for training and tuning).
9. How the Ground Truth for the Training Set was Established
The document states that the training and tuning data were used to "train and tune the device," but it does not explicitly describe how the ground truth for this training data was established. It only details the ground truth establishment for the validation dataset. It is common for deep learning models to require labeled data for training, but the process for obtaining these labels for the training set is not provided here.
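The splitting procedure is likewise not described. A common precaution for datasets like this one, where a single patient can contribute two samples (bilateral disease), is a patient-level (grouped) split so that no patient appears in both training and tuning. A sketch with scikit-learn, with all data placeholders hypothetical:

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(0)
n_samples = 916                                      # training + tuning samples (per the document)
patient_ids = rng.integers(0, 833, size=n_samples)   # 833 patients (per the document)
X = np.zeros((n_samples, 1))                         # placeholder feature rows

# Grouping by patient keeps bilateral samples from one patient on one side of the split.
splitter = GroupShuffleSplit(n_splits=1, train_size=676 / 916, random_state=0)
train_idx, tune_idx = next(splitter.split(X, groups=patient_ids))
```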
(250 days)
TumorSight Viz
TumorSight Viz is intended to be used in the visualization and analysis of breast magnetic resonance imaging (MRI) studies for patients with biopsy proven early-stage or locally advanced breast cancer. TumorSight Viz supports evaluation of dynamic MR data acquired from breast studies during contrast administration. TumorSight Viz performs processing functions (such as image registration, subtractions, measurements, 3D renderings, and reformats).
TumorSight Viz also includes user-configurable features for visualizing and analyzing findings in breast MRI studies. Patient management decisions should not be made based solely on the results of TumorSight Viz.
TumorSight Viz is an image processing system designed to assist in the visualization and analysis of breast DCE-MRI studies.
TumorSight reads DICOM magnetic resonance images. TumorSight processes and displays the results on the TumorSight web application.
Available features support:
- Visualization (standard image viewing tools, MIPs, and reformats)
- Analysis (registration, subtractions, kinetic curves, parametric image maps, segmentation and 3D volume rendering)
- Communication and storage (DICOM import, retrieval, and study storage)
The TumorSight system consists of proprietary software developed by SimBioSys, Inc. hosted on a cloud-based platform and accessed on an off-the-shelf computer.
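Subtraction images are only meaningful when the pre- and post-contrast volumes are aligned, which is why registration appears among the processing functions. The device's registration method is not described; as a crude translation-only stand-in, phase correlation from scikit-image can estimate the shift between two volumes:

```python
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def align_post_to_pre(pre, post):
    """Estimate the rigid translation between two volumes via phase
    correlation and resample the post-contrast volume onto the pre grid."""
    offset, _, _ = phase_cross_correlation(pre, post)
    return nd_shift(post, offset)

# Hypothetical use: subtraction after alignment
# subtraction = align_post_to_pre(pre_volume, post_volume) - pre_volume
```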
Here's a summary of the acceptance criteria and study details for TumorSight Viz, based on the provided text:
1. Acceptance Criteria and Reported Device Performance
The acceptance criteria are implied by demonstrating that the device's performance (Mean Absolute Error and Dice Coefficients) is comparable to inter-radiologist variability and the predicate device, CADstream, and that "all tests met the acceptance criteria".
Measurement Description | Units | Acceptance Criteria (Implied) | Validation Testing (Mean Abs. Error ± Std. Dev.) |
---|---|---|---|
Tumor Volume (n=157) | cubic centimeters (cc) | Comparable to inter-radiologist variability | 6.48 ± 12.67 |
Tumor-to-breast volume ratio (n=157) | % | Comparable to inter-radiologist variability | 0.56 ± 0.93 |
Tumor longest dimension (n=163) | centimeters (cm) | Comparable to inter-radiologist variability | 1.48 ± 1.46 |
Tumor-to-nipple distance (n=161) | centimeters (cm) | Comparable to inter-radiologist variability | 1.00 ± 1.03 |
Tumor-to-skin distance (n=163) | centimeters (cm) | Comparable to inter-radiologist variability | 0.63 ± 0.60 |
Tumor-to-chest distance (n=163) | centimeters (cm) | Comparable to inter-radiologist variability | 0.94 ± 1.34 |
Tumor center of mass (n=157) | centimeters (cm) | Comparable to inter-radiologist variability | 0.735 ± 1.26 |
Performance Measurement | Metric | Acceptance Criteria (Implied) | Validation Testing (Mean ± Std. Dev.) |
---|---|---|---|
Tumor segmentation (n=157) | Volume Dice | Sufficient for indicating location, volume, surface agreement | 0.676 ± 0.289 |
Tumor segmentation (n=157) | Surface Dice | Sufficient for indicating location, volume, surface agreement | 0.873 ± 0.264 |
Additionally, for the direct comparison with the CADstream predicate device and ground truth:
Performance Measurement | Metric | TumorSight Viz / Ground Truth (Mean Abs. Error ± Std. Dev.) | CADStream / Ground Truth (Mean Abs. Error ± Std. Dev.) | Inter-radiologist Variability (Mean Abs. Error ± Std. Dev.) |
---|---|---|---|---|
Longest Dimension (n=136) | Abs. Distance Error | 1.40 cm ± 1.43 cm | 1.11 cm ± 1.52 cm | 1.17 cm ± 1.38 cm |
Tumor to Skin (n=136) | Abs. Distance Error | 0.61 cm ± 0.46 cm | 0.49 cm ± 0.56 cm | 0.49 cm ± 0.54 cm |
Tumor to Chest (n=136) | Abs. Distance Error | 0.77 cm ± 0.90 cm | 1.37 cm ± 1.01 cm | 0.79 cm ± 1.01 cm |
Tumor to Nipple (n=134) | Abs. Distance Error | 0.98 cm ± 1.06 cm | 0.80 cm ± 0.86 cm | 0.82 cm ± 0.98 cm |
Tumor Volume (n=134) | Abs. Volume Error | 6.69 cc ± 13.53 cc | 8.09 cc ± 17.42 cc | N/A (not provided for inter-radiologist variability) |
The document states: "The mean absolute error and variability between the automated measurements (Validation Testing) and ground truth for tumor volume (measured in cc) and landmark distances (measured in cm) was similar to the variability between device-to-radiologist measurements and inter-radiologist variability. This demonstrates that the error in measurements is consistent to the variability between expert readers." It also notes: "We found that all tests met the acceptance criteria, demonstrating adequate performance for our intended use." And for the comparison to the predicate: "The differences in error between the mean absolute errors (MAE) for the predicate and subject device are clinically acceptable because they are on the order of one to two voxels for the mean voxel size in the dataset. These differences are clinically insignificant."
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Validation (Test Set): 161 patients, corresponding to 163 samples (accounting for bilateral disease).
- Data Provenance: Obtained from six (6) clinical sites in the U.S. All patients had pathologically confirmed invasive, early stage or locally advanced breast cancer. The data was collected to ensure adequate coverage of MRI manufacturer and field strength and similarity with the broader U.S. population for patient age, breast cancer subtype, T stage, histologic subtype, and race/ethnicity. This data is retrospective.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
- Number of Experts: Seven (7) U.S. Board Certified radiologists.
- Qualifications of Experts: U.S. Board Certified radiologists. Specific experience level (e.g., years of experience) is not explicitly stated beyond "expert readers."
4. Adjudication Method for the Test Set
- Adjudication Method: For each case, two radiologists independently measured various characteristics. If the two radiologists did not agree on whether the candidate segmentation was appropriate, a third radiologist provided an additional opinion and established a ground truth by majority consensus (2+1 adjudication).
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
- The document describes a performance comparison between TumorSight Viz, CADstream (predicate), and ground truth, as well as inter-radiologist variability. However, it does not describe an MRMC comparative effectiveness study directly measuring how much human readers improve with AI vs. without AI assistance. The study focuses on the standalone performance of TumorSight Viz and its comparability to a predicate device and human variability.
6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) was Done
- Yes, a standalone study was done. The reported performance metrics (Mean Absolute Error, Dice Coefficients) for TumorSight Viz against a radiologist-established ground truth represent the standalone performance of the algorithm. The document explicitly states: "The measurements generated from the device result directly from the segmentation methodology and are an inferred reflection of the performance of the deep learning algorithm."
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Type of Ground Truth: Pathologically confirmed breast cancer cases (for patient inclusion) combined with expert consensus of U.S. Board Certified radiologists for specific image measurements and segmentation appropriateness. The ground truth was established by two radiologists measuring characteristics, with a third radiologist adjudicating disagreements by majority consensus.
8. The sample size for the training set
- Training Set Sample Size: 390 samples (from 736 patients mentioned for training and tuning).
- Tuning Set Sample Size: 376 samples (from 736 patients mentioned for training and tuning).
9. How the ground truth for the training set was established
- The document states that 736 patients (766 samples) were used for "training and tuning the device." It explicitly mentions that for the validation set, "Seven (7) U.S. Board Certified radiologists reviewed 163 validation samples to establish the ground truth for the dataset..."
- The method for establishing ground truth for the training set is not explicitly detailed in the provided text. It is generally implied that such ground truth would also be established by experts, but the specifics are not given for the training portion.