510(k) Data Aggregation (151 days)
ViewFinder is a dedicated softcopy review environment for both screening and diagnostic digital breast tomosynthesis. Its user interface and workflow have been optimized to support qualified interpreting physicians in both screening and diagnostic reading. Efficiency and reading quality are supported by various specialized features. ViewFinder provides visualization and image enhancement tools to aid a qualified interpreting physician in the review of digital breast tomosynthesis datasets. The qualified interpreting physician is responsible for making the diagnosis of the images presented.
ViewFinder is software that displays two Digital Breast Tomosynthesis (DBT) views of the same breast and dynamically indicates correlated (matched) tissue. The benefit is that clinicians can compare matched tissue quickly and with less cognitive load.
It works by simulating tissue movement between the Cranio-Caudal (CC) and Medio-Lateral Oblique (MLO) compressions and views to produce a gross approximation of the tissue match, followed by a fine-tuning step using a locked Artificial Intelligence (AI) model. Users operate the device by pointing the cursor at tissue in one view; ViewFinder then indicates the matching tissue in the other view, as sketched below.
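As a rough illustration of this two-stage flow, here is a minimal Python sketch; the rotation transform, the `LockedRefiner` stub, and every function name are hypothetical stand-ins for ViewFinder's undisclosed model, not its actual implementation.

```python
import numpy as np

def simulate_compression_transfer(point_cc: np.ndarray) -> np.ndarray:
    """Stage 1: gross approximation. A fixed rotation stands in for the
    simulated tissue movement between CC and MLO compressions; the real
    geometric model is not disclosed in the summary."""
    angle = np.deg2rad(45.0)  # placeholder for the CC-to-MLO view change
    rot = np.array([[np.cos(angle), -np.sin(angle)],
                    [np.sin(angle),  np.cos(angle)]])
    return rot @ point_cc

class LockedRefiner:
    """Stage 2 stand-in for the locked (frozen-weights) AI fine-tuning
    model; a real model would be a trained network fixed at deployment."""
    def predict_region(self, coarse_point: np.ndarray):
        center = coarse_point             # a trained model would adjust this
        semi_axes = np.array([8.0, 5.0])  # oval half-widths in mm, illustrative
        return center, semi_axes

def match_tissue(cursor_cc, refiner=LockedRefiner()):
    """Map a cursor position in the CC view to an oval in the MLO view."""
    coarse = simulate_compression_transfer(np.asarray(cursor_cc, dtype=float))
    return refiner.predict_region(coarse)

center, axes = match_tissue([30.0, 42.0])  # cursor at (x, y) mm in the CC view
print(f"Matching oval in MLO view: center={center}, semi-axes={axes} mm")
```

Splitting the problem this way lets a cheap geometric prior narrow the search before the learned model refines the estimate, matching the coarse-then-fine description above.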
ViewFinder runs as a standalone software application or can be integrated into medical image management and processing systems. ViewFinder is an image viewing and processing software environment dedicated to breast image display.
It is designed to provide the performance required for the high data volume of DBT.
ViewFinder runs on a PC and can be used for digital breast tomosynthesis image review together with monitors cleared for mammography diagnostics.
Here's a breakdown of the acceptance criteria and the study demonstrating that the device meets them, based on the provided text:
Acceptance Criteria and Reported Device Performance
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Primary performance metric: frequency that the ground-truth matching tissue falls inside the predicted region (oval). | Total: 0.853 (58/68) |
| Minimum performance threshold: 70% in aggregate and for the majority of subgroups. | The reported aggregate performance of 0.853 (85.3%) exceeds the 70% minimum threshold; the subgroup breakdown is listed below. |

Subgroup performance:

- Feature size 0–40 mm²: 0.806 (25/31)
- Feature size 40–200 mm²: 0.750 (9/12)
- Feature size 200–500 mm²: 1.000 (10/10)
- Feature size >500 mm²: 0.933 (14/15)
- Density A: 1.000 (4/4)
- Density B: 0.786 (33/42)
- Density C: 0.950 (19/20)
- Density D: 1.000 (2/2)

All reported subgroup performances meet or exceed the 70% threshold. Two cross-tabulated cells, Density D with feature size 0–40 mm² and Density D with feature size >500 mm², contained no features (0/0) and therefore could not be evaluated.
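As an illustration of how this hit-rate metric could be computed, here is a minimal Python sketch; it assumes the predicted oval is an axis-aligned ellipse and invents a `hit_rates` helper and toy records, since the summary does not specify the parametrization or the evaluation code.

```python
import numpy as np
from collections import defaultdict

def inside_oval(gt_point, center, semi_axes) -> bool:
    """True if the ground-truth key point falls inside the predicted oval
    (axis-aligned ellipse test; the study's parametrization is assumed)."""
    d = (np.asarray(gt_point, float) - np.asarray(center, float)) \
        / np.asarray(semi_axes, float)
    return float(d @ d) <= 1.0

def hit_rates(records):
    """records: iterable of (subgroup_label, gt_point, center, semi_axes).
    Returns the aggregate frequency of ground truth inside the predicted
    region (the primary metric) plus per-subgroup frequencies."""
    tally = defaultdict(lambda: [0, 0])  # label -> [hits, total]
    for label, gt, center, axes in records:
        tally[label][0] += inside_oval(gt, center, axes)
        tally[label][1] += 1
    hits = sum(h for h, _ in tally.values())
    total = sum(n for _, n in tally.values())
    return hits / total, {lab: h / n for lab, (h, n) in tally.items()}

# Toy records: (density subgroup, gt point, predicted oval center, semi-axes).
records = [("B", (30, 42), (31, 43), (8, 5)),
           ("C", (55, 10), (70, 30), (8, 5))]
aggregate, per_group = hit_rates(records)
print(f"aggregate = {aggregate:.3f} (threshold 0.70); per subgroup: {per_group}")
```

The acceptance check then reduces to requiring the aggregate rate, and the rate in the majority of subgroups, to be at least 0.70.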
Study Details
- **Sample sizes used for the test set and the data provenance:**
  - Test set sample size: 34 CC-MLO pairs of key points from 30 diverse cases, giving 68 matches in total (34 in the CC-to-MLO direction and 34 in the MLO-to-CC direction). The cases came from 28 patients.
  - Data provenance: retrospective data acquired from assessment clinics at King's College Hospital (KCH), London, UK, spanning April 2018 to November 2021.
- **Number of experts used to establish the ground truth for the test set and their qualifications:**
  - Number of experts: one expert radiologist.
  - Qualifications: described only as "an expert radiologist at KCH"; no specific experience duration is provided.
- **Adjudication method for the test set:**
  - No multi-reader adjudication method (e.g., 2+1 or 3+1) is described for establishing the ground truth. The text states that "ground truth annotations were made by an expert radiologist," after which a "technical review was conducted through visualization, checking bounding box placement, tissue matches, and that the sequence numbers and pairing were consistent"; this suggests a quality check rather than an independent adjudication.
- **If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance:**
  - No. An MRMC comparative effectiveness study was not conducted to assess human reader improvement with AI assistance; the study focuses purely on the standalone performance of the algorithm in identifying matching tissue.
- **If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:**
  - Yes. A standalone (algorithm-only) performance study was conducted. The primary performance metric (the frequency that ground truth falls inside the predicted region) and the target registration error (TRE) directly evaluate the algorithm's ability to map tissue without human intervention during the mapping process; see the TRE sketch after this list.
- **The type of ground truth used (expert consensus, pathology, outcomes data, etc.):**
  - The ground truth was established by expert radiologist annotation: the expert identified "key points" and made "ground truth annotations" for matching tissue.
- **The sample size for the training set:**
  - The training data consisted of two FFDM (Full-Field Digital Mammogram) datasets of unspecified size and 660 Tomo (Digital Breast Tomosynthesis) cases.
- **How the ground truth for the training set was established:**
  - Training annotations "included bounding boxes around landmark tissue," and the "finding class was used during pre-training (cancerous, benign or normal) with significant representation in each class." The text further states: "The algorithm was trained on a large, diversified set of Tomo cases with ground truth annotations made by a consultant radiologist." This implies the training data carried expert-derived annotations of tissue landmarks and their classification.
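For the standalone evaluation referenced above, the sketch below shows the conventional target-registration-error computation (Euclidean distance between the predicted match location and the ground-truth key point); the function and the coordinate values are illustrative assumptions, not the study's actual code or data.

```python
import numpy as np

def target_registration_error(pred_center, gt_point) -> float:
    """Conventional TRE: Euclidean distance between the predicted match
    location and the ground-truth key point (e.g., in mm)."""
    return float(np.linalg.norm(np.asarray(pred_center, float)
                                - np.asarray(gt_point, float)))

# Evaluate both directions per key-point pair, mirroring the study's
# 68 matches (34 CC-to-MLO plus 34 MLO-to-CC). Coordinates are toy values.
pairs = [((31.0, 44.0), (30.0, 42.0)),   # (predicted center, ground truth)
         ((55.0, 10.0), (54.5, 11.0))]
errors = [target_registration_error(pred, gt) for pred, gt in pairs]
print(f"mean TRE = {np.mean(errors):.2f} mm over {len(errors)} matches")
```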