510(k) Data Aggregation (110 days)
The ScreenPoint Transpara™ system is intended for use as a concurrent reading aid for physicians interpreting screening mammograms from compatible FFDM systems, to identify regions suspicious for breast cancer and assess their likelihood of malignancy. Output of the device includes marks placed on suspicious soft tissue lesions and suspicious calcifications; region-based scores, displayed upon the physician's query, indicating the likelihood that cancer is present in specific regions; and an overall score indicating the likelihood that cancer is present on the mammogram. Patient management decisions should not be made solely on the basis of analysis by Transpara™.
Transpara™ is a software-only device for aiding radiologists with the detection and diagnosis of breast cancer in mammograms. The product consists of a processing server and an optional viewer. The software applies algorithms for recognition of suspicious calcifications and soft tissue lesions, which are trained with large databases of biopsy proven examples of breast cancer, benign lesions and normal tissue. Processing results of Transpara™ can be transmitted to external destinations, such as medical imaging workstations or archives, using the DICOM mammography CAD SR protocol. This allows PACS workstations to implement the interface of Transpara™ in mammography reading applications.
Transpara™ automatically processes mammograms, and its output can be used by radiologists concurrently with mammogram reading. The user interface of Transpara™ provides the following functions:
- a) Activation of computer aided detection (CAD) marks to highlight locations where the device detected suspicious calcifications or soft tissue lesions. Only the most suspicious soft tissue lesions are marked to achieve a very low false positive rate.
- b) Regions can be queried with a pointer for interactive decision support. When the location of the queried region corresponds with a finding of Transpara™, the suspiciousness level computed for that region by the device's algorithms is displayed. When Transpara™ has identified a corresponding region in another view of the same breast, that region is also displayed, minimizing the interactions required from the user.
- c) Display of the exam-based Transpara™ Score, which categorizes exams on a scale of 1-10, with higher scores indicating a greater likelihood of cancer.
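The 1-10 exam-level categorization in (c) can be illustrated with a small sketch. The equal-width thresholds below are purely hypothetical; the actual calibration used by Transpara™ is proprietary and is not described in the document.

```python
# Hypothetical sketch: bucket a continuous cancer-likelihood estimate
# (0.0-1.0) into a 1-10 exam-level category, analogous to the exam-based
# Transpara Score. The equal-width bins are illustrative only; the real
# calibration used by the device is not described in the source document.

def exam_score(likelihood: float) -> int:
    """Map a model's likelihood estimate to a 1-10 category."""
    if not 0.0 <= likelihood <= 1.0:
        raise ValueError("likelihood must be in [0.0, 1.0]")
    # 10 equal-width bins; a likelihood of exactly 1.0 falls in bin 10.
    return min(int(likelihood * 10) + 1, 10)
```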
Transpara™ is configured as a DICOM node in a network and receives its input images from another DICOM node, such as a mammography device or a PACS archive. The image analysis unit includes machine learning components trained to detect calcifications and soft tissue lesions and a component to pre-process images in such a way that images from different vendors can be processed by the same algorithms.
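The vendor-harmonizing pre-processing step mentioned above can be sketched as a simple intensity normalization. The per-vendor bit depths here are assumptions for illustration; the document does not describe Transpara™'s actual pre-processing method.

```python
# Hypothetical sketch of vendor-harmonizing pre-processing: rescale raw
# detector values from different FFDM vendors onto a common [0.0, 1.0]
# range so one algorithm can consume them. The per-vendor maximum values
# are assumptions; the source document does not specify the real method.

# Assumed maximum stored pixel value per vendor (illustrative only).
VENDOR_MAX_VALUE = {
    "Hologic": 2**14 - 1,  # assumed 14-bit detector range
    "GE": 2**12 - 1,       # assumed 12-bit detector range
}

def normalize(pixels: list[int], vendor: str) -> list[float]:
    """Scale raw pixel values to [0.0, 1.0] using the vendor's range."""
    max_value = VENDOR_MAX_VALUE[vendor]
    return [min(p, max_value) / max_value for p in pixels]
```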
The provided text primarily focuses on the device's regulatory submission and comparisons to a predicate device, rather than a detailed clinical study demonstrating its performance against acceptance criteria. There is no explicit "acceptance criteria" table with performance metrics in the provided document, nor a detailed description of a multi-reader, multi-case (MRMC) study or standalone performance study with specific metrics and methodologies.
However, based on the available information, we can infer some aspects and highlight where information is missing for a complete response to your request.
Inferred Acceptance Criteria and Reported Device Performance (Table):
The document states, "Validation testing confirmed that algorithm performance has improved in comparison to Transpara 1.3.0 for the four manufacturers for which the device was already cleared and that for Fujifilm a similar performance is achieved." This implies that the acceptance criteria for the new version (1.5.0) were primarily:
- Improvement or maintenance of performance compared to the predicate device (Transpara 1.3.0) for existing compatible modalities.
- Achievement of similar performance for the newly supported Fujifilm modalities.
Without specific metrics (e.g., AUC, sensitivity, specificity at a defined operating point), it's impossible to create a quantitative table. The document only provides qualitative statements.
| Acceptance Criteria (Inferred) | Reported Device Performance (Qualitative) |
|---|---|
| Algorithm performance for existing compatible modalities (Hologic, GE, Philips, Siemens) improved compared to Transpara 1.3.0. | "algorithm performance has improved in comparison to Transpara 1.3.0 for the four manufacturers for which the device was already cleared" |
| Algorithm performance for newly supported Fujifilm modalities similar to Transpara 1.3.0. | "for Fujifilm a similar performance is achieved" |
| Effectiveness in detection of soft lesions and calcifications at an appropriate safety level. | "Based on results of verification and validation tests it is concluded that Transpara™ is effective in the detection of soft lesions and calcifications at an appropriate safety level in mammograms acquired with mammography devices for which the software has been validated." Also: "Standalone performance tests demonstrate that Transpara™ 1.5.0 achieves better detection performance compared to the predicate device." (This broadly supports improvement, but lacks specific metrics such as sensitivity, specificity, or AUC.) |
Study Details based on the provided text:
Sample sizes used for the test set and the data provenance:
- Test Set Sample Size: Not explicitly stated. The document mentions "a multi-vendor test-set of mammograms acquired from multiple centers."
- Data Provenance:
- Country of Origin: Not specified.
- Retrospective or Prospective: Not explicitly stated, but "a multi-vendor test-set of mammograms acquired from multiple centers" for "asymptomatic women" suggests a retrospective collection of existing screening mammograms.
Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- This information is not provided for the validation test set. The document refers to "biopsy proven examples of breast cancer, benign lesions and normal tissue" for training, but it does not state how, or by how many experts, ground truth for the test set was established.
Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- Not specified. The document only mentions the use of "biopsy proven examples" for training and does not detail the ground truth establishment process for the test set.
Whether a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with vs. without AI assistance:
- An MRMC study was done, but not for the 1.5.0 version that is the subject of this 510(k) summary.
- The document states: "A pivotal reader study was conducted with the predicate device Transpara 1.3.0. This study provided evidence for safety and effectiveness of Transpara™."
- Therefore, no MRMC study details or effect sizes for human reader improvement with 1.5.0 assistance are provided in this document. The current submission relies on the standalone performance of 1.5.0 showing improvement over the predicate, and assumes the clinical benefit of the predicate carries over to the improved version.
Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- Yes, a standalone performance study was done. The document explicitly states: "Validation testing consisted of determining stand-alone performance of the algorithms in Transpara™ using a multi-vendor test-set..." and "Standalone performance tests demonstrate that Transpara™ 1.5.0 achieves better detection performance compared to the predicate device."
- Specific performance metrics (e.g., AUC, sensitivity, specificity, FROC analysis) are not provided in this summary, only the qualitative statements of "improved" or "similar" performance compared to the predicate device.
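Although the summary reports no numeric metrics, standalone CAD detection performance of this kind is conventionally summarized by AUC. As an illustration only (no scores from the actual study are available), AUC can be computed from case-level scores with the rank-based (Mann-Whitney) estimator:

```python
# Illustrative only: the 510(k) summary reports no numeric metrics.
# Rank-based (Mann-Whitney) AUC estimator, commonly used to summarize
# standalone CAD performance: the probability that a randomly chosen
# cancer case receives a higher score than a randomly chosen normal case.

def auc(cancer_scores: list[float], normal_scores: list[float]) -> float:
    """Probability a cancer case outranks a normal case (ties count 0.5)."""
    wins = 0.0
    for c in cancer_scores:
        for n in normal_scores:
            if c > n:
                wins += 1.0
            elif c == n:
                wins += 0.5
    return wins / (len(cancer_scores) * len(normal_scores))
```

A perfect separator scores 1.0, chance performance scores 0.5.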
The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- For the training set, it explicitly states "biopsy proven examples of breast cancer, benign lesions and normal tissue."
- For the validation/test set, the document states the test set included "mammograms of asymptomatic women." Although not explicitly stated, the context of cancer detection implies that cancer status (and thus ground truth) would be established through pathology or follow-up outcomes for positive cases, and through cancer-free follow-up for negative cases. The text does not detail how ground truth was collected for the test set, but it implies a robust method for distinguishing "cancer" from "normal."
The sample size for the training set:
- Not specified. The document only mentions that the algorithms were "trained with large databases of biopsy proven examples of breast cancer, benign lesions and normal tissue."
How the ground truth for the training set was established:
- "biopsy proven examples of breast cancer, benign lesions and normal tissue." This indicates that the ground truth for training data was established through histopathological confirmation from biopsies.