Search Results

Found 6 results

510(k) Data Aggregation

    K Number
    K241831
    Date Cleared
    2024-11-25

    (153 days)

    Product Code
    Regulation Number
    892.2090
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    ScreenPoint Medical B.V.

    Intended Use

    Transpara software is intended for use as a concurrent reading aid for physicians interpreting screening full-field digital mammography exams and digital breast tomosynthesis exams from compatible FFDM and DBT systems, to identify regions suspicious for breast cancer and assess their likelihood of malignancy. Output of the device includes locations of calcifications groups and soft-tissue regions, with scores indicating the likelihood that cancer is present, and an exam score indicating the likelihood that cancer is present in the exam. Patient management decisions should not be made solely on the basis of analysis by Transpara.

    Device Description

    Transpara is a software-only application designed to be used by physicians to improve interpretation of full-field digital mammography (FFDM) and digital breast tomosynthesis (DBT). Deep learning algorithms are applied to images for recognition of suspicious calcifications and soft tissue lesions (including densities, masses, architectural distortions, and asymmetries). Algorithms are trained with a large database of biopsy-proven examples of breast cancer, benign abnormalities, and examples of normal tissue.

    Transpara offers the following functions which may be used at any time in the reading process, to improve detection and characterization of abnormalities and enhance workflow:

    • AI findings for display in the images to highlight locations where the device detects suspicious calcifications or soft tissue lesions, along with region scores per finding on a scale ranging from 1-100, with higher scores indicating a higher level of suspicion.
    • Links between corresponding regions in different views of the breast, which may be utilized to enhance user interfaces and workflow.
    • An exam-based score which categorizes exams with increasing likelihood of cancer on a scale of 1-10 or in three risk categories labeled as 'low', 'intermediate' or 'elevated'.

    The concurrent use indication implies that it is up to the users to decide how to use Transpara in the reading process. Transpara functions can be used before, during or after visual interpretation of an exam by a user.

    Results of Transpara are computed in a standalone processing appliance which accepts mammograms in DICOM format as input, processes them, and sends the processing output to a destination using the DICOM protocol in a standardized mammography CAD DICOM format. Common destinations are medical workstations, PACS and RIS. The system can be configured using a service interface. Implementation of a user interface for end users in a medical workstation is to be provided by third parties.

    AI/ML Overview

    The provided text describes the acceptance criteria and a study that proves the device, Transpara (2.1.0), meets these criteria.

    Here's an organized breakdown of the information requested:

    Acceptance Criteria and Reported Device Performance

    The acceptance criteria are implicitly defined by the reported performance metrics. The study aims to demonstrate non-inferiority and superiority to the predicate device, Transpara 1.7.2. The key metrics reported are sensitivity at various specificity levels and Exam-based Area Under the Receiver Operating Characteristic Curve (AUC).

    Table 1: Acceptance Criteria (Implied by Performance Goals) and Reported Device Performance (Standalone without Temporal Analysis)

    | Metric | Acceptance Criteria (Implied/Target) | Reported Performance (FFDM) | Reported Performance (DBT) |
    |---|---|---|---|
    | Sensitivity (Sensitive Mode @ 70% Specificity) | Non-inferior & superior to predicate device 1.7.2 (quantitative value not specified, but implied by comparison) | 97.4% (96.3 - 98.5) | 96.9% (95.5 - 98.3) |
    | Sensitivity (Specific Mode @ 80% Specificity) | Non-inferior & superior to predicate device 1.7.2 | 95.2% (93.7 - 96.7) | 95.1% (93.3 - 96.8) |
    | Sensitivity (Elevated Risk @ 97% Specificity) | Non-inferior & superior to predicate device 1.7.2 | 80.8% (78.0 - 83.6) | 78.4% (75.1 - 81.7) |
    | Exam-based AUC | Non-inferior & superior to predicate device 1.7.2 | 0.960 (0.953 - 0.966) | 0.955 (0.947 - 0.963) |

    Table 2: Acceptance Criteria (Implied by Performance Goals) and Reported Device Performance (Standalone with Temporal Analysis - TA)

    | Metric | Acceptance Criteria (Implied/Target) | Reported Performance (FFDM with TA) | Reported Performance (DBT with TA) |
    |---|---|---|---|
    | Sensitivity (Sensitive Mode @ 70% Specificity) | Superior to performance without temporal comparison | 95.7% (93.7 - 97.6) | 94.6% (91.2 - 98.0) |
    | Sensitivity (Specific Mode @ 80% Specificity) | Superior to performance without temporal comparison | 95.4% (93.4 - 97.4) | 91.0% (86.7 - 95.4) |
    | Sensitivity (Elevated Risk @ 97% Specificity) | Superior to performance without temporal comparison | 82.7% (79.1 - 86.4) | 74.9% (68.3 - 81.4) |
    | Exam-based AUC | Superior to performance without temporal comparison | 0.958 (0.946 - 0.969) | 0.941 (0.921 - 0.958) |
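The sensitivity-at-fixed-specificity and exam-level AUC figures reported in these tables can be computed directly from standalone per-exam scores. A minimal sketch in Python, assuming per-exam suspicion scores and binary cancer labels (all names are illustrative; this is not ScreenPoint's implementation):

```python
import numpy as np

def sensitivity_at_specificity(scores, labels, specificity):
    """Sensitivity at the threshold achieving the target specificity.

    scores: per-exam suspicion scores (higher = more suspicious)
    labels: 1 = cancer, 0 = non-cancer
    """
    scores, labels = np.asarray(scores, float), np.asarray(labels)
    neg = np.sort(scores[labels == 0])
    # Threshold below which the target fraction of negatives falls.
    thr = np.quantile(neg, specificity)
    return float(np.mean(scores[labels == 1] > thr))

def exam_auc(scores, labels):
    """Exam-based ROC AUC via the rank-sum (Mann-Whitney) identity."""
    scores, labels = np.asarray(scores, float), np.asarray(labels)
    pos, neg = scores[labels == 1], scores[labels == 0]
    # Fraction of (cancer, non-cancer) pairs where the cancer exam
    # scores strictly higher; ties count half.
    gt = (pos[:, None] > neg[None, :]).mean()
    eq = (pos[:, None] == neg[None, :]).mean()
    return float(gt + 0.5 * eq)
```

With confidence intervals typically obtained by bootstrapping over exams, these two functions cover every metric in the tables above.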

    Study Details

    1. Sample Size Used for the Test Set and Data Provenance:

      • Main Test Set (without temporal analysis): 10,207 exams (5,730 FFDM, 4,477 DBT).
        • Normal: 8,587 exams
        • Benign: 270 exams
        • Cancer: 1,350 exams (750 FFDM, 600 DBT)
      • Temporal Analysis Test Set: 5,724 exams (4,266 FFDM, 1,458 DBT).
        • Normal: 4,998 exams
        • Benign: 83 exams
        • Cancer: 643 exams (471 FFDM, 172 DBT)
      • Data Provenance: Independent dataset acquired from multiple centers in seven EU countries and the US. The collection was retrospective: the data had not been used for algorithm development, and normal exams required at least one year of follow-up. The data included images from various manufacturers (Hologic, GE, Philips, Siemens, Fujifilm).
    2. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of those Experts:

      • The document states that the cancer cases in the test set were "biopsy-proven cancer." It does not specify the number or qualifications of experts used to establish the ground truth for the entire test set (including normal and benign cases, and detailed lesion characteristics). The mechanism for establishing the "normal" and "benign" status is not explicitly detailed beyond "normal follow-up of at least one year."
    3. Adjudication Method for the Test Set:

      • The document does not explicitly describe an adjudication method involving multiple readers for establishing ground truth for the test set. The ground truth for cancer cases is stated as "biopsy-proven."
    4. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done:

      • No, the document does not describe a Multi-Reader Multi-Case (MRMC) comparative effectiveness study. The performance assessment is a standalone evaluation of the algorithm's performance, not a human-in-the-loop study comparing human readers with and without AI assistance.
    5. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done:

      • Yes, standalone performance tests were conducted. The results presented in Tables 2 and 5 are for the algorithm's performance only.
    6. The Type of Ground Truth Used:

      • The primary ground truth for cancer cases is biopsy-proven cancer. For normal exams within the test set, the ground truth was established by "a normal follow-up of at least one year," implying outcomes data (absence of diagnosed cancer over a follow-up period).
    7. The Sample Size for the Training Set:

      • The document does not explicitly state the sample size of the training set. It mentions "Deep learning algorithms are applied to images for recognition of suspicious calcifications and soft tissue lesions... Algorithms are trained with a large database of biopsy-proven examples of breast cancer, benign abnormalities, and examples of normal tissue."
    8. How the Ground Truth for the Training Set Was Established:

      • The ground truth for the training set was established using a "large database of biopsy-proven examples of breast cancer, benign abnormalities, and examples of normal tissue." This implies a similar methodology to the test set for cancer cases (biopsy verification) and likely clinical follow-up or expert consensus for benign/normal cases, though not explicitly detailed for the training set.

    K Number
    K232096
    Date Cleared
    2023-12-11

    (151 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    Screenpoint Medical B.V.

    Intended Use

    Transpara Density is a software application intended for use with data from compatible digital breast tomosynthesis systems. Transpara Density utilises deep learning artificial intelligence algorithms to automatically determine volumetric breast density (VBD), breast volume, and an ACR BI-RADS 5th Edition breast density category to aid health care professionals in the assessment of breast tissue composition. It is not a diagnostic aid.

    Device Description

    Transpara Density is a software module that uses artificial intelligence techniques to assess breast density in mammography (DM) and breast tomosynthesis (DBT) images and provide support to radiologists in this task. The novel methods of Transpara Density extend the capabilities of computer aided detection systems for mammography by providing radiologists with decision support via the output of density assessment.

    The Transpara Density outputs are:

    • Density Grade, in accordance with categories defined in the ACR BI-RADS Atlas 5th Edition (A = almost entirely fat; B = scattered fibroglandular densities; C = heterogeneously dense; D = extremely dense)
    • Volumetric Breast Density in %
    • Breast volume in cm³

    Transpara Density is designed as an optional feature of Transpara. To operate in a clinical environment the software must be embedded in a software application that generates output in standardized formats (e.g. DICOM) and handles communication with external devices (such as PACS systems).

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria are implicitly derived from the performance goals demonstrated in the clinical studies.

    | Performance Metric Category | Acceptance Criteria (Implicitly from Study Results) | Reported Device Performance |
    |---|---|---|
    | Accuracy (VBD) | Pearson correlation coefficient with physics model (van Engeland 2006) should be high. | 0.935 [95% CI: 0.931 - 0.938] |
    | Accuracy (Breast Volume) | Pearson correlation coefficient with physics model (van Engeland 2006) should be high. | 0.997 |
    | Accuracy (VBD vs. MRI) | Pearson correlation coefficient with volumetric measurements from breast MRI should be high. | 0.908 [95% CI: 0.878 - 0.931] |
    | Reproducibility (CC vs. MLO) | VBD in MLO and CC views of the same breast should be similar. | Pearson correlation: 0.947 [95% CI: 0.945 - 0.948]; mean absolute deviation: 1.22% [95% CI: 1.19% - 1.24%] |
    | Reproducibility (Left vs. Right Breast) | VBD in left and right breasts of the same patient should be similar. | Pearson correlation: 0.953 [95% CI: 0.951 - 0.955]; mean absolute deviation: 1.14% [95% CI: 1.10% - 1.17%] |
    | Reproducibility (FFDM vs. DBT) | VBD between FFDM and DBT acquisitions should be similar. | Pearson correlation: 0.912 [95% CI: 0.904 - 0.920]; mean absolute deviation: 1.68% [95% CI: 1.57% - 1.78%] |
    | Agreement (FFDM vs. DBT - DG) | Agreement in four-category DG values for FFDM and DBT should be high. | Quadratically weighted kappa: 0.810 [95% CI: 0.787 - 0.835] |
    | Agreement with Human Readers (4-category DG) | Overall accuracy of Transpara Density against human readers. | 70.8% [95% CI: 67.6% - 73.9%] |
    | Agreement with Human Readers (4-category DG Kappa) | Cohen's quadratically weighted kappa against human readers. | 0.74 [95% CI: 0.70 - 0.79] |
    | Agreement with Human Readers (Dense vs. Non-Dense Accuracy) | Overall accuracy of Transpara Density against human readers for dense vs. non-dense. | 88.9% [95% CI: 86.6% - 90.9%] |
    | Agreement with Human Readers (Dense vs. Non-Dense Kappa) | Cohen's quadratically weighted kappa against human readers for dense vs. non-dense. | 0.78 [95% CI: 0.72 - 0.84] |
    | Dense vs. Non-Dense Sensitivity | Sensitivity for dense vs. non-dense classification. | 87.3% [95% CI: 83.6% - 90.3%] |
    | Dense vs. Non-Dense Specificity | Specificity for dense vs. non-dense classification. | 90.4% [95% CI: 87.2% - 92.9%] |
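Cohen's quadratically weighted kappa, used here for the four-category density-grade agreement, penalizes disagreements by the squared distance between ordinal categories. A sketch assuming grades A-D are coded 0-3 (illustrative, not the vendor's code):

```python
import numpy as np

def quadratic_weighted_kappa(a, b, n_cat=4):
    """Cohen's kappa with quadratic weights for ordinal categories.

    a, b: integer category codes (0 .. n_cat-1) from two raters.
    """
    a, b = np.asarray(a), np.asarray(b)
    # Observed joint distribution of the two raters' codes.
    obs = np.zeros((n_cat, n_cat))
    for i, j in zip(a, b):
        obs[i, j] += 1
    obs /= obs.sum()
    # Expected joint distribution under independent raters.
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))
    # Quadratic disagreement weights, 0 on the diagonal.
    idx = np.arange(n_cat)
    w = (idx[:, None] - idx[None, :]) ** 2 / (n_cat - 1) ** 2
    return float(1 - (w * obs).sum() / (w * exp).sum())
```

Perfect agreement gives 1.0, chance-level agreement gives 0, and systematic reversal of the scale gives a negative value; `sklearn.metrics.cohen_kappa_score(a, b, weights='quadratic')` computes the same quantity.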

    2. Sample Size Used for the Test Set and Data Provenance

    • Accuracy (Physics Model & MRI):
      • Physics Model Comparison: 5,468 exams.
      • MRI Comparison: 190 exams.
    • Reproducibility (CC vs. MLO, Left vs. Right Breast, FFDM vs. DBT):
      • CC vs. MLO and Left vs. Right Breast: 10,804 exams.
      • FFDM vs. DBT: 433 exams (where images of both modalities were available).
    • Agreement with Human Readers (Main Study): 800 women (400 DM and 400 DBT examinations).
    • Data Provenance:
      • The test data originated from multiple clinical centers in the US, UK, Turkey, and five EU countries (Netherlands, Sweden, Germany, Spain, Belgium, Italy).
      • The data collection sites are described as "representative for regular breast cancer screening and diagnostic assessment in hospitals."
      • The studies were retrospective, using existing data. The human reader study implies a retrospective collection of images to be reviewed.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • Number of Experts: Eight (8) MQSA-qualified radiologists.
    • Qualifications of Experts: "MQSA-qualified radiologists according to the ACR BI-RADS Atlas 5th Edition." (MQSA stands for Mammography Quality Standards Act, indicating they are qualified to interpret mammograms clinically in the US).

    4. Adjudication Method for the Test Set

    • Panel Majority Vote: For each exam, a panel majority vote of the eight radiologists was computed to serve as the reference standard.
    • Tie Resolution: Ties in the panel majority vote were resolved by taking the majority vote of the three most experienced radiologists in the panel.
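The adjudication rule above — panel majority with ties broken by the majority vote of the three most experienced readers — can be sketched as follows (the reader ordering, the restriction of the tie-break to tied grades, and the final fallback are assumptions, not details from the summary):

```python
from collections import Counter

def panel_reference(grades, experience_rank):
    """Reference density grade from a reader panel.

    grades: one grade per reader, e.g. ['C', 'C', 'B', ...]
    experience_rank: reader indices sorted most-experienced first
    """
    counts = Counter(grades)
    top = counts.most_common()
    best, best_n = top[0]
    tied = [g for g, n in top if n == best_n]
    if len(tied) == 1:
        return best
    # Tie: majority vote of the three most experienced readers,
    # restricted to the tied grades.
    senior = [grades[i] for i in experience_rank[:3]]
    senior_counts = Counter(g for g in senior if g in tied)
    if senior_counts:
        return senior_counts.most_common(1)[0][0]
    return tied[0]  # fallback if all seniors voted outside the tie
```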

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, If So, What Was the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance

    • No, a multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly described as being done to assess human reader improvement with AI assistance. The study focused on the standalone performance of the Transpara Density device against human reader consensus, not how the AI assists human readers.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    • Yes, extensive standalone performance testing was done. The entire "Summary of non-clinical performance data" section describes the device's performance in terms of accuracy, reproducibility, and agreement with human readers, all reflecting the algorithm's direct output. The conclusion explicitly states: "Standalone performance tests demonstrated that requirements were met."

    7. The Type of Ground Truth Used

    • Expert Consensus (Proxy for Ground Truth): For the agreement with human readers, the ground truth for the ACR BI-RADS 5th Edition breast density category was established by a panel majority vote of eight MQSA-qualified radiologists, with tie-breaking by the three most experienced.
    • Physics-Based Model / MRI Measurements (Reference for Accuracy): For the volumetric breast density (VBD) and breast volume (BV) accuracy assessments, the ground truth was based on:
      • A validated physics-based model described in literature (van Engeland 2006).
      • Volumetric measurements from breast MRI studies in the same patients.

    8. The Sample Size for the Training Set

    • The document does not explicitly state the sample size for the training set. It only mentions that the "test data was not used for algorithm training and was not accessible to members of the research and development team."

    9. How the Ground Truth for the Training Set Was Established

    • The document does not provide details on how the ground truth for the training set was established. It only indicates that the test data was separate from the training data.

    K Number
    K221347
    Device Name
    Transpara 1.7.2
    Date Cleared
    2022-08-03

    (86 days)

    Product Code
    Regulation Number
    892.2090
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    ScreenPoint Medical B.V.

    Intended Use

    Transpara® software is intended for use as a concurrent reading aid for physicians interpreting screening full-field digital mammography exams and digital breast tomosynthesis exams from compatible FFDM and DBT systems, to identify regions suspicious for breast cancer and assess their likelihood of malignancy. Output of the device includes locations of calcifications groups and soft-tissue regions, with scores indicating the likelihood that cancer is present, and an exam score indicating the likelihood that cancer is present in the exam. Patient management decisions should not be made solely on the basis of analysis by Transpara®.

    Device Description

    Transpara® is a software only application designed to be used by physicians to improve interpretation of digital mammography and digital breast tomosynthesis. The system is intended to be used as a concurrent reading aid to help readers with detection and characterization of potential abnormalities suspicious for breast cancer and to improve workflow. 'Deep learning' algorithms are applied to FFDM images and DBT slices for recognition of suspicious calcifications and soft tissue lesions (including densities, masses, architectural distortions, and asymmetries). Algorithms are trained with a large database of biopsy-proven examples of breast cancer, benign abnormalities, and examples of normal tissue.

    Transpara® offers the following functions which may be used at any time during reading (concurrent use):

    • a) Computer aided detection (CAD) marks to highlight locations where the device detected suspicious calcifications or soft tissue lesions.
    • b) Decision support is provided by region scores on a scale ranging from 0-100, with higher scores indicating a higher level of suspicion.
    • c) Links between corresponding regions in different views of the breast, which may be utilized to enhance user interfaces and workflow.
    • d) An exam score which categorizes exams on a scale of 1-10 with increasing likelihood of cancer. The score is calibrated so that approximately 10 percent of mammograms in a cancer-free population fall in each category.
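The calibration in (d) — roughly 10 percent of cancer-free exams per category — amounts to binning raw scores at the deciles of a non-cancer reference population. A minimal sketch, with the reference scores and binning illustrative rather than the vendor's method:

```python
import numpy as np

def decile_calibrator(noncancer_scores):
    """Build a mapping from raw score to an exam score of 1-10,
    so that ~10% of non-cancer exams fall in each category."""
    # Nine interior cut points at the 10th ... 90th percentiles.
    edges = np.quantile(np.asarray(noncancer_scores, float),
                        np.linspace(0.1, 0.9, 9))
    def exam_score(raw):
        # Count of edges at or below the raw score, shifted to 1-10.
        return int(np.searchsorted(edges, raw, side='right')) + 1
    return exam_score
```

Applied back to the reference population itself, each of the ten categories receives about a tenth of the exams, while cancer exams concentrate in the upper categories.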

    Results of Transpara® are computed on a processing server which accepts mammograms or DBT exams in DICOM format as input, processes them, and sends the processing output to a destination using the DICOM protocol. Common destinations are medical workstations, PACS, and RIS. The system can be configured using a service interface. Implementation of a user interface for end users in a medical workstation is to be provided by third parties.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) summary for Transpara 1.7.2:

    Acceptance Criteria and Device Performance Study for Transpara 1.7.2

    The primary study conducted to prove the device meets acceptance criteria was a standalone performance test demonstrating non-inferiority to the predicate device (Transpara 1.7.0).

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria are implicitly defined by the non-inferiority claims to the predicate device in terms of breast cancer detection performance (sensitivity and ROC AUC) at specified false positive rates. While specific pass/fail thresholds for the non-inferiority margin are not explicitly given in this document, the statement that performance is "non-inferior to the performance of the predicate device Transpara 1.7.0" implies that metrics must meet or exceed the predicate's performance within a defined statistical margin.

    | Metric (Implicit Acceptance Criteria) | Reported Device Performance (Transpara 1.7.2) |
    |---|---|
    | Non-inferiority in cancer detection sensitivity for 2D mammography compared to predicate device | 2D sensitivity: 95.0% (93.5-96.4) at 0.30 FP/image |
    | Non-inferiority in ROC AUC for 2D mammography compared to predicate device | 2D AUC: 0.945 (0.935-0.954) |
    | Non-inferiority in cancer detection sensitivity for DBT mammography compared to predicate device | DBT sensitivity: 93.2% (91.0-95.1) at 0.34 FP/volume |
    | Non-inferiority in ROC AUC for DBT mammography compared to predicate device | DBT AUC: 0.945 (0.936-0.954) |
    | Performance metrics for different types of findings (mass, calcifications, architectural distortions, asymmetries, combinations) and histological cancer types (invasive non-specific, DCIS, invasive lobular) | Specific performance breakdowns by finding type and histology are not provided in this summary, but the test set included different types of findings and histological cancers. |

    Conclusion: The study explicitly states, "Based on standalone testing it was concluded that Transpara 1.7.2 breast cancer detection performance for 2D and 3D mammograms of compatible devices is non-inferior to the performance of the predicate device Transpara 1.7.0."
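A non-inferiority claim of this kind is conventionally checked by comparing the lower confidence bound of the performance difference (new minus predicate) against a prespecified margin; since no margin is stated in the summary, the values below are purely illustrative:

```python
def non_inferior(diff_ci_lower: float, margin: float) -> bool:
    """Non-inferiority holds when the lower bound of the two-sided CI
    for (new minus predicate) lies above the negative margin."""
    return diff_ci_lower > -margin

# Illustrative: a sensitivity difference whose 95% CI lower bound is
# -0.5 percentage points, judged against a hypothetical 2-point margin.
print(non_inferior(-0.005, 0.02))  # True: non-inferiority demonstrated
print(non_inferior(-0.030, 0.02))  # False: lower bound crosses the margin
```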

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size: 10,690 exams.
      • FFDM: 5,867 exams (4,841 Normal, 149 Benign, 877 Cancer)
      • DBT: 4,823 exams (3,988 Normal, 240 Benign, 595 Cancer)
    • Data Provenance:
      • Acquisition: Acquired from multiple centers, collected from multiple clinical centers.
      • Geographic Origin: Seven EU countries and the US.
      • Retrospective/Prospective: The document does not explicitly state if the data was retrospective or prospective. However, the mention of "normal follow-up of at least one year" for inclusion of normal exams strongly suggests a retrospective collection of existing patient data.
      • Manufacturer Diversity: Included images from different manufacturers (2D: Hologic, GE, Philips, Siemens, Giotto and Fujifilm; 3D: Hologic, Siemens, General Electric and Fujifilm).

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    The document does not specify the number of experts used to establish the ground truth or their qualifications. It only states that the algorithms were "trained with a large database of biopsy-proven examples of breast cancer, benign abnormalities, and examples of normal tissue." For the test set, it mentions "biopsy-proven cancer regions" and "normal follow-up of at least one year" for normal exams, indicating a reliance on clinical outcomes rather than expert consensus for ground truth.

    4. Adjudication Method for the Test Set

    The document does not specify an adjudication method for establishing ground truth for the test set. The reliance on "biopsy-proven" and "normal follow-up" suggests that ground truth was clinical outcome-based rather than expert-adjudicated review of images for the testing purposes.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

    • No, a MRMC comparative effectiveness study was not reported as part of this 510(k) summary to directly show human readers improve with AI vs. without AI assistance. The study described is a standalone performance test comparison to a predicate device.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    • Yes, a standalone performance test was conducted and described in detail. "Standalone performance tests were conducted to demonstrate substantial equivalence with the predicate device."

    7. The Type of Ground Truth Used

    The ground truth used for the test set appears to be primarily clinical outcome data:

    • "biopsy-proven cancer regions" for positive cases.
    • "normal follow-up of at least one year" for normal cases.

    8. The Sample Size for the Training Set

    The document mentions that "Algorithms are trained with a large database of biopsy-proven examples of breast cancer, benign abnormalities, and examples of normal tissue," but it does not provide a specific sample size for the training set.

    9. How the Ground Truth for the Training Set Was Established

    The ground truth for the training set was established through:

    • Biopsy-proven examples: For breast cancer and benign abnormalities.
    • Clinical outcomes/follow-up: For examples of normal tissue.

    This aligns with the ground truth establishment method for the test set, leveraging definitive clinical diagnoses (biopsy) and confirmed negative follow-up for normal cases.


    K Number
    K210404
    Device Name
    Transpara 1.7.0
    Date Cleared
    2021-06-02

    (112 days)

    Product Code
    Regulation Number
    892.2090
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    ScreenPoint Medical B.V.

    Intended Use

    Transpara® software is intended for use as a concurrent reading aid for physicians interpreting screening full-field digital mammography exams and digital breast tomosynthesis exams from compatible FFDM and DBT systems, to identify regions suspicious for breast cancer and assess their likelihood of malignancy. Output of the device includes locations of calcifications groups and soft-tissue regions, with scores indicating the likelihood that cancer is present, and an exam score indicating the likelihood that cancer is present in the exam. Patient management decisions should not be made solely on the basis of analysis by Transpara®.

    Device Description

    Transpara® is a software only application designed to be used by physicians to improve interpretation of digital mammography and digital breast tomosynthesis. The system is intended to be used as a concurrent reading aid to help readers with detection and characterization of potential abnormalities suspicious for breast cancer and to improve workflow. 'Deep learning' algorithms are applied to FFDM images and DBT slices for recognition of suspicious calcifications and soft tissue lesions (including densities, masses, architectural distortions, and asymmetries). Algorithms are trained with a large database of biopsy-proven examples of breast cancer, benign abnormalities, and examples of normal tissue.

    Transpara® offers the following functions which may be used at any time during reading (concurrent use):

    • a) Computer aided detection (CAD) marks to highlight locations where the device detected suspicious calcifications or soft tissue lesions.
    • b) Decision support is provided by region scores on a scale ranging from 0-100, with higher scores indicating a higher level of suspicion.
    • c) Links between corresponding regions in different views of the breast, which may be utilized to enhance user interfaces and workflow.
    • d) An exam score which categorizes exams on a scale of 1-10 with increasing likelihood of cancer. The score is calibrated so that approximately 10 percent of mammograms in a cancer-free population fall in each category.

    Results of Transpara® are computed on a processing server which accepts mammograms or DBT exams in DICOM format as input, processes them, and sends the processing output to a destination using the DICOM protocol in a standardized mammography CAD DICOM format. Common destinations are medical workstations, PACS, and RIS. Transpara® is offered as a virtual machine and runs on pre-selected standard PC hardware as well as a dedicated virtual machine cluster. The system can be configured using a service interface. Implementation of a user interface for end users in a medical workstation is to be provided by third parties.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for Transpara 1.7.0, based on the provided FDA 510(k) summary:

    Acceptance Criteria and Reported Device Performance

    The acceptance criteria are not explicitly stated as quantitative thresholds in the provided document (e.g., "AUC must be >= X"). Instead, the primary acceptance criteria for standalone performance appear to be non-inferiority compared to the predicate device (Transpara 1.6.0) across various performance metrics. The stated goal for AUC was to be "higher [or] non-inferior" compared to the predicate device.

    | Metric | Acceptance Criteria (Implicit) | Reported Device Performance (Transpara 1.7.0) |
    |---|---|---|
    | 2D FFDM Performance | | |
    | Sensitivity (Calcifications) | N/A (reported at a given FP rate) | 94.7% (95% CI: 91.7-96.7) at 0.11 FP/image |
    | Sensitivity (Soft Tissue Lesions) | N/A (reported at given FP rates) | 80.2% (95% CI: 76.8-83.2) at 0.02 FP/image; 92.6% (95% CI: 90.2-94.6) at 0.17 FP/image |
    | AUC (relative to predicate) | Higher or non-inferior to predicate (0.929) | 0.949 (difference: +0.021, CI: 0.013-0.038) |
    | DBT Performance | | |
    | Sensitivity | N/A (reported at a given FP rate) | 91.3% (95% CI: 88.1-93.6) at 0.3 FP/volume |
    | AUC (relative to predicate) | Higher or non-inferior to predicate (0.917) | 0.931 (difference: +0.014, CI: 0.003-0.042) |
    | Fujifilm DBT Performance | | |
    | AUC (relative to predicate) | Higher or non-inferior to predicate (0.917) | 0.952 |

    Study Proving Device Meets Acceptance Criteria

    The study described is a standalone performance test designed to evaluate the non-inferiority of Transpara 1.7.0 compared to its predicate, Transpara 1.6.0.

    1. Sample Size Used for the Test Set and Data Provenance:

      • Total Exams: 7882 non-cancer exams and 1240 cancer exams.
      • 2D FFDM Exams: 4797 non-cancer, 819 cancer.
      • DBT Exams: 3085 non-cancer, 421 cancer.
      • Data Provenance: Acquired from multiple centers in seven EU countries and the US. The data was "independent" and "had not been used for development of the algorithms." This implies a retrospective collection, as it was a pre-existing dataset used for validation. The device manufacturers represented in the dataset were Hologic, GE, Philips, Siemens, and Fujifilm for 2D, and Hologic, Siemens, and Fujifilm for 3D.
    2. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:

      • The document does not specify the number of experts or their qualifications used to establish the ground truth for the test set. It mentions that the algorithms were trained on a large database of "biopsy-proven examples," which strongly suggests that the ground truth for clinical relevance was established through biopsy results.
    3. Adjudication Method for the Test Set:

      • The document does not explicitly describe an adjudication method for the ground truth of the test set by human experts. The ground truth seems to be established primarily through biopsy results as mentioned in the device description section ("Algorithms are trained with a large database of biopsy-proven examples of breast cancer, benign abnormalities, and examples of normal tissue."). For cancer cases, biopsy directly provides the ground truth. For non-cancer cases, it's likely based on follow-up showing no malignancy or negative biopsy.
    4. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

      • No, a multi-reader multi-case (MRMC) comparative effectiveness study (i.e., human readers with vs. without AI assistance) was not conducted or reported in this 510(k) summary. The study focused solely on the standalone performance of the algorithm and its comparison to its predicate.
    5. Standalone (Algorithm Only) Performance Study:

      • Yes, a standalone performance study was conducted. The results reported (sensitivity, FP rates, and AUCs) are all measures of the algorithm's performance without human intervention.
    6. Type of Ground Truth Used:

      • The primary ground truth used for both training and evaluation was biopsy-proven examples of breast cancer, benign abnormalities, and normal tissue. This also implies negative follow-up for normal or benign cases where biopsy wasn't performed. The report explicitly mentions "biopsy-proven."
    7. Sample Size for the Training Set:

      • The document states, "Algorithms are trained with a large database of biopsy-proven examples of breast cancer, benign abnormalities, and examples of normal tissue." However, it does not specify the exact sample size for the training set.
    8. How Ground Truth for the Training Set Was Established:

      • The ground truth for the training set was established using "biopsy-proven examples of breast cancer, benign abnormalities, and examples of normal tissue." This indicates that clinical and pathological confirmation (biopsy results) were used to label the data for training the AI model.

    K Number
    K193229
    Device Name
    Transpara
    Date Cleared
    2020-03-05

    (104 days)

    Product Code
    Regulation Number
    892.2090
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    ScreenPoint Medical B.V.

    Intended Use

    Transpara™ software is intended for use as a concurrent reading aid for physicians interpreting screening full-field digital mammography exams and digital breast tomosynthesis exams from compatible FFDM and DBT systems, to identify regions suspicious for breast cancer and assess their likelihood of malignancy. Output of the device includes locations of calcifications groups and soft-tissue regions, with scores indicating the likelihood that cancer is present, and an exam score indicating the likelihood that cancer is present in the exam. Patient management decisions should not be made solely on the basis of analysis by Transpara™.

    Device Description

    Transpara™ is a software only application designed to be used by physicians to improve interpretation of digital mammography and digital breast tomosynthesis. The system is intended to be used as a concurrent reading aid to help readers with detection and characterization of potential abnormalities suspicious for breast cancer and to improve workflow. 'Deep learning' algorithms are applied to FFDM images and DBT slices for recognition of suspicious calcifications and soft tissue lesions (including densities, masses, architectural distortions, and asymmetries). Algorithms are trained with a large database of biopsy-proven examples of breast cancer, benign abnormalities, and examples of normal tissue.

    Transpara™ offers the following functions which may be used at any time during reading (concurrent use):

    • a) Computer aided detection (CAD) marks to highlight locations where the device detected suspicious calcifications or soft tissue lesions.
    • b) Decision support is provided by region scores on a scale ranging from 0-100, with higher scores indicating a higher level of suspicion.
    • c) Links between corresponding regions in different views of the breast, which may be utilized to enhance user interfaces and workflow.
    • d) An exam score which categorizes exams on a scale of 1-10 with increasing likelihood of cancer. The score is calibrated in such a way that approximately 10 percent of mammograms in a population of mammograms without cancer falls in each category.
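    The calibration described in (d), where roughly 10 percent of non-cancer exams fall in each of ten categories, amounts to binning raw scores at the deciles of a non-cancer reference population. A minimal sketch under that assumption (the actual calibration procedure is not published, and these function names are hypothetical):

    ```python
    def fit_decile_bins(normal_scores):
        """Thresholds that split a reference population of non-cancer exam
        scores into ten equal-frequency bins."""
        s = sorted(normal_scores)
        n = len(s)
        return [s[(k * n) // 10] for k in range(1, 10)]

    def exam_category(score, thresholds):
        """Map a raw score to a 1-10 category; by construction, about 10% of
        non-cancer exams land in each category."""
        return 1 + sum(score >= t for t in thresholds)
    ```

    The design choice is that the score is interpretable population-wise: category 10 flags the roughly 10 percent of exams a reader would most want to scrutinize, regardless of the raw score scale.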

    Results of Transpara™ are computed in a processing server which accepts mammograms or DBT exams in DICOM format as input, processes them, and sends the processing output to a destination using the DICOM protocol in a standardized mammography CAD DICOM format. Use of the device is supported for images from the following modality manufacturers: FFDM (Hologic, Siemens, General Electric, Philips, Fujifilm) and DBT (Hologic, Siemens). Common destinations are medical workstations, PACS and RIS. Transpara™ is offered as a virtual machine and runs on pre-selected standard PC hardware as well as a dedicated virtual machine cluster. The system can be configured using a service interface. Implementation of a user interface for end users in a medical workstation is to be provided by third parties.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided text:

    1. A table of acceptance criteria and the reported device performance

    The document doesn't explicitly present a formal "acceptance criteria" table with numerical cutoffs for specific metrics. Instead, it describes its objectives in terms of "superior" or "non-inferior" performance compared to a baseline (unaided human reading or a previous device version).

    Acceptance Criteria (Stated Objective) | Reported Device Performance
    Pivotal Reader Study (DBT)
    Superior breast-level Area Under the Receiver Operating Characteristic curve (AUC) between conditions | Average AUC increased from 0.833 to 0.863 (P = 0.0025) with Transpara™ assistance, a statistically significant improvement.
    Reading time reduction | Reading time was significantly reduced with Transpara™ assistance. (Specific reduction not quantified in text.)
    Non-inferior or higher sensitivity | Superior sensitivity was obtained with Transpara™ assistance. (Specific values not quantified in text.)
    Non-inferior or higher specificity | No specific mention of specificity performance beyond "non-inferior or higher," but the AUC improvement implies a balanced performance gain.
    Reading time reduction on normal exams | This secondary objective was met. (Specific reduction not quantified in text.)
    Standalone AUC performance non-inferior to average AUC of readers | The text states it was tested whether standalone AUC performance of Transpara™ was non-inferior to the average AUC performance of the readers, and statistical analysis showed all pre-specified endpoints were met, implying non-inferiority was achieved. (Specific standalone AUC not stated.)
    Standalone Performance Testing (FFDM)
    Non-inferior or better detection accuracy compared to Transpara 1.3.0 | Validation testing confirmed that algorithm performance is non-inferior or better in comparison to Transpara 1.3.0 for the four manufacturers cleared for the predicate device.
    Non-inferior performance for Fujifilm FFDM systems | Validation testing confirmed that for Fujifilm, performance was non-inferior to the performance achieved on the pooled test data of devices cleared for use with the predicate device.

    2. Sample sizes used for the test set and the data provenance

    • Test Set (Pivotal Reader Study for DBT):

      • Sample Size: 240 Siemens Mammomat DBT exams. This included 65 exams with breast cancer, 65 exams with benign abnormalities, and 110 normal exams.
      • Data Provenance: The text states the data were "acquired from multiple centers." It also specifies they were "Siemens Mammomat DBT exams." The country of origin is not explicitly stated, but the manufacturer is based in Germany. The study was retrospective.
    • Test Set (Standalone Performance Testing for FFDM):

      • Sample Size: "Independent multi-vendor test-set of mammography and DBT exams." Specific number not provided, but it included exams from five manufacturers: Hologic, GE, Philips, Siemens, and Fujifilm.
      • Data Provenance: The data were "acquired from multiple centers." The study was retrospective.
    • Training Set:

      • Sample Size: "Algorithms are trained with a large database of biopsy-proven examples of breast cancer, benign abnormalities, and examples of normal tissue." No specific number provided.
      • Data Provenance: Not specified, but likely diverse given the mention of a "large database."

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    The document doesn't explicitly detail the process or number of experts used to establish the ground truth for the test set before the reader study. It mentions the reader study itself involved 18 radiologists, but these radiologists were participating in the evaluation of the device, not necessarily establishing an independent ground truth for the test cases prior to the study.

    However, the training data used "biopsy-proven examples," which implies ground truth confirmation by pathology. For the reader study, the cases were "enriched," meaning they had known outcomes (cancer, benign, normal). The underlying ground truth for these clinical cases would typically be established by clinical diagnosis, pathology reports from biopsies, and follow-up.

    4. Adjudication method for the test set

    The document does not explicitly describe an adjudication method (like 2+1 or 3+1 consensus) for establishing the ground truth of the test set cases. The term "enriched sample" suggests that cases with known outcomes (cancer, benign, normal) were selected.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with AI vs. without AI assistance

    • Yes, a multi-reader multi-case (MRMC) comparative effectiveness study was done.

      • Design: "fully-crossed, multi-reader multi-case retrospective study."
      • Participants: 18 MQSA qualified radiologists.
    • Effect Size of Human Reader Improvement (with AI vs. without AI assistance):

      • Average AUC: Increased from 0.833 (unaided) to 0.863 (with Transpara™ assistance).
      • P-value: P = 0.0025, indicating statistical significance.
      • Sensitivity: "Superior sensitivity was obtained with Transpara™." (Specific values not provided).
      • Reading Time: "reading time was significantly reduced." (Specific reduction not provided).

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    • Yes, standalone performance testing was done.
      • Type of Testing: "determining stand-alone performance of the algorithms in Transpara 1.6.0."
      • Context for FFDM: Focused on non-inferiority compared to the predicate device (Transpara 1.3.0) and for new manufacturers (Fujifilm).
      • Context for DBT: It was a secondary objective of the pivotal reader study to "test if standalone AUC performance of Transpara™ was non-inferior to the average AUC performance of the readers." The study results indicated this objective was met. (Specific standalone AUC not provided in the text).

    7. The type of ground truth used

    • For training data: "biopsy-proven examples of breast cancer, benign abnormalities, and examples of normal tissue." This implies pathology and clinical follow-up for normality.
    • For the pivotal reader study test set: The "enriched sample" of exams (65 with cancer, 65 benign, 110 normal) suggests ground truth was based on clinical diagnosis, pathology results, and follow-up exams. While not explicitly stated as "expert consensus," these are considered robust forms of ground truth for breast imaging studies.

    8. The sample size for the training set

    • The training set was described as a "large database." No specific numerical sample size was provided in the document.

    9. How the ground truth for the training set was established

    • The algorithms were "trained with a large database of biopsy-proven examples of breast cancer, benign abnormalities, and examples of normal tissue." This indicates the ground truth was established through histopathological confirmation (biopsy results) for cancerous and benign cases, and likely clinical follow-up for normal cases to ensure no underlying malignancy was missed.

    K Number
    K192287
    Device Name
    Transpara
    Date Cleared
    2019-12-10

    (110 days)

    Product Code
    Regulation Number
    892.2090
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    Screenpoint Medical B.V.

    Intended Use

    The ScreenPoint Transpara™ system is intended for use as a concurrent reading aid for physicians interpreting screening mammograms from compatible FFDM systems, to identify regions suspicious for breast cancer and assess their likelihood of malignancy. Output of the device includes marks placed on suspicious soft tissue lesions and suspicious calcifications; region-based scores, displayed upon the physician's query, indicating the likelihood that cancer is present in specific regions; and an overall score indicating the likelihood that cancer is present on the mammogram. Patient management decisions should not be made solely on the basis of analysis by Transpara™.

    Device Description

    Transpara™ is a software-only device for aiding radiologists with the detection and diagnosis of breast cancer in mammograms. The product consists of a processing server and an optional viewer. The software applies algorithms for recognition of suspicious calcifications and soft tissue lesions, which are trained with large databases of biopsy proven examples of breast cancer, benign lesions and normal tissue. Processing results of Transpara™ can be transmitted to external destinations, such as medical imaging workstations or archives, using the DICOM mammography CAD SR protocol. This allows PACS workstations to implement the interface of Transpara™ in mammography reading applications.

    Transpara™ automatically processes mammograms and the output of the device can be used by radiologists concurrently with the reading of mammograms. The user interface of Transpara™ has different functions:

    • a) Activation of computer aided detection (CAD) marks to highlight locations where the device detected suspicious calcifications or soft tissue lesions. Only the most suspicious soft tissue lesions are marked to achieve a very low false positive rate.
    • b) Regions can be queried using a pointer for interactive decision support. When the location of the queried region corresponds with a finding of Transpara™ a suspiciousness level of the region computed by the algorithms in the device is displayed. When Transpara™ has identified a corresponding region in another view of the same breast this corresponding region is also displayed to minimize interactions required from the user.
    • c) Display of the exam based Transpara™ Score which categorizes exams on a scale of 1-10 with increasing likelihood of cancer.

    Transpara™ is configured as a DICOM node in a network and receives its input images from another DICOM node, such as a mammography device or a PACS archive. The image analysis unit includes machine learning components trained to detect calcifications and soft tissue lesions and a component to pre-process images in such a way that images from different vendors can be processed by the same algorithms.
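    Pre-processing that lets a single model consume images from different vendors usually starts by mapping raw pixel values, which vary with detector, bit depth, and vendor lookup tables, onto a common intensity scale. The following is a generic percentile-normalization sketch, purely illustrative since ScreenPoint's actual pre-processing component is proprietary:

    ```python
    def normalize_intensities(pixels, p_low=0.01, p_high=0.99):
        """Rescale raw pixel values to [0, 1] using robust percentiles, so that
        images from different vendors land on a comparable intensity scale.
        Clipping at the 1st/99th percentiles suppresses outliers such as
        burned-in markers. (Generic illustration, not ScreenPoint's method.)"""
        s = sorted(pixels)
        lo = s[int(p_low * (len(s) - 1))]
        hi = s[int(p_high * (len(s) - 1))]
        span = max(hi - lo, 1e-9)  # guard against a constant image
        return [min(max((p - lo) / span, 0.0), 1.0) for p in pixels]
    ```

    After a step like this, a 12-bit Siemens image and a 14-bit Hologic image present the downstream detector with values on the same scale, which is the point of the "same algorithms for all vendors" design.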

    AI/ML Overview

    The provided text primarily focuses on the device's regulatory submission and comparisons to a predicate device, rather than a detailed clinical study demonstrating its performance against acceptance criteria. There is no explicit "acceptance criteria" table with performance metrics in the provided document, nor a detailed description of a multi-reader, multi-case (MRMC) study or standalone performance study with specific metrics and methodologies.

    However, based on the available information, we can infer some aspects and highlight where information is missing for a complete response to your request.

    Inferred Acceptance Criteria and Reported Device Performance (Table):

    The document states, "Validation testing confirmed that algorithm performance has improved in comparison to Transpara 1.3.0 for the four manufacturers for which the device was already cleared and that for Fujifilm a similar performance is achieved." This implies that the acceptance criteria for the new version (1.5.0) were primarily:

    1. Improvement or maintenance of performance compared to the predicate device (Transpara 1.3.0) for existing compatible modalities.
    2. Achievement of similar performance for the newly supported Fujifilm modalities.

    Without specific metrics (e.g., AUC, sensitivity, specificity at a defined operating point), it's impossible to create a quantitative table. The document only provides qualitative statements.

    Acceptance Criteria (Inferred) | Reported Device Performance (Qualitative)
    Algorithm performance for existing compatible modalities (Hologic, GE, Philips, Siemens) to be improved compared to Transpara 1.3.0. | "algorithm performance has improved in comparison to Transpara 1.3.0 for the four manufacturers for which the device was already cleared"
    Algorithm performance for newly supported Fujifilm modalities to be similar to Transpara 1.3.0. | "that for Fujifilm a similar performance is achieved."
    Effectiveness in detection of soft lesions and calcifications at an appropriate safety level. | "Based on results of verification and validation tests it is concluded that Transpara™ is effective in the detection of soft lesions and calcifications at an appropriate safety level in mammograms acquired with mammography devices for which the software has been validated." Also: "Standalone performance tests demonstrate that Transpara™ 1.5.0 achieves better detection performance compared to the predicate device." (This broadly supports improvement but lacks specific metrics such as sensitivity, specificity, or AUC.)

    Study Details based on the provided text:

    1. Sample sizes used for the test set and the data provenance:

      • Test Set Sample Size: Not explicitly stated. The document mentions "a multi-vendor test-set of mammograms acquired from multiple centers."
      • Data Provenance:
        • Country of Origin: Not specified.
        • Retrospective or Prospective: Not explicitly stated, but "a multi-vendor test-set of mammograms acquired from multiple centers" for "asymptomatic women" suggests a retrospective collection of existing screening mammograms.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • This information is not provided for the validation test set. The document refers to "biopsy proven examples of breast cancer, benign lesions and normal tissue" for training, but not explicitly how the ground truth for the test set was established or by how many experts.
    3. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

      • Not specified. The document only mentions the use of "biopsy proven examples" for training and does not detail the ground truth establishment process for the test set.
    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with AI vs. without AI assistance:

      • An MRMC study was done, but not for the 1.5.0 version that is the subject of this 510(k) summary.
      • The document states: "A pivotal reader study was conducted with the predicate device Transpara 1.3.0. This study provided evidence for safety and effectiveness of Transpara™."
      • Therefore, no MRMC study details or effect sizes for human reader improvement with 1.5.0 assistance are provided in this document. The current submission relies on the standalone performance of 1.5.0 showing improvement over the predicate, and assumes the clinical benefit of the predicate carries over to the improved version.
    5. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:

      • Yes, a standalone performance study was done. The document explicitly states: "Validation testing consisted of determining stand-alone performance of the algorithms in Transpara™ using a multi-vendor test-set..." and "Standalone performance tests demonstrate that Transpara™ 1.5.0 achieves better detection performance compared to the predicate device."
      • Specific performance metrics (e.g., AUC, sensitivity, specificity, FROC analysis) are not provided in this summary, only the qualitative statements of "improved" or "similar" performance compared to the predicate device.
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

      • For the training set, it explicitly states "biopsy proven examples of breast cancer, benign lesions and normal tissue."
      • For the validation/test set, the document states the test set included "mammograms of asymptomatic women." While not explicitly stated, the context of cancer detection implies that cancer status (and thus ground truth) was established through pathology or follow-up outcomes for positive cases, and through cancer-free follow-up for negative cases. The text does not detail the ground-truth collection method for the test set, but it implies a robust method to distinguish "cancer" from "normal."
    7. The sample size for the training set:

      • Not specified. The document only mentions that the algorithms were "trained with large databases of biopsy proven examples of breast cancer, benign lesions and normal tissue."
    8. How the ground truth for the training set was established:

      • "biopsy proven examples of breast cancer, benign lesions and normal tissue." This indicates that the ground truth for training data was established through histopathological confirmation from biopsies.