MammoScreen® is intended for use as a concurrent reading aid for interpreting physicians, to help identify findings on screening FFDM and DBT acquired with compatible mammography systems and assess their level of suspicion. Output of the device includes marks placed on findings on the mammogram and level of suspicion scores. The findings could be soft tissue lesions or calcifications. The level of suspicion score is expressed at the finding level, for each breast and overall for the mammogram. Patient management decisions should not be made solely on the basis of analysis by MammoScreen®.
MammoScreen 2.0 automatically processes the four views (one CC and one MLO per breast) of standard screening FFDM or DBT, and outputs a corresponding report on a separate screen, alongside the monitors used for reading. This report is designed to be easily readable with very few interactions required by providing an overall level of suspicion of each exam and giving explicit visual indications when highly suspicious exams are detected.
MammoScreen 2.0 detects and characterizes findings on a scale from one to ten, referred to as the MammoScreen score. The score was designed such that findings with a low score have a very low level of suspicion. As the score increases, so does the level of suspicion.
Furthermore, MammoScreen 2.0 provides a high level of interpretability. Results are by construction consistent at the finding, breast and mammogram level. A breast takes on the highest score of its detected findings, and the level of suspicion for the exam is driven by the breast(s) with the highest score. Therefore, it is always possible to track a high suspicion of malignancy for an exam to the corresponding breast(s), and to a specific finding within the breast(s).
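To make this hierarchy concrete, here is a minimal sketch of the finding-to-breast-to-exam aggregation described above. The structure and names are assumptions for illustration only, not Therapixel's actual implementation.

```python
# Minimal sketch of the hierarchical MammoScreen scoring logic described above.
# Illustrative assumption only; not the vendor's actual code.
from dataclasses import dataclass, field

@dataclass
class Finding:
    kind: str    # e.g. "soft_tissue_lesion" or "calcification"
    score: int   # MammoScreen score: 1 (very low suspicion) to 10 (high suspicion)

@dataclass
class Breast:
    side: str                                   # "left" or "right"
    findings: list = field(default_factory=list)

    @property
    def score(self) -> int:
        # A breast takes on the highest score of its detected findings.
        return max((f.score for f in self.findings), default=0)

def exam_score(breasts) -> int:
    # The exam-level suspicion is driven by the breast(s) with the highest score.
    return max((b.score for b in breasts), default=0)

left = Breast("left", [Finding("calcification", 3)])
right = Breast("right", [Finding("soft_tissue_lesion", 8), Finding("calcification", 2)])
print(exam_score([left, right]))  # 8 -> traceable to the right-breast soft tissue lesion
```

Because each level is simply the maximum of the level below it, a high exam-level score can always be traced back to a specific breast and finding, which is the interpretability property claimed above.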
Below is a breakdown of the acceptance criteria and the study evidence showing that the device meets them, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
| Performance Metric | Acceptance Criteria (Implicit) | Reported Device Performance (FFDM) | Reported Device Performance (DBT) |
|---|---|---|---|
| Radiologist Performance with AI Assistance (AUC) | Superior to unaided radiologist performance | Increased from 0.77 to 0.80 | Increased from 0.79 to 0.83 |
| Standalone Performance (AUC) | Non-inferior to unaided radiologist performance | 0.79 (non-inferior to 0.77 unaided) | 0.84 (superior to 0.79 unaided) |
| Standalone Performance vs. Predicate (FFDM) | Non-inferior to predicate device | Achieved non-inferior performance | Not applicable |
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size (FFDM & DBT): 240 cases (enriched sample set)
- Data Provenance: Not stated; the document gives no country of origin. The studies are MRMC reader studies using an enriched sample set, which typically implies a curated, retrospectively collected case set read prospectively for the study, but the text does not specify whether case collection was retrospective or prospective.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
- Number of Experts: Not stated for ground-truth establishment. The reader studies themselves used 14 readers for the 2D (FFDM) study and 20 readers for the 3D (DBT) study.
- Qualifications: The readers were "MQSA-qualified and ABR-certified." (MQSA refers to Mammography Quality Standards Act qualification for interpreting physicians and ABR to American Board of Radiology certification, suggesting a US context for the readers.)
4. Adjudication Method for the Test Set
The provided text does not explicitly state the adjudication method used to establish the ground truth for the test set. It mentions an "enriched sample set" and "MQSA-qualified and ABR-certified readers," but the specific ground-truthing process (e.g., 2+1, 3+1) is not detailed.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance
- Yes, an MRMC study was done. Clinical validation included two reader studies (one for FFDM and one for DBT) using a multi-reader multi-case (MRMC) cross-over design.
- Effect Size of Improvement (a minimal arithmetic sketch follows this list):
- FFDM: Average AUC for radiologists increased from 0.77 (without AI) to 0.80 (with AI). (Improvement: 0.03 AUC)
- DBT: Average AUC for radiologists increased from 0.79 (without AI) to 0.83 (with AI). (Improvement: 0.04 AUC)
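The deltas above simply restate the reported averages; the snippet below is an illustrative arithmetic check only, since per-reader results are not provided in the summary.

```python
# Illustrative arithmetic only: average reader AUCs as reported in the 510(k) summary.
ffdm_unaided, ffdm_aided = 0.77, 0.80
dbt_unaided, dbt_aided = 0.79, 0.83

print(f"FFDM improvement: {ffdm_aided - ffdm_unaided:+.2f} AUC")  # +0.03
print(f"DBT improvement:  {dbt_aided - dbt_unaided:+.2f} AUC")    # +0.04
```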
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
- Yes, standalone performance was evaluated. The objectives of the studies included determining: "Whether the performance of MammoScreen standalone is superior to unaided radiologist performance" and "Whether the performance of MammoScreen standalone is non-inferior to aided radiologist performance."
- Standalone Performance Results:
- FFDM: AUC = 0.79 (found to be non-inferior to the average unaided radiologists' performance of 0.77).
- DBT: AUC = 0.84 (found to be superior to the average unaided radiologists' performance of 0.79).
- Additionally, standalone performance tests for MammoScreen 2.0 (FFDM) demonstrated non-inferiority compared to the predicate device.
7. The Type of Ground Truth Used
The text references the training of the deep learning modules with "biopsy-proven examples of breast cancer and normal tissue," indicating that biopsy (pathology) results were used as the ultimate ground truth for the benign/malignant status of lesions in the training data, and most likely in the test set's ground truth development as well. The mention of "MQSA-qualified and ABR-certified readers" may additionally imply expert involvement, though this is not stated explicitly. The studies assess performance in the "detection of breast cancer," linking the ground truth directly to malignancy.
8. The Sample Size for the Training Set
The document states that the deep learning modules were "trained with very large databases of biopsy-proven examples of breast cancer and normal tissue." However, a specific numerical sample size for the training set is not provided.
9. How the Ground Truth for the Training Set Was Established
The ground truth for the training set was established using "biopsy-proven examples of breast cancer and normal tissue." This indicates that histopathological (pathology) results from biopsies served as the definitive ground truth for classifying cases as cancerous or normal during the training of the AI model.
November 26, 2021
[Image: FDA letterhead showing the Department of Health & Human Services seal and the U.S. Food & Drug Administration (FDA) logo.]
Therapixel
c/o Ms. Cindy Domecus
Principal
Domecus Consulting Services LLC
1171 Barroichet Drive
Hillsborough, CA 94010
Re: K211541
Trade/Device Name: MammoScreen® 2.0
Regulation Number: 21 CFR 892.2090
Regulation Name: Radiological computer assisted detection and diagnosis software
Regulatory Class: Class II
Product Code: QDQ
Dated: November 19, 2021
Received: November 22, 2021
Dear Ms. Domecus:
We have reviewed your Section 510(k) premarket notification of intent to market the device referenced above and have determined the device is substantially equivalent (for the indications for use stated in the enclosure) to legally marketed predicate devices marketed in interstate commerce prior to May 28, 1976, the enactment date of the Medical Device Amendments, or to devices that have been reclassified in accordance with the provisions of the Federal Food, Drug, and Cosmetic Act (Act) that do not require approval of a premarket approval application (PMA). You may, therefore, market the device, subject to the general controls provisions of the Act. Although this letter refers to your product as a device, please be aware that some cleared products may instead be combination products. The 510(k) Premarket Notification Database located at https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfpmn/pmn.cfm identifies combination product submissions.

The general controls provisions of the Act include requirements for annual registration, listing of devices, good manufacturing practice, labeling, and prohibitions against misbranding and adulteration. Please note: CDRH does not evaluate information related to contract liability warranties. We remind you, however, that device labeling must be truthful and not misleading.
If your device is classified (see above) into either class II (Special Controls) or class III (PMA), it may be subject to additional controls. Existing major regulations affecting your device can be found in the Code of Federal Regulations, Title 21, Parts 800 to 898. In addition, FDA may publish further announcements concerning your device in the Federal Register.
Please be advised that FDA's issuance of a substantial equivalence determination does not mean that FDA has made a determination that your device complies with other requirements of the Act or any Federal statutes and regulations administered by other Federal agencies. You must comply with all the Act's requirements, including, but not limited to: registration and listing (21 CFR Part 807); labeling (21 CFR Part 801); medical device reporting (reporting of medical device-related adverse events) (21 CFR 803) for devices or postmarketing safety reporting (21 CFR 4, Subpart B) for combination products (see https://www.fda.gov/combination-products/guidance-regulatory-information/postmarketing-safety-reporting-combination-products); good manufacturing practice requirements as set forth in the quality systems (QS) regulation (21 CFR Part 820) for devices or current good manufacturing practices (21 CFR 4, Subpart A) for combination products; and, if applicable, the electronic product radiation control provisions (Sections 531-542 of the Act); 21 CFR 1000-1050.
Also, please note the regulation entitled, "Misbranding by reference to premarket notification" (21 CFR Part 807.97). For questions regarding the reporting of adverse events under the MDR regulation (21 CFR Part 803), please go to https://www.fda.gov/medical-device-safety/medical-device-reporting-mdr-how-report-medical-device-problems.
For comprehensive regulatory information about radiation-emitting products, including information about labeling regulations, please see Device Advice (https://www.fda.gov/medical-devices/device-advice-comprehensive-regulatory-assistance) and CDRH Learn (https://www.fda.gov/training-and-continuing-education/cdrh-learn). Additionally, you may contact the Division of Industry and Consumer Education (DICE) to ask a question about a specific regulatory topic. See the DICE website (https://www.fda.gov/medical-device-advice-comprehensive-regulatory-assistance/contact-us-division-industry-and-consumer-education-dice) for more information or contact DICE by email (DICE@fda.hhs.gov) or phone (1-800-638-2041 or 301-796-7100).
Sincerely,
For
Thalia T. Mills, Ph.D.
Director
Division of Radiological Health
OHT7: Office of In Vitro Diagnostics and Radiological Health
Office of Product Evaluation and Quality
Center for Devices and Radiological Health
Enclosure
Indications for Use
510(k) Number (if known) K211541
Device Name MammoScreen® 2.0
Indications for Use (Describe)
MammoScreen® is intended for use as a concurrent reading aid for interpreting physicians, to help identify findings on screening FFDM and DBT acquired with compatible mammography systems and assess their level of suspicion. Output of the device includes marks placed on findings on the mammogram and level of suspicion scores. The findings could be soft tissue lesions or calcifications. The level of suspicion score is expressed at the finding level, for each breast and overall for the mammogram. Patient management decisions should not be made solely on the basis of analysis by MammoScreen®.
Type of Use (Select one or both, as applicable)

[X] Prescription Use (Part 21 CFR 801 Subpart D)
[ ] Over-The-Counter Use (21 CFR 801 Subpart C)
CONTINUE ON A SEPARATE PAGE IF NEEDED.
This section applies only to requirements of the Paperwork Reduction Act of 1995.
DO NOT SEND YOUR COMPLETED FORM TO THE PRA STAFF EMAIL ADDRESS BELOW.
The burden time for this collection of information is estimated to average 79 hours per response, including the time to review instructions, search existing data sources, gather and maintain the data needed and complete and review the collection of information. Send comments regarding this burden estimate or any other aspect of this information collection, including suggestions for reducing this burden, to:
Department of Health and Human Services
Food and Drug Administration
Office of Chief Information Officer
Paperwork Reduction Act (PRA) Staff
PRAStaff@fda.hhs.gov
"An agency may not conduct or sponsor, and a person is not required to respond to, a collection of information unless it displays a currently valid OMB number."
510(k) Summary
K211541
This 510(k) summary of safety and effectiveness information is prepared in accordance with the requirements of 21 CFR § 807.92.
Applicant Information:
510(k) Owner: Therapixel
455 Promenade des Anglais, 06200 Nice, France
Phone: +33 9 72 55 20 39
Contact: Quentin de Snoeck, R.A.C. (US & EU), RA/QA/IS Manager
Email: qdesnoeck@therapixel.com
Submission Correspondent:
Cindy Domecus, R.A.C. (US & EU)
Regulatory Consultant to Therapixel
Phone: 650.343.4813
Fax: 650.343.7822
Email: Cindy@DomecusConsulting.com
Date Summary Prepared: November 25th, 2021
Device Information:
| Trade Name: | MammoScreen 2.0 |
|---|---|
| Common Name: | Computer-Assisted Detection Device |
| Device Classification Name: | Radiological Computer Assisted Detection/Diagnosis Software For Lesions Suspicious For Cancer |
| Regulation Number: | 892.2090 |
| Regulation Class: | Class II |
| Product Code: | QDQ |
| Submission type: | Traditional 510(k) |
| 510(k) number: | K211541 |
Predicate Device:
The predicate device is MammoScreen, cleared under K192854.
Device Description:
MammoScreen 2.0 automatically processes the four views (one CC and one MLO per breast) of standard screening FFDM or DBT, and outputs a corresponding report on a separate screen, alongside the monitors used for reading. This report is designed to be easily readable with very few interactions required by providing an overall level of suspicion of each exam and giving explicit visual indications when highly suspicious exams are detected.
MammoScreen 2.0 detects and characterizes findings on a scale from one to ten, referred to as the MammoScreen score. The score was designed such that findings with a low score have a very low level of suspicion. As the score increases, so does the level of suspicion.
Furthermore, MammoScreen 2.0 provides a high level of interpretability. Results are by construction consistent at the finding, breast and mammogram level. A breast takes on the highest score of its detected findings, and the level of suspicion for the exam is driven by the breast(s) with the highest score. Therefore, it is always possible to track a high suspicion of malignancy for an exam to the corresponding breast(s), and to a specific finding within the breast(s).
Indication for Use:
MammoScreen is intended for use as a concurrent reading aid for interpreting physicians, to help identify findings on screening FFDM or DBT acquired with compatible mammography systems, and assess their level of suspicion. Output of the device includes marks placed on findings on the mammogram and level of suspicion scores. The findings could be soft tissue lesions or calcifications. The level of suspicion score is expressed at the finding level, for each breast and overall for the mammogram. Patient management decisions should not be made solely on the basis of analysis by MammoScreen.
Intended user population
Intended users of MammoScreen are physicians qualified to read screening mammograms.
Intended patient population
The device is intended to be used in the population of women undergoing screening mammography.
Warnings and precautions
Patient management decisions should not be made solely on the basis of analysis by MammoScreen.
Predicate device comparison:
The indication for use of MammoScreen 2.0 is similar to that of the predicate device. Both devices are intended for concurrent use by physicians interpreting breast images to help them with localizing and characterizing findings. The devices are not intended as a replacement for the review of a physician or their clinical judgement. Use of the device with DBT has been added in the indications for use of the subject device compared to the indications for use of the predicate device. The algorithmic components have been updated to improve detection accuracy for FFDM and to enable processing of DBT. The overall design of MammoScreen 2.0 is the same as that of the predicate device. Both versions detect and characterize findings in radiological breast images and provide information about the presence, location, and characteristics of the findings to the user in a similar manner. The modifications do not raise different questions of safety and effectiveness of the device as compared to the predicate device.
Non clinical Testing
MammoScreen is a software-only device. The level of concern for the device is Moderate.
Tests have been performed in compliance with the following recognized consensus standards:
- IEC 62304:2006/A1:2016, Medical device software - Software life-cycle processes
- IEC 62366-1:2015+AMD1:2020, Medical devices - Application of usability engineering to medical devices
MammoScreen 2.0 has successfully completed integration and verification testing and beta validation. In addition, potential hazards have been evaluated and mitigated to acceptable levels.
Clinical Testing
Clinical validation of MammoScreen 2.0 included two reader studies (one for each modality: FFDM and DBT) with similar designs and methodologies.
Hereafter, unless differences are explicitly detailed, the description will refer to both studies.
The reader studies used a multi-reader multi-case (MRMC) cross-over design with an enriched sample set of 240 cases and MQSA-qualified, ABR-certified readers (14 for the 2D study and 20 for the 3D study) to compare the performance of unaided radiologists to that of radiologists using MammoScreen.
The objectives of the studies were to determine:
- Whether the radiologist performance when using MammoScreen is superior to unaided radiologist performance for interpretation of screening mammograms (primary objective).
- Whether the performance of MammoScreen standalone is superior to unaided radiologist performance.
- Whether the performance of MammoScreen standalone is non-inferior to aided radiologist performance.
All performances were estimated at the mammogram, breast, and finding levels and assessed by measuring the Area Under the Receiver Operating Characteristic (ROC) Curve (AUC).
Radiologists improved their diagnostic performance in the detection of breast cancer with 2D FFDM (Full-Field Digital Mammography) by using MammoScreen. The performance of radiologists taking part in the clinical study was improved when using MammoScreen 2.0, with the average AUC going from 0.77 to 0.80.
The performance of the standalone MammoScreen on FFDM (AUC = 0.79) was found to be non-inferior to the average performance of unaided radiologists (AUC = 0.77).
This MRMC study was conducted on MammoScreen 1.0, the predicate device. Standalone performance tests on the subject device demonstrate that MammoScreen 2.0 achieves non-inferior performance compared to the predicate device.
Radiologists improved their diagnostic performance in the detection of breast cancer with DBT (Digital Breast Tomosynthesis) by using MammoScreen. The performance of radiologists taking part in the clinical study was improved when using MammoScreen 2.0, with the average AUC going from 0.79 to 0.83.
The performance of the standalone MammoScreen on DBT (AUC = 0.84) was found to be superior to the average performance of unaided radiologists (AUC = 0.79).
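For context, a standalone AUC of this kind is computed by comparing the device's exam-level scores against the cancer ground truth. The snippet below is a hedged sketch on synthetic data, not study data.

```python
# Hedged sketch: how a standalone AUC is computed from device scores vs. ground truth.
# Scores and labels below are synthetic, for illustration only.
from sklearn.metrics import roc_auc_score

ground_truth = [0, 0, 1, 0, 1, 1, 0, 1]          # 1 = biopsy-proven cancer, 0 = non-cancer
mammoscreen_scores = [2, 4, 9, 1, 7, 10, 3, 6]   # exam-level MammoScreen scores (1-10)

print(roc_auc_score(ground_truth, mammoscreen_scores))  # 1.0 for this toy example
```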
Technological characteristics
| | Predicate Device: MammoScreen (K192854) | Subject Device: MammoScreen 2.0 (K211541) | Substantially Equivalent? |
|---|---|---|---|
| Fundamental scientific technology | In MammoScreen, a range of medical image processing and machine learning techniques are implemented. The system includes 'deep learning' modules for recognition of suspicious calcifications and soft tissue lesions. These modules are trained with very large databases of biopsy-proven examples of breast cancer and normal tissue. | SAME | Yes, identical |
The predicate device and the subject device are two versions of the MammoScreen software. They both rely on the same fundamental scientific technology. MammoScreen has been adapted to enable analysis of DBT images. This design change is gauged at the software detailed design level and does not raise different questions of safety and effectiveness by itself.
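As an illustration of what "deep learning modules trained with biopsy-proven examples" can look like in practice, the following is a deliberately tiny, generic PyTorch sketch; it is not Therapixel's architecture, data pipeline, or training procedure.

```python
# Generic illustration only: a toy binary classifier trained on synthetic "patches"
# with labels standing in for biopsy-proven (pathology) ground truth.
# Not MammoScreen's actual model or training code.
import torch
import torch.nn as nn

class ToyLesionClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, 1)  # one malignancy logit per patch

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Synthetic stand-in data: in practice, labels would come from pathology reports
# (1 = biopsy-proven cancer, 0 = normal tissue).
patches = torch.randn(32, 1, 64, 64)
labels = torch.randint(0, 2, (32, 1)).float()

model = ToyLesionClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(3):
    optimizer.zero_grad()
    loss = loss_fn(model(patches), labels)
    loss.backward()
    optimizer.step()
```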
Conclusions
Standalone performance tests with FFDM demonstrate that MammoScreen 2.0 achieves non-inferior performance compared to the predicate device. For application with DBT, a clinical reader study and standalone tests demonstrated that the device is safe and effective.
Therapixel has applied a risk management process in accordance with FDA recognized standards to identify, evaluate, and mitigate all known hazards related to MammoScreen 2.0. These hazards may occur when accuracy of diagnosis is potentially affected, causing either false positives or false negatives. All identified risks are effectively mitigated and it can be concluded that the residual risk is outweighed by the benefits. Considering all data in this submission, the data provided in this 510(k) support the safe and effective use of MammoScreen 2.0 for its indications for use and substantial equivalence to the predicate device.
§ 892.2090 Radiological computer-assisted detection and diagnosis software.
(a) Identification. A radiological computer-assisted detection and diagnostic software is an image processing device intended to aid in the detection, localization, and characterization of fracture, lesions, or other disease-specific findings on acquired medical images (e.g., radiography, magnetic resonance, computed tomography). The device detects, identifies, and characterizes findings based on features or information extracted from images, and provides information about the presence, location, and characteristics of the findings to the user. The analysis is intended to inform the primary diagnostic and patient management decisions that are made by the clinical user. The device is not intended as a replacement for a complete clinician's review or their clinical judgment that takes into account other relevant information from the image or patient history.

(b) Classification. Class II (special controls). The special controls for this device are:

(1) Design verification and validation must include:
(i) A detailed description of the image analysis algorithm, including a description of the algorithm inputs and outputs, each major component or block, how the algorithm and output affects or relates to clinical practice or patient care, and any algorithm limitations.
(ii) A detailed description of pre-specified performance testing protocols and dataset(s) used to assess whether the device will provide improved assisted-read detection and diagnostic performance as intended in the indicated user population(s), and to characterize the standalone device performance for labeling. Performance testing includes standalone test(s), side-by-side comparison(s), and/or a reader study, as applicable.
(iii) Results from standalone performance testing used to characterize the independent performance of the device separate from aided user performance. The performance assessment must be based on appropriate diagnostic accuracy measures (e.g., receiver operator characteristic plot, sensitivity, specificity, positive and negative predictive values, and diagnostic likelihood ratio). Devices with localization output must include localization accuracy testing as a component of standalone testing. The test dataset must be representative of the typical patient population with enrichment made only to ensure that the test dataset contains a sufficient number of cases from important cohorts (e.g., subsets defined by clinically relevant confounders, effect modifiers, concomitant disease, and subsets defined by image acquisition characteristics) such that the performance estimates and confidence intervals of the device for these individual subsets can be characterized for the intended use population and imaging equipment.

(iv) Results from performance testing that demonstrate that the device provides improved assisted-read detection and/or diagnostic performance as intended in the indicated user population(s) when used in accordance with the instructions for use. The reader population must be comprised of the intended user population in terms of clinical training, certification, and years of experience. The performance assessment must be based on appropriate diagnostic accuracy measures (e.g., receiver operator characteristic plot, sensitivity, specificity, positive and negative predictive values, and diagnostic likelihood ratio). Test datasets must meet the requirements described in paragraph (b)(1)(iii) of this section.

(v) Appropriate software documentation, including device hazard analysis, software requirements specification document, software design specification document, traceability analysis, system level test protocol, pass/fail criteria, testing results, and cybersecurity measures.
(2) Labeling must include the following:
(i) A detailed description of the patient population for which the device is indicated for use.
(ii) A detailed description of the device instructions for use, including the intended reading protocol and how the user should interpret the device output.
(iii) A detailed description of the intended user, and any user training materials or programs that address appropriate reading protocols for the device, to ensure that the end user is fully aware of how to interpret and apply the device output.
(iv) A detailed description of the device inputs and outputs.
(v) A detailed description of compatible imaging hardware and imaging protocols.
(vi) Warnings, precautions, and limitations must include situations in which the device may fail or may not operate at its expected performance level (e.g., poor image quality or for certain subpopulations), as applicable.

(vii) A detailed summary of the performance testing, including test methods, dataset characteristics, results, and a summary of sub-analyses on case distributions stratified by relevant confounders, such as anatomical characteristics, patient demographics and medical history, user experience, and imaging equipment.