DenSeeMammo is a software application intended for use with digital mammography systems. DenSeeMammo estimates a BI-RADS breast density value by analyzing processed digital 2D mammograms using a fully automated comparison procedure.
DenSeeMammo provides a BI-RADS breast density 5th Edition category to aid radiologists in the assessment of breast density.
DenSeeMammo produces adjunctive information. It is not an interpretive or diagnostic aid, and the final assessment of the breast density category is made by an MQSA-qualified interpreting physician.
DenSeeMammo core software has been built and tested on OS X based computers.
DenSeeMammo graphical user interface software has been built and tested on Windows, OS X and Linux based computers.
DenSeeMammo is compatible with images obtained from GE Senographe Essential systems.
DenSeeMammo analyzes processed digital 2D mammograms in a fully automated comparison procedure that produces a BI-RADS breast density value.
DenSeeMammo handles processed images extracted from DICOM files as input.
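As a rough illustration of this input path (not DenSeeMammo's actual implementation), a processed 2D mammogram can be read out of a DICOM file with the open-source pydicom library. The attribute keywords below are standard DICOM; the file name is hypothetical.

```python
# Minimal sketch of extracting a processed 2D mammogram from a DICOM file.
# Uses the open-source pydicom library; "exam_001.dcm" is a hypothetical path.
import pydicom

ds = pydicom.dcmread("exam_001.dcm")

# A "processed" (for-presentation) digital mammogram is identified by
# standard DICOM attributes: Modality "MG" and Presentation Intent Type.
assert ds.Modality == "MG", "not a mammography image"
assert ds.PresentationIntentType == "FOR PRESENTATION", "raw image, not processed"

pixels = ds.pixel_array          # 2D numpy array of the processed image
laterality = ds.ImageLaterality  # "L" or "R"
view = ds.ViewPosition           # e.g., "CC" or "MLO"
print(pixels.shape, laterality, view)
```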
DenSeeMammo software is a component that accepts digital mammography images as input. The software processes and analyzes the image according to proprietary algorithms which allow comparison to qualified databases containing images previously quoted by radiologists.
For each patient, it provides a BI-RADS breast density category. DenSeeMammo provides results per patient based on the maximum density category of the two breasts. The patient population is symptomatic and asymptomatic women undergoing mammography. The software performs image display, but only for illustrative purposes and not for interpretation or diagnosis.
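Only the per-patient reporting rule is spelled out here (report the maximum density category of the two breasts); the proprietary per-breast estimation itself is not described. A minimal sketch of that stated rule, with illustrative names:

```python
# Sketch of the stated per-patient rule: the patient's BI-RADS density is the
# maximum (denser) of the two per-breast categories. Names are illustrative.
BI_RADS_ORDER = {"a": 0, "b": 1, "c": 2, "d": 3}  # 5th Edition density categories

def patient_density(left: str, right: str) -> str:
    """Return the denser of the two per-breast BI-RADS categories."""
    return max(left, right, key=BI_RADS_ORDER.__getitem__)

print(patient_density("b", "c"))  # -> "c"
```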
Here's a summary of the acceptance criteria and study information for DenSeeMammo v1.0, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance:
The document broadly states that "established acceptance criteria was met for all of the tests conducted." However, it does not provide specific quantitative acceptance criteria values or detailed performance metrics for DenSeeMammo v1.0. It only describes the types of tests performed.
| Test Type | Acceptance Criteria (not stated quantitatively) | Reported Device Performance (not stated quantitatively) |
|---|---|---|
| Reproducibility (repeated runs over the same data sets) | Consistency of results | Successful (criteria met) |
| Reproducibility (left/right breast density comparison) | Consistency of results | Successful (criteria met) |
| Comparison to visual assessment by MQSA radiologists | Agreement with expert ratings | Successful (criteria met) |
| Run over substantial data sets (previously assessed) | Agreement with expert ratings on a larger scale | Successful (criteria met) |
| Beta site testing (integration and usability) | Successful integration into existing systems and usability for target users | Successful (criteria met) |
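As a rough sketch of what the repeated-run reproducibility test in the table amounts to (the document gives no protocol details), the check below runs a density estimator twice over the same exams and verifies the outputs agree; `estimate_density` is a hypothetical stand-in for the algorithm under test.

```python
# Sketch of the repeated-run reproducibility check described in the table.
# `estimate_density(exam)` is a hypothetical stand-in for the algorithm under
# test; it should return a BI-RADS category ("a"-"d") for one exam.
def check_reproducibility(exams, estimate_density):
    first_pass = [estimate_density(exam) for exam in exams]
    second_pass = [estimate_density(exam) for exam in exams]
    # A deterministic, fully automated procedure must agree with itself.
    return all(a == b for a, b in zip(first_pass, second_pass))
```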
2. Sample Size Used for the Test Set and Data Provenance:
- Sample Size for Test Set: The document mentions "a data sets of images" [sic] and "a sample of exams" for the reproducibility tests, and "substantial data sets of images" for the comparison with expert assessments. However, specific numerical sample sizes are not provided for any of these test sets.
- Data Provenance: Not explicitly stated (e.g., country of origin). The document indicates the data used were "previously visually assessed by MOSA radiologists," implying real-world mammograms. It does not clarify if the data was retrospective or prospective.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:
- Number of Experts: Not explicitly stated. The document consistently refers to "MQSA radiologists" or "MOSA radiologists" (likely a typo for MQSA), implying multiple qualified experts.
- Qualifications of Experts: "MQSA qualified interpreting physician" or "MQSA radiologists." MQSA (Mammography Quality Standards Act) qualified indicates they are certified for mammography interpretation in the US. Specific years of experience are not mentioned.
4. Adjudication Method for the Test Set:
Not explicitly stated. The document only mentions that DenSeeMammo results were "compared to visual assessment from MQSA radiologists" and run on data "previously visually assessed by MOSA radiologists." It does not describe how discrepancies among multiple radiologists (if present) were resolved.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done:
No, an MRMC comparative effectiveness study demonstrating how human readers improve with AI assistance versus without it was not reported. The study focused on the performance of the DenSeeMammo software itself compared to expert assessment. The device is explicitly described as producing "adjunctive information. It is not an interpretive or diagnostic aid, and the final assessment of the breast density category is made by an MQSA-qualified interpreting physician."
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done:
Yes, the verification bench testing and the clinical validation testing appear to describe standalone performance evaluations. DenSeeMammo was "run over twice on a data sets of images to test reproducibility" [sic] and "was run over substantial data sets of images." The core function is a "fully automated comparison procedure." The results from the device were then compared to expert assessment, suggesting the algorithm's standalone output was the subject of evaluation.
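For concreteness, a standalone evaluation of this kind reduces to comparing the algorithm's categories against the radiologists' categories over the same exams. The sketch below computes raw percent agreement and unweighted Cohen's kappa by hand; the document does not say which metrics were actually used, so these are illustrative choices only.

```python
# Illustrative agreement metrics for a standalone evaluation: the algorithm's
# BI-RADS categories vs. expert categories over the same exams.
from collections import Counter

def percent_agreement(algo, experts):
    """Fraction of exams where the algorithm matches the expert category."""
    return sum(a == e for a, e in zip(algo, experts)) / len(algo)

def cohens_kappa(algo, experts):
    """Unweighted Cohen's kappa: observed agreement corrected for chance."""
    n = len(algo)
    po = percent_agreement(algo, experts)
    ca, ce = Counter(algo), Counter(experts)
    pe = sum(ca[k] * ce[k] for k in set(ca) | set(ce)) / (n * n)
    return (po - pe) / (1 - pe)

print(cohens_kappa(list("aabcd"), list("abbcd")))  # -> ~0.737
```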
7. The Type of Ground Truth Used:
The ground truth used was expert consensus/visual assessment by MQSA qualified radiologists. The device's results were compared to "qualified databases containing images previously quoted by radiologists" and "visual assessment from MQSA radiologists."
8. The Sample Size for the Training Set:
The sample size for the training set is not mentioned in the provided document. The document refers to "qualified databases" that the algorithm uses for comparison, but it does not specify the size or details of these databases or how they relate to the development/training of the software.
9. How the Ground Truth for the Training Set Was Established:
The document states that the software "analyzes the image according to proprietary algorithms which allow comparison to qualified databases containing images previously quoted by radiologists." This implies that the ground truth for these "qualified databases" was established by radiologists' assessments. However, further details on the process of establishing this ground truth for the training data (e.g., number of radiologists, adjudication, specific criteria) are not provided.
§ 892.2050 Medical image management and processing system.
(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).