
510(k) Data Aggregation

    K Number: K201411
    Date Cleared: 2021-01-29 (246 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices:
    Predicate For: N/A
    Intended Use

    Visage Breast Density is a software application intended for use with compatible full field digital mammography and digital breast tomosynthesis systems. Visage Breast Density assesses breast density from a mammography study and provides an ACR BI-RADS Atlas 5th Edition breast density category to aid radiologists in the assessment of breast tissue composition. Visage Breast Density produces adjunctive information. It is not a diagnostic aid.

    Device Description

    Visage Breast Density is a software application that assesses breast density from a mammography study and provides a density category A, B, C, or D according to the ACR BI-RADS Atlas 5th Edition to aid radiologists in the assessment of breast tissue composition.

    Visage Breast Density employs a convolutional neural network (CNN) for the automatic classification of breast density. The CNN has been trained on a large database of mammography exams. When applied to a mammography image, the CNN computes four likelihoods corresponding to the four breast density categories. The classifications of the individual images are merged into a general classification of the mammography study.
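    The merging step described above (per-image category likelihoods combined into one study-level label) can be sketched as follows. This is a plausible illustration only: averaging the likelihoods and taking the argmax is one common merging rule, but the document does not specify the rule the device actually uses, and the example values are invented.

```python
import numpy as np

# BI-RADS Atlas 5th Edition breast density categories.
CATEGORIES = ["A", "B", "C", "D"]

def classify_study(image_likelihoods: np.ndarray) -> str:
    """Merge per-image category likelihoods into one study-level category.

    `image_likelihoods` has shape (n_images, 4): one row per image in the
    mammography study, one column per density category. Averaging the rows
    and taking the argmax is one plausible merging rule (assumed here).
    """
    study_likelihoods = image_likelihoods.mean(axis=0)
    return CATEGORIES[int(np.argmax(study_likelihoods))]

# Hypothetical likelihoods for the four standard views of one study.
views = np.array([
    [0.05, 0.70, 0.20, 0.05],
    [0.10, 0.60, 0.25, 0.05],
    [0.05, 0.55, 0.35, 0.05],
    [0.05, 0.65, 0.25, 0.05],
])
print(classify_study(views))  # B
```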

    Visage Breast Density is designed as an add-on module to the Visage 7 product for distributing, viewing, processing, and archiving medical images. The assessment of breast density is performed from mammography studies stored on the Visage 7 server. The resulting breast density classification is displayed by the Visage 7 client on a computer monitor and stored in the database on the Visage 7 server.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study demonstrating device performance for Visage Breast Density, based on the provided document:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document states that the acceptance criteria were defined by comparing the performance of Visage Breast Density to that of the predicate device, PowerLook Density Assessment. Specifically, the acceptance criteria are implicit in the statement: "Visage Breast Density achieved similar accuracies per category and similar total accuracies compared to the predicate device."

    While explicit numerical acceptance criteria (e.g., "accuracy must be >= X%") are not provided, the "reported device performance" is the claim of "similar accuracies per category and similar total accuracies compared to the predicate device."

    Therefore, the table would look like this:

    Acceptance Criterion | Reported Device Performance (Visage Breast Density)
    Similar accuracies per category compared to predicate device | Achieved similar accuracies per category compared to the predicate device.
    Similar total accuracies compared to predicate device | Achieved similar total accuracies compared to the predicate device.
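    The comparison behind these criteria can be made concrete with per-category and total accuracy against the consensus ground truth. This is a sketch of the standard computation; the document reports no actual numbers, and the example labels are invented.

```python
from collections import Counter

def accuracies(predictions, ground_truth, categories=("A", "B", "C", "D")):
    """Per-category and total accuracy of predicted density categories.

    Per-category accuracy here means the fraction of ground-truth cases of
    each category that were labeled correctly (per-category recall).
    """
    correct, total = Counter(), Counter()
    for pred, truth in zip(predictions, ground_truth):
        total[truth] += 1
        if pred == truth:
            correct[truth] += 1
    per_category = {c: correct[c] / total[c] for c in categories if total[c]}
    overall = sum(correct.values()) / len(ground_truth)
    return per_category, overall

# Invented example labels for six studies.
preds = ["A", "B", "B", "C", "D", "C"]
truth = ["A", "B", "C", "C", "D", "C"]
per_cat, total_acc = accuracies(preds, truth)
print(per_cat)    # {'A': 1.0, 'B': 1.0, 'C': 0.666..., 'D': 1.0}
print(total_acc)  # 0.833...
```

    The same computation run on the device's outputs and on the predicate's outputs over the same test sets would yield the "accuracies per category" and "total accuracies" being compared.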

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size:
      • Test Set 1: 500 studies
      • Test Set 2: 700 studies
      • Total Test Set Size: 1200 studies
    • Data Provenance: "two different sites." The country of origin is not explicitly stated, but given the company's German location (Visage Imaging GmbH, Berlin, Germany), it's plausible the data is from Europe, potentially Germany. The document does not specify if the data was retrospective or prospective, but it's common for such studies to use retrospective data.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • Number of Experts: Three board-certified radiologists per site were used to establish the consensus ground truth. Since there were two sites, it implies a total of 6 unique radiologists (3 per site).
    • Qualifications: "Three board certified radiologists with MQSA qualification per site." MQSA (Mammography Quality Standards Act) qualification is a US standard, which might suggest US sites or radiologists with equivalent qualifications.

    4. Adjudication Method for the Test Set

    The adjudication method was consensus. "the consensus of the three reviewers was determined for each study." This implies that the three radiologists reviewed each case, and their agreement (or a process to resolve disagreement) led to the final ground truth label. A common consensus method is a majority vote (e.g., 2 out of 3 agree).
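    A majority-vote consensus of this kind can be sketched as below. Note this is an assumption about the mechanism: the document says only that "the consensus of the three reviewers was determined," not how disagreements were resolved.

```python
from collections import Counter

def consensus(labels):
    """Majority-vote consensus among reviewer labels.

    Returns the label chosen by at least 2 of 3 reviewers, or None when all
    three disagree (such cases would need a further adjudication step that
    the document does not describe).
    """
    label, count = Counter(labels).most_common(1)[0]
    return label if count >= 2 else None

print(consensus(["B", "B", "C"]))  # B
print(consensus(["A", "B", "C"]))  # None
```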

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and If So, the Effect Size of Human Reader Improvement With vs. Without AI Assistance

    No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not conducted to assess how human readers improve with AI assistance. The study focuses on the standalone performance of the AI by comparing its output to a human-established ground truth. The device is described as providing "adjunctive information," not as a diagnostic aid that would directly assist human reader performance.

    6. If a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Study Was Done

    Yes, a standalone performance evaluation of the algorithm was done. The study assessed "The predicted breast density category of Visage Breast Density... related to the ground truth from the clinical reports and the consensus of the three reviewers." This directly measures the algorithm's performance independent of human input during the assessment process itself.

    7. The Type of Ground Truth Used

    The ground truth used was a combination of:

    • Expert Consensus: "consensus of the three reviewers" (radiologists).
    • Clinical Reports: "ground truth from the clinical reports."

    This suggests the radiologists reviewed the clinical reports and then formed a consensus, or perhaps the clinical reports served as an initial "gold standard" which was then validated/adjudicated by the expert radiologists.

    8. The Sample Size for the Training Set

    The sample size for the training set is not explicitly stated. The document only mentions: "The CNN has been trained on a large database of mammography exams."

    9. How the Ground Truth for the Training Set was Established

    The document does not explicitly describe how the ground truth for the training set was established. It only states that the CNN was "trained on a large database of mammography exams." Typically, for such training, the ground truth would also be established by expert radiologists, likely with similar methods as the test set (consensus or single expert review).


    K Number: K142196
    Device Name: Visage Ease Pro
    Date Cleared: 2015-04-28 (260 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices:
    Predicate For: N/A
    Intended Use

    Visage Ease Pro is a mobile client for diagnostic image viewing of radiological images from the following modalities: Xray, CT, MRI, PET, SPECT, Ultrasound and XA. It is based on the Visage 7 product for distributing, viewing, processing, and archiving medical images within and outside health care environments.

    Visage Ease Pro must only be used by trained health care professionals. It may support physicians and/or the medical staff by providing mobile access to relevant medical images. Any diagnostic decision resides with the doctors and/or the medical staff in their respective area of responsibility.

    Visage Ease Pro is not intended to replace full radiologic reading workstations. It must not be used in the context of diagnostic or therapeutic decisions if a radiologic reading workstation with appropriate display hardware is available. The user must make sure that the reading environment complies with the applicable diagnostic requirements and the state of the art.

    Visage Ease Pro must not be used for primary image diagnosis in mammography or digital breast tomosynthesis.

    Device Description

    Visage Ease Pro is a mobile client for diagnostic image viewing of radiological images, image review by clinicians and image display for illustration and educational purposes. It is based on the Visage 7 product for distributing, viewing, processing, and archiving medical images within and outside health care environments.

    Visage Ease Pro has a graphical user interface which is optimized for mobile devices with a touch screen. The app allows searching for studies and viewing images and reports. The user may zoom and pan images, adjust the window level, browse through a stack of images or play a cine animation. Patient and image information is displayed as viewer text. The app supports voice memos, image attachments and push notifications.
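    The window-level adjustment mentioned above is, in its simplest form, a linear mapping from raw pixel values to display grayscale controlled by a window center and width. The sketch below shows that standard mapping; the viewer's actual implementation (e.g., full DICOM VOI LUT handling) is not described in the document.

```python
import numpy as np

def apply_window_level(pixels: np.ndarray, center: float, width: float) -> np.ndarray:
    """Map raw pixel values to 8-bit display grayscale via window center/width.

    Values below the window map to 0, values above it to 255, and values
    inside the window are scaled linearly (simplified linear windowing).
    """
    lo = center - width / 2.0
    scaled = (pixels.astype(np.float64) - lo) / width * 255.0
    return np.clip(scaled, 0, 255).astype(np.uint8)

# Example: a window with center 40 and width 400 applied to raw values.
raw = np.array([-200.0, 40.0, 240.0, 500.0])
print(apply_window_level(raw, center=40, width=400))  # [  0 127 255 255]
```

    Narrowing the width increases contrast within the window; shifting the center changes which intensity range is emphasized.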

    Visage Ease Pro is designed as a thin client in a single module and allows remote access to the images on a Visage 7 server. The communication between the mobile client and the server is encrypted. The user must authenticate with a username and password. A connection can only be established if the user has the appropriate permissions for using the mobile client. These permissions are configurable on the Visage 7 server.
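    The access rule described in that paragraph (successful authentication plus a server-configured mobile-client permission) can be sketched as a server-side check. All names here are illustrative, not the actual Visage 7 API.

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    password_ok: bool        # result of the username/password check
    permissions: frozenset   # permissions configured on the server

def can_open_mobile_session(user: User) -> bool:
    """A connection is established only when the user authenticates
    successfully AND holds the mobile-client permission (hypothetical
    permission name) configured on the server."""
    return user.password_ok and "mobile_client" in user.permissions

print(can_open_mobile_session(User("dr_a", True, frozenset({"mobile_client"}))))  # True
print(can_open_mobile_session(User("dr_b", True, frozenset())))                   # False
```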

    AI/ML Overview

    The provided document does not contain the specific acceptance criteria and detailed study results that would typically be found in a performance study report. This document is a 510(k) summary for the Visage Ease Pro, which primarily focuses on demonstrating substantial equivalence to predicate devices rather than providing a detailed performance study with quantitative acceptance criteria and results against those criteria.

    However, based on the information available, I can extract and infer some details:

    1. A table of acceptance criteria and the reported device performance

    The document does not explicitly state quantitative acceptance criteria or report specific performance metrics like sensitivity, specificity, or reader agreement percentages. Instead, the "clinical validation" described is qualitative and focused on functional equivalence.

    Acceptance Criterion (Inferred from "Summary of testing") | Reported Device Performance
    Primary operating functions are equivalent to predicate devices for clinical use cases | Clinically validated as equivalent to predicate devices Visage 7® and Mobile MIM™
    Loading of images | Functionality is equivalent
    Selecting a series of images | Functionality is equivalent
    Adjusting the window level | Functionality is equivalent
    Zooming and panning an image | Functionality is equivalent
    Browsing through a stack of images | Functionality is equivalent
    Playing cine animations | Functionality is equivalent
    No new clinical functionality compared to Visage 7 | Confirmed
    Identical behavior, leading to the same diagnosis as predicate devices | Confirmed by clinical validation

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    • Sample Size for Test Set: Not explicitly stated. The document mentions "The data sets for the modalities CT, CR, DX, MRI, PET, SPECT, US, and XA are viewed with the predicate devices and Visage Ease Pro." This implies a set of diverse radiological images, but the exact number is not provided.
    • Data Provenance: Not explicitly stated.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    Not explicitly stated. The "clinical validation" implies expert review, but the number and qualifications of the experts or how ground truth was established are not detailed in this summary.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    Not explicitly stated. Given the qualitative nature of the validation described, it's unlikely a formal adjudication method for ground truth was applied in a traditional sense for quantitative performance metrics. The validation seems to involve expert comparison of functionalities.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of human reader improvement with vs. without AI assistance

    No such study was mentioned. The device is a mobile viewing client and not an AI-powered diagnostic tool, so an MRMC comparative effectiveness study regarding "human readers improve with AI" would not be applicable here. The validation focused on functional equivalence to other viewing platforms.

    6. If a standalone (i.e. algorithm-only, without human-in-the-loop) performance study was done

    This question is not entirely applicable as the Visage Ease Pro is a mobile client for viewing images, not an algorithm that performs standalone diagnostic functions. Its performance is tied to its ability to display images accurately and provide viewing functionalities comparable to predicate devices. The "standalone" performance, in this context, would be the consistent and correct execution of its stated functionalities. The document states it was "clinically validated" for its primary operating functions, which is a form of standalone functional verification.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    Not explicitly stated. Given the device's function as an image viewer, the "ground truth" for its functional validation would likely be the correctness and completeness of image display and manipulation features compared to the known characteristics of the images and the performance of the predicate devices. This would involve technical specifications and potentially expert-confirmed visual accuracy.

    8. The sample size for the training set

    This device appears to be a software client for viewing images rather than a machine learning algorithm that requires a training set in the typical sense. Therefore, "training set" is not applicable.

    9. How the ground truth for the training set was established

    Not applicable (see point 8).

    In Summary:

    The provided document is a 510(k) summary for a medical image viewing client. Its "acceptance criteria" and "study" are primarily focused on demonstrating substantial equivalence to existing predicate devices in terms of functionality for viewing radiological images. It does not contain quantitative performance metrics, detailed study designs (like MRMC), or information typically associated with AI/CAD systems that require extensive annotated datasets and statistical performance evaluations. The clinical validation appears to be a qualitative assessment of the device's ability to perform its viewing functions similarly to already cleared devices.
