510(k) Data Aggregation

    K Number: K233714
    Device Name: BreView
    Date Cleared: 2024-01-25 (66 days)
    Product Code:
    Regulation Number: 892.2050
    Reference Devices: K041521, K093234
    Intended Use

    BreView is a tool to aid clinicians with the review of multi-parametric breast magnetic resonance (MR) images. The combination of acquired images, reconstructed images, annotations, and measurements performed by the clinician are intended to provide the referring physician with clinically relevant information that may aid in diagnosis and treatment planning.

    Device Description

    BreView is a dedicated advanced visualization and post-processing tool that streamlines the review and assessment of breast MR data for radiologists, including organizing images and composing reports. Automatic motion correction and subtraction further improve the review process. Software functionalities include:

    • Ability to load 2D, 3D and 4D MR datasets as well as DICOM Secondary Captures (SCPT)
    • Optimized display of multi-parametric images within a dedicated layout
    • Display customization: ability to choose layout, orientation, and rendering mode
    • Guided workflows for reviewing and processing MR breast exams
    • Automated motion correction and/or subtraction of multi-phase datasets (illustrated in the sketch following this list)
    • Multi-planar reformats and maximum intensity projections (MIPs)
    • Semi-automated segmentation and measurement tools
    • Graph view for multi-phase datasets
    • Save and export capabilities through DICOM Secondary Captures
    • Data export in the form of a summary table
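
    As a rough illustration of how several of these operations fit together, here is a minimal NumPy sketch on synthetic data: multi-phase subtraction, a MIP, and a time-intensity "graph view". The array shapes, ROI, and intensity values are illustrative assumptions, not details from the submission.

```python
import numpy as np

# Synthetic multi-phase (4D) breast MR series: (phase, z, y, x).
# Phase 0 is pre-contrast; later phases are post-contrast acquisitions.
rng = np.random.default_rng(0)
series = rng.normal(100.0, 5.0, size=(5, 32, 128, 128)).astype(np.float32)
series[1:, 10:14, 60:70, 60:70] += 80.0  # simulated enhancing lesion

# Subtraction images: each post-contrast phase minus the pre-contrast
# phase (per the summary, BreView can subtract with or without
# motion correction).
subtraction = series[1:] - series[0]

# Maximum intensity projection (MIP) of the first subtraction volume,
# collapsed along the slice axis for an at-a-glance enhancement view.
mip = subtraction[0].max(axis=0)  # shape: (128, 128)

# "Graph view": temporal variation of the mean pixel value inside an
# ROI, i.e. a time-intensity curve across the phases.
roi = (slice(10, 14), slice(60, 70), slice(60, 70))
curve = [float(phase[roi].mean()) for phase in series]
print([round(v, 1) for v in curve])  # intensity rises after phase 0
```
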
    AI/ML Overview

    The provided text does not contain detailed information about specific acceptance criteria or a comprehensive study that proves the device meets acceptance criteria. The document is primarily a 510(k) premarket notification, focusing on demonstrating substantial equivalence to a predicate device rather than presenting a detailed performance study with specific acceptance metrics.

    However, based on the type of information typically expected for such a submission and the limited details provided, I can infer some aspects and highlight what is missing.

    Here's an analysis based on the provided text, structured to address your questions as much as possible, while also noting where information is explicitly not present:

    Context: The device, "BreView," is a medical diagnostic software intended to aid clinicians in reviewing multi-parametric breast MR images. Its core functionalities include image organization, display customization, automated motion correction and/or subtraction, multi-planar reformats, semi-automated segmentation and measurement tools, graph view, and data export. It is marketed as a Class II device.


    1. Table of Acceptance Criteria and Reported Device Performance

    Critique: The document does not provide a specific table of acceptance criteria with corresponding performance metrics. It mentions "Performance testing (Verification, Validation)" and "Bench testing" but lacks quantifiable results against predefined acceptance thresholds.

    However, we can infer some criteria from the "Comparison" table against the predicate device, although these are qualitative comparisons rather than quantitative performance data.

    | Feature/Functionality | Acceptance Criteria (Inferred/Generic) | Reported Device Performance (as described in the document) |
    | --- | --- | --- |
    | Indications for Use | Equivalent or narrower than predicate. | "Equivalent. Both the predicate and proposed device are intended to display and process multi-parametric MR images... BreView is intended for breast exams only, thus BreView has a narrower indication for use than READY View." |
    | Computational Platform | Equivalent to predicate (AW Server). | "Equivalent. Both applications are server based and accessible through connected computer." |
    | Compatible MR Series | DICOM compliant, supporting 2D, 3D, 4D. | "Identical. DICOM compliant MR series including 2D, 3D and 4D MR series." Explicitly lists types of MR exams (T1 DCE, T1, T2, DWI, ADC maps). |
    | Non-rigid registration (NRR) | Identical to predicate's method for motion correction. | "Identical. The non-rigid registration uses the same method as that of Integrated Registration (K093234) for the registration of MR series." "Non-Rigid Registration algorithm was tested and found to be successfully implemented." |
    | Image Subtraction | Equivalent functionality to predicate, with added flexibility. | "Equivalent. Both proposed and predicate devices enable image subtraction although the proposed device allows the user to apply image subtraction without NRR registration while the predicate only allows subtraction after registration." |
    | Multi-planar reformats & MIP | Equivalent rendering methods to predicate. | "Equivalent. Imaging Fabric rendering methods are used in cleared applications like CardIQ Suite (K213725)." |
    | Smart Brush Tool (Segmentation) | Equivalent or better semi-automated segmentation. | "Equivalent, both the predicate and proposed device include semi-automated contouring tools... End user evaluation of the brush revealed that its behavior was equivalent or better to that of AutoContour (VV Apps K041521)." |
    | Graph View | Equivalent capability for temporal variation display. | "Equivalent. Both applications have the capability to display in a graphical form the temporal variation of pixel values in a temporal series." |
    | Data Export | Equivalent DICOM Secondary Capture export. | "Equivalent. Both the predicate and proposed device have capabilities to export the information displayed in the application." |
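
    The submission states only that BreView reuses the non-rigid registration method of Integrated Registration (K093234); the algorithm itself is not disclosed. As a hedged sketch of the general technique (deformable registration of a post-contrast phase onto the pre-contrast phase, followed by subtraction), the snippet below uses SimpleITK's demons filter as a generic stand-in. The filter choice, parameters, and file names are assumptions, not the cleared method.

```python
import SimpleITK as sitk

# Hypothetical pre- and post-contrast volumes (file names illustrative).
fixed = sitk.ReadImage("pre_contrast.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("post_contrast.nii.gz", sitk.sitkFloat32)

# Match intensity histograms so the demons similarity measure is
# meaningful across phases with different contrast uptake.
matcher = sitk.HistogramMatchingImageFilter()
matcher.SetNumberOfHistogramLevels(1024)
matcher.SetNumberOfMatchPoints(7)
matcher.ThresholdAtMeanIntensityOn()
moving_matched = matcher.Execute(moving, fixed)

# Generic deformable (non-rigid) registration as a stand-in; the
# actual BreView algorithm is not described in the 510(k) summary.
demons = sitk.FastSymmetricForcesDemonsRegistrationFilter()
demons.SetNumberOfIterations(50)
demons.SetStandardDeviations(1.5)  # Gaussian smoothing of the field
displacement_field = demons.Execute(fixed, moving_matched)

# Warp the original post-contrast phase, then subtract the pre-contrast.
transform = sitk.DisplacementFieldTransform(displacement_field)
warped = sitk.Resample(moving, fixed, transform, sitk.sitkLinear,
                       0.0, moving.GetPixelID())
subtraction = sitk.Subtract(warped, fixed)
sitk.WriteImage(subtraction, "motion_corrected_subtraction.nii.gz")
```
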

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: Not specified. The document mentions "Bench testing was conducted on a database of exams representative of the clinical scenarios where BreView is intended to be used, with consideration of acquisition protocols and clinical indicators." However, no numerical sample size (e.g., number of cases, number of images) is provided for this test set.
    • Data Provenance: Not specified. There is no mention of the country of origin of the data or whether the data was retrospective or prospective.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    • Number of Experts: Not specified.
    • Qualifications of Experts: Not specified. The document mentions "End user evaluation of the brush," implying radiologists or similar clinical professionals, but their specific qualifications or number are not detailed.

    4. Adjudication Method for the Test Set

    • Adjudication Method: None stated. The document does not describe any specific adjudication method (e.g., 2+1, 3+1 consensus) used for establishing ground truth or evaluating performance.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • MRMC Study Conducted: No. The document does not describe a formal MRMC study where human readers' performance with and without AI assistance was evaluated. The evaluation appears to be primarily focused on the device's technological equivalence rather than its impact on human reader effectiveness.
    • Effect Size of Improvement: Not applicable, as no MRMC study was described.

    6. Standalone (Algorithm Only Without Human-in-the-Loop) Performance

    • Standalone Performance Evaluation: Implied, but not explicitly quantified with metrics. The statement, "BreView Non-Rigid Registration algorithm was tested and found to be successfully implemented," suggests a standalone evaluation of this specific algorithm. However, no quantitative performance metrics (e.g., accuracy, precision, Dice score for segmentation, registration error) are provided for any standalone algorithm. The "equivalent or better" statement for the Smart Brush tool is a comparative qualitative assessment based on "End user evaluation," not a formal standalone study result.
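
    For context, a standalone evaluation would normally report quantitative metrics of exactly the kind this document omits. Below is a minimal sketch, assuming NumPy and synthetic inputs, of two such metrics: the Dice coefficient for segmentation overlap and a landmark-based target registration error.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

def target_registration_error_mm(moved_pts, truth_pts, spacing_mm) -> float:
    """Mean Euclidean distance (mm) between corresponding landmarks,
    given voxel coordinates and voxel spacing in millimetres."""
    diff = (np.asarray(moved_pts) - np.asarray(truth_pts)) * np.asarray(spacing_mm)
    return float(np.linalg.norm(diff, axis=1).mean())

# Illustrative use with synthetic masks and landmarks.
truth = np.zeros((64, 64), dtype=bool); truth[20:40, 20:40] = True
pred = np.zeros((64, 64), dtype=bool); pred[22:40, 20:42] = True
print(f"Dice: {dice_coefficient(pred, truth):.3f}")
print(f"TRE:  {target_registration_error_mm([[10, 10, 5]], [[11, 10, 5]],
                                             [0.8, 0.8, 1.0]):.2f} mm")
```
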

    7. Type of Ground Truth Used

    • Type of Ground Truth: The document does not explicitly define how ground truth was established for "bench testing." For functionalities like non-rigid registration and image subtraction, the "ground truth" might be implied by successful implementation against a known good algorithm (the predicate's method). For the "Smart Brush tool," the ground truth seems to be based on expert (end-user) evaluation comparing its behavior to the predicate's tool. There's no mention of pathology, clinical outcomes data, or a formal consensus process.

    8. Sample Size for the Training Set

    • Sample Size for Training Set: Not specified. The document only mentions "Bench testing was conducted on a database of exams representative of the clinical scenarios," which refers to a test set, not a training set. Given that this is a 510(k) submission and the device is presented as applying "the same fundamental scientific technology as its predicate device," new deep-learning training data may simply not have been a focus of this submission; the core algorithms may be established or rule-based.

    9. How Ground Truth for the Training Set Was Established

    • Ground Truth Establishment for Training Set: Not applicable / Not specified. Since no training set or related machine learning components requiring explicit ground truth labeling were discussed in detail, this information is not provided.