
Search Results

Found 4 results

510(k) Data Aggregation

    K Number
    K210670
    Device Name
    BU-CAD
    Date Cleared
    2021-12-21

    (291 days)

    Product Code
    Regulation Number
    892.2090
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    TaiHao Medical Inc.

    Intended Use

    BU-CAD is a software application indicated to assist trained interpreting physicians in analyzing the breast ultrasound images of patients with soft tissue breast lesions suspicious for breast cancer who are being referred for further diagnostic ultrasound examination.

    Output of the device includes regions of interest (ROIs) and lesion contours placed on breast ultrasound images assisting physicians to identify suspicious soft tissue lesions from up to two orthogonal views of a single lesion, and region-based analysis of lesion malignancy upon the physician's query. The region-based analysis indicates the score of lesion characteristics (SLC), and corresponding BI-RADS categories in user-selected ROIs or ROIs automatically identified by the software. In addition, BU-CAD also automatically classifies lesion shape, orientation, margin, echo pattern, and posterior features according to BI-RADS descriptors.

    BU-CAD may also be used as an image viewer of multi-modality digital images, including ultrasound and mammography. The software includes tools that allow users to adjust, measure and document images, and output into a structured report (SR).

    Patient management decisions should not be made solely on the basis of analysis by BU-CAD.

    Limitations: BU-CAD is not to be used on sites of post-surgical excision, or images with Doppler, elastography, or other overlays present in them. BU-CAD is not intended for the primary interpretation of digital mammography images. BU-CAD is not intended for use on mobile devices.

    Device Description

    BU-CAD developed by TaiHao Medical Inc. is a software system designed to assist users in analyzing breast ultrasound images including identification of regions suspicious for breast cancer and assessment of their malignancy. The system consists of a Viewer, a Lesion Identification Module, and a Lesion Analysis Module. The Viewer loads breast ultrasound and mammography images from local storage or PACS for review, and includes tools for measurement and image adjustment. The Lesion Identification Module identifies automated ROIs and generates lesion contours on breast ultrasound images. The Lesion Analysis Module analyzes given ROIs and generates a score of lesion characteristics (SLC), BI-RADS category, and BI-RADS descriptors. Users can replace automated ROIs with re-delineated rectangular ROIs for analysis. The last analysis results are displayed and modifiable by the user. BU-CAD also supports exporting CAD results to third-party reporting software.
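The three-module flow described above can be sketched as a minimal pipeline. Every class, field, and value below is hypothetical, illustrating the described data flow (automated ROI identification, ROI analysis, user-replaceable ROIs), not TaiHao's actual software:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ROI:
    # Rectangular region of interest on a breast ultrasound image.
    x: int
    y: int
    w: int
    h: int
    source: str = "auto"  # "auto" (identification module) or "user"

@dataclass
class Analysis:
    slc: float         # score of lesion characteristics (SLC)
    birads: str        # BI-RADS category, e.g. "4A"
    descriptors: dict  # shape, orientation, margin, echo pattern, posterior

def identify_rois(image) -> list:
    """Stand-in for the Lesion Identification Module (CADe):
    returns automated ROIs with lesion contours."""
    return [ROI(40, 60, 32, 24, source="auto")]  # placeholder output

def analyze_roi(image, roi: ROI) -> Analysis:
    """Stand-in for the Lesion Analysis Module (CADx):
    scores a given ROI and assigns BI-RADS outputs."""
    return Analysis(slc=0.72, birads="4A",
                    descriptors={"shape": "irregular"})  # placeholder output

def review(image, user_roi: Optional[ROI] = None) -> Analysis:
    # Users may replace the automated ROI with a re-delineated rectangle.
    roi = user_roi or identify_rois(image)[0]
    return analyze_roi(image, roi)
```

The `review` helper mirrors the described interaction: the automated ROI is used unless the user re-delineates one.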

    AI/ML Overview

    The provided text describes the acceptance criteria and the study proving the device meets these criteria for the BU-CAD system.

    Here's the breakdown of the information requested:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document implies acceptance criteria by demonstrating performance gains in a comparative study against human readers. While explicit quantitative acceptance criteria for each metric are not stated, the success is determined by statistically significant improvement over unaided human performance and comparable performance to predicate devices. The primary acceptance criterion appears to be superiority of aided performance (AUC_LROC) over unaided performance.

    Metric / Feature | Acceptance Criteria (Implied) | Reported Device Performance (BU-CAD)

    MRMC Study (Aided vs. Unaided)
    AUC_LROC (Mean Shift) | Statistically significant improvement over unaided performance. | +0.0374 (95% CI: 0.0190, 0.0557), p-value: 0.0001 (Unaided AUC: 0.7786, Aided AUC: 0.8160)
    Sensitivity | Higher in aided scenario. | Unaided: 0.9225 (0.8896, 0.9554); Aided: 0.9353 (0.9050, 0.9655)
    Specificity | Higher in aided scenario. | Unaided: 0.3165 (0.2694, 0.3636); Aided: 0.3611 (0.3124, 0.4098)
    NPV (unadjusted) | Higher in aided scenario. | Unaided: 0.8623 (0.8048, 0.9198); Aided: 0.8945 (0.8456, 0.9434)
    PPV (unadjusted) | Higher in aided scenario. | Unaided: 0.4876 (0.4433, 0.5319); Aided: 0.5056 (0.4607, 0.5505)
    False Positives (Unaided to True Negative) | Positive net benefit (reduction in FPs). | Total net benefit: +267 events across 16 readers (790 FP-to-TN vs. 523 TN-to-FP transitions for benign cases).
    Interpretation Time | Decrease in interpretation time. | Statistically significant decrease in readers' interpretation times (~40%).
    BI-RADS Descriptors Accuracy | Improvement in determination for at least one subcategory. | Improved readers' determination for Shape, Orientation, Margin, Echo Pattern, and Posterior Features for at least one subcategory per descriptor (compared to unaided). Unaided vs. aided accuracy: Shape 78.14% vs. 78.92%; Orientation 82.15% vs. 82.20%; Margin 79.22% vs. 77.34%; Echo Pattern 76.49% vs. 66.52%; Posterior Features 66.51% vs. 67.53%. Note: aided Margin and Echo Pattern accuracy decreased, but an overall benefit was claimed in combination with the improved descriptors.

    Standalone Study
    AUC_LROC (628 reader study cases) | Higher than unaided reading performance on the same cases. | 0.7987 (0.7626, 0.8348) (unaided on the same cases: 0.7786)
    AUC_LROC (1,139 standalone study cases) | Acceptable discrimination (AUC > 0.7) and robust performance. | 0.8203 (0.7947, 0.8458). Overall "excellent" or "outstanding" discrimination (AUC_LROC > 0.8 or > 0.9) across most subgroups, with some "acceptable" (0.7 to 0.8).
    Lesion Identification Module (CADe) Accuracy | High accuracy for automated ROI identification. | 93.24% (1062/1139) met the objective performance criterion (auto ROI center within the ground-truth ROI, with >= 50% overlap).
    Robustness of Lesion Analysis Module (CADx) | Stable AUC despite ROI variations. | AUC remained stable (0.840-0.846) with 20% random ROI shifts; AUC remained > 0.8 with systematic ROI shrinking up to 16%.
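The CADe acceptance rule in the table (automated ROI center inside the ground-truth ROI, with at least 50% overlap) reduces to a simple geometric check. The sketch below is an illustrative reconstruction, not TaiHao's implementation; it assumes axis-aligned rectangles given as (x, y, w, h), and that "overlap" means intersection area as a fraction of the ground-truth ROI area:

```python
def rect_intersection_area(a, b):
    # a, b: (x, y, w, h) axis-aligned rectangles.
    left, top = max(a[0], b[0]), max(a[1], b[1])
    right = min(a[0] + a[2], b[0] + b[2])
    bottom = min(a[1] + a[3], b[1] + b[3])
    if right <= left or bottom <= top:
        return 0.0  # rectangles do not intersect
    return (right - left) * (bottom - top)

def roi_accepted(auto_roi, gt_roi, min_overlap=0.5):
    """Accept an automated ROI if its center lies inside the ground-truth
    ROI and the overlap (assumed: intersection over ground-truth area)
    is at least `min_overlap`."""
    cx = auto_roi[0] + auto_roi[2] / 2.0
    cy = auto_roi[1] + auto_roi[3] / 2.0
    center_inside = (gt_roi[0] <= cx <= gt_roi[0] + gt_roi[2]
                     and gt_roi[1] <= cy <= gt_roi[1] + gt_roi[3])
    overlap = rect_intersection_area(auto_roi, gt_roi) / (gt_roi[2] * gt_roi[3])
    return center_inside and overlap >= min_overlap
```

Under this reading, a per-case pass/fail tally over the 1,139 standalone cases would yield the reported 93.24% figure.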

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size:
      • MRMC Reader Study: 628 cases
      • Standalone Study: 1139 cases (which includes the 628 reader study cases plus 511 additional extended cases).
    • Data Provenance:
      • MRMC Reader Study: 456 cases from the United States, 172 cases from Taiwan.
      • Standalone Study: 531 cases from North America, 36 cases from Europe, 572 cases from Taiwan.
    • Retrospective or Prospective: The study is clearly stated as a retrospective study ("fully crossed multi-reader multi-case receiver operating characteristic (MRMC-ROC) retrospective study").

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    The document does not explicitly state the number of experts used to establish the ground truth for the test set. However, it mentions an "expert panel" in the context of defining ground truth ROIs for the robustness experiments.

    Qualifications of Experts (Readers in MRMC study, likely similar for GT):

    • 16 Readers participated in the MRMC study.
    • Specialties: 14 Radiologists, 2 Breast Surgeons.
    • Experience: Ranged from 1 year to >30 years of experience (as a radiologist/breast surgeon).
    • Certifications/Training: Most radiologists (13/14) were MQSA certified. 4/14 radiologists had received Breast Image Fellowship training.

    4. Adjudication Method for the Test Set

    The document does not explicitly detail the adjudication method for establishing the definitive ground truth for the test set cases (e.g., how malignancy/benignity or BI-RADS categories were finalized if there were disagreements among initial assessments). However, it mentions "Pathology proof benign," "Two-year follow-up benign," and specific malignant pathology types (DCIS, IDC, ILC, Other cancer types) as the basis for benign/malignant case classification in the dataset demographics. This suggests pathology and clinical follow-up as the primary ground truth, not necessarily a reader adjudication process per se, for the malignancy outcome.

    For the reader study itself, it was a "fully crossed" MRMC-ROC study where readers evaluate cases independently, both unaided and aided. The performance metrics (AUC, sensitivity, specificity, etc.) are derived from comparing individual reader's interpretations against the established ground truth.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, If So, What Was the Effect Size of How Much Human Readers Improve With AI vs Without AI Assistance

    • Yes, an MRMC comparative effectiveness study was done.
    • Effect Size of Improvement:
      • Primary Objective (AUC_LROC Shift): The mean AUC_LROC shift was +0.0374. (Unaided AUC: 0.7786, Aided AUC: 0.8160). This improvement was statistically significant (p-value = 0.0001).
      • Comparison to Predicates: This shift (+0.0374) was stated to be "similar" to Koios DS for Breast (+0.037) and Transpara™ (+0.02).
      • Other Metrics (Aided vs Unaided):
        • Sensitivity: Increased from 0.9225 to 0.9353.
        • Specificity: Increased from 0.3165 to 0.3611.
        • NPV (unadjusted): Increased from 0.8623 to 0.8945.
        • PPV (unadjusted): Increased from 0.4876 to 0.5056.
        • Reduction in False Positives: A net benefit of +267 events (FP to TN) indicating reduction of false positives across all readers for benign cases.
        • Interpretation Time: Decreased by approximately 40%.
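The headline effect sizes above are easy to sanity-check: the AUC shift is the aided minus the unaided mean, and the false-positive net benefit is the count of FP-to-TN transitions minus TN-to-FP transitions, using the figures reported in the summary:

```python
# Figures reported in the 510(k) summary for the MRMC study.
unaided_auc, aided_auc = 0.7786, 0.8160
auc_shift = round(aided_auc - unaided_auc, 4)  # mean AUC_LROC shift

fp_to_tn, tn_to_fp = 790, 523       # benign-case transitions across 16 readers
net_benefit = fp_to_tn - tn_to_fp   # positive value means fewer false positives
```

Both reproduce the reported values: a shift of +0.0374 and a net benefit of +267 events.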

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    • Yes, a standalone study was done.
    • Standalone Performance (AUC_LROC):
      • On the 628 reader study cases: 0.7987 (95% CI: 0.7626, 0.8348)
      • On the larger 1,139 standalone study cases: 0.8203 (95% CI: 0.7947, 0.8458)
    • Standalone Sensitivity & Specificity (using BI-RADS 4A as cutoff):
      • Sensitivity: 88.33% (439/497)
      • Specificity: 57.94% (372/642)
    • Lesion Identification Module (CADe) Accuracy: 93.24% (1062/1139) detected and localized correctly.
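The standalone operating point is reported as raw counts, so the percentages can be reproduced directly:

```python
# Counts reported at the BI-RADS 4A cutoff in the standalone study.
tp, malignant_cases = 439, 497   # malignant cases correctly flagged
tn, benign_cases = 372, 642      # benign cases correctly not flagged

sensitivity_pct = round(100 * tp / malignant_cases, 2)  # 88.33
specificity_pct = round(100 * tn / benign_cases, 2)     # 57.94
```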

    7. The Type of Ground Truth Used (Expert Consensus, Pathology, Outcomes Data, etc.)

    The ground truth for the test set (both reader study and standalone study cases) was primarily established by:

    • Pathology Proof: For malignant cases (Ductal carcinomas in situ (DCIS), invasive ductal carcinoma (IDC), Invasive lobular carcinoma (ILC), and other cancer types) and some benign cases.
    • Two-year Follow-up: For other benign cases.

    This indicates a strong, objective ground truth based on definitive clinical outcomes.

    8. The Sample Size for the Training Set

    The document does not provide the sample size for the training set. It only states that the "testing dataset was not used for training of BU-CAD algorithms."

    9. How the Ground Truth for the Training Set Was Established

    The document does not provide information on how the ground truth for the training set was established. Since the test set ground truth was largely based on pathology and follow-up, it is highly probable that the training data followed similar rigorous ground truth establishment methods, but this is not explicitly stated.


    K Number
    K171709
    Date Cleared
    2017-10-20

    (134 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    TaiHao Medical Inc.

    Intended Use

    BR-FHUS Viewer 1.0 is intended as a standalone software device installed on a standalone Windows-based computer to assist physicians with manipulation and analysis tools in reviewing breast ultrasound images. Images and data are previously recorded from various imaging systems and other sources such as calibrated spatial positioning devices. BR-FHUS Viewer 1.0 provides the capability to visualize two-dimensional ultrasound images along with the scanning paths and probe position information previously stored in the DICOM file.

    Device Description

    BR-FHUS Viewer 1.0 is an electronic image review and reporting software program intended to operate on a Windows-based computer. The device allows review of previously recorded ultrasound examinations, performed using standard ultrasound systems and other sources such as calibrated spatial positioning devices, whose images were recorded digitally. The images are displayed on a computer monitor and can be reviewed individually or as a self-playing sequence, with adjustable playback speed. In addition, the software allows the user to save screenshots as DICOM-compatible files and generate electronic reports.

    AI/ML Overview

    Here's an analysis of the acceptance criteria and study information for the BR-FHUS Viewer 1.0, based on the provided text:

    Important Note: The provided document is a 510(k) summary for a Picture Archiving and Communications System (PACS) device. The primary purpose of such devices is to display and manipulate medical images, not to perform diagnostic analysis or provide AI-driven insights directly. Therefore, the "acceptance criteria" and "study" described here are related to the functional performance and safety of the imaging viewer itself, rather than diagnostic accuracy metrics typically associated with AI algorithms. The document explicitly states that the device is intended to "assist physicians with manipulation and analysis tools in reviewing breast ultrasound images," implying a role in image presentation and review, not automated diagnosis.


    Acceptance Criteria and Reported Device Performance

    Acceptance Criteria Category | Specific Criterion (Implied/Stated) | Reported Device Performance
    Functional Requirements | All functional requirements met. | "All functional requirements have been met."
    Core Function Execution | Core functions execute as expected. | "Core functions execute as expected."
    Validation to Specification | Validation results are within specification. | "The result of these tests demonstrate that BR-FHUS Viewer 1.0 validation is within specification."
    Intended Operation | Device functions as intended, and operation is as expected. | "In all instances, BR-FHUS Viewer 1.0 functioned as intended and the operation observed was as expected."
    Safety and Effectiveness | Device is as safe and effective as predicate devices. | "BR-FHUS Viewer 1.0 is as safe and effective as the predicate devices and is substantially equivalent to existing products on the market today." (Stated by the manufacturer.)
    No New Safety/Effectiveness Issues | No new safety or effectiveness issues raised by features. | "The features provided by BR-FHUS Viewer 1.0 do not in themselves raise new concerns of safety or effectiveness."

    Study Details

    1. Sample Size Used for the Test Set and Data Provenance:

      • Sample Size: Not explicitly stated in terms of patient cases. The testing was conducted using a "breast phantom." This suggests a limited, controlled environment rather than a broad patient data set.
      • Data Provenance: Not applicable in the context of patient data for diagnostic accuracy. The data used for testing was generated from a "breast phantom" in a "simulated work environment."
    2. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts:

      • Number of Experts: Not specified.
      • Qualifications of Experts: Not specified. Given the nature of the device as a viewer (not a diagnostic AI), external expert ground truth might not be the primary focus for its clearance. The "ground truth" here likely refers to the expected functional behavior of the software and accurate display of the phantom's ultrasound images.
    3. Adjudication Method for the Test Set:

      • Not applicable/Not described. The text indicates "internal procedures" and "trained personnel" performed the testing, implying a verification process against predefined functional expectations rather than an adjudication process of expert interpretations.
    4. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance:

      • No, an MRMC comparative effectiveness study was not done. The BR-FHUS Viewer 1.0 is described as a "standalone software device... to assist physicians with manipulation and analysis tools in reviewing breast ultrasound images." It is not an AI diagnostic or assistance tool in the sense of providing automated interpretations or improving human reader diagnostic accuracy. Its function is to facilitate image review.
    5. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done:

      • Yes, in the context of its function as a viewer. The performance data described is purely for the software's ability to display, manipulate, and analyze images as intended. It assesses the software itself against its functional specifications. It's a "standalone" performance assessment of the viewer's capabilities, not a diagnostic algorithm.
    6. The Type of Ground Truth Used:

      • Functional/Technical Ground Truth: The "ground truth" for this device appears to be its pre-defined functional requirements and expected display behavior. Testing against a "breast phantom" would verify that the images from the phantom are accurately displayed and that the manipulation tools work correctly. This is not a "diagnostic ground truth" like pathology for a lesion.
    7. The Sample Size for the Training Set:

      • Not applicable/Not mentioned. As the device is an image viewer and not an AI algorithm performing diagnostic tasks, there is no "training set" in the sense of machine learning.
    8. How the Ground Truth for the Training Set Was Established:

      • Not applicable. (See point 7).

    K Number
    K171309
    Date Cleared
    2017-09-29

    (149 days)

    Product Code
    Regulation Number
    892.1560
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    TaiHao Medical Inc.

    Intended Use

    BR-FHUS Navigation 1.0 is intended as a standalone software device installed on a standalone Windows-based computer to assist physicians with tools for electromagnetic tracking of instruments with respect to breast ultrasound images generated from FDA-cleared handheld ultrasound devices. The device is not intended to be used in the environment of strong magnetic or electromagnetic fields, such as a Magnetic Resonance Imaging (MRI) room. BR-FHUS Navigation 1.0 is indicated for use as an adjunct to handheld breast ultrasound to assist physicians in their scanning process. The scanning paths are displayed on a route map, providing quality control and an overall view of the scanning process.
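The route-map quality control described above amounts to recording tracked probe positions and summarizing how much of the intended scan area they cover. A minimal sketch under stated assumptions (a flat square scan area and a fixed coverage grid, both hypothetical; not the BR-FHUS implementation):

```python
def coverage_fraction(positions, grid_size_mm, cell_mm):
    """Fraction of a square scan area touched by tracked probe positions.

    positions: iterable of (x, y) probe positions in mm;
    grid_size_mm: side length of the (assumed square) scan area;
    cell_mm: resolution of the coverage grid.
    """
    cells_per_side = int(grid_size_mm / cell_mm)
    covered = set()
    for x, y in positions:
        # Ignore positions tracked outside the scan area.
        if 0 <= x < grid_size_mm and 0 <= y < grid_size_mm:
            covered.add((int(x // cell_mm), int(y // cell_mm)))
    return len(covered) / (cells_per_side * cells_per_side)
```

A route map would render the `covered` cells over the scan area; the fraction gives a single quality-control number for the scanning pass.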

    Device Description

    BR-FHUS Navigation 1.0 is a standalone software device installed on a standalone windows-based computer. It requires an off-the-shelf PC computer, computer user interface (keyboard, mouse, display), off-the-shelf image capture device (PCI frame grabber card or USB frame grabber device), and an off-the-shelf 3D electromagnetic tracking system (Ascension Technologies). The system includes software for Position Sensor Monitoring, Image presentation and recording, and user interface, and hardware including a Position Sensor Clip (electromagnetic), Sensor Transmitter, and Control Computer.

    AI/ML Overview

    The provided text describes the BR-FHUS Navigation 1.0 device and its substantial equivalence to a predicate device, the Tractus TissueMapper Image Recording System. However, the document does not contain detailed acceptance criteria or a specific study proving the device meets quantitative acceptance criteria beyond a general statement of "in specification."

    Here's a breakdown of the information that can be extracted and what is missing:


    1. Table of Acceptance Criteria and Reported Device Performance

    Acceptance Criteria: The document does not explicitly state specific quantitative acceptance criteria for performance metrics. It only generally states that "all functional requirements have been met" and "core functions execute as expected."

    Reported Device Performance: The document does not provide numerical results for device performance. It only states: "The result of these tests demonstrates that BR-FHUS Navigation 1.0 validation is within specification. As such, BR-FHUS Navigation 1.0 is as safe and effective as the predicate devices and is substantially equivalent to existing products on the market today." And: "In all instances, BR-FHUS Navigation 1.0 functioned as intended and the operation observed was as expected."

    Acceptance Criteria | Reported Device Performance
    Not explicitly stated in quantitative terms (e.g., accuracy +/- X mm, error rate). | Validation reported as within specification; the device functioned as intended and operation was as expected. No numerical results provided.

    K Number
    K151075
    Date Cleared
    2016-01-15

    (268 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    TAIHAO MEDICAL INC.

    Intended Use

    BR-ABVS Viewer 1.0 is intended as a standalone software device installed on a standalone windows-based computer to assist the physician to visualize any orientation of three-dimensional (3-D) breast ultrasound images generated by Siemens ACUSON S2000 Automated Breast Volume Scanner, ABVS (cleared in K081148). The software device is indicated for use to assist the physicians in their review and analysis of the 3-D breast ultrasound images generated by ABVS.

    Device Description

    BR-ABVS Viewer 1.0 is intended as a standalone software device installed on a standalone windows-based computer to assist the physician to visualize any orientation of three-dimensional (3-D) breast ultrasound images generated by Siemens ACUSON S2000 Automated Breast Volume Scanner, ABVS (cleared in K081148). The software also automatically generates reports to provide the sub-image and location information of markers annotated during the image review.

    AI/ML Overview

    This document describes the BR-ABVS Viewer 1.0, a standalone software device intended to assist physicians in visualizing and analyzing 3-D breast ultrasound images generated by the Siemens ACUSON S2000 Automated Breast Volume Scanner (ABVS).

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state quantitative acceptance criteria or a comprehensive table comparing multiple performance metrics. Instead, it focuses on demonstrating "substantial equivalence" to a predicate device (ABVS Workplace, K092067) in terms of image loading and overall functionality.

    Feature / Performance Metric | Acceptance Criteria (Implied) | Reported Device Performance
    3-D Image Loading | Ability to accurately load and display 3-D breast ultrasound images. | Demonstrated accurate 3-D image loading during comparison testing with the predicate device.
    Image Visualization | Ability to visualize 3-D image volumes in axial, sagittal, and coronal planes. | Provides visualization of any orientation of the 3-D image (axial, sagittal, coronal) according to the anatomical coordinate system.
    Image Size Accuracy | Accurate representation of image size. | Actual image size obtained by considering the spacings of the three axes specified in standard DICOM tags.
    Overall Functionality | Similar intended use, technological characteristics, and major functionality to the predicate. | "The intended use, technological characteristics, and major functionality of BR-ABVS Viewer 1.0 are similar to the predicate device..."
    Safety and Effectiveness | No new issues of safety or effectiveness introduced compared to the predicate. | "...no new issues of safety or effectiveness are introduced by using this device." "The performance data generated... demonstrates that our software device is as safe and effective, as compared to the predicate."
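The image-size and visualization rows above follow from standard DICOM geometry: physical extent is voxel count times spacing along each axis, and the axial, sagittal, and coronal views are orthogonal slices of the volume. A minimal NumPy sketch under those assumptions (volume shape, spacing values, and axis ordering are illustrative, not taken from the ABVS data):

```python
import numpy as np

# Hypothetical 3-D ultrasound volume: (slices, rows, columns).
volume = np.zeros((100, 200, 300), dtype=np.uint8)

# Per-axis spacing in mm, as carried by standard DICOM attributes
# such as SpacingBetweenSlices (0018,0088) and PixelSpacing (0028,0030).
spacing_mm = np.array([0.5, 0.2, 0.2])  # illustrative values

# Physical extent of the volume along each axis, in mm.
extent_mm = np.array(volume.shape) * spacing_mm

# Orthogonal planes through the volume center. Which index corresponds
# to which anatomical plane depends on the acquisition orientation.
axial    = volume[volume.shape[0] // 2, :, :]  # fixed slice index
coronal  = volume[:, volume.shape[1] // 2, :]  # fixed row index
sagittal = volume[:, :, volume.shape[2] // 2]  # fixed column index
```

Rendering any of these planes at the correct aspect ratio requires scaling by the two in-plane spacings, which is what "considering spacings of three axes" accomplishes.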

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size: "An actual clinical image generated by a Siemens ACUSON S2000 Automated Breast Volume Scanner in 2014 was used..." This implies a sample size of one clinical image.
    • Data Provenance: The image was "generated by a Siemens ACUSON S2000 Automated Breast Volume Scanner in 2014." The country of origin is not explicitly stated, but the manufacturer (TaiHao Medical Inc.) is based in Taiwan. The image is retrospective as it was generated prior to the study.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications of Those Experts

    The document does not mention the use of experts to establish ground truth for the test set or their qualifications. The performance study appears to be a technical comparison of image loading and visualization between the subject device and the predicate device, rather than a clinical accuracy study requiring expert human annotation/diagnosis as ground truth.

    4. Adjudication Method for the Test Set

    Not applicable. There is no indication of multiple readers, consensus, or adjudication in establishing ground truth for the single image used in the comparison. The comparison focused on whether the device could load and display the image as expected, similar to the predicate.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

    No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not conducted. The study described is a technical comparison of image loading and display, using a single image, between the subject device and a predicate device. There is no mention of human readers evaluating performance "with AI vs. without AI assistance."

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    The performance testing described is primarily a standalone assessment of the BR-ABVS Viewer 1.0's technical capabilities (image loading, visualization, size accuracy) compared to the predicate device. While the device is intended "to assist the physician," the described study itself evaluates the software's inherent ability to process and display images without explicitly measuring the human-in-the-loop performance or diagnostic accuracy.

    7. The Type of Ground Truth Used

    The "ground truth" for the described performance study appears to be the expected rendering and technical specifications of the 3-D breast ultrasound image when loaded and displayed. The comparison was against the functionality of a legally marketed predicate device (ABVS Workplace) for properties like 3-D image loading, multi-planar visualization, and accurate image sizing based on DICOM tags. It is not an "expert consensus," "pathology," or "outcomes data" type of ground truth.

    8. The Sample Size for the Training Set

    The document does not specify a sample size for a training set. This device is described as a "Viewer" and not as an AI/ML-based diagnostic algorithm that requires a training set in the typical sense for learning patterns from data. Its function is to visualize existing 3-D images.

    9. How the Ground Truth for the Training Set Was Established

    Not applicable. As described in point 8, there is no mention or indication of a training set as would be required for machine learning models. The device's primary function is image visualization based on known technical specifications (e.g., DICOM standards).

