Search Results

Found 83 results

510(k) Data Aggregation

    K Number: K252421
    Device Name: JLK-NCCT
    Date Cleared: 2026-03-24 (235 days)
    Regulation Number: 892.2080
    Age Range: N/A
    Reference & Predicate Devices: N/A
    Predicate For: N/A

    K Number: K253818
    Date Cleared: 2026-03-03 (95 days)
    Regulation Number: 892.2080
    Age Range: N/A
    Reference & Predicate Devices: N/A
    Predicate For: N/A

    K Number: K253578
    Date Cleared: 2026-02-26 (101 days)
    Regulation Number: 892.2080
    Age Range: 18 - 120
    Predicate For: N/A
    Intended Use

    BriefCase-Triage: CARE (Clinical AI Reasoning Engine) Multi-Triage CT for Pneumothorax; Pericardial effusion; Large aortic aneurysm; Shoulder fracture or dislocation is a radiological computer-aided triage and notification software device indicated for use in the analysis of contrast and non-contrast CT images of the chest, abdomen, or chest/abdomen, in adults or transitional adolescents aged 18 and older. The device is intended to assist hospital networks and appropriately trained medical specialists in workflow triage by flagging and communicating suspected positive findings, per study, of:

    • Pneumothorax;
    • Pericardial effusion;
    • Large aortic aneurysm;
    • Shoulder fracture or dislocation.

    The device flags cases with at least one suspected finding to assist with triage/prioritization of medical images. The device will provide a flag for each suspected finding within this study. A preview image will be provided for each distinct suspected finding.

    BriefCase-Triage uses a foundation model-based artificial intelligence (AI) system to analyze images and highlight cases with detected findings in parallel to the ongoing standard of care image interpretation. The user is presented with notifications for cases with suspected findings. Notifications include compressed preview images for each suspected finding that are meant for informational purposes only and not intended for diagnostic use beyond notification. The device does not alter the original medical images and is not intended to be used as a diagnostic device.

    The results of BriefCase-Triage are intended to be used in conjunction with other patient information and, based on the clinician's professional judgment, to assist with triage/prioritization of medical images. Notified clinicians are responsible for viewing full images per the standard of care.

    Device Description

    BriefCase-Triage: CARE Multi-Triage CT for Pneumothorax; Pericardial effusion; Large aortic aneurysm; Shoulder fracture or dislocation is a radiological computer-assisted triage and notification software device. The software is based on a programmed algorithmic component and is intended to run on a Linux-based server in a cloud environment.

    The BriefCase-Triage device receives images that match metadata criteria according to its predefined set of parameters. It then processes the series chronologically, identifying cases with suspected positive finding(s) and selecting key slice(s) for preview. The BriefCase-Triage output consists of a suspected-positive flag/notification for each finding detected in the analyzed study. Each finding includes a Representative Key Slice. The Key Slice(s) may be presented to users as compressed, low-quality, grayscale preview images with the date and time imprinted. The previews are not annotated and are captioned with the disclaimer "Not for diagnostic use, for prioritization only" per the device requirement for the Image Communication Platform (ICP).

    Presenting the users with worklist prioritization facilitates efficient triage by prompting the user to assess the relevant original images in the PACS. Thus, the suspect case receives attention earlier than would have been the case in the standard of care practice alone.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving device performance, based on the provided FDA 510(k) clearance letter:


    1. Acceptance Criteria and Reported Device Performance

    The core acceptance criteria are based on standalone performance metrics for each of the four clinical indications.

    Acceptance criteria (all four indications, default operating point): lower bound of the 95% CI for AUC > 0.95; Sensitivity > 80%; Specificity > 80%.

    Reported performance (default operating point):

    • Pneumothorax: AUC 98.9 (95% CI: 97.8-99.7); Sensitivity 94.8% (95% CI: 89.5%-97.9%); Specificity 95.9% (95% CI: 91.3%-98.5%)
    • Pericardial effusion: AUC 99.1 (95% CI: 98.0-99.8); Sensitivity 96.4% (95% CI: 91.7%-98.8%); Specificity 96.5% (95% CI: 92.0%-98.8%)
    • Large aortic aneurysm: AUC 99.5 (95% CI: 98.9-99.9); Sensitivity 97.1% (95% CI: 92.7%-99.2%); Specificity 97.2% (95% CI: 92.9%-99.2%)
    • Shoulder fracture or dislocation: AUC 99.9 (95% CI: 99.7-100); Sensitivity 97.8% (95% CI: 93.7%-99.5%); Specificity 99.3% (95% CI: 96.2%-100.0%)
    • Time-to-notification: criterion is comparability with the predicate device in time savings over standard of care. Subject device mean: 49.9 seconds (95% CI: 46.4-53.5); predicate device mean: 10.7 seconds (95% CI: 10.5-10.9). Although the subject device's time is longer, the submission concludes the devices are comparable in time savings relative to standard-of-care review.
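    The pass/fail logic in these criteria is mechanical: compute sensitivity and specificity from the confusion matrix and compare them, together with the AUC CI lower bound, against the fixed goals. A minimal Python sketch, using hypothetical counts rather than the study's actual data:

```python
# Sketch of the per-indication acceptance check described above.
# The TP/FP/FN/TN counts are hypothetical, not the study's data.

def sensitivity(tp: int, fn: int) -> float:
    """True-positive rate: fraction of positive cases that are flagged."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True-negative rate: fraction of negative cases left unflagged."""
    return tn / (tn + fp)

def meets_goals(se: float, sp: float, auc_ci_lower: float) -> bool:
    """Acceptance criteria: AUC 95% CI lower bound > 0.95, Se > 80%, Sp > 80%."""
    return auc_ci_lower > 0.95 and se > 0.80 and sp > 0.80

# Hypothetical counts for one indication (140 positives, 140 negatives):
se = sensitivity(tp=133, fn=7)   # 0.95
sp = specificity(tn=134, fp=6)   # ~0.957
print(meets_goals(se, sp, auc_ci_lower=0.978))  # True
```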

    Study Proving Device Meets Acceptance Criteria

    The study conducted was a retrospective, blinded, multicenter standalone performance analysis.

    2. Sample size used for the test set and the data provenance:
    * Sample Size: N = 280 for each of the 4 clinical indications; because a given scan could be evaluated for more than one indication, this amounted to 772 unique scans across all indications.
    * Data Provenance: The cases were collected from 6 US-based clinical sites, representing diverse geographic locations and site types. The data was "distinct in time or center from the cases used to train the algorithm," and "sequestered from algorithm development activities." This indicates a high level of independence for the test set. The data is retrospective.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
    * Number of Experts: Three (3)
    * Qualifications: Senior board-certified radiologists.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
    * The document states ground truth was "determined by three senior board-certified radiologists." It does not explicitly name an adjudication method such as 2+1 or 3+1, but the plural "radiologists" and the phrase "determined by" suggest a consensus or majority opinion among the three, rather than independent opinions without interaction.

    5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with versus without AI assistance:
    * No MRMC comparative effectiveness study was explicitly described. The study was a "standalone performance analysis" of the software itself. The comparison of "time-to-notification" with the predicate device implies a comparison of software performance characteristics related to triage, not a study of human readers with and without AI assistance.

    6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
    * Yes, a standalone performance study was done. The document explicitly refers to it as a "standalone performance analysis" to "evaluate the software's performance."

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
    * Expert Consensus: The ground truth was established by "three senior board-certified radiologists."

    8. The sample size for the training set:
    * The document does not specify the exact sample size for the training set. It only mentions that the "algorithm was trained during software development on images of the pathology."

    9. How the ground truth for the training set was established:
    * The ground truth for the training set was established through labeled ("tagged") images: "In that process, each image in the training dataset was tagged based on the presence of the critical finding." The method of tagging (e.g., by experts or automated) is not detailed, but the process implies labels were assigned to indicate the presence or absence of the target pathologies.


    K Number: K251195
    Device Name: BriefCase-Triage
    Date Cleared: 2026-01-27 (285 days)
    Regulation Number: 892.2080
    Age Range: 18 - 120
    Predicate For: N/A
    Intended Use

    BriefCase-Triage is a radiological computer-aided triage and notification software indicated for use in the analysis of contrast-enhanced CT images that include the brain, in adults or transitional adolescents aged 18 and older. The device is intended to assist hospital networks and appropriately trained medical specialists in workflow triage by flagging and communicating suspected positive cases of Brain Aneurysm (BA) findings that are 3.0 mm or larger.

    BriefCase-Triage uses an artificial intelligence algorithm to analyze images and flag suspect cases in parallel to the ongoing standard of care image interpretation. The user is presented with notifications for suspect cases. Notifications include compressed preview images that are meant for informational purposes only and not intended for diagnostic use beyond notification. The device does not alter the original medical image and is not intended to be used as a diagnostic device.

    The results of BriefCase-Triage are intended to be used in conjunction with other patient information and based on professional judgment, to assist with triage/prioritization of medical images. Notified clinicians are responsible for viewing full images per the standard of care.

    Device Description

    BriefCase-Triage is a radiological computer-assisted triage and notification software device.

    The software is based on a programmed algorithmic component and is intended to run on a Linux-based server in a cloud environment.

    BriefCase-Triage receives filtered DICOM images and processes them chronologically by running the algorithms on each series to detect suspected cases. Following the AI processing, the output of the algorithm analysis is transferred to an image review software (desktop application). When a suspected case is detected, the user receives a pop-up notification and is presented with a compressed, low-quality, grayscale preview image captioned "not for diagnostic use, for prioritization only." This preview is meant for informational purposes only, does not contain any marking of the findings, and is not intended for primary diagnosis beyond notification.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the BriefCase-Triage device, based on the provided FDA 510(k) clearance letter:

    Acceptance Criteria and Reported Device Performance

    Primary endpoints:

    • Sensitivity: performance goal 80%; reported 87.8% (95% CI: 83.1%-91.6%)
    • Specificity: performance goal 80%; reported 91.6% (95% CI: 87.9%-94.5%)

    Secondary endpoints:

    • Time-to-Notification (mean): goal is comparability with the predicate device; reported 44.8 seconds (95% CI: 41.4-48.2)
    • Negative Predictive Value (NPV): 98.9% (95% CI: 98.4%-99.2%)
    • Positive Predictive Value (PPV): 47.6% (95% CI: 38.4%-57.1%)
    • Positive Likelihood Ratio (PLR): 10.5 (95% CI: 7.2-15.3)
    • Negative Likelihood Ratio (NLR): 0.13 (95% CI: 0.1-0.19)

    Note on Additional Operating Points (AOPs): The device also met performance goals (80% sensitivity and specificity) for three additional operating points (AOP1, AOP2, AOP3) with slightly varying sensitivity/specificity trade-offs (e.g., AOP3: Sensitivity 86.2%, Specificity 93.6%).
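    The secondary metrics above are algebraically linked: PLR and NLR follow directly from sensitivity and specificity, while PPV and NPV additionally depend on the prevalence of the finding in the test set. A short Python sketch (the 8% prevalence is an assumed value chosen for illustration, not a figure from the submission):

```python
# How the secondary metrics relate to Se/Sp. PLR and NLR depend only on
# sensitivity and specificity; PPV and NPV also depend on prevalence,
# which is assumed (not taken from the submission) below.

def likelihood_ratios(se, sp):
    return se / (1 - sp), (1 - se) / sp   # (PLR, NLR)

def predictive_values(se, sp, prevalence):
    tp = se * prevalence              # expected fractions of all cases
    fp = (1 - sp) * (1 - prevalence)
    tn = sp * (1 - prevalence)
    fn = (1 - se) * prevalence
    return tp / (tp + fp), tn / (tn + fn)   # (PPV, NPV)

plr, nlr = likelihood_ratios(se=0.878, sp=0.916)
print(round(plr, 1), round(nlr, 2))   # 10.5 0.13 (matches the table)

# An assumed prevalence of ~8% reproduces the reported PPV/NPV:
ppv, npv = predictive_values(0.878, 0.916, prevalence=0.08)
print(round(ppv, 3), round(npv, 3))   # 0.476 0.989
```

    With Se = 87.8% and Sp = 91.6%, the computed ratios reproduce the reported PLR of 10.5 and NLR of 0.13, and a prevalence near 8% yields values close to the reported PPV/NPV, which illustrates why PPV is modest despite strong sensitivity and specificity.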

    Study Details

    1. Sample size used for the test set and the data provenance:

    • Sample Size: 544 cases
    • Data Provenance: Retrospective, blinded, multicenter study from 6 US-based clinical sites. The cases were distinct in time or center from those used for algorithm training.

    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Number of Experts: Three (3) senior board-certified radiologists.
    • Qualifications: "Senior board-certified radiologists." (Specific number of years of experience not detailed in the provided text).

    3. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

    • The text states the ground truth was "determined by three senior board-certified radiologists." It doesn't explicitly describe an adjudication method like "2+1" or "3+1." This implies a consensus approach where all three radiologists agreed, or a majority rule, but the exact mechanism for resolving discrepancies (if any) is not specified.

    4. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with versus without AI assistance:

    • No, an MRMC comparative effectiveness study was NOT done. The study's primary objective was to evaluate the standalone performance of the BriefCase-Triage software. The secondary endpoint compared the device's time-to-notification to that of the predicate device, but not its impact on human reader performance.

    5. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:

    • Yes, a standalone performance study was done. The primary endpoints (sensitivity and specificity) measure the algorithm's performance in identifying Brain Aneurysm (BA) findings.

    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • Expert Consensus: The ground truth was "determined by three senior board-certified radiologists."

    7. The sample size for the training set:

    • Not explicitly stated. The document mentions the algorithm was "trained during software development on images of the pathology" and that "critical findings were tagged in all CTs in the training data set." However, the specific sample size for this training data is not provided.

    8. How the ground truth for the training set was established:

    • Manually labeled ("tagged") images: The text states, "As is customary in the field of machine learning, deep learning algorithm development consisted of training on manually labeled ('tagged') images. In that process, critical findings were tagged in all CTs in the training data set." It does not specify who performed the tagging or their qualifications, nor the method of consensus if multiple taggers were involved.

    K Number: K252970
    Date Cleared: 2026-01-07 (112 days)
    Regulation Number: 892.2080
    Age Range: 18 - 120
    Predicate For: N/A
    Intended Use

    BriefCase-Triage: CARE (Clinical AI Reasoning Engine) Multi-Triage CT Body is a radiological computer-aided triage and notification software indicated for use in the analysis of contrast and non-contrast CT images of the chest, abdomen, and/or pelvis, in adults or transitional adolescents aged 18 and older. The device is intended to assist hospital networks and appropriately trained medical specialists in workflow triage by flagging and communicating suspected positive findings, per study, of:

    1. Diverticulitis;
    2. Abdominal-pelvic abscess;
    3. Appendicitis;
    4. Intestinal ischemia and/or pneumatosis;
    5. Obstructive renal stone;
    6. Small bowel obstruction;
    7. Large bowel obstruction;
    8. Spleen injury;
    9. Liver injury;
    10. Kidney injury;
    11. Pelvic fracture.

    The device flags cases with at least one suspected finding to assist with triage/prioritization of medical images. The device will provide a flag for each suspected finding within this study. A preview image will be provided for each distinct suspected finding.

    BriefCase-Triage uses a foundation model-based artificial intelligence (AI) system to analyze images and highlight cases with detected findings in parallel to the ongoing standard of care image interpretation. The user is presented with notifications for cases with suspected findings. Notifications include compressed preview images for each suspected finding that are meant for informational purposes only and not intended for diagnostic use beyond notification. The device does not alter the original medical images and is not intended to be used as a diagnostic device.

    The results of BriefCase-Triage are intended to be used in conjunction with other patient information and, based on the clinician's professional judgment, to assist with triage/prioritization of medical images. Notified clinicians are responsible for viewing full images per the standard of care.

    Device Description

    BriefCase-Triage is a radiological computer-assisted triage and notification software device. The software is based on a programmed algorithmic component and is intended to run on a Linux-based server in a cloud environment.

    BriefCase-Triage receives images that match metadata criteria according to BriefCase-Triage: CARE Multi-Triage CT Body's predefined set of parameters. It then processes the series chronologically, identifying cases with suspected positive finding(s) and selecting key slice(s) for preview. The BriefCase-Triage output consists of a suspected-positive flag/notification for each finding detected in the analyzed study. Each finding includes a Representative Key Slice. The Key Slice(s) may be presented to users as compressed, low-quality, grayscale preview images with the date and time imprinted. The previews are not annotated and are captioned with the disclaimer "Not for diagnostic use, for prioritization only" per the device requirement for the Image Communication Platform (ICP).
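    The metadata-gating step described above can be sketched as a simple predicate over DICOM header fields. The specific fields and accepted values below are illustrative assumptions, not Aidoc's actual predefined parameters:

```python
# Illustrative metadata gate for incoming series. The field names and
# accepted values are assumed examples, not the device's actual criteria.

CRITERIA = {
    "Modality": {"CT"},
    "BodyPartExamined": {"CHEST", "ABDOMEN", "PELVIS"},
}
MAX_SLICE_THICKNESS_MM = 5.0

def matches_criteria(header: dict) -> bool:
    """True if a series' header fields satisfy the intake criteria."""
    for field, allowed in CRITERIA.items():
        if header.get(field) not in allowed:
            return False
    # A missing thickness defaults to "inf" and is rejected.
    return float(header.get("SliceThickness", "inf")) <= MAX_SLICE_THICKNESS_MM

series = {"Modality": "CT", "BodyPartExamined": "ABDOMEN", "SliceThickness": "2.5"}
print(matches_criteria(series))  # True
```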

    AI/ML Overview

    Acceptance Criteria and Study Details for BriefCase-Triage: CARE Multi-triage CT Body

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria for the BriefCase-Triage: CARE Multi-triage CT Body device were primarily defined by performance goals for Area Under the Curve (AUC), Sensitivity (Se), and Specificity (Sp). The study demonstrated that the device met and exceeded these criteria for all 11 indications.

    Primary endpoints (ranges across the 11 indications):

    • Finding-level AUC: performance goal > 0.95; reported 0.974-1.00 (95% CIs: 0.952-1.00)
    • Sensitivity (Se): performance goal > 80%; reported 94.0%-99.3% (95% CIs: 88.9%-100%)
    • Specificity (Sp): performance goal > 80%; reported 95.7%-99.3% (95% CIs: 91%-100%)

    Secondary endpoint (comparable to predicate):

    • BriefCase time-to-notification: mean 45 seconds (95% CI: 43.4-46.5 seconds)

    Note: The reported AUC, Sensitivity, and Specificity values are ranges covering the minimum and maximum observed across the 11 indications in the pivotal study. Detailed values for each indication are provided in the source text.
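    Finding-level AUC, as reported above, can be computed for a standalone classifier as the Mann-Whitney pair statistic: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one, with ties counting half. A small sketch with made-up scores:

```python
# Finding-level AUC as the Mann-Whitney pair statistic: the probability
# that a random positive case outscores a random negative one (ties
# count half). All scores below are made up for illustration.

def auc(pos_scores, neg_scores):
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos_scores
        for n in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))

positives = [0.97, 0.91, 0.88, 0.60]   # model scores on positive cases
negatives = [0.40, 0.35, 0.62, 0.10]   # model scores on negative cases
print(auc(positives, negatives))  # 0.9375
```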

    2. Sample Size and Data Provenance for the Test Set

    • Sample Size: N = 280 for each of the 11 clinical indications; because a given scan could be evaluated for more than one indication, 1769 unique scans were included across all device indications.
    • Data Provenance: The data was collected from 6 US-based clinical sites. It was retrospective and the cases were distinct in time or center from the cases used to train the algorithm.

    3. Number of Experts and Qualifications for Ground Truth

    • Number of Experts: Three senior board-certified radiologists.
    • Qualifications: The document specifically states "senior board-certified radiologists." No further details on years of experience were provided.

    4. Adjudication Method for the Test Set

    The adjudication method used to establish ground truth was based on the "consensus" of the three senior board-certified radiologists ("as determined by three senior board-certified radiologists"). This implies a consensus-based adjudication, but the specific mechanics (e.g., majority vote like 2+1, or requiring all three to agree) are not explicitly detailed.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No MRMC study comparing human readers with AI assistance versus without AI assistance was reported in this document. The study described is a standalone performance analysis of the algorithm.

    6. Standalone Performance Study

    Yes, a standalone performance study was done. The document states: "Aidoc conducted a retrospective, blinded, multicenter study with the Briefcase-Triage software to evaluate the standalone performance analysis individually for each of the 11 clinical indications supported by BriefCase-Triage: CARE Multi-triage CT Body device."

    7. Type of Ground Truth Used

    The ground truth was established by expert consensus of three senior board-certified radiologists.

    8. Sample Size for the Training Set

    The sample size for the training set is not explicitly provided in the given text. It is only mentioned that "the algorithm was trained during software development on images of the pathology."

    9. How the Ground Truth for the Training Set was Established

    The ground truth for the training set was established through labeled ("tagged") images. The document states: "As is customary in the field of machine learning, deep learning algorithm development consisted of training on labeled ("tagged") images. In that process, each image in the training dataset was tagged based on the presence of the critical finding." The specific method or expert involvement in this tagging process is not detailed, but it implies human expert labeling.


    K Number: K250694
    Date Cleared: 2025-11-25 (263 days)
    Regulation Number: 892.2080
    Age Range: N/A
    Reference & Predicate Devices: N/A
    Predicate For: N/A

    K Number: K252366
    Date Cleared: 2025-11-24 (117 days)
    Regulation Number: 892.2080
    Age Range: All
    Predicate For: N/A
    Intended Use

    a2z-Unified-Triage is a radiological computer-aided triage and notification software indicated for use in the analysis of abdominal/pelvic CT images in adults aged 22 and older. The device is intended to assist hospital networks and appropriately trained medical specialists in workflow triage by flagging and communicating suspected positive cases of the 7 specified abdominopelvic findings: Acute Cholecystitis, Acute Pancreatitis, Unruptured Abdominal Aortic Aneurysm, Acute Diverticulitis, Free Air, Hydronephrosis, and Small Bowel Obstruction. These findings are intended to be used together as one device. The device supports both cloud-based and on-premises deployment, with integration either directly with healthcare facility systems or through third-party healthcare technology platforms.

    a2z-Unified-Triage uses an artificial intelligence algorithm to analyze images and flag cases with detected findings in parallel to the ongoing standard of care image interpretation. The device provides analysis results that enable client systems to generate notifications for cases with suspected findings. These results can include DICOM instance UIDs for key images, which are meant for informational purposes only and not intended for primary diagnosis beyond notification. The device does not alter the original medical image and is not intended to be used as a diagnostic device.

    The results of a2z-Unified-Triage are intended to be used in conjunction with other patient information and based on clinicians' professional judgment, to assist with triage/prioritization of medical images. Notified clinicians are responsible for viewing full images per the standard of care.

    Device Description

    a2z-Unified-Triage is a radiological computer-assisted triage and notification software device. The software consists of an algorithmic component that supports both cloud-based and on-premises deployment on standard server hardware. The device processes abdomen/pelvis CT images from clinical imaging systems, analyzing them using artificial intelligence algorithms to detect suspected cases of 7 abdominopelvic conditions: Acute Cholecystitis, Acute Pancreatitis, Unruptured Abdominal Aortic Aneurysm, Acute Diverticulitis, Free Air, Hydronephrosis, and Small Bowel Obstruction.

    Following the AI processing, the analysis results are returned to the client system for worklist prioritization. When a suspected case is detected, the software provides analysis results that enable the client system to generate appropriate notifications. These results can include DICOM instance UIDs for key images, which are for informational purposes only, do not contain any marking of the findings, and are not intended for primary diagnosis beyond notification.

    Integration with clinical imaging systems facilitates efficient triage by enabling prioritization of suspect cases for review of the relevant original images in the PACS. Thus, the suspect case receives attention earlier than would have been the case in the standard of care practice alone.

    AI/ML Overview

    Here's a detailed summary of the acceptance criteria and the study proving the device meets them, based on the provided FDA clearance letter:

    Acceptance Criteria and Device Performance

    1. Table of Acceptance Criteria and Reported Device Performance

    a2z-Unified-Triage differentiates between two types of findings for regulatory purposes: QAS (Qualitative, Automated, and Subjective) and QFM (Quantitative, Functional, and Measurable).

    QFM findings (acceptance criterion: AUC > 0.95):

    • Acute Cholecystitis: AUC 0.985 [0.972-0.998]. Operating points: High Sensitivity Se 96.1% [89.2-98.7%], Sp 89.3% [86.6-91.5%]; Sensitivity Biased and Balanced Se 92.2% [84.0-96.4%], Sp 95.8% [93.9-97.2%]
    • Acute Pancreatitis: AUC 0.994 [0.985-1.000]. Operating points: High Sensitivity Se 98.0% [92.9-99.4%], Sp 87.8% [84.9-90.3%]; Sensitivity Biased and Balanced Se 98.0% [92.9-99.4%], Sp 97.0% [95.3-98.1%]; High Specificity Se 92.9% [86.1-96.5%], Sp 99.8% [99.0-100.0%]
    • Unruptured AAA: AUC 0.995 [0.991-0.999]. Operating points: High Sensitivity Se 100.0% [95.2-100.0%], Sp 86.3% [83.3-88.8%]; Sensitivity Biased Se 97.4% [90.9-99.3%], Sp 95.8% [93.9-97.2%]; Balanced Se 97.4% [90.9-99.3%], Sp 97.5% [95.9-98.5%]
    • Acute Diverticulitis: AUC 0.995 [0.990-1.000]. Operating points: High Sensitivity Se 98.7% [92.9-99.8%], Sp 89.3% [86.6-91.5%]; Sensitivity Biased and Balanced Se 97.4% [90.9-99.3%], Sp 96.8% [95.1-98.0%]; High Specificity Se 94.7% [87.2-97.9%], Sp 98.7% [97.4-99.3%]
    • Hydronephrosis: AUC 0.976 [0.960-0.991]. Operating point: High Sensitivity Se 89.7% [82.1-94.3%], Sp 92.9% [90.5-94.7%]

    QAS findings (acceptance criteria: Sensitivity > 80% and Specificity > 80%):

    • Small Bowel Obstruction: High Sensitivity Se 94.9% [88.7-97.8%], Sp 91.7% [89.1-93.7%]; Sensitivity Biased Se 91.9% [84.9-95.8%], Sp 96.0% [94.1-97.3%]; Balanced Se 88.9% [81.2-93.7%], Sp 98.1% [96.6-98.9%]
    • Free Air: Balanced Se 89.3% [82.2-93.8%], Sp 88.6% [85.7-91.0%]; High Specificity Se 88.4% [81.1-93.1%], Sp 90.8% [88.1-92.9%]
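    The named operating points above (High Sensitivity, Sensitivity Biased, Balanced, High Specificity) are simply different decision thresholds applied to the same model's output score. A sketch of choosing a "High Sensitivity"-style threshold from scored validation cases (all scores and labels below are invented):

```python
# Operating points are decision thresholds on one model's output score.
# This picks the highest threshold whose sensitivity on a scored set
# meets a target (a "High Sensitivity"-style point). Data is invented.

def pick_threshold(scores, labels, se_target):
    """Highest score threshold with sensitivity >= se_target, else None."""
    positives = sum(labels)
    for t in sorted(set(scores), reverse=True):   # scan high to low
        tp = sum(1 for s, y in zip(scores, labels) if y and s >= t)
        if tp / positives >= se_target:
            return t
    return None

scores = [0.95, 0.90, 0.70, 0.65, 0.40, 0.20]
labels = [1,    1,    1,    0,    1,    0]    # 1 = finding present
print(pick_threshold(scores, labels, se_target=0.75))  # 0.7
```

    Lowering the threshold trades specificity for sensitivity, which is exactly the pattern visible across the operating points in the table.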

    Turnaround Time Acceptance Criteria and Performance:

    • Triage turnaround time: acceptance criterion (implied by predicate) mean < 81.6 seconds (the predicate's mean); reported mean 58.39 seconds (95% CI: 56.11-60.68), median 55.02 seconds, 95th percentile 90.36 seconds.

    2. Sample size used for the test set and the data provenance

    • Test Set Sample Size: 675 cases from 643 unique patients (after excluding 3 cases due to quality control failures from an initial 678 cases).
    • Data Provenance: The data was sourced from multiple clinical sites within the United States. Specific states mentioned are New York (45.2%), Kansas (21.2%), Missouri (18.4%), Texas (15.0%), and Nebraska (0.3%). The study evaluated against clinical standards consistent with U.S. practice patterns. The data appears to be retrospective, as it was used for development and testing after collection.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • Number of Experts: A minimum of two U.S. board-certified radiologists, with a third U.S. board-certified expert adjudicator for discordant cases.
    • Qualifications: All experts were U.S. board-certified radiologists. The third adjudicator was specifically fellowship-trained in body imaging.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    • Adjudication Method: 2+1 methodology. Each case was independently reviewed by two U.S. board-certified radiologists. If the two initial readers disagreed, a third U.S. board-certified expert adjudicator (fellowship-trained in body imaging) provided the tie-breaking determination.
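    The 2+1 scheme described above reduces to a small control flow: concordant reads stand, and the adjudicator is consulted only on disagreement. A minimal sketch with boolean per-case reads (all readers here are hypothetical):

```python
# The 2+1 adjudication flow described above: two independent reads,
# with a third expert consulted only on disagreement. Reads are
# booleans (finding present / absent); all readers are hypothetical.

def adjudicate_2plus1(read_a: bool, read_b: bool, adjudicator) -> bool:
    """Ground-truth label under a 2+1 adjudication scheme."""
    if read_a == read_b:
        return read_a            # concordant reads stand as-is
    return adjudicator()         # tie-breaking third read

adjudications = []
def tie_breaker():
    adjudications.append("case sent to adjudicator")
    return True

print(adjudicate_2plus1(True, True, tie_breaker))   # True (no adjudication)
print(adjudicate_2plus1(True, False, tie_breaker))  # True (via adjudicator)
print(len(adjudications))                           # 1
```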

    5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with versus without AI assistance

    • The provided document does not indicate that an MRMC comparative effectiveness study was performed or submitted for this clearance. The study described is a standalone performance assessment of the algorithm itself against ground truth.

    6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    • Yes, a standalone performance assessment was done. The document explicitly states: "A standalone performance assessment was performed for a2z-Unified-Triage to validate the accuracy of detecting the 7 findings against a reference standard established by U.S. board-certified radiologists."

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    • Type of Ground Truth: Expert consensus, specifically a 2+1 consensus of U.S. board-certified radiologists, with the third adjudicator being fellowship-trained in body imaging.

    8. The sample size for the training set

    • The document states, "The algorithms were developed on an extensive dataset of abdomen/pelvis CT studies from multiple clinical sites." However, a specific numerical sample size for the training set is not provided. It only mentions that strict protocols ensured complete independence between development and testing datasets (mutually exclusive patients).
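The patient-level independence described above (mutually exclusive patients between development and testing) is commonly enforced with a grouped split, so that every case from a given patient lands entirely on one side. This is a minimal standard-library sketch with made-up patient/case IDs and a 20% test fraction, not the submission's actual procedure.

```python
# Sketch: a patient-level (grouped) train/test split. All IDs and the
# test fraction are illustrative assumptions.
import random

cases = [("pt1", "caseA"), ("pt1", "caseB"), ("pt2", "caseC"),
         ("pt3", "caseD"), ("pt4", "caseE"), ("pt4", "caseF")]

def patient_level_split(cases, test_frac=0.2, seed=0):
    patients = sorted({pid for pid, _ in cases})
    rng = random.Random(seed)
    rng.shuffle(patients)
    n_test = max(1, round(len(patients) * test_frac))
    test_patients = set(patients[:n_test])
    dev = [c for c in cases if c[0] not in test_patients]
    test = [c for c in cases if c[0] in test_patients]
    return dev, test

dev, test = patient_level_split(cases)
# No patient appears on both sides of the split.
assert not ({p for p, _ in dev} & {p for p, _ in test})
```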

    9. How the ground truth for the training set was established

    • The document does not explicitly detail how the ground truth for the training set was established. It only describes the ground truth establishment for the test set (2+1 radiologist consensus). It states that the algorithms were developed on an "extensive dataset" and implies internal processes for data collection and annotation during development.

    K Number
    K253265

    Device Name
    BriefCase-Triage
    Date Cleared
    2025-11-06

    (38 days)

    Product Code
    Regulation Number
    892.2080
    Age Range
    All
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    BriefCase-Triage is a radiological computer aided triage and notification software indicated for use in the analysis of abdominal CT images in adults or transitional adolescents aged 18 and older. The device is intended to assist hospital networks and appropriately trained medical specialists in workflow triage by flagging and communication of suspected positive findings of Intra-abdominal free gas (IFG) pathologies.

    BriefCase-Triage uses an artificial intelligence algorithm to analyze images and highlight cases with the detected findings in parallel to the ongoing standard of care image interpretation. The user is presented with notifications for cases with suspected findings. Notifications include compressed preview images that are meant for informational purposes only and not intended for diagnostic use beyond notification. The device does not alter the original medical image and is not intended to be used as a diagnostic device.

    The results of BriefCase-Triage are intended to be used in conjunction with other patient information and based on their professional judgment, to assist with triage/prioritization of medical images. Notified clinicians are responsible for viewing full images per the standard of care.

    Device Description

    BriefCase-Triage is a radiological computer-assisted triage and notification software device.

    The software is based on a programmed algorithmic component and is intended to run on a Linux-based server in a cloud environment.

    BriefCase-Triage receives filtered DICOM images and processes them chronologically by running the algorithms on each series to detect suspected cases. Following the AI processing, the output of the algorithm analysis is transferred to an image review software (desktop application). When a suspected case is detected, the user receives a pop-up notification and is presented with a compressed, low-quality, grayscale image captioned "not for diagnostic use, for prioritization only," displayed as a preview function. This preview is meant for informational purposes only, does not contain any marking of the findings, and is not intended for primary diagnosis beyond notification.

    Presenting the users with worklist prioritization facilitates efficient triage by prompting the user to assess the relevant original images in the PACS. Thus, the suspect case receives attention earlier than would have been the case in the standard of care practice alone.

    The algorithm was trained during software development on images of the pathology. As is customary in the field of machine learning, deep learning algorithm development consisted of training on labeled ("tagged") images. In that process, each image in the training dataset was tagged based on the presence of the critical finding.

    AI/ML Overview

    Here's a detailed breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided FDA 510(k) clearance letter for BriefCase-Triage:

    Acceptance Criteria and Reported Device Performance

    Parameter | Acceptance Criteria (Performance Goal) | Reported Device Performance
    Primary endpoints
    • Sensitivity | 80% | 94.2% (95% CI: 89.6%, 97.2%)
    • Specificity | 80% | 94.6% (95% CI: 90.7%, 97.2%)
    Secondary endpoint
    • Time-to-notification (subject device) | Comparability with predicate (time savings to standard of care review) | 10.4 seconds (95% CI: 10.1-10.8)
    • Time-to-notification (predicate device, for comparison) | N/A | 264.4 seconds (95% CI: 222-300)

    Note: The document explicitly states that the primary endpoints were "sensitivity and specificity with an 80% performance goal." The reported performance for both sensitivity and specificity (94.2% and 94.6% respectively) significantly exceeds this 80% goal. The time-to-notification for the subject device is significantly faster than the predicate, demonstrating improved "time savings to the standard of care review."
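Binomial confidence intervals like those reported above are commonly computed with an estimator such as the Wilson score interval; the submission does not state which estimator was used, and the counts below are illustrative only (chosen so the point estimate lands near 94.2%).

```python
# Sketch: Wilson score 95% interval for a binomial proportion, one
# common way sensitivity/specificity CIs are computed. Counts are
# illustrative, not the study's actual true-positive tally.
import math

def wilson_ci(successes: int, n: int, z: float = 1.96):
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

lo, hi = wilson_ci(163, 173)  # hypothetical 163/173 true positives ~ 94.2%
print(f"{163/173:.1%} (95% CI: {lo:.1%}, {hi:.1%})")
```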

    Study Information

    1. Sample Size Used for the Test Set and Data Provenance:
    * Sample Size: 394 cases
    * Data Provenance:
    * Country of Origin: US (6 clinical sites)
    * Retrospective/Prospective: Retrospective
    * Additional Detail: Cases were distinct in time or center from the training data.

    2. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:
    * Number of Experts: 3
    * Qualifications: Senior board-certified radiologists

    3. Adjudication Method for the Test Set:
    * The document states "as determined by three senior board-certified radiologists." While it doesn't explicitly state "2+1" or "3+1," this implies a consensus-based approach among the three experts. Without further detail, it's reasonable to infer a consensus was reached, or a specific rule for disagreement (e.g., majority) was applied.

    4. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:
    * No, a multi-reader multi-case (MRMC) comparative effectiveness study was not conducted to assess how much human readers improve with AI vs. without AI assistance. The study focuses purely on the standalone performance of the AI algorithm.

    5. Standalone Performance Study (Algorithm Only):
    * Yes, a standalone study was performed. The "Pivotal Study Summary" describes evaluating "the software's performance to the ground truth," indicating a standalone performance assessment of the algorithm without human-in-the-loop performance measurement.

    6. Type of Ground Truth Used:
    * Expert consensus (as determined by three senior board-certified radiologists).

    7. Sample Size for the Training Set:
    * The document states, "The algorithm was trained during software development on images of the pathology." However, it does not provide a specific sample size for the training set.

    8. How the Ground Truth for the Training Set Was Established:
    * "each image in the training dataset was tagged based on the presence of the critical finding." This indicates that human experts (or a similar method to the test set ground truth) labeled the images in the training set for the presence of the pathology. However, the specific number and qualifications of these experts are not explicitly stated for the training set.


    K Number
    K251610

    Device Name
    qER-CTA (v1.0)
    Date Cleared
    2025-09-08

    (104 days)

    Product Code
    Regulation Number
    892.2080
    Age Range
    All
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    qER-CTA is a notification-only, parallel workflow tool for use by hospital networks and trained clinicians to identify and communicate images of specific patients to a specialist, independent of the standard of care workflow. qER-CTA uses a deep learning algorithm to analyze images for findings suggestive of a pre-specified clinical condition and to notify an appropriate medical specialist in parallel to standard of care image interpretation. Identification of suspected findings is not for diagnostic use beyond notification. Specifically, the device analyses CT angiogram images of the brain acquired in the acute setting and sends notifications to a neurovascular specialist that a suspected large vessel occlusion has been identified, recommending review of those images. Images can be previewed through a mobile application. qER-CTA is intended to analyze the internal carotid artery (ICA) and M1 segment of the middle cerebral artery (MCA) for LVOs on CTA scans of adults (≥ 22 years of age). Images previewed through the mobile application are compressed and for informational purposes only, not intended for diagnostic use beyond notification. Notified clinicians are responsible for viewing non-compressed images on a diagnostic viewer, conducting appropriate patient evaluation, and engaging in relevant discussions with the treating physician before making care-related decisions or requests. qER-CTA is limited to the analysis of imaging data and should not be used as a substitute for full patient evaluation or relied upon to make or confirm a diagnosis.

    Device Description

    qER-CTA is a radiological computer-aided triage and notification (CADt) software designed to assist trained clinicians and radiologists in analyzing and triaging head CTA scans for suspected LVO (Large Vessel Occlusion) in the anterior circulation.

    The software uses a deep learning algorithm to analyze CTA images and provide a case-level output available in the PACS or workstation for worklist prioritization or triage. It does not alter the original image, change the worklist order, or send proactive alerts directly to the end user. Instead, the end user can sort the worklist based on the passive notification flag. Images can also be previewed through a mobile application. There are two alternative workflows users can choose from when engaging with qER-CTA.

    1. De-identified CTA scans are sent to qER-CTA via transmission functions built into the user's PACS or workstation. Results are pushed back to the user's PACS or other user-specified radiology software database once the processing is complete.

    2. For client systems that do not have de-identification and re-identification capabilities, qER-CTA interacts with an on-premises gateway rather than directly with the PACS.

    qER-CTA is not intended to direct attention to specific portions of the image, rule out target conditions, or be used as a standalone tool for clinical decision-making. It operates as a parallel workflow tool, independent of the standard of care, to assist in identifying and communicating suspected LVO cases to appropriate medical specialists for further review. Images previewed through the mobile application are compressed and are for informational purposes only.

    AI/ML Overview

    1. Acceptance Criteria and Reported Device Performance

    Abnormality | Acceptance Criteria | Reported Device Performance (95% CI)
    Large Vessel Occlusion | Not explicitly stated in the provided text; likely compared against predicate | AUC: 0.959 (0.943-0.975); Sensitivity: 91.35% (87.54%-94.07%); Specificity: 91.86% (88.18%-94.47%)
    Time to Notification | Not explicitly stated in the provided text; likely compared against predicate | Mean: 6.36 minutes (6.06-6.66)
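An AUC like the one reported above can be read as the Mann-Whitney probability that a randomly chosen positive case receives a higher algorithm score than a randomly chosen negative case. The sketch below uses made-up scores, not study data.

```python
# Sketch: AUC computed directly from case-level scores via the
# Mann-Whitney U statistic (ties count as half a win).
def auc(pos_scores, neg_scores):
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

pos = [0.91, 0.84, 0.77, 0.66, 0.95]  # illustrative LVO-positive scores
neg = [0.12, 0.35, 0.50, 0.70, 0.05]  # illustrative LVO-negative scores
print(f"AUC = {auc(pos, neg):.3f}")
```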

    2. Sample Size for Test Set and Data Provenance

    • Sample Size: 584 head CTA scans (289 LVO, 295 non-LVO).
    • Data Provenance: Not explicitly stated in the provided text (e.g., country of origin, retrospective or prospective).

    3. Number of Experts and Qualifications for Ground Truth Establishment (Test Set)

    • Number of Experts: Three.
    • Qualifications: U.S. board-certified neuroradiologists with at least 10 years of experience.

    4. Adjudication Method for the Test Set

    • The adjudication method is not explicitly mentioned. It states that "Three U.S. board certified neuroradiologists with at least 10 years of experience did the ground truthing," implying a consensus or majority vote might have been used, but specific details (e.g., 2+1, 3+1) are not provided.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • No, a MRMC comparative effectiveness study was not done. The document explicitly states that the performance was assessed using a "standalone study."

    6. Standalone Performance (Algorithm Only) Study

    • Yes, a standalone study was done. The performance of the qER-CTA device was assessed using a standalone study, evaluating its classification of large vessel occlusion.

    7. Type of Ground Truth Used

    • Expert Consensus: The ground truth was established by three U.S. board-certified neuroradiologists with at least 10 years of experience.

    8. Sample Size for the Training Set

    • The sample size for the training set is not provided in the given text.

    9. How Ground Truth for the Training Set Was Established

    • How the ground truth for the training set was established is not provided in the given text. The document only mentions the ground truthing for the clinical performance testing (test set).

    K Number
    K251533

    Manufacturer
    Date Cleared
    2025-09-04

    (108 days)

    Product Code
    Regulation Number
    892.2080
    Age Range
    All
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    Rapid OH is a radiological computer aided triage and notification software indicated for suspicion of Obstructive Hydrocephalus (OH) in non-enhanced CT head images of adult patients. The device is intended to assist trained clinicians in workflow prioritization triage by providing notification of suspected findings in head CT images.

    Rapid OH uses an artificial intelligence algorithm to analyze images and highlight cases with suspected OH on a server or standalone desktop application in parallel to the ongoing standard of care image interpretation. The user is presented with notifications for cases with suspected OH findings. Notifications include compressed preview images, that are meant for informational purposes only and not intended for diagnostic use beyond notification. The device does not alter the original medical image and is not intended to be used as a diagnostic device.

    The results of Rapid OH are intended to be used in conjunction with other patient information and based on professional judgment, to assist with triage/prioritization of medical images. Notified clinicians are responsible for viewing full images per the standard of care.

    Contraindications/Limitations/Exclusions:

    • Rapid OH is intended for use for adult patients.
    • Input data image series containing excessive patient motion or metal implants may impact module analysis accuracy, robustness and quality.
    • Ventriculoperitoneal shunts are contraindicated

    Exclusions:

    • Series with missing slices or improperly ordered slices
    • Data acquired at X-ray tube voltage < 100 kVp or > 140 kVp
    • Data not representing human head or head/neck anatomical regions
    Device Description

    Rapid OH software device is a radiological computer-aided triage and notification software device using AI/ML. The Rapid OH device is a non-contrast CT (NCCT) processing module which operates within the integrated Rapid Platform to provide a notification of suspected findings of obstructive hydrocephalus (OH). The Rapid OH device is SaMD which analyzes input NCCT images that are provided in DICOM format for notification of suspected findings for workflow prioritization.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the Rapid OH device, based on the provided FDA 510(k) clearance letter:

    1. Table of Acceptance Criteria and Reported Device Performance

    Metric | Acceptance Criteria | Reported Device Performance
    Primary endpoint: Sensitivity (Se) | Not explicitly stated as a separate acceptance criterion; the reported performance met the statistical confidence interval | 89.5% (95% CI: 0.837-0.935)
    Primary endpoint: Specificity (Sp) | Not explicitly stated as a separate acceptance criterion; the reported performance met the statistical confidence interval | 97.6% (95% CI: 0.940-0.991)
    Secondary endpoint: Time to Notification | Not explicitly stated as a numerical acceptance criterion; the reported performance indicates efficiency | 30.3 seconds (range 10.5-55.5 seconds)

    Note: The document states "Standalone performance primary endpoint passed with sensitivity (Se) of 89.5% (95% CI:0.837-0.935) and specificity (Sp) of 97.6% (95% CI:0.940-0.991)". While explicit numerical acceptance criteria for sensitivity and specificity are not provided, the "passed" statement implies that the reported performance fell within pre-defined acceptable ranges or met a statistical hypothesis.

    2. Sample Size for the Test Set and Data Provenance

    • Sample Size for Test Set: 320 cases
    • Data Provenance: The document mentions diversity in demographics (M: 45%, F: 54%), sites, scanner manufacturers (GE, Philips, Siemens, Toshiba), and confounders (ICH, ischemic stroke, tumor, cyst, aqueductal stenosis, mass effect, brain atrophy, and communicating hydrocephalus). While specific countries of origin are not explicitly stated, the mention of multiple manufacturers and multiple sites (74 sites for algorithm development) suggests a diverse, likely multi-site dataset, although this is not definitively confirmed for the test set itself. The dataset appears to be retrospective, as it consists of existing cases used for algorithm development and validation.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    • Number of Experts: 3 experts (implied from "Truthing was established using 2:3 experts.")
    • Qualifications of Experts: Not explicitly stated in the provided text. They are referred to as "experts." In regulatory contexts, these would typically be radiologists or neuro-radiologists with significant experience in interpreting head CTs.

    4. Adjudication Method for the Test Set

    • Adjudication Method: "2:3 experts." This means that ground truth was established by agreement from at least 2 out of 3 experts.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • MRMC Study Done: No, an MRMC comparative effectiveness study was not explicitly mentioned for this device. The study described is a standalone performance validation of the algorithm.

    6. Standalone Performance (Algorithm Only without Human-in-the-Loop) Done

    • Standalone Performance Done: Yes, "Final device validation included standalone performance validation. This performance validation testing demonstrated the Rapid OH device provides accurate representation of key processing parameters under a range of clinically relevant conditions associated with the intended use of the software." The reported sensitivity and specificity values are for this standalone performance.

    7. Type of Ground Truth Used

    • Type of Ground Truth: Expert consensus ("Truthing was established using 2:3 experts.")

    8. Sample Size for the Training Set

    • Sample Size for Training Set: 3340 cases (This refers to "Algorithm development" which encompasses training and likely internal validation/development sets).

    9. How the Ground Truth for the Training Set Was Established

    • How Ground Truth Was Established (Training Set): The document states "Algorithm development was performed using 3340 cases... Truthing was established using 2:3 experts." This implies that the same expert consensus method (2 out of 3 experts) used for the test set was also used to establish ground truth for the cases used in algorithm development (which includes the training set).

    Page 1 of 9