Search Results

Found 4402 results

510(k) Data Aggregation

    K Number: K254114
    Date Cleared: 2026-01-18 (30 days)
    Regulation Number: 870.2900
    Age Range: N/A
    Reference & Predicate Devices: N/A
    Predicate For: N/A
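
    Each K number in this listing can be cross-checked against FDA's public records. Below is a minimal sketch of such a live validation using the openFDA 510(k) endpoint; the endpoint URL, field names, and the lookup approach itself are assumptions for illustration, not documented behavior of this tool.

```python
# Hypothetical illustration: cross-check a cleared 510(k) record against openFDA.
# The endpoint and field names (k_number, device_name, decision_date) are assumptions
# for this sketch; verify them against the openFDA device/510k documentation.
import requests

def validate_k_number(k_number: str) -> dict:
    """Fetch the openFDA record for a 510(k) number, or raise if none is found."""
    resp = requests.get(
        "https://api.fda.gov/device/510k.json",
        params={"search": f'k_number:"{k_number}"', "limit": 1},
        timeout=10,
    )
    resp.raise_for_status()
    results = resp.json().get("results", [])
    if not results:
        raise LookupError(f"No openFDA record found for {k_number}")
    return results[0]

if __name__ == "__main__":
    record = validate_k_number("K254114")
    print(record.get("device_name"), record.get("decision_date"))
```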

    K Number: K251831
    Date Cleared: 2026-01-15 (216 days)
    Regulation Number: 870.4100
    Age Range: N/A
    Reference & Predicate Devices: N/A
    Predicate For: N/A

    K Number: K253023
    Device Name: BIOGRAPH One
    Date Cleared: 2026-01-15 (118 days)
    Regulation Number: 892.1200
    Age Range: N/A
    Reference & Predicate Devices: N/A
    Predicate For: N/A

    K Number: K251214
    Date Cleared: 2026-01-13 (270 days)
    Regulation Number: 888.3044
    Age Range: N/A
    Reference & Predicate Devices: N/A
    Predicate For: N/A

    K Number: K253248
    Date Cleared: 2026-01-13 (106 days)
    Regulation Number: 872.3250
    Panel: Dental
    Age Range: N/A
    Reference & Predicate Devices: N/A
    Predicate For: N/A

    K Number: K251988
    Date Cleared: 2026-01-12 (199 days)
    Regulation Number: 878.4400
    Age Range: N/A
    Reference & Predicate Devices: N/A
    Predicate For: N/A

    K Number: K252913
    Device Name: Break Wave
    Date Cleared: 2026-01-12 (122 days)
    Regulation Number: 876.5990
    Age Range: N/A
    Reference & Predicate Devices: N/A
    Predicate For: N/A

    K Number: K252970
    Date Cleared: 2026-01-07 (112 days)
    Regulation Number: 892.2080
    Age Range: N/A
    Reference & Predicate Devices: N/A
    Predicate For: N/A

    K Number: K251019
    Date Cleared: 2025-12-22 (264 days)
    Regulation Number: 876.5010
    Age Range: All
    Predicate For: N/A

    Intended Use

    The catheter is designed for percutaneous drainage of abscess fluid, cysts, gall bladder, nephrostomy, urinary, and other fluids.

    Device Description

    The BT-PD1-SERIES-G / BT-PD1-SERIES(MN)-G / BT-PD1-SERIES-G(+FSC) / BT-PDS-SERIES-G / BT-PDS-SERIES(MN)-G / BT-PDS-SERIES(B)-RB-G Percutaneous Drainage Catheter with hydrophilic coating is used for drainage of abscesses and fluid collections. The catheter is made from a soft, biocompatible plastic that is radiopaque under X-ray. The distal end of the catheter contains a pigtail or mini-pigtail closed loop and drainage holes.

    AI/ML Overview

    N/A


    K Number: K250959
    Device Name: BioticsAI
    Date Cleared: 2025-12-22 (266 days)
    Regulation Number: 892.1550
    Age Range: 18 - 44
    Predicate For: N/A
    Intended Use

    BioticsAI is intended to analyze fetal ultrasound images and frames (DICOM instances) using machine learning techniques to automatically detect views, detect anatomical structures within the views, and facilitate verification of the quality criteria and characteristics of the views.

    The device is intended for use by Healthcare Professionals as a concurrent reading aid during and after the acquisition and interpretation of fetal ultrasound images.

    Device Description

    BioticsAI is software used by OB/GYN care centers for prenatal ultrasound review and reporting. It uses artificial intelligence (AI) to automatically annotate ultrasound images with fetal anatomical planes and structures, facilitating ultrasound review and report generation for fetal anatomical scans. It serves as a concurrent reading aid for ultrasound images both during and after a fetal anatomical ultrasound examination.

    BioticsAI is a Software as a Service (SaaS) solution intended to help sonographers, OB/GYNs, MFMs, and fetal surgeons (collectively designated healthcare professionals, or HCPs) perform their routine fetal ultrasound examinations in real time.

    BioticsAI can be used by HCPs during second-trimester fetal ultrasound exams, during which a fetal anatomy exam is typically captured (usually conducted between 18 and 22 weeks, but applicable to gestational ages from 18 up to 39 weeks). The software is intended to assist HCPs in confirming, during and after their examination, that the examination is complete and that all images were collected according to their protocol.

    BioticsAI is accessed as a SaaS application through an internet browser.

    BioticsAI receives DICOM instances consisting of still fetal ultrasound images (still image captures or individual frames from a multi-frame instance) from the ultrasound machine. These are submitted by the performing healthcare professional from the clinic's network, either during or after the screening, and the software performs the following:

    • Automatically detect fetal anatomical planes (2D ultrasound views).
    • Automatically flag high-level anatomical features (e.g., "head", "thorax", "limb detected in image", etc).
    • Automatically detect specific anatomical structures within supported planes/views (e.g., "cerebellum, CSP, and cisterna magna found in transcerebellar plane image").
    • Facilitate quality verification of supported planes by determining whether the expected anatomical structures, as informed by the latest ISUOG and AIUM guidelines, are present in the ultrasound image. The quality assessment focuses on the presence or absence of these anatomical structures (a minimal checklist sketch follows this list).
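
    The quality-verification step in the last bullet amounts to comparing the structures detected in a plane against a guideline-derived checklist. The sketch below is a minimal illustration of that idea; EXPECTED_STRUCTURES and the detection output format are hypothetical placeholders, not BioticsAI's actual schema.

```python
# Minimal sketch of checklist-style quality verification for a detected plane.
# EXPECTED_STRUCTURES is a hypothetical, abbreviated checklist; the real device
# derives its criteria from ISUOG/AIUM guidance.
EXPECTED_STRUCTURES = {
    "transcerebellar": {"cerebellum", "csp", "cisterna magna"},
    "abdomen_stomach_umbilical_vein": {"stomach", "umbilical vein"},
}

def verify_plane(plane: str, detected: set[str]) -> dict[str, bool]:
    """Return {structure: present?} for every structure expected in this plane."""
    expected = EXPECTED_STRUCTURES.get(plane, set())
    return {structure: structure in detected for structure in expected}

# Example: one expected structure missing -> it is flagged as absent.
print(verify_plane("transcerebellar", {"cerebellum", "csp"}))
# {'cerebellum': True, 'csp': True, 'cisterna magna': False}
```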

    BioticsAI automatically identifies fetal anatomical views and anatomical structures captured during the screening. It uses green highlights to indicate successfully detected planes and structures. Red highlights are used to flag instances where the model could not detect an expected anatomical view or structure, even though it is a supported feature. Yellow highlights indicate views or structures that require manual verification (when the AI cannot determine whether anatomical features are present or absent because it is not yet supported by our product).
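
    The green/red/yellow highlighting described above is essentially a three-way status derived from whether a feature is supported and whether it was detected. A minimal, hypothetical mapping (not the vendor's actual logic) could look like this:

```python
from enum import Enum

class Highlight(Enum):
    GREEN = "detected"              # supported feature, successfully detected
    RED = "expected_but_missing"    # supported feature, not detected
    YELLOW = "manual_verification"  # feature not yet supported by the model

def highlight_for(feature_supported: bool, detected: bool) -> Highlight:
    """Map a detection outcome to a display highlight, mirroring the description above."""
    if not feature_supported:
        return Highlight.YELLOW
    return Highlight.GREEN if detected else Highlight.RED
```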

    The end user can interact with the software to override BioticsAI's outputs. Specifically, users can unassign or assign an image to a plane or high level anatomical feature, and update the status of quality criteria for structures by changing it from "found" to "not found" or vice versa. Users have the flexibility to review and edit these assignments at any point during or after the exam.

    The end user then has the ability to include the information gathered during quality and image review automatically in a final report via a button called "Confirm Screening Results", automatically filling out a report template with identified planes and structures. The report can then be further exported to the clinic's PACS over DIMSE via a populated DICOM SR.
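
    Exporting the confirmed report to the clinic's PACS as a DICOM SR over DIMSE corresponds to a plain C-STORE of an SR object. The sketch below uses pynetdicom/pydicom as one plausible way to do this; the host, port, AE titles, and the build_structured_report() helper are hypothetical, and nothing here is taken from BioticsAI's implementation.

```python
# Hypothetical sketch: push a populated DICOM SR to a PACS over DIMSE (C-STORE).
# Assumes pynetdicom; build_structured_report() is a placeholder that would return
# a populated pydicom Dataset conforming to an SR IOD.
from pynetdicom import AE
from pynetdicom.sop_class import ComprehensiveSRStorage

from my_reporting import build_structured_report  # hypothetical helper

def export_report_to_pacs(findings: dict, host: str = "pacs.local", port: int = 11112) -> None:
    sr_dataset = build_structured_report(findings)  # populated DICOM SR dataset

    ae = AE(ae_title="BIOTICS_SCU")
    ae.add_requested_context(ComprehensiveSRStorage)

    assoc = ae.associate(host, port, ae_title="CLINIC_PACS")
    if not assoc.is_established:
        raise ConnectionError("Could not associate with PACS")
    try:
        status = assoc.send_c_store(sr_dataset)
        if status and status.Status == 0x0000:
            print("SR stored successfully")
    finally:
        assoc.release()
```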

    BioticsAI also provides a standard DICOM Viewer for viewing DICOM instances, and obstetrics ultrasound report templates for manually creating ultrasound reports without the AI based functionality as described above.

    To further explain the AI-driven outputs provided by the device, we describe the three primary AI components below:

    • AI-1: High-Level Anatomy Classification

      Provides a multi-label classification of the general anatomical region depicted in the image (e.g., head/brain, face, thorax/chest, abdomen, limbs). These categories correspond to standard high-level anatomy groupings used in fetal ultrasound interpretation.

    • AI-2: Per-Class Top-1 Fetal Plane Classification

      Provides fetal anatomical plane classifications using a per-class Top-1 approach. A fetal "plane" refers to a standardized cross-sectional view defined by ISUOG and aligned with AIUM guidance for mid-trimester fetal anatomy scans. For each anatomical plane category, the model outputs the image with the single highest-confidence prediction (Top-1) associated with that class; a minimal selection sketch follows this list.

    • AI-3: Fetal Anatomical Structure Classification

      Provides multi-label identification of fetal anatomical structures (e.g., cerebellum, cisterna magna, cerebral peduncles), generated from the model's segmentation head and refined through post-processing filters that enforce plane-structure consistency and remove non-intended labels.
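
    The per-class Top-1 selection referenced in AI-2 reduces to a per-class arg-max over image-level confidence scores. A minimal sketch, assuming a hypothetical score layout (image_id -> {plane: confidence}) rather than BioticsAI's actual interface:

```python
# Minimal sketch of per-class Top-1 selection: for each plane class, keep the single
# image with the highest confidence score. The score layout is a hypothetical example.
def per_class_top1(scores: dict[str, dict[str, float]]) -> dict[str, tuple[str, float]]:
    """scores: image_id -> {plane_name: confidence}. Returns plane_name -> (image_id, confidence)."""
    best: dict[str, tuple[str, float]] = {}
    for image_id, plane_scores in scores.items():
        for plane, confidence in plane_scores.items():
            if plane not in best or confidence > best[plane][1]:
                best[plane] = (image_id, confidence)
    return best

scores = {
    "img_001": {"head_transcerebellar": 0.97, "head_transthalamic": 0.12},
    "img_002": {"head_transcerebellar": 0.41, "abdomen_bladder": 0.88},
}
print(per_class_top1(scores))
# {'head_transcerebellar': ('img_001', 0.97), 'head_transthalamic': ('img_001', 0.12),
#  'abdomen_bladder': ('img_002', 0.88)}
```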

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) clearance letter for BioticsAI:

    Please note that the document primarily provides the results of standalone performance testing and verification/validation activities. It does not detail specific acceptance criteria values that were established prior to testing for each metric (e.g., "The device must achieve a sensitivity of at least X"). Instead, the tables present the achieved performance of the device from its standalone testing. Based on the clearance letter, it is implied that these reported performance metrics were deemed acceptable by the FDA for substantial equivalence.


    1. Table of Acceptance Criteria and Reported Device Performance

    Category | Item | Performance Metric | Reported Device Performance (Point Estimate) | 95% Bootstrap Confidence Interval
    --- | --- | --- | --- | ---
    AI-1: High-Level Anatomy Classification | Fetal "Abdomen" View | Sensitivity | 0.953 | (0.942, 0.962)
     | | Specificity | 0.986 | (0.984, 0.989)
     | Fetal "Face" View | Sensitivity | 0.944 | (0.932, 0.956)
     | | Specificity | 0.993 | (0.991, 0.994)
     | Fetal "Head" Planes | Sensitivity | 0.955 | (0.946, 0.964)
     | | Specificity | 0.996 | (0.995, 0.997)
     | Fetal "Limbs" | Sensitivity | 0.919 | (0.895, 0.943)
     | | Specificity | 0.983 | (0.981, 0.985)
     | "Heart Screening" Planes | Sensitivity | 0.912 | (0.895, 0.928)
     | | Specificity | 0.990 | (0.988, 0.992)
     | Summary: 5 High-Level Fetal Anatomy Sections (Abdomen, Face, Head, Limbs, Thorax) | Sensitivity (All Image Qualities) | 0.934 | (0.929, 0.940)
     | | Specificity (All Image Qualities) | 0.989 | (0.988, 0.990)
    AI-2: Per-Class Top-1 Fetal Plane Classification | Abdomen Bladder | Sensitivity | 0.960 | (0.940, 0.977)
     | | Specificity | 0.998 | (0.997, 0.998)
     | Abdomen Cord Insertion | Sensitivity | 0.965 | (0.947, 0.983)
     | | Specificity | 0.998 | (0.997, 0.999)
     | Abdomen Kidneys | Sensitivity | 0.953 | (0.927, 0.973)
     | | Specificity | 0.998 | (0.997, 0.999)
     | Abdomen Stomach Umbilical Vein | Sensitivity | 0.990 | (0.982, 0.997)
     | | Specificity | 1.000 | (1.000, 1.000)
     | Face Coronal Upperlip Nose Nostrils | Sensitivity | 0.981 | (0.968, 0.993)
     | | Specificity | 0.999 | (0.999, 1.000)
     | Face Median Facial Profile | Sensitivity | 1.000 | (1.000, 1.000)
     | | Specificity | 0.999 | (0.998, 1.000)
     | Face Orbits Lenses | Sensitivity | 0.897 | (0.863, 0.927)
     | | Specificity | 0.999 | (0.999, 1.000)
     | Head Transcerebellar | Sensitivity | 0.998 | (0.994, 1.000)
     | | Specificity | 1.000 | (0.999, 1.000)
     | Head Transthalamic | Sensitivity | 0.923 | (0.899, 0.945)
     | | Specificity | 0.992 | (0.991, 0.994)
     | Head Transventricular | Sensitivity | 0.975 | (0.964, 0.984)
     | | Specificity | 1.000 | (1.000, 1.000)
     | Limbs Femur | Sensitivity | 0.955 | (0.944, 0.966)
     | | Specificity | 0.992 | (0.990, 0.994)
     | Spine Sagittal | Sensitivity | 0.909 | (0.891, 0.927)
     | | Specificity | 0.995 | (0.993, 0.996)
     | Thorax Lungs Four Heart Chambers | Sensitivity | 0.969 | (0.954, 0.983)
     | | Specificity | 0.997 | (0.996, 0.998)
     | Summary: 13 Fetal Ultrasound Planes | Sensitivity (All Image Qualities) | 0.960 | (0.955, 0.964)
     | | Specificity (All Image Qualities) | 0.997 | (0.997, 0.998)
    AI-3: Fetal Anatomical Structure Classification | 12 Fetal Head Anatomical Structures | Sensitivity (Diagnostically Acceptable Images) | 0.948 | (0.935, 0.959)
     | | Sensitivity (All Image Qualities) | 0.881 | (0.871, 0.891)
     | | Specificity (All Image Qualities) | 0.991 | (0.990, 0.992)
     | 9 Fetal Abdomen Anatomical Structures | Sensitivity (Diagnostically Acceptable Images) | 0.953 | (0.941, 0.964)
     | | Sensitivity (All Image Qualities) | 0.919 | (0.909, 0.930)
     | | Specificity (All Image Qualities) | 0.983 | (0.982, 0.984)
     | 9 Fetal Face Anatomical Structures | Sensitivity (Diagnostically Acceptable Images) | 0.983 | (0.976, 0.989)
     | | Sensitivity (All Image Qualities) | 0.958 | (0.951, 0.965)
     | | Specificity (All Image Qualities) | 0.991 | (0.990, 0.992)
     | 2 Fetal Spine Anatomical Structures | Sensitivity (Diagnostically Acceptable Images) | 0.992 | (0.989, 0.996)
     | | Sensitivity (All Image Qualities) | 0.975 | (0.970, 0.980)
     | | Specificity (All Image Qualities) | 0.927 | (0.921, 0.931)
     | 16 Fetal Thorax & Heart Anatomical Structures | Sensitivity (Diagnostically Acceptable Images) | 0.978 | (0.969, 0.985)
     | | Sensitivity (All Image Qualities) | 0.925 | (0.911, 0.939)
     | | Specificity (All Image Qualities) | 0.989 | (0.988, 0.990)
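
    The table reports point estimates with 95% bootstrap confidence intervals, but the document does not describe the resampling scheme. The sketch below shows one common way such an interval could be computed (percentile bootstrap over image-level resamples, 2,000 replicates); the resampling unit, replicate count, and CI method are assumptions for illustration only.

```python
# Illustrative percentile-bootstrap 95% CI for sensitivity (TP / (TP + FN)).
# The clearance summary does not specify how its intervals were derived; this is
# only a generic sketch of the technique.
import numpy as np

def sensitivity(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    positives = y_true == 1
    return float((y_pred[positives] == 1).mean())

def bootstrap_ci(y_true, y_pred, n_boot: int = 2000, alpha: float = 0.05, seed: int = 0):
    rng = np.random.default_rng(seed)
    n = len(y_true)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)          # resample images with replacement
        stats.append(sensitivity(y_true[idx], y_pred[idx]))
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return sensitivity(y_true, y_pred), (float(lo), float(hi))

# Toy example with simulated labels and predictions (~95% agreement).
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=1000)
y_pred = np.where(rng.random(1000) < 0.95, y_true, 1 - y_true)
print(bootstrap_ci(y_true, y_pred))
```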

    2. Sample size used for the test set and the data provenance

    • Sample Size: 11,186 fetal ultrasound images across 296 patients.
    • Data Provenance:
      • Country of Origin: United States.
      • Retrospective or Prospective: Not explicitly stated as retrospective or prospective, but described as "independent of the data used during model development" and collected "from a single site (across multiple ultrasound screening units and machine instances) in the United States." This typically implies a retrospective collection for model validation.
      • Diversity: Data represented varying ethnicities, patient BMIs, patient ages (18-44 years), gestational ages (18-39 weeks), twin pregnancies, and presence of abnormalities, designed to be "representative of the intended use population."

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    The document does not explicitly state the number of experts or their specific qualifications (e.g., "radiologist with 10 years of experience") used to establish the ground truth for the test set. It only states that the ground truth was "independent of the data used during model development."


    4. Adjudication method for the test set

    The document does not specify the adjudication method (e.g., 2+1, 3+1, none) used for establishing the ground truth of the test set.


    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

    An MRMC comparative effectiveness study was not explicitly mentioned or detailed in the provided document. The performance data presented are for standalone device performance.


    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    Yes, standalone performance testing (algorithm only, without human-in-the-loop performance) was done. The document states: "BioticsAI conducted a standalone performance testing on a dataset of 11,186 fetal ultrasound images..." The tables present the sensitivity and specificity of the AI model.


    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    The document does not explicitly state the precise type of ground truth used (e.g., expert consensus, pathology, outcomes data). However, for image analysis tasks like detecting planes and structures in ultrasound images, ground truth is typically established by expert annotation or consensus by qualified medical professionals (e.g., sonographers, OB/GYN, MFMs, Fetal surgeons, or radiologists) interpreting the images. The context describes the device as verifying guidelines and determining presence/absence of structures, implying a gold standard based on established medical interpretation.


    8. The sample size for the training set

    The document does not provide the exact sample size for the training set. It only mentions that the test set was "independent of the data used during model development (training/fine tuning/internal validation) and establishment of device operating points."


    9. How the ground truth for the training set was established

    The document does not provide details on how the ground truth for the training set was established. It only mentions the data was used for "model development (training/fine tuning/internal validation)." Typically, similar to the test set, this would involve expert annotation and labeling.


    Page 1 of 441