
Search Results

Found 2808 results

510(k) Data Aggregation

    K Number
    K252496


    Manufacturer
    Date Cleared
    2026-01-29

    (174 days)

    Product Code
    Regulation Number
    892.2050
    Age Range
    20 - 80
    Reference & Predicate Devices
    Predicate For
    N/A
    Why did this record match?
    510(k) Summary Text (Full-text Search):

    Korea

    Re: K252496
    Trade/Device Name: Neurophet AQUA AD Plus
    Regulation Number: 21 CFR 892.2050
    Classification Names: Automated Radiological Image Processing Software; Medical image management and processing system
    Product Codes: QIH (primary), LLZ (subsequent)
    510(k) Review Panel: Radiology

    Intended Use

    Neurophet AQUA AD Plus is intended for automatic labeling, visualization, and volumetric quantification of segmentable brain structures and lesions, as well as SUVR quantification from a set of MR and PET images. Volumetric measurements may be compared to reference percentile data.

    Device Description

    Neurophet AQUA AD Plus is a software device intended for the automatic labeling of brain structures, visualization, and volumetric quantification of segmented brain regions and lesions, as well as standardized uptake value ratio (SUVR) quantification using MR and PET images. The volumetric outcomes are compared to normative reference data to support the evaluation of neurodegeneration and cognitive impairment.

    The device is designed to assist physicians in clinical evaluation by streamlining the clinical workflow from patient registration through image analysis, analysis result archiving, and report generation using software-based functionalities. The device provides percentile-based results by comparing an individual's imaging-derived quantitative analysis results to reference populations. Percentile-based results are provided for reference only and are not intended to serve as a standalone basis for diagnostic decision-making. Clinical interpretation must be performed by qualified healthcare professionals.
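The percentile-based comparison described above can be made concrete with a short sketch. This is an illustration only: `volume_percentile` is a hypothetical helper (not part of the device), and it assumes a simple strict-rank percentile against a normative sample.

```python
import numpy as np

def volume_percentile(measured_volume, reference_volumes):
    """Percentile of one volumetric measurement within a normative
    reference sample (strict rank; hypothetical helper for illustration)."""
    ref = np.asarray(reference_volumes, dtype=float)
    return 100.0 * np.mean(ref < measured_volume)

# Hypothetical hippocampal volumes (cc) from a reference population:
reference = [3.1, 3.4, 3.6, 3.8, 4.0, 4.2, 4.5]
print(volume_percentile(3.5, reference))  # ~28.6: 2 of 7 reference values lie below 3.5
```

Real normative comparisons typically also adjust for age, sex, and intracranial volume, which this sketch omits.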

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the Neurophet AQUA AD Plus, based on the provided FDA 510(k) Clearance Letter:


    Acceptance Criteria and Device Performance for Neurophet AQUA AD Plus

    The Neurophet AQUA AD Plus employs multiple AI modules for automated segmentation and quantitative analysis of brain structures and lesions using MR and PET images. The device's performance was validated against predefined acceptance criteria for each module.

    1. Table of Acceptance Criteria and Reported Device Performance

    | AI Module | Performance Metric | Acceptance Criteria | Reported Device Performance |
    |---|---|---|---|
    | T1-SegEngine (T1-weighted structural MRI segmentation) | Accuracy (Dice Similarity Coefficient, DSC) | 95% CI of DSC: [0.750, 0.850] for major cortical brain structures; [0.800, 0.900] for major subcortical brain structures | Cortical regions: mean DSC 0.83 ± 0.04 (95% CI: 0.82–0.84); subcortical regions: mean DSC 0.87 ± 0.03 (95% CI: 0.86–0.88) |
    | T1-SegEngine | Reproducibility (Average Volume Difference Percentage, AVDP) | Equivalence range 1.0–5.0% for both subcortical and cortical regions | Subcortical regions: mean AVDP 2.50 ± 0.93% (95% CI: 2.26–2.74); cortical regions: mean AVDP 1.79 ± 0.74% (95% CI: 1.60–1.98) |
    | FLAIR-SegEngine (T2-FLAIR hyperintensity segmentation) | Accuracy (DSC) | Mean DSC ≥ 0.80 | Mean DSC 0.90 ± 0.04 (95% CI: 0.89–0.91) |
    | FLAIR-SegEngine | Reproducibility (mean AVDP and absolute lesion volume difference) | Absolute difference < 0.25 cc; mean AVDP < 2.5% | Mean AVDP 0.99 ± 0.66%; mean absolute lesion volume difference 0.08 ± 0.06 cc |
    | PET-Engine (SUVR and Centiloid quantification) | SUVR accuracy (Intraclass Correlation Coefficient, ICC) | ICC ≥ 0.60 across Alzheimer's-relevant regions (compared to FDA-cleared reference product K221405) | ICC ≥ 0.993 across seven Alzheimer's-relevant regions |
    | PET-Engine | Centiloid classification (kappa for amyloid positivity) | κ ≥ 0.70 (substantial agreement with consensus expert visual reads) | Kappa values met or exceeded the criterion (specific values not provided) |
    | ED-SegEngine (edema-like T2-FLAIR hyperintensity segmentation) | Accuracy (DSC) | DSC ≥ 0.70 | Mean DSC 0.91 ± 0.09 (95% CI: 0.89–0.93) |
    | HEM-SegEngine (GRE/SWI hypointense lesion segmentation) | Accuracy (F1-score / DSC) | F1-score ≥ 0.60 | Median F1-score (DSC): 0.860 (95% CI: 0.824–0.902) |
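For reference, the two headline metrics in the table can be computed from binary masks and paired volume measurements as below. This is a generic sketch of the standard definitions, not Neurophet's implementation; the AVDP formula (absolute difference over the pair mean) is an assumption, since the letter does not define it.

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient (DSC) between two binary segmentation masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def avdp(volume_scan1, volume_scan2):
    """Volume difference percentage between paired scans of one subject
    (assumed here: absolute difference over the pair mean)."""
    mean = (volume_scan1 + volume_scan2) / 2.0
    return 100.0 * abs(volume_scan1 - volume_scan2) / mean

print(dice([1, 1, 0, 0], [1, 0, 1, 0]))  # 0.5: one overlapping voxel out of four
print(avdp(10.2, 10.0))                  # ~1.98% volume difference
```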

    2. Sample Sizes and Data Provenance for the Test Set

    • T1-SegEngine (Accuracy): 60 independent T1-weighted MRI cases. Data provenance is not explicitly stated, but the cases are implied to come from the same kinds of sources cited for the training data (public repositories such as ADNI, AIBL, and PPMI, plus institutional clinical sites) while remaining distinct from the training data.
    • T1-SegEngine (Reproducibility): 60 subjects with paired T1-weighted scans (120 scans total). Data provenance not explicitly stated.
    • FLAIR-SegEngine (Accuracy): 136 independent T2-FLAIR cases. Data provenance not explicitly stated, but distinct from training data.
    • FLAIR-SegEngine (Reproducibility): Paired T2-FLAIR scans (number not specified). Data provenance not explicitly stated.
    • PET-Engine (SUVR accuracy): 30 paired MRI–PET datasets. Data provenance not explicitly stated, but implicitly from multi-center studies including varied tracers and sites.
    • PET-Engine (Centiloid classification): 176 paired T1-weighted MRI and amyloid PET scans from ADNI and AIBL. These are public repositories, likely involving diverse geographical data (e.g., USA, Australia). Data is retrospective.
    • ED-SegEngine (Accuracy): 100 T2-FLAIR scans collected from U.S. and U.K. clinical sites. Data is retrospective.
    • HEM-SegEngine (Accuracy): 106 GRE/SWI scans from U.S. clinical sites. Data is retrospective.
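For context on the Centiloid classification above: the Centiloid scale is, per the standard Klunk et al. calibration, a linear rescaling of SUVR anchored so that a young-control mean maps to 0 CL and a typical-AD mean maps to 100 CL. The sketch below shows that rescaling; the anchor values in the example are placeholders, not the device's calibration.

```python
def centiloid(suvr, suvr_young_control, suvr_typical_ad):
    """Rescale an SUVR onto the Centiloid (CL) scale:
    0 CL = mean SUVR of young controls, 100 CL = mean SUVR of typical AD."""
    return 100.0 * (suvr - suvr_young_control) / (suvr_typical_ad - suvr_young_control)

# Placeholder anchor values, for illustration only:
print(centiloid(1.5, suvr_young_control=1.0, suvr_typical_ad=2.0))  # 50.0
```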

    For all modules, validation datasets were fully independent from training datasets at the subject level, drawn from distinct sites and/or repositories where applicable.
    The validation cohorts covered adult subjects across a broad age range (approximately 40–80+ years), with both females and males represented.
    Racial/ethnic composition included White, Asian, Black, and African American subjects, depending on the underlying public and institutional datasets.
    Clinical subgroups included clinically normal, mild cognitive impairment, and Alzheimer's disease for structural, FLAIR, and PET modules, and cerebrovascular/amyloid‑related pathologies for ED‑ and HEM‑SegEngines.

    3. Number of Experts and Qualifications for Ground Truth

    For structural and lesion segmentation modules (T1-, FLAIR-, ED-, HEM-SegEngines):

    • Number of Experts: Not explicitly stated as a specific number, but "subspecialty-trained neuroradiologists" were used.
    • Qualifications: "Subspecialty-trained neuroradiologists." Specific years of experience are not mentioned.

    For Centiloid classification in the PET-Engine:

    • Number of Experts: "Consensus expert visual reads." The exact number is not specified, but the phrasing implies multiple experts.
    • Qualifications: "Experts" trained in established amyloid PET reading criteria. Specific qualifications beyond "expert" and training in criteria are not detailed.

    4. Adjudication Method for the Test Set

    For structural and lesion segmentation modules (T1-, FLAIR-, ED-, HEM-SegEngines):

    • "Consensus/adjudication procedures and internal quality control to ensure consistency" were used for establishing reference segmentations. The specific 2+1, 3+1, or other detailed method is not provided.

    For Centiloid classification in the PET-Engine:

    • "Consensus expert visual interpretation" was used. The specific method details are not provided.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    The provided text does not indicate that an MRMC comparative effectiveness study was done to compare human readers with AI assistance versus without AI assistance. The performance studies primarily focus on the standalone (algorithm-only) performance of the device against expert-derived ground truth or a cleared reference product.

    6. Standalone (Algorithm-Only) Performance Study

    Yes, a standalone (algorithm-only, without human-in-the-loop) performance study was done for all AI modules. The text explicitly states: "Standalone performance tests were conducted for each module using validation datasets that were completely independent from those used for model development and training." The results presented in the table above reflect this standalone performance.

    7. Type of Ground Truth Used

    • Expert Consensus:
      • For structural and lesion segmentation modules (T1-, FLAIR-, ED-, HEM-SegEngines), reference segmentations were generated by "subspecialty-trained neuroradiologists using predefined anatomical and lesion‑labeling criteria, with consensus/adjudication procedures."
      • For Centiloid classification in the PET-Engine, reference labels were derived from "consensus expert visual interpretation using established amyloid PET reading criteria."
    • Comparison to Cleared Reference Product:
      • For SUVR quantification in the PET-Engine, reference values were obtained from an "FDA‑cleared reference product (K221405)" (Neurophet SCALE PET).
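The κ ≥ 0.70 criterion cited earlier refers to Cohen's kappa, which corrects raw agreement between two raters for agreement expected by chance. A self-contained sketch for categorical reads (my own implementation, shown only to make the metric concrete):

```python
import numpy as np

def cohens_kappa(reads_a, reads_b):
    """Cohen's kappa between two raters' categorical reads
    (e.g. amyloid-positive = 1, amyloid-negative = 0)."""
    a, b = np.asarray(reads_a), np.asarray(reads_b)
    p_obs = np.mean(a == b)                        # observed agreement
    p_exp = sum(np.mean(a == c) * np.mean(b == c)  # chance agreement
                for c in np.union1d(a, b))
    return (p_obs - p_exp) / (1.0 - p_exp)

print(cohens_kappa([1, 1, 0, 0], [1, 1, 0, 1]))  # 0.5
```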

    8. Sample Size for the Training Set

    The exact sample size for the training set is not explicitly stated as a single number. However, the document mentions:

    • "The AI-based modules (T1‑SegEngine, FLAIR‑SegEngine, PET‑Engine, ED‑SegEngine, HEM‑SegEngine) were trained using multi-center MRI and PET datasets collected from public repositories (e.g., ADNI, AIBL, PPMI) and institutional clinical sites."
    • "Training data covered:
      • Adult subjects across a broad age range (approximately 20–80+ years), with both sexes represented and including multiple racial/ethnic groups (e.g., White, Asian, Black).
      • A spectrum of clinical conditions relevant to the intended use, including clinically normal, mild cognitive impairment, and Alzheimer's disease, as well as patients with cerebrovascular and amyloid‑related pathologies for lesion-segmentation modules.
      • MRI acquired on major vendor platforms (GE, Siemens, Philips) at 1.5T and 3T... and amyloid PET acquired on multiple PET systems with commonly used tracers (Amyvid, Neuraceq, Vizamyl)."

    This indicates a large and diverse training set, although a precise count of subjects or images isn't provided.

    9. How the Ground Truth for the Training Set Was Established

    The document implies that the training data included "manual labels" as it states: "No images or manual labels from the training datasets were reused in the validation datasets." However, it does not explicitly detail the process by which these "manual labels" or ground truth for the training set were established (e.g., number of experts, qualifications, adjudication method for training data). It's reasonable to infer that similar expert-driven processes were likely used for training ground truth as for validation, but this is not explicitly confirmed in the provided text.


    K Number
    K252091


    Manufacturer
    Date Cleared
    2026-01-29

    (210 days)

    Product Code
    Regulation Number
    892.2050
    Age Range
    22 - 120
    Reference & Predicate Devices
    Predicate For
    N/A
    Why did this record match?
    510(k) Summary Text (Full-text Search):

    Netherlands

    Re: K252091
    Trade/Device Name: Surgical Reality Viewer
    Regulation Number: 21 CFR 892.2050
    Classification Name: Automated Radiological Image Processing Software
    Product Code: QIH (shared with predicate Ceevra Reveal 3, K222676)

    Intended Use

    Surgical Reality Viewer is a medical imaging visualization software intended to assist trained healthcare professionals with preoperative and intraoperative visualizations, by displaying 2D and 3D renderings of DICOM compliant patient images and normal anatomic segmentations derived from patient images as well as functions for manipulation of segmentations and 3D models.

    Surgical Reality Viewer assists the trained healthcare professional who is responsible for making all final patient management decisions.

    The machine learning algorithms in use by Surgical Reality Viewer are intended for use on adult patients aged 22 years and over.

    Device Description

    Surgical Reality Viewer is medical imaging visualization software that accepts DICOM compliant images (e.g. CT-scans or MR images) and segmentation files in various 3D object file formats (e.g. NifTi, OBJ, MHD, STL, etc.). The device can generate preliminary segmentations of normal anatomy on demand using machine learning and computer vision algorithms. It provides tools for editing and/or creating segmentations using various built-in 2D and 3D image manipulation functions. The software generates a 3D segmented view of the loaded patient data, either on a supported 2D or 3D screen, and offers features such as pre-operative (re)viewing of DICOM data overlaid with segmentation, (intra/post)operative visualization of anatomical structures, 2D-viewing, volume rendering, surface rendering, immersive and interactive 3D-viewing, 2D and 3D measuring of DICOM image data, storing on a local device, anatomic labelling including segmentation tools, and tools for annotations, brushing or carving of anatomical structures. Surgical Reality Viewer runs on a dedicated computer within the customer environment, meeting specific hardware requirements including a Windows operating system (version 10 or higher), GPU (Nvidia GeForce 2070), CPU (Intel i7), 16GB RAM, and at least 100GB free hard drive space.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the Surgical Reality Viewer, based on the provided FDA 510(k) clearance letter and summary:

    Acceptance Criteria and Reported Device Performance

    The provided document details the performance of the machine learning algorithms for various anatomical segmentations using the Sørensen–Dice coefficient (DSC). Additionally, it describes a qualitative assessment of suitability.

    Table of Acceptance Criteria (Implicit) and Reported Device Performance

    | Anatomical Structure | Metric (Implicit Acceptance Criteria) | Reported Device Performance |
    |---|---|---|
    | Lobe segmentation | Average Sørensen–Dice coefficient (DSC) | 0.97 |
    | • LUL | DSC | 0.98 |
    | • LLL | DSC | 0.98 |
    | • RUL | DSC | 0.98 |
    | • RLL | DSC | 0.98 |
    | • RML | DSC | 0.96 |
    | Vessel segmentation | Average DSC | 0.84 |
    | • Artery | DSC | 0.84 |
    | • Vein | DSC | 0.83 |
    | Airway segmentation | DSC | 0.96 |
    | Aorta segmentation | DSC | 0.96 |
    | Pulmonary segmentation | Average DSC | 0.85 |
    | • Left segments | DSC | 0.85 |
    | • Right segments | DSC | 0.85 |

    Qualitative suitability scores (scale 1–5, higher is better):

    | Segmentation | Suitability Score |
    |---|---|
    | Airways segmentations | 4.8 |
    | Artery segmentations | 4.8 |
    | Vein segmentations | 4.9 |
    | Lobe segmentations | 5.0 |
    | Pulmonary lobe segments | 4.7 |
    | Aorta segmentations | 5.0 |

    Note on Acceptance Criteria: The document directly presents the performance metrics (DSC and qualitative scores). While explicit numerical acceptance criteria (e.g., "must be >= 0.95 DSC") are not stated, the reported high performance figures implicitly demonstrate the device meets acceptable levels for these metrics.


    Study Details

    1. Sample Size Used for the Test Set and Data Provenance

    • Sample Size: 102 CT images (each study belonged uniquely to a single patient).
    • Data Provenance: 60 (n=60) scans were obtained from the United States. The remaining 42 scans' country of origin is not specified, but the document mentions "geographical location" as a subgroup for generalizability.
    • Retrospective/Prospective: Not explicitly stated, but the mention of "curated datasets" and "clinical testing dataset" without ongoing patient enrollment suggests a retrospective study.

    2. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • Number of Experts: Not explicitly stated as a specific number. The document mentions "trained professionals" who generated the initial segmentations and "thoracic surgeons with a minimum of 2 years professional working experience" who verified these segmentations. This implies at least two distinct groups of experts were involved, potentially multiple individuals within each group.
    • Qualifications of Experts:
      • Initial Segmentation Generation: "Trained professionals." (Specific professional background and experience level not detailed).
      • Segmentation Verification: "Thoracic surgeons with a minimum of 2 years professional working experience."

    3. Adjudication Method (for the Test Set)

    • Adjudication Method: Not explicitly stated. The process described is "segmented by trained professionals and the segmentations were verified by thoracic surgeons." This suggests a single ground truth was established after the verification step, but the specific process for resolving discrepancies (e.g., consensus, tie-breaking by a third expert) is not detailed. It does not mention a 2+1 or 3+1 method.

    4. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

    • MRMC Study: No, an MRMC comparative effectiveness study was not explicitly described. The study focuses on the standalone performance of the algorithm against ground truth, and separate qualitative scoring of the suitability of segmentations. There is no mention of comparing human readers with and without AI assistance to determine an "effect size" of improvement.

    5. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    • Standalone Study: Yes, a standalone performance study was done. The "Performance was verified by comparing segmentations generated by the machine learning models against ground truth segmentations generated by trained professionals." This directly assesses the algorithm's performance without a human in the loop for generating the primary segmentation output being evaluated for accuracy.

    6. The Type of Ground Truth Used

    • Type of Ground Truth: The ground truth for the quantitative analysis (DSC) was established by "expert consensus" (or at least expert-verified segmentations). Specifically, "segmentations generated by trained professionals and the segmentations were verified by thoracic surgeons." For the qualitative assessment, "medical professionals were tasked to qualitatively score the suitability of the segmentations provided through the Viewer," which is also an expert-based evaluation of the AI output.

    7. The Sample Size for the Training Set

    • Training Set Sample Size: Not explicitly stated. The document mentions "Each of the algorithms has been trained and tuned on curated datasets representative of the intended patient population," but does not provide a specific number for the training set. It only states that a "CT image was either part of the tuning or testing dataset and not in both," indicating that the 102 CT images used for testing were separate from the training/tuning data.

    8. How the Ground Truth for the Training Set Was Established

    • Training Set Ground Truth: Not explicitly stated. The document mentions "trained and tuned on curated datasets representative of the intended patient population." While not explicitly detailed, it's reasonable to infer that a similar expert-driven process (like the ground truth establishment for the test set) would have been used for creating the ground truth in the training dataset to ensure high-quality training data.

    K Number
    K251306


    Date Cleared
    2026-01-28

    (275 days)

    Product Code
    Regulation Number
    892.2050
    Age Range
    18 - 999
    Reference & Predicate Devices
    Predicate For
    N/A
    Why did this record match?
    510(k) Summary Text (Full-text Search):

    406040 Taiwan

    Re: K251306
    Trade/Device Name: Seg Pro V3 (RT-300)
    Regulation Number: 21 CFR 892.2050
    Classification Name: Radiological Image Processing Software For Radiation Therapy
    Class: II

    Intended Use

    Seg Pro V3 is a software device intended to assist trained radiation oncology professionals, including, but not limited to, radiation oncologists, medical physicists, and dosimetrists, during their clinical workflows of radiation therapy treatment planning by providing initial contours of organs at risk on DICOM images. Seg Pro V3 is intended to be used on adult patients only.

    The contours are generated by deep-learning algorithms and then transferred to radiation therapy treatment planning systems. Seg Pro V3 must be used in conjunction with a DICOM-compliant treatment planning system to review and edit results generated. Seg Pro V3 is not intended to be used for decision making or to detect lesions.

    Seg Pro V3 is an adjunct tool and is not intended to replace a clinician's judgment and manual contouring of the normal organs on DICOM images. Clinicians must not use the software-generated output alone, without review, as the primary interpretation.

    Device Description

    The proposed device, Seg Pro V3, is a standalone software that is designed to be used by trained radiation oncology professionals to automatically delineate (segment/contour) organs-at-risk (OARs) on DICOM images. This auto-contouring of OARs is intended to facilitate radiation therapy workflows.

    The device receives images in DICOM format as input and automatically generates the contours of OARs, which are stored in DICOM format and in RTSTRUCT modality. The device must be used in conjunction with a DICOM-compliant treatment planning system (TPS) to review and edit results. Once data is routed to Seg Pro V3, the data will be processed and no user interaction is required, nor provided.

    The deployment environment is recommended to be a local network with an existing hospital-grade IT system in place. Seg Pro V3 should be installed on a specialized server that supports deep learning processing. The following configurations are operated only by the manufacturer:

    • Local network setting of input and output destinations.
    • Presentation of labels and their color.
    • Processed image management and output (RTSTRUCT) file management.
    AI/ML Overview

    Here's an analysis of the acceptance criteria and study proving the device meets those criteria, based on the provided FDA 510(k) clearance letter for Seg Pro V3 (RT-300):


    Acceptance Criteria and Reported Device Performance

    | Metric | Acceptance Threshold | Reported Device Performance |
    |---|---|---|
    | Mean DSC, large-volume structures | > 0.80 | 0.90 |
    | Mean DSC, medium-volume structures | > 0.65 | 0.86 |
    | Mean DSC, small-volume structures | > 0.50 | 0.73 |
    | Overall mean DSC | N/A (overall performance reported) | 0.85 |
    | Overall median 95% Hausdorff distance (HD95) | N/A (overall performance reported) | 2.62 mm |
    | Median HD95, large-volume structures | N/A (no specific threshold defined) | 3.01 mm |
    | Median HD95, medium-volume structures | N/A (no specific threshold defined) | 2.57 mm |
    | Median HD95, small-volume structures | N/A (no specific threshold defined) | 2.27 mm |
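The 95% Hausdorff distance (HD95) reported here is the 95th percentile of nearest-neighbor surface distances, which suppresses single-voxel outliers relative to the plain (maximum) Hausdorff distance. A generic numpy-only sketch over contour point sets, assuming the common symmetric max-of-both-directions definition:

```python
import numpy as np

def hd95(points_a, points_b):
    """Symmetric 95th-percentile Hausdorff distance between two point sets
    (e.g. surface voxels of a predicted and a ground-truth contour, in mm)."""
    a = np.asarray(points_a, dtype=float)
    b = np.asarray(points_b, dtype=float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances
    d_a_to_b = d.min(axis=1)  # nearest b-point for each a-point
    d_b_to_a = d.min(axis=0)  # nearest a-point for each b-point
    return max(np.percentile(d_a_to_b, 95), np.percentile(d_b_to_a, 95))

a = [[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]]
b = [[0.0, 1.0], [1.0, 1.0], [2.0, 1.0]]  # same contour shifted by 1 mm
print(hd95(a, b))  # 1.0
```

Production implementations use KD-trees or distance transforms instead of the dense pairwise matrix, which is quadratic in the number of surface points.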

    Study Details Proving Device Meets Acceptance Criteria

    2. Sample size used for the test set and the data provenance:

    • Sample Size: 175 cases.
    • Data Provenance: Consecutively collected from the Cancer Imaging Archive (TCIA) datasets. The data was acquired independently from product development training and internal testing. Race and ethnic distribution within the study data patient population was unavailable.
    • Geographic Origin (inferred): TCIA is primarily a US-based resource, so the data most likely originates from the United States, possibly with international contributions.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Number of Experts: Three.
    • Qualifications of Experts: Board-certified radiation oncologists.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

    • Adjudication Method: "Each OAR contour used as ground truth (GT) was independently generated by three board-certified radiation oncologists." This implies a consensus or agreement among all three experts was used to define the ground truth, effectively a 3-way consensus. The document does not explicitly state an adjudication method like 2+1, but the independent generation by three experts suggests a high-quality, agreed-upon ground truth.
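The letter does not state how the three independently generated expert contours were fused into a single ground truth; one common approach (assumed here, not confirmed by the document) is per-voxel majority voting:

```python
import numpy as np

def majority_vote(masks):
    """Fuse several binary expert masks into one consensus mask:
    a voxel is foreground when at least half of the raters marked it."""
    stack = np.stack([np.asarray(m, dtype=np.uint8) for m in masks])
    return (2 * stack.sum(axis=0) >= len(masks)).astype(np.uint8)

experts = [[1, 1, 0, 0],
           [1, 0, 1, 0],
           [1, 1, 0, 0]]
print(majority_vote(experts))  # [1 1 0 0]
```

More elaborate label-fusion schemes (e.g. STAPLE) weight raters by estimated reliability instead of counting votes equally.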

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance:

    • MRMC Study: No. The study primarily evaluated the standalone performance of the AI algorithm. The clinical validation mentions that Seg Pro V3 "operates as intended within a clinical workflow and supports its intended use as an adjunct tool," but it does not present data from an MRMC study comparing human reader performance with and without AI assistance.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:

    • Standalone Performance: Yes. "a standalone performance evaluation was conducted to assess the Organ-at-Risk (OAR) contouring capabilities of Seg Pro V3. The observed results indicated that Seg Pro V3 by itself, in the absence of any interaction with a clinician, can contour developed OARs with satisfactory results." The reported DSC and HD metrics are from this standalone evaluation.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • Ground Truth Type: Expert consensus. The ground truth (GT) for each OAR contour was "independently generated by three board-certified radiation oncologists."

    8. The sample size for the training set:

    • The document explicitly states that the 175 cases used for the standalone performance evaluation were "acquired independently from product development training and internal testing." However, the document does not specify the sample size of the training set used to develop the deep learning models.

    9. How the ground truth for the training set was established:

    • The document does not specify how the ground truth for the training set was established. It only describes the ground truth establishment for the test set.

    K Number
    K251351


    Device Name
    AccuContour 4.0
    Date Cleared
    2026-01-23

    (268 days)

    Product Code
    Regulation Number
    892.2050
    Age Range
    21 - 100
    Reference & Predicate Devices
    Predicate For
    N/A
    Why did this record match?
    510(k) Summary Text (Full-text Search):

    361008 China

    Re: K251351
    Trade/Device Name: AccuContour 4.0
    Regulation Number: 21 CFR 892.2050
    Variants compared: AccuContour, AccuContour-Lite
    Product Code: QKB
    Class: II

    Intended Use

    It is used by the radiation oncology department to segment CT images, to generate needed information for treatment planning, treatment evaluation and treatment adaptation.

    Device Description

    The proposed device, AccuContour 4.0 Family, is a standalone software with the following variants: AccuContour and AccuContour-Lite. The functionality of AccuContour-Lite is a subset of that of AccuContour.

    AccuContour:
    It is used by oncology department to register multi-modality images and segment (non-contrast) CT images, to generate needed information for treatment planning, treatment evaluation and treatment adaptation.

    The product has three image processing functions:

    1. Deep learning contouring: it can automatically contour organs-at-risk in the head and neck, thorax, abdomen, and pelvis (for both male and female patients);
    2. Automatic registration: rigid and deformable registration; and
    3. Manual contouring.

    It also has the following general functions:

    • Receive, add/edit/delete, transmit, input/export, medical images and DICOM data;
    • Patient management;
    • Review tool of processed images;
    • Extension tool;
    • Plan evaluation and plan comparison;
    • Dose analysis.

    AccuContour-Lite:
    It is used by oncology department to segment (non-contrast) CT images, to generate needed information for treatment planning, treatment evaluation and treatment adaptation.

    The product has one image processing function:
    Deep learning contouring: it can automatically contour organs-at-risk in the head and neck, thorax, abdomen, and pelvis (for both male and female patients).

    It also has the following general functions:

    • Receive, add/edit/delete, transmit, input/export, medical images and DICOM data;
    • Patient management;
    • Review tool of processed images.
    AI/ML Overview

    Here's an analysis of the acceptance criteria and study details for the AccuContour 4.0, extracted and organized from the provided FDA 510(k) clearance letter.


    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria are derived from the "Pass Criteria" columns in Tables 1, 2, 3, and 4, which specify minimum DSC and maximum HD95 values. The reported device performance is represented by the "Lower Bound 95% CI" for both DSC and HD95, and the "Average Rating" for clinical applicability.
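The "Lower Bound 95% CI" figures can be illustrated with a normal-approximation confidence interval on the mean of per-case scores. Whether the submission used this approximation or an exact/bootstrap interval is not stated, so the sketch below (with made-up per-case DSC values) is only a generic illustration:

```python
import numpy as np

def ci95_lower_bound(scores, z=1.96):
    """Lower bound of the normal-approximation 95% CI of the mean score."""
    s = np.asarray(scores, dtype=float)
    return s.mean() - z * s.std(ddof=1) / np.sqrt(len(s))

# Hypothetical per-case DSC values for one structure:
per_case_dsc = [0.91, 0.88, 0.93, 0.90, 0.89]
print(round(ci95_lower_bound(per_case_dsc), 3))  # 0.885
```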

    Table A: Performance for Synthetic CT (sCT) Contouring Function (Derived from MR Images)

    | Organ & Structure | Size | DSC Pass Criteria | HD95 Pass Criteria (mm) | Reported DSC (Lower Bound 95% CI) | Reported HD95 (Lower Bound 95% CI, mm) | Average Rating (1-5) | Meet Criteria? (DSC) | Meet Criteria? (HD95) |
    |---|---|---|---|---|---|---|---|---|
    | TemporalLobe_L | Medium | 0.65 | N/A | 0.886 | 4.319 | 4.5 | Yes | N/A |
    | TemporalLobe_R | Medium | 0.65 | N/A | 0.878 | 4.382 | 4.6 | Yes | N/A |
    | Brain | Large | 0.8 | N/A | 0.986 | 1.877 | 4.7 | Yes | N/A |
    | BrainStem | Medium | 0.65 | N/A | 0.843 | 4.999 | 4.5 | Yes | N/A |
    | SpinalCord | Medium | 0.65 | N/A | 0.867 | 3.030 | 4.8 | Yes | N/A |
    | OpticChiasm | Small | 0.5 | N/A | 0.804 | 4.771 | 4.1 | Yes | N/A |
    | OpticNerve_L | Small | 0.5 | N/A | 0.822 | 2.235 | 4.1 | Yes | N/A |
    | OpticNerve_R | Small | 0.5 | N/A | 0.794 | 2.422 | 4.2 | Yes | N/A |
    | InnerEar_L | Small | 0.5 | N/A | 0.843 | 2.164 | 4.2 | Yes | N/A |
    | InnerEar_R | Small | 0.5 | N/A | 0.806 | 2.102 | 4.4 | Yes | N/A |
    | MiddleEar_L | Small | 0.5 | N/A | 0.824 | 3.580 | 4.5 | Yes | N/A |
    | MiddleEar_R | Small | 0.5 | N/A | 0.792 | 3.700 | 4.4 | Yes | N/A |
    | Eye_L | Small | 0.5 | N/A | 0.906 | 1.659 | 4.8 | Yes | N/A |
    | Eye_R | Small | 0.5 | N/A | 0.897 | 1.584 | 4.9 | Yes | N/A |
    | Lens_L | Small | 0.5 | N/A | 0.836 | 3.368 | 4.5 | Yes | N/A |
    | Lens_R | Small | 0.5 | N/A | 0.841 | 3.379 | 4.2 | Yes | N/A |
    | Pituitary | Small | 0.5 | N/A | 0.801 | 2.267 | 4.4 | Yes | N/A |
    | Mandible | Small | 0.5 | N/A | 0.913 | 1.844 | 4.3 | Yes | N/A |
    | TMJ_L | Small | 0.5 | N/A | 0.830 | 2.819 | 4.4 | Yes | N/A |
    | TMJ_R | Small | 0.5 | N/A | 0.817 | 2.722 | 4.5 | Yes | N/A |
    | OralCavity | Medium | 0.65 | N/A | 0.916 | 3.677 | 4.7 | Yes | N/A |
    | Larynx | Medium | 0.65 | N/A | 0.795 | 2.196 | 4.4 | Yes | N/A |
    | Trachea | Medium | 0.65 | N/A | 0.870 | 2.452 | 4.5 | Yes | N/A |
    | Esophagus | Medium | 0.65 | N/A | 0.800 | 2.680 | 4.7 | Yes | N/A |
    | Parotid_L | Medium | 0.65 | N/A | 0.851 | 2.386 | 4.6 | Yes | N/A |
    | Parotid_R | Medium | 0.65 | N/A | 0.868 | 2.328 | 4.6 | Yes | N/A |
    | Submandibular_L | Medium | 0.65 | N/A | 0.833 | 4.920 | 4.5 | Yes | N/A |
    | Submandibular_R | Medium | 0.65 | N/A | 0.783 | 2.348 | 4.3 | Yes | N/A |
    | Thyroid | Medium | 0.65 | N/A | 0.803 | 1.911 | 4.8 | Yes | N/A |
    | BrachialPlexus_L | Medium | 0.65 | N/A | 0.828 | 5.347 | 4.4 | Yes | N/A |
    | BrachialPlexus_R | Medium | 0.65 | N/A | 0.800 | 5.062 | 4.3 | Yes | N/A |
    | Lung_L | Large | 0.8 | N/A | 0.968 | 1.635 | 4.5 | Yes | N/A |
    | Lung_R | Large | 0.8 | N/A | 0.976 | 1.516 | 4.7 | Yes | N/A |
    | Heart | Large | 0.8 | N/A | 0.959 | 2.496 | 4.5 | Yes | N/A |
    | Liver | Large | 0.8 | N/A | 0.941 | 2.439 | 4.0 | Yes | N/A |
    | Kidney_L | Large | 0.8 | N/A | 0.892 | 2.748 | 4.7 | Yes | N/A |
    | Kidney_R | Large | 0.8 | N/A | 0.895 | 2.797 | 4.5 | Yes | N/A |
    | Stomach | Large | 0.8 | N/A | 0.782 | 4.754 | 4.1 | No* | N/A |
    | Pancreas | Medium | 0.65 | N/A | 0.827 | 6.271 | 4.0 | Yes | N/A |
    | Duodenum | Medium | 0.65 | N/A | 0.815 | 6.447 | 4.1 | Yes | N/A |
    | Rectum | Medium | 0.65 | N/A | 0.796 | 2.047 | 3.9 | Yes | N/A |
    | BowelBag | Large | 0.8 | N/A | 0.808 | 7.380 | 4.0 | Yes | N/A |
    | Bladder | Large | 0.8 | N/A | 0.943 | 2.082 | 4.5 | Yes | N/A |
    | Marrow | Large | 0.8 | N/A | 0.889 | 1.842 | 4.6 | Yes | N/A |
    | FemurHead_L | Medium | 0.65 | N/A | 0.950 | 2.261 | 4.5 | Yes | N/A |
    | FemurHead_R | Medium | 0.65 | N/A | 0.941 | 2.466 | 4.6 | Yes | N/A |

    *Note: For Stomach, the reported DSC (0.782) is below the pass criteria (0.8). However, the document states, "The results indicate that the auto-segmentation performance of the AccuContour system for sCT images derived from both CBCT and MR modalities meets the requirements for geometric accuracy." This suggests there might be an overall or combined assessment, or other factors led to acceptance despite this single instance. The average clinical rating is 4.1, which is above the threshold of 3.
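    The submission does not state how the lower bound of the 95% CI was computed. One common choice for a per-structure mean metric is the two-sided t-based confidence interval for the mean, sketched here purely as an illustration:

    ```python
    import numpy as np
    from scipy import stats

    def ci_lower_bound(samples, confidence=0.95) -> float:
        """Lower bound of the two-sided t-based CI for the mean of `samples`.

        Illustrative assumption only; the 510(k) letter does not specify
        the CI method actually used.
        """
        samples = np.asarray(samples, dtype=float)
        n = samples.size
        mean = samples.mean()
        sem = samples.std(ddof=1) / np.sqrt(n)          # standard error of the mean
        t = stats.t.ppf(1 - (1 - confidence) / 2, df=n - 1)
        return float(mean - t * sem)
    ```

    With this convention, a structure "passes" when the lower bound of its per-case DSC distribution exceeds the size-dependent threshold (0.5, 0.65, or 0.8).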

    Table B: Performance for Synthetic CT (sCT) Contouring Function (Derived from CBCT Images)

    | Organ & Structure | Size | DSC Pass Criteria | HD95 Pass Criteria (mm) | Reported DSC (Lower Bound 95% CI) | Reported HD95 (Lower Bound 95% CI, mm) | Average Rating (1-5) | Meet Criteria? (DSC) | Meet Criteria? (HD95) |
    |---|---|---|---|---|---|---|---|---|
    | TemporalLobe_L | Medium | 0.65 | N/A | 0.854 | 3.451 | 4.8 | Yes | N/A |
    | TemporalLobe_R | Medium | 0.65 | N/A | 0.859 | 3.258 | 4.6 | Yes | N/A |
    | Brain | Large | 0.8 | N/A | 0.986 | 1.804 | 4.7 | Yes | N/A |
    | BrainStem | Medium | 0.65 | N/A | 0.903 | 4.678 | 4.5 | Yes | N/A |
    | SpinalCord | Medium | 0.65 | N/A | 0.869 | 2.088 | 4.8 | Yes | N/A |
    | OpticChiasm | Small | 0.5 | N/A | 0.795 | 5.252 | 4.4 | Yes | N/A |
    | OpticNerve_L | Small | 0.5 | N/A | 0.815 | 2.373 | 4.2 | Yes | N/A |
    | OpticNerve_R | Small | 0.5 | N/A | 0.816 | 2.210 | 4.1 | Yes | N/A |
    | InnerEar_L | Small | 0.5 | N/A | 0.800 | 2.144 | 4.5 | Yes | N/A |
    | InnerEar_R | Small | 0.5 | N/A | 0.794 | 2.171 | 4.2 | Yes | N/A |
    | MiddleEar_L | Small | 0.5 | N/A | 0.800 | 3.301 | 4.5 | Yes | N/A |
    | MiddleEar_R | Small | 0.5 | N/A | 0.797 | 3.888 | 4.5 | Yes | N/A |
    | Eye_L | Small | 0.5 | N/A | 0.944 | 1.553 | 4.8 | Yes | N/A |
    | Eye_R | Small | 0.5 | N/A | 0.941 | 1.678 | 4.9 | Yes | N/A |
    | Lens_L | Small | 0.5 | N/A | 0.820 | 3.532 | 4.5 | Yes | N/A |
    | Lens_R | Small | 0.5 | N/A | 0.821 | 3.370 | 4.7 | Yes | N/A |
    | Pituitary | Small | 0.5 | N/A | 0.802 | 2.496 | 4.4 | Yes | N/A |
    | Mandible | Small | 0.5 | N/A | 0.870 | 2.227 | 4.3 | Yes | N/A |
    | TMJ_L | Small | 0.5 | N/A | 0.774 | 2.775 | 4.3 | Yes | N/A |
    | TMJ_R | Small | 0.5 | N/A | 0.800 | 2.791 | 4.5 | Yes | N/A |
    | OralCavity | Medium | 0.65 | N/A | 0.885 | 3.794 | 4.8 | Yes | N/A |
    | Larynx | Medium | 0.65 | N/A | 0.793 | 2.827 | 4.8 | Yes | N/A |
    | Trachea | Medium | 0.65 | N/A | 0.873 | 2.545 | 4.5 | Yes | N/A |
    | Esophagus | Medium | 0.65 | N/A | 0.800 | 2.811 | 4.5 | Yes | N/A |
    | Parotid_L | Medium | 0.65 | N/A | 0.891 | 2.415 | 4.6 | Yes | N/A |
    | Parotid_R | Medium | 0.65 | N/A | 0.894 | 2.525 | 4.6 | Yes | N/A |
    | Submandibular_L | Medium | 0.65 | N/A | 0.745 | 5.026 | 4.8 | Yes | N/A |
    | Submandibular_R | Medium | 0.65 | N/A | 0.797 | 2.192 | 4.7 | Yes | N/A |
    | Thyroid | Medium | 0.65 | N/A | 0.823 | 2.182 | 4.8 | Yes | N/A |
    | BrachialPlexus_L | Medium | 0.65 | N/A | 0.805 | 3.922 | 4.4 | Yes | N/A |
    | BrachialPlexus_R | Medium | 0.65 | N/A | 0.823 | 3.529 | 4.2 | Yes | N/A |
    | Lung_L | Large | 0.8 | N/A | 0.947 | 1.587 | 4.5 | Yes | N/A |
    | Lung_R | Large | 0.8 | N/A | 0.971 | 1.635 | 4.3 | Yes | N/A |
    | Heart | Large | 0.8 | N/A | 0.896 | 1.823 | 4.5 | Yes | N/A |
    | Liver | Large | 0.8 | N/A | 0.914 | 2.595 | 4.6 | Yes | N/A |
    | Kidney_L | Large | 0.8 | N/A | 0.922 | 2.645 | 4.7 | Yes | N/A |
    | Kidney_R | Large | 0.8 | N/A | 0.906 | 2.611 | 4.5 | Yes | N/A |
    | Stomach | Large | 0.8 | N/A | 0.858 | 4.681 | 4.2 | Yes | N/A |
    | Pancreas | Medium | 0.65 | N/A | 0.822 | 5.548 | 4.4 | Yes | N/A |
    | Duodenum | Medium | 0.65 | N/A | 0.818 | 5.252 | 4.1 | Yes | N/A |
    | Rectum | Medium | 0.65 | N/A | 0.797 | 4.253 | 4.3 | Yes | N/A |
    | BowelBag | Large | 0.8 | N/A | 0.850 | 5.028 | 4.0 | Yes | N/A |
    | Bladder | Large | 0.8 | N/A | 0.926 | 3.322 | 4.7 | Yes | N/A |
    | Marrow | Large | 0.8 | N/A | 0.837 | 2.148 | 4.7 | Yes | N/A |
    | FemurHead_L | Medium | 0.65 | N/A | 0.893 | 1.639 | 4.8 | Yes | N/A |
    | FemurHead_R | Medium | 0.65 | N/A | 0.927 | 1.807 | 4.9 | Yes | N/A |

    Table C: Performance for 4DCT Registration Function (Rigid Registration)

    | Organ & Structure | Size | DSC Pass Criteria | Reported DSC (Lower Bound 95% CI) | Average Rating (1-5) | Meet Criteria? |
    |---|---|---|---|---|---|
    | Trachea | Medium | 0.65 | 0.888 | 4.5 | Yes |
    | Esophagus | Medium | 0.65 | 0.836 | 4.5 | Yes |
    | Lung_L | Large | 0.8 | 0.932 | 4.7 | Yes |
    | Lung_R | Large | 0.8 | 0.929 | 4.8 | Yes |
    | Lung_All | Large | 0.8 | 0.930 | 4.8 | Yes |
    | Heart | Large | 0.8 | 0.917 | 4.6 | Yes |
    | SpinalCord | Medium | 0.65 | 0.943 | 4.6 | Yes |
    | Liver | Large | 0.8 | 0.888 | 4.6 | Yes |
    | Stomach | Large | 0.8 | 0.791 | 4.5 | No* |
    | A_Aorta | Large | 0.8 | 0.917 | 4.4 | Yes |
    | Spleen | Large | 0.8 | 0.786 | 4.5 | No* |
    | Body | Large | 0.8 | 0.995 | 4.9 | Yes |

    *Note: For Stomach (0.791) and Spleen (0.786), the reported DSC is below the pass criteria (0.8). However, the document states, "According to the results, the accuracy of 4DCT image registration images meets the requirements and all structure models demonstrating that only minor edits would be required in order to make the structure models acceptable for clinical use." The average clinical rating for both is 4.5, above the threshold of 3.

    Table D: Performance for 4DCT Registration Function (Deformable Registration)

    | Organ & Structure | Size | DSC Pass Criteria | Reported DSC (Lower Bound 95% CI) | Average Rating (1-5) | Meet Criteria? |
    |---|---|---|---|---|---|
    | Trachea | Medium | 0.65 | 0.940 | 4.7 | Yes |
    | Esophagus | Medium | 0.65 | 0.866 | 4.6 | Yes |
    | Lung_L | Large | 0.8 | 0.966 | 4.7 | Yes |
    | Lung_R | Large | 0.8 | 0.949 | 4.5 | Yes |
    | Lung_All | Large | 0.8 | 0.954 | 4.8 | Yes |
    | Heart | Large | 0.8 | 0.931 | 4.6 | Yes |
    | SpinalCord | Medium | 0.65 | 0.920 | 4.6 | Yes |
    | Liver | Large | 0.8 | 0.936 | 4.5 | Yes |
    | Stomach | Large | 0.8 | 0.889 | 4.5 | Yes |
    | A_Aorta | Large | 0.8 | 0.947 | 4.6 | Yes |
    | Spleen | Large | 0.8 | 0.913 | 4.8 | Yes |
    | Body | Large | 0.8 | 0.997 | 4.9 | Yes |

    2. Sample Size Used for the Test Set and Data Provenance

    • Synthetic CT (sCT) Contouring Function:

      • Sample Size: 247 synthetic CT images (116 generated from MR, 131 generated from CBCT).
      • Data Provenance:
        • Demographic Distribution: 57% male, 43% female. Age distribution: 13% (21-40), 44.1% (41-60), 36.8% (61-80), 6.1% (81-100). Race: 78% White, 12% Black or African American, 10% Others.
        • Imaging Equipment: MR images from GE (21.6%), Philips (56.9%), Siemens (21.6%). CBCT images from Varian (58.8%), Elekta (41.2%).
        • Retrospective/Prospective: Not explicitly stated, but the description of demographic and equipment distribution from a "sample" indicates retrospective data collection from existing patient records.
        • Country of Origin: The racial distribution explicitly mentions "U.S. clinical radiotherapy practice," suggesting the data is primarily from the United States.
    • 4DCT Registration Function:

      • Sample Size: 30 4DCT image sets.
      • Data Provenance:
        • Imaging Equipment: Siemens (90.0%), Philips (10.0%) scanners.
        • Demographic Distribution: 17 males (56.7%), 13 females (43.3%). Age: 33-82 years, with majority in 51-65 (40.0%) and 66-80 (43.3%) year brackets.
        • Image Characteristics: Uniform 3mm slice thickness (100%).
        • Sourcing Location: Most images (90.0%) from Drexel Town Square Health Center/Community Memorial Hospital, remainder from Froedtert Hospital.
        • Retrospective/Prospective: Not explicitly stated, but implies retrospective data from patient archives of the mentioned hospitals.
        • Country of Origin: Based on the hospital names (Drexel Town Square Health Center, Community Memorial Hospital, Froedtert Hospital), the data is from the United States.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    • Number of Experts: Not explicitly stated. The text mentions "clinical experts evaluate the clinical applicability" and "RTStruct contoured by the professional physician as the gold standard." This implies at least one, and likely multiple, qualified medical professionals.
    • Qualifications of Experts: The experts are described as "clinical experts" and "professional physician(s)." Their specific qualifications (e.g., "radiologist with 10 years of experience") are not provided. They are implied to be clinically qualified radiotherapy personnel.

    4. Adjudication Method for the Test Set

    • Adjudication Method: Not explicitly stated. The ground truth for segmentation is stated to be "RTStruct contoured by the professional physician". For clinical applicability, "clinical experts evaluate the clinical applicability" and assign a 1-5 scale score. This suggests a single expert (or group consensus without specific adjudication rules like 2+1) established the ground truth segmentation, and separate clinical experts evaluated the results. There is no mention of a formal adjudication process for disagreements in ground truth labeling if multiple experts were involved in its creation.

    5. Multi Reader Multi Case (MRMC) Comparative Effectiveness Study

    • Was an MRMC study done? No.
    • Effect Size of Human Improvement (if applicable): Not applicable, as no MRMC study comparing human readers with and without AI assistance was reported. The testing focused solely on the algorithm's performance against expert-generated ground truth and expert evaluation of the algorithm's output.

    6. Standalone Performance

    • Was a standalone performance study done? Yes. The entire report details the "Performance Test Report on Synthetic CT (sCT) Contouring Function" and "Performance Test Report on 4DCT Registration Function," measuring the algorithm's performance (DSC, HD95) against gold standard contours and qualitative evaluation by clinical experts. This reflects the algorithm's performance independent of human interaction during the contouring process.

    7. Type of Ground Truth Used

    • Ground Truth: For the synthetic CT contouring and 4DCT registration functions, the ground truth was "RTStruct contoured by the professional physician" (i.e., expert consensus or expert-generated contours).

    8. Sample Size for the Training Set

    • Training Set Sample Size: Not provided in the document.

    9. How the Ground Truth for the Training Set was Established

    • Training Set Ground Truth Establishment: Not provided in the document. The document only details the ground truth used for the validation/test set.

    K Number
    K253735

    Device Name
    AV Vascular
    Date Cleared
    2026-01-22

    (59 days)

    Product Code
    Regulation Number
    892.2050
    Age Range
    21 - 120
    Reference & Predicate Devices
    Predicate For
    N/A
    Why did this record match?
    510k Summary Text (Full-text Search) :

    Netherlands

    Re: K253735
    Trade/Device Name: AV Vascular
    Regulation Number: 21 CFR 892.2050
    January 22, 2026

    Re: K253735
    Trade/Device Name: AV Vascular
    Regulation Number: 21 CFR 892.2050
    Primary Product Code | QIH |
    | Secondary Product Code | LLZ |
    | Classification | 21 CFR 892.2050
    management and processing system | Identical to primary predicate device |
    | Regulation Number | 892.2050
    | 892.2050 | 892.1750 | 892.1750892.2050 | Identical to primary predicate device |
    | **Indications

    AI/MLSaMDIVD (In Vitro Diagnostic)TherapeuticPediatricDiagnosticis PCCP AuthorizedThirdpartyExpeditedreview
    Intended Use

    AV Vascular is indicated to assist users in the visualization, assessment and quantification of vascular anatomy on CTA and/or MRA datasets, in order to assess patients with suspected or diagnosed vascular pathology and to assist with pre-procedural planning of endovascular interventions.

    Device Description

    AV Vascular is a post-processing software application intended for visualization, assessment, and quantification of vessels in computed tomography angiography (CTA) and magnetic resonance angiography (MRA) data with a unified workflow for both modalities.

    AV Vascular includes the following functions:

    • Advanced visualization: the application provides all relevant views and interactions for CTA and MRA image review: 2D slices, MIP, MPR, curved MPR (cMPR), stretched MPR (sMPR), path-aligned views (cross-sectional and longitudinal MPRs), 3D volume rendering (VR).

    • Vessel segmentation: automatic bone removal and vessel segmentation for head/neck and body CTA data, automatic vessel centerline, lumen and outer wall extraction and labeling for the main branches of the vascular anatomy in head/neck and body CTA data, semi-automatic and manual creation of vessel centerline and lumen for CTA and MRA data, interactive two-point vessel centerline extraction and single-point centerline extension.

    • Vessel inspection: enable inspection of an entire vessel using the cMPR or sMPR views as well as inspection of a vessel locally using vessel-aligned views (cross-sectional and longitudinal MPRs) by selecting a position along a vessel of interest.

    • Measurements: ability to create and save measurements of vessel and lumen inner and outer diameters and area, as well as vessel length and angle measurements.

    • Measurements and tools that specifically support pre-procedural planning: manual and automatic ring marker placement for specific anatomical locations, length measurements of the longest and shortest curve along the aortic lumen contour, angle measurements of aortic branches in clock position style, saving viewing angles in C-arm notation, and configurable templates.

    • Saving and export: saving and export of batch series and customizable reports.

    AI/ML Overview

    This summarization is based on the provided 510(k) clearance letter for Philips Medical Systems' AV Vascular device.

    Acceptance Criteria and Device Performance for Aorto-iliac Outer Wall Segmentation

    | Metric | Acceptance Criteria | Reported Device Performance (Mean with 98.75% Confidence Intervals) |
    |---|---|---|
    | 3D Dice Similarity Coefficient (DSC) | > 0.9 | 0.96 (0.96, 0.97) |
    | 2D Dice Similarity Coefficient (DSC) | > 0.9 | 0.96 (0.95, 0.96) |
    | Mean Surface Distance (MSD) | < 1.0 mm | 0.57 mm (0.485, 0.68) |
    | Hausdorff Distance (HD) | < 3.0 mm | 1.68 mm (1.23, 2.08) |
    | ∆Dmin (difference in minimum diameter) | > 95% of cases with \|∆Dmin\| < 5 mm | 98.8% (98.3-99.2%) |
    | ∆Dmax (difference in maximum diameter) | > 95% of cases with \|∆Dmax\| < 5 mm | 98.5% (97.9-98.9%) |

    The reported device performance for all primary and secondary metrics meets the predefined acceptance criteria.
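    The ∆Dmin/∆Dmax criteria are tolerance-rate checks: the fraction of diameter measurements whose absolute error stays under 5 mm must exceed 95%. A minimal sketch of such a check (illustrative only, not Philips' implementation; names are hypothetical) is:

    ```python
    import numpy as np

    def pct_within_tolerance(pred_diams, ref_diams, tol_mm=5.0) -> float:
        """Percent of measurements whose absolute diameter error is below tol_mm."""
        diff = np.abs(np.asarray(pred_diams, float) - np.asarray(ref_diams, float))
        return 100.0 * float(np.mean(diff < tol_mm))
    ```

    The acceptance criterion would then be `pct_within_tolerance(...) > 95.0` for both the minimum- and maximum-diameter series.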

    Study Details for Aorto-iliac Outer Wall Segmentation Validation

    1. Sample Size used for the Test Set and Data Provenance:

      • Sample Size: 80 patients
      • Data Provenance: Retrospectively collected from 7 clinical sites in the US, 3 European hospitals, and one hospital in Asia.
      • Independence from Training Data: All performance testing datasets were acquired from clinical sites distinct from those which provided the algorithm training data. The algorithm developers had no access to the testing data, ensuring complete independence.
      • Patient Characteristics: At least 80% of patients had thoracic and/or abdominal aortic diseases and/or iliac artery diseases (e.g., thoracic/abdominal aortic aneurysm, ectasia, dissection, and stenosis). At least 20% had been treated with stents.
      • Demographics:
        • Geographics: North America: 58 (72.5%), Europe: 3 (3.75%), Asia: 19 (23.75%)
        • Sex: Male: 59 (73.75%), Female: 21 (26.25%)
        • Age (years): 21-50: 2 (2.50%), 51-70: 31 (38.75%), >71: 45 (56.25%), Not available: 2 (2.5%)
    2. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:

      • Number of Experts: Three
      • Qualifications: US-board certified radiologists.
    3. Adjudication Method for the Test Set:

      • The three US-board certified radiologists independently performed manual contouring of the outer wall along the aorta and iliac arteries on cross-sectional planes for each CT angiographic image.
      • After quality control, these three aortic and iliac arterial outer wall contours were averaged to serve as the reference standard contour. This can be considered a form of consensus/averaging after independent readings.
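    The letter says the three independent contours were "averaged" but does not describe the averaging method. One common realization for binary masks is a per-voxel majority vote, sketched here as an assumption only:

    ```python
    import numpy as np

    def consensus_mask(masks) -> np.ndarray:
        """Reference mask from independent reader masks by per-voxel majority vote.

        With three readers, a voxel is foreground in the consensus when at
        least two readers included it. This is one plausible interpretation
        of "averaged"; the submission does not specify the actual method.
        """
        stack = np.stack([np.asarray(m, dtype=bool) for m in masks])
        return stack.mean(axis=0) >= 0.5
    ```

    Alternatives such as averaging signed distance maps or STAPLE would also fit the word "averaged"; the choice mainly affects behavior at boundary voxels where readers disagree.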
    4. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

      • The provided document does not indicate that a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was done to measure human reader improvement with AI assistance. The study focused on the standalone performance of the AI algorithm compared to an expert-derived ground truth.
    5. Standalone (Algorithm Only Without Human-in-the-Loop Performance):

      • Yes, the performance data provided specifically describes the standalone performance of the AI-based algorithm for aorto-iliac outer wall segmentation. The algorithm's output was compared directly against the reference standard without human intervention in the segmentation process.
    6. Type of Ground Truth Used:

      • Expert Consensus/Averaging: The ground truth was established by averaging the independent manual contouring performed by three US-board certified radiologists.
    7. Sample Size for the Training Set:

      • The document states that the testing data were independent of the training data and that developers had no access to the testing data. However, the exact sample size for the training set is not specified in the provided text.
    8. How the Ground Truth for the Training Set Was Established:

      • The document implies that training data were used, but it does not describe how the ground truth for the training set was established. It only ensures that the testing data did not come from the same clinical sites as the training data and that algorithm developers had no access to the testing data.

    K Number
    K253057

    Date Cleared
    2026-01-22

    (122 days)

    Product Code
    Regulation Number
    892.2050
    Age Range
    All
    Reference & Predicate Devices
    Predicate For
    N/A
    Why did this record match?
    510k Summary Text (Full-text Search) :

    Re: K253057
    Trade/Device Name: AI-Rad Companion Brain MR
    Regulation Number: 21 CFR 892.2050
    Image Management and Processing System
    Classification Panel: Radiology
    CFR Section: 21 CFR §892.2050
    Image Management and Processing System
    Classification Panel: Radiology
    CFR Section: 21 CFR §892.2050
    Image Management and Processing System
    Classification Panel: Radiology
    CFR Section: 21 CFR §892.2050

    AI/MLSaMDIVD (In Vitro Diagnostic)TherapeuticPediatricDiagnosticis PCCP AuthorizedThirdpartyExpeditedreview
    Intended Use

    AI-Rad Companion Brain MR is a post-processing image analysis software that assists clinicians in viewing, analyzing, and evaluating MR brain images.

    AI-Rad Companion Brain MR provides the following functionalities:
    • Automated segmentation and quantitative analysis of individual brain structures and white matter hyperintensities
    • Quantitative comparison of each brain structure with normative data from a healthy population
    • Presentation of results for reporting that includes all numerical values as well as visualization of these results

    Device Description

    AI-Rad Companion Brain MR runs two distinct and independent algorithms, one for Brain Morphometry analysis and one for White Matter Hyperintensities (WMH) segmentation. Overall, it comprises four main algorithmic features:

    • Brain Morphometry
    • Brain Morphometry follow-up
    • White Matter Hyperintensities (WMH)
    • White Matter Hyperintensities (WMH) follow-up

    The Brain Morphometry feature has been available since the first version of the device (VA2x), segmentation of White Matter Hyperintensities was added in VA4x, and the follow-up analysis for both has been available since VA5x. The brain morphometry and brain morphometry follow-up features have not been modified and remain identical to the previous VA5x mainline version.

    AI-Rad Companion Brain MR VA60 is an enhancement to the predicate, AI-Rad Companion Brain MR VA50 (K232305). Just as in the predicate, the brain morphometry feature of AI-Rad Companion Brain MR addresses the automatic quantification and visual assessment of the volumetric properties of various brain structures based on T1 MPRAGE datasets. From a predefined list of brain structures (e.g. Hippocampus, Caudate, Left Frontal Gray Matter, etc.) volumetric properties are calculated as absolute and normalized volumes with respect to the total intracranial volume. The normalized values are compared against age-matched mean and standard deviations obtained from a population of healthy reference subjects. The deviation from this reference population can be visualized as 3D overlay map or out-of-range flag next to the quantitative values.

    Additionally, identical to the predicate, the white matter hyperintensities feature addresses the automatic quantification and visual assessment of white matter hyperintensities on the basis of T1 MPRAGE and T2 weighted FLAIR datasets. The detected WMH can be visualized as a 3D overlay map and the quantification in count and volume as per 4 brain regions in the report.

    AI/ML Overview

    Here's a structured overview of the acceptance criteria and study details for the AI-Rad Companion Brain MR, based on the provided FDA 510(k) clearance letter:

    Acceptance Criteria and Reported Device Performance

    WMH Feature (segmentation accuracy):

    • Pearson correlation coefficient between device WMH volumes and ground truth annotation: 0.96
    • Intraclass correlation coefficient between device WMH volumes and ground truth annotation: 0.94
    • Dice score: mean 0.60, median 0.62, STD 0.14, 95% CI [0.57, 0.63]
    • F1-score: 0.67
    • ASSD: mean 0.05, median 0.00, STD 0.15, 95% CI [0.02, 0.08]

    WMH Follow-up Feature (new or enlarged WMH segmentation accuracy):

    • Pearson correlation coefficient between new or enlarged WMH volumes and ground truth annotation: 0.76
    • Average Dice score: 0.59; average F1-score: 0.71
    • Dice score by vendor: Siemens mean 0.64 (median 0.67, STD 0.15, 95% CI [0.60, 0.69]); GE mean 0.56 (median 0.60, STD 0.14, 95% CI [0.51, 0.61]); Philips mean 0.55 (median 0.59, STD 0.16, 95% CI [0.50, 0.61])
    • ASSD by vendor: Siemens mean 0.02 (median 0.00, STD 0.06, 95% CI [0.00, 0.04]); GE mean 0.09 (median 0.01, STD 0.23, 95% CI [0.03, 0.19]); Philips mean 0.04 (median 0.00, STD 0.11, 95% CI [0.00, 0.08])
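    The volume-agreement figures above rest on the Pearson correlation between per-subject device and ground-truth WMH volumes. As an illustrative sketch only (not Siemens' pipeline):

    ```python
    import numpy as np

    def pearson_r(x, y) -> float:
        """Pearson correlation coefficient between two paired volume series."""
        x = np.asarray(x, dtype=float)
        y = np.asarray(y, dtype=float)
        xc, yc = x - x.mean(), y - y.mean()
        return float((xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum()))
    ```

    Note that Pearson r captures only linear association; the intraclass correlation coefficient reported alongside it additionally penalizes systematic volume bias, which is why both are quoted.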

    Study Details

    1. Sample Size Used for the Test Set and Data Provenance:

      • White Matter Hyperintensities (WMH) Feature: 100 subjects (Multiple Sclerosis patients (MS), Alzheimer's patients (AD), cognitive impaired (CI), and healthy controls (HC)).
      • White Matter Hyperintensities (WMH) Follow-up Feature: 165 subjects (Multiple Sclerosis patients (MS) and Alzheimer's patients (AD)).
      • Data Provenance: Data acquired from Siemens, GE, and Philips scanners. Testing data had balanced distribution with respect to gender and age of the patient according to target patient population, and field strength (1.5T and 3T). This indicates a retrospective, multi-vendor, multi-national (implied by vendor diversity) dataset.
    2. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications:

      • Number of Experts: Three radiologists.
      • Qualifications: Not explicitly stated beyond "radiologists." It is not specified if they are board-certified, or their years of experience.
    3. Adjudication Method for the Test Set:

      • For each dataset, three sets of ground truth annotations were created manually.
      • Each set was annotated by a disjoint group consisting of an annotator, a reviewer, and a clinical expert.
      • The clinical expert was randomly assigned per case to minimize annotation bias.
      • The clinical expert reviewed and corrected the initial annotation of the changed WMH areas according to a specified annotation protocol. Significant corrections led to re-communication with the annotator and re-review.
      • This suggests a 3+1 Adjudication process, where three initial annotations are reviewed by a clinical expert.
    4. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study Was Done:

      • No, an MRMC comparative effectiveness study comparing human readers with and without AI assistance was not done. The study focuses on the standalone performance of the AI algorithm against expert ground truth.
    5. If a Standalone (i.e. algorithm only without human-in-the loop performance) Was Done:

      • Yes, a standalone performance study was done. The "Accuracy was validated by comparing the results of the device to manual annotated ground truth from three radiologists." This evaluates the algorithm's performance directly.
    6. The Type of Ground Truth Used:

      • Expert Consensus / Manual Annotation: The ground truth for both WMH and WMH follow-up features was established through "manual annotated ground truth from three radiologists" and involved a "standard annotation process" with annotators, reviewers, and clinical experts.
    7. The Sample Size for the Training Set:

      • The document states that the "training data used for the fine tuning the hyper parameters of WMH follow-up algorithm is independent of the data used to test the white matter hyperintensity algorithm follow up algorithm." However, the specific sample size for the training set is not provided in the given text.
    8. How the Ground Truth for the Training Set Was Established:

      • The document implies that the WMH follow-up algorithm "does not include any machine learning/ deep learning component," suggesting a rule-based or conventional image processing algorithm. Therefore, "training" might refer to parameter tuning rather than machine learning model training.
      • For the "fine-tuning the hyper parameters of WMH follow-up algorithm," the ground truth establishment method for this training data is not explicitly detailed in the provided text. It only states that this data was "independent of the data used to test" the algorithm.

    K Number
    K252634

    Date Cleared
    2026-01-16

    (149 days)

    Product Code
    Regulation Number
    892.2050
    Age Range
    18 - 120
    Reference & Predicate Devices
    Predicate For
    N/A
    Why did this record match?
    510k Summary Text (Full-text Search) :

    India

    Re: K252634
    Trade/Device Name: Imagine® Enterprise Suite
    Regulation Number: 21 CFR 892.2050
    Automated radiological image processing software
    Class: Class II
    Regulation Number: 21 CFR 892.2050
    Regulation Name: Medical image management and processing system
    Regulation Number: 21 CFR 892.2050
    Regulation Name: Medical image management and processing system
    Regulation Number: 21 CFR 892.2050

    AI/MLSaMDIVD (In Vitro Diagnostic)TherapeuticPediatricDiagnosticis PCCP AuthorizedThirdpartyExpeditedreview
    Intended Use

    Imagine® Enterprise Suite (IES) is a medical diagnostic device that receives, stores, and shares the medical images from and to DICOM-compliant entities such as imaging modalities (such as X-ray Angiograms (XA), Echocardiograms (US), MRI, CT, CR, DR, IVUS, OCT, PET and SPECT), external PACS, and other diagnostic workstations. It is used in the display and quantification of medical images, after image acquisition from modalities, for post-procedure clinical decision support. It constitutes a PACS for the communication and storage of medical images and provides a worklist of stored medical images that can be used to open patient studies in one of its image viewers. It is intended to display images and related information that are interpreted by trained professionals to render findings and/or diagnosis, but it does not directly generate any diagnosis or potential findings. Not intended for primary diagnosis of mammographic images. Not intended for intra-procedural or real-time use. Not intended for diagnostic use on mobile devices.

    Device Description

    The Imagine® Enterprise Suite (IES) has, as its backbone, the IES PACS – a DICOM stack for the communication and storage of medical images. It is based on its predecessor, the HCP DICOM Net® PACS (K023467). The IES is made up of the following modules:

    IES_EntViewer: This viewer module can be launched from the IES PACS Worklist and is intended primarily for the review and manipulation of angiographic X-ray images. It also supports the review of images from other modalities in single or combination views, thereby serving as a general-purpose multi-modality viewer.

    IES_EchoViewer: This viewer module can be launched from the IES Worklist and is intended for specialized viewing, manipulation, and measurements of Echocardiography images.

    IES_RadViewer: This viewer module can be launched from the IES Worklist and is intended for specialized viewing, manipulation, and measurements of Radiological images. It also supports the fusion of Radiological images (such as MRI and CT) with Nuclear Medicine images (such as PET and SPECT).

    IES_ZFPViewer: This viewer is intended for non-diagnostic review of medical images over a web browser. It supports an independent worklist and a viewing component that requires no installation for the end user. It works within an intranet or over the internet via user-provided VPN or static IP.

    AngioQuant: This module can be launched from the IES_EntViewer to perform automatic quantification of coronary arteries. It takes as input the cardiac angiogram studies stored on the IES PACS. It is intended for display and quantification of X-ray angiographic images after image acquisition in the cath lab, for post-procedure clinical decision support within the cath lab workflow. It is not intended for intra-procedural or real-time use. The Imagine® Enterprise Suite (IES) integrates ML only for the segmentation of coronary vessels from X-ray angiographic images and uses deep learning methodology for that image analysis.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the Imagine® Enterprise Suite, specifically focusing on the AngioQuant module's machine learning component, as described in the provided 510(k) summary:

    1. Table of Acceptance Criteria and Reported Device Performance

    The 510(k) summary provides a narrative description of the performance evaluation rather than a direct table of acceptance criteria with corresponding performance metrics for every criterion. However, it explicitly states that the performance of the IES_AngioQuant module's machine learning-based coronary vessel segmentation function was evaluated using several metrics and compared against an FDA-cleared predicate device.

    | Acceptance Criterion (Inferred from Study Design) | Reported Device Performance (IES_AngioQuant ML component) |
    | --- | --- |
    | Quantitative performance metrics for coronary vessel segmentation | Evaluated using the metrics listed below |
    | Jaccard Index (Intersection over Union) | Value not explicitly stated, but was among the comprehensive set of metrics used for evaluation |
    | Dice Score | Value not explicitly stated, but was among the comprehensive set of metrics used for evaluation |
    | Precision | Value not explicitly stated, but was among the comprehensive set of metrics used for evaluation |
    | Accuracy | Value not explicitly stated, but was among the comprehensive set of metrics used for evaluation |
    | Recall | Value not explicitly stated, but was among the comprehensive set of metrics used for evaluation |
    | Visual assessment of segmentation | Conducted in conjunction with the quantitative metrics |
    | Comparative performance to predicate device | Performance was compared against the FDA-cleared predicate device, CAAS Workstation (510(k) No. K232147) |
    | Reproducibility/consistency of ground truth (implicit, for verification) | Verification performed by two independent board-certified interventional cardiologists |

    Note: The specific numerical values for Jaccard Index, Dice Score, Precision, Accuracy, and Recall are not provided in the summary. The summary highlights that these metrics were used for evaluation.
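
    For readers unfamiliar with these overlap metrics, the sketch below shows how they are typically computed from a predicted and a reference binary segmentation mask. This is an illustrative example with toy arrays, not code or data from the submission:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Overlap metrics for two equally shaped binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()    # true positives
    fp = np.logical_and(pred, ~truth).sum()   # false positives
    fn = np.logical_and(~pred, truth).sum()   # false negatives
    tn = np.logical_and(~pred, ~truth).sum()  # true negatives
    return {
        "jaccard":   tp / (tp + fp + fn),          # Intersection over Union
        "dice":      2 * tp / (2 * tp + fp + fn),  # pixel-level F1
        "precision": tp / (tp + fp),
        "recall":    tp / (tp + fn),
        "accuracy":  (tp + tn) / (tp + tn + fp + fn),
    }

# Toy 1-D "masks" standing in for vessel segmentations
m = segmentation_metrics(np.array([1, 1, 0, 0, 1, 0]),
                         np.array([1, 0, 0, 0, 1, 1]))
```

    Note that Dice and Jaccard are monotonically related (Dice = 2J / (1 + J)), which is why submissions often report both without adding independent information.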

    2. Sample Size and Data Provenance

    • Test Set Sample Size: An independent external test set comprising 30 patient studies was used.
    • Data Provenance: The dataset consisted of anonymized angiographic studies sourced from multiple U.S. and international clinical sites. It was a retrospective dataset. The dataset included adult patients of mixed gender and represented a range of age, body habitus, and diverse race and ethnicity. Clinically relevant variability, including lesion severity, vessel anatomy, image quality, and imaging equipment vendors, was represented.

    3. Number of Experts and Qualifications for Ground Truth

    • Number of Experts: Two independent board-certified interventional cardiologists.
    • Qualifications of Experts: Each expert had more than 10 years of clinical experience.

    4. Adjudication Method for the Test Set

    The summary does not explicitly state a formal adjudication method like "2+1" or "3+1" for differences between the experts. However, it states that the ground truth (reference standard) was established using the FDA-cleared Medis QAngio XA (K182611) software, with verification performed by the two independent board-certified interventional cardiologists. This implies that the experts reviewed and confirmed the ground truth generated by the predicate software, rather than independently generating it and then adjudicating differences.

    5. MRMC Comparative Effectiveness Study

    An MRMC comparative effectiveness study was not explicitly described in the summary. The performance comparison was primarily an algorithm-only comparison against a predicate device (CAAS Workstation) for the ML component. The summary does not mention how much human readers improve with or without AI assistance.

    6. Standalone (Algorithm Only) Performance

    Yes, a standalone (algorithm only without human-in-the-loop performance) study was done for the IES_AngioQuant module's machine learning-based coronary vessel segmentation function. Its performance was evaluated using quantitative metrics and visual assessment, and then compared against the FDA-cleared predicate device (CAAS Workstation).

    7. Type of Ground Truth Used

    The ground truth was established using an FDA-cleared software (Medis QAngio XA, K182611), with its output verified by expert consensus of two independent board-certified interventional cardiologists.

    8. Sample Size for the Training Set

    A total of 762 anonymized angiographic studies were used for training, validation, and internal testing sets combined. The summary does not provide an exact breakdown of how many studies were specifically in the training set versus the validation and internal testing sets.

    9. How the Ground Truth for the Training Set Was Established

    The summary states that the ground truth ("truthing") for the dataset (which includes the training, validation, and internal testing sets) was established using the FDA-cleared Medis QAngio XA (K182611) software, with verification performed by two independent board-certified interventional cardiologists, each with more than 10 years of clinical experience. Implicitly, this same method was used for establishing ground truth for the training set.


    K Number
    K253639

    Device Name
    View
    Manufacturer
    Date Cleared
    2026-01-08

    (50 days)

    Product Code
    Regulation Number
    892.2050
    Age Range
    All
    Reference & Predicate Devices
    Predicate For
    N/A

    Intended Use

    View is a software application that displays, processes, and analyzes medical image data and associated clinical reports to aid in diagnosis for healthcare professionals. It streamlines standard and advanced medical imaging analysis by providing a complete suite of measurement tools intended to generate relevant findings automatically collected for export and save purposes.

    Typical users of this system are authorized healthcare professionals.

    Mammography images may only be interpreted using a monitor compliant with requirements of local regulations and must meet other technical specifications reviewed and accepted by the local regulatory agencies. Lossy compressed mammographic images and digitized film screen images should not be reviewed for primary image interpretations with use of the View.

    Device Description

    View is a cloud-native software application designed to support healthcare professionals in the display, processing, and analysis of medical image data. It enhances diagnostic workflows by integrating intelligent tools, streamlined accessibility, and advanced visualization capabilities, including specialized support for breast imaging.

    View brings together 2D imaging, basic 3D visualization, and advanced image analysis in a single, intuitive interface. This simplifies information access, improves workflow efficiency, and reduces the need for multiple applications.

    Key features include:

    • Smart Reading Protocol (SRP), which uses machine learning for creating and applying hanging protocols (HPs).
    • AI workflow supporting both DICOM Secondary Capture Objects and DICOM Structured Reports for displaying AI findings and enabling rejection/modification of those findings.
    • Display of 2D, 3D, and historical comparison exams in customizable layouts.
    • Smooth transitions between 2D and 3D views, either manually or as part of hanging protocols.
    • Advantage Workstation integration for deeper analysis through dedicated 3D applications.
    • A full suite of measurement, annotation, and segmentation tools for DICOM images.
    • Capture of all measurements and annotations in a centralized findings panel.
    • Enhanced access to DICOM images stored on a cloud server.
    • Easy integration with external systems using FHIRcast.
    • Improved user experience through native MIP/MPR/Smart Segmentation/Volume Rendering.
    • Seamless cloud access to breast images, with dedicated tools for mammography images.

    AI/ML Overview

    The provided FDA 510(k) clearance letter and summary for the "View" device (K253639) offer limited details regarding specific acceptance criteria and the studies conducted to prove device performance. The information is high-level and generalized.

    Based on the available text, here's the breakdown of what can and cannot be extracted:

    1. A table of acceptance criteria and the reported device performance

    The document does not explicitly state quantitative acceptance criteria (e.g., minimum accuracy, sensitivity, specificity) or specific reported performance metrics for the device. It focuses on functional equivalence and verification/validation testing without presenting performance data.

    2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

    The document states: "The Smart Reading Protocol function, which uses machine learning for creating and applying hanging protocols, was tested on various imaging modality datasets representative of the clinical scenarios where View is intended to be used."

    • Sample Size: "various imaging modality datasets" – The exact sample size (number of images, cases, or patients) is not specified.
    • Data Provenance: "representative of the clinical scenarios" – The country of origin and whether the data was retrospective or prospective are not specified.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    The document does not provide any information on the number or qualifications of experts used to establish ground truth for the test set.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    The document does not specify any adjudication method used for the test set.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance

    The document mentions a "comparison was performed between the predicate device (Universal Viewer) and the subject device (View) and showed that the devices are equivalent" for the Smart Reading Protocol function. However, this is a comparison between devices, not an MRMC study designed to assess human reader improvement with AI assistance. Therefore:

    • A specific MRMC comparative effectiveness study involving human readers with and without AI assistance is not described.
    • Consequently, an effect size for human reader improvement is not provided.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done

    The document implies that the Smart Reading Protocol (SRP) function, which uses machine learning, was tested. The statement "A comparison was performed between the predicate device (Universal Viewer) and the subject device (View) and showed that the devices are equivalent" regarding SRP suggests an evaluation of the algorithm's output. However, whether this testing was strictly standalone performance (algorithm only) versus integrated system performance is not explicitly detailed. Given the context of a 510(k) summary for a "Medical Image Management And Processing System," the testing likely covers the integrated system.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    The document does not specify the type of ground truth used for its testing.

    8. The sample size for the training set

    The document does not provide any information on the sample size used for the training set.

    9. How the ground truth for the training set was established

    The document does not provide any information on how the ground truth for the training set was established.


    Summary of available information regarding acceptance criteria and study data:

    Unfortunately, the provided FDA 510(k) summary is very high-level and primarily focuses on justifying substantial equivalence based on technological characteristics and general verification/validation processes. It does not contain the detailed performance metrics, sample sizes, ground truth establishment methods, or reader study details typically found in more comprehensive clinical study reports. The approval is based on the device having "substantial equivalent technological characteristics" and being "as safe and as effective" as the predicate device.

    The only specific functional testing mentioned is for the "Smart Reading Protocol" using machine learning, which was compared to the predicate device to show equivalence. However, the details of this comparison (e.g., performance metrics, specific acceptance criteria for equivalence, or study design) are not included.


    K Number
    K253784

    Device Name
    3DICOM MD Cloud
    Date Cleared
    2026-01-08

    (43 days)

    Product Code
    Regulation Number
    892.2050
    Age Range
    All
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    3DICOM MD Cloud is intended for use as a diagnostic and analysis tool for multi-modality medical images and their associated reports and information, enabling qualified healthcare professionals from hospitals, imaging centres, radiologists, and reading practices to view patient images, documents, and related data. 3DICOM MD Cloud enables qualified users to manipulate medical images, create markups, and perform measurements using a range of tools.

    3DICOM MD Cloud is not intended for diagnostic use with mammography images. Usage for mammography is for reference and referral only. 3DICOM MD Cloud is not intended for diagnostic use on mobile devices.

    Device Description

    3DICOM MD Cloud is a software as a medical device that provides diagnostic viewing and analysis of multi-modality DICOM images in a secure, web-based, cloud/server-hosted environment. Authorized users access studies from site-provisioned cloud storage sources and perform 2D multi-planar reconstruction (MPR) viewing and 3D visualization, apply window/level and other standard adjustments, make 2D measurements and annotations, generate a DICOM structured report summarizing tracked measurements, and export snapshots and measurement tables. The device does not generate diagnoses or provide automated clinical interpretation.

    AI/ML Overview

    N/A


    K Number
    K253370

    Device Name
    LOGIQ Totus
    Date Cleared
    2026-01-08

    (100 days)

    Product Code
    Regulation Number
    892.1550
    Age Range
    0 - 999
    Reference & Predicate Devices
    N/A
    Predicate For
    N/A
    Intended Use

    LOGIQ Totus is intended for use by a qualified physician for ultrasound evaluation of Fetal/Obstetrics; Abdominal(including Renal, Gynecology/Pelvic), Pediatric; Small organ(Breast, Testes, Thyroid); Neonatal Cephalic; Adult Cephalic; Cardiac(Adult and Pediatric), Peripheral Vascular, Musculo-skeletal Conventional and Superficial; Urology(including Prostate); Transrectal; Transvaginal; Transesophageal and Intraoperative(Abdominal and Vascular).

    Modes of operation include: B, M, PW Doppler, CW Doppler, Color Doppler, Color M Doppler, Power Doppler, Harmonic Imaging, Coded Pulse, 3D/4D Imaging mode, Elastography, Shear Wave Elastography, Attenuation Imaging, and combined modes: B/M, B/Color, B/PWD, B/Color/PWD, B/Power/PWD.

    The LOGIQ Totus is intended to be used in a hospital or medical clinic.

    Device Description

    The LOGIQ Totus is a full-featured, Track 3 device, primarily intended as a general-purpose diagnostic ultrasound system. It consists of a mobile console approximately 490 mm wide (monitor width: 545 mm), 835 mm deep, and 1415 to 1815 mm high that provides digital acquisition, processing, and display capability. The user interface includes a computer keyboard, specialized controls, a 14-inch LCD touch screen, and a color 23.8-inch LCD & HDU image display.

    AI/ML Overview

    The provided FDA 510(k) clearance letter and summary for the LOGIQ Totus Ultrasound System (K253370) describes the acceptance criteria and the study for the Ultrasound Guided Fat Fraction for adult imaging (UGFF) software feature. This feature is being added to the LOGIQ Totus and is similar to a previously cleared Siemens UDFF feature.

    Here's a breakdown of the requested information based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document describes the performance of the UGFF feature by comparing it to MRI Proton Density Fat Fraction (MRI-PDFF) and, in a separate confirmatory study, to a predicate UDFF device. The "acceptance criteria" are implied by the reported strong correlations and limits of agreement with these reference standards.

    | Acceptance Criteria (Implied) | Primary Study, Japan (UGFF vs. MRI-PDFF) | Confirmatory Study, US/EU (UGFF vs. MRI-PDFF) | Confirmatory Study, EU (UGFF vs. UDFF) |
    | --- | --- | --- | --- |
    | Strong correlation with MRI-PDFF | Correlation coefficient: 0.87 | Correlation coefficient: 0.90 | N/A (compared to UDFF instead of MRI-PDFF) |
    | Acceptable agreement (Bland-Altman) with MRI-PDFF | Offset: -0.32%; LOA: -6.0% to 5.4%; 91.6% of patients within ±8.4% | Offset: -0.1%; LOA: -3.6% to 3.4%; 95.0% of patients within ±4.6% | N/A |
    | Strong correlation with predicate UDFF device | N/A | N/A | Correlation coefficient: 0.88 |
    | Acceptable agreement (Bland-Altman) with predicate UDFF device | N/A | N/A | Offset: -1.2%; LOA: -5.0% to 2.6%; all patients within ±4.7% |
    | No statistically significant effect of demographic confounders on measurements | Confirmed for BMI, SCD, and other demographic confounders on AC, BSC, and NSR | Not explicitly stated, but implied | Not explicitly stated, but implied |
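
    The offset and limits-of-agreement (LOA) figures in the table come from standard Bland-Altman analysis of paired measurements. As a minimal sketch of how such values are derived (the paired fat-fraction readings below are invented for illustration, not study data):

```python
import numpy as np

def bland_altman(a, b):
    """Mean difference (offset) and 95% limits of agreement between
    two paired measurement methods, plus Pearson correlation."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    offset = diff.mean()                         # systematic bias
    sd = diff.std(ddof=1)                        # sample SD of differences
    loa = (offset - 1.96 * sd, offset + 1.96 * sd)
    r = np.corrcoef(a, b)[0, 1]                  # correlation coefficient
    return offset, loa, r

# Hypothetical paired liver fat-fraction readings (%), e.g. UGFF vs. MRI-PDFF
ugff = [5.1, 8.3, 12.0, 20.5, 30.2]
pdff = [5.5, 8.0, 12.8, 21.0, 29.5]
offset, loa, r = bland_altman(ugff, pdff)
```

    A small offset with narrow limits of agreement indicates the two methods can be used interchangeably within that tolerance, which is the sense in which the table's figures support equivalence claims.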

    2. Sample Size Used for the Test Set and Data Provenance

    • Primary Study (UGFF vs. MRI-PDFF):

      • Sample Size: 582 participants
      • Data Provenance: External clinical study in Japan (population: Asian). Whether the study was retrospective or prospective is not specified, but the phrase "obtained from the liver of five hundred and eighty-two (582) participants" suggests a dedicated data-collection event rather than a purely retrospective analysis of existing medical records, as does its description as an "external clinical study."
    • First Confirmatory Study (UGFF vs. MRI-PDFF):

      • Sample Size: 15 US patients and 5 EU patients (total 20 patients)
      • Data Provenance: US and EU patients. Demographic information on the 5 EU patients was unavailable. This was conducted as a "confirmatory study."
    • Second Confirmatory Study (UGFF vs. UDFF):

      • Sample Size: 24 EU patients
      • Data Provenance: EU patients. This was conducted as a "confirmatory study."

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    The document does not specify the number of experts or their qualifications for establishing the ground truth.

    4. Adjudication Method for the Test Set

    The document does not specify an adjudication method. For the UGFF feature, the "ground truth" was objective measurements (MRI-PDFF or a predicate device's UDFF), which typically do not require adjudication by human experts in the same way an image diagnosis might.

    5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study Was Done

    No, an MRMC comparative effectiveness study was not done for the UGFF feature as described. The studies focused on comparing the device's output (UGFF index) to an objective reference standard (MRI-PDFF or another device's UDFF), not on how human readers' performance improved with or without AI assistance.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    Yes, a standalone performance evaluation was done. The UGFF index, based on acoustic property measurements, is compared directly to MRI-PDFF and UDFF. This indicates the algorithm's performance independent of human interpretation or intervention in the final measurement calculation. While a technologist operates the ultrasound system, the UGFF index calculation itself is an algorithmic output.

    7. The Type of Ground Truth Used

    The type of ground truth used is MRI Proton Density Fat Fraction (MRI-PDFF) measurements, which are quantitative and objective reference standards for liver fat quantification. Additionally, for one confirmatory study, the ground truth was the Ultrasound-Derived Fat Fraction (UDFF) from a Siemens Acuson S3000/S2000, functioning as a predicate device's output. These are akin to "outcomes data" or "established reference standard measurements."

    8. The Sample Size for the Training Set

    The document states: "During the migration of the AI software feature from LOGIQ E10s (K231989), the algorithm was not retrained and there were no changes to the algorithmic flow or the AI components performing the inferencing." This implies the training set was associated with the original clearance of the Auto Renal Measure Assistant on the LOGIQ E10s (K231989) but the sample size for the training set is not provided in this document.

    9. How the Ground Truth for the Training Set Was Established

    Similarly, since the algorithm was not retrained and the document pertains to the migration of an existing AI feature, the method for establishing the ground truth for the original training set is not provided in this document.

