Search Results

Found 2767 results

510(k) Data Aggregation

    K Number
    K252074
    Date Cleared
    2025-10-31

    (121 days)

    Product Code
    Regulation Number
    892.1550
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    The Diagnostic Ultrasound System Aplio i900 Model TUS-AI900, Aplio i800 Model TUS-AI800, and Aplio i700 Model TUS-AI700 are indicated for the visualization of structures and dynamic processes within the human body using ultrasound and to provide image information for diagnosis in the following clinical applications: fetal, abdominal, intra-operative (abdominal), pediatric, small organs (thyroid, breast and testicle), trans-vaginal, trans-rectal, neonatal cephalic, adult cephalic, cardiac (both adult and pediatric), peripheral vascular, transesophageal, musculo-skeletal (both conventional and superficial), laparoscopic and thoracic/pleural. This system provides high-quality ultrasound images in the following modes: B mode, M mode, Continuous Wave, Color Doppler, Pulsed Wave Doppler, Power Doppler and Combination Doppler, as well as Speckle-tracking, Tissue Harmonic Imaging, Combined Modes, Shear wave, Elastography, and Acoustic attenuation mapping. This system is suitable for use in hospital and clinical settings by physicians or appropriately trained healthcare professionals.

    In addition to the aforementioned indications for use, when the EUS transducers GF-UCT180 and BF-UC190F are connected, the Aplio i800 Model TUS-AI800/E3 provides image information for diagnosis of the upper gastrointestinal tract and surrounding organs, airways, tracheobronchial tree and esophagus.

    Device Description

    The Aplio i900 Model TUS-AI900, Aplio i800 Model TUS-AI800 and Aplio i700 Model TUS-AI700, V9.0 are mobile diagnostic ultrasound systems. These systems are Track 3 devices that employ a wide array of probes, including flat linear array, convex, and sector array, with frequency ranges from approximately 2 MHz to 33 MHz.

    AI/ML Overview

    This FDA 510(k) clearance letter details the substantial equivalence of the Aplio i900, Aplio i800, and Aplio i700 Software V9.0 Diagnostic Ultrasound System to its predicate device. The information provided specifically focuses on the validation of new and improved features, with particular attention to the 3rd Harmonic Imaging (3-HI), a new deep learning (DL) enabled filtering process.

    Acceptance Criteria and Device Performance for 3rd Harmonic Imaging (3-HI)

    • Clinical Improvement
      • Specific Criteria: Spatial resolution, contrast resolution, and artifact suppression must each demonstrate improvement relative to conventional 2nd harmonic imaging.
      • Reported Device Performance (3-HI): Scores for 3-HI were higher than the middle score of 3 (on a 5-point ordinal scale) for spatial resolution, contrast resolution, and artifact suppression, as rated by radiologists in a blinded observer study.
      • Study Details to Support Performance: Test set size: 30 patients. Data provenance: U.S. clinical site, previously acquired data (retrospective). Ground truth: clinical images with representative abdominal organs, anatomical structures, and focal pathologies. Experts: three (3) U.S. board-certified radiologists. Adjudication method: blinded observer study (comparison to images without 3-HI). MRMC study: yes; human readers (radiologists) compared images with and without 3-HI, and the effect size is indicated by scores for 3-HI being higher than the middle score of 3.
    • Phantom Study Objectives
      • Specific Criteria: Lateral resolution, axial resolution, slice resolution, contrast-to-noise ratio (CNR), reverberation artifact suppression, and frequency spectra must each demonstrate the capability to visualize abdominal images better than conventional 2nd harmonic imaging.
      • Reported Device Performance (3-HI): All prespecified performance criteria were achieved. The phantom studies demonstrated the capability of 3-HI to visualize abdominal images better than conventional 2nd harmonic imaging across all specified metrics.
      • Study Details to Support Performance: Test set size: not explicitly stated per metric, but "five abdominal phantoms with various physical properties" were used. Data provenance: phantom data. Ground truth: controlled phantom targets with varying depths, sizes, and contrasts. Experts: not applicable (objective measurements). Adjudication method: not applicable (objective measurements compared to prespecified criteria).

    Detailed Study Information

    1. Acceptance Criteria and Reported Device Performance

    (See table above)

    2. Sample Size Used for the Test Set and Data Provenance

    • 3rd Harmonic Imaging (3-HI) Clinical Evaluation:
      • Sample Size: 30 patients.
      • Data Provenance: Previously acquired data from a U.S. clinical site (retrospective). Patients were selected to ensure diverse demographic characteristics representative of the intended U.S. patient population, including a wide range of body mass indices (18.5-36.3 kg/m²), roughly equivalent numbers of males and females, and ages ranging from 23-89 years old.
    • 3rd Harmonic Imaging (3-HI) Phantom Study:
      • Sample Size: Five abdominal phantoms.
      • Data Provenance: Phantom data.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • 3rd Harmonic Imaging (3-HI) Clinical Evaluation:
      • Number of Experts: Three (3).
      • Qualifications: U.S. board-certified radiologists.

    4. Adjudication Method for the Test Set

    • 3rd Harmonic Imaging (3-HI) Clinical Evaluation:
      • Adjudication Method: Blinded observer study. The three radiologists compared images with 3-HI to images without 3-HI (predicate functionality) using a 5-point ordinal scale. The median score was then compared with the middle score of 3.
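
    The statistical test used to compare the median reader scores with the middle score of 3 is not named in the document. Purely as an illustration, a one-sided Wilcoxon signed-rank test against a hypothesized median of 3 is one common way such ordinal reader scores are analyzed; the scores and the choice of test in the sketch below are assumptions, not the sponsor's analysis.

```python
# Hypothetical sketch: testing whether blinded-reader scores on a 5-point ordinal
# scale exceed the middle score of 3. The scores are made up and the test choice
# (one-sided Wilcoxon signed-rank) is an assumption; the 510(k) summary does not
# specify the statistical method.
import numpy as np
from scipy.stats import wilcoxon

scores = np.array([4, 4, 3, 5, 4, 3, 4, 5, 4, 4])  # fabricated reader scores

# H0: median score = 3 vs. H1: median score > 3; ties with 3 are dropped by the
# default "wilcox" zero handling.
stat, p_value = wilcoxon(scores - 3, alternative="greater")
print(f"median score = {np.median(scores):.1f}, p-value = {p_value:.4f}")
```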

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Yes, an MRMC-like comparative effectiveness study was done for 3-HI's clinical evaluation.
    • Effect Size: The statistical analysis demonstrated that scores for 3-HI were higher than the middle score of 3 for spatial resolution, contrast resolution, and artifact suppression. This indicates that human readers (radiologists) rated images with 3-HI as improved compared to those without.

    6. Standalone (Algorithm Only Without Human-in-the-Loop Performance)

    • Yes, a standalone study was performed for 3-HI in the phantom study. The phantom studies objectively examined lateral and axial resolution, slice resolution, contrast-to-noise ratio (CNR), reverberation artifact suppression, and frequency spectra without human interpretation.
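
    Contrast-to-noise ratio (CNR) is listed among the objective phantom metrics but is not defined in the document. The sketch below shows one common CNR definition (target-to-background contrast divided by background noise) on synthetic ROI values; the actual metric definitions used in the 3-HI phantom study may differ.

```python
# Illustrative only: one common contrast-to-noise ratio (CNR) definition for a
# phantom target ROI versus a background ROI. Pixel values are synthetic; the
# exact CNR definition used in the 3-HI phantom study is not given in the document.
import numpy as np

rng = np.random.default_rng(0)
target_roi = rng.normal(loc=60.0, scale=8.0, size=500)       # synthetic target pixels
background_roi = rng.normal(loc=40.0, scale=8.0, size=500)   # synthetic background pixels

cnr = abs(target_roi.mean() - background_roi.mean()) / background_roi.std()
print(f"CNR = {cnr:.2f}")
```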

    7. The Type of Ground Truth Used

    • 3rd Harmonic Imaging (3-HI) Clinical Evaluation:
      • Type of Ground Truth: Expert consensus (from the three board-certified radiologists) on image quality metrics (spatial resolution, contrast resolution, artifact suppression) through a blinded comparison against predicate functionality. The initial selection of patient images included "representative focal pathologies" suggesting clinical relevance in the images themselves.
    • 3rd Harmonic Imaging (3-HI) Phantom Study:
      • Type of Ground Truth: Objective measurements against known physical properties and targets within the phantoms.

    8. The Sample Size for the Training Set (for 3-HI)

    • The document explicitly states that "the validation data set [30 patients] was entirely independent of the data set used to train the algorithm during its development." However, the actual sample size for the training set is not provided in the given text.

    9. How the Ground Truth for the Training Set Was Established (for 3-HI)

    • This information is not provided in the given text, beyond the statement that the algorithm was "locked upon completion of development" and had "no post-market, continuous learning capability."

    K Number
    K250337
    Device Name
    AiORTA - Plan
    Date Cleared
    2025-10-30

    (266 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    The AiORTA - Plan tool is an image analysis software tool for volumetric assessment. It provides volumetric visualization and measurements based on 3D reconstruction computed from cardiovascular CTA scans. The software device is intended to provide adjunct information to a licensed healthcare practitioner (HCP) in addition to clinical data and other inputs, as a measurement tool used in assessment of aortic aneurysm, pre-operative evaluation, planning and sizing for cardiovascular intervention and surgery, and for post-operative evaluation in patients 22 years old and older.

    The device is not intended to provide stand-alone diagnosis or suggest an immediate course of action in treatment or patient management.

    Device Description

    AiORTA - Plan is a cloud-based software tool used to make and review geometric measurements of cardiovascular structures, specifically abdominal aortic aneurysms. The software uses CT scan data as input to make measurements from 2D and 3D mesh based images. Software outputs are intended to be used as a measurement tool used in assessment of aortic aneurysm, pre-operative evaluation, planning and sizing for cardiovascular intervention and surgery, and for post-operative evaluation.

    The AiORTA - Plan software consists of two components, the Analysis Pipeline and Web Application.

    The Analysis Pipeline is the data processing engine that produces measurements of the abdominal aorta based on the input DICOM images. It consists of multiple modules that are operated by a trained Analyst to preprocess the DICOM images, compute geometric parameters (e.g., centerlines, diameters, lengths, volumes), and upload the results to the Web App for clinician review. The Analyst plays a role in ensuring the quality of the outputs. However, the end user (licensed healthcare practitioner) is ultimately responsible for the accuracy of the segmentations, the resulting measurements, and any clinical decisions based on these outputs.

    The workflow of the Analysis Pipeline can be described in the following steps:

    • Input: the Analysis Pipeline receives a CTA scan as input.
    • Segmentation: an AI-powered auto-masking algorithm performs segmentation of the aortic lumen, wall, and key anatomical landmarks, including the superior mesenteric, celiac, and renal arteries. A trained Analyst performs quality control of the segmentations, making any necessary revisions to ensure accurate outputs.
    • 3D conversion: the segmentations are converted into 3D mesh representations.
    • Measurement computation: from the 3D representations, the aortic centerline and geometric measurements, such as diameters, lengths, and volumes, are computed.
    • Follow-up study analysis: for patients with multiple studies, the system can detect and display changes in aortic geometry between studies.
    • Report generation: a report is generated containing key measurements and a 3D Anatomy Map providing multiple views of the abdominal aorta and its landmarks.
    • Web application integration: the outputs, including the segmented CT masks, 3D visualizations, and reports, are uploaded to the Web App for interactive review and analysis.

    The Web Application (Web App) is the front end and user facing component of the system. It is a cloud-based user interface offered to the qualified clinician to first upload de-identified cardiovascular CTA scans in DICOM format, along with relevant demographic and medical information about the patient and current study. The uploaded data is processed asynchronously by the Analysis Pipeline. Once processing is complete, the Web App then enables clinicians to interactively review and analyze the resulting outputs.

    Main features of the Web App include:

    • Segmentation review and correction: Clinicians can review the segmentations produced by the Analysis Pipeline by viewing the CT slices alongside the segmentation masks. Segmentations can be revised using tools such as a brush or pixel eraser, with adjustable brush size, to select or remove pixels as needed. When clinicians revise segmentations, they can request asynchronous re-analysis by the Analysis Pipeline, which generates updated measurements and a 3D Anatomy Map of the aorta based on the revised segmentations.
    • 3D visualization: The aorta and key anatomical landmarks can be examined in full rotational views using the 3D Anatomy Map.
    • Measurement tools: Clinicians can perform measurements directly on the 3D Anatomy Map of the abdominal aorta and have access to a variety of measurement tools (a minimal numerical sketch of the centerline-based measurements follows this feature list), including:
      • Centerline distance, which measures the distance (in mm) between two user-selected planes along the aortic centerline.
      • Diameter range, which measures the minimum and maximum diameters (in mm) within the region of interest between two user-selected planes along the aortic centerline.
      • Local diameter, which measures the diameter (in mm) at the user-selected plane along the aortic centerline.
      • Volume, which measures the volume (in mL) between two user-selected planes along the aortic centerline.
      • Calipers, which allow additional linear measurements (in mm) at user-selected points.
    • Screenshots: Clinicians can capture images of the 3D visualizations of the aorta or the segmentations displayed on the CT slices.
    • Longitudinal analysis: For patients with multiple studies, the Web App allows side-by-side review of studies. Clinicians have access to the same measurement and visualization tools available in single-study review, enabling comparison between studies.
    • Reporting: Clinicians can generate and download reports containing either the default key measurements computed by the Analysis Pipeline or custom measurements and screenshots captured during review.
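
    As referenced above, the centerline-based tools reduce to simple geometry on a discretized centerline. The sketch below illustrates, with synthetic inputs (a polyline centerline and per-point diameters), how a centerline distance and a diameter range between two user-selected planes could be computed; it is a hypothetical illustration, not the AiORTA - Plan implementation.

```python
# Illustrative geometry only: centerline distance and diameter range between two
# "planes" (point indices) on a discretized aortic centerline. Inputs are
# synthetic; this is not the AiORTA - Plan implementation.
import numpy as np

# Synthetic centerline: points in mm with a per-point lumen diameter in mm.
t = np.linspace(0.0, 1.0, 200)
centerline = np.column_stack([10 * np.sin(2 * t), 5 * t, 120 * t])   # shape (200, 3)
diameters = 20 + 8 * np.exp(-((t - 0.5) ** 2) / 0.02)                # aneurysmal bulge

def centerline_distance(points: np.ndarray, i: int, j: int) -> float:
    """Arc length (mm) along the centerline between point indices i and j."""
    i, j = sorted((i, j))
    segments = np.diff(points[i : j + 1], axis=0)
    return float(np.linalg.norm(segments, axis=1).sum())

def diameter_range(diams: np.ndarray, i: int, j: int) -> tuple[float, float]:
    """Minimum and maximum diameter (mm) in the region between indices i and j."""
    i, j = sorted((i, j))
    region = diams[i : j + 1]
    return float(region.min()), float(region.max())

start, stop = 40, 160  # stand-ins for two user-selected planes
print(f"centerline distance: {centerline_distance(centerline, start, stop):.1f} mm")
print(f"diameter range (min, max): {diameter_range(diameters, start, stop)}")
```
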
    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study details for the AiORTA - Plan device, based on the provided FDA 510(k) clearance letter:

    Acceptance Criteria and Reported Device Performance

    | Metric/Measurement Type | Acceptance Criteria | Reported Device Performance |
    | --- | --- | --- |
    | Auto-segmentation Masks (prior to analyst correction) | | |
    | Dice coefficient (Aortic wall) | ≥ 80% | 89% (Overall Mean) |
    | Dice coefficient (Aortic lumen) | ≥ 80% | 89% (Overall Mean) |
    | Landmark identification (Celiac artery proximal position) | Within 5 mm of ground truth | Mean distance 2.47 mm |
    | Landmark identification (Renal arteries distal position) | Within 5 mm of ground truth | Mean distance 3.51 mm |
    | Diameters and Lengths (after Analyst review and correction) | | |
    | Length (Mean absolute error) | ≤ 6.0 mm | |
    | Renal artery to aortic bifurcation length | N/A | 5.3 mm (Mean absolute error) |
    | Renal artery to left iliac bifurcation length | N/A | 7.0 mm (Mean absolute error) |
    | Renal artery to right iliac bifurcation length | N/A | 6.6 mm (Mean absolute error) |
    | Diameter (Mean absolute error) | ≤ 2.3 mm | |
    | Aortic wall max diameter | N/A | 2.0 mm (Mean absolute error) |
    | Aortic wall at renal artery diameter | N/A | 2.1 mm (Mean absolute error) |
    | Aortic wall at left iliac bifurcation diameter | N/A | 1.9 mm (Mean absolute error) |
    | Aortic wall at right iliac bifurcation diameter | N/A | 2.5 mm (Mean absolute error) |
    | Volumes (using analyst-revised segmentations) | | |
    | Volume (Mean absolute error) | ≤ 1.8 mL | |
    | Volume of the Wall | N/A | 0.00242 mL (Mean absolute error) |
    | Volume of the Lumen | N/A | 0.00257 mL (Mean absolute error) |
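
    The Dice coefficient and landmark-distance criteria above are standard segmentation metrics. A minimal sketch of how they can be computed from binary masks and 3D landmark coordinates is shown below; the arrays are synthetic and this is not the sponsor's verification code.

```python
# Illustrative only: Dice similarity coefficient between two binary masks and
# Euclidean distance between a predicted and a ground-truth landmark position.
# Arrays are synthetic; this is not the sponsor's verification code.
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2*|A & B| / (|A| + |B|) for boolean masks of equal shape."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return float(2.0 * intersection / denom) if denom else 1.0

def landmark_error_mm(pred_xyz, truth_xyz) -> float:
    """Euclidean distance (mm) between two 3D landmark positions."""
    return float(np.linalg.norm(np.asarray(pred_xyz) - np.asarray(truth_xyz)))

# Synthetic 3D masks (voxel spacing assumed to be 1 mm isotropic).
truth_mask = np.zeros((64, 64, 64), dtype=bool)
truth_mask[20:40, 20:40, 20:40] = True
pred_mask = np.zeros_like(truth_mask)
pred_mask[22:42, 20:40, 20:40] = True

print(f"Dice: {dice_coefficient(pred_mask, truth_mask):.3f}")
print(f"Landmark error: {landmark_error_mm([10.0, 5.0, 3.0], [11.5, 4.0, 2.0]):.2f} mm")
```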

    Explanation for Lengths and Diameters that did not meet initial criteria:
    The following measurements did not meet their initial acceptance criteria:

    • Length: renal to left iliac bifurcation (7.0mm vs ≤ 6.0mm)
    • Length: renal to right iliac bifurcation (6.6mm vs ≤ 6.0mm)
    • Diameter: wall right iliac (2.5mm vs ≤ 2.3mm)

    A Mean Pairwise Absolute Difference (MPAD) comparison was performed. The device-expert MPAD was smaller than the expert-expert MPAD in all three cases, indicating that the device's measurements were more consistent with experts than the experts were with each other.

    | Measurement | Expert-expert MPAD | Device-expert MPAD |
    | --- | --- | --- |
    | Length: renal to left iliac bifurcation | 7.1 mm | 6.9 mm |
    | Length: renal to right iliac bifurcation | 10.4 mm | 9.6 mm |
    | Diameter: wall right iliac | 2.7 mm | 2.5 mm |
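
    The MPAD comparison above contrasts inter-expert variability with device-to-expert agreement. A minimal sketch of how mean absolute error and the two MPAD quantities could be computed from per-case measurements is shown below; the values are synthetic, and the exact pairing/averaging convention used in the submission is not described.

```python
# Illustrative only: mean absolute error (MAE) and mean pairwise absolute
# difference (MPAD) for one length measurement across cases. All values are
# synthetic; the submission's exact pairing/averaging convention is not described.
from itertools import combinations
import numpy as np

rng = np.random.default_rng(1)
n_cases = 40
reference = rng.uniform(80, 140, n_cases)                 # unobserved "true" length (mm)
experts = reference + rng.normal(0, 5, (3, n_cases))      # three expert readings per case
device = reference + rng.normal(0, 4, n_cases)            # device measurement per case

# MAE of the device against the mean expert annotation (one plausible convention).
mae_device = np.mean(np.abs(device - experts.mean(axis=0)))

# Expert-expert MPAD: absolute differences over all expert pairs, averaged over cases.
expert_expert_mpad = np.mean([np.abs(a - b).mean() for a, b in combinations(experts, 2)])

# Device-expert MPAD: absolute differences between the device and each expert.
device_expert_mpad = np.mean([np.abs(device - e).mean() for e in experts])

print(f"MAE (device vs. mean expert): {mae_device:.1f} mm")
print(f"Expert-expert MPAD:           {expert_expert_mpad:.1f} mm")
print(f"Device-expert MPAD:           {device_expert_mpad:.1f} mm")
```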

    Study Details for Device Performance Evaluation:

    1. Sample size used for the test set and the data provenance:

      • Auto-segmentation masks and Landmark Identification: The document does not explicitly state the sample size for this specific test, but it mentions using "clinical data, including aortic aneurysm cases from both US and Canadian clinical centers."
      • Diameters and Lengths: The document does not explicitly state the sample size for this specific test, but it mentions using "clinical data, including aortic aneurysm cases from both US and Canadian clinical centers."
      • Volumes: 40 CT scans. The data provenance is "clinical data, including aortic aneurysm cases from both US and Canadian clinical centers." The studies were retrospective, as they involved existing clinical data.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • Auto-segmentation masks and Landmark Identification: 3 US-based board-certified Radiologists.
      • Diameters and Lengths: 3 US-based board-certified Radiologists.
      • Volumes: The ground truth for volumes was established using a reference device (Simpleware ScanIP Medical), not directly by human experts, although the input segmentations for both the device and the reference device were analyst-revised.
    3. Adjudication method for the test set:

      • Auto-segmentation masks and Landmark Identification: Ground truth was "annotations approved by 3 US-based board-certified Radiologists." This implies consensus or a primary reader with adjudication, but the exact method (e.g., 2+1, 3+1) is not specified.
      • Diameters and Lengths: Ground truth was "annotations from 3 US-based board-certified Radiologists." Similar to above, the specific consensus method is not detailed.
      • Volumes: Ground truth was established by a reference device, Simpleware ScanIP Medical.
    4. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance:

      • No MRMC comparative effectiveness study was explicitly mentioned in the provided text. The testing focused on the standalone performance of the AI-powered components and the consistency of the device's measurements with expert annotations, not on human reader improvement with AI assistance.
    5. If a standalone (i.e., algorithm only without human-in-the loop performance) was done:

      • Yes, a standalone performance evaluation of the auto-masking algorithm (prior to analyst correction) was performed for auto-segmentation masks and landmark identification. The results demonstrated the performance of the auto-masking algorithm "independently of human intervention."
      • However, for diameters and lengths, the measurements were "based on segmentations that underwent Analyst review and correction, ensuring that the verification reflects real-world use conditions." This suggests a semi-automatic, human-in-the-loop performance evaluation for these specific metrics.
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

      • Expert Consensus: Used for auto-segmentation masks, landmark identification, diameters, and lengths. The consensus involved 3 US-based board-certified Radiologists.
      • Reference Device: Used for volumes, comparing against results from Simpleware ScanIP Medical.
    7. The sample size for the training set:

      • The document does not explicitly state the sample size for the training set. It mentions "critical algorithms were verified by comparing their outputs to ground truth data to ensure accuracy and reliability. Algorithms were first verified using synthetic data...Subsequent verification was performed using clinical data, including aortic aneurysm cases from both US and Canadian clinical centers." This refers to verification data, not necessarily the training data size.
    8. How the ground truth for the training set was established:

      • The document does not provide details on how the ground truth for the training set was established. It only describes the ground truth for the verification/test sets. It can be inferred that similar expert review or other validated methods would have been used for training data, but this is not explicitly stated.

    K Number
    K251963
    Device Name
    LOGIQ E10s
    Date Cleared
    2025-10-29

    (125 days)

    Product Code
    Regulation Number
    892.1550
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    The LOGIQ E10s is intended for use by a qualified physician for ultrasound evaluation.

    Specific clinical applications and exam types include: Fetal / Obstetrics; Abdominal (including Renal, Gynecology / Pelvic); Pediatric; Small Organ (Breast, Testes, Thyroid); Neonatal Cephalic; Adult Cephalic; Cardiac (Adult and Pediatric); Peripheral Vascular; Musculo-skeletal Conventional and Superficial; Urology (including Prostate); Transrectal; Transvaginal; Transesophageal and Intraoperative (Abdominal and Vascular).

    Modes of operation include: B, M, PW Doppler, CW Doppler, Color Doppler, Color M Doppler, Power Doppler, Harmonic Imaging, Coded Pulse, 3D/4D Imaging mode, Elastography, Shear Wave Elastography, Attenuation Imaging and Combined modes: B/M, B/Color, B/PWD, B/Color/PWD, B/Power/PWD.

    The LOGIQ E10s is intended to be used in a hospital or medical clinic.

    Device Description

    The LOGIQ E10s is a full featured, Track 3, general purpose diagnostic ultrasound system which consists of a mobile console approximately 585 mm wide (keyboard), 991 mm deep and 1300 mm high that provides digital acquisition, processing and display capability. The user interface includes a computer keyboard, specialized controls, 12-inch high-resolution color touch screen and 23.8-inch High Contrast LED LCD monitor.

    AI/ML Overview

    The provided text describes three AI features: Auto Abdominal Color Assistant 2.0, Auto Aorta Measure Assistant, and Auto Common Bile Duct (CBD) Measure Assistant, along with a UGFF Clinical Study.

    Here's an analysis of the acceptance criteria and study details for each, where available:

    1. Table of Acceptance Criteria and Reported Device Performance

    For Auto Abdominal Color Assistant 2.0

    | Acceptance Criteria | Reported Device Performance | Meets Criteria? |
    | --- | --- | --- |
    | Overall model detection accuracy (sensitivity and specificity): ≥ 80% (0.80) | Accuracy: 94.8% | Yes |
    | Sensitivity (True Positive Rate): ≥ 80% (0.80) | Sensitivity: 0.91 | Yes |
    | Specificity (True Negative Rate): ≥ 80% (0.80) | Specificity: 0.98 | Yes |
    | DICE Similarity Coefficient (Segmentation Accuracy): ≥ 0.80 | DICE score: 0.82 | Yes |
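
    The detection metrics above (accuracy, sensitivity, specificity) follow directly from a confusion matrix of per-image detections against reader ground truth. A minimal sketch with made-up counts is shown below; it is not the sponsor's evaluation code.

```python
# Illustrative only: accuracy, sensitivity, and specificity from confusion-matrix
# counts of detections versus reader ground truth. Counts are made up.
tp, fp, tn, fn = 430, 9, 520, 42   # hypothetical image-level counts

sensitivity = tp / (tp + fn)        # true positive rate
specificity = tn / (tn + fp)        # true negative rate
accuracy = (tp + tn) / (tp + fp + tn + fn)

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, accuracy={accuracy:.3f}")
```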

    For Auto Aorta Measure Assistant

    Acceptance Criteria: No explicit numerical acceptance criteria were provided for keystrokes or measurement accuracy. The study aims to demonstrate an improvement in keystrokes and acceptable accuracy; the results below are the reported performance without specific acceptance targets (Meets Criteria: N/A).

    Reported Device Performance:

    • Long View Aorta: average keystrokes 4.132 (without AI) vs. 1.236 (with AI); average accuracy 87.2% with 95% CI of ±1.98%; average absolute error 0.253 cm with 95% CI of 0.049 cm; limits of agreement (-0.15, 0.60) with 95% CI of (-0.26, 0.71).
    • Short View AP Measurement: average accuracy 92.9% with 95% CI of ±2.02%; average absolute error 0.128 cm with 95% CI of 0.037 cm; limits of agreement (-0.21, 0.36) with 95% CI of (-0.29, 0.45).
    • Short View Trans Measurement: average accuracy 86.9% with 95% CI of ±6.25%; average absolute error 0.235 cm with 95% CI of 0.110 cm; limits of agreement (-0.86, 0.69) with 95% CI of (-1.06, 0.92).

    For Auto Common Bile Duct (CBD) Measure Assistant

    Acceptance Criteria: No explicit numerical acceptance criteria were provided for keystrokes or measurement accuracy. The study aims to demonstrate a reduction in keystrokes and acceptable accuracy; the results below are the reported performance without specific acceptance targets (Meets Criteria: N/A).

    Reported Device Performance:

    • Average reduction in keystrokes (manual vs. AI): 1.62 +/- 0.375 keystrokes.
    • Porta Hepatis measurement with segmentation scroll edit: average accuracy 80.56% with 95% CI of ±8.83%; average absolute error 0.91 mm with 95% CI of 0.45 mm; limits of agreement (-1.96, 3.25) with 95% CI of (-2.85, 4.14).
    • Porta Hepatis measurement accuracy without segmentation scroll edit: average accuracy 59.85% with 95% CI of ±17.86%; average absolute error 1.66 mm with 95% CI of 1.02 mm; limits of agreement (-4.75, 4.37) with 95% CI of (-6.17, 5.79).

    For UGFF Clinical Study

    | Acceptance Criteria (Implied by intent to demonstrate strong correlation) | Reported Device Performance | Meets Criteria? |
    | --- | --- | --- |
    | Strong correlation between UGFF values and MRI-PDFF (e.g., correlation coefficient ≥ 0.8) | Original study: correlation coefficient = 0.87. Confirmatory study (US/EU): correlation coefficient = 0.90. Confirmatory study (UGFF vs UDFF): correlation coefficient = 0.88. | Yes |
    | Acceptable limits of agreement with MRI-PDFF (e.g., small offset and LOA, with a high percentage of patients within the LOA) | Original study: offset = -0.32%, LOA = -6.0% to 5.4%, 91.6% of patients within LOA. Confirmatory study (US/EU): offset = -0.1%, LOA = -3.6% to 3.4%, 95.0% of patients within LOA. | Yes |
    | No statistically significant effect of BMI, SCD, and other demographic confounders on AC, BSC, and SNR measurements (implied) | The results of the clinical study indicate that BMI, SCD, and other demographic confounders do not have a statistically significant effect on measurements of the AC, BSC, and SNR. | Yes |
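
    The UGFF results are reported as a Pearson correlation with MRI-PDFF plus a Bland-Altman style offset and 95% limits of agreement. A minimal sketch of these computations on synthetic paired fat-fraction values is shown below; it is illustrative only, not the sponsor's analysis code.

```python
# Illustrative only: Pearson correlation, bias (offset), and 95% limits of
# agreement (Bland-Altman style) between paired fat-fraction estimates.
# Values are synthetic; this is not the sponsor's analysis code.
import numpy as np

rng = np.random.default_rng(2)
mri_pdff = rng.uniform(2, 30, 100)                 # synthetic MRI-PDFF values (%)
ugff = mri_pdff + rng.normal(-0.3, 2.0, 100)       # synthetic UGFF values (%)

r = np.corrcoef(ugff, mri_pdff)[0, 1]              # Pearson correlation coefficient
diff = ugff - mri_pdff
bias = diff.mean()                                  # offset
sd = diff.std(ddof=1)
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd
pct_within = 100 * np.mean((diff >= loa_low) & (diff <= loa_high))

print(f"r = {r:.2f}, offset = {bias:.2f}%, LOA = ({loa_low:.1f}%, {loa_high:.1f}%), "
      f"{pct_within:.1f}% of patients within LOA")
```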

    2. Sample size used for the test set and the data provenance

    Auto Abdominal Color Assistant 2.0:

    • Sample Size: 49 individual subjects (1186 annotation images)
    • Data Provenance: Retrospective, from the USA (100%).

    Auto Aorta Measure Assistant:

    • Sample Size:
      • Long View Aorta: 36 subjects
      • Short View Aorta: 35 subjects
    • Data Provenance: Retrospective, from Japan and USA.

    Auto Common Bile Duct (CBD) Measure Assistant:

    • Sample Size: 25 subjects
    • Data Provenance: Retrospective, from USA (40%) and Japan (60%).

    UGFF Clinical Study:

    • Sample Size:
      • Original study: 582 participants
      • Confirmatory study (US/EU): 15 US patients and 5 EU patients (total 20)
      • Confirmatory study (UGFF vs UDFF): 24 EU patients
    • Data Provenance: Implicitly retrospective and prospective (the clinical studies imply dedicated data collection).
      • Original Study: Japan (Asian population)
      • Confirmatory Study (US/EU): US and EU (demographic info unavailable for EU patients, US patients: BMI 21.0-37.5, SCD 13.9-26.9)
      • Confirmatory Study (UGFF vs UDFF): EU

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    Auto Abdominal Color Assistant 2.0:

    • Number of Experts: Not specified. The text mentions "Readers to ground truth the 'anatomy'".
    • Qualifications of Experts: Not specified.

    Auto Aorta Measure Assistant:

    • Number of Experts: Not specified. The text mentions "Readers to ground truth the AP measurement..." and an "Arbitrator to select most accurate measurement among all readers." This implies multiple readers and a single arbitrator.
    • Qualifications of Experts: Not specified.

    Auto Common Bile Duct (CBD) Measure Assistant:

    • Number of Experts: Not specified. The text mentions "Readers to ground truth the diameter..." and an "Arbitrator to select most accurate measurement among all readers." This implies multiple readers and a single arbitrator.
    • Qualifications of Experts: Not specified.

    UGFF Clinical Study:

    • Number of Experts: Not applicable, as ground truth was established by MRI-PDFF measurements, not expert consensus on images.

    4. Adjudication method for the test set

    Auto Abdominal Color Assistant 2.0:

    • Adjudication Method: Not explicitly described as a specific method (e.g., 2+1). The process mentions "Readers to ground truth" and then comparison to AI predictions, but no specific adjudication among multiple readers' initial ground truths.

    Auto Aorta Measure Assistant:

    • Adjudication Method: Implies an arbitrator-based method. "Arbitrator to select most accurate measurement among all readers." This suggests multiple readers provide measurements, and a single arbitrator makes the final ground truth selection.

    Auto Common Bile Duct (CBD) Measure Assistant:

    • Adjudication Method: Implies an arbitrator-based method. "Arbitrator to select most accurate measurement among all readers." Similar to the Aorta assistant.

    UGFF Clinical Study:

    • Adjudication Method: Not applicable. Ground truth was established by MRI-PDFF measurements.

    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    Auto Abdominal Color Assistant 2.0:

    • MRMC Study: Not explicitly stated as a comparative effectiveness study showing human improvement. The study focuses on the algorithm's performance against ground truth.
    • Effect Size (Human Improvement with AI): Not reported.

    Auto Aorta Measure Assistant:

    • MRMC Study: Yes, an implicit MRMC study comparing human performance with and without AI. Readers performed measurements with and without AI assistance.
    • Effect Size (Human Improvement with AI):
      • Long View Aorta (Keystrokes): Average keystrokes reduced from 4.132 (without AI) to 1.236 (with AI).
      • Short View Aorta (Keystrokes): Average keystrokes reduced from 7.05 (without AI) to 2.307 (with AI).
      • (No specific improvement in diagnostic accuracy for human readers with AI is stated, primarily focuses on efficiency via keystrokes).

    Auto Common Bile Duct (CBD) Measure Assistant:

    • MRMC Study: Yes, an implicit MRMC study comparing human performance with and without AI. Readers performed measurements with and without AI assistance.
    • Effect Size (Human Improvement with AI):
      • Porta Hepatis CBD (Keystrokes): Average reduction in keystrokes for measurements with AI vs. manually is 1.62 +/- 0.375.
      • (No specific improvement in diagnostic accuracy for human readers with AI is stated, primarily focuses on efficiency via keystrokes).

    UGFF Clinical Study:

    • MRMC Study: No, this was a standalone algorithm performance study compared to a reference standard (MRI-PDFF) and a predicate device (UDFF). It did not involve human readers using the AI tool.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    Auto Abdominal Color Assistant 2.0:

    • Standalone Performance: Yes. The reported accuracy, sensitivity, specificity, and DICE score are for the algorithm's performance.

    Auto Aorta Measure Assistant:

    • Standalone Performance: Yes, implicitly. The "AI baseline measurement" was compared for accuracy against the arbitrator-selected ground truth. While keystrokes involved human interaction to use the AI, the measurement accuracy is an algorithm output.

    Auto Common Bile Duct (CBD) Measure Assistant:

    • Standalone Performance: Yes, implicitly. The "AI baseline measurement" was compared for accuracy against the arbitrator-selected ground truth.

    UGFF Clinical Study:

    • Standalone Performance: Yes. The study directly assesses the correlation and agreement of the UGFF algorithm's output with MRI-PDFF and another ultrasound-derived fat fraction algorithm.

    7. The type of ground truth used

    Auto Abdominal Color Assistant 2.0:

    • Ground Truth Type: Expert consensus for anatomical visibility ("Readers to ground truth the 'anatomy' visible in static B-Mode image.")

    Auto Aorta Measure Assistant:

    • Ground Truth Type: Expert consensus from multiple readers, adjudicated by an arbitrator, for specific measurements ("Arbitrator to select most accurate measurement among all readers.")

    Auto Common Bile Duct (CBD) Measure Assistant:

    • Ground Truth Type: Expert consensus from multiple readers, adjudicated by an arbitrator, for specific measurements ("Arbitrator to select most accurate measurement among all readers.")

    UGFF Clinical Study:

    • Ground Truth Type: Outcomes data / Quantitative Reference Standard: MRI Proton Density Fat Fraction (MRI-PDFF %).

    8. The sample size for the training set

    Auto Abdominal Color Assistant 2.0:

    • Training Set Sample Size: Not specified beyond "The exams used for test/training validation purpose are separated from the ones used during training process".

    Auto Aorta Measure Assistant:

    • Training Set Sample Size: Not specified beyond "The exams used for regulatory validation purpose are separated from the ones used during model development process".

    Auto Common Bile Duct (CBD) Measure Assistant:

    • Training Set Sample Size: Not specified beyond "The exams used for regulatory validation purpose are separated from the ones used during model development process".

    UGFF Clinical Study:

    • Training Set Sample Size: Not specified. The study describes validation but not the training phase.

    9. How the ground truth for the training set was established

    Auto Abdominal Color Assistant 2.0:

    • Training Set Ground Truth: Not explicitly detailed, but implied to be similar to the test set ground truthing process: "Information extracted purely from Ultrasound B-mode images" and "Readers to ground truth the 'anatomy'".

    Auto Aorta Measure Assistant:

    • Training Set Ground Truth: Not explicitly detailed, but implied to be similar to the test set ground truthing process: "Information extracted purely from Ultrasound B-mode images" and "Readers to ground truth the AP measurement...".

    Auto Common Bile Duct (CBD) Measure Assistant:

    • Training Set Ground Truth: Not explicitly detailed, but implied to be similar to the test set ground truthing process: "Information extracted purely from Ultrasound B-mode images" and "Readers to ground truth the diameter...".

    UGFF Clinical Study:

    • Training Set Ground Truth: Not specified for the training set, but for the validation set, the ground truth was MRI-PDFF measurements.

    K Number
    K251985
    Device Name
    LOGIQ E10
    Date Cleared
    2025-10-29

    (124 days)

    Product Code
    Regulation Number
    892.1550
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    LOGIQ E10 is intended for use by a qualified physician for ultrasound evaluation of Fetal/Obstetrics; Abdominal (including Renal, Gynecology/Pelvic); Pediatric; Small Organ (Breast, Testes, Thyroid); Neonatal Cephalic; Adult Cephalic; Cardiac (Adult and Pediatric); Peripheral Vascular; Musculo-skeletal Conventional and Superficial; Urology (including Prostate); Transrectal; Transvaginal; Transesophageal and Intraoperative (Abdominal and Vascular).

    Modes of operation include: B, M, PW Doppler, CW Doppler, Color Doppler, Color M Doppler, Power Doppler, Harmonic Imaging, Coded Pulse, 3D/4D Imaging mode, Elastography, Shear Wave Elastography, Attenuation Imaging and combined modes: B/M, B/Color, B/PWD, B/Color/PWD, B/Power/PWD.

    The LOGIQ E10 is intended to be used in a hospital or medical clinic.

    Device Description

    The LOGIQ E10 is a full featured, Track 3, general purpose diagnostic ultrasound system which consists of a mobile console approximately 585 mm wide (keyboard), 991 mm deep and 1300 mm high that provides digital acquisition, processing and display capability. The user interface includes a computer keyboard, specialized controls, 12-inch high-resolution color touch screen and 23.8-inch High Contrast LED LCD monitor.

    AI/ML Overview

    Here's an analysis of the acceptance criteria and supporting studies for the LOGIQ E10 ultrasound system, derived from the provided FDA 510(k) Clearance Letter:


    1. Table of Acceptance Criteria and Reported Device Performance

    | Feature/Metric | Acceptance Criteria | Reported Device Performance |
    | --- | --- | --- |
    | Auto Abdominal Color Assistant 2.0 | | |
    | Overall Model Detection Accuracy | ≥ 80% | 94.8% |
    | Sensitivity (True Positive Rate) | ≥ 80% | 0.91 |
    | Specificity (True Negative Rate) | ≥ 80% | 0.98 |
    | DICE Similarity Coefficient (Segmentation Accuracy) | ≥ 0.80 | 0.82 |
    | Auto Aorta Measure Assistant (Long View AP Measurement) | | |
    | Average Accuracy | Not explicitly stated as a target percentage, but implied by strong performance metrics | 87.2% (95% CI of ±1.98%) |
    | Average Absolute Error | Not explicitly stated as a target | 0.253 cm (95% CI of 0.049 cm) |
    | Limits of Agreement | Not explicitly stated as a target range | (-0.15, 0.60) cm (95% CI of (-0.26, 0.71)) |
    | Auto Aorta Measure Assistant (Short View AP Measurement) | | |
    | Average Accuracy | Not explicitly stated as a target percentage, but implied by strong performance metrics | 92.9% (95% CI of ±2.02%) |
    | Average Absolute Error | Not explicitly stated as a target | 0.128 cm (95% CI of 0.037 cm) |
    | Limits of Agreement | Not explicitly stated as a target range | (-0.21, 0.36) cm (95% CI of (-0.29, 0.45)) |
    | Auto Aorta Measure Assistant (Short View Trans Measurement) | | |
    | Average Accuracy | Not explicitly stated as a target percentage, but implied by strong performance metrics | 86.9% (95% CI of ±6.25%) |
    | Average Absolute Error | Not explicitly stated as a target | 0.235 cm (95% CI of 0.110 cm) |
    | Limits of Agreement | Not explicitly stated as a target range | (-0.86, 0.69) cm (95% CI of (-1.06, 0.92)) |
    | Auto Common Bile Duct (CBD) Measure Assistant (Porta Hepatis measurement accuracy without segmentation scroll edit) | | |
    | Average Accuracy | Not explicitly stated as a target percentage, but implied by strong performance metrics | 59.85% (95% CI of ±17.86%) |
    | Average Absolute Error | Not explicitly stated as a target | 1.66 mm (95% CI of 1.02 mm) |
    | Limits of Agreement | Not explicitly stated as a target range | (-4.75, 4.37) mm (95% CI of (-6.17, 5.79)) |
    | Auto Common Bile Duct (CBD) Measure Assistant (Porta Hepatis measurement accuracy with segmentation scroll edit) | | |
    | Average Accuracy | Not explicitly stated as a target percentage, but implied by strong performance metrics | 80.56% (95% CI of ±8.83%) |
    | Average Absolute Error | Not explicitly stated as a target | 0.91 mm (95% CI of 0.45 mm) |
    | Limits of Agreement | Not explicitly stated as a target range | (-1.96, 3.25) mm (95% CI of (-2.85, 4.14)) |
    | Ultrasound Guided Fat Fraction (UGFF) | | |
    | Correlation Coefficient with MRI-PDFF (Japan Cohort) | Strong correlation confirmed | 0.87 |
    | Offset (UGFF vs MRI-PDFF, Japan Cohort) | Not explicitly stated as a target | -0.32% |
    | Limits of Agreement (UGFF vs MRI-PDFF, Japan Cohort) | Not explicitly stated as a target range | -6.0% to 5.4% |
    | % Patients within ±8.4% difference (Japan Cohort) | Not explicitly stated as a target | 91.6% |
    | Correlation Coefficient with MRI-PDFF (US/EU Cohort) | Strong correlation confirmed | 0.90 |
    | Offset (UGFF vs MRI-PDFF, US/EU Cohort) | Not explicitly stated as a target | -0.1% |
    | Limits of Agreement (UGFF vs MRI-PDFF, US/EU Cohort) | Not explicitly stated as a target range | -3.6% to 3.4% |
    | % Patients within ±4.6% difference (US/EU Cohort) | Not explicitly stated as a target | 95.0% |
    | Correlation Coefficient with UDFF (EU Cohort) | Strong correlation confirmed | 0.88 |
    | Offset (UGFF vs UDFF, EU Cohort) | Not explicitly stated as a target | -1.2% |
    | Limits of Agreement (UGFF vs UDFF, EU Cohort) | Not explicitly stated as a target range | -5.0% to 2.6% |
    | % Patients within ±4.7% difference (EU Cohort) | Not explicitly stated as a target | All patients |

    2. Sample Size for Test Set and Data Provenance

    • Auto Abdominal Color Assistant 2.0:
      • Test Set Sample Size: 49 individual subjects, 1186 annotation images.
      • Data Provenance: Retrospective, all data from the USA.
    • Auto Aorta Measure Assistant:
      • Test Set Sample Size:
        • Long View Aorta: 36 subjects (11 Male, 25 Female).
        • Short View Aorta: 35 subjects (11 Male, 24 Female).
      • Data Provenance: Retrospective, from Japan (15-16 subjects) and USA (20 subjects).
    • Auto Common Bile Duct (CBD) Measure Assistant:
      • Test Set Sample Size: 25 subjects (11 Male, 14 Female).
      • Data Provenance: Retrospective, from USA (40%) and Japan (60%).
    • Ultrasound Guided Fat Fraction (UGFF):
      • Test Set Sample Size (Primary Study): 582 participants.
      • Data Provenance (Primary Study): Retrospective, Japan.
      • Test Set Sample Size (Confirmatory Study 1): 15 US patients + 5 EU patients (total 20).
      • Data Provenance (Confirmatory Study 1): Retrospective, USA and EU.
      • Test Set Sample Size (Confirmatory Study 2): 24 EU patients.
      • Data Provenance (Confirmatory Study 2): Retrospective, EU.

    3. Number of Experts and Qualifications for Ground Truth

    • Auto Abdominal Color Assistant 2.0: Not explicitly stated, but implies multiple "readers" to ground truth anatomical visibility. No specific qualifications are mentioned beyond "readers."
    • Auto Aorta Measure Assistant: Not explicitly stated, but implies multiple "readers" for measurements and an "arbitrator" to select the most accurate measurement. No specific qualifications are mentioned beyond "readers" and "arbitrator."
    • Auto Common Bile Duct (CBD) Measure Assistant: Not explicitly stated, but implies multiple "readers" for measurements and an "arbitrator" to select the most accurate measurement. No specific qualifications are mentioned beyond "readers" and "arbitrator."
    • Ultrasound Guided Fat Fraction (UGFF): Ground truth for the primary study was MRI Proton Density Fat Fraction (MRI-PDFF %). No human experts were involved in establishing the ground truth for UGFF, as it relies on MRI-PDFF as the reference. The correlation between UGFF and UDFF also used UDFF as a reference, not human experts.

    4. Adjudication Method for the Test Set

    • Auto Abdominal Color Assistant 2.0: Not explicitly mentioned; however, the described process ("Readers to ground truth the 'anatomy' visible in static B-Mode image. (Before running AI)"), followed by comparison to AI predictions, does not suggest an adjudication step for ground truth generation beyond the initial reader input. Confusion matrices were generated later.
    • Auto Aorta Measure Assistant: An "Arbitrator" was used to "select most accurate measurement among all readers" for the initial ground truth, which was then compared to AI baseline. This implies a 1 (arbitrator) + N (readers) adjudication method for measurement accuracy. For keystroke comparison, readers measured with and without AI.
    • Auto Common Bile Duct (CBD) Measure Assistant: An "Arbitrator" was used to "select most accurate measurement among all readers" for the initial ground truth, which was then compared to AI baseline. This implies a 1 (arbitrator) + N (readers) adjudication method for measurement accuracy. For keystroke comparison, readers measured with and without AI.
    • Ultrasound Guided Fat Fraction (UGFF): Ground truth was established by MRI-PDFF or comparison to UDFF. No human adjudication method was described for these.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Auto Aorta Measure Assistant: Yes, a comparative study was performed by comparing keystroke counts with and without AI assistance for human readers.
      • Effect Size:
        • Long View Aorta AP Measurement: Average reduction from $4.132 \pm 0.291$ keystrokes (without AI) to $1.236 \pm 0.340$ keystrokes (with AI).
        • Short View Aorta AP and Trans Measurement: Average reduction from $7.05 \pm 0.158$ keystrokes (without AI) to $2.307 \pm 1.0678$ keystrokes (with AI).
    • Auto Common Bile Duct (CBD) Measure Assistant: Yes, a comparative study was performed by comparing keystroke counts with and without AI assistance for human readers.
      • Effect Size: Average reduction of $1.62 \pm 0.375$ keystrokes (mean and standard deviation) from manual to AI-assisted measurements.
    • Other features (Auto Abdominal Color Assistant 2.0, UGFF): The documentation does not describe a MRMC study for improved human reader performance with AI assistance for these features.

    6. Standalone (Algorithm Only) Performance Study

    • Auto Abdominal Color Assistant 2.0: Yes, the model's accuracy (detection accuracy, sensitivity, specificity, DICE score) was evaluated in a standalone manner against the human-annotated ground truth.
    • Ultrasound Guided Fat Fraction (UGFF): Yes, the correlation and agreement of the UGFF algorithm's values were tested directly against an established reference standard (MRI-PDFF) and another device's derived fat fraction (UDFF).

    7. Type of Ground Truth Used

    • Auto Abdominal Color Assistant 2.0: Expert consensus/annotations on B-Mode images, followed by comparison to AI predictions.
    • Auto Aorta Measure Assistant: Expert consensus on measurements (human readers with arbitrator selection) and keystroke counts from these manual measurements and AI-assisted measurements.
    • Auto Common Bile Duct (CBD) Measure Assistant: Expert consensus on measurements (human readers with arbitrator selection) and keystroke counts from these manual measurements and AI-assisted measurements.
    • Ultrasound Guided Fat Fraction (UGFF): Established clinical reference standard: MRI Proton Density Fat Fraction (MRI-PDFF %). For one confirmatory study, another cleared device's derived fat fraction (UDFF) was used as a comparative reference.

    8. Sample Size for the Training Set

    • The document states that "The exams used for test/training validation purpose are separated from the ones used during training process" but does not provide the sample size for the training set itself for any of the AI features.

    9. How the Ground Truth for the Training Set was Established

    • The document implies that the ground truth for training data would have been established similarly to the test data ground truth (e.g., expert annotation for Auto Abdominal Color Assistant, expert measurements for Auto Aorta/CBD Measure Assistants). However, the specific methodology for the training set's ground truth establishment (e.g., number of experts, adjudication, qualifications) is not detailed in the provided text. It only explicitly states that "Before the process of data annotation, all information displayed on the device is removed and performed on information extracted purely from Ultrasound B-mode images" for annotation. Independence of test and training data by exam site origin or overall separation is mentioned, but not the process for creating the training set ground truth.

    K Number
    K253242
    Device Name
    LCD Monitor (C1216W, C822W, C821W)
    Date Cleared
    2025-10-29

    (30 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    -- C822W
    This Medical Monitor is intended to be used in displaying and viewing digital images for review, analysis and diagnosis by trained medical practitioners. It does not support the display of mammography images for diagnosis.

    -- C1216W
    This Medical Monitor is indicated for use in displaying radiological images (including full-field digital mammography and digital breast tomosynthesis) for review, analysis, and diagnosis by trained medical practitioners.

    -- C821W
    This Medical Monitor is intended to be used in displaying and viewing medical images for diagnosis by trained medical practitioners or certified personnel. It's intended to be used in digital mammography PACS, digital breast tomosynthesis and modalities including FFDM.

    Device Description

    C1216W is a 31-inch TFT color LCD monitor; C822W and C821W are 31.5-inch TFT color LCD monitors. They are intended to be used for displaying and viewing digital images for review and analysis by trained medical practitioners.

    These products have been strictly calibrated to meet DICOM Part 3.14 and other standards. They use the latest generation of LED backlight panel. A built-in brightness stabilization control circuit keeps the monitor brightness stable, so the products meet the demands of high-precision medical imaging. The anti-glare screen reduces reflections under bright ambient lighting, making the displayed image clearer.

    AI/ML Overview

    N/A


    K Number
    K252547
    Device Name
    TheraSphere 360™ Y-90 Management Platform
    Date Cleared
    2025-10-28

    (77 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    The TheraSphere 360™ Y-90 Management Platform includes Treatment Planning and Activity Calculation functionalities as optional interactive tools intended for calculating the activity of TheraSphere Microspheres required at treatment time based upon the desired dose, lung shunt fraction, anticipated residual waste, and liver mass.

    The Treatment Planning and Activity Calculation functionalities include features to aid in TheraSphere Microspheres dose vial selection.

    Additionally, the TheraSphere 360 Platform includes optional post-treatment analysis functionalities to be used following treatment with TheraSphere Microspheres. For post-TheraSphere Microspheres treatment, the TheraSphere 360 Platform should only be used for the retrospective determination of dose and should not be used to prospectively calculate dose or for pre-treatment planning when there is a need for retreatment using TheraSphere Microspheres.

    Device Description

    The TheraSphere 360 Y-90 Management Platform is an end-to-end, browser-based platform that will host a wide range of resources (e.g. radioembolization activity calculations, ordering, tracking, and education) that support Authorized Users of TheraSphere Microspheres.

    The TheraSphere 360 Platform includes treatment planning functionality, activity calculation functionality, vial selection and ordering, and post-treatment analysis functionality. The treatment planning functionality and the activity calculation functionality include an interactive tool intended for calculating the activity of TheraSphere Microspheres required at the treatment time based upon the desired dose, lung shunt fraction, anticipated residual waste, and liver mass.

    The Vial Selector function allows users to select and order TheraSphere Microspheres dose vials from inventory that match desired results.

    The post-treatment analysis functionality is intended as an optional tool for post-treatment evaluation following TheraSphere Microspheres treatment.

    AI/ML Overview

    N/A


    K Number
    K251027
    Device Name
    cvi42 Coronary Plaque Software Application
    Date Cleared
    2025-10-27

    (208 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    Intended Use

    Viewing, post-processing, qualitative and quantitative evaluation of blood vessels and cardiovascular CT images in DICOM format.

    Indications for Use

    cvi42 Coronary Plaque Software Application is intended to be used for viewing, post-processing, qualitative and quantitative evaluation of cardiovascular computed tomography (CT) images in a Digital Imaging and Communications in Medicine (DICOM) Standard format.

    It enables a set of tools to assist physicians in qualitative and quantitative assessment of cardiac CT images to determine the presence and extent of coronary plaques and stenoses, in patients who underwent Coronary Computed Tomography Angiography (CCTA) for evaluation of CAD or suspected CAD.

    cvi42 Coronary Plaque's semi-automated machine learning algorithms are intended for an adult population.

    cvi42 Coronary Plaque shall be used only for cardiac images acquired from a CT scanner. It shall be used by qualified medical professionals, experienced in examining cardiovascular CT images, for the purpose of obtaining diagnostic information as part of a comprehensive diagnostic decision-making process.

    Device Description

    Circle's cvi42 Coronary Plaque Software Application ('cvi42 Coronary Plaque' or 'Coronary Plaque Module', for short) is a Software as a Medical Device (SaMD) that enables the analysis of CT Angiography scans of the coronary arteries of the heart. It is designed to support physicians in the visualization, evaluation, and analysis of coronary vessel plaques through manual or semi-automatic segmentation of vessel lumen and wall to determine the presence and extent of coronary plaques and luminal stenoses, in patients who underwent Coronary Computed Tomography Angiography (CCTA) for the evaluation of coronary artery disease (CAD) or suspected CAD. The device is intended to be used as an aid to the existing standard of care and does not replace existing software applications that physicians use. The Coronary Plaque Module can be integrated into an image viewing software intended for visualization of cardiac images, such as Circle's FDA-cleared cvi42 Software Application. The Coronary Plaque Module does not interface directly with any data collection equipment, and its functionality is independent of the type of vendor acquisition equipment. The analysis results are available on-screen, can be sent to report or saved for future review.

    The Coronary Plaque Module consists of multiplanar reconstruction (MPR) views, curved planar reformation (CPR) and straightened views, and 3D rendering of the original CT data. The Module displays three orthogonal MPR views that the user can freely adjust to any position and orientation. Lines and regions of interest (ROIs) can be manually drawn on these MPR images for quantitative measurements.

    The Coronary Plaque Module implements an Artificial Intelligence/Machine Learning (AI/ML) algorithm to detect lumen and vessel wall structures. Further, the module implements post-processing methods to convert coronary artery lumen and vessel wall structures to editable surfaces and detect the presence and type of coronary plaque in the region between the lumen and vessel wall. All surfaces generated by the system are editable and users are advised to verify and correct any errors.
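
    The letter does not give the post-processing details, but conceptually the plaque region is the tissue between the segmented lumen and the outer vessel wall, and plaque types are commonly separated by CT attenuation. Below is a minimal sketch under those assumptions; the HU cut-offs are illustrative only and are not taken from Circle's documentation.

    ```python
    # Hypothetical sketch: derive plaque voxels from lumen/wall masks and split
    # them by attenuation. The cut-offs (<30 HU low-attenuation, >=350 HU
    # calcified) are illustrative assumptions, not cvi42 Coronary Plaque values.
    import numpy as np

    def classify_plaque(ct_hu: np.ndarray, lumen: np.ndarray, wall: np.ndarray,
                        lap_max: float = 30.0, calc_min: float = 350.0) -> dict:
        """Plaque occupies the region between the outer vessel wall and the lumen;
        partition it into plaque types by HU value."""
        plaque = wall.astype(bool) & ~lumen.astype(bool)
        return {
            "total": plaque,
            "low_attenuation": plaque & (ct_hu < lap_max),
            "non_calcified": plaque & (ct_hu >= lap_max) & (ct_hu < calc_min),
            "calcified": plaque & (ct_hu >= calc_min),
        }

    # Plaque volumes follow from voxel counts, e.g. with 0.4 x 0.4 x 0.5 mm voxels:
    # volume_mm3 = classify_plaque(ct, lumen, wall)["total"].sum() * (0.4 * 0.4 * 0.5)
    ```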

    The device allows users to perform the measurements listed in Table 1.

    AI/ML Overview

    Here's a summary of the acceptance criteria and study details based on the provided FDA 510(k) Clearance Letter for the cvi42 Coronary Plaque Software Application:

    1. Table of Acceptance Criteria and Reported Device Performance

    | Endpoint | Acceptance Criteria (Implied) | Reported Device Performance | Pass / Fail |
    | --- | --- | --- | --- |
    | Lumen Mean Dice Similarity Coefficient (DSC) | Not explicitly stated but inferred as ≥ 0.76 with positive result | 0.76 | Pass |
    | Wall Mean Dice Similarity Coefficient (DSC) | Not explicitly stated but inferred as ≥ 0.80 with positive result | 0.80 | Pass |
    | Lumen Mean Hausdorff Distance (HD) | Not explicitly stated but inferred as ≤ 0.77 mm with positive result | 0.77 mm | Pass |
    | Wall Mean Hausdorff Distance (HD) | Not explicitly stated but inferred as ≤ 0.87 mm with positive result | 0.87 mm | Pass |
    | Total Plaque (TP) Pearson Correlation Coefficient (PCC) | Not explicitly stated but inferred as ≥ 0.97 with positive result | 0.97 | Pass |
    | Calcified Plaque (CP) Pearson Correlation Coefficient (PCC) | Not explicitly stated but inferred as ≥ 0.99 with positive result | 0.99 | Pass |
    | Non-Calcified Plaque (NCP) Pearson Correlation Coefficient (PCC) | Not explicitly stated but inferred as ≥ 0.93 with positive result | 0.93 | Pass |
    | Low-Attenuation Plaque (LAP) Pearson Correlation Coefficient (PCC) | Not explicitly stated but inferred as ≥ 0.74 with positive result | 0.74 | Pass |

    Note: The acceptance criteria for each endpoint are not explicitly numerical in the provided text. They are inferred to be "met Circle's pre-defined acceptance criteria" and are presented here as the numeric value reported, implying that the reported value met or exceeded the internal acceptance threshold for a 'Pass'.
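
    For reference, the endpoints in the table are standard agreement metrics. The sketch below shows how they are typically computed; the inputs are hypothetical and the letter does not disclose Circle's evaluation code.

    ```python
    # Illustrative computation of Dice, symmetric Hausdorff distance, and Pearson
    # correlation. Array names, shapes, and values are assumptions.
    import numpy as np
    from scipy.spatial.distance import directed_hausdorff
    from scipy.stats import pearsonr

    def dice(pred: np.ndarray, ref: np.ndarray) -> float:
        """Dice similarity coefficient between two binary masks."""
        pred, ref = pred.astype(bool), ref.astype(bool)
        denom = pred.sum() + ref.sum()
        return 2.0 * np.logical_and(pred, ref).sum() / denom if denom else 1.0

    def hausdorff_mm(pred_pts: np.ndarray, ref_pts: np.ndarray) -> float:
        """Symmetric Hausdorff distance between two surface point sets whose
        coordinates are already scaled to millimetres."""
        return max(directed_hausdorff(pred_pts, ref_pts)[0],
                   directed_hausdorff(ref_pts, pred_pts)[0])

    # Per-case plaque volumes (algorithm vs. reference) are typically compared
    # with a Pearson correlation coefficient, e.g. for total plaque:
    algo_tp = np.array([112.0, 85.3, 40.1, 210.7])   # hypothetical mm^3 values
    ref_tp = np.array([108.5, 88.0, 42.6, 205.2])
    r, _p = pearsonr(algo_tp, ref_tp)
    print(f"TP PCC: {r:.2f}")
    ```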

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: Not explicitly stated. The document mentions "All data used for validation were not used during the development of the ML algorithms" and "Image information for all samples was anonymized and limited to ePHI-free DICOM headers." However, the specific number of cases or images in the test set is not provided.
    • Data Provenance: Sourced from multiple sites, with 100% of the data sampled from US sources. The data consisted of images acquired from major vendors of CT imaging devices.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Number of Experts: Three expert annotators were used.
    • Qualifications of Experts: Not explicitly stated beyond "expert annotators." The document implies they are experts in coronary vessel and lumen wall segmentation within cardiac CT images.

    4. Adjudication Method for the Test Set

    The ground truth was established "from three expert annotators." While it doesn't explicitly state "2+1" or "3+1", the use of three annotators suggests a consensus-based adjudication, likely majority vote (e.g., if two out of three agreed, that constituted the ground truth, or a more complex consensus process). The specific method is not detailed.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done

    No. The document states, "No clinical studies were necessary to support substantial equivalence." The evaluation was primarily based on the performance of the ML algorithms against a reference standard established by experts, not on how human readers improved their performance with AI assistance.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done

    Yes. The performance evaluation focused on the "performance of the ML-based coronary vessel and lumen wall segmentation algorithm... evaluated against pre-defined acceptance criteria and compared to a reference standard established from three expert annotators." This indicates a standalone performance assessment of the algorithm's output. The device is also described as having "semi-automated machine learning algorithms", implying the user can verify and correct.

    7. The Type of Ground Truth Used

    Expert Consensus. The ground truth was established "from three expert annotators," indicating that human experts' annotations formed the reference standard against which the algorithm's performance was measured.

    8. The Sample Size for the Training Set

    Not explicitly stated. The document mentions the ML algorithms "have been trained and tested on images acquired from major vendors of CT imaging devices," but it does not provide the specific sample size for the training set. It only clarifies that the validation data was not used for training.

    9. How the Ground Truth for the Training Set Was Established

    Not explicitly stated. The document describes how the ground truth for the validation/test set was established (three expert annotators). It does not provide details on how the ground truth for the training set was generated.

    K Number
    K250297
    Date Cleared
    2025-10-27

    (269 days)

    Product Code
    Regulation Number
    882.5320
    Reference & Predicate Devices
    Predicate For
    N/A
    Why did this record match?
    510k Summary Text (Full-text Search) :

    882.5330 | 21 CFR 882.5330 | 21 CFR 882.5320 21 CFR 882.5250 21 CFR 882.5360 | 21 CFR 872.4120 21 CFR 892.2050
    | 21 CFR 892.2050 | 21 CFR 888.3080 |

    | Characteristic / Name of the device | Subject device

    Intended Use

    The TECHFIT Patient-Specific Cranial System is intended to replace bony voids in the cranial skeleton. The devices are indicated for adults and adolescents 18 years of age or older.

    Device Description

    The TECHFIT Patient-Specific Cranial System comprises patient-specific devices intended to replace bony voids in the cranial/craniofacial skeleton. The system includes a cranial implant, a cranial model, and a software component for digital planning and visualization named the Digitally Integrated Surgical Reconstruction Platform (DISRP®).

    TECHFIT Patient-Specific Cranial Implants are manufactured from Polyether Ether Ketone (PEEK). The TECHFIT Patient-Specific Cranial Implants are attached to the native bone using commercial plates and screws.

    The TECHFIT Patient-Specific Cranial System matches the shape and dimensions of the missing skull bone fragments. The implants are manufactured from PEEK according to ASTM F2026 using a machining process.

    The TECHFIT Patient-Specific Cranial System models are patient-specific devices manufactured from clear resin using a 3D printing process. The models are a representation of the patient's anatomy and are not indicated to enter the OR (Operating Room).

    AI/ML Overview

    N/A

    K Number
    K251059
    Date Cleared
    2025-10-24

    (203 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    N/A
    Why did this record match?
    510k Summary Text (Full-text Search) :

    Germany

    Re: K251059
    Trade/Device Name: Syngo Carbon Clinicals (VA41)
    Regulation Number: 21 CFR 892.2050
    Name: Syngo Carbon Clinicals
    Classification Panel: Radiology Devices
    Classification Number: 21 CFR 892.2050
    510(k) Clearance: K232856
    Classification Panel: Radiology Devices
    Classification Number: 21 CFR 892.2050

    Intended Use

    Syngo Carbon Clinicals is intended to provide advanced visualization tools to prepare and process the medical image for evaluation, manipulation and communication of clinical data that was acquired by the medical imaging modalities (for example, CT, MR, etc.)

    OrthoMatic Spine provides the means to perform musculoskeletal measurements of the whole spine, in particular spine curve angle measurements.

    The TimeLens provides the means to compare a region of interest between multiple time points.

    The software package is designed to support technicians and physicians in qualitative and quantitative measurements and in the analysis of clinical data that was acquired by medical imaging modalities.

    An interface shall enable the connection between the Syngo Carbon Clinicals software package and the interconnected software solution for viewing, manipulation, communication, and storage of medical images.

    Device Description

    Syngo Carbon Clinicals is a software-only medical device that provides dedicated advanced imaging tools for diagnostic reading. These tools can be called up, through standard interfaces, by any native/syngo-based viewing application (hosting application) that is part of the SYNGO medical device portfolio. The tools help prepare and process the medical image for evaluation, manipulation, and communication of clinical data acquired by medical imaging modalities (e.g., MR, CT).

    Deployment Scenario: Syngo Carbon Clinicals is a plug-in that can be added to any SYNGO-based hosting application (for example, Syngo Carbon Space, syngo.via, etc.). The hosting application (native/syngo platform-based software) is not described within this 510(k) submission. The hosting device decides which tools from Syngo Carbon Clinicals are used; it does not need to host all of them, and a desired subset of the provided tools can be used. Tools can be enabled or disabled through licenses.

    When preparing the radiologist's reading workflow on a dedicated workplace or workstation, Syngo Carbon Clinicals can be called to generate additional results or renderings according to the user's needs, using the available tools.

    AI/ML Overview

    This document describes performance evaluation for two specific tools within Syngo Carbon Clinicals (VA41): OrthoMatic Spine and TimeLens.

    1. Table of Acceptance Criteria and Reported Device Performance

    | Feature/Tool | Acceptance Criteria | Reported Device Performance |
    | --- | --- | --- |
    | OrthoMatic Spine | Algorithm's measurement deviations for major spinal measurements (Cobb angles, thoracic kyphosis angle, lumbar lordosis angle, coronal balance, and sagittal vertical alignment) must fall within the range of inter-reader variability. | Cumulative Distribution Functions (CDFs) demonstrated that the algorithm's measurement deviations fell within the range of inter-reader variability for the major Cobb angle, thoracic kyphosis angle, lumbar lordosis angle, coronal balance, and sagittal vertical alignment. This indicates the algorithm replicates average rater performance and meets clinical reliability acceptance criteria. |
    | TimeLens | Not specified, as a reader study/bench test was not required due to its nature as a simple workflow enhancement algorithm. | No specific quantitative performance metrics are provided, as clinical performance evaluation methods (reader studies) were deemed unnecessary. The tool is described as a "simple workflow enhancement algorithm". |
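
    Cobb angles and the other curve measurements above are derived from vertebral endplate orientations. The sketch below illustrates only the underlying geometry with made-up landmark coordinates; it is not the OrthoMatic Spine implementation, which works from automatically detected vertebral landmarks.

    ```python
    # Hypothetical Cobb angle computation from endplate landmarks on a frontal
    # radiograph. Landmark coordinates are invented for illustration.
    import numpy as np

    def cobb_angle_deg(sup_endplate: np.ndarray, inf_endplate: np.ndarray) -> float:
        """Angle (degrees) between the superior endplate of the upper end vertebra
        and the inferior endplate of the lower end vertebra, each given as two
        (x, y) landmark points."""
        v1 = sup_endplate[1] - sup_endplate[0]
        v2 = inf_endplate[1] - inf_endplate[0]
        cos = abs(np.dot(v1, v2)) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

    upper = np.array([[100.0, 200.0], [160.0, 188.0]])  # superior endplate, upper end vertebra
    lower = np.array([[105.0, 420.0], [165.0, 445.0]])  # inferior endplate, lower end vertebra
    print(f"Cobb angle ~ {cobb_angle_deg(upper, lower):.1f} degrees")
    ```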

    2. Sample Size Used for the Test Set and Data Provenance

    • OrthoMatic Spine:

      • Test Set Sample Size: 150 spine X-ray images (75 frontal views, 75 lateral views) were used in a reader study.
      • Data Provenance: The document states that the main dataset for training includes data from USA, Germany, Ukraine, Austria, and Canada. While this specifies the training data provenance, the provenance of the specific 150 images used for the reader study (test set) is not explicitly segregated or stated here. The study involved US board-certified radiologists, implying the test set images are relevant to US clinical practice.
      • Retrospective/Prospective: Not explicitly stated, but the description of "collected" images and patients with various spinal conditions suggests a retrospective collection of existing exams.
    • TimeLens: No specific test set details are provided as a reader study/bench test was not required.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • OrthoMatic Spine:

      • Number of Experts: Five US board-certified radiologists.
      • Qualifications: US board-certified radiologists. No specific years of experience are mentioned.
      • Ground Truth for Reader Study: The "mean values obtained from the radiologists' assessments" for the 150 spine X-ray images served as the reference for comparison against the algorithm's output.
    • TimeLens: Not applicable, as no reader study was conducted.

    4. Adjudication Method for the Test Set

    • OrthoMatic Spine: The algorithm's output was assessed against the mean values obtained from the five radiologists' assessments. This implies a form of consensus or average from multiple readers rather than a strict 2+1 or 3+1 adjudication.
    • TimeLens: Not applicable.

    5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    • OrthoMatic Spine: A reader study was performed, which is a type of MRMC study. However, it was a standalone performance evaluation of the algorithm against human reader consensus, not a comparative effectiveness study of human readers with and without AI assistance. Therefore, no effect size for how much human readers improve with AI assistance is reported. The study aimed to show that the algorithm replicates average human rater performance (a sketch of this kind of comparison follows this list).
    • TimeLens: Not applicable.
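
    The OrthoMatic Spine criterion above compares the algorithm's deviations from the reader mean against inter-reader variability using cumulative distribution functions. The sketch below shows one common way to set up that comparison, with synthetic numbers and assumed array shapes; the clearance letter does not include Siemens' analysis code.

    ```python
    # Hypothetical sketch: compare algorithm-vs-reader-mean deviations with
    # leave-one-out inter-reader deviations via empirical CDF quantiles.
    import numpy as np

    rng = np.random.default_rng(0)
    n_cases, n_readers = 150, 5
    readers = 40 + rng.normal(0, 3, size=(n_cases, n_readers))    # e.g. Cobb angles (deg)
    algo = readers.mean(axis=1) + rng.normal(0, 2, size=n_cases)  # synthetic algorithm output

    # Algorithm deviation from the mean of all readers (the study's reference).
    algo_dev = np.abs(algo - readers.mean(axis=1))

    # Each reader's deviation from the mean of the remaining readers (inter-reader variability).
    reader_dev = np.concatenate([
        np.abs(readers[:, j] - np.delete(readers, j, axis=1).mean(axis=1))
        for j in range(n_readers)
    ])

    # If the algorithm's deviation distribution sits at or below the readers'
    # distribution at each quantile, its errors fall within inter-reader variability.
    for q in (0.5, 0.9, 0.95):
        print(f"{q:.0%} quantile  algo: {np.quantile(algo_dev, q):.2f}  readers: {np.quantile(reader_dev, q):.2f}")
    ```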

    6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) was done

    • OrthoMatic Spine: Yes, a standalone performance evaluation of the OrthoMatic Spine algorithm (without human-in-the-loop assistance) was conducted. The algorithm's measurements were compared against the mean values derived from five human radiologists.
    • TimeLens: The description suggests the TimeLens tool itself is a "simple workflow enhancement algorithm" and its performance was evaluated through non-clinical verification and validation activities rather than a specific standalone clinical study with an AI algorithm providing measurements.

    7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.)

    • OrthoMatic Spine:
      • For the reader study (test set performance evaluation): Expert consensus (mean of five US board-certified radiologists' measurements) was used to assess the algorithm's performance.
      • For the training set: The initial annotations were performed by trained non-radiologists and then reviewed by board-certified radiologists. This can be considered a form of expert-verified annotation.
    • TimeLens: Not specified, as no clinical ground truth assessment was required.

    8. The Sample Size for the Training Set

    • OrthoMatic Spine:
      • Number of Individual Patients (Training Data): 6,135 unique patients.
      • Number of Images (Training Data): A total of 23,464 images were collected within the entire dataset, which was split 60% for training, 20% for validation, and 20% for model selection. Therefore, the training set would comprise approximately 60% of both the patient count and image count. So, roughly 3,681 patients and 14,078 images.
    • TimeLens: Not specified.

    9. How the Ground Truth for the Training Set Was Established

    • OrthoMatic Spine: Most images in the dataset (used for training, validation, and model selection) were annotated using a dedicated annotation tool (Darwin, V7 Labs) by a US-based medical data labeling company (Cogito Tech LLC). Initial annotations were performed by trained non-radiologists and subsequently reviewed by board-certified radiologists. This process was guided by written guidelines and automated workflows to ensure quality and consistency, with annotations including vertebral landmarks and key vertebrae (C7, L1, S1).
    • TimeLens: Not specified.

    K Number
    K250288
    Manufacturer
    Date Cleared
    2025-10-23

    (265 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    N/A
    Why did this record match?
    510k Summary Text (Full-text Search) :

    Trade/Device Name: TeraRecon Cardiovascular.Calcification.CT
    Regulation Number: 21 CFR 892.2050
    Classification Name | Automated Radiological Image Processing Software |
    | Regulation Number | 892.2050

    Intended Use

    TeraRecon Cardiovascular.Calcification.CT is intended to provide an automatic 3D segmentation of calcified plaques within the coronary arteries and outputs a mask for calcium scoring systems to use for calculations. The results of TeraRecon Cardiovascular.Calcification.CT are intended to be used in conjunction with other patient information by trained professionals who are responsible for making any patient management decision per the standard of care. TeraRecon Cardiovascular.Calcification.CT is a software as a medical device (SaMD) deployed as a containerized application. The device inputs are CT heart without contrast DICOM images. The device outputs are DICOM result files which may be viewed utilizing DICOM-compliant systems. The device does not alter the original input data and does not provide a diagnosis.

    TeraRecon Cardiovascular.Calcification.CT is indicated to generate results from Calcium Score CT scans taken of adult patients, 30 years and older, except patients with pre-existing cardiac devices, electrodes, previous and established ischemic diseases (IMA, bypass grafts, stents, PTCA) and Thoracic metallic devices. The device is not specific to any gender, ethnic group, or clinical condition. The device's use should be limited to CT scans acquired on General Electric (GE) or Siemens Healthcare or their subsidiaries (e.g. GE Healthcare) equipment. Use of the device with CT scans from other manufacturers is not recommended.

    Device Description

    The TeraRecon Cardiovascular.Calcification.CT algorithm is an image processing software device that can be deployed as a containerized application (e.g., Docker container) that runs on off-the-shelf hardware or on a cloud platform. The device provides an automatic 3D segmentation of the coronary calcifications.

    When TeraRecon Cardiovascular.Calcification.CT results are used in external viewer devices such as TeraRecon's Intuition or Eureka Clinical AI medical devices, all the standard features offered by the external viewer are employed.

    The TeraRecon Cardiovascular.Calcification.CT algorithm is not intended to replace the skill and judgment of a qualified medical practitioner and should only be used by individuals that have been trained in the software's function, capabilities, and limitations.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the TeraRecon Cardiovascular.Calcification.CT device, based on the provided FDA 510(k) clearance letter:


    Acceptance Criteria and Device Performance

    | Acceptance Criteria | Reported Device Performance |
    | --- | --- |
    | Agatston Classification Accuracy: at least 80% accuracy for the 4 revised Agatston classes (0-10, 11-100, 101-400, >400), with a lower-bound 95% confidence interval (CI) of at least 75%. | Passed. Mean accuracies exceeded 94% across Agatston categories, with 95% CI lower bounds above 75%. |
    | Vessel Calcification Classification (Dice Similarity Coefficient): at least 80% Dice with a lower-bound 95% confidence interval of at least 75% for segmentation of calcifications by vessel (LM, LAD, LCX, RCA). | Passed. Segmentation performance, measured by Dice similarity coefficient against expert annotations, consistently exceeded the predefined acceptance criteria (≥80% Dice with lower 95% CI ≥75%). |
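
    For context, the Agatston classes in the table are bins of the conventional Agatston score, which downstream scoring systems compute from a calcification mask such as the one this device outputs. The sketch below follows the usual convention (axial slices, mask thresholded at 130 HU, 1 mm² minimum lesion area); it is not TeraRecon's implementation.

    ```python
    # Hypothetical per-slice Agatston scoring from an HU image and a calcification
    # mask. The mask is assumed to contain only voxels >= 130 HU.
    import numpy as np
    from scipy import ndimage

    def agatston_slice_score(hu_slice: np.ndarray, calc_mask: np.ndarray,
                             pixel_area_mm2: float) -> float:
        """Sum over connected lesions of lesion area (mm^2) times a weight based
        on the lesion's peak attenuation."""
        score = 0.0
        labels, n = ndimage.label(calc_mask)
        for lesion in range(1, n + 1):
            region = labels == lesion
            area = region.sum() * pixel_area_mm2
            if area < 1.0:                 # ignore sub-millimetre specks
                continue
            peak = hu_slice[region].max()
            weight = 1 + min(int(peak // 100) - 1, 3)  # 130-199:1, 200-299:2, 300-399:3, >=400:4
            score += area * weight
        return score

    def agatston_category(total_score: float) -> str:
        """Bin a total score into the revised Agatston classes used above."""
        bins = [(10, "0-10"), (100, "11-100"), (400, "101-400")]
        return next((label for upper, label in bins if total_score <= upper), ">400")
    ```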

    Study Details

    1. Sample Size Used for the Test Set:
    The test set included 422 adult patients.

    2. Data Provenance (Country of Origin, Retrospective/Prospective):

    • Retrospective cohort study.
    • At least 50% of the ground truth data is from the US, divided between multiple locations.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:

    • 3 annotators (experts) were used for each study to segment coronary vessels and apply thresholds to create calcification masks within the vessels.
    • Qualifications of experts: Not explicitly stated in the provided text.

    4. Adjudication Method for the Test Set:

    • Majority vote (2+1 method): The final ground truth for the calcification segmentation masks was attained by including a voxel if it was part of at least 2 of the masks defined by the three annotators (see the sketch after this list).
    • The ground-truth vessel assignment for each calcification was likewise determined by majority vote among the 3 annotators.
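
    The sketch below illustrates the 2-of-3 voxel-level vote described above with tiny made-up masks; it is not TeraRecon's code.

    ```python
    # Hypothetical: combine three annotators' binary calcification masks into a
    # ground-truth mask by 2-of-3 majority vote, then score an algorithm mask.
    import numpy as np

    def majority_vote(masks) -> np.ndarray:
        """A voxel belongs to ground truth if at least 2 annotator masks include it."""
        stacked = np.stack([m.astype(np.uint8) for m in masks], axis=0)
        return stacked.sum(axis=0) >= 2

    def dice(a: np.ndarray, b: np.ndarray) -> float:
        a, b = a.astype(bool), b.astype(bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

    m1 = np.array([[1, 1, 0], [0, 1, 0]])
    m2 = np.array([[1, 0, 0], [0, 1, 1]])
    m3 = np.array([[1, 1, 0], [0, 0, 1]])
    gt = majority_vote([m1, m2, m3])            # marked by >= 2 annotators
    algo = np.array([[1, 1, 0], [0, 1, 0]])     # hypothetical algorithm mask
    print(gt.astype(int))
    print(f"Dice vs. majority-vote ground truth: {dice(algo, gt):.2f}")
    ```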

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

    • No MRMC comparative effectiveness study involving human readers with and without AI assistance was mentioned. The study focused on standalone device performance against expert-established ground truth.

    6. Standalone Performance:

    • Yes, a standalone (algorithm only without human-in-the-loop performance) study was conducted. The results reported are directly attributed to the TeraRecon Cardiovascular.Calcification.CT device's performance against ground truth.

    7. Type of Ground Truth Used:

    • Expert consensus based on annotations from three experts. The experts segmented coronary vessels and applied thresholds to create calcification masks. The final ground truth was established by majority vote among these annotators for both the calcification mask and the vessel classification.

    8. Sample Size for the Training Set:

    • The document does not explicitly state the sample size used for the training set. It only describes the test set.

    9. How the Ground Truth for the Training Set was Established:

    • The document does not explicitly state how the ground truth for the training set was established. It only describes the ground truth establishment for the test set.