510(k) Data Aggregation

Search Results

Found 247 results

    K Number: K250330
    Date Cleared: 2025-11-03 (271 days)
    Regulation Number: 892.2050
    Reference & Predicate Devices: N/A
    Predicate For: N/A

    K Number: K250337
    Device Name: AiORTA - Plan
    Date Cleared: 2025-10-30 (266 days)
    Regulation Number: 892.2050
    Predicate For: N/A
    Intended Use

    The AiORTA - Plan tool is an image analysis software tool for volumetric assessment. It provides volumetric visualization and measurements based on 3D reconstruction computed from cardiovascular CTA scans. The software device is intended to provide adjunct information to a licensed healthcare practitioner (HCP) in addition to clinical data and other inputs, as a measurement tool used in assessment of aortic aneurysm, pre-operative evaluation, planning and sizing for cardiovascular intervention and surgery, and for post-operative evaluation in patients 22 years old and older.

    The device is not intended to provide stand-alone diagnosis or suggest an immediate course of action in treatment or patient management.

    Device Description

    AiORTA - Plan is a cloud-based software tool used to make and review geometric measurements of cardiovascular structures, specifically abdominal aortic aneurysms. The software uses CT scan data as input to make measurements from 2D and 3D mesh based images. Software outputs are intended to be used as a measurement tool used in assessment of aortic aneurysm, pre-operative evaluation, planning and sizing for cardiovascular intervention and surgery, and for post-operative evaluation.

    The AiORTA - Plan software consists of two components, the Analysis Pipeline and Web Application.

    The Analysis Pipeline is the data processing engine that produces measurements of the abdominal aorta based on the input DICOM images. It consists of multiple modules that are operated by a trained Analyst to preprocess the DICOM images, compute geometric parameters (e.g., centerlines, diameters, lengths, volumes), and upload the results to the Web App for clinician review. The Analyst plays a role in ensuring the quality of the outputs. However, the end user (licensed healthcare practitioner) is ultimately responsible for the accuracy of the segmentations, the resulting measurements, and any clinical decisions based on these outputs.

    The workflow of the Analysis Pipeline can be described in the following steps:

    • Input: the Analysis Pipeline receives a CTA scan as input.
    • Segmentation: an AI-powered auto-masking algorithm performs segmentation of the aortic lumen, wall, and key anatomical landmarks, including the superior mesenteric, celiac, and renal arteries. A trained Analyst performs quality control of the segmentations, making any necessary revisions to ensure accurate outputs.
    • 3D conversion: the segmentations are converted into 3D mesh representations.
    • Measurement computation: from the 3D representations, the aortic centerline and geometric measurements, such as diameters, lengths, and volumes, are computed.
    • Follow-up study analysis: for patients with multiple studies, the system can detect and display changes in aortic geometry between studies.
    • Report generation: a report is generated containing key measurements and a 3D Anatomy Map providing multiple views of the abdominal aorta and its landmarks.
    • Web application integration: the outputs, including the segmented CT masks, 3D visualizations, and reports, are uploaded to the Web App for interactive review and analysis.

    The Web Application (Web App) is the front-end, user-facing component of the system. It is a cloud-based user interface through which the qualified clinician first uploads de-identified cardiovascular CTA scans in DICOM format, along with relevant demographic and medical information about the patient and the current study. The uploaded data is processed asynchronously by the Analysis Pipeline; once processing is complete, the Web App enables clinicians to interactively review and analyze the resulting outputs.

    Main features of the Web App include:

    • Segmentation review and correction: Clinicians can review the segmentations produced by the Analysis Pipeline by viewing the CT slices alongside the segmentation masks. Segmentations can be revised using tools such as a brush or pixel eraser, with adjustable brush size, to select or remove pixels as needed. When clinicians revise segmentations, they can request asynchronous re-analysis by the Analysis Pipeline, which generates updated measurements and a 3D Anatomy Map of the aorta based on the revised segmentations.
    • 3D visualization: The aorta and key anatomical landmarks can be examined in full rotational views using the 3D Anatomy Map.
    • Measurement tools: Clinicians can perform measurements directly on the 3D Anatomy Map of the abdominal aorta and have access to a variety of measurement tools (a centerline-distance sketch follows this feature list), including:
      • Centerline distance, which measures the distance (in mm) between two user-selected planes along the aortic centerline.
      • Diameter range, which measures the minimum and maximum diameters (in mm) within the region of interest between two user-selected planes along the aortic centerline.
      • Local diameter, which measures the diameter (in mm) at the user-selected plane along the aortic centerline.
      • Volume, which measures the volume (in mL) between two user-selected planes along the aortic centerline.
      • Calipers, which allow additional linear measurements (in mm) at user-selected points.
    • Screenshots: Clinicians can capture images of the 3D visualizations of the aorta or the segmentations displayed on the CT slices.
    • Longitudinal analysis: For patients with multiple studies, the Web App allows side-by-side review of studies. Clinicians have access to the same measurement and visualization tools available in single-study review, enabling comparison between studies.
    • Reporting: Clinicians can generate and download reports containing either the default key measurements computed by the Analysis Pipeline or custom measurements and screenshots captured during review.
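To make the centerline-based tools above concrete, here is a minimal sketch (not code from the submission; the function name and array layout are assumptions) of how an arc-length distance between two user-selected planes along a discretized centerline can be computed:

```python
import numpy as np

def centerline_distance(centerline: np.ndarray, plane_a: int, plane_b: int) -> float:
    """Arc length (mm) along a polyline centerline between two plane indices.

    centerline: (n_points, 3) array of ordered 3D points, in mm.
    plane_a, plane_b: indices of the user-selected planes along the centerline.
    """
    lo, hi = sorted((plane_a, plane_b))
    segments = np.diff(centerline[lo:hi + 1], axis=0)  # consecutive point-to-point vectors
    return float(np.linalg.norm(segments, axis=1).sum())
```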
    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study details for the AiORTA - Plan device, based on the provided FDA 510(k) clearance letter:

    Acceptance Criteria and Reported Device Performance

    Metric/Measurement Type | Acceptance Criteria | Reported Device Performance
    Auto-segmentation masks (prior to analyst correction):
    Dice coefficient (aortic wall) | ≥ 80% | 89% (overall mean)
    Dice coefficient (aortic lumen) | ≥ 80% | 89% (overall mean)
    Landmark identification (celiac artery proximal position) | within 5 mm of ground truth | mean distance 2.47 mm
    Landmark identification (renal arteries distal position) | within 5 mm of ground truth | mean distance 3.51 mm
    Diameters and lengths (after Analyst review and correction):
    Length (mean absolute error) | ≤ 6.0 mm
    Renal artery to aortic bifurcation length | N/A | 5.3 mm (mean absolute error)
    Renal artery to left iliac bifurcation length | N/A | 7.0 mm (mean absolute error)
    Renal artery to right iliac bifurcation length | N/A | 6.6 mm (mean absolute error)
    Diameter (mean absolute error) | ≤ 2.3 mm
    Aortic wall max diameter | N/A | 2.0 mm (mean absolute error)
    Aortic wall at renal artery diameter | N/A | 2.1 mm (mean absolute error)
    Aortic wall at left iliac bifurcation diameter | N/A | 1.9 mm (mean absolute error)
    Aortic wall at right iliac bifurcation diameter | N/A | 2.5 mm (mean absolute error)
    Volumes (using analyst-revised segmentations):
    Volume (mean absolute error) | ≤ 1.8 mL
    Volume of the wall | N/A | 0.00242 mL (mean absolute error)
    Volume of the lumen | N/A | 0.00257 mL (mean absolute error)
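For reference, the two headline metrics in this table, the Dice coefficient and the mean absolute error, are conventionally computed as sketched below (standard definitions, not code from the submission):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity between two binary masks (1 = structure, 0 = background)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

def mean_absolute_error(device: np.ndarray, reference: np.ndarray) -> float:
    """Mean absolute error between device and reference measurements (same units)."""
    return float(np.abs(device - reference).mean())
```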

    Explanation for lengths and diameters that did not meet initial criteria:
    The following measurements did not meet the initial acceptance criteria:

    • Length: renal to left iliac bifurcation (7.0mm vs ≤ 6.0mm)
    • Length: renal to right iliac bifurcation (6.6mm vs ≤ 6.0mm)
    • Diameter: wall right iliac (2.5mm vs ≤ 2.3mm)

    A Mean Pairwise Absolute Difference (MPAD) comparison was performed. The device-expert MPAD was smaller than the expert-expert MPAD in all three cases, indicating that the device's measurements were more consistent with experts than the experts were with each other.

    Measurement | Expert-expert MPAD | Device-expert MPAD
    Length: renal to left iliac bifurcation | 7.1 mm | 6.9 mm
    Length: renal to right iliac bifurcation | 10.4 mm | 9.6 mm
    Diameter: wall right iliac | 2.7 mm | 2.5 mm
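The letter does not define MPAD formally; a plausible reading, sketched below with assumed array shapes, averages absolute differences over all expert pairs (expert-expert) and over device-versus-each-expert comparisons (device-expert):

```python
import itertools
import numpy as np

def expert_expert_mpad(experts: np.ndarray) -> float:
    """Mean pairwise absolute difference among experts; experts has shape (n_experts, n_cases)."""
    pairs = itertools.combinations(range(experts.shape[0]), 2)
    return float(np.mean([np.abs(experts[i] - experts[j]).mean() for i, j in pairs]))

def device_expert_mpad(device: np.ndarray, experts: np.ndarray) -> float:
    """Mean absolute difference between the device output (n_cases,) and each expert, averaged."""
    return float(np.mean([np.abs(device - e).mean() for e in experts]))
```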

    Study Details for Device Performance Evaluation:

    1. Sample size used for the test set and the data provenance:

      • Auto-segmentation masks and Landmark Identification: The document does not explicitly state the sample size for this specific test, but it mentions using "clinical data, including aortic aneurysm cases from both US and Canadian clinical centers."
      • Diameters and Lengths: The document does not explicitly state the sample size for this specific test, but it mentions using "clinical data, including aortic aneurysm cases from both US and Canadian clinical centers."
      • Volumes: 40 CT scans. The data provenance is "clinical data, including aortic aneurysm cases from both US and Canadian clinical centers." The studies were retrospective, as they involved existing clinical data.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • Auto-segmentation masks and Landmark Identification: 3 US-based board-certified Radiologists.
      • Diameters and Lengths: 3 US-based board-certified Radiologists.
      • Volumes: The ground truth for volumes was established using a reference device (Simpleware ScanIP Medical), not directly by human experts, although the input segmentations for both the device and the reference device were analyst-revised.
    3. Adjudication method for the test set:

      • Auto-segmentation masks and Landmark Identification: Ground truth was "annotations approved by 3 US-based board-certified Radiologists." This implies consensus or a primary reader with adjudication, but the exact method (e.g., 2+1, 3+1) is not specified.
      • Diameters and Lengths: Ground truth was "annotations from 3 US-based board-certified Radiologists." Similar to above, the specific consensus method is not detailed.
      • Volumes: Ground truth was established by a reference device, Simpleware ScanIP Medical.
    4. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance:

      • No MRMC comparative effectiveness study was explicitly mentioned in the provided text. The testing focused on the standalone performance of the AI-powered components and the consistency of the device's measurements with expert annotations, not on human reader improvement with AI assistance.
    5. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:

      • Yes, a standalone performance evaluation of the auto-masking algorithm (prior to analyst correction) was performed for auto-segmentation masks and landmark identification. The results demonstrated the performance of the auto-masking algorithm "independently of human intervention."
      • However, for diameters and lengths, the measurements were "based on segmentations that underwent Analyst review and correction, ensuring that the verification reflects real-world use conditions." This suggests a semi-automatic, human-in-the-loop performance evaluation for these specific metrics.
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

      • Expert Consensus: Used for auto-segmentation masks, landmark identification, diameters, and lengths. The consensus involved 3 US-based board-certified Radiologists.
      • Reference Device: Used for volumes, comparing against results from Simpleware ScanIP Medical.
    7. The sample size for the training set:

      • The document does not explicitly state the sample size for the training set. It mentions "critical algorithms were verified by comparing their outputs to ground truth data to ensure accuracy and reliability. Algorithms were first verified using synthetic data...Subsequent verification was performed using clinical data, including aortic aneurysm cases from both US and Canadian clinical centers." This refers to verification data, not necessarily the training data size.
    8. How the ground truth for the training set was established:

      • The document does not provide details on how the ground truth for the training set was established. It only describes the ground truth for the verification/test sets. It can be inferred that similar expert review or other validated methods would have been used for training data, but this is not explicitly stated.

    K Number: K251027
    Date Cleared: 2025-10-27 (208 days)
    Regulation Number: 892.2050
    Predicate For: N/A
    Intended Use

    Viewing, post-processing, qualitative and quantitative evaluation of blood vessels and cardiovascular CT images in DICOM format.

    Indications for Use

    cvi42 Coronary Plaque Software Application is intended to be used for viewing, post-processing, qualitative and quantitative evaluation of cardiovascular computed tomography (CT) images in a Digital Imaging and Communications in Medicine (DICOM) Standard format.

    It enables a set of tools to assist physicians in qualitative and quantitative assessment of cardiac CT images to determine the presence and extent of coronary plaques and stenoses, in patients who underwent Coronary Computed Tomography Angiography (CCTA) for evaluation of CAD or suspected CAD.

    cvi42 Coronary Plaque's semi-automated machine learning algorithms are intended for an adult population.

    cvi42 Coronary Plaque shall be used only for cardiac images acquired from a CT scanner. It shall be used by qualified medical professionals, experienced in examining cardiovascular CT images, for the purpose of obtaining diagnostic information as part of a comprehensive diagnostic decision-making process.

    Device Description

    Circle's cvi42 Coronary Plaque Software Application ('cvi42 Coronary Plaque' or 'Coronary Plaque Module', for short) is a Software as a Medical Device (SaMD) that enables the analysis of CT Angiography scans of the coronary arteries of the heart. It is designed to support physicians in the visualization, evaluation, and analysis of coronary vessel plaques through manual or semi-automatic segmentation of vessel lumen and wall to determine the presence and extent of coronary plaques and luminal stenoses, in patients who underwent Coronary Computed Tomography Angiography (CCTA) for the evaluation of coronary artery disease (CAD) or suspected CAD. The device is intended to be used as an aid to the existing standard of care and does not replace existing software applications that physicians use. The Coronary Plaque Module can be integrated into image viewing software intended for visualization of cardiac images, such as Circle's FDA-cleared cvi42 Software Application. The Coronary Plaque Module does not interface directly with any data collection equipment, and its functionality is independent of the type of vendor acquisition equipment. The analysis results are available on-screen and can be sent to a report or saved for future review.

    The Coronary Plaque Module consists of multiplanar reconstruction (MPR) views, curved planar reformation (CPR) and straightened views, and 3D rendering of the original CT data. The Module displays three orthogonal MPR views that the user can freely adjust to any position and orientation. Lines and regions of interest (ROIs) can be manually drawn on these MPR images for quantitative measurements.

    The Coronary Plaque Module implements an Artificial Intelligence/Machine Learning (AI/ML) algorithm to detect lumen and vessel wall structures. Further, the module implements post-processing methods to convert coronary artery lumen and vessel wall structures to editable surfaces and detect the presence and type of coronary plaque in the region between the lumen and vessel wall. All surfaces generated by the system are editable and users are advised to verify and correct any errors.

    The device allows users to perform the measurements listed in Table 1.

    AI/ML Overview

    Here's a summary of the acceptance criteria and study details based on the provided FDA 510(k) Clearance Letter for the cvi42 Coronary Plaque Software Application:

    1. Table of Acceptance Criteria and Reported Device Performance

    Endpoint | Acceptance Criteria (Implied) | Reported Device Performance | Pass/Fail
    Lumen Mean Dice Similarity Coefficient (DSC) | inferred ≥ 0.76 | 0.76 | Pass
    Wall Mean Dice Similarity Coefficient (DSC) | inferred ≥ 0.80 | 0.80 | Pass
    Lumen Mean Hausdorff Distance (HD) | inferred ≤ 0.77 mm | 0.77 mm | Pass
    Wall Mean Hausdorff Distance (HD) | inferred ≤ 0.87 mm | 0.87 mm | Pass
    Total Plaque (TP) Pearson Correlation Coefficient (PCC) | inferred ≥ 0.97 | 0.97 | Pass
    Calcified Plaque (CP) Pearson Correlation Coefficient (PCC) | inferred ≥ 0.99 | 0.99 | Pass
    Non-Calcified Plaque (NCP) Pearson Correlation Coefficient (PCC) | inferred ≥ 0.93 | 0.93 | Pass
    Low-Attenuation Plaque (LAP) Pearson Correlation Coefficient (PCC) | inferred ≥ 0.74 | 0.74 | Pass

    Note: The acceptance criteria for each endpoint are not explicitly numerical in the provided text. They are inferred to be "met Circle's pre-defined acceptance criteria" and are presented here as the numeric value reported, implying that the reported value met or exceeded the internal acceptance threshold for a 'Pass'.
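As background for the table's metrics, the sketch below shows conventional computations of a symmetric Hausdorff distance between contour point sets and a Pearson correlation between plaque volumes. The exact variants Circle used (e.g., how values were averaged over cases) are not specified in the letter, and the sample values here are hypothetical:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff
from scipy.stats import pearsonr

def hausdorff_distance(contour_a: np.ndarray, contour_b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two point sets of shape (n, 3), in mm."""
    return max(directed_hausdorff(contour_a, contour_b)[0],
               directed_hausdorff(contour_b, contour_a)[0])

# Plaque-volume agreement as a Pearson correlation (hypothetical values, mm^3):
device_volumes = np.array([12.1, 30.4, 8.7, 55.2])
reference_volumes = np.array([11.8, 31.0, 9.1, 54.5])
r, _ = pearsonr(device_volumes, reference_volumes)
```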

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: Not explicitly stated. The document mentions "All data used for validation were not used during the development of the ML algorithms" and "Image information for all samples was anonymized and limited to ePHI-free DICOM headers." However, the specific number of cases or images in the test set is not provided.
    • Data Provenance: Sourced from multiple sites, with 100% of the data sampled from US sources. The data consisted of images acquired from major vendors of CT imaging devices.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Number of Experts: Three expert annotators were used.
    • Qualifications of Experts: Not explicitly stated beyond "expert annotators." The document implies they are experts in coronary vessel and lumen wall segmentation within cardiac CT images.

    4. Adjudication Method for the Test Set

    The ground truth was established "from three expert annotators." While it doesn't explicitly state "2+1" or "3+1", the use of three annotators suggests a consensus-based adjudication, likely majority vote (e.g., if two out of three agreed, that constituted the ground truth, or a more complex consensus process). The specific method is not detailed.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done

    No. The document states, "No clinical studies were necessary to support substantial equivalence." The evaluation was primarily based on the performance of the ML algorithms against a reference standard established by experts, not on how human readers improved their performance with AI assistance.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done

    Yes. The performance evaluation focused on the "performance of the ML-based coronary vessel and lumen wall segmentation algorithm... evaluated against pre-defined acceptance criteria and compared to a reference standard established from three expert annotators." This indicates a standalone performance assessment of the algorithm's output. The device is also described as having "semi-automated machine learning algorithms", implying the user can verify and correct.

    7. The Type of Ground Truth Used

    Expert Consensus. The ground truth was established "from three expert annotators," indicating that human experts' annotations formed the reference standard against which the algorithm's performance was measured.

    8. The Sample Size for the Training Set

    Not explicitly stated. The document mentions the ML algorithms "have been trained and tested on images acquired from major vendors of CT imaging devices," but it does not provide the specific sample size for the training set. It only clarifies that the validation data was not used for training.

    9. How the Ground Truth for the Training Set Was Established

    Not explicitly stated. The document describes how the ground truth for the validation/test set was established (three expert annotators). It does not provide details on how the ground truth for the training set was generated.


    K Number: K251059
    Date Cleared: 2025-10-24 (203 days)
    Regulation Number: 892.2050
    Predicate For: N/A
    Intended Use

    Syngo Carbon Clinicals is intended to provide advanced visualization tools to prepare and process the medical image for evaluation, manipulation and communication of clinical data that was acquired by the medical imaging modalities (for example, CT, MR, etc.)

    OrthoMatic Spine provides the means to perform musculoskeletal measurements of the whole spine, in particular spine curve angle measurements.

    The TimeLens provides the means to compare a region of interest between multiple time points.

    The software package is designed to support technicians and physicians in qualitative and quantitative measurements and in the analysis of clinical data that was acquired by medical imaging modalities.

    An interface shall enable the connection between the Syngo Carbon Clinicals software package and the interconnected software solution for viewing, manipulation, communication, and storage of medical images.

    Device Description

    Syngo Carbon Clinicals is a software-only medical device that provides dedicated advanced imaging tools for diagnostic reading. These tools can be called up, through standard interfaces, by any native/syngo-based viewing application (hosting application) that is part of the SYNGO medical device portfolio. They help prepare and process medical images for the evaluation, manipulation, and communication of clinical data acquired by medical imaging modalities (e.g., MR, CT, etc.).

    Deployment Scenario: Syngo Carbon Clinicals is a plug-in that can be added to any SYNGO-based hosting application (for example, Syngo Carbon Space, syngo.via, etc.). The hosting application (native/syngo Platform-based software) is not described within this 510(k) submission. The hosting device decides which tools from Syngo Carbon Clinicals are used; it does not need to host all of them, and a desired subset of the provided tools can be enabled or disabled through licenses.

    When preparing the radiologist's reading workflow on a dedicated workplace or workstation, Syngo Carbon Clinicals can be called to generate additional results or renderings according to the user needs using the tools available.

    AI/ML Overview

    This document describes performance evaluation for two specific tools within Syngo Carbon Clinicals (VA41): OrthoMatic Spine and TimeLens.

    1. Table of Acceptance Criteria and Reported Device Performance

    Feature/Tool | Acceptance Criteria | Reported Device Performance
    OrthoMatic Spine | The algorithm's measurement deviations for major spinal measurements (Cobb angles, thoracic kyphosis angle, lumbar lordosis angle, coronal balance, and sagittal vertical alignment) must fall within the range of inter-reader variability. | Cumulative Distribution Functions (CDFs) demonstrated that the algorithm's measurement deviations fell within the range of inter-reader variability for the major Cobb angle, thoracic kyphosis angle, lumbar lordosis angle, coronal balance, and sagittal vertical alignment, indicating that the algorithm replicates average rater performance and meets the clinical reliability acceptance criteria.
    TimeLens | Not specified; a reader study/bench test was not required given its nature as a simple workflow enhancement algorithm. | No quantitative performance metrics are provided, as clinical performance evaluation methods (reader studies) were deemed unnecessary.
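To make the CDF-based criterion concrete, one common construction, sketched below with hypothetical deviations, compares the empirical CDF of the algorithm's absolute deviations from the reader mean against the CDF of inter-reader deviations:

```python
import numpy as np

def empirical_cdf(errors: np.ndarray):
    """Sorted absolute errors and their cumulative probabilities."""
    x = np.sort(np.abs(errors))
    return x, np.arange(1, len(x) + 1) / len(x)

# Hypothetical per-case Cobb-angle deviations (degrees):
algo_vs_reader_mean = np.array([1.2, 0.8, 2.5, 1.9, 3.1])
reader_vs_reader = np.array([1.5, 2.2, 2.8, 1.1, 3.6])

ax, ap = empirical_cdf(algo_vs_reader_mean)
rx, rp = empirical_cdf(reader_vs_reader)
# If the algorithm's CDF sits at or left of the inter-reader CDF (smaller errors
# at each cumulative probability), its deviations fall within inter-reader variability.
```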

    2. Sample Size Used for the Test Set and Data Provenance

    • OrthoMatic Spine:

      • Test Set Sample Size: 150 spine X-ray images (75 frontal views, 75 lateral views) were used in a reader study.
      • Data Provenance: The document states that the main dataset for training includes data from USA, Germany, Ukraine, Austria, and Canada. While this specifies the training data provenance, the provenance of the specific 150 images used for the reader study (test set) is not explicitly segregated or stated here. The study involved US board-certified radiologists, implying the test set images are relevant to US clinical practice.
      • Retrospective/Prospective: Not explicitly stated, but the description of "collected" images and patients with various spinal conditions suggests a retrospective collection of existing exams.
    • TimeLens: No specific test set details are provided as a reader study/bench test was not required.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • OrthoMatic Spine:

      • Number of Experts: Five US board-certified radiologists.
      • Qualifications: US board-certified radiologists. No specific years of experience are mentioned.
      • Ground Truth for Reader Study: The "mean values obtained from the radiologists' assessments" for the 150 spine X-ray images served as the reference for comparison against the algorithm's output.
    • TimeLens: Not applicable, as no reader study was conducted.

    4. Adjudication Method for the Test Set

    • OrthoMatic Spine: The algorithm's output was assessed against the mean values obtained from the five radiologists' assessments. This implies a form of consensus or average from multiple readers rather than a strict 2+1 or 3+1 adjudication.
    • TimeLens: Not applicable.

    5. Whether a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done and, If So, the Effect Size of How Much Human Readers Improve with AI vs. without AI Assistance

    • OrthoMatic Spine: A reader study was performed, which is a type of MRMC study. However, this was a standalone performance evaluation of the algorithm against human reader consensus, not a comparative effectiveness study with and without AI assistance for human readers. Therefore, there is no reported "effect size of how much human readers improve with AI vs without AI assistance." The study aimed to show the algorithm replicates average human rater performance.
    • TimeLens: Not applicable.

    6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) was done

    • OrthoMatic Spine: Yes, a standalone performance evaluation of the OrthoMatic Spine algorithm (without human-in-the-loop assistance) was conducted. The algorithm's measurements were compared against the mean values derived from five human radiologists.
    • TimeLens: The description suggests the TimeLens tool itself is a "simple workflow enhancement algorithm" and its performance was evaluated through non-clinical verification and validation activities rather than a specific standalone clinical study with an AI algorithm providing measurements.

    7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.)

    • OrthoMatic Spine:
      • For the reader study (test set performance evaluation): Expert consensus (mean of five US board-certified radiologists' measurements) was used to assess the algorithm's performance.
      • For the training set: The initial annotations were performed by trained non-radiologists and then reviewed by board-certified radiologists. This can be considered a form of expert-verified annotation.
    • TimeLens: Not specified, as no clinical ground truth assessment was required.

    8. The Sample Size for the Training Set

    • OrthoMatic Spine:
      • Number of Individual Patients (Training Data): 6,135 unique patients.
      • Number of Images (Training Data): A total of 23,464 images were collected within the entire dataset, which was split 60% for training, 20% for validation, and 20% for model selection. Therefore, the training set would comprise approximately 60% of both the patient count and image count. So, roughly 3,681 patients and 14,078 images.
    • TimeLens: Not specified.

    9. How the Ground Truth for the Training Set Was Established

    • OrthoMatic Spine: Most images in the dataset (used for training, validation, and model selection) were annotated using a dedicated annotation tool (Darwin, V7 Labs) by a US-based medical data labeling company (Cogito Tech LLC). Initial annotations were performed by trained non-radiologists and subsequently reviewed by board-certified radiologists. This process was guided by written guidelines and automated workflows to ensure quality and consistency, with annotations including vertebral landmarks and key vertebrae (C7, L1, S1).
    • TimeLens: Not specified.

    K Number: K250288
    Date Cleared: 2025-10-23 (265 days)
    Regulation Number: 892.2050
    Predicate For: N/A
    Intended Use

    TeraRecon Cardiovascular.Calcification.CT is intended to provide an automatic 3D segmentation of calcified plaques within the coronary arteries and outputs a mask for calcium scoring systems to use for calculations. The results of TeraRecon Cardiovascular.Calcification.CT are intended to be used in conjunction with other patient information by trained professionals who are responsible for making any patient management decision per the standard of care. TeraRecon Cardiovascular.Calcification.CT is a software as a medical device (SaMD) deployed as a containerized application. The device inputs are CT heart without contrast DICOM images. The device outputs are DICOM result files which may be viewed utilizing DICOM-compliant systems. The device does not alter the original input data and does not provide a diagnosis.

    TeraRecon Cardiovascular.Calcification.CT is indicated to generate results from Calcium Score CT scans taken of adult patients, 30 years and older, except patients with pre-existing cardiac devices, electrodes, previous and established ischemic diseases (IMA, bypass grafts, stents, PTCA) and Thoracic metallic devices. The device is not specific to any gender, ethnic group, or clinical condition. The device's use should be limited to CT scans acquired on General Electric (GE) or Siemens Healthcare or their subsidiaries (e.g. GE Healthcare) equipment. Use of the device with CT scans from other manufacturers is not recommended.

    Device Description

    The TeraRecon Cardiovascular.Calcification.CT algorithm is an image processing software device that can be deployed as a containerized application (e.g., Docker container) that runs on off-the-shelf hardware or on a cloud platform. The device provides an automatic 3D segmentation of the coronary calcifications.

    When TeraRecon Cardiovascular.Calcification.CT results are used in external viewer devices such as TeraRecon's Intuition or Eureka Clinical AI medical devices, all the standard features offered by the external viewer are employed.

    The TeraRecon Cardiovascular.Calcification.CT algorithm is not intended to replace the skill and judgment of a qualified medical practitioner and should only be used by individuals that have been trained in the software's function, capabilities, and limitations.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the TeraRecon Cardiovascular.Calcification.CT device, based on the provided FDA 510(k) clearance letter:


    Acceptance Criteria and Device Performance

    Acceptance Criteria | Reported Device Performance
    Agatston classification accuracy: at least 80% accuracy for the 4 revised Agatston classes (0-10, 11-100, 101-400, >400), with a lower-bound 95% confidence interval (CI) of at least 75% | Passed. Mean accuracies exceeded 94% across Agatston categories, with 95% CI lower bounds above 75%.
    Vessel calcification classification (Dice similarity coefficient): at least 80% Dice with a lower-bound 95% CI of at least 75% for segmentation of calcifications by vessel (LM, LAD, LCX, RCA) | Passed. Segmentation performance, measured by Dice similarity coefficient against expert annotations, consistently exceeded the predefined acceptance criteria (≥ 80% Dice with lower 95% CI ≥ 75%).
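For orientation, the sketch below bins an Agatston score into the four revised classes named in the criterion and computes one common choice of lower confidence bound for a classification accuracy; the letter does not state which interval TeraRecon actually used, so the Wilson interval here is an assumption:

```python
import math

def agatston_class(score: float) -> str:
    """Bin an Agatston score into the four revised categories from the criterion."""
    if score <= 10:
        return "0-10"
    if score <= 100:
        return "11-100"
    if score <= 400:
        return "101-400"
    return ">400"

def wilson_lower_bound(correct: int, n: int, z: float = 1.96) -> float:
    """Lower bound of the 95% Wilson score interval for an accuracy proportion."""
    p = correct / n
    centre = p + z * z / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (centre - margin) / (1 + z * z / n)

# Example: 400 of 422 test cases correct gives a lower bound well above 75%.
```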

    Study Details

    1. Sample Size Used for the Test Set:
    The test set included 422 adult patients.

    2. Data Provenance (Country of Origin, Retrospective/Prospective):

    • Retrospective cohort study.
    • At least 50% of the ground truth data is from the US, divided between multiple locations.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:

    • 3 annotators (experts) were used for each study to segment coronary vessels and apply thresholds to create calcification masks within the vessels.
    • Qualifications of experts: Not explicitly stated in the provided text.

    4. Adjudication Method for the Test Set:

    • Majority vote (2-of-3): a voxel entered the final calcification segmentation ground truth if it was part of at least 2 of the masks defined by the three annotators (see the sketch after this list).
    • The ground-truth vessel assignment for each calcification was likewise attained by majority vote among the 3 annotators.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

    • No MRMC comparative effectiveness study involving human readers with and without AI assistance was mentioned. The study focused on standalone device performance against expert-established ground truth.

    6. Standalone Performance:

    • Yes, a standalone (algorithm only without human-in-the-loop performance) study was conducted. The results reported are directly attributed to the TeraRecon Cardiovascular.Calcification.CT device's performance against ground truth.

    7. Type of Ground Truth Used:

    • Expert consensus based on annotations from three experts. The experts segmented coronary vessels and applied thresholds to create calcification masks. The final ground truth was established by majority vote among these annotators for both the calcification mask and the vessel classification.

    8. Sample Size for the Training Set:

    • The document does not explicitly state the sample size used for the training set. It only describes the test set.

    9. How the Ground Truth for the Training Set was Established:

    • The document does not explicitly state how the ground truth for the training set was established. It only describes the ground truth establishment for the test set.
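A minimal sketch of the 2-of-3 voxel-wise majority vote described in item 4 above (array layout assumed; not code from the submission):

```python
import numpy as np

def majority_vote_mask(masks: list) -> np.ndarray:
    """Voxel-wise majority vote over binary annotator masks of identical shape:
    a voxel enters the ground truth if at least 2 of the 3 annotators marked it."""
    stacked = np.stack([np.asarray(m, dtype=np.uint8) for m in masks])
    return (stacked.sum(axis=0) >= 2).astype(np.uint8)
```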

    K Number: K252298
    Device Name: ANDI 2.0
    Date Cleared: 2025-10-22 (91 days)
    Regulation Number: 892.2050
    Predicate For: N/A
    Intended Use

    ANDI is intended for the display of medical images and other healthcare data. It includes functions for processing MR images, atlas-assisted visualization, segmentation, and volumetric quantification of segmentable brain structures. The output is generated for use by a system capable of reading DICOM image sets.

    The information presented by ANDI does not provide prediction, diagnosis, or interpretation of brain health. Clinical interpretation and decision-making are the responsibility of the physician, who must review all clinical information associated with a patient in order to make a diagnosis and to determine the next steps in the clinical care of the patient.

    Typical users of ANDI are medical professionals, including but not limited to neurologists and radiologists. ANDI should be used only as adjunctive information. The decision made by trained medical professionals will be considered final.

    Device Description

    ANDI is software as a medical device (SaMD) that can be deployed on a cloud-based system, or installed on-premises. It is delivered as software as a service (SaaS) and operates without a graphical user interface. The software can be used to perform DICOM image viewing, image processing, and analysis, specifically designed to analyze brain MRI data. It processes diffusion-weighted and T1-weighted images to quantify and visualize white matter microstructure, providing adjunctive information to aid clinical evaluation. An optional AI-based segmentation feature enables quantification of the volume of gray matter regions. The results are output in a report that presents reference information to assist trained medical professionals in clinical decision-making by enabling comparisons between a patient's data, a normative database, and the patient's longitudinal data.

    AI/ML Overview

    The document is a 510(k) clearance letter for ANDI 2.0. The device is a "Medical image management and processing system" that processes MR images for atlas-assisted visualization, segmentation, and volumetric quantification of segmentable brain structures. It provides adjunctive information to aid clinical evaluation, with the final clinical interpretation and decision-making remaining the responsibility of the physician.

    Here's an analysis of the acceptance criteria and the study that proves the device meets them:

    1. A table of acceptance criteria and the reported device performance:

    Acceptance Criteria | Reported Device Performance
    Accuracy and robustness of brain region segmentation (Dice coefficient):
    ≥ 0.75 for major subcortical brain structures | Average Dice coefficients ranged from 0.89 to 0.96 for major subcortical brain structures.
    ≥ 0.8 for major cortical brain structures | Average Dice coefficients ranged from 0.79 to 0.93 for major cortical brain structures.
    Reproducibility of brain region segmentation (maximum absolute volume difference):
    Maximum absolute volume difference below 7% | Mean absolute volume difference of 2.1% across major cortical and subcortical brain structures, with individual structures ranging from 1.2% to 3.9%.
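For the reproducibility criterion, a typical computation of the absolute volume difference between two timepoints is sketched below; the letter does not state which denominator was used, so normalizing by the mean of the two volumes is an assumption:

```python
def abs_volume_difference_pct(volume_t1: float, volume_t2: float) -> float:
    """Absolute volume difference between two timepoints, as a percentage of
    the mean volume (denominator choice is an assumption)."""
    return abs(volume_t1 - volume_t2) / ((volume_t1 + volume_t2) / 2.0) * 100.0
```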

    2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective):

    • Accuracy and Robustness Test Set:

      • Sample Size: 71 subjects.
      • Demographics: 35 females, 36 males; age range 18-86 years.
      • Geographic Origin: 38 subjects were of USA origin.
      • Health Status: 35 healthy subjects; 36 diseased subjects (Multiple Sclerosis (n=11), Parkinson's disease (n=12), Alzheimer's disease (n=12), mild cognitive impairment (n=1)).
      • Data Provenance: The document does not explicitly state if this data was retrospective or prospective, but it implies pre-existing data with the phrase "images preprocessed by ANDI". The selection process (stratified by age, gender, pathology, MRI manufacturer, and field strength) suggests a retrospective collection to represent a diverse population.
    • Reproducibility Test Set:

      • Sample Size: 59 subjects (with 2 timepoints each).
      • Demographics: 30 females, 29 males; age range 23-86 years.
      • Geographic Origin: 38 subjects were of USA origin.
      • Health Status: Only healthy subjects were selected to avoid bias from disease progression.
      • Data Provenance: The document does not explicitly state if this data was retrospective or prospective. The mention of "2 timepoints" indicates longitudinal data, which could be from either prospective follow-ups or retrospectively re-analyzed longitudinal datasets.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Number of Experts: A panel of 3 board-certified neuroradiologists.
    • Qualifications: Board-certified neuroradiologists.
    • Process: 71 preprocessed T1 images were first pre-segmented using Freesurfer v7.4.1. The resulting segmentations were then manually corrected by "an expert" (singular, qualifications not specified, but likely a trained individual) and subsequently "approved" by the panel of 3 board-certified neuroradiologists.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

    The adjudication method for establishing ground truth was a consensus-based approval process by a panel of 3 board-certified neuroradiologists, following initial manual correction by a single expert. This can be seen as a form of expert consensus and approval, but not a specific "2+1" or "3+1" voting method as typically applied when multiple readers independently rate and then adjudicate. Here, the neuroradiologists approved an already corrected segmentation.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance:

    No, a multi-reader multi-case (MRMC) comparative effectiveness study comparing human readers with AI assistance versus without AI assistance was not done. The document explicitly states: "No clinical studies were considered necessary and performed." The performance testing focused on the standalone algorithm's accuracy and reproducibility against an expert-approved ground truth.

    6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:

    Yes, a standalone algorithm-only performance study was done. The sections "AI / ML performance data" detail the evaluation of "ANDI 2.0's brain regions segmentation" against an "expert approved ground truth" using Dice coefficients and volume differences. This assesses the algorithm's performance independent of real-time human interaction.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    The ground truth used for the test set was expert consensus, derived from Freesurfer pre-segmentations, manually corrected by an expert, and then approved by a panel of 3 board-certified neuroradiologists.

    8. The sample size for the training set:

    • AI/ML Module Training Set: 140 representative subjects.
    • AI/ML Module Validation Set: 747 independent subjects.

    9. How the ground truth for the training set was established:

    The document states that the device incorporates a "pretrained third-party brain segmentation algorithm." It mentions that this algorithm was "subjected to training using 140 representative subjects" and "Validation data included 747 independent subjects." However, the document does not explicitly describe how the ground truth for this training or validation set was established by the third party. It only mentions that the data for evaluation of ANDI's integration of this algorithm was independent ("ensuring data independence since ANDI-preprocessed images were not available for the training of the algorithm by the third-party algorithm").


    K Number: K252665
    Date Cleared: 2025-10-20 (56 days)
    Regulation Number: 892.2050
    Predicate For: N/A
    Intended Use

    brAIn™ Shoulder Positioning is intended to be used as an information tool to assist in the preoperative surgical planning and visualization of a primary total shoulder replacement.

    Device Description

    The brAIn™ Shoulder Positioning software is a cloud-based application intended for shoulder surgeons. The software does not perform surgical planning but provides tools to assist the surgeon with planning primary anatomic and reverse total shoulder replacement surgeries using FX Shoulder Solutions implants. The software is accessible via a web-based interface, where the user is prompted to upload their patient's shoulder CT-scan (DICOM series) accompanied with their information in a dedicated interface. The software automatically segments (using machine learning) and performs measurements on the scapula and humerus anatomy contained in the DICOM series. These segmentations serve as a foundation for the surgeon's manual planning, which is performed using an interactive 3D viewer that allows for soft tissue visualization. The surgeon positions the glenoid and humerus implants manually within this same 3D interface using a dedicated manipulation panel. The changes in shoulder anatomy resultant from the implants are relayed in a post-position interface that displays information related to distalization and lateralization. The software outputs a planning multimodal summary that includes textual information (patient information, pre- and post-op measurements) and visual information (screen captures of the shoulder pre- and post-implantation).

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study details for the brAIn™ Shoulder Positioning device, based on the provided FDA 510(k) clearance letter:


    Table of Acceptance Criteria and Reported Device Performance

    Feature/Metric | Acceptance Criteria | Reported Device Performance
    Segmentation performance | Mean Dice Similarity Coefficient (DSC) ≥ 0.95 | Met. The segmentation produced by the model after post-processing closely matched the ground truth, with a DSC of 0.95 or higher.
    Shoulder side detection | Correct shoulder side (right or left) detected in DICOM images | All performance tests completed with no deviations, confirming compliance with the required performance standards.
    Measurement accuracy (angles) | ≤ 1° for angle measurements | All performance tests completed with no deviations.
    Measurement accuracy (distances) | ≤ 1 mm for distance measurements | All performance tests completed with no deviations.
    Measurement accuracy (3D subluxation) | ≤ 1% for 3D subluxation | All performance tests completed with no deviations.
    Landmark performance | Mean distance ≤ 3 mm from final positions adjusted by experts | All performance tests completed with no deviations; accuracy similar to manual positioning.
    Streaming stability | No performance degradation (frames per second, jitter, packet loss) with multiple simultaneous users | All performance tests completed with no deviations.
    Ruler performance | One-millimeter precision for linear (Euclidean) distance between two user-selected points on the scapula's unreamed 3D mesh | All performance tests completed with no deviations.
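As an illustration of the landmark criterion (mean distance ≤ 3 mm), a conventional computation is the mean Euclidean distance between predicted and expert-adjusted landmark positions, sketched below with assumed array shapes:

```python
import numpy as np

def mean_landmark_distance(predicted: np.ndarray, expert_adjusted: np.ndarray) -> float:
    """Mean Euclidean distance (mm) between corresponding landmarks;
    both arrays have shape (n_landmarks, 3)."""
    return float(np.linalg.norm(predicted - expert_adjusted, axis=1).mean())
```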

    Study Details

    1. Sample Size Used for the Test Set and Data Provenance:

      • Sample Size for Test Set: 173 samples.
      • Data Provenance: Retrospective, with a split based on patient gender, shoulder side, and geographical region of origin.
        • Geographical Origin (Test Set):
          • Left shoulder: 58.2% Europe (46), 41.8% USA (33)
          • Right shoulder: 56.4% Europe (53), 43.6% USA (41)
        • The data corresponds to patients that underwent total shoulder arthroplasty with an FX Shoulder implant, with diversity in gender, imaging equipment, institutions, and study year. The image acquisition protocol was standard for this type of procedure.
    2. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:

      • The document does not explicitly state the number of experts used.
      • Qualifications: "Medical professionals" are mentioned for creating manual segmentation labels. For landmark performance, "experts" adjusted final positions, but their specific qualifications are not detailed beyond "medical professionals." For shoulder side detection, a "Clinical Solutions Specialist" performed a manual assessment.
    3. Adjudication Method for the Test Set:

      • The document does not specify an explicit adjudication method such as 2+1 or 3+1 for establishing ground truth from multiple experts. It mentions labels created "manually by medical professionals" and "final positions adjusted by experts," implying a consensus or single-expert approach, but no detailed adjudication process is described.
    4. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

      • No, a multi-reader multi-case (MRMC) comparative effectiveness study evaluating human reader improvement with AI vs. without AI assistance was not explicitly mentioned or described in the provided information. The studies primarily focus on the standalone performance of the AI for various tasks.
    5. Standalone (Algorithm Only) Performance Study:

      • Yes, a standalone performance study was conducted. The "Segmentation Performance Testing," "Shoulder Side Detection performance testing," "Measurement Accuracy performance testing," "Landmark Performance Testing," "Streaming Stability Testing," and "Ruler Performance Testing" sections all describe the evaluation of the brAIn™ Shoulder Positioning software's algorithmic performance against established ground truths or benchmarks, without explicit human-in-the-loop interaction as part of the primary evaluation metrics.
    6. Type of Ground Truth Used:

      • Segmentation: Manual segmentation performed by medical professionals.
      • Shoulder Side Detection: Manual assessment performed by a Clinical Solutions Specialist.
      • Measurement Accuracy: Reported accuracy of the predicate device (for comparison when editing positions) and theoretical distances calculated from spatial coordinates (for ruler tool).
      • Landmark Performance: Final positions adjusted by experts.
    7. Sample Size for the Training Set:

      • Sample Size for Training Set: 335 samples (corresponding to 65.9% of the total dataset).
    8. How the Ground Truth for the Training Set Was Established:

      • The labels (ground truth) for both the training and testing sets were created "manually by medical professionals."

    K Number: K252214
    Device Name: AIAS Cephalon
    Date Cleared: 2025-10-07 (84 days)
    Regulation Number: 892.2050
    Predicate For: N/A
    Intended Use

    AIAS Cephalon is an image-processing software indicated to assist in the positioning of femur fracture implant components for adult patients. It is intended to assist in precisely positioning femur fracture implant components intraoperatively by measuring their positions relative to the bone structures of interest provided that the points of interest can be identified from radiology images (C-arm images). Clinical judgement and experience are required to properly use the device. The device is not for primary image interpretation. The software is not for use on mobile phones.

    Device Description

    AIAS Cephalon is a fully automated software as a medical device (SaMD) that provides image analysis features and intraoperative instructions for proximal femur fracture surgery supporting the positioning of the Depuy Synthes TFN-ADVANCED Proximal femoral nail (TFNA) implant.

    The instructions are based on the intraoperative X-ray images acquired. The system automatically detects anatomy, implants, and tools in the X-ray images. Based on what it sees in the X-ray image, it provides different kinds of information. Every new X-ray image acquired triggers a new system response. This means updated information is only available when a new X-ray image is acquired.

    AIAS Cephalon supports all of the following surgical steps of the procedure: determining the angle of anteversion and the Caput Collum Diaphyseal (CCD)/neck-shaft angle, finding the entry point at the greater trochanter, nail insertion, determination of length of head element, insertion of head element, distal locking of long TFNA nails, and determination of lengths of distal locking screws.

    AI/ML Overview

    Here's a summary of the acceptance criteria and the study that proves the device meets them, based on the provided FDA 510(k) clearance letter for AIAS Cephalon:

    1. Table of Acceptance Criteria and Reported Device Performance

    Measurement Type | Acceptance Criterion (Median Error) | Acceptance Criterion (95% Error Quantile) | Reported Device Performance (implied by "are met")
    Anteversion angle | Met | Met | Met
    CCD angle | Met | Met | Met
    Head element length | Met | Met | Met
    Tip-apex distance | Met | Met | Met
    Distal locking screw length | Met | Met | Met

    Note: The document states that the evaluations "have demonstrated that the acceptance criteria for the median error and 95% error quantiles of the measurements provided by the device (anteversion angle, CCD angle, head element length, tip-apex-distance, and distal locking screw length) are met." It does not provide the specific numerical thresholds for these criteria or the exact numerical results, but rather confirms that the criteria were satisfied.
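The two statistics the acceptance criteria reference, the median error and the 95% error quantile, are conventionally computed over per-case errors as sketched below (a standard definition, not code from the submission):

```python
import numpy as np

def error_summary(errors: np.ndarray) -> tuple:
    """Median absolute error and 95% error quantile over per-case errors."""
    abs_err = np.abs(errors)
    return float(np.median(abs_err)), float(np.quantile(abs_err, 0.95))
```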

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Description: "Comprehensive digitally reconstructed radiographic (DRR) datasets, which cover a wide range of real-world imaging scenarios, as well as real X-ray image data acquired from a study with human specimen."
    • DRR Test Set Size: At least 11,000 DRRs were computed from CT scans of 18 patients.
    • DRR Test Set Provenance: The CT scans were from patients with diverse demographic groups (56% male, 44% female; average age 72.3 and 71.1 years respectively; 56% White, 28% Asian, 11% Hispanic or Latino, and 6% Black or African American). The country of origin is not explicitly stated, but the mention of "real-world imaging scenarios" and diverse demographics suggests a broad representation. The data appears to be retrospective as it's CT scans used to generate DRRs for testing a device.
    • Real X-ray Image Data: "real X-ray image data acquired from a study with human specimen." The sample size for this specific real X-ray data is not explicitly given, nor is its provenance.
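
    The letter does not describe how the DRRs were generated. For intuition only, the sketch below shows the basic idea of a DRR: integrating approximate attenuation values through a CT volume to synthesize a radiograph-like projection. Real DRR pipelines use calibrated perspective ray casting; all names and constants here are illustrative assumptions.

```python
import numpy as np

def simple_drr(ct_volume_hu, axis=0):
    """Synthesize a crude DRR by integrating attenuation along one axis.

    Real DRR pipelines use perspective ray casting with calibrated
    source/detector geometry; this only conveys the basic idea.
    """
    # Map Hounsfield units to rough relative attenuation (air ~= 0).
    mu = np.clip((ct_volume_hu + 1000.0) / 1000.0, 0.0, None)
    line_integrals = mu.sum(axis=axis)           # parallel projection
    return 1.0 - np.exp(-0.02 * line_integrals)  # Beer-Lambert-style contrast

# Illustrative call on a random stand-in "CT volume" (values in HU).
ct = np.random.randint(-1000, 1500, size=(64, 64, 64)).astype(float)
drr = simple_drr(ct, axis=1)
print(drr.shape, float(drr.min()), float(drr.max()))
```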

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    • Number of Experts: One senior orthopedic surgeon.
    • Qualifications: "senior orthopaedic surgeon." No further details on years of experience or specific subspecialty are provided.

    4. Adjudication Method for the Test Set

    • The document states, "For all tests, the reference standard [ground truth] was validated by a senior orthopaedic surgeon." This implies a single expert validation of the ground truth, rather than a multi-expert adjudication method like 2+1 or 3+1.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • No MRMC study was done. The clearance letter states, "Substantial equivalence was not based on an assessment of clinical performance data." The device is intended to assist in positioning, and the focus of the performance testing was on the accuracy of its measurements and detections in a standalone capacity.

    6. Standalone Performance Study

    • Yes, a standalone study was done. The performance of the deep neural networks was tested both stand-alone and integrated into the device. The dedicated validation of image-processing accuracy using DRR datasets and real X-ray data demonstrates standalone algorithm performance; this evaluation of the "measurements provided by the device" reflects the algorithm's direct output.

    7. Type of Ground Truth Used

    • Expert Consensus (single expert validation): "For all tests, the reference standard was validated by a senior orthopaedic surgeon." This indicates expert-defined measurements or annotations as the ground truth.

    8. Sample Size for the Training Set

    • The document states, "The patients whose CT scans were used to compute the DRRs for testing were not used for training the neural networks nor for tuning of any image processing algorithms." However, the sample size for the training set is not provided in the given text.

    9. How Ground Truth for the Training Set Was Established

    • The document does not explicitly state how the ground truth for the training set was established. It only mentions that the ground truth for the test set was validated by a senior orthopedic surgeon.

    K Number
    K252007
    Device Name
    BlineSlide
    Manufacturer
    Date Cleared
    2025-10-06

    (101 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    The device is intended for noninvasive processing of ultrasound images to detect, measure, and calculate relevant medical parameters of structures and function of patients aged 18 years or older with suspected disease.

    Device Description

    Blineslide is a cloud service application that helps qualified users with image-based assessment of lung ultrasound (LUS) cines acquired from the anterior or anterolateral chest regions during a physician-led LUS examination of patients aged 18 years or older. It does not directly interface with ultrasound systems.

    Blineslide takes as input user-uploaded B-Mode LUS video clips (cines) in MP4 format and allows users to detect the relevant medical parameters of structures and function (LUS artifacts). Key features of the software are:

    • B Line Artifact Module: an AI-assisted tool for detecting the presence or absence of B line artifacts in LUS cines

    Blineslide is incompatible with:

    • Cines that are acquired from Linear array ultrasound transducers;
    • Cines acquired at less than 18 frames per second;
    • Cines that require more than 2048 megabytes of memory;
    • Cines that are less than 2600 milliseconds in duration; and
    • Cines that are greater than 7800 milliseconds in duration

    Each of these exclusion criteria is automatically assessed by the software. If any is detected, an output of Cannot Evaluate is returned to the user to minimize the risk of false LUS artifact detections (a minimal screening sketch follows below).
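
    For illustration, here is a minimal screening sketch using the thresholds from the compatibility list above; the Cine fields and function name are hypothetical, not Blineslide's actual interface.

```python
from dataclasses import dataclass

# Thresholds taken from the compatibility list above; the Cine fields
# and function name are hypothetical, not Blineslide's actual interface.
MIN_FPS = 18
MAX_MEGABYTES = 2048
MIN_DURATION_MS = 2600
MAX_DURATION_MS = 7800

@dataclass
class Cine:
    transducer: str       # e.g. "curvilinear", "phased", "linear"
    fps: float
    size_megabytes: float
    duration_ms: float

def screen_cine(cine: Cine) -> str:
    """Return 'Cannot Evaluate' when any exclusion criterion applies."""
    excluded = (
        cine.transducer == "linear"
        or cine.fps < MIN_FPS
        or cine.size_megabytes > MAX_MEGABYTES
        or not (MIN_DURATION_MS <= cine.duration_ms <= MAX_DURATION_MS)
    )
    return "Cannot Evaluate" if excluded else "Eligible for analysis"

print(screen_cine(Cine("curvilinear", fps=30, size_megabytes=120, duration_ms=4000)))
print(screen_cine(Cine("linear", fps=30, size_megabytes=120, duration_ms=4000)))
```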

    Blineslide does not perform any function that could not be accomplished by a trained user manually. Patient management decisions should not be made solely on the results of Blineslide's analysis.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the BlineSlide device, based on the provided FDA 510(k) clearance letter:


    Acceptance Criteria and Device Performance

    1. A table of acceptance criteria and the reported device performance:

    Metric      | Acceptance Criteria (Implied)                    | Reported Device Performance
    Sensitivity | Not explicitly stated, but high agreement expected | 0.91 (95% CI: 0.88 – 0.94)
    Specificity | Not explicitly stated, but high agreement expected | 0.84 (95% CI: 0.81 – 0.86)

    Note: The FDA 510(k) summary does not explicitly state pre-defined acceptance criteria for statistical metrics like sensitivity and specificity. Instead, the reported performance is presented to demonstrate substantial equivalence to the predicate device. The "implied" acceptance criteria are derived from the need for the device to be "as safe and as effective as the predicate device."
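
    The letter reports only the proportions and confidence intervals, not the underlying confusion matrix. As a hedged illustration, the sketch below scores hypothetical counts chosen to reproduce the reported proportions on 326 positive and 679 negative cines, treats Cannot Evaluate outputs as false predictions as described in the endpoint definition, and computes a Wilson 95% confidence interval; all counts and names are assumptions for illustration, not disclosed values.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

def sens_spec(tp, fn, tn, fp, ce_pos=0, ce_neg=0):
    """Sensitivity/specificity with Cannot Evaluate outputs scored as
    false predictions (added to the denominators as misses)."""
    sensitivity = tp / (tp + fn + ce_pos)
    specificity = tn / (tn + fp + ce_neg)
    return sensitivity, specificity

# Hypothetical counts consistent with 326 positive and 679 negative
# cines; the real confusion matrix is not disclosed in the letter.
sens, spec = sens_spec(tp=297, fn=25, tn=570, fp=105, ce_pos=4, ce_neg=4)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
print("Wilson 95% CI (sensitivity): (%.2f, %.2f)" % wilson_ci(297, 326))
```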


    Study Details

    2. Sample size used for the test set and the data provenance:

    • Sample Size for Test Set: 1005 cines in the final dataset, comprising 326 positive class examples (B Line Artifacts Present) and 679 negative class examples (B Line Artifacts Absent). Clips excluded for poor image quality (where expert consensus could not be reached) were removed before this final count.
    • Data Provenance:
      • Country of Origin: Not explicitly stated, but mentioned as "various clinical sites in cities with diverse race and ethnicity populations," and "geographically distinct from the data sources used in the development set." This implies a diverse, likely multi-site, geographical origin.
      • Retrospective or Prospective: Not explicitly stated, but, as is typical for such studies, the data was likely retrospective, collected from existing archives and then curated into a test set.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Number of Experts: "Two or more experts."
    • Qualifications of Experts: Not explicitly stated beyond "experts." However, given the context of identifying B line artifacts in lung ultrasound, it can be inferred that these experts would be physicians credentialed to use lung ultrasound clinically, such as intensivists, emergency physicians, pulmonologists, or other clinicians interpreting LUS cines, as described in the "Intended User" section.

    4. Adjudication method for the test set:

    • Adjudication Method: Consensus agreement of two or more experts. In rare cases where consensus could not be reached due to poor image quality, clips were excluded.
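
    The exact consensus rule is not detailed in the summary. A minimal sketch, assuming unanimous agreement defines consensus and that non-consensus clips are excluded, might look like this:

```python
def consensus_label(expert_labels):
    """Ground truth by consensus of two or more experts; returns None
    (clip excluded) when the experts do not agree."""
    if len(expert_labels) >= 2 and len(set(expert_labels)) == 1:
        return expert_labels[0]
    return None

print(consensus_label(["B lines present", "B lines present"]))  # agreed label
print(consensus_label(["B lines present", "B lines absent"]))   # None -> excluded
```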

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done:

    • No, a multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly reported in this 510(k) summary. The evaluation focused on the standalone performance of the AI algorithm against expert ground truth.

    6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:

    • Yes, a standalone performance assessment was done. The summary explicitly states: "The performance of the B Line Artifact Detection Module was successfully evaluated on a test dataset..." and "Performance was assessed by measuring agreement using sensitivity and specificity as co-primary endpoints with Cannot Evaluate outputs scored as false predictions." This directly describes standalone performance.

    7. The type of ground truth used:

    • Type of Ground Truth: Expert consensus (two or more experts).

    8. The sample size for the training set:

    • The sample size for the training set is not explicitly stated in the provided 510(k) summary. It only mentions that the "test data was entirely separated from that used for development" and the "data sources used in the test set were entirely different and geographically distinct from the data sources used in the development set."

    9. How the ground truth for the training set was established:

    • How the ground truth for the training set was established is not explicitly stated in the provided 510(k) summary. It is implied that ground truth was established during the development phase to train the "non-adaptive machine learning algorithms." This would typically involve expert annotations or labels, similar to the test set, but the specific methodology is not detailed.

    K Number
    K250023
    Device Name
    SMART PCFD
    Manufacturer
    Date Cleared
    2025-09-29

    (269 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    SMART PCFD software includes AI-powered algorithms and is intended to be used to support orthopedic healthcare professionals in the diagnosis and surgical planning of Progressive Collapsing Foot Deformity (PCFD) in a hospital or clinic environment. The medical image modality intended to be used in the software is weight-bearing CT (WBCT).

    SMART PCFD software provides for the user:

    • Visualization report of the three-dimensional (3D) mathematical models and measurements of the anatomical structures of foot and ankle and three-dimensional models of orthopedic fixation devices,
    • Measurement templates containing radiographic measures of foot and ankle, and
    • Surgical planning application for visualization of foot and ankle anatomical three-dimensional structures, radiographic measures, and surgical instrument parameters supporting the following common flatfoot procedures: Medial Displacement Calcaneal Osteotomy (MDCO), Lateral Column Lengthening (LCL), and Cotton Osteotomy (CO).

    The visualization report containing the measurements is intended to support orthopedic healthcare professionals in the diagnosis of PCFD. The surgical planning application, which combines visualizations of the three-dimensional structural models, orthopedic fixation device models, and surgical instrument parameters with the measurements, is intended to support orthopedic healthcare professionals in the surgical planning of PCFD.

    Device Description

    The SMART PCFD software is intended to be used in reviewing and digitally processing computed tomography images for the purposes of interpretation by a specialized medical practitioner. The device segments the medical images and creates a 3D model of the bones of the foot and ankle. Measurements, including anatomical axes, are provided to the user and the device allows for presurgical planning.

    The device includes the same machine-learning-derived outputs as the primary predicate SMART Bun-Yo-Matic CT (K240642) device, and no new validations were conducted.

    Details on the previously performed validation are summarized below. Testing on 82 CT image series showed 100% correct identification of the bones of the foot and ankle. The presence of metal was identified correctly for 98.8% of the images (specificity 98%, sensitivity 100%).

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study information for the SMART PCFD device, as extracted from the provided FDA 510(k) clearance letter:

    1. Table of Acceptance Criteria and Reported Device Performance

    The clearance letter does not explicitly state acceptance criteria in a formal table format with specific thresholds for each metric. Instead, it describes performance results. Based on the provided text, the acceptance criteria can be inferred from the reported performance, implying that these levels of performance were deemed acceptable.

    Feature Assessed            | Acceptance Criteria (Inferred from Performance)                                                              | Reported Device Performance
    Bone Identification         | 100% correctly identified bones of foot and ankle                                                            | 100% correctly identified bones of foot and ankle (for 82 CT image series)
    Metal Identification        | High specificity and sensitivity for metal identification                                                    | 98.8% correctly identified metal (specificity 98%, sensitivity 100%) (for 82 CT image series)
    Surgical Planning Component | Appropriate outputs for surgical planning (e.g., mathematical operations for estimated correction within certain tolerances) | Surgical planning executes mathematical operations for estimated correction within ±1 degree for angular measurements and ±1.0 mm for distance measurements
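
    The three metal-identification figures are mutually consistent. Under one plausible class split of the 82 series (inferred for illustration, not stated in the letter): 32 series with metal and 50 without, sensitivity 100% and specificity 98% imply a single false positive, which yields exactly the reported 98.8% overall accuracy:

```python
# Inferred split of the 82 series: 32 with metal, 50 without (the
# letter does not state the split; this is one consistent reading).
tp, fn = 32, 0   # sensitivity = 32/32 = 100%
tn, fp = 49, 1   # specificity = 49/50 = 98%

total = tp + fn + tn + fp
accuracy = (tp + tn) / total
print(f"n={total}, accuracy={accuracy:.3f}")  # n=82, accuracy=0.988
```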

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size: 82 CT image studies.
    • Data Provenance:
      • Country of Origin: Various sites across USA and Europe, with a minimum of 50% of the images originating from the USA.
      • Retrospective/Prospective: Not explicitly stated, but the description of collected studies from "patients with different ages and racial groups" and "clinical subgroups ranging from control/normal feet to pre-/post-operative clinical conditions" suggests retrospective data collection.
      • Patient Demographics: Different ages and racial groups, a minimum of 35% male/female within each dataset, mean age approximately 47 years (SD 15 years), and representation across White, (Non-)Hispanic, African American, and Native racial groups.
      • Clinical Conditions: Balanced in terms of subjects with different foot alignment, and subjects from clinical subgroups ranging from control/normal feet (44% of the test data) to pre-/post-operative clinical conditions such as Hallux Valgus, Progressive Collapsing Foot Deformity, fractures, or metal implants (40% of the test data).

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    • Number of Experts: Three (3).
    • Qualifications of Experts: U.S. Orthopedic surgeons. Specific years of experience are not mentioned.

    4. Adjudication Method for the Test Set

    • Adjudication Method: Majority vote. "Based on the majority vote of three, two same responses were required to establish a ground truth on each of the DICOM series." This indicates a "2-out-of-3" or "2+1" adjudication where two experts must agree to establish ground truth.
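
    As a minimal illustration of this 2-out-of-3 rule (labels and function name are assumed for the example, not taken from the clearance):

```python
from collections import Counter

def majority_ground_truth(votes):
    """2-out-of-3 adjudication: the label becomes ground truth when at
    least two of the three reviewers agree; otherwise no ground truth."""
    label, count = Counter(votes).most_common(1)[0]
    return label if count >= 2 else None

print(majority_ground_truth(["metal present", "metal present", "metal absent"]))
```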

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was an MRMC study done? No. The document describes standalone algorithm performance, and comparison to human readers with or without AI assistance is not mentioned.

    6. Standalone Performance Study

    • Was a standalone study done? Yes. The "Details on the previously performed validation are summarized below" section describes testing conducted on the algorithm itself, independently of human interaction. The reported device performance for bone and metal identification comes directly from this standalone evaluation.

    7. Type of Ground Truth Used

    • Type of Ground Truth: Expert consensus. The ground truths for bone and metal identification were "independently established by three (3) U.S. Orthopedic surgeons" who "reviewed each of the DICOM series through axial/sagittal/coronal views and/or 3D reconstruction and marked on a spreadsheet the presence of a bone and metal."

    8. Sample Size for the Training Set

    • AI algorithm for bone identification: 145 CT image studies.
    • Metal identification: 130 CT image studies.

    9. How the Ground Truth for the Training Set Was Established

    The document states that the "AI algorithm for bone identification was developed using 145 CT image studies and metal identification was developed using 130 CT image studies." It then goes on to describe how ground truths for the test set were established by three U.S. Orthopedic surgeons. However, the document does not explicitly describe how the ground truth for the training set was established. It's common practice for training data to also be annotated by experts, but the details of that process are not provided in this specific excerpt.
