Search Results

Found 231 results

510(k) Data Aggregation

    K Number: K244002
    Device Name: AngioWaveNet
    Date Cleared: 2025-09-10 (258 days)
    Product Code: QIH
    Regulation Number: 892.2050
    Reference & Predicate Devices: N/A
    Matched on: Product Code (QIH)
    Tags: AI/ML, SaMD, IVD (In Vitro Diagnostic), Therapeutic, Diagnostic, PCCP Authorized, Third-party, Expedited review

    K Number: K251072
    Date Cleared: 2025-09-09 (155 days)
    Product Code: QIH
    Regulation Number: 892.2050
    Reference & Predicate Devices: N/A
    Matched on: Product Code (QIH)
    Tags: AI/ML, SaMD, IVD (In Vitro Diagnostic), Therapeutic, Diagnostic, PCCP Authorized, Third-party, Expedited review

    K Number: K251656
    Date Cleared: 2025-09-04 (97 days)
    Product Code: QIH
    Regulation Number: 892.2050
    Reference & Predicate Devices: N/A
    Matched on: Product Code (QIH)
    Tags: AI/ML, SaMD, IVD (In Vitro Diagnostic), Therapeutic, Diagnostic, PCCP Authorized, Third-party, Expedited review

    K Number: K252539
    Device Name: Tempus Pixel
    Date Cleared: 2025-09-03 (22 days)
    Product Code: QIH
    Regulation Number: 892.2050
    Reference & Predicate Devices: N/A
    Matched on: Product Code (QIH)
    Tags: AI/ML, SaMD, IVD (In Vitro Diagnostic), Therapeutic, Diagnostic, PCCP Authorized, Third-party, Expedited review

    K Number: K243859
    Device Name: PRAEVAorta®2
    Date Cleared: 2025-08-29 (256 days)
    Product Code: QIH
    Regulation Number: 892.2050
    Reference & Predicate Devices:
    Matched on: Product Code (QIH)
    Tags: AI/ML, SaMD, IVD (In Vitro Diagnostic), Therapeutic, Diagnostic, PCCP Authorized, Third-party, Expedited review
    Intended Use

    PRAEVAorta®2 is a software intended to be run on its own or as part of another medical device to automatically calculate maximum diameters of anatomical zones from a DICOM CT image containing blood vessels.

    PRAEVAorta®2 is designed to measure the maximal transverse diameter of vessels and determine the maximal general diameter using a non-adaptive machine learning algorithm.

    The software is intended for clinical specialists, physicians, and other licensed practitioners in healthcare institutions such as clinics, hospitals, healthcare facilities, residential care facilities, and long-term care services. Any results obtained from the software by an intended user other than a physician must be validated by the physician responsible for the patient.

    The system is suitable for adults. Its results are not intended to be used on a standalone basis for clinical decision making or otherwise preclude clinical assessment of any disease.

    Device Description

    PRAEVAorta®2 is a decision-making support software for diagnosis and follow-up of vascular diseases. It is intended for automatic segmentation and geometric analysis of vessels.

    It is a companion software intended to support the physician in the initial assessment of several indicators derived from CT scan images.

    The software automatically reconstructs the vascular structures from CT (Computerized Tomography) scan images and automatically segments aneurysms and associated thrombus.

    From this reconstruction, the software provides diameters, volumes, angles, and distances between anatomical points.

    This software is cloud based or can be installed on premises. PRAEVAorta®2 is a server application usable through APIs; however, it is highly recommended to use it via a client application. The client provides a user interface to send images and receive the analysis results. It can be a web client, a gateway / PACS client, an integrating solution, or a marketplace.

    AI/ML Overview

    Based on the provided FDA 510(k) clearance letter for PRAEVAorta®2 (K243859), here's a description of the acceptance criteria and the study that proves the device meets those criteria:


    Acceptance Criteria and Device Performance Study for PRAEVAorta®2

    PRAEVAorta®2 is a software intended to automatically calculate maximum diameters of anatomical zones from DICOM CT images containing blood vessels, specifically focusing on the aorta and iliac arteries. The device utilizes a non-adaptive machine learning algorithm to measure maximal transverse diameters of vessels and determine the maximal general diameter.

    1. Table of Acceptance Criteria and Reported Device Performance

    The primary performance validation criterion for PRAEVAorta®2 was based on the measurement accuracy of the total maximum orthogonal aorta diameter compared to a ground truth established by expert manual measurements.

    Variable | Acceptance Criteria | Reported Device Performance
    Mean Absolute Error (MAE) | ≤ 5 mm | 2.04 mm (95% CI: 1.75 mm to 2.34 mm)
    Percentage of values within the ≤ 5 mm limit | At least 96% of cases | 96.9%
    Pearson correlation coefficient | Greater than 0.90 (very strong correlation) | 0.97
    Bias (mean difference) | (Implied: close to zero, within reasonable limits) | -0.75 mm (95% CI: -1.17 mm to -0.33 mm)
    Percentage of values within the 95% limits of agreement (Bland-Altman) | (Implied: high percentage) | 96.9% (limits of agreement: -6.01 mm to +4.51 mm)

    Conclusion: The device successfully met all defined acceptance criteria based on its reported performance.
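
The agreement statistics reported above (MAE, mean bias, and Bland-Altman limits of agreement) follow standard definitions. A minimal sketch of how they are computed from paired device/expert diameter measurements, using made-up numbers rather than the submission's data:

```python
from statistics import mean, stdev

def agreement_stats(device_mm, truth_mm):
    """Compare paired diameter measurements (device vs. expert ground truth).

    Returns mean absolute error, mean bias (device - truth), the Bland-Altman
    95% limits of agreement (bias +/- 1.96 * SD of differences), and the
    fraction of cases whose absolute error is within 5 mm.
    """
    diffs = [d - t for d, t in zip(device_mm, truth_mm)]
    mae = mean(abs(d) for d in diffs)
    bias = mean(diffs)
    sd = stdev(diffs)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)
    within_5mm = sum(abs(d) <= 5.0 for d in diffs) / len(diffs)
    return mae, bias, loa, within_5mm

# Toy example (illustrative values only, not the study data):
device = [30.1, 45.2, 52.0, 28.7, 61.3]
truth = [31.0, 44.8, 53.1, 29.5, 60.9]
mae, bias, loa, frac = agreement_stats(device, truth)
```

The reported range of -6.01 mm to +4.51 mm is exactly such a bias ± 1.96·SD interval around the -0.75 mm mean difference.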

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size: 159 unique cases (patients).
    • Data Provenance: The dataset included both contrast-enhanced and non-contrast-enhanced CT scans from:
      • United States (81 CT scans)
      • France (40 CT Scans)
      • Canada (38 CT Scans)
      • Retrospective/Prospective: The document does not explicitly state whether the data was retrospective or prospective, but the description "The selected CT scans were not used for AI training" and "Information collected on the dataset included patient demographics... imaging characteristics... and clinical management details" suggests these were pre-existing, retrospectively collected CT scans.
      • The dataset included images from numerous scanner manufacturers (e.g., GE Medical System, Siemens, Philips, Toshiba) and comprised 95 preoperative and 64 postoperative CT scans (62 with an aortic stent graft). The patients were aged over 18 years, including 130 males, 28 females, and one patient of unknown sex.

    3. Number of Experts Used to Establish Ground Truth and Qualifications

    • Number of Experts: Four (4)
    • Qualifications of Experts: All experts were vascular surgeons with at least five years of clinical experience in vascular diseases following board certification. They had no financial conflicts of interest and received adequate training from NUREA.

    4. Adjudication Method for the Test Set

    The ground truth was established by manual measurements performed by the four vascular surgeons. The document states, "The measurements performed by these professionals showed no discrepancy greater than 5 mm at the end of the collected data process." This suggests that a form of consensus or high agreement among experts was achieved, though a specific adjudication method (e.g., 2+1 tie-breaker, majority rule) for resolving discrepancies if they exceeded 5mm is not explicitly detailed. The statement implies that no significant discrepancies requiring formal adjudication arose.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No Multi-Reader Multi-Case (MRMC) comparative effectiveness study was explicitly described in the provided text. The study focused on the standalone performance of the PRAEVAorta®2 algorithm against expert-established ground truth manual measurements, rather than comparing human reader performance with and without AI assistance.

    6. Standalone (Algorithm Only) Performance Study

    Yes, a standalone performance study was conducted. The "Performance assessment" section details a technical performance assessment of PRAEVAorta®2 to validate its accuracy against measurements provided by the Ground Truth using manual measurement tools. This means the algorithm's measurements were directly compared to expert manual measurements to determine its accuracy and reliability.

    7. Type of Ground Truth Used

    The ground truth used was expert consensus (or high agreement) based on manual measurements performed by a panel of four qualified vascular surgeons. This is explicitly stated as the "reference standard" and "ground truth."

    8. Sample Size for the Training Set

    The document explicitly states, "The selected CT scans were not used for AI training." However, the exact sample size for the training set is not provided in this document. The focus of this section is on the validation/test dataset used for performance assessment.

    9. How the Ground Truth for the Training Set Was Established

    The document states that the testing dataset was explicitly not used for AI training, but it does not describe how the ground truth for the training set was established. This information would typically be detailed in the development methodology rather than the performance validation section of a submission summary.


    K Number: K250947
    Date Cleared: 2025-08-27 (152 days)
    Product Code: QIH
    Regulation Number: 892.2050
    Reference & Predicate Devices:
    Matched on: Product Code (QIH)
    Tags: AI/ML, SaMD, IVD (In Vitro Diagnostic), Therapeutic, Diagnostic, PCCP Authorized, Third-party, Expedited review
    Intended Use

    VistaSoft 4.0 and VisionX 4.0 imaging software is an image management system that allows dentists to acquire, display, edit, view, store, print, and distribute medical images. VisionX 4.0 / VistaSoft 4.0 runs on user-provided PC-compatible computers and utilizes previously cleared digital image capture devices for image acquisition.

    The software must only be used by authorized healthcare professionals in dental areas for the following tasks:

    • Filter optimization of the display of 2D and 3D images for improved diagnosis
    • Acquisition, storage, management, display, analysis, editing and supporting diagnosis of digital/digitised 2D and 3D images and videos
    • Forwarding of images and additional data to external software (third-party software)

    The software is not intended for mammography use.

    Device Description

    VisionX 4.0 / VistaSoft 4.0 imaging software is an image management system that allows dentists to acquire, display, edit, view, store, print, and distribute medical images. VisionX 4.0 / VistaSoft 4.0 runs on user-provided PC-compatible computers and utilizes previously cleared digital image capture devices for image acquisition. Additional information: The software is intended for the viewing and diagnosis of image data in relation to dental issues. Its proper use is documented in the operating instructions of the corresponding image-generating systems. Image-generating systems that can be used with the software include optical video cameras, digital X-ray cameras, phosphor storage plate scanners, extraoral X-ray devices, intraoral scanners, and TWAIN-compatible image sources.

    The software must only be used by authorized healthcare professionals in dental areas for the following tasks:

    • Filter optimization of the display of 2D and 3D images for improved diagnosis
    • Acquisition, storage, management, display, analysis, editing and supporting diagnosis of digital/digitised 2D and 3D images and videos
    • Forwarding of images and additional data to external software (third-party software)
    AI/ML Overview

    The provided document is a 510(k) clearance letter for VistaSoft 4.0 and VisionX 4.0. It does not contain any information regarding acceptance criteria or a study proving the device meets those criteria, specifically concerning AI performance or clinical efficacy.

    The document primarily focuses on regulatory compliance, outlining:

    • The device's classification and regulation.
    • Its intended use and indications for use.
    • Comparison with a predicate device (VisionX 3.0), highlighting new features (image filter operations, annotations, cloud interface, cybersecurity enhancements).
    • Compliance with FDA recognized consensus standards and guidance documents for software development and cybersecurity (e.g., ISO 14971, IEC 62304, IEC 82304-1, IEC 81001-5-1, IEC 62366-1).
    • Statement that "Software verification and validation were conducted."

    However, there is no specific information presented that describes:

    • Quantitative acceptance criteria for device performance (e.g., sensitivity, specificity, accuracy).
    • Details of a clinical or analytical study to demonstrate meeting these criteria.
    • Sample sizes for test sets or training sets.
    • Data provenance.
    • Number or qualifications of experts for ground truth establishment.
    • Adjudication methods.
    • MRMC study results or effect sizes.
    • Standalone algorithm performance.
    • Type of ground truth used (e.g., pathology, expert consensus).
    • How ground truth was established for training data.

    The FDA 510(k) clearance process for this type of device (Medical image management and processing system, Product Code QIH) often relies on demonstrating substantial equivalence to a predicate device based on similar technological characteristics and performance, rather than requiring extensive clinical studies with specific performance metrics like those for AI-driven diagnostic aids. The "new features" listed (filter optimization, acquisition/storage/etc., forwarding data) appear to be enhancements to image management and display, not necessarily new diagnostic algorithms that would typically necessitate rigorous performance studies with specific acceptance criteria.

    Therefore, based solely on the provided text, I cannot complete the requested tables and details regarding acceptance criteria and study results, as this information is not present in the document.

    If such information were available, it would typically be found in a separate section of the 510(k) submission, often within the "Non-Clinical and/or Clinical Tests Summary & Conclusions" section in more detail than what is provided here, or in a specific performance study report referenced by the submission. The current document only states that "Software verification and validation were conducted" and lists the standards used for software development and cybersecurity, but not the outcomes of performance testing against specific acceptance criteria.


    K Number: K243685
    Device Name: MammoScreen BD
    Date Cleared: 2025-08-22 (266 days)
    Product Code: QIH
    Regulation Number: 892.2050
    Reference & Predicate Devices:
    Matched on: Product Code (QIH)
    Tags: AI/ML, SaMD, IVD (In Vitro Diagnostic), Therapeutic, Diagnostic, PCCP Authorized, Third-party, Expedited review
    Intended Use

    MammoScreen® BD is a software application intended for use with compatible full-field digital mammography and digital breast tomosynthesis systems. MammoScreen BD evaluates the breast tissue composition to provide an ACR BI-RADS 5th Edition breast density category. The device is intended to be used in the population of asymptomatic women undergoing screening mammography who are at least 40 years old.

    MammoScreen BD only produces adjunctive information to aid interpreting physicians in the assessment of breast tissue composition. It is not a diagnostic software.

    Patient management decisions should not be made solely based on analysis by MammoScreen BD.

    Device Description

    MammoScreen BD is a software-only device (SaMD) using artificial intelligence to assist radiologists in the interpretation of mammograms. The purpose of the MammoScreen BD software is to automatically process a mammogram to assess the density of the breasts.

    MammoScreen BD processes the 2D-mammograms standard views (CC and/or MLO of FFDM and/or the 2DSM from the DBT) to assess breast density.

    For each examination, MammoScreen BD outputs the breast density following the ACR BI-RADS 5th Edition breast density category.

    MammoScreen BD outputs can be integrated with compatible third-party software such as MammoScreen Suite. Results may be displayed in a web UI, as a DICOM Structured Report, a DICOM Secondary Capture Image, or within patient worklists by the third-party software.

    MammoScreen BD takes as input a folder with images in DICOM formats and outputs breast density assessment in a form of a JSON file.

    Note that the MammoScreen BD outputs should be used as complementary information by radiologists while interpreting breast density. Patient management decisions should not be made solely on the basis of analysis by MammoScreen BD; the medical professional interpreting the mammogram remains the sole decision-maker.
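
The clearance summary says the device emits its density assessment as a JSON file but does not publish the schema, so the field name below (`breast_density`) is purely a hypothetical illustration of how a downstream consumer might map such an output to the ACR BI-RADS 5th Edition categories:

```python
import json

# ACR BI-RADS 5th Edition breast composition categories
BIRADS_DENSITY = {
    "A": "Almost entirely fatty",
    "B": "Scattered areas of fibroglandular density",
    "C": "Heterogeneously dense",
    "D": "Extremely dense",
}

def read_density(report_json: str) -> str:
    """Turn a (hypothetical) device JSON report into a readable density string."""
    record = json.loads(report_json)
    category = record["breast_density"]  # field name is assumed, not documented
    return f"BI-RADS {category}: {BIRADS_DENSITY[category]}"

print(read_density('{"breast_density": "C"}'))
```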

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study that proves MammoScreen BD meets them, based on the provided FDA 510(k) clearance letter:

    Acceptance Criteria and Device Performance Study

    The study primarily focuses on the standalone performance of MammoScreen BD in assessing breast density against an expert-consensus ground truth. The key performance metric is the quadratically weighted Cohen's kappa (κ).

    1. Table of Acceptance Criteria and Reported Device Performance

    Acceptance criteria:

    • Primary objective: superiority of MammoScreen BD's standalone density assignment relative to a pre-determined reference value (κ_reference = 0.85).
    • Statistical criterion: the one-sided p-value for the test H0: κ ≤ 0.85 is below the significance level (α = 0.05), AND the lower bound of the 95% confidence interval for κ is above 0.85, indicating that the observed weighted kappa is statistically significantly greater than 0.85.

    Reported device performance (quadratically weighted kappa):

    • Hologic: κ = 89.03 [95% CI: 87.43 – 90.56]
    • Hologic Envision: κ = 89.54 [95% CI: 86.88 – 91.69]
    • GE: κ = 93.19 [95% CI: 90.50 – 94.92]

    All reported Kappa values exceed the reference value of 0.85, and their 95% confidence intervals' lower bounds are also above 0.85, satisfying the acceptance criteria.
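
Quadratically weighted kappa penalizes disagreements by the squared distance between ordinal categories, so confusing density A with D costs far more than confusing C with D. A self-contained sketch for four ordered categories (illustrative only; the submission's exact computation may differ):

```python
def quadratic_weighted_kappa(rater_a, rater_b, categories=("A", "B", "C", "D")):
    """Quadratically weighted Cohen's kappa for ordinal labels (e.g. BI-RADS density)."""
    idx = {c: i for i, c in enumerate(categories)}
    k, n = len(categories), len(rater_a)
    # Observed joint distribution of label pairs
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(rater_a, rater_b):
        obs[idx[a]][idx[b]] += 1.0 / n
    # Marginal distributions for the chance-agreement term
    pa = [sum(obs[i][j] for j in range(k)) for i in range(k)]
    pb = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    # Quadratic disagreement weights: (i - j)^2 scaled to [0, 1]
    w = [[((i - j) ** 2) / ((k - 1) ** 2) for j in range(k)] for i in range(k)]
    num = sum(w[i][j] * obs[i][j] for i in range(k) for j in range(k))
    den = sum(w[i][j] * pa[i] * pb[j] for i in range(k) for j in range(k))
    return 1.0 - num / den
```

A kappa of 1.0 means perfect agreement and 0.0 means chance-level agreement; the 0.85 reference value sits well above typical inter-radiologist agreement for density.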

    2. Sample Size and Data Provenance

    Test Set:

    • Hologic (original dataset): 922 patients / 1,155 studies
    • Hologic Envision (new system for subject device): 500 patients / 500 studies
    • GE (new system for subject device): 376 patients / 490 studies

    Data Provenance:

    • Hologic (original dataset):
      • USA: 658 studies (distributed as A:85, B:269, C:241, D:63)
      • EU: 447 studies (distributed as A:28, B:169, C:214, D:86)
    • Hologic Envision: USA: 500 studies (distributed as A:50, B:200, C:200, D:50)
    • GE:
      • USA: 359 studies (distributed as A:38, B:155, C:139, D:31)
      • EU: 129 studies (distributed as A:4, B:45, C:61, D:19)

    All data for the test sets appears to be retrospective, as it's stated that the "Data used for the standalone performance testing only belongs to the test group" and is distinct from the training data.

    3. Number of Experts and Qualifications for Ground Truth

    • Number of Experts: 5 breast radiologists
    • Qualifications: At least 10 years of experience in breast imaging interpretation.

    4. Adjudication Method for the Test Set

    The ground truth was established by majority rule among the assessment of the 5 breast radiologists. This implies a 3-out-of-5 or more agreement for a given breast density category to be assigned as ground truth.
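
A majority-rule consensus over five readers can be sketched as follows; how the study would have handled a hypothetical no-majority case is not described, so this sketch simply returns None there:

```python
from collections import Counter

def majority_density(reads):
    """Consensus BI-RADS density from multiple readers (majority rule).

    With 5 readers, an outright majority needs at least 3 identical reads;
    cases with no majority would require an adjudication step not described
    in the clearance summary.
    """
    (label, votes), = Counter(reads).most_common(1)
    return label if votes > len(reads) / 2 else None

majority_density(["C", "C", "B", "C", "D"])  # -> "C"
```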

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    There is no mention of an MRMC comparative effectiveness study being performed to assess how much human readers improve with AI vs. without AI assistance. The study focuses solely on the standalone performance of the AI algorithm. The device is described as "adjunctive information to aid interpreting physicians," but its effect on radiologist performance isn't quantified in this document.

    6. Standalone Performance (Algorithm Only)

    Yes, a standalone performance study was explicitly conducted. The results for the quadratically weighted Cohen's Kappa presented in the table above (89.03 for Hologic, 89.54 for Hologic Envision, and 93.19 for GE) are all for the algorithm's performance only ("MammoScreen BD against the radiologist consensus assessment").

    7. Type of Ground Truth Used

    The ground truth used was expert consensus based on the visual assessment of 5 breast radiologists.

    8. Sample Size for the Training Set

    • Total number of studies: 108,775
    • Total number of patients: 32,368

    9. How the Ground Truth for the Training Set was Established

    The document states that the training modules are "trained with very large databases of annotated mammograms." While "annotated" implies ground truth was established, the specific method for establishing ground truth for the training set is not detailed in the provided text. It only specifies the ground truth establishment method for the test set (majority rule of 5 radiologists). It's common for training data to use various methods for annotation, which might differ from the rigorous expert consensus used for the test set.


    K Number: K252362
    Device Name: GBrain MRI
    Date Cleared: 2025-08-22 (24 days)
    Product Code: QIH
    Regulation Number: 892.2050
    Reference & Predicate Devices:
    Matched on: Product Code (QIH)
    Tags: AI/ML, SaMD, IVD (In Vitro Diagnostic), Therapeutic, Diagnostic, PCCP Authorized, Third-party, Expedited review
    Intended Use

    GBrain MRI is a post processing medical device software intended for analyzing and quantitatively reporting signal hyperintensities in the brain on T2w FLAIR MR images and T1w post contrast images in the context of diagnostic radiology.

    GBrain MRI is intended to provide automatic segmentation, quantification, and reporting of derived image metrics. It is not intended for detection or specific diagnosis of any disease nor for the detection of signal hyperintensities.

    GBrain MRI should not be used in-lieu of a full evaluation of the patient's MRI scans. The physician retains the ultimate responsibility for making the final patient management and treatment decisions.

    Device Description

    GBrain MRI is a non-invasive MR imaging post-processing medical device software that aids in the volumetric quantification of hyperintensities in T2-weighted Fluid Attenuated Inversion Recovery (T2w FLAIR), and in post contrast T1-weighted (T1c) brain MR images. It is intended to aid the trained radiologist in quantitative measurements.

    The input to the software are the T2w FLAIR and the T1w post contrast brain MR images.

    The outputs are volume measurements in Secondary Capture DICOM format, a DICOM Encapsulated pdf file, as well as a DICOM SR. More specifically, the total volume of hyperintensities in the input T2w FLAIR and the T1c are shown in two new secondary capture image series, called GBrain T2 FLAIR & GBrain T1 CE respectively, with a segmentation overlay on the hyperintensities that were used to measure the total volumes. These volume measurements are summarized in the DICOM encapsulated pdf and DICOM SR files.

    The outputs are provided in standard DICOM format that can be displayed on most third-party DICOM workstations and Picture Archive and Communications Systems (PACS).

    The software is suitable for use in routine patient care as a support tool for radiologists in assessment of structural adult brain MRIs, by providing them with complementary quantitative information.

    The GBrain MRI processing architecture includes a proprietary automated internal pipeline that performs skull stripping, signal normalization, segmentations, volume calculations, and report generation.

    From a workflow perspective, GBrain MRI is packaged as a computing appliance that is capable of supporting DICOM file transfer for input, and output of results. The software is designed without the need for a user interface after installation. Any processing errors are reported either in the output series report, or in the system log files.

    GBrain MRI software is intended to be used by trained personnel only and is to be installed by trained technical personnel.

    Quantitative reports and derived image data sets are intended to be used as complementary information in the review of a case.

    The GBrain MRI software does not have any accessories or patient contacting components.

    The GBrain MRI device is intended to be used for the adult population only.

    AI/ML Overview

    Here's a summary of the acceptance criteria and the study that proves the device meets them, based on the provided FDA 510(k) Clearance Letter.


    Acceptance Criteria and Device Performance

    1. Table of Acceptance Criteria and Reported Device Performance

    Metric | Acceptance Criteria (Lower Bound of 95% CI) | Reported Device Performance (Lower Bound of 95% CI)
    Volume measurement (R²) | Not explicitly stated (implied by "passed planned acceptance criteria") | 0.94 (contrast enhancement)
    Segmentation overlap (Dice similarity coefficient) | Not explicitly stated (implied by "passed planned acceptance criteria") | 0.81 (contrast enhancement)
    Reproducibility (R²) | Not explicitly stated (implied by "passed planned acceptance criteria") | 0.92

    Note: While explicit acceptance values for R² and Dice were not provided in the document, the statement "passed the planned acceptance criteria" indicates that the reported performance values met the internal thresholds set by the manufacturer.
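
The Dice similarity coefficient reported above measures voxel-wise overlap between the algorithm's segmentation and the ground-truth mask: twice the intersection divided by the sum of the two mask sizes. A minimal sketch on flattened binary masks:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks (flattened to lists).

    Returns a value in [0, 1]; two empty masks are treated as perfect overlap.
    """
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0

a = [1, 1, 1, 0, 0, 1]
b = [1, 1, 0, 0, 1, 1]
# intersection = 3, sizes 4 + 4 = 8, so Dice = 6/8 = 0.75
```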

    Study Details

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size (Test Set): 131 patient cases for Contrast Enhancement measurements.
    • Data Provenance:
      • Country of Origin: United States (collected from four separate hospital systems in Alabama, Florida, Kentucky, and California).
      • Retrospective/Prospective: Not explicitly stated, but "collected from four separate hospital systems" and "external dataset used for validation was independent from the internal training datasets" typically implies a retrospective collection of existing data.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Number of Experts: Three independent experts.
    • Qualifications: US board-certified, experienced neuroradiologists.

    4. Adjudication Method for the Test Set

    • Method: Simultaneous Truth and Performance Level Estimation (STAPLE) algorithm was used to generate a consensus ground truth from the three expert-labeled segmentations. This effectively acts as an automated adjudication method.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done

    • No. The document describes a "standalone" performance evaluation of the algorithm against expert-derived ground truth, not a comparative effectiveness study involving human readers with and without AI assistance.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was Done

    • Yes. The performance testing described focuses on comparing the software's segmentations to expert segmentations, indicating a standalone evaluation of the algorithm's accuracy.

    7. The Type of Ground Truth Used

    • Type: Expert consensus, specifically using the STAPLE algorithm to combine three independent expert-labeled segmentations.

    8. The Sample Size for the Training Set

    • Not explicitly stated. The document mentions the validation dataset was "independent from the internal training datasets" but does not specify the size of the training datasets.

    9. How the Ground Truth for the Training Set Was Established

    • Not explicitly stated. The document mentions "internal training datasets" but does not detail the method for establishing their ground truth. Given the validation approach, it's highly probable that similar expert-derived ground truth methods were used for training data, but this is an inference rather than a direct statement.

    K Number: K251837
    Date Cleared: 2025-08-20 (65 days)
    Product Code: QIH
    Regulation Number: 892.2050
    Reference & Predicate Devices:
    Matched on: Product Code (QIH)
    Tags: AI/ML, SaMD, IVD (In Vitro Diagnostic), Therapeutic, Diagnostic, PCCP Authorized, Third-party, Expedited review
    Intended Use

    Salix Coronary Plaque (V1.0.0) is a web-based, non-invasive software application that is intended to be used for viewing, post-processing, and analyzing cardiac computed tomography (CT) images acquired from a CT scanner in a Digital Imaging and Communications in Medicine (DICOM) Standard format.

    This software provides cardiologists and radiologists with interactive tools for viewing and analyzing cardiac computed tomography (CT) data, for quantification and characterization of coronary plaques (i.e., atherosclerosis) and stenosis, and for performing calcium scoring on non-contrast cardiac CT.

    Salix Coronary Plaque (V1.0.0) is intended to complement standard care as an adjunctive tool and is not intended as a replacement to a medical professional's comprehensive diagnostic decision-making process. The software's semi-automated features are intended for an adult population and should only be used by qualified medical professionals experienced in examining and evaluating cardiac CT images.

    Users should be aware that certain views make use of interpolated data. These data are created by the software based on the original data set. Interpolated data may give the appearance of healthy tissue in situations where pathology that is near or smaller than the scanning resolution may be present.

    Device Description

    Salix Coronary Plaque (K251837) is a web-based software application, hosted on Amazon Web Services cloud computing services, delivered using a SaaS model. The software provides interactive, post-processing tools for trained radiologists or cardiologists for viewing, analyzing, and characterizing cardiac computed tomography (CT) image data obtained from a CT scanner. The physician-driven coronary analysis is used to review CT image data to prepare a standard coronary report that may include the presence and extent of physician-identified coronary plaques (i.e., atherosclerosis) and stenosis, and assessment of calcium score performed on a non-contrast cardiac CT scan. The Cardiac CT image data are physician-ordered and typically obtained from patients who underwent CCTA or CAC CT for evaluation of CAD or suspected CAD.

    AI/ML Overview

    Here's a detailed breakdown of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) Clearance Letter for Salix Coronary Plaque (V1.0.0):

    Acceptance Criteria and Device Performance

    Salix Coronary Plaque Output | Statistic | Reported Device Performance (Estimate [95% CI]) | Acceptance Criteria | Result
    Vessel Level Stenosis | Percentage within one CAD-RADS category | 95.8% [94.1%, 97.3%] | 90% | Pass
    Total plaque | ICC3¹ | 0.96 [0.94, 0.98] | 0.70 | Pass
    Calcified plaque | ICC3¹ | 0.96 [0.90, 0.99] | 0.80 | Pass
    Noncalcified plaque | ICC3¹ | 0.91 [0.84, 0.95] | 0.55 | Pass
    Low attenuating plaque | ICC3¹ | 0.61 [0.41, 0.93] | 0.30 | Pass
    Calcium Scoring | Pearson Correlation | 0.958 [0.947, 0.966] | 0.90 | Pass
    Centerline Extraction | Overlap score | 0.8604 [0.8445, 0.8750] | 0.80 | Pass
    Vessel Labelling | F1 Score | 0.8264 [0.8047, 0.8479] | 0.70 | Pass
    Lumen Wall Segmentation | Dice Score | 0.8996 [0.8938, 0.9055] | 0.80 | Pass
    Vessel Wall Segmentation | Dice Score | 0.9016 [0.8962, 0.9070] | 0.80 | Pass

    ¹ Intraclass correlation coefficient two-way mixed model ICC(3, 1) was used.
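
    The ICC(3,1) named in the footnote can be computed from a standard two-way ANOVA decomposition. The sketch below is illustrative only; the `ratings` values are hypothetical, not data from the submission:

```python
import numpy as np

def icc3_1(ratings):
    """ICC(3,1): two-way mixed model, single measurement, consistency.

    ratings: (n_subjects, k_raters) array, e.g. a plaque volume per
    patient as measured by the device and by a reference reader.
    """
    r = np.asarray(ratings, dtype=float)
    n, k = r.shape
    grand = r.mean()
    # Two-way ANOVA sums of squares
    ss_rows = k * ((r.mean(axis=1) - grand) ** 2).sum()  # between subjects
    ss_cols = n * ((r.mean(axis=0) - grand) ** 2).sum()  # between raters
    ss_err = ((r - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# A constant offset between raters does not reduce consistency:
print(icc3_1([[10.0, 12.0], [20.0, 22.0], [30.0, 32.0]]))  # → 1.0
```

    Because ICC(3,1) measures consistency rather than absolute agreement, a fixed bias between the device and the reference does not lower the score.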


    Study Details

    1. A table of acceptance criteria and the reported device performance:
    See table above.

    2. Sample size used for the test set and the data provenance:

    • Multi-reader Multi-case (MRMC) study for Plaque Volumes and CAD-RADS:
      • Sample Size: 103 adult patients (58 women, 45 men; mean 61 ± 12 years, range 23–84).
      • Data Provenance: Retrospective data from seven geographically diverse U.S. centers (Wisconsin, New York, Arizona, and Alabama). Self-reported race was 57% White, 22% Black or African American, 12% Asian, 2% American Indian/Alaska Native; 7% declined/unknown. 13% identified as Hispanic or Latino. Scans were acquired on contemporary 64-detector row or newer systems from Canon, GE, Philips, and Siemens, ensuring vendor diversity.
    • Standalone Performance Validation for ML-enabled Outputs (Calcium Scoring, Centerline Extraction, Vessel Labelling, Lumen and Vessel Wall Segmentation):
      • Sample Size:
        • 302 non-contrast series for calcium scoring.
        • 107 contrast-enhanced series for centerline extraction, vessel labeling, and wall segmentation.
      • Data Provenance: Sourced from multiple unique centers in the USA that did not contribute any data to the training datasets for any Salix Central algorithm. The validation dataset consisted of de-identified cardiac CT studies from seven (7) centers across four (4) US states. Included representation of multiple scanner manufacturers (Canon, GE, Philips, and Siemens) and disease severity based on calcium score and maximum stenosis (CAD-RADS classification) based on source clinical radiology reports.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • For Plaque Volumes and CAD-RADS (MRMC study):
      • Number of Experts: Multiple (implied, at least two initial experts plus one adjudicator).
      • Qualifications: Independent Level III-qualified (or equivalent experience) experts.
    • For ML-enabled Outputs (Standalone Performance Validation):
      • Number of Experts: Multiple (implied by the phrase "board certified cardiologists and radiologists").
      • Qualifications: Board certified cardiologists and radiologists with SCCT Level III certification (or equivalent experience).

    4. Adjudication method for the test set:

    • For Plaque Volumes and CAD-RADS (MRMC study): Discrepancies between the initial expert readers were resolved by a third independent adjudicator with Level III qualifications or equivalent experience. This is a "2+1" adjudication method.
    • For ML-enabled Outputs (Standalone Performance Validation): The ground truth was "independently established" by the experts from the source clinical image interpretation. The document does not specify an adjudication method for these specific tasks if there were multiple ground truth annotations.

    5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done, what was the effect size of how much human readers improve with AI vs. without AI assistance:

    • Yes, an MRMC study was done.
    • The study was not designed to measure the improvement of human readers with AI vs without AI assistance (i.e., a comparative effectiveness study of reader performance with and without the device).
    • Instead, the MRMC study evaluated the performance of human readers using the Salix Coronary Plaque device compared to an expert ground truth. It states, "Eight U.S.-licensed radiologists and cardiologists... acted as SCP readers... They began with the device's standalone automated output and made any refinements they deemed necessary."
    • The conclusion states: "This data supports our claim that qualified clinicians with minimal SCP specific training can achieve SCCT expert-level performance with SCP without the support of a core laboratory or specialized technician pre-read." This implies that the device enables a standard clinical user to achieve expert-level performance, but it does not quantify 'effect size' of improvement over their performance without the device.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:

    • Yes, a standalone performance validation was done for "ML-enabled Salix Coronary Plaque outputs for calcium scoring, centerline extraction, vessel labelling, and lumen and vessel wall segmentation against reference ground truth." These results are presented in the "Reported Device Performance" table and were shown to meet or exceed acceptance criteria.
    • The MRMC study also started with the "device's standalone automated output," suggesting that the algorithm's initial automated output was part of the workflow, though readers could refine it.
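
    The Dice score used for the lumen- and vessel-wall segmentation endpoints is a standard overlap metric between binary masks. A minimal sketch with toy masks (not the submission's data):

```python
import numpy as np

def dice_score(pred, truth):
    """Dice coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

print(dice_score([1, 1, 0, 0], [1, 0, 1, 0]))  # → 0.5
```

    In practice the masks would be per-voxel segmentations of the lumen or vessel wall; the one-dimensional arrays here stand in for flattened volumes.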

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc):

    • Expert consensus/annotation:
      • For the MRMC study (plaque volumes and CAD-RADS), ground truth was established by "Independent Level III-qualified (or equivalent experience) experts [who] produced vessel-wall and lumen segmentations and assigned CAD-RADS stenosis categories." Discrepancies were adjudicated by a third expert.
      • For the standalone ML-enabled outputs, ground truth was established by "board certified cardiologists and radiologists with SCCT Level III certification (or equivalent experience) using manual annotation and segmentation tools."

    8. The sample size for the training set:

    • The document states that the validation data was "sourced from multiple unique centers in the USA that did not contribute any data to the training datasets for any Salix Central algorithm."
    • However, the actual sample size used for the training set for Salix Coronary Plaque (V1.0.0) is not provided in the given text.

    9. How the ground truth for the training set was established:

    • This information is not provided in the given text for the Salix Coronary Plaque device. While it mentions how ground truth for the test set was established, it does not detail the process for the training data (nor the training data size).

    K Number
    K251747
    Manufacturer
    Date Cleared
    2025-08-15

    (70 days)

    Product Code
    QIH
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Intended Use

    VEA Align:
    This cloud-based software is intended for orthopedic applications in both pediatric and adult populations.

    2D X-ray images acquired on EOS imaging's imaging systems are the foundation and resource for displaying the interactive landmarks overlaid on the frontal and lateral images. These landmarks are available for users to assess patient-specific global alignment.

    For additional assessment, alignment parameters compared to published normative values may be available.

    This product serves as a tool to aid in the analysis of spinal deformities and degenerative diseases, and lower limb alignment disorders and deformities through precise angle and length measurements. It is suitable for use with adult and pediatric patients aged 7 years and older.

    Clinical judgment and experience are required to properly use the software.

    spineEOS:
    spineEOS is indicated for assisting healthcare professionals with preoperative planning of spine surgeries. The product provides access to EOS images with associated 3D datasets and measurements. spineEOS includes surgical planning tools that enable users to define a patient specific surgical strategy.

    Device Description

    VEA Align:
    VEA Align is a software indicated for assisting healthcare professionals with global alignment assessment through clinical parameters computation.

    The product uses biplanar 2D X-ray images, exclusively generated by EOS imaging's EOS (K152788) and EOSedge (K202394) systems and generates an initial placement of the patient anatomic landmarks on the images using a machine learning-based algorithm. The user may adjust the landmarks to align with the patient's anatomy. Landmark locations require user validation. The clinical parameters communicated to the user are inferred from the landmarks and are recalculated as the user adjusts the landmarks. 3D datasets may be exported for use in spineEOS for surgical planning.

    The product is hosted on a cloud infrastructure and relies on EOS Insight for support capabilities, such as user access control and data access. 2D X-ray image transmissions from healthcare institutions to the cloud are managed by EOS Insight. EOS Insight is classified as non-device Clinical Decision Support (CDS) software.

    spineEOS:
    spineEOS is indicated for assisting healthcare professionals with preoperative planning of spine surgeries. spineEOS provides access to EOS images with associated 3D datasets and measurements. spineEOS includes surgical planning tools that enable users to define a patient specific surgical strategy.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) clearance letter for VEA Align.

    1. Table of Acceptance Criteria and Reported Device Performance

    Metric | Acceptance Criteria | Reported Device Performance (Implicitly Met)
    Median Error | ≤ 3 mm | Met (all studies performed indicate acceptable performance)
    3rd Quartile Error | ≤ 5 mm | Met (all studies performed indicate acceptable performance)

    Note: The document states that "All the studies performed indicate acceptable performances of the algorithm for its intended population," implying that both acceptance criteria (median error ≤ 3 mm and 3rd-quartile error ≤ 5 mm) were met. The actual numerical performance values are not provided in this document, only that the criteria were met.
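
    The two criteria can be checked directly from per-landmark error distances. A minimal sketch (the error values below are hypothetical, not from the submission):

```python
import numpy as np

def alignment_error_check(errors_mm, median_max=3.0, q3_max=5.0):
    """Summarize landmark errors against the stated criteria:
    median <= 3 mm and 3rd quartile <= 5 mm."""
    e = np.abs(np.asarray(errors_mm, dtype=float))
    median = float(np.percentile(e, 50))
    q3 = float(np.percentile(e, 75))
    return median, q3, (median <= median_max and q3 <= q3_max)

print(alignment_error_check([0.8, 1.2, 2.5, 3.1, 4.0]))  # → (2.5, 3.1, True)
```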

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size: 361 patients.
    • Data Provenance: Images were collected from EOS (K152788) and EOSedge (K202394) systems at a variety of sites from 2007-2023. The subgroup analysis includes "Data site location - Different US states," indicating that at least some of the data originates from the US. The document does not explicitly state whether the data was retrospective or prospective, but given the collection period (2007-2023), it is likely retrospective.

    3. Number of Experts Used to Establish Ground Truth and Qualifications

    • The document states that the ground truth was established by "EOS 3DServices reconstruction (model) from sterEOS (K172346)." This implies that the ground truth is derived from a previously cleared and validated 3D reconstruction system.
    • The number and qualifications of experts involved in creating these "EOS 3DServices reconstruction" models are not specified in this document.

    4. Adjudication Method for the Test Set

    • The document does not describe an explicit adjudication method for the test set involving multiple human reviewers. The ground truth for the test set is established by the "EOS 3DServices reconstruction (model) from sterEOS."

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • A formal MRMC comparative effectiveness study comparing human readers with and without AI assistance is not explicitly mentioned in this document. The performance evaluation is focused on the standalone AI algorithm compared to ground truth.

    6. Standalone (Algorithm Only) Performance Study

    • Yes, a standalone (algorithm only) performance study was done.
      • Description: "To assess the standalone performance of the AI algorithm of the VEA Align, the test was performed with:
        • A dedicated test data set containing different data from the training data set...
        • For each patient of this data set, a ground truth EOS 3DServices reconstruction (model) from sterEOS (K172346) that was available for comparison with VEA Align reconstruction generated by the AI algorithm."
      • This confirms that the study focused on the AI algorithm's performance without direct human intervention in the loop for the performance metrics measured.

    7. Type of Ground Truth Used

    • The ground truth used for the test set was based on "EOS 3DServices reconstruction (model) from sterEOS (K172346)." This refers to 3D anatomical models and landmark placements generated by a previously cleared medical imaging and reconstruction system (sterEOS). This can be categorized as a type of expert-system-derived ground truth, as sterEOS itself relies on validated methodologies and presumably expert input/validation in its operation.

    8. Sample Size for the Training Set

    • Training Set Sample Size: 10,376 X-ray images, with a total of 5,188 corresponding 3D reconstructions.

    9. How the Ground Truth for the Training Set Was Established

    • The document states, "The AI algorithm was trained using 10,376 X-ray images and a total of 5,188 corresponding 3D reconstructions." It also notes that the images were collected from EOS and EOSedge systems. While it doesn't explicitly detail the method for establishing the ground truth for each training image, the context implies that these 3D reconstructions served as the ground truth. Similar to the test set, it's highly probable these "corresponding 3D reconstructions" were also derived from the established and validated EOS 3DServices/sterEOS pipeline.
