510(k) Data Aggregation: Search Results

Found 569 results

    K Number
    K251827
    Device Name
    Azurion R3.1
    Date Cleared
    2025-10-24

    (133 days)

    Product Code
    Regulation Number
    892.1650
    Reference & Predicate Devices
    Predicate For
    N/A
    Why did this record match?
    510k Summary Text (Full-text Search) :

    Re: K251827
    Trade/Device Name: Azurion R3.1
    Regulation Number: 21 CFR 892.1650
    Classification Name: Image-Intensified Fluoroscopic X-Ray System
    Classification Regulation: 21 CFR §892.1650

    Intended Use

    The Azurion series (within the limits of the Operating Room table used) is intended for use to perform:

    • Image guidance in diagnostic, interventional and minimally invasive surgery procedures for the following clinical application areas: vascular, non-vascular, cardiovascular and neuro procedures.
    • Cardiac imaging applications including diagnostics, interventional and minimally invasive surgery procedures.

    Additionally:

    • The Azurion series can be used in a hybrid Operating Room.
    • The Azurion series contains a number of features to support a flexible and patient-centric procedural workflow.

    Patient Population:
    All human patients of all ages. Patient weight is limited to the specification of the patient table.

    Device Description

    The Azurion R3.1 is classified as an interventional fluoroscopic X-Ray system. The primary performance characteristics of the Azurion R3.1 include:

    • Real-time image visualization of patient anatomy during procedures
    • Imaging techniques and tools to assist interventional procedures
    • Post-processing functions after interventional procedures
    • Storage of reference/control images for patient records
    • Compatibility with hospital information systems (HIS) and image archiving systems via DICOM
    • Built-in radiation safety controls

    This array of functions offers the physician the imaging information and tools needed to perform and document minimally invasive interventional procedures.

    The Azurion R3.1 is available in the same models and configurations as the predicate device, Azurion R2.1. Configurations are defined by detector type, monoplane (single C-arm) or biplane (dual C-arm) geometry, floor- or ceiling-mounted stand, standard or OR table type, and the available image processing.

    Identical to the predicate device, the FlexArm option is available for the 7M20 configuration in Azurion R3.1 to increase flexibility in stand movement.

    Additionally, identical to the predicate device, Azurion R3.1 can be used in a hybrid operating room when supplied with a compatible operating room table.

    AI/ML Overview

    N/A


    K Number
    K251602
    Date Cleared
    2025-10-10

    (136 days)

    Product Code
    Regulation Number
    892.1650
    Reference & Predicate Devices
    Predicate For
    N/A
    Why did this record match?
    510k Summary Text (Full-text Search) :

    Trade/Device Name: Alphenix, INFX-8000V/B, INFX-8000V/S, V9.6 with αEvolve Imaging
    Regulation Number: 21 CFR 892.1650
    Classification Name: Image-Intensified Fluoroscopic X-ray System

    Intended Use

    This device is a digital radiography/fluoroscopy system used in a diagnostic and interventional angiography configuration. The system is indicated for use in diagnostic and angiographic procedures for blood vessels in the heart, brain, abdomen and lower extremities.

    αEvolve Imaging is an imaging chain intended for adults, with Artificial Intelligence Denoising (AID) designed to reduce noise in real-time fluoroscopic images and a signal enhancement algorithm, Multi Frequency Processing (MFP).

    Device Description

    The Alphenix, INFX-8000V/B, INFX-8000V/S, V9.6 with αEvolve Imaging is an interventional X-ray system with a floor-mounted C-arm as its main configuration. An optional ceiling-mounted C-arm is available to provide a bi-plane configuration where required. Additional units include a patient table, an X-ray high-voltage generator, and a digital radiography system. The C-arms can be configured with designated X-ray detectors and supporting hardware (e.g., X-ray tube and diagnostic X-ray beam limiting device). The system includes αEvolve Imaging, an imaging chain intended for adults, with Artificial Intelligence Denoising (AID) designed to reduce noise in real-time fluoroscopic images and a signal enhancement algorithm, Multi Frequency Processing (MFP).
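The summary names Multi Frequency Processing (MFP) only as a signal enhancement algorithm and gives no implementation details. As a rough, hypothetical illustration of generic multi-frequency (frequency-band) processing, not Canon's actual MFP, an image can be split into bands with Gaussian filters and each band re-weighted; the band scales and gains below are assumed values.

```python
# Illustrative only: generic multi-frequency band enhancement. This is NOT
# Canon's MFP implementation; band scales and gains are assumed values.
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_multifrequency(image, sigmas=(1.0, 2.0, 4.0), gains=(1.5, 1.2, 1.0)):
    """Split an image into frequency bands and re-weight each band.

    image  : 2D float array (one fluoroscopic frame)
    sigmas : Gaussian scales defining the band boundaries (assumed)
    gains  : per-band amplification factors (assumed)
    """
    base = np.asarray(image, dtype=np.float64)
    result = np.zeros_like(base)
    for sigma, gain in zip(sigmas, gains):
        blurred = gaussian_filter(base, sigma)
        band = base - blurred          # detail in this frequency band
        result += gain * band          # boost (or attenuate) the band
        base = blurred
    return result + base               # add back the remaining low-frequency base

# Example on a synthetic frame.
frame = np.random.default_rng(0).normal(100.0, 5.0, size=(256, 256))
enhanced = enhance_multifrequency(frame)
```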

    AI/ML Overview

    Here's an analysis of the acceptance criteria and the study proving the device meets them, based solely on the provided FDA 510(k) summary:

    Overview of the Device and its New Feature:

    The device is the Alphenix, INFX-8000V/B, INFX-8000V/S, V9.6 with αEvolve Imaging. It's an interventional X-ray system. The new feature, αEvolve Imaging, includes Artificial Intelligence Denoising (AID) to reduce noise in real-time fluoroscopic images and a signal enhancement algorithm, Multi Frequency Processing (MFP). The primary claim appears to be improved image quality (noise reduction, sharpness, contrast, etc.) compared to the previous version's (V9.5) "super noise reduction filter (SNRF)."


    1. Table of Acceptance Criteria and Reported Device Performance

    The 510(k) summary does not explicitly state "acceptance criteria" with numerical thresholds for each test. Instead, it describes various performance evaluations and their successful outcomes. For the clinical study, the success criteria are clearly defined.

    | Acceptance Criteria (Inferred/Stated) | Reported Device Performance |
    | --- | --- |
    | **Bench Testing (Image Quality)** | |
    | 1. Change in Image Level, Noise & Structure: AID to be better at preserving mean image intensity, improved denoising, and image structure preservation compared to SNRF. | AID determined to be better at preserving mean image intensity and suggested to have improved denoising and image structure preservation (using Student's t-test). |
    | 2. Signal-to-Variance Ratio (SVR) and Signal-to-Noise Ratio (SNR): AID to show improved ability to preserve image signal while decreasing image noise compared to SNRF. | AID determined to have improved ability to preserve image signal while decreasing image noise (using Student's t-test). |
    | 3. Modulation Transfer Function (MTF): Improved performance for low-to-mid frequencies and a similar high-frequency region compared to SNRF. | Results showed improved performance for low-to-mid frequencies in all test cases, and the high-frequency region of the MTF curve was similar for AID and SNRF in the majority of cases (using Student's t-test). |
    | 4. Robustness to Detector Defects: Detector defects to be sufficiently obvious to inform the clinician of a service need, and image quality outside the defect area to remain visually unaffected, facilitating procedure completion. | Detector defects were sufficiently obvious, and image quality outside the area of the detector defect remained visually unaffected, facilitating sufficient image quality to finish the procedure. |
    | 5. Normalized Noise Power Spectrum (NNPS): AID to have a smaller noise magnitude in the frequency range of ~0.1 cycles/mm to 1.4 cycles/mm, with negligible differences above 1.4 cycles/mm. | AID had a smaller noise magnitude in the frequency range of ~0.1 cycles/mm to 1.4 cycles/mm. Noise magnitudes above 1.4 cycles/mm were very small and the differences were considered negligible. |
    | 6. Image Lag Measurement: AID to perform better in reducing image lag compared to SNRF. | AID determined to perform better in reducing image lag (using Student's t-test). |
    | 7. Contrast-to-Noise Ratio (CNR) of Low Contrast Object: AID to show significantly higher CNR for low-contrast elements compared to SNRF. | AID had a significantly higher CNR than images processed with SNRF for all elements and test cases (using Student's t-test). |
    | 8. Contrast-to-Noise Ratio (CNR) of High Contrast Object: AID to show significantly higher CNR for high-contrast objects (guidewire, vessels) compared to SNRF. | AID had a significantly higher vessel and guidewire CNR than images processed with SNRF for all test cases (using Student's t-test). |
    | **Clinical Study (Reader Study)** | |
    | Overall Preference (Binomial Test): Image sequences denoised by AID chosen significantly more than 50% of the time over SNRF. | The Binomial test found that image sequences denoised by AID were chosen significantly more than 50% of the time (indicating overall preference). |
    | Individual Image Quality Metrics (Wilcoxon Signed Rank Test): Mean score of AID images significantly higher than SNRF for sharpness, contrast, confidence, noise, and absence of image artifacts. | The mean score of AID imaging chain images was significantly higher than that of the SNRF imaging chain for sharpness, contrast, confidence, noise, and the absence of image artifacts. |
    | Generalizability: Algorithm to demonstrate equivalent or improved performance compared to the predicate with diverse clinical data. | Concluded that the subject algorithm demonstrated equivalent or improved performance, compared to the predicate device, as demonstrated by the results of the above testing. |
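The bench comparisons above are reported as CNR and related metrics compared between AID- and SNRF-processed images with Student's t-tests. The submission does not disclose its analysis code; the sketch below is only a generic illustration of how a CNR measurement and a paired t-test across matched test cases could be computed, with ROI positions, image data, and case count invented for illustration.

```python
# Illustrative sketch: contrast-to-noise ratio (CNR) measurement and a paired
# Student's t-test across matched AID/SNRF test cases. Not the actual protocol
# from the 510(k); ROI positions and image arrays are assumed for illustration.
import numpy as np
from scipy.stats import ttest_rel

def cnr(image, object_roi, background_roi):
    """CNR = |mean(object) - mean(background)| / std(background)."""
    obj = image[object_roi]
    bkg = image[background_roi]
    return abs(obj.mean() - bkg.mean()) / bkg.std()

rng = np.random.default_rng(0)
obj_roi = (slice(100, 120), slice(100, 120))   # low-contrast insert location (assumed)
bkg_roi = (slice(10, 60), slice(10, 60))       # uniform background region (assumed)

aid_cnr, snrf_cnr = [], []
for _ in range(20):                            # 20 matched test cases (assumed)
    base = rng.normal(100.0, 8.0, size=(256, 256))
    base[obj_roi] += 12.0                      # simulated low-contrast object
    aid_img = base + rng.normal(0.0, 2.0, size=base.shape)   # less residual noise
    snrf_img = base + rng.normal(0.0, 4.0, size=base.shape)  # more residual noise
    aid_cnr.append(cnr(aid_img, obj_roi, bkg_roi))
    snrf_cnr.append(cnr(snrf_img, obj_roi, bkg_roi))

t_stat, p_value = ttest_rel(aid_cnr, snrf_cnr)
print(f"mean CNR AID={np.mean(aid_cnr):.2f}, SNRF={np.mean(snrf_cnr):.2f}, p={p_value:.4f}")
```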

    2. Sample Size Used for the Test Set and Data Provenance

    The 510(k) summary provides the following information about the clinical test set:

    • Clinical Dataset Source: Patient image sequences were acquired from three hospitals:
      • Memorial Hermann Hospital (Houston, Texas, USA)
      • Waikato Hospital (Hamilton, New Zealand)
      • Saiseikai Kumamoto Hospital (Kumamoto, Japan)
    • Data Provenance: The study used "patient image sequences" for a side-by-side comparison. The summary does not specify whether the acquisition itself was prospective or retrospective, but the evaluation of pre-existing sequences makes this a retrospective study for the purpose of algorithm evaluation.
    • Sample Size: The exact number of patient image sequences or cases used in the clinical test set is not specified in the provided document. It only mentions that the sequences were split into four BMI subgroups.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    • Number of Experts: The document states the clinical comparison was "reviewed by United States board-certified interventional cardiologists." The exact number of cardiologists is not specified.
    • Qualifications: "United States board-certified interventional cardiologists." No mention of years of experience or other specific qualifications is provided.

    4. Adjudication Method for the Test Set

    The document describes a "side-by-side comparison" reviewed by experts in the clinical performance testing section. For the overall preference and individual image quality metrics, statistical tests (Wilcoxon signed rank test and Binomial test) were used. This implies that the experts rated or expressed preference for both AID and SNRF images, and these individual ratings/preferences were then aggregated and analyzed.

    The exact adjudication method (e.g., 2+1, 3+1 consensus) for establishing a ground truth or a final decision on image quality aspects is not explicitly stated. It seems each expert provided their assessment, and these assessments were then statistically analyzed for superiority rather than reaching a consensus for each image pair.
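As described above, each reader's assessments were aggregated and analyzed statistically (a Binomial test for overall preference and Wilcoxon signed rank tests for the individual metrics) rather than adjudicated to a consensus. The sketch below shows, purely for illustration, how such an analysis could be run; the preference counts and scores are invented, not data from the submission.

```python
# Illustrative sketch of the statistical analysis described above.
# The preference counts and Likert-style scores below are invented; the actual
# reader data from the 510(k) submission are not public.
from scipy.stats import binomtest, wilcoxon

# Overall preference: number of side-by-side comparisons in which readers chose
# the AID-processed sequence, tested against a 50% chance rate.
chose_aid, total = 78, 100            # assumed counts
pref = binomtest(chose_aid, total, p=0.5, alternative='greater')
print(f"preference for AID: {chose_aid}/{total}, p={pref.pvalue:.4g}")

# Per-metric scores (e.g., sharpness) for the same cases under both chains,
# compared with a paired Wilcoxon signed-rank test.
aid_sharpness  = [4, 5, 4, 4, 5, 3, 4, 5, 4, 4]   # assumed scores
snrf_sharpness = [3, 4, 3, 4, 4, 3, 3, 4, 3, 4]
stat, p = wilcoxon(aid_sharpness, snrf_sharpness, alternative='greater')
print(f"sharpness: Wilcoxon p={p:.4g}")
```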


    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study and the Effect Size of Human Reader Improvement with AI vs. Without AI Assistance

    • MRMC Study: Yes, a type of MRMC comparative study was conducted. The clinical performance testing involved multiple readers (US board-certified interventional cardiologists) evaluating multiple cases (patient image sequences).

    • Effect Size of Human Readers' Improvement with AI Assistance: The study directly compared AID-processed images to SNRF-processed images in a side-by-side fashion. It doesn't measure how much humans improve with AI assistance in a diagnostic task (e.g., how much their accuracy or confidence improves when using AI vs. not using AI). Instead, it measures the perceived improvement in image quality of the AI-processed images when evaluated by human readers.

      • The study determined: "the mean score of the AID imaging chain images was significantly higher than that of the SNRF imaging chain with regard to sharpness, contrast, confidence, noise, and the absence of image artifacts."
      • And for overall preference, "the Binomial test found that the image sequences denoised by AID were chosen significantly more than 50% of the time."

      This indicates a statistically significant preference for and higher perceived image quality in AID-processed images by readers. However, it does not quantify diagnostic performance improvement with AI assistance, as it wasn't a study of diagnostic accuracy but rather image quality assessment. The "confidence" metric might hint at improved reader confidence using AID images, but it's not a direct measure of diagnostic effectiveness.


    6. Standalone (Algorithm-Only, Without Human-in-the-Loop) Performance Study

    Yes, extensive standalone performance testing of the AID algorithm was conducted through "Performance Testing – Bench" and "Image Quality Evaluations." This involved objective metrics and phantom studies without human subjective assessment.

    Examples include:

    • Change in Image Level, Noise and Structure
    • Signal-to-Variance Ratio (SVR) and Signal-to-Noise Ratio (SNR)
    • Modulation Transfer Function (MTF)
    • Robustness to Detector Defects (visual comparison, but the algorithm's output is purely standalone)
    • Normalized Noise Power Spectrum (NNPS)
    • Image Lag Measurement
    • Contrast-to-Noise Ratio of a Low Contrast Object
    • Contrast-to-Noise Ratio of a High Contrast Object

    7. The Type of Ground Truth Used

    • For Bench Testing: The ground truth for bench tests was primarily established through physical phantoms and objective image quality metrics. For example, the anthropomorphic chest phantom, low-contrast phantom, and flat field fluoroscopic images provided known characteristics against which AID and SNRF performance were measured using statistical tests.
    • For Clinical Study: The ground truth for the clinical reader study was established by expert opinion/subjective evaluation (preference and scores for sharpness, contrast, noise, confidence, absence of artifacts) from "United States board-certified interventional cardiologists." There is no mention of a more objective ground truth like pathology or outcomes data for the clinical image evaluation.

    8. The Sample Size for the Training Set

    The document does not provide any information about the sample size used for the training set of the Artificial Intelligence Denoising (AID) algorithm.


    9. How the Ground Truth for the Training Set was Established

    The document does not provide any information about how the ground truth for the training set was established. It describes the AID as "Artificial Intelligence Denoising (AID) designed to reduce noise," implying a machine learning approach, but details on its training are missing from this summary.


    K Number
    K251645
    Date Cleared
    2025-09-26

    (120 days)

    Product Code
    Regulation Number
    892.1750
    Reference & Predicate Devices
    Predicate For
    N/A
    Why did this record match?
    510k Summary Text (Full-text Search) :

    a) Classification Name: Image-Intensified Fluoroscopic X-ray System

    b) Regulation Number: 21 CFR 892.1650
    Alphenix, INFX-8000C/B, INFX-8000C/S, V9.2
    Intended Use

    Self-Propelled CT Scan Base Kit, CGBA-035A:
    The movable gantry base unit allows the Aquilion ONE (TSX-308A) system to be installed in the same procedure room as the INFX-8000C system, enabling coordinated clinical use within a shared workspace. This configuration provides longitudinal positioning along the z-axis for image acquisition.

    Alphenix, INFX-8000C/B, INFX-8000C/S, V9.6 with Calculated DAP:
    This device is a digital radiography/fluoroscopy system used in a diagnostic and interventional angiography configuration. The system is indicated for use in diagnostic and angiographic procedures for blood vessels in the heart, brain, abdomen and lower extremities. The Calculated Dose Area Product (DAP) feature provides an alternative method for determining dose metrics without the use of a physical area dosimeter. This function estimates the cumulative reference air kerma, reference air kerma rate, and cumulative dose area product based on system parameters, including X-ray exposure settings, beam hardening filter configuration, beam limiting device position, and region of interest (ROI) filter status. The calculation method is calibration-dependent, with accuracy contingent upon periodic calibration against reference measurements.
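As a simplified illustration of the physics behind such a calculated dose estimate (not Canon's calibrated method, which per the summary also accounts for beam hardening filters, beam limiting device position, and ROI filter status), dose area product can be approximated as the reference air kerma multiplied by the beam area at the reference point, accumulated over the exposure. All values and function names in the sketch are assumptions.

```python
# Simplified, illustrative DAP estimate: DAP ≈ air-kerma rate × field area at the
# reference point, integrated over exposure time. The real feature described in
# the 510(k) is calibration-dependent and uses additional system parameters;
# the numbers below are assumed for illustration only.

def accumulate_dose_metrics(samples, field_width_cm, field_height_cm):
    """samples: list of (air_kerma_rate_mGy_per_s, duration_s) at the reference point."""
    area_cm2 = field_width_cm * field_height_cm          # collimator-defined field
    cumulative_kerma_mGy = sum(rate * dt for rate, dt in samples)
    cumulative_dap_mGy_cm2 = cumulative_kerma_mGy * area_cm2
    return cumulative_kerma_mGy, cumulative_dap_mGy_cm2

# Example: 30 s of fluoroscopy at 0.4 mGy/s followed by a 5 s acquisition at
# 2 mGy/s, with a 20 cm x 20 cm field at the reference point (assumed values).
kerma, dap = accumulate_dose_metrics([(0.4, 30.0), (2.0, 5.0)], 20.0, 20.0)
print(f"cumulative reference air kerma: {kerma:.1f} mGy, DAP: {dap:.0f} mGy*cm^2")
```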

    Device Description

    The Alphenix 4DCT is composed of the INFX-8000C interventional angiography system and the dynamic volume CT system, Aquilion ONE, TSX-308A. This combination enables patient access and efficient workflow for interventional procedures. Self-Propelled CT Scan Base Kit, CGBA-035A, is an optional kit intended to be used in conjunction with an Aquilion ONE / INFX-8000C based IVR-CT system. This device is attached to the Aquilion ONE CT gantry to support longitudinal movement and allow image acquisition in the z-direction (Z-axis), both axial and helical. When this option is installed, the standard CT patient couch is replaced with the fixed catheterization table utilized by the interventional x-ray system, INFX-8000C. The Self-Propelled CT Scan Base Kit, CGBA-035A, will be used as part of an Aquilion ONE / INFX-8000C based IVR-CT system. Please note, the intended uses of the Aquilion ONE CT System and the INFX-8000C Interventional X-Ray System remain the same. There have been no modifications made to the imaging chains in these FDA cleared devices and the base system software remains the same. Since both systems will be installed in the same room and to prevent interference during use, system interlocks have been incorporated into the systems.
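The summary only states that interlocks prevent the two co-installed systems from interfering during use and gives no design details. The sketch below is a purely conceptual, hypothetical illustration of a mutual-exclusion interlock (the class and method names are invented), not the actual IVR-CT interlock implementation.

```python
# Purely conceptual sketch of a mutual-exclusion interlock between two imaging
# systems sharing one room; the actual interlock design is not described in the
# 510(k) summary, so the names and behavior here are assumptions.
from enum import Enum, auto

class Modality(Enum):
    CT = auto()
    ANGIO = auto()

class RoomInterlock:
    """Grants acquisition to at most one modality at a time."""
    def __init__(self):
        self._active = None

    def request_acquisition(self, modality: Modality) -> bool:
        if self._active is None or self._active is modality:
            self._active = modality
            return True          # permission granted
        return False             # other modality is acquiring; block exposure

    def release(self, modality: Modality) -> None:
        if self._active is modality:
            self._active = None

interlock = RoomInterlock()
assert interlock.request_acquisition(Modality.ANGIO)      # angiography starts
assert not interlock.request_acquisition(Modality.CT)     # CT blocked meanwhile
interlock.release(Modality.ANGIO)
assert interlock.request_acquisition(Modality.CT)         # now CT may acquire
```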

    The Alphenix, INFX-8000C/B, INFX-8000C/S, V9.6 with Calculated DAP, is an interventional x-ray system with a ceiling suspended C-arm as its main configuration. Additional units include a patient table, x-ray high-voltage generator and a digital radiography system. The C-arms can be configured with designated x-ray detectors and supporting hardware (e.g. x-ray tube and diagnostic x-ray beam limiting device). The INFX-8000C system incorporates a Calculated Dose Area Product (DAP) feature, which provides an alternative method for determining dose metrics without the use of a physical area dosimeter. This function estimates the cumulative reference air kerma, reference air kerma rate, and cumulative dose area product based on system parameters, including X-ray exposure settings, beam hardening filter configuration, beam limiting device position, and region of interest (ROI) filter status. The calculation method is calibration-dependent, with accuracy contingent upon periodic calibration against reference measurements.

    AI/ML Overview

    N/A


    K Number
    K251650
    Date Cleared
    2025-09-16

    (110 days)

    Product Code
    Regulation Number
    892.1650
    Reference & Predicate Devices
    Predicate For
    N/A
    Why did this record match?
    510k Summary Text (Full-text Search) :

    K251650
    Trade/Device Name: Insight Enhanced™ DRF (EN-1002-01)
    Regulation Number: 21 CFR 892.1650
    Classification Name: System, X-Ray, Fluoroscopic, Image-Intensified

    Intended Use

    Intended for use by a qualified/trained physician or technician for obtaining fluoroscopic and radiographic images of the skull, spinal column, chest, abdomen, and extremities in adult and pediatric patients. Rx only.

    Device Description

    The Insight Enhanced™ DRF is an upgrade package that is designed to be installed on existing fluoroscopic systems, referred to as host systems, to convert the imaging chain from analog to digital. It is comprised of the Insight Enhanced™ DRF digital imaging chain and associated interfacing hardware. The Insight Enhanced™ DRF system includes medical-grade monitors, a computer, and a flat panel detector. Interface boards, cabling, and signal converters are included for interfacing with the host system. The X-ray generator and X-ray tube are not modified in any way. A flat panel detector replaces the image intensifier and camera on the base system. All of the fundamental features and principles of operation of the Insight Enhanced™ DRF are identical to the predicate device, Insight Enhanced™ (K200369). Both systems are upgrade packages that replace an analog imaging chain with a new digital one. The main components of the product (the PC, software, and detector) are identical for the predicate device and the subject device. Mounting hardware and interfacing components differ in order to add compatibility with GE OEC 9800/9900 C-arms. Insight Enhanced™ DRF is designed as an upgrade package for General Electric Legacy and P500 fluoroscopy rooms; this submission adds compatibility with the OEC 9800/9900.

    AI/ML Overview

    N/A


    K Number
    K244002
    Device Name
    AngioWaveNet
    Date Cleared
    2025-09-10

    (258 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    N/A
    Why did this record match?
    510k Summary Text (Full-text Search) :

    Workstation), V9.5
    Manufacturer: Canon Medical Systems Corporation
    Regulation Number: 21 CFR 892.1650
    Product Codes: OWB, LLZ, JAA
    Regulation Numbers: 21 CFR 892.2050, 21 CFR 892.1650

    Intended Use

    AngioWaveNet is indicated for use by qualified physicians or under their supervision to aid in the analysis and interpretation of X-ray coronary angiographic cines. AngioWaveNet is intended for use in adults during X-ray coronary angiographic imaging procedures as a clinically useful complement to the viewing of standard angiographic cines acquired during diagnostic coronary angiography procedures. AngioWaveNet software is intended for use to enhance the visibility of blood vessels, vascular structures, and related anatomical features within angiographic images, which may be clinically useful to the treating physician.

    Device Description

    AngioWaveNet spatio-temporal enhancement processing (STEP) is an Artificial Intelligence (AI) and machine learning (ML) system designed to enhance the visibility of blood vessels in angiograms using the unique spatial and temporal information contained in the frames of angiographic cines. The Angiowave STEP method employs a neural network architecture in the form of an encoder-decoder, which sequentially takes multiple contiguous frames of an angiogram as input and uses this information to provide enhanced visualization of vessels in the central frame. Angiowave has developed a novel and versatile implementation of its processing in a DICOM node, which has the benefit of requiring no additional on-premises hardware. In addition to the cine processing, the DICOM node handles other logistical tasks such as anonymization, image storage and retrieval (e.g., to/from a cloud location), communication and interoperability, data integrity and security, DICOM conformance, and data archiving and management. This full implementation, which uses a cloud location for processor-intensive tasks, is termed AngioWaveNet. The AI/ML model at the heart of STEP was trained on a comprehensive dataset of 300 anonymized angiograms, averaging 70 frames each, provided by a large non-profit healthcare organization that operates in Maryland and the Washington, D.C. region. The dataset spanned a range of clinical and demographic characteristics presenting to the catheterization laboratory and was acquired from 2003 to 2016 using Philips Allura Xper systems. The dataset was randomly sampled from a large clinical study population, whose baseline patient characteristics have been published and were consistent with a typical coronary catheterization lab population.
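The summary describes the STEP network only as an encoder-decoder that takes multiple contiguous frames and enhances the central frame; the architecture itself is not disclosed. As a minimal, hypothetical sketch of that input/output arrangement (the layer sizes, frame count, and class names below are assumptions, not AngioWaveNet's design), contiguous frames can be stacked as input channels and decoded to a single enhanced frame:

```python
# Minimal, hypothetical sketch of a spatio-temporal encoder-decoder that maps a
# stack of contiguous angiogram frames to an enhanced central frame. This is NOT
# AngioWaveNet's actual architecture; layer sizes and frame count are assumed.
import torch
import torch.nn as nn

class FrameStackEncoderDecoder(nn.Module):
    def __init__(self, n_frames: int = 5):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(n_frames, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, n_frames, H, W) -> enhanced central frame (batch, 1, H, W)
        return self.decoder(self.encoder(frames))

# Example: enhance the central frame of a 5-frame window from a 512x512 cine.
model = FrameStackEncoderDecoder(n_frames=5)
window = torch.randn(1, 5, 512, 512)
enhanced_central = model(window)          # shape: (1, 1, 512, 512)
```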

    AI/ML Overview

    Here's a structured summary of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) clearance letter for AngioWaveNet:


    Acceptance Criteria and Device Performance Study for AngioWaveNet

    1. Table of Acceptance Criteria and Reported Device Performance

    | Acceptance Criteria Category | Specific Criterion | Reported Device Performance (AngioWaveNet) |
    | --- | --- | --- |
    | I. Processing Success Rate | 100% processing success rate for analyzed cases. | Achieved: 100% processing success rate, all analyzed cases met predefined patient-level success criteria. |
    | II. Clinical Decision Impact (CPI) | Neutral or positive clinical decision impact (Likert score ≥ 3). | Achieved: Mean Likert score of 3.23 (range 3.12–3.44 across three readers), indicating neutral or positive impact. |
    | III. Ease of Visualization Improvement | Improvement in ease of visualization for a significant percentage of tasks. | Achieved: Improved in 99.4% of tasks. |
    | IV. False Positives/Negatives | 0% unresolved false positives/negatives for most readers. | Achieved: 0% unresolved false positives/negatives for most readers. |

    2. Sample Size and Data Provenance for the Test Set

    • Sample Size (Patients): 31 individual patients.
    • Sample Size (Cines/Angiograms): 97 angiograms (cines), with each patient contributing 3-4 cines (mean 3.13 cines/patient).
    • Sample Size (Vessels Assessed): 169 vessels.
    • Sample Size (Tasks Performed): 3,211 tasks (detection, localization, quantification, characterization) performed across all cines.
    • Data Provenance:
      • Country of Origin: Not explicitly stated, but derived from the "Corewell Angiographic database," suggesting data from a healthcare system in the United States (potentially consistent with the "Maryland and the Washington, D.C. region" mentioned for training data, though this is for the test set).
    • Retrospective/Prospective: The data was "sourced from the Corewell Angiographic database," indicating it is retrospective. The cines were reported as captured in March of 2025, while the reader study was "conducted in July and August of 2025" (the submission is dated August 2025), so the sequences were acquired before the study and evaluated retrospectively.

    3. Number of Experts and Qualifications for Ground Truth for the Test Set

    • Number of Experts: Three (3).
    • Qualifications of Experts: "Experienced interventional cardiologists." No specific years of experience are provided.

    4. Adjudication Method for the Test Set

    The document states, "Blinding of readers to each other's assessments... prevented influence of one reader on another." This suggests that the readers made their assessments independently. However, it does not explicitly describe an adjudication method (like 2+1 or 3+1 consensus) for resolving discrepancies or establishing a single "ground truth" for the test set from the three readers' evaluations. The reported results (e.g., mean Likert score, percentage improvement) appear to be an aggregate of their individual assessments without a formal adjudication process to reconcile differences.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was an MRMC study done? Yes, a task-based reader study involving three interventional cardiologists evaluating patient cases with the software was conducted.
    • Effect size of human readers improvement with AI vs. without AI assistance: The study focused on the impact of the software on decision-making and visualization, rather than a direct comparison of readers with and without AI assistance for a specific metric.
      • Clinical Decision Impact: Mean Likert score of 3.23 (neutral to positive impact).
      • Ease of Visualization: Improved in 99.4% of tasks.
      • False Positives/Negatives: 0% unresolved for most readers.
        While these indicate improvement in perception and influence on decisions, a direct "effect size" of how much readers improve in accuracy or efficiency due to AI assistance compared to no AI assistance is not quantified in the provided text (e.g., AUC difference, sensitivity/specificity gains). The study rather reports the performance when using the AI as a complement.

    6. Standalone Performance Study

    The information provided describes a "Task-Based Reader Study" where human readers (cardiologists) assessed the software's impact. The software's performance is reported in terms of its ability to enhance visualization and influence clinical decisions when used by these readers. This is not a standalone (algorithm only without human-in-the-loop performance) study. The results are intrinsically linked to human interpretation of the enhanced images.

    7. Type of Ground Truth Used for the Test Set

    The ground truth appears to be based on the expert consensus or interpretation of the three interventional cardiologists regarding the "angiographic pathologic determination tasks" and "ease of visualization." There is no mention of an independent, objective ground truth such as pathology reports or long-term outcomes data for the test set.

    8. Sample Size for the Training Set

    • Sample Size (Angiograms): 300 anonymized angiograms.
    • Sample Size (Frames): Averaging 70 frames each (total of approximately 21,000 frames).

    9. How the Ground Truth for the Training Set Was Established

    The document states, "The AI/ML model at the heart of STEP was trained on a comprehensive dataset of 300 anonymized angiograms... provided by a large non-profit healthcare organization that operates in Maryland and the Washington, D.C. region."

    It does not explicitly describe how the ground truth for this training set was established. It mentions the dataset "spanned a range of clinical and demographic characteristics" and was "randomly sampled from a large clinical study population." Typically, for training such models, ground truth would involve expert annotations (e.g., outlining vessels, identifying pathologies) on the original images, but this detail is missing from the provided text.


    K Number
    K251523
    Device Name
    Cios Spin
    Date Cleared
    2025-07-29

    (74 days)

    Product Code
    Regulation Number
    892.1650
    Reference & Predicate Devices
    Predicate For
    N/A
    Why did this record match?
    510k Summary Text (Full-text Search) :

    Re: K251523
    Trade/Device Name: Cios Spin
    Regulation Number: 21 CFR 892.1650
    Interventional Fluoroscopic X-Ray System
    Classification Panel: Radiology

    Intended Use

    The Cios Spin is a mobile X-ray system designed to provide X-ray imaging of the anatomical structures of patients during clinical applications. Clinical applications may include but are not limited to interventional fluoroscopic, gastro-intestinal, endoscopic, urologic, pain management, orthopedic, neurologic, vascular, cardiac, critical care, and emergency room procedures. The patient population may include pediatric patients.

    Device Description

    The Cios Spin (VA31A) mobile fluoroscopic C-arm X-ray System is designed for the surgical environment. The Cios Spin provides comprehensive image acquisition modes to support orthopedic and vascular procedures. The system consists of two major components:
    a. The C-arm, with the X-ray source on one side and the flat panel detector on the opposite side. The C-arm can be angulated in both planes and can be lifted vertically, shifted to the side, and moved forward/backward by an operator.
    b. The second unit is the image display station with a moveable trolley for the image processing and storage system, image display and documentation. Both units are connected to each other with a cable.

    The predicate device, the Cios Spin Mobile X-ray System, was cleared under Premarket Notification K210054 on February 5, 2021. Siemens Medical Solutions USA, Inc. submits this Traditional 510(k) to request clearance for the Subject Device, Cios Spin (VA31A). The following modifications are incorporated into the Predicate Device to create the Subject Device, for which Siemens is seeking 510(k) clearance:

    1. Software updated from VA30 to VA31A to support the below software features
      A. Updated Retina 3D for optional enlarged 3D Volume of 25cm x 25cm x 16cm
      B. Introduction of NaviLink 3D Lite
      C. Universal Navigation Interface (UNI)
      D. Updated InstantLink with Extended NXS Interface
    2. Updated Collimator
    3. Updated FLC Imaging System PC with new PC hardware; updated AppHost PC with a high-performance graphics card
    4. New Eaton UPS 5P 850i G2 as successor of the UPS 5P 850i due to obsolescence
    AI/ML Overview

    Based on the provided FDA 510(k) clearance letter for the Siemens Cios Spin (VA31A), here's an analysis of the acceptance criteria and the study proving the device meets them:

    Important Note: The provided document is a 510(k) summary, which often summarizes testing without providing granular details on study design, sample sizes, and ground truth establishment to the same extent as a full clinical study report. Therefore, some information requested (e.g., specific number of experts for ground truth, adjudication methods) may not be explicitly stated in this summary. The focus of this 510(k) is primarily on demonstrating substantial equivalence to a predicate device, especially for software and hardware modifications, rather than a de novo effectiveness study.


    Acceptance Criteria and Reported Device Performance

    The 510(k) summary primarily focuses on demonstrating that the modifications to the Cios Spin (VA31A) do not introduce new safety or effectiveness concerns compared to its predicate device (Cios Spin VA30) and a reference device (CIARTIC Move VB10A) that incorporates some of the new features. The acceptance criteria are implicitly tied to meeting various industry standards and demonstrating functionality and safety through non-clinical performance testing.

    Table 1: Acceptance Criteria and Reported Device Performance

    | Acceptance Criteria Category | Specific Criteria (Implicit/Explicit from Text) | Reported Device Performance / Evidence |
    | --- | --- | --- |
    | Software Functionality | Software specifications met acceptance criteria as stated in test plans. | "All test results met all acceptance criteria." |
    | Enlarged Volume Field of View (Retina 3D) | Functionality and performance of new 25cm x 25cm x 16cm 3D volume. | "A non-clinical test 'Enlarged Volume Field of View' testing were conducted." The feature was cleared in the CIARTIC Move (K233748), implying its performance was previously validated. |
    | NaviLink 3D Lite Functions | Functionality and performance of the new navigation interface. | Part of software updates VA31A; "All test results met all acceptance criteria." |
    | Universal Navigation Interface (UNI) | Functionality and performance of UNI. | Part of software updates VA31A; "All test results met all acceptance criteria." UNI was present in the reference device CIARTIC Move (K233748). |
    | InstantLink with Extended NXS Interface | Functionality and performance of updated interface. | Part of software updates VA31A; "All test results met all acceptance criteria." |
    | Electrical Safety | Compliance with IEC 60601-1, IEC 60601-2-43, IEC 60601-2-54. | "The system complies with the IEC 60601-1, IEC 60601-2-43, and IEC 60601-2-54 standards for safety." |
    | Electromagnetic Compatibility (EMC) | Compliance with IEC 60601-1-2. | "The system complies with... the IEC 60601-1-2 standard for EMC." |
    | Human Factors/Usability | Device is safe and effective for intended users, uses, and environments. Human factors addressed. | "The Human Factor Usability Validation showed that Human factors are addressed in the system test according to the operator's manual and in clinical use tests with customer reports and feedback forms." |
    | Risk Mitigation | Identified hazards are controlled; risk analysis completed. | "The Risk analysis was completed, and risk control was implemented to mitigate identified hazards." |
    | Overall Safety & Effectiveness | No new issues of safety or effectiveness introduced by modifications. | "Results of all conducted testing and clinical assessments were found acceptable and do not raise any new issues of safety or effectiveness." |
    | Compliance with Standards/Regulations | Adherence to various 21 CFR regulations and standards (e.g., ISO 14971, IEC 62304). | Extensive list of complied standards, including 21 CFR sections 1020.30, 1020.32, and specific IEC/ISO standards mentioned in Section 9. |

    Study Details Proving Device Meets Acceptance Criteria

    The study described is primarily a non-clinical performance testing and software verification and validation effort rather than a traditional clinical trial.

    1. Sample sizes used for the test set and data provenance:

      • Test Set Sample Size: Not explicitly stated as a "sample size" in the context of patients or images for performance evaluation. The testing described is "Unit, Subsystem, and System Integration testing" and "software verification and regression testing." This type of testing uses a diverse set of test cases designed to cover functionality, performance, and safety requirements. For the "Enlarged Volume Field of View," it's a non-clinical test, likely using phantoms or simulated data.
      • Data Provenance: Not applicable in terms of patient data provenance for the non-clinical and software testing described. This is bench testing and software validation. Customer reports and feedback forms are mentioned for human factors, but specific details on their origin (country, etc.) are not provided. The manufacturing site is Kemnath, Germany.
    2. Number of experts used to establish the ground truth for the test set and qualifications of those experts:

      • Not explicitly stated. For non-clinical performance and software testing, "ground truth" is typically established by engineering specifications, known correct outputs for given inputs, and compliance with industry standards. If clinical use tests involved subjective evaluation, the number and qualifications of experts are not detailed, but they are implied to be "healthcare professionals" (operators are "adequately trained").
    3. Adjudication method for the test set:

      • Not applicable/Not explicitly stated. For software and bench testing, adjudication usually refers to a process of resolving discrepancies in ratings or measurements. Given the nature of this submission (software/hardware modifications and non-clinical testing), formal clinical adjudication methods (like 2+1, 3+1 for image reviews) are not described as part of the primary evidence. Acceptance is based on test cases meeting predefined engineering requirements.
    4. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with AI versus without AI assistance:

      • No. An MRMC study was not conducted. This 510(k) is for a mobile X-ray system with software and hardware updates, not an AI-assisted diagnostic device where evaluating human reader performance with and without AI would be relevant. The "AI" mentioned (Retina 3D, NaviLink 3D) refers to advanced imaging/navigation features, not machine learning for diagnostic interpretation.
    5. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:

      • Yes, implicitly. The "non-clinical test 'Enlarged Volume Field of View' testing" and other "Unit, Subsystem, and System Integration testing" for functionality and performance are essentially standalone tests of the device's components and software without immediate human interpretation in a diagnostic loop. The acceptance criteria for these tests refer to technical performance endpoints, not diagnostic accuracy.
    6. The type of ground truth used:

      • Engineering Specifications and Standard Compliance: For the performance and safety testing, the "ground truth" is adherence to predefined engineering requirements (e.g., image dimensions, system response times, electrical safety limits) and compliance with national and international industry standards (e.g., IEC 60601 series, ISO 14971, NEMA PS 3.1).
      • For the Human Factors Usability Validation, "customer reports and feedback forms" serve as a form of "ground truth" regarding user experience and usability.
    7. The sample size for the training set:

      • Not applicable. This submission describes modifications to an X-ray imaging system, not the development of a machine learning algorithm that requires a separate training set. The existing software (VA30) was updated to VA31A. The "training" for the software itself would have occurred during its initial development, not for this specific 510(k) submission.
    8. How the ground truth for the training set was established:

      • Not applicable. As above, this information is not relevant to this specific 510(k) submission, as it focuses on modifications to an existing device rather than the development of a new AI/ML algorithm requiring a training set and its associated ground truth.

    K Number
    K243432
    Manufacturer
    Date Cleared
    2025-07-22

    (259 days)

    Product Code
    Regulation Number
    892.1650
    Reference & Predicate Devices
    Predicate For
    N/A
    Why did this record match?
    510k Summary Text (Full-text Search) :

    Name: Vascular Navigation PAD 2.0
    Navigation Software Vascular PAD
    Regulation Number: 21 CFR 892.1650
    Interventional fluoroscopic x-ray system
    Product Code: OWB; LLZ
    Panel: Radiology
    Predicate Device: K222070 – EndoNaut

    Intended Use

    The software supports image guidance by overlaying vessel anatomy onto live fluoroscopic images in order to navigate guidewires, catheters, stents and other endovascular devices.

    The device is indicated for use by physicians for patients undergoing endovascular PAD interventions of the lower limbs including iliac vessels.

    The device is intended to be used in adults.

    There is no other demographic, ethnic or cultural limitation for patients.

    The information provided by the software or system is in no way intended to substitute for, in whole or in part, the physician's judgment and analysis of the patient's condition.

    Device Description

    The Subject Device is standalone medical device software supporting image guidance in endovascular procedures for peripheral artery disease (PAD) in the lower limbs, including the iliac vessels. Running on a suitable platform and connected to an angiographic system, the Subject Device receives and displays the images acquired with the angiographic system as a video stream. It provides the ability to save and process single images out of that video stream and to create a vessel tree consisting of angiographic images. This allows the software to enrich the video stream with the saved vessel tree in order to continuously localize endovascular devices with respect to the vessel anatomy.

    The medical device is intended for use with compatible hardware and software and must be connected to a compatible angiographic system via video connection.
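The roadmap concept described above (overlaying a saved vessel tree onto the live fluoroscopic stream) can be illustrated with a very small sketch. This is not the vendor's algorithm: the registration offset is assumed to already be known, the blending and color choices are arbitrary, and only the basic overlay step is shown.

```python
# Illustrative sketch of overlaying a saved vessel-tree mask on a live fluoro
# frame. Not the vendor's implementation: the registration offset is assumed to
# be known, and colors/alpha are arbitrary choices for illustration.
import numpy as np

def overlay_roadmap(live_frame, vessel_mask, offset_xy=(0, 0), alpha=0.4):
    """live_frame : 2D grayscale array, values in [0, 1]
       vessel_mask: 2D boolean array (saved vessel tree), same shape
       offset_xy  : (dx, dy) translation from 2D registration, in pixels
       Returns an RGB image with the shifted vessel tree tinted over the frame."""
    dx, dy = offset_xy
    shifted = np.roll(np.roll(vessel_mask, dy, axis=0), dx, axis=1)
    rgb = np.repeat(live_frame[:, :, None], 3, axis=2)
    # Tint vessel pixels toward a highlight color (here: a cyan-like boost).
    rgb[shifted, 1] = (1 - alpha) * rgb[shifted, 1] + alpha * 1.0
    rgb[shifted, 2] = (1 - alpha) * rgb[shifted, 2] + alpha * 1.0
    return np.clip(rgb, 0.0, 1.0)

# Example with synthetic data.
frame = np.random.default_rng(1).uniform(0.2, 0.8, size=(512, 512))
mask = np.zeros((512, 512), dtype=bool)
mask[200:210, :] = True                    # a crude "vessel"
composite = overlay_roadmap(frame, mask, offset_xy=(3, -2))
```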

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study information for the Vascular Navigation PAD 2.0, based on the provided FDA 510(k) clearance letter:


    Acceptance Criteria and Device Performance for Vascular Navigation PAD 2.0

    1. Table of Acceptance Criteria and Reported Device Performance

    | Feature/Metric | Acceptance Criteria | Reported Device Performance |
    | --- | --- | --- |
    | Video Latency (Added) | ≤ 250 ms | ≤ 250 ms (for Ziehm Vision RFD 3D, Siemens Cios Spin, and combined) |
    | Capture Process Timespan (initiation to animation start) | ≤ 1 s | Successfully passed |
    | Stitching Timespan (entering stitching to calculation result) | ≤ 10 s | Successfully passed |
    | Roadmap/Overlay Display Timespan (manual initiation / selection / realignment to updated display) | ≤ 10 s | Successfully passed |
    | System Stability (Stress and Load, Anti-Virus) | No crashes, responsive application (no significant waiting periods), no significant latencies of touch interaction/animations, normal interaction possible. | Successfully passed |
    | Level Selection and Overlay Alignment (True-Positive Rate for suggested alignments) | Not explicitly stated as a number, but implied to be high for acceptance. | 95.71 % |
    | Level Selection and Overlay Alignment (Average Registration Accuracy for proposed alignments) | Not explicitly stated (but the stated "2D deviation for roadmapping ≤ 5 mm" likely applies here as an overall accuracy goal). | 1.49 ± 2.51 mm |
    | Level Selection Algorithm Failures | No failures | No failures during the test |
    | Modality Detection (Prediction Rate in determining image modality) | Not explicitly stated ("consequently, no images were misidentified" implies 100% accuracy) | 99.25 % |
    | Modality Detection (Accuracy for each possible modality) | Not explicitly stated (but 100% for acceptance) | 100 % |
    | Roadmapping Accuracy (Overall Accuracy) | ≤ 5 mm | 1.57 ± 0.85 mm |
    | Stitching Algorithm (True-Positive Rate for suggested alignments) | ≥ 75 % | 95 % |
    | Stitching Algorithm (False-Positive Rate for incorrect proposal of stitching) | ≤ 25 % | 6.4 % |
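The table above reports true-positive rates, false-positive rates, and mean ± SD registration errors judged against manually achieved gold standards. The manufacturer's evaluation code is not available; the sketch below only illustrates how such summary metrics could be computed from per-case results, with all values invented for illustration.

```python
# Illustrative computation of the kinds of metrics reported above: true-positive
# rate of suggested alignments, false-positive rate, and mean ± SD registration
# error versus a manual gold standard. All data below are invented.
import numpy as np

# Cases where the manual gold standard says a valid alignment/stitch exists:
gs_matchable = np.array([1, 1, 1, 0, 1, 1, 1, 1, 0, 1], dtype=bool)   # assumed
# Cases where the algorithm proposed an alignment/stitch:
alg_proposed = np.array([1, 1, 1, 1, 1, 0, 1, 1, 0, 1], dtype=bool)   # assumed

# TPR: of the matchable cases, how many received a proposal from the algorithm?
tpr = (alg_proposed & gs_matchable).sum() / gs_matchable.sum()
# FPR: of the non-matchable cases, how many got an (incorrect) proposal anyway?
fpr = (alg_proposed & ~gs_matchable).sum() / (~gs_matchable).sum()

# 2D registration error (mm) between proposed and gold-standard alignments.
errors_mm = np.array([1.2, 0.8, 2.1, 1.5, 0.9, 3.0, 1.1, 1.7])        # assumed
print(f"TPR={tpr:.2%}, FPR={fpr:.2%}, "
      f"registration error = {errors_mm.mean():.2f} ± {errors_mm.std(ddof=1):.2f} mm")
```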

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size: Not explicitly stated as a single number.
      • For Latency Tests: Data from Siemens Cios Spin and Ziehm Vision RFD 3D.
      • For Level Selection and Overlay Alignment: Images acquired with Siemens Cios Spin, Ziehm Vision RFD 3D, and GE OEC Elite CFD.
      • For Modality Detection: Image data from Siemens Cios Spin, GE OEC Elite CFD, Philips Zenition, and Ziehm Vision RFD 3D.
      • For Roadmapping Accuracy: Image data from Siemens Cios Spin.
      • For Stitching Algorithm: Image data from Philips Azurion, Siemens Cios Spin, GE OEC Elite CFD, and Ziehm Vision RFD 3D.
    • Data Provenance:
      • Retrospective/Prospective: Not explicitly stated for all tests. However, the Level Selection and Overlay Alignment and Roadmapping Accuracy tests mention using "cadaveric image data" which implies a controlled, likely prospective, acquisition for testing purposes rather than retrospective clinical data. Other tests reference "independent image data" or data "acquired using" specific devices, suggesting a dedicated test set acquisition.
      • Country of Origin: Not specified.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Number of Experts: Not explicitly stated.
    • Qualifications of Experts: Not explicitly stated. The document mentions "manually achieved gold standard registrations" for Level Selection and Overlay Alignment and "manually comparing achieved gold standard (GS) stitches" for the Stitching Algorithm, implying human expert involvement in establishing ground truth, but specific details on the number or qualifications of these "manual" reviewers are absent. The phrase "if a human would consider the image pairs matchable" in the stitching section further supports human-determined ground truth.

    4. Adjudication Method for the Test Set

    • Adjudication Method: Not explicitly described. The ground truth seems to be established through "manually achieved gold standard" or "manual comparison," implying a single expert or a common understanding rather than a formal adjudication process between multiple conflicting expert opinions (e.g., 2+1 or 3+1).

    5. Multi Reader Multi Case (MRMC) Comparative Effectiveness Study

    • Was it done? No. The submission focuses on standalone technical performance measures and accuracy metrics of the algorithm rather than comparing human reader performance with and without AI assistance.

    6. Standalone Performance Study

    • Was it done? Yes. The entire "Performance Data" section details the algorithm's performance in various standalone tests, such as latency, stress/load, level selection and overlay alignment, modality detection, roadmapping accuracy, and stitching algorithm performance. The results are quantitative metrics of the device itself.

    7. Type of Ground Truth Used

    • Type of Ground Truth:
      • Expert Consensus / Manual Gold Standard: For Level Selection and Overlay Alignment ("manually achieved gold standard registrations") and for the Stitching Algorithm ("manually comparing achieved gold standard (GS) stitches"). This implies human experts defined the correct alignment or stitch.
      • Technical Metrics: For Latency, Capture Process, Stitching Timespan, Roadmap/Overlay Display Timespan, and System Stability, the ground truth is based on objective technical measurements against defined criteria.
      • True Modality: For Modality Detection, the ground truth is simply the actual modality of the image (fluoroscopy vs. angiography) as known during test data creation or acquisition.

    8. Sample Size for the Training Set

    • Sample Size: Not provided. The submission focuses solely on the performance characteristics of the tested device and its algorithms, without detailing the training data or methods used to develop those algorithms.

    9. How the Ground Truth for the Training Set Was Established

    • How Established: Not provided. As with the training set size, the information about the training process and ground truth for training is outside the scope of the clearance letter's performance data section.

    K Number
    K250660
    Date Cleared
    2025-07-14

    (131 days)

    Product Code
    Regulation Number
    892.1650
    Reference & Predicate Devices
    Predicate For
    N/A
    Why did this record match?
    510k Summary Text (Full-text Search) :

    K250660
    Trade/Device Name: LUMINOS Q.namix T, LUMINOS Q.namix R
    Regulation Number: 21 CFR 892.1650
    Classification Panel: Radiology
    Classification Regulation: 21 CFR §892.1650
    System, x-ray, fluoroscopic, image-intensified, Solid State X-ray imager
    Classification Product Code: JAA, OWB

    Intended Use

    LUMINOS Q.namix T and LUMINOS Q.namix R are devices intended to visualize anatomical structures by converting an X-ray pattern into a visible image. They are multifunctional, general R/F systems suitable for routine radiography and fluoroscopy examinations, including gastrointestinal and urogenital examinations and specialist areas like arthrography, angiography and pediatrics.

    LUMINOS Q.namix T and LUMINOS Q.namix R are not intended to be used for mammography examinations.

    Device Description

    The LUMINOS Q.namix T is an under-table fluoroscopy system and the LUMINOS Q.namix R is an over-table fluoroscopy system. Both systems are multifunctional, general R/F systems suitable for routine radiography and fluoroscopy examinations, including gastrointestinal and urogenital examinations and specialist areas like arthrography, angiography and pediatrics. They are designed as modular systems with components such as the main fluoro table including a fixed fluoroscopy detector and X-ray tube, a ceiling suspension with X-ray tube, a Bucky wall stand, an X-ray generator, monitors, a Bucky tray in the table, as well as portable wireless and fixed integrated detectors that may be combined into different configurations to meet specific customer needs.

    AI/ML Overview

    This FDA 510(k) clearance letter and summary discuss the LUMINOS Q.namix T and LUMINOS Q.namix R X-ray systems. The provided documentation does not include specific acceptance criteria (e.g., numerical thresholds for image quality, diagnostic accuracy, or performance metrics) in the same way an AI/ML device often would. Instead, it relies on demonstrating substantial equivalence to predicate devices and adherence to recognized standards.

    The study presented focuses primarily on image quality evaluation for the new detectors (X.fluoro and X.wi-D24) for diagnostic acceptability, rather than establishing acceptance criteria for the entire system's overall performance.

    Here's an attempt to extract and present the requested information based on the provided document:


    1. Table of Acceptance Criteria and Reported Device Performance

    As explicit quantitative acceptance criteria for the overall device performance are not stated in the provided 510(k) summary, this section will reflect the available qualitative performance assessment for the new detectors. The primary "acceptance criterion" implied for the overall device is substantial equivalence to predicate devices and acceptability for diagnostic use.

    | Feature/Metric | Acceptance Criteria (Implied/Direct) | Reported Device Performance (LUMINOS Q.namix T/R with new detectors) |
    | --- | --- | --- |
    | Overall Device Equivalence | Substantially equivalent to predicate devices (Luminos Agile Max, Luminos dRF Max) in indications for use, design, material, functionality, technology, and energy source. | Systems are comparable and substantially equivalent to predicate devices. Test results show comparability. |
    | New Detector Image Quality (X.fluoro, X.wi-D24) | Acceptable for diagnostic use in radiography & fluoroscopy. | Evaluated images and fluorography studies from different body regions were qualified for proper diagnosis by a US board-certified radiologist and by expert evaluations. |
    | Compliance with Standards | Compliance with relevant medical electrical safety, performance, and software standards (e.g., IEC 60601 series, ISO 14971, IEC 62304, DICOM). | The LUMINOS Q.namix T/LUMINOS Q.namix R systems were tested and comply with the listed voluntary standards. |
    | Risk Management | Application of risk management process (per ISO 14971). | Risk Analysis was applied. |
    | Software Life Cycle | Application of software life cycle processes (per IEC 62304). | IEC 62304 (Medical device software - Software life cycle processes) was applied. |
    | Usability | Compliance with usability engineering standards (per IEC 60601-1-6, IEC 62366-1). | IEC 60601-1-6 and IEC 62366-1 were applied. |

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Description: "expert evaluations" for the new detectors X.fluoro and X.wi-D24.
    • Sample Size: The exact number of images or fluorography studies evaluated is not specified. The document mentions "multiple images and fluorography studies from different body regions" for the US board-certified radiologist's evaluation.
    • Data Provenance:
      • Countries of Origin: Germany (University Hospital Augsburg, Klinikum rechts der Isar Munich, Herz-Jesu-Krankenhaus Münster/Hiltrup) and Belgium (ZAS Jan Palfijn Hospital of Merksem).
      • Retrospective or Prospective: Not explicitly stated. Because the evaluation concerns new detectors and is described as a clinical image quality evaluation, it presumably used real or simulated clinical scenarios, but the document does not say whether the images were collected retrospectively or prospectively.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    • Number of Experts:
      • Initial Evaluations: Multiple "expert evaluations" (implies more than one) were conducted across the listed hospitals. The exact number of individual experts is not specified.
      • Specific Evaluation: One "US board-certified radiologist" performed a dedicated clinical image quality evaluation.
    • Qualifications of Experts:
      • For the general "expert evaluations": Not specified beyond being "experts."
      • For the specific evaluation: "US board-certified radiologist." No mention of years of experience is provided.

    4. Adjudication Method for the Test Set

    The document does not specify any formal adjudication method (e.g., 2+1, 3+1 consensus voting) for establishing ground truth or evaluating the image quality. The evaluations appear to be individual or group assessments leading to a conclusion of "acceptability for diagnostic use."
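    For context, a 2+1 scheme is commonly understood as two primary readers rating each case independently, with a third reader adjudicating only when the first two disagree. The sketch below illustrates that rule in general terms; it is not a process described anywhere in this submission.

```python
# Illustrative 2+1 adjudication rule: two primary readers rate each case,
# and a third reader breaks ties. This is a generic sketch, not a process
# described in the 510(k) summary.
def adjudicate_2_plus_1(reader1: str, reader2: str, reader3: str) -> str:
    """Return the consensus label for one case under a 2+1 scheme."""
    if reader1 == reader2:
        return reader1   # primary readers agree: no adjudication needed
    return reader3       # disagreement: the third reader's rating decides

# Example: the primary readers disagree, so the adjudicator's call stands.
print(adjudicate_2_plus_1("acceptable", "not acceptable", "acceptable"))
```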


    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was an MRMC study done? The document does not describe a formal MRMC comparative effectiveness study designed to quantify the improvement of human readers with AI vs. without AI assistance.
    • Effect Size of Human Reader Improvement: Therefore, no effect size is reported.
      • Note: While the device includes "AI-based Auto Cropping" and "AI based Automatic collimation," the study described is an evaluation of the detectors' image quality and the overall system's substantial equivalence, not the clinical impact of these specific AI features on human reader performance.

    6. Standalone Performance Study (Algorithm Only)

    • The document primarily describes an evaluation of the new detectors within the LUMINOS Q.namix T/R systems and the overall system's substantial equivalence.
    • While the device includes "AI-based Auto Cropping" and "AI based Automatic collimation," the document does not report on a standalone performance study specifically for these AI algorithms in isolation from the human-in-the-loop system. The AI features are listed as technological characteristics that contribute to the device's overall updated design.

    7. Type of Ground Truth Used

    For the detector image quality evaluation, the ground truth was based on expert assessment ("qualified for proper diagnosis"). This falls under expert consensus or expert judgment regarding diagnostic acceptability.


    8. Sample Size for the Training Set

    The document does not provide any information regarding the sample size used for the training set for any AI components. The focus of this 510(k) summary is on substantiating equivalence and safety/effectiveness of the entire X-ray system, not on the development of individual AI algorithms within it.


    9. How the Ground Truth for the Training Set Was Established

    Since no information is provided about a training set, the method for establishing its ground truth is not mentioned in the document.



    K Number
    K251520
    Date Cleared
    2025-07-09

    (54 days)

    Product Code
    Regulation Number
    892.1650
    Reference & Predicate Devices
    Predicate For
    N/A
    Why did this record match?
    510k Summary Text (Full-text Search) :

    19355

    Re: K251520
    Trade/Device Name: Cios Alpha; Cios Flow
    Regulation Number: 21 CFR 892.1650
    Interventional Fluoroscopic X-Ray System
    Classification Panel: Radiology
    Regulation Number: 21 CFR §892.1650

    AI/ML | SaMD | IVD (In Vitro Diagnostic) | Therapeutic | Diagnostic | is PCCP Authorized | Third Party | Expedited Review
    Intended Use

    The Cios Alpha is a mobile X-Ray system designed to provide X-ray imaging of the anatomical structures of patient during clinical applications. Clinical applications may include, but are not limited to: interventional fluoroscopic, gastro-intestinal, endoscopic, urologic, pain management, orthopedic, neurologic, vascular, cardiac, critical care, and emergency room procedures. The patient population may include pediatric patients.

    The Cios Flow is a mobile X-Ray system designed to provide X-ray imaging of the anatomical structures of patient during clinical applications. Clinical applications may include, but are not limited to: interventional fluoroscopic, gastro-intestinal, endoscopic, urologic, pain management, orthopedic, neurologic, vascular, cardiac, critical care and emergency room procedures. The patient population may include pediatric patients.

    Device Description

    The Cios Alpha and Cios Flow (VA31A) mobile fluoroscopic C-arm X-ray System is designed for the surgical environment. The Cios Alpha and Cios Flow provide comprehensive image acquisition modes to support orthopedic and vascular procedures. The system consists of two major components:

    a) The C-arm with X-ray source on one side and the flat panel detector on the opposite side. The C-arm can be angulated in both planes and lifted vertically, shifted to the side, and moved forward/backward by an operator.

    b) The second unit is the image display station with a movable trolley for the image processing and storage system, image display, and documentation. Both units are connected with a cable.

    The main unit is connected to the main power outlet, and the trolley is connected to a data network.

    The following modifications were made to the predicate devices Cios Alpha and Cios Flow. Siemens Medical Solutions USA, Inc. submits this Bundled Traditional 510(k) to request clearance of the subject devices Cios Alpha and Cios Flow (VA31A) for the device modifications made to the predicate devices Cios Alpha and Cios Flow (VA30).

    This 510(k) submission for the subject devices "Cios Alpha" and "Cios Flow" with software version VA31A covers the following categories of modifications relative to the predicate devices:

    1. Software updated from VA30 to VA31A to support the following software features:
       A. Updated InstantLink with Extended NXS Interface
    2. Updated Collimator
    3. New optional flat detector Trixell Pixium 3131SOD with IGZO (Indium Gallium Zinc Oxide) technology
    4. Updated FLC imaging system with new PC hardware; updated the High Performance Graphic Card on the Apphost PC
    5. Updated Eaton UPS 5P 850i G2 as successor of the UPS 5P 850i due to obsolescence
    6. The Cios Alpha is also known as "Cios Alpha.neo"; the Cios Flow is also known as "Cios Flow.neo"
    AI/ML Overview

    The provided 510(k) clearance letter details modifications to an existing fluoroscopic X-ray system, Cios Alpha and Cios Flow, specifically focusing on software updates and hardware changes (e.g., a new flat detector).

    However, the provided text does not contain explicit acceptance criteria tables for performance metrics (such as image quality, diagnostic accuracy, sensitivity, specificity, or AUC) or the results of a statistically powered, pre-specified study proving the device meets these criteria in a comparative effectiveness setting (e.g., MRMC study).

    The document primarily focuses on bench testing, software validation, and compliance with recognized standards to demonstrate the substantial equivalence of the modified device to its predicate. It states that "All test results met all acceptance criteria" for software modifications and that a "Clinical Cadaver Report" describes a study assessing whether the subjective image quality of a new flat-panel detector is non-inferior. This suggests that acceptance criteria were established internally for these tests, but they are not detailed in the provided document.

    Therefore, many of the requested details about acceptance criteria, study design, and performance metrics for clinical effectiveness are not present in this 510(k) clearance letter summary. The document's purpose is to justify substantial equivalence based on safety, hardware/software changes, and compliance with standards, rather than proving enhanced clinical effectiveness through a comparative study.

    Here's an attempt to answer based on the available information, noting what is not provided:


    Acceptance Criteria and Device Performance

    No explicit quantitative acceptance criteria table for clinical performance (e.g., diagnostic accuracy metrics like sensitivity, specificity, AUC) is provided in the document. The document discusses "acceptance criteria" in the context of:

    • Software Validation: "The testing results show that all the software specifications have met the acceptance criteria." (Page 14)
    • Non-clinical Testing: "All test results met all acceptance criteria." (Page 10)
    • Clinical Cadaver Report (Subjective Image Quality): The IGZO detector was considered "non-inferior (equal or better) concerning the subjective image quality for four anatomical regions that have been investigated in the ortho-trauma setting." (Page 14) This implies a qualitative acceptance criterion of non-inferiority for subjective image quality, but no numerical thresholds are given.

    Since no specific performance metrics with numerical acceptance criteria are provided for clinical use, a table demonstrating reported device performance against such criteria cannot be created from this text. The document refers broadly to testing results meeting "acceptance criteria" but does not define them publicly.
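    For illustration only, a non-inferiority comparison of paired subjective image-quality scores could be framed as in the sketch below. The scores, the 0.5-point margin, and the point-estimate check are all assumptions made for the example; the submission does not disclose the actual scoring scale, margin, or statistical method.

```python
# Illustrative only: one way a non-inferiority comparison of paired subjective
# image-quality scores could be framed. Scores, margin, and method are
# hypothetical; the submission does not disclose its actual analysis.
import statistics

# Hypothetical per-case quality scores (e.g., a 1-5 scale) for the same cases
# read with the new (IGZO) detector and the reference (a-Si) detector.
igzo_scores = [4, 5, 4, 4, 5, 3, 4, 5]
asi_scores  = [4, 4, 4, 5, 4, 3, 4, 4]

margin = 0.5  # assumed non-inferiority margin on the mean paired difference

diffs = [new - ref for new, ref in zip(igzo_scores, asi_scores)]
mean_diff = statistics.mean(diffs)

# Non-inferior if the new detector scores no worse than the reference by more
# than the margin. A full analysis would use a confidence bound, not just the
# point estimate shown here.
non_inferior = mean_diff > -margin
print(f"mean paired difference = {mean_diff:.2f}, non-inferior: {non_inferior}")
```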

    Study Details Proving Device Meets Acceptance Criteria

    The primary "study" mentioned for clinical relevance is a Clinical Cadaver Report.

    1. Sample Size and Data Provenance:

      • Test Set Sample Size: Not specified for the Clinical Cadaver Report.
      • Data Provenance: The study was a "Clinical Cadaver Report." This implies an experimental, non-human, pre-clinical study. The country of origin is not specified but given the manufacturing site in Germany, it's possible the testing was conducted there or at Siemens facilities elsewhere. It is inherently prospective as it's a pre-market development activity.
    2. Number of Experts and Qualifications:

      • Number of Experts: Not specified.
      • Qualifications: Not specified.
    3. Adjudication Method:

      • Adjudication Method: Not specified. Given it was a "subjective image quality" assessment, it would likely involve multiple readers, but the method (e.g., 2+1, 3+1) is not disclosed.
    4. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

      • Was an MRMC study done? The document does not indicate that a formal MRMC comparative effectiveness study was done to show human readers improve with AI vs. without AI assistance. The "Clinical Cadaver Report" focused on the subjective image quality of the new detector, not human performance with AI. The device described primarily appears to be an imaging system, not an AI-assisted diagnostic tool that would typically undergo MRMC studies for improved human interpretation.
    5. Standalone Performance:

      • Was a standalone (algorithm only without human-in-the-loop performance) done? Not explicitly stated in the context of clinical performance. The "software functional, verification, and System validation testing" (Page 11) and "software validation data" (Page 14) refer to the algorithm's internal performance against specifications, not its standalone diagnostic accuracy on clinical images.
    6. Type of Ground Truth Used:

      • Ground Truth for Clinical Cadaver Report: In the context of "subjective image quality," the "ground truth" would be the consensus assessment of the evaluating experts regarding the quality of the images generated by the new IGZO detector compared to the a-Si detector. It is not pathology or outcomes data.
    7. Training Set (if applicable for AI/Software components):

      • Sample Size for Training Set: The document does not mention an AI component that would require a distinct "training set" in the common understanding of machine learning. The "software" referred to is control software for the X-ray system, not a diagnostic AI algorithm.
    8. Ground Truth for Training Set:

      • How ground truth was established for training set: Not applicable, as there's no indication of machine learning model training. The software modifications are described as updates to system control, interfaces, and hardware support.

    In summary: The provided 510(k) clearance letter demonstrates that the modified Cios Alpha and Cios Flow systems meet regulatory requirements for substantial equivalence, primarily through non-clinical testing, compliance with safety standards, and software validation against internal acceptance criteria. A "Clinical Cadaver Report" assessed the subjective image quality of a new detector, finding it non-inferior. However, the document does not contain the specific details of clinical performance acceptance criteria, sample sizes for such studies, or a multi-reader comparative effectiveness study as would be seen for AI-enabled diagnostic tools.


    K Number
    K243884
    Device Name
    TAVIPILOT
    Manufacturer
    Date Cleared
    2025-07-07

    (201 days)

    Product Code
    Regulation Number
    892.1650
    Reference & Predicate Devices
    Predicate For
    N/A
    Why did this record match?
    510k Summary Text (Full-text Search) :

    75013
    FRANCE

    Re: K243884
    Trade/Device Name: TAVIPILOT
    Regulation Number: 21 CFR 892.1650
    CTO
    Date of Preparation: July 7, 2025
    Trade Name: TAVIPILOT
    Regulation: 21 CFR 892.1650
    5684 PC Best The Netherlands |

    Trade Name: HeartNavigator Release 2.0
    Regulation: 21 CFR 892.1650
    Classification name: Image-intensified fluoroscopic x-ray system
    Classification regulation: 21 CFR 892.1650

    AI/ML | SaMD | IVD (In Vitro Diagnostic) | Therapeutic | Diagnostic | is PCCP Authorized | Third Party | Expedited Review
    Intended Use

    TAVIPILOT is an intra-operative software which provides real-time fluoroscopy detection, tracking and marking of the Non-Coronary Cusp and the prosthetic valve, to allow optimal guidance for precise positioning of the prosthetic valve, according to the planning phase, for TAVI/TAVR (transcatheter aortic valve implantation/replacement) procedures. The guidance provided by TAVIPILOT is not intended to substitute the cardiac surgeon's or the interventional cardiologist's judgment and analysis of the patient's condition.

    The device is only intended for adults (i.e., 21 years and older).

    Contra-indications:

    • Patients who have already undergone a TAVI/TAVR or SAVR (surgical aortic valve replacement)
    • Patients diagnosed with aortic insufficiency
    • Patients for whom the main access for the TAVI/TAVR catheter is not femoral
    • Patients who have a non-tricuspid native valve
    • Patients who have a permanent Pacemaker implant or temporary Pacemaker within 2 cm from the aortic root, other than the pacing guidewire
    • Patients who have thoracic surgical implants
    • Patients who are not adults
    Device Description

    TAVIPILOT is an intra-operative software which provides real-time fluoroscopy detection, tracking and marking of the Non-coronary Cusp and the prosthetic valve, to allow optimal guidance for precise positioning of the prosthetic valve, according to the planning phase, for TAVI/TAVR (transcatheter aortic valve implantation/replacement) procedures.

    The guidance provided by the TAVIPILOT is not intended to substitute the cardiac surgeon's or the interventional cardiologist's judgment and analysis of the patient's condition.

    The TAVIPILOT software tool is intended to be used in combination with FDA cleared X-ray systems to assist cardiac surgeons and interventional cardiologists with the treatment of structural heart diseases using minimal invasive interventional techniques for which TAVI/TAVR is indicated.

    In addition to conventional live fluoroscopy, TAVIPILOT provides the user with tools to guide the procedure using a 2D projection of the aortic root-related landmarks and the transcatheter aortic valve overlaid on the 2D X-ray image data from the FDA-cleared X-ray systems.

    During the Live phase, the subject device provides anatomical detection and tracking of the aortic root-related landmarks and the transcatheter aortic valve, which are overlaid in 2D on the 2D fluoroscopy X-ray image data using a trained AI/ML model.

    TAVIPILOT does not change or influence the TAVI procedure.

    The main operating principle of TAVIPILOT consists of the following SW workflow:

    1. Preparing for Live task
    2. Live Task
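    As a schematic illustration of the kind of live workflow described above (detect, track, and overlay landmarks on each fluoroscopy frame), the sketch below shows how such a loop could be organized. All names (LandmarkModel, run_live_overlay, the dummy frames) are hypothetical placeholders and do not represent TAVIPILOT's actual architecture or model interface.

```python
# Schematic sketch of a live detect-track-overlay loop for fluoroscopy guidance.
# All names are hypothetical placeholders; this is not TAVIPILOT's actual design.
from dataclasses import dataclass

@dataclass
class Landmarks:
    ncc_xy: tuple   # Non-Coronary Cusp position in image coordinates (x, y)
    tav_xy: tuple   # transcatheter aortic valve position (x, y)

class LandmarkModel:
    """Stand-in for a trained detection/tracking model."""
    def predict(self, frame) -> Landmarks:
        # A real model would infer positions from the fluoroscopy frame.
        return Landmarks(ncc_xy=(256.0, 240.0), tav_xy=(260.0, 250.0))

def run_live_overlay(frames, model):
    """For each incoming frame, detect landmarks and yield them for display."""
    for frame in frames:
        landmarks = model.predict(frame)   # detection + tracking step
        yield frame, landmarks             # a display layer would draw the markers

# Usage with dummy frame identifiers:
for frame_id, lm in run_live_overlay(frames=[0, 1, 2], model=LandmarkModel()):
    print(f"frame {frame_id}: NCC at {lm.ncc_xy}, TAV at {lm.tav_xy}")
```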
    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device meets those criteria, based on the provided FDA 510(k) clearance letter for TAVIPILOT:


    Acceptance Criteria and Device Performance Study for TAVIPILOT

    The TAVIPILOT device, an intra-operative software utilizing AI/ML for real-time fluoroscopy detection, tracking, and marking of the Non-Coronary Cusp (NCC) and transcatheter aortic valve (TAV), was validated to demonstrate its performance and substantial equivalence to a predicate device.

    1. Table of Acceptance Criteria and Reported Device Performance

    | Parameter | Acceptance Criteria | Reported Device Performance |
    | --- | --- | --- |
    | NCC Detection, Tracking, and Marking Accuracy | NCC detected, tracked, and marked within ≤ 2 mm | NCC detected, tracked, and marked within ≤ 2 mm in 100% of all patients tested, with statistical significance. |
    | TAV Detection, Tracking, and Marking Accuracy | TAV detected, tracked, and marked within ≤ 1 mm | TAV detected, tracked, and marked within ≤ 1 mm in 100% of all patients tested, with statistical significance. |
    | Accuracy in Contrasted/Non-contrasted Images | Accuracy maintained in both contrasted and non-contrasted images. | Accuracy was obtained in both contrasted and non-contrasted images. |
    | Comparison to Predicate Device (NCC) | Equivalent or better detection, tracking, and marking of the NCC compared to the predicate device. | TAVIPILOT showed equivalent or better detection, tracking, and marking of the NCC compared to the predicate device in all patients. |
    | Compatibility/Interoperability with C-arm X-ray Devices | Compatible and interoperable with FDA-cleared GE, Philips, and Siemens C-arm X-ray devices meeting specified requirements. | TAVIPILOT was validated for compatibility and interoperability with FDA-cleared C-arm X-ray devices (using data from GE, Philips, and Siemens devices) and confirmed compatible. |
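    The two accuracy rows above amount to a per-patient check that every NCC marking error is at most 2 mm and every TAV marking error is at most 1 mm. The sketch below illustrates that check on hypothetical error values; the submission reports only the pass/fail outcome, not the underlying per-patient numbers.

```python
# Illustrative check of the per-patient accuracy criteria in the table above:
# NCC marking error <= 2 mm and TAV marking error <= 1 mm for every patient.
# The error values are hypothetical; the submission reports only the outcome.
ncc_errors_mm = [1.2, 0.8, 1.9, 1.5]   # assumed max NCC marking error per patient
tav_errors_mm = [0.4, 0.9, 0.7, 0.6]   # assumed max TAV marking error per patient

NCC_LIMIT_MM = 2.0
TAV_LIMIT_MM = 1.0

ncc_pass = all(err <= NCC_LIMIT_MM for err in ncc_errors_mm)
tav_pass = all(err <= TAV_LIMIT_MM for err in tav_errors_mm)

print(f"NCC criterion met in 100% of patients: {ncc_pass}")
print(f"TAV criterion met in 100% of patients: {tav_pass}")
```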

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: "a representative and statistically supported number of patients, representative of both EU and US TAVI populations, including gender and ethnicity considerations." (Specific number not provided, but stated to be statistically supported and representative).
    • Data Provenance: The patients/data were "representative of both EU and US TAVI populations, including gender and ethnicity considerations," implying acquisition from both regions. The document does not explicitly state whether the data was collected retrospectively or prospectively.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    • Number of Experts: Not explicitly stated; the phrasing ("Ground truth was performed by board certified experts") implies more than one expert.
    • Qualifications of Experts: "board certified experts with substantial experience with relevant clinical tasks, thus ensuring quality annotations."

    4. Adjudication Method for the Test Set

    The adjudication method is not explicitly mentioned. It only states that "Ground truth was performed by board certified experts." This could imply single-reader, multiple-reader consensus, or other methods, but no specific method like 2+1 or 3+1 is detailed.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • MRMC Study Done: No. The provided document details a standalone performance test of the algorithm's accuracy against ground truth, and a comparison to a predicate device but without human readers. The study focuses on the AI's direct detection, tracking, and marking performance rather than its impact on human reader performance.
    • Effect Size of Human Readers Improvement: Not applicable, as no MRMC study with human-in-the-loop was reported.

    6. Standalone (Algorithm Only) Performance Study

    • Standalone Study Done: Yes. The "NCC and TAV detection, tracking and marking validation testing" and "Comparison to Predicate Device testing" sections describe the algorithm's performance directly against ground truth, independent of human readers. This represents a standalone (algorithm-only) performance evaluation.

    7. Type of Ground Truth Used

    • Type of Ground Truth: Expert Consensus. "Ground truth was performed by board certified experts with substantial experience with relevant clinical tasks, thus ensuring quality annotations."

    8. Sample Size for the Training Set

    The document does not provide information regarding the sample size used for the training set. It only mentions the use of "trained AI/ML model" and refers to the validation test set.

    9. How the Ground Truth for the Training Set Was Established

    The document does not explicitly state how the ground truth for the training set was established. It only refers to the training of the AI/ML model. However, given that "Ground truth was performed by board certified experts" for the test set, it is plausible that a similar method (expert annotation) was used for the training data, but this is not confirmed in the text.
