
Search Results

Found 6 results

510(k) Data Aggregation

    K Number
    K223490
    Date Cleared
    2023-03-21

    (120 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Reference Devices :

    K181403, K110834, K081985, K041521, K092639

    Intended Use

    FlightPlan for Embolization is a post processing software package that helps the analysis of 3D X-ray angiography images. Its output is intended to be used by physicians as an adjunct means to help visualize vasculature during the planning phase of embolization procedures. FlightPlan for Embolization is not intended to be used during therapy delivery.

    The output includes segmented vasculature, and selective display of proximal vessels from a reference point determined by the user. User-defined data from the 3D X-ray angiography images may be exported for use during the guidance phase of the procedure. The injection points should be confirmed independently of FlightPlan for Embolization prior to therapy delivery.

    Device Description

    FlightPlan for Embolization is a post-processing, software-only application using 3D X-ray angiography images (CBCT) as input. It helps clinicians visualize vasculature to aid in the planning of endovascular embolization procedures throughout the body.

    A new option, called AI Segmentation, was developed from modifications to the predicate device, GE HealthCare's FlightPlan for Embolization [K193261]. It includes two new algorithms. This AI Segmentation option is what triggered this 510(k) submission.

    The software processes 3D X-ray angiography images (CBCT) acquired from GE HealthCare's interventional X-ray system [K181403], operates on GEHC's Advantage Workstation (AW) [K110834] and AW Server (AWS) [K081985] platforms, and is an extension to GE HealthCare's Volume Viewer application [K041521].

    FlightPlan for Embolization is intended to be used during the planning phase of embolization procedures.

    The primary features/functions of the proposed software are:

    • Semi-automatic segmentation of vasculature from a starting point determined by the user, when AI Segmentation option is not activated;
    • Automatic segmentation of vasculature powered by a deep-learning algorithm, when AI Segmentation option is activated;
    • Automatic definition of the root point powered by a deep-learning algorithm, when AI Segmentation option is activated;
    • Selective display (Live Tracking) of proximal vessels from a point determined by the user's cursor;
    • Ability to segment part of the selected vasculature;
    • Ability to mark points of interest (POI) to store cursor position(s);
    • Save results and optionally export them to other applications such as GEHC's Vision Applications [K092639] for 3D road-mapping.
    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study details for the GE Medical Systems SCS's FlightPlan for Embolization device, based on the provided text:

    Acceptance Criteria and Device Performance

    Feature / Algorithm | Acceptance Criteria | Reported Device Performance
    Vessel Extraction | 90% success rate | 93.7% success rate
    Root Definition | 90% success rate | 95.2% success rate
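
Purely as an illustration of the pass/fail logic behind the table above: the 90% thresholds, the reported rates, and the 207-scan test set come from the submission, but the helper function below and the implied success count (~194 of 207 scans for vessel extraction) are assumptions, not details from the 510(k).

```python
def meets_criterion(successes: int, total: int, threshold: float) -> bool:
    """Return True if the observed success rate meets the acceptance threshold."""
    return successes / total >= threshold

# Vessel extraction: 93.7% of 207 scans is roughly 194 successes (assumed count).
print(meets_criterion(194, 207, 0.90))  # True: 93.7% observed >= 90% criterion
```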

    Study Details

    1. Sample Size Used for the Test Set and Data Provenance:

    • Test Set Sample Size: 207 contrast-injected CBCT scans, each from a unique patient.
    • Data Provenance: The scans were acquired during the planning of embolization procedures from GE HealthCare's interventional X-ray system. The text indicates that these were from "clinical sites" and were "representative of the intended population" but does not specify countries of origin. The study appears to be retrospective, using existing scans.

    2. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications:

    • Vessel Extraction: 3 board-certified radiologists.
    • Root Definition: 1 GEHC advanced application specialist.

    3. Adjudication Method for the Test Set:

    • Vessel Extraction: Consensus of 3 board-certified radiologists. (Implies a qualitative agreement, not a specific quantitative method like 2+1).
    • Root Definition: The acceptable area was manually defined by the annotator (the GEHC advanced application specialist).

    4. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done:

    • No, an MRMC comparative effectiveness study was not explicitly described in terms of human readers improving with AI vs. without AI assistance. The non-clinical testing focused on the algorithms' performance against ground truth and the clinical assessment used a Likert scale to evaluate the proposed device with the AI option, rather than a direct comparison of human reader performance with and without AI.

    5. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done:

    • Yes, a standalone performance evaluation was conducted for both the vessel extraction and root definition algorithms. The reported success rates of 93.7% and 95.2% are measures of the algorithms' performance against established ground truth without a human in the loop for the primary performance metrics.

    6. The Type of Ground Truth Used:

    • Vessel Extraction: Expert consensus (3 board-certified radiologists).
    • Root Definition: Manual definition by an expert (GEHC advanced application specialist), defining an "acceptable area."

    7. The Sample Size for the Training Set:

    • The document states that "contrast injected CBCT scans acquired from GE HealthCare's interventional X-ray system [K181403] were used for designing and qualifying the algorithms." However, it does not specify the sample size for the training set. It only mentions that a test set of 207 scans was "reserved, segregated, and used to evaluate both algorithms."

    8. How the Ground Truth for the Training Set Was Established:

    • The document does not explicitly state how the ground truth for the training set was established. It only details the ground truth establishment for the test set.

    K Number
    K210807
    Date Cleared
    2021-10-22

    (219 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Reference Devices :

    [K110834], AW Server [K081985], GE's Volume Viewer application [K041521], GE's Vision application [K092639]

    Intended Use

    FlightPlan for Liver is a post processing software package that helps the analysis of 3D X-ray angiography images of the liver. Its output is intended as an adjunct means to help visualize vasculature and identify arteries leading to the vicinity of hypervascular lesions in the liver. This adjunct information may be used by physicians to aid them in their evaluation of hepatic arterial anatomy during the planning phase of embolization procedures.

    Device Description

    FlightPlan for Liver with the Parenchyma Analysis option is a post-processing, software-only application using 3D X-ray angiography images (CBCT) as input. It helps physicians visualize and analyze vasculature to aid in the planning of endovascular embolization procedures in the liver. It was developed from modifications to the predicate device, GE's FlightPlan for Liver [K121200], including the addition of two new algorithms supporting the Parenchyma Analysis option. The Parenchyma Analysis option is what triggered this 510(k) submission. The subject device also includes a feature, Live Tracking, that was cleared in the reference device, GE's FlightPlan for Embolization. The software operates on GE's Advantage Workstation [K110834] platform and AW Server [K081985] platform and is an extension to GE's Volume Viewer application [K041521].

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study information for FlightPlan for Liver, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document states that "The test results of both of the algorithms met their predefined acceptance criteria" for the deep learning-based Liver Segmentation algorithm and the non-deep learning Virtual Parenchyma Visualization (VPV) algorithm. However, the specific quantitative acceptance criteria and their corresponding reported performance values are not explicitly detailed in the provided text.

    The clinical assessment also "demonstrated that the proposed device FlightPlan for Liver with the Parenchyma Analysis option met its predefined acceptance criteria," but again, the specifics are not provided.

    Therefore, a table cannot be fully constructed without this missing information.

    Missing Information:

    • Specific quantitative acceptance criteria for Liver Segmentation algorithm (e.g., Dice score, IoU, boundary distance).
    • Specific quantitative reported performance for Liver Segmentation algorithm.
    • Specific quantitative acceptance criteria for VPV algorithm (e.g., accuracy of distal liver region estimation).
    • Specific quantitative reported performance for VPV algorithm.
    • Specific quantitative or qualitative acceptance criteria for the clinical assessment using the 5-point Likert scale.
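
The submission does not disclose which segmentation metrics were used. As a hedged sketch only, Dice and IoU (two metrics named above as common examples) can be computed on flat binary masks like this; the mask values here are illustrative:

```python
def dice(a, b):
    """Dice coefficient of two equal-length binary (0/1) masks."""
    inter = sum(x & y for x, y in zip(a, b))
    return 2 * inter / (sum(a) + sum(b))

def iou(a, b):
    """Intersection-over-union of two equal-length binary (0/1) masks."""
    inter = sum(x & y for x, y in zip(a, b))
    union = sum(x | y for x, y in zip(a, b))
    return inter / union

pred  = [1, 1, 1, 0, 0]  # hypothetical algorithm output
truth = [1, 1, 0, 0, 0]  # hypothetical expert annotation
print(dice(pred, truth))  # 2*2/(3+2) = 0.8
print(iou(pred, truth))   # 2/3
```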

    2. Sample Sizes and Data Provenance

    For the Deep Learning Liver Segmentation Algorithm:

    • Test Set Sample Size: Not explicitly stated, but derived from a "database of contrast injected CBCT liver acquisitions."
    • Data Provenance: Clinical sites in the USA and France. The document does not state whether the data was collected retrospectively or prospectively, only that it was used for training and testing.

    For the Non-Deep Learning Virtual Parenchyma Visualization (VPV) Algorithm:

    • Test Set Sample Size: "a test set of proximal CBCT acquisitions." The exact number is not provided.
    • Data Provenance: From the USA and France.

    For the Clinical Testing:

    • Test Set Sample Size: "A sample of 3D X-ray angiography image pairs." The exact number is not provided.
    • Data Provenance: From France and the USA.

    3. Number of Experts and Qualifications for Ground Truth

    For the Deep Learning Liver Segmentation Algorithm:

    • Number of Experts: Not specified.
    • Qualifications: Not specified. The ground truth method itself (how segmentation was established for training and testing) is not fully detailed beyond using existing "contrast injected CBCT liver acquisitions."

    For the Non-Deep Learning Virtual Parenchyma Visualization (VPV) Algorithm:

    • Number of Experts: Not specified.
    • Qualifications: Not specified.
    • Ground Truth Basis: "selective contrast injected CBCT exams from same patients." It's implied that these were expert-interpreted or based on a recognized clinical standard, but the specific expert involvement is not detailed.

    For the Clinical Testing:

    • Number of Experts: Not specified.
    • Qualifications: "interventional radiologists." No experience level (e.g., years of experience) is provided.

    4. Adjudication Method for the Test Set

    The document does not explicitly mention an adjudication method (like 2+1 or 3+1 consensus) for any of the test sets.
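
Although no adjudication method is reported, the "2+1" convention mentioned above can be sketched as follows: two primary readers label each case, and a third reader adjudicates only when they disagree. The function name and labels are hypothetical, not from the submission.

```python
def adjudicate_2_plus_1(reader1: str, reader2: str, reader3: str) -> str:
    """Return the consensus label under a 2+1 adjudication scheme."""
    if reader1 == reader2:
        return reader1  # primary readers agree; adjudicator not consulted
    return reader3      # disagreement resolved by the third reader

print(adjudicate_2_plus_1("positive", "positive", "negative"))  # positive
print(adjudicate_2_plus_1("positive", "negative", "negative"))  # negative
```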

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • The document describes a "Clinical Testing" section where "interventional radiologists using a 5-point Likert scale" assessed image pairs. This suggests a reader study.
    • However, it does not explicitly state that it was an MRMC comparative effectiveness study comparing human readers with AI assistance vs. without AI assistance. The study assessed whether the device "met its predefined acceptance criteria and helps physicians visualize and analyze... and aids in the planning..." which seems to be an evaluation of the device's utility rather than a direct comparison of reader performance with and without the AI.
    • Therefore, no effect size of human readers improving with AI vs. without AI assistance is reported.

    6. Standalone (Algorithm Only) Performance

    • Yes, standalone performance was done for both new algorithms.
      • The "deep learning-based Liver Segmentation algorithm" was evaluated, although specific performance metrics are not given.
      • The "non-deep learning Virtual Parenchyma Visualization algorithm's performance was evaluated [...] compared to selective contrast injected CBCT exams from same patients used as the ground truth."
    • This indicates that the algorithms themselves were tested for their inherent performance.

    7. Type of Ground Truth Used

    For the Deep Learning Liver Segmentation Algorithm:

    • Based on "contrast injected CBCT liver acquisitions." The precise method for establishing the "correct" segmentation (e.g., manual expert tracing, pathology, outcome data) is not detailed. It's implicitly clinical data.

    For the Virtual Parenchyma Visualization (VPV) Algorithm:

    • "selective contrast injected CBCT exams from same patients used as the ground truth." This implies a gold standard of directly observed, selective angiography, which is a clinical reference.

    For the Clinical Testing:

    • The "ground truth" here is the perception and evaluation of interventional radiologists using a 5-point Likert scale regarding the device's utility ("helps physicians visualize," "aids in the planning"). This is a form of expert consensus/subjective assessment of utility.

    8. Sample Size for the Training Set

    For the Deep Learning Liver Segmentation Algorithm:

    • "a database of contrast injected CBCT liver acquisitions from clinical sites in the USA and France was used for the training and testing." The exact sample size for training is not specified, only that a database was used.

    For the Virtual Parenchyma Visualization (VPV) Algorithm:

    • The VPV algorithm is described as "non-deep learning," so it would not have a traditional "training set" in the same way a deep learning model would. It likely relies on predefined anatomical/physiological models or rules.

    9. How Ground Truth for the Training Set Was Established

    For the Deep Learning Liver Segmentation Algorithm:

    • The document states "a database of contrast injected CBCT liver acquisitions [...] was used for the training." However, it does not explicitly detail how the ground truth labels (i.e., the correct liver segmentations) were established for these training images. This is a critical piece of information for a deep learning model. It's usually through expert manual annotation for segmentation tasks.

    K Number
    K193261
    Date Cleared
    2020-01-24

    (59 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Reference Devices :

    K110834, K081985, K041521, K092639

    Intended Use

    FlightPlan for Embolization is a post processing software package that helps the analysis of 3D X-ray angiography images. Its output is intended to be used by physicians as an adjunct means to help visualize vasculature during the planning phase of embolization procedures. FlightPlan for Embolization is not intended to be used during therapy delivery.

    The output includes segmented vasculature, and selective display of proximal vessel and distal vessels from a reference point determined by the user. User-defined data from the 3D X-ray angiography images may be exported for use during the guidance phase of the procedure. The injection points should be confirmed independently of FlightPlan for Embolization prior to therapy delivery.

    Device Description

    FlightPlan for Embolization is a post processing software application which operates on the Advantage Workstation (AW) [K110834] platform and AW Server [K081985] platform. It is an extension to the Volume Viewer application [K041521] modified from FlightPlan for Liver (K121200) and is designed for processing 3D X-ray angiography images to help visualize vasculature

    The primary features of the software are: semi-automatic segmentation of vascular tree from a starting point determined by the user; selective display (Live Tracking) of proximal vessel and distal vessels from a point determined by the user's cursor; ability to segment part of the vasculature; ability to mark points of interest (POI) to store cursor position; save results and export to other applications such as Vision Applications [K092639] for 3D road-mapping.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) summary for FlightPlan for Embolization:

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided document does not explicitly list quantitative acceptance criteria in a dedicated section for "acceptance criteria." However, it describes validation activities that implicitly define the performance considered acceptable. Based on the "Summary of Non-Clinical Tests" and "Summary of Clinical Tests," the following can be inferred:

    Acceptance Criteria (Inferred from Validation Activities) and Reported Device Performance

    Non-Clinical:

    • Criterion: Capability to automatically segment and selectively display vascular structures from a single user-defined point.
      Performance: "FlightPlan for Embolization algorithms' capability to automatically segment and selectively display vascular structures from a single user defined point using a database of XACT exams... established satisfactory quality for FlightPlan for Embolization usage."

    • Criterion: Compliance with NEMA PS 3.1 - 3.20 (2016) Digital Imaging and Communications in Medicine (DICOM) Set (Radiology) standard.
      Performance: "The FlightPlan for Embolization complies with NEMA PS 3.1 - 3.20 (2016) Digital Imaging and Communications in Medicine (DICOM) Set (Radiology) standard."

    • Criterion: Adherence to design control testing per GE's quality system (21 CFR 820 and ISO 13485).
      Performance: "FlightPlan for Embolization has successfully completed the required design control testing per GE's quality system. FlightPlan for Embolization was designed and will be manufactured under the Quality System Regulations of 21 CFR 820 and ISO 13485," including:
      • Risk Analysis
      • Requirements Reviews
      • Design Reviews
      • Performance testing (Verification, Validation)
      • Safety testing (Verification)

    Clinical:

    • Criterion: Ability of the device to help physicians in the analysis of 3D X-ray angiography images and in the planning of embolization procedures, including the selection of embolization injection points.
      Performance: "The assessment demonstrated that the proposed device (FlightPlan for Embolization) helps physicians in the analysis of 3D X-ray angiography images and in the planning of embolization procedures, including the selection of embolization injection points."

    2. Sample Size Used for the Test Set and Data Provenance

    • Non-Clinical Test Set:

      • Sample Size: A "database of XACT exams." The specific number of cases is not provided.
      • Data Provenance: Not explicitly stated, but clinical scenarios are considered "representative" and include "consideration of acquisition parameters, image quality and anatomy." It can be inferred that these are existing clinical data, likely retrospective.
    • Clinical Test Set:

      • Sample Size: "A sample of 3D X-ray angiography images representative of clinical practice." The specific number of cases is not provided.
      • Data Provenance: Not explicitly stated, but described as "representative of clinical practice" and "most common anatomic regions where embolization procedures are performed." It can be inferred that these are retrospective clinical cases.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Clinical Test Set:
      • Number of Experts: Four
      • Qualifications: "board certified interventional radiologists." No information on years of experience is provided.

    4. Adjudication Method for the Test Set

    • Clinical Test Set: The document states that the assessment was done "using a 5-point Likert scale." This implies individual scoring by each radiologist, but it does not specify an adjudication method (e.g., 2+1 or 3+1 consensus). It appears that individual assessments were performed and then aggregated or analyzed.
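
The submission reports the Likert-scale assessment but not the scoring rule. Purely as an assumption for illustration, one common approach is to test whether the mean 5-point score across readers meets a pre-defined cutoff; the cutoff of 4.0 and the example scores below are hypothetical.

```python
from statistics import mean

def likert_passes(scores, cutoff=4.0):
    """True if the mean 5-point Likert score meets the cutoff (assumed rule)."""
    return mean(scores) >= cutoff

# Four hypothetical readers' scores for one case
print(likert_passes([5, 4, 4, 5]))  # True: mean 4.5 >= 4.0
```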

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, and Effect Size

    • No, an MRMC comparative effectiveness study was not explicitly stated to have been done in the traditional sense of comparing human readers with AI vs. without AI assistance.
      • The clinical study described is an assessment of the device's helpfulness by radiologists, rather than a comparative study measuring improvement in diagnostic accuracy or efficiency for humans using the AI vs. not using it. The statement "demonstrated that the proposed device (FlightPlan for Embolization) helps physicians" is an outcome of an assessment, not a quantitative effect size from an MRMC study comparing assisted vs. unassisted performance.

    6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) was done

    • Yes, a form of standalone performance was assessed in the "Summary of Non-Clinical Tests." The "Engineering...validated FlightPlan for Embolization algorithms' capability to automatically segment and selectively display vascular structures..." using a database of XACT exams. This implies an evaluation of the algorithm's output against some reference (likely manual segmentations or ground truth established by experts) without direct human interaction at the time of assessment.

    7. The Type of Ground Truth Used

    • Non-Clinical Test Set: The ground truth for the non-clinical validation of segmentation and display capabilities is implicitly based on expert-defined correct segmentations or "satisfactory quality" as determined by engineering validation. The document does not explicitly state "expert consensus" or "pathology" for this.
    • Clinical Test Set: The ground truth for the clinical assessment relies on the judgment of "four board certified interventional radiologists" using a 5-point Likert scale to determine if the device "helps physicians." This is a form of expert assessment/opinion as ground truth regarding the device's utility/helpfulness.

    8. The Sample Size for the Training Set

    • Not provided. The document does not disclose information about the training data used for the FlightPlan for Embolization algorithms.

    9. How the Ground Truth for the Training Set was Established

    • Not provided. Since the training set details are not mentioned, how its ground truth was established is also not available in this document.

    K Number
    K130069
    Manufacturer
    Date Cleared
    2013-04-05

    (84 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Reference Devices :

    K092639, K110834, K113034, K041521, K111200

    Intended Use

    Innova EPVision 2.0 software application is intended to enable users to load 3D datasets and overlay and register in real time these 3D datasets with radioscopic or radiographic images of the same anatomy in order to support catheter/device guidance during interventional procedures.

    Innova EPVision 2.0 software application is intended to enable users to load, overlay and register in real time 3D datasets with radioscopic or radiographic images of the same anatomy. Electrophysiological signal information is imported and used to color-code these 3D datasets in order to support catheter/device during cardiac electrophysiology guidance interventional procedures.

    Device Description

    Innova EPVision 2.0 is the new version of the Innova EPVision software, which is part of the Innova Vision Applications [K092639] software. Innova EPVision 2.0, like all Innova Vision Applications image processing algorithms, is executed on a hardware platform called the Advantage Workstation (AW) [K110834].
    It can perform the following functions:

    • Superimpose the segmented DICOM 3D XA, CT, or MR dataset on a radioscopic or radiographic image of the same anatomy, obtained on an Innova Fluoroscopic X-ray system [K113034].
    • Register the segmented DICOM 3D XA, CT, or MR dataset with radioscopic or radiographic images obtained on an Innova Fluoroscopic X-ray system for interventional procedures.
    • Image stabilization features such as ECG-gated display or motion tracking in the image.
    • Capability to load planning data deposited on the 3D model in Volume Viewer [K041521], such as 3D landmarks and ablation lines, and to display them on the 3D-2D fused image to support the physician during procedures.
    • Marking points of interest of different sizes and colors during procedures.
    • The frequently used functions are also available from tableside on the Innova Central Touch Screen to provide an efficient workflow during interventional procedures.

    Innova EPVision 2.0 can additionally perform the following functions:

    • Import electrophysiology (EP) data digitized and processed on the CardioLab system [K111200] and use them to color-code EP recording points on the 3D model of the visualized anatomy, in order to support catheter/device guidance during cardiac electrophysiology interventional procedures.
    • Catheter tip detection to help locate the catheter tip on the 2D X-ray image. The user can modify or correct the automatically proposed tip location at any time.
    • A 3D side viewer allowing the user to freely rotate the 3D model independently of the fluoro image and the gantry angulation.

    Innova EPVision 2.0, like Innova EPVision, targets clinical indications for interventional cardiology procedures, in particular cardiac electrophysiology procedures.

    AI/ML Overview

    Here's an analysis of the provided text regarding the acceptance criteria and study for the Innova EPVision 2.0 device:

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided 510(k) summary does not explicitly state specific quantitative acceptance criteria or detailed performance metrics for the Innova EPVision 2.0. Instead, it relies on demonstrating substantial equivalence to a predicate device (K092639, Innova Vision Applications) and adherence to various standards (NEMA PS 3.1 - 3.20 (2011) DICOM Set, IEC 60601-1-4 (2000), IEC 62304 (2006), IEC 62366 (2007)).

    The "reported device performance" is described in terms of verification and validation activities designed to ensure the device works as required and meets user needs and intended use.

    Summary of Device Performance (as described):

    Acceptance Criteria (Implied by Regulatory Compliance) | Reported Device Performance
    Compliance with NEMA PS 3.1 - 3.20 (DICOM Set) | Verified as compliant
    Compliance with IEC 60601-1-4 (Medical Electrical Equipment) | Verified as compliant
    Compliance with IEC 62304 (Software Life Cycle Processes) | Verified as compliant
    Compliance with IEC 62366 (Usability Engineering) | Verified as compliant
    Risk Management effectiveness | Implemented and tested
    Requirements Reviews completion | Performed
    Design Reviews completion | Performed
    Performance and Safety testing (Verification) | Performed at Unit, Integration, and System levels to check functionality and risk mitigation
    Final Acceptance Testing (Validation) | Performed to ensure user needs, intended use, risk mitigation, and labeling are effective
    Substantial Equivalence to Predicate Device (K092639) | Concluded to be safe, effective, and substantially equivalent
    No new significant indications for use | Application works within the predicate's intended use/indications
    No new issues of safety and effectiveness | Verified through testing
    No new fundamental scientific technology | Application uses the same fundamental technology as the predicate

    2. Sample size used for the test set and the data provenance

    The document does not specify a "test set" in the context of clinical data for performance evaluation. Instead, it discusses verification and validation tests as part of the software development lifecycle. These tests would involve a variety of inputs and scenarios, but the exact sample sizes (e.g., number of test cases, number of images) for these engineering tests are not provided.

    There is no mention of data provenance (e.g., country of origin, retrospective/prospective) because no clinical studies were deemed necessary or performed to support substantial equivalence.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    This information is not applicable or provided as no clinical studies were performed that would require expert-established ground truth on a test set. The ground truth for the engineering verification and validation tests would be the expected behavior of the software as defined by its requirements and design specifications, not clinical expert consensus.

    4. Adjudication method for the test set

    This information is not applicable or provided since no clinical validation study involving adjudication of a test set was performed.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    No MRMC comparative effectiveness study was done. The submission explicitly states: "The subject of this premarket submission, Innova EPVision 2.0, did not require clinical studies to support substantial equivalence." Therefore, there is no reported effect size of human readers improving with or without AI assistance.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    The document describes the device as a software application that "enables users to load, overlay and register... in order to support catheter/device guidance." It also mentions "Catheter tip detection to help locate the catheter tip on the 2D X-ray image. The user can modify or correct the automatically proposed tip location anytime." This clearly indicates a human-in-the-loop design.

    While the "Catheter tip detection" component could have an internal standalone performance evaluation during development, the submission does not report a standalone (algorithm only) performance study in the context of a regulatory submission outcome. The overall device is intended to be used with a human interventionalist.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    For the engineering verification and validation tests, the ground truth would be the expected software behavior and functionality based on design input (system requirements and specifications). For example, a test for "Superimpose the segmented DICOM 3D XA, CT, MR dataset on radioscopic or radiographic image" would have a ground truth defined by the correctness of the overlay as per the design specifications. There is no mention of clinical ground truth (expert consensus, pathology, or outcomes data) being used for the regulatory submission's performance evaluation because no clinical studies were conducted.

    8. The sample size for the training set

    This information is not provided. The document makes no mention of machine learning model training or a training set. The device is described as inheriting functions and employing the "same fundamental scientific technology" as its predicate, implying a rule-based or traditional image processing approach rather than a machine learning approach requiring distinct training data that would be relevant to this 510(k) submission.

    9. How the ground truth for the training set was established

    This information is not provided as no training set is mentioned in the context of this 510(k) submission.


    K Number
    K121772
    Date Cleared
    2013-03-21

    (279 days)

    Product Code
    Regulation Number
    892.1650
    Reference & Predicate Devices
    Reference Devices :

    K040254, K092639

    Intended Use

    The Allura 3D-RA Rel. 6 is intended to assist physicians in analyzing two dimensional X-ray images by creating three dimensional views from sets of two dimensional images created during rotational angiographic runs. The Allura 3D-RA Rel. 6 is intended to assist in the diagnosis and treatment of endovascular diseases, for example, stenosis, aneurysms, and arteriovenous malformations. The Allura 3D-RA Rel.6 also supports measurement of lesion dimensions and anatomical distances.

    The 3D Roadmap Rel. 1 is an extension of Allura 3D-RA, which provides image guidance by superimposing live fluoroscopic images on a 3D reconstruction of the vessel anatomy to assist in catheter maneuvering.

    The MR-CT Roadmap Rel. 1 is an extension of Allura 3D-RA, which provides image guidance by superimposing live fluoroscopic images on a 3D reconstruction of the vessel anatomy to assist in catheter maneuvering.

    Device Description

    The device descriptions of the 3D Interventional Tools software medical devices that are the subject of this 510(k) submission are as follows:

    Allura 3D-RA Rel. 6 software medical device (Allura 3D-RA Rel. 6)
    The Allura 3D-RA Rel. 6 is a software product (Interventional Tool) intended to provide high-speed and high resolution 3D visualization of vessels and bones anatomy. Allura 3D-RA is intended to be used in combination with an Allura X-ray system.

    The Allura 3D-RA Rel. 6 generates a 3D reconstruction from 2D X-ray images to visualize vascular anatomy, and helps to identify vascular pathologies from a single contrast-enhanced rotational angiogram. It can be used during any angiography procedure and can cover any anatomical area including cerebral, abdominal, and peripheral. It also allows for 3D visualization of the spine.

    3D Roadmap Rel. 1 software medical device (3D Roadmap Rel. 1)
    The 3D Roadmap Rel. 1 is a software product (Interventional Tool) intended to assist the physician during complex interventions by providing live 3D image guidance for navigating endovascular devices through vascular structures anywhere in the body. 3D Roadmap is intended to be used in combination with an Allura X-ray system.

    The 3D Roadmap Rel. 1 overlays live 2D fluoroscopic images on a 3D reconstruction of the vessel tree processed by Allura 3D-RA Rel. 6 and therefore assists the physician in catheter maneuvering. It can be used during any endovascular intervention and can cover any anatomy, including cerebral, abdominal, and peripheral vasculature.

    MR-CT Roadmap Rel. 1 software medical device (MR-CT Roadmap Rel. 1)
    The MR-CT Roadmap Rel.1 is a software product intended to provide live 3D image guidance for navigating endovascular devices through vascular structures anywhere in the body, reusing segmented MRA or CTA data that has been acquired previously. The MR-CT Roadmap Rel.1 is intended to be used in combination with an Allura X-ray system.

    The MR-CT Roadmap Rel.1 overlays live 2D fluoroscopic images on a previously acquired MRA or CTA volume, which is registered by the Allura 3D-RA Rel. 6 with the X-ray system using a rotational scan, and therefore assists the physician in catheter maneuvering. It can be used during any endovascular intervention and can cover any anatomy, including for example cerebral, abdominal, and peripheral vasculature.

    AI/ML Overview

    The 3D Interventional Tools software medical devices (Allura 3D-RA Rel. 6, 3D Roadmap Rel. 1, and MR-CT Roadmap Rel. 1) underwent verification and validation testing to ensure compliance with intended use, claims, requirement specifications, and risk management requirements.

    Acceptance Criteria and Reported Device Performance

    • Aneurysm tool success rate for segmentation: average success rate of segmentation of 73%, with a 95% confidence interval of (60%, 100%]
    • Aneurysm detection capabilities: successfully used for aneurysms between 2 and 22 mm
    • Limitation: not suitable for aneurysms located on crossing vessels
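    A confidence interval this wide is characteristic of a very small test set. As an illustration of how such an interval is derived from a success count alone, the sketch below computes a Wilson score interval for a binomial proportion. The sample size of 15 (11 successes ≈ 73%) is purely hypothetical, since the submission does not disclose the actual number of datasets, and the Wilson method is an assumption; the submission does not say which interval was used.

    ```python
    import math

    def wilson_interval(successes, n, z=1.96):
        """Wilson score interval for a binomial proportion (95% by default)."""
        p = successes / n
        denom = 1 + z**2 / n
        center = (p + z**2 / (2 * n)) / denom
        half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
        return center - half, center + half

    # Hypothetical: 11 successful segmentations out of 15 cases (~73%)
    lo, hi = wilson_interval(11, 15)
    print(f"73% success, 95% CI ≈ ({lo:.0%}, {hi:.0%})")  # roughly (48%, 89%)
    ```

    The point of the sketch is qualitative: with only a handful of validation cases, any reported point estimate carries an interval spanning tens of percentage points, consistent with the (60%, 100%] interval reported here.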

    Study Details

    1. Sample size used for the test set and data provenance:

      • The study was a retrospective validation study performed on 3DRA clinical datasets with aneurysms. The exact number of datasets is not specified beyond "clinical datasets." The provenance (country of origin) of the data is not mentioned.
    2. Number of experts used to establish the ground truth for the test set and their qualifications:

      • Three experts were used to validate the performance of the aneurysm tool.
      • One of the experts was an interventional neuroradiologist; the qualifications of the other two experts are not explicitly stated.
    3. Adjudication method for the test set:

      • The adjudication method is not explicitly described beyond stating that the performance was "validated by three experts."
    4. Multi-reader multi-case (MRMC) comparative effectiveness study:

      • No MRMC comparative effectiveness study is mentioned, nor is any effect size of human readers improving with versus without AI assistance reported. The study focuses on the standalone performance of the aneurysm tool.
    5. Standalone (algorithm only) performance:

      • Yes, a standalone performance study was done for the aneurysm tool. The reported success rate of segmentation (73%) is the algorithm's performance.
    6. Type of ground truth used:

      • The ground truth was established by three experts, one of whom was an interventional neuroradiologist, based on 3DRA clinical datasets. This suggests an expert consensus/interpretation of the clinical data.
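    The document does not describe how the three experts' readings were combined into a single reference standard. A common convention for expert-consensus ground truth in segmentation studies is a per-voxel majority vote (2-of-3); the sketch below illustrates that idea. The array shapes and the voting rule are illustrative assumptions, not anything stated in the submission.

    ```python
    import numpy as np

    def consensus_mask(expert_masks):
        """Combine expert segmentation masks by per-voxel majority vote."""
        stack = np.stack(expert_masks).astype(int)
        needed = len(expert_masks) // 2 + 1   # e.g. 2 of 3 experts must agree
        return stack.sum(axis=0) >= needed

    # Three hypothetical expert masks over the same 2x3 region
    a = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
    b = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
    c = np.array([[1, 1, 1], [0, 0, 1]], dtype=bool)
    print(consensus_mask([a, b, c]).astype(int))  # [[1 1 0]
                                                  #  [0 1 1]]
    ```

    Alternatives such as STAPLE or adjudication by a senior reader are equally plausible here; the submission simply does not say.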
    7. Sample size for the training set:

      • The document does not specify the sample size used for the training set.
    8. How the ground truth for the training set was established:

      • The document does not specify how the ground truth for the training set was established. It only discusses the validation study and its ground truth.

    K Number
    K101311
    Device Name
    EP NAVIGATOR R3
    Date Cleared
    2010-09-30

    (142 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Reference Devices :

    K092639

    Intended Use

    Medical purpose: EP navigator is intended to provide navigation support for intra-cardiac instruments, such as catheters and guidewires, during the interventional treatment of heart rhythm disorders, by overlaying acquired and segmented 3D anatomical image data over live fluoroscopic X-ray images of the same anatomy.

    EP navigator is intended to enable users to segment previously acquired 3D CT or other datasets and overlay and register these 3D segmented data sets with live fluoroscopy X-ray images of the same anatomy in order to support catheter/device navigation. The 3D segmented data set can be displayed with a color map annotation received from an external source.

    Device Description

    EP navigator's image processing algorithms are executed on a PC-based hardware platform, which can perform the following functions:

    • segment previously acquired DICOM 3D CT or other image data (the acquisition of the image data from a rotational angiogram is known as 3D atriography (3D ATG))
    • superimpose the segmented 3D CT or other dataset on a live fluoroscopic X-ray image of the same anatomy, obtained on a Philips Allura Xper FD angiography X-ray system
    • register the segmented 3D CT or other data with live fluoroscopic X-ray images obtained on a Philips Allura Xper FD angiography X-ray system for specified procedures
    • display the 3D segmented data set with a color map annotation received from an external source
    • position visual markers on the 3D volume
    • visualize the inside of the 3D volume (EndoView)
    • control EP Logix functions via certain buttons on the user interface
    • transmit visual marker positions to EP Logix
    • receive color map information from EP Logix
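    The superimposition step described above amounts to rendering the registered 3D dataset in the geometry of the live fluoroscopic view. Reduced to its core, that is a rigid registration transform followed by a pinhole-style projection onto the detector plane. The sketch below shows only that geometric core; the matrices, values, and function name are illustrative stand-ins, not Philips' implementation or actual Allura calibration data.

    ```python
    import numpy as np

    def project_points(points_3d, R, t, K):
        """Project registered 3D points onto the 2D detector plane
        (simple pinhole model; ignores distortion and gantry geometry)."""
        cam = points_3d @ R.T + t      # rigid registration into the X-ray frame
        uv = cam @ K.T                 # apply detector intrinsics
        return uv[:, :2] / uv[:, 2:3]  # perspective divide

    # Illustrative geometry (hypothetical values)
    K = np.array([[1000.0, 0.0, 512.0],
                  [0.0, 1000.0, 512.0],
                  [0.0, 0.0, 1.0]])
    R = np.eye(3)                      # identity rotation for the sketch
    t = np.array([0.0, 0.0, 1000.0])   # volume placed 1 m from the source

    pts = np.array([[0.0, 0.0, 0.0],   # hypothetical vessel samples (mm)
                    [10.0, 5.0, 20.0]])
    print(project_points(pts, R, t, K))  # first point lands at (512, 512)
    ```

    In the real system, R and t come from the registration of the 3D dataset with the X-ray system, and the projection must track the live gantry position so the overlay stays aligned as the C-arm moves.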
    AI/ML Overview

    The provided document is a 510(k) Summary for the Philips EP navigator device. It describes the device, its intended use, indications for use, and a summary of testing to demonstrate substantial equivalence to predicate devices. However, the document does not contain the specific details required to fully address your request regarding acceptance criteria and the comprehensive study that proves the device meets those criteria.

    Here's what can be extracted and what is missing:

    1. Table of Acceptance Criteria and Reported Device Performance:

    The document mentions "EP navigator 3 complies with standards as detailed in annex 009 of this premarket submission" and "Non-clinical verification and validation tests were performed relative to the requirement specifications and risk management results". It also states that a "Clinical evaluation was performed to show safety and effectiveness of EP-Navigator in the intended clinical environment."

    However, the document does not provide a table specifying the acceptance criteria (e.g., accuracy metrics, specific performance thresholds) nor does it report detailed device performance against those criteria. It merely states that the device "complies" and that "clinical evaluation" showed "safety and effectiveness."

    2. Sample Size Used for the Test Set and Data Provenance:

    The document makes a general statement: "Corresponding clinical evaluation report and test results are included in this submission."

    However, it does not specify the sample size for any test set or the data provenance (e.g., country of origin, retrospective/prospective nature of data).

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:

    The document mentions a "Clinical evaluation" but does not specify the number of experts, their qualifications, or how ground truth was established for any test set.

    4. Adjudication Method for the Test Set:

    No information on adjudication methods (e.g., 2+1, 3+1, none) is provided.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

    The document does not mention any MRMC study or the effect size of human readers improving with AI assistance. It focuses on the device's standalone capabilities and its intended use for navigation support.

    6. Standalone (Algorithm Only) Performance:

    The document states that "EP navigator image software processing algorithms are executed on a PC based hardware platform," and its functions involve "segment previously acquired DICOM 3D CT or other image data," "superimpose the segmented 3D CT or other dataset on a live fluoroscopic X-ray image," and "register the segmented 3D CT or other data with live fluoroscopic X-ray images."

    This strongly implies that the algorithm has standalone performance for these image processing and registration tasks. However, the document does not explicitly present a dedicated standalone performance study with specific metrics, but rather refers to "non-clinical verification and validation tests performed relative to the requirement specifications."

    7. Type of Ground Truth Used:

    While "clinical evaluation" and "non-clinical verification and validation tests" are mentioned, the specific type of ground truth used (e.g., expert consensus, pathology, outcomes data) for validating the device's performance is not detailed.

    8. Sample Size for the Training Set:

    The document does not provide any information regarding a training set or its sample size. Given that it is an "image software processing algorithm," it likely involves some form of training or calibration, but this is not discussed in the summary.

    9. How the Ground Truth for the Training Set Was Established:

    As no training set information is provided, there is no description of how ground truth for a training set was established.

    In summary:

    The 510(k) Summary focuses on demonstrating substantial equivalence to predicate devices based on intended use, technological characteristics, and safety risks. It indicates that clinical evaluation and non-clinical verification/validation were performed, but it lacks the specific quantitative details about acceptance criteria, detailed study designs (test set size, provenance, expert involvement, adjudication), and training set information that you've requested. This level of detail is typically found in the full submission referenced by the summary, not in the summary itself.

