Search Results

Found 11 results

510(k) Data Aggregation

    K Number: K243446
    Device Name: 3DXR
    Date Cleared: 2025-02-25 (111 days)
    Regulation Number: 892.1650
    Reference & Predicate Devices
    Reference Devices: K110834, K232344, K041521

    Intended Use

    3DXR is an AW application intended to perform three-dimensional (3D) reconstruction of images acquired with a 3D Acquisition mode of the interventional X-ray system, for visualization under Volume Viewer. The 3D Acquisition modes are intended for imaging vessels, bones and soft tissues as well as other internal body structures. The reconstructed 3D volumes assist the physician in diagnosis, surgical planning, interventional procedures and treatment follow-up.

    Device Description

    3DXR is a post-processing, software-only application that runs on the Advantage Workstation (AW) platform [K110834] and performs 3D reconstruction of CBCT 3D acquisition images (input) acquired from the fixed interventional X-ray system [K181403, K232344]. The reconstructed 3D volume (output) is visualized under the Volume Viewer application [K041521]. The proposed 3DXR is a modification of the predicate 3DXR [included and cleared in K181403]. A new option, CleaRecon DL, based on deep-learning (DL) technology, has been added to the subject 3DXR application.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:

    1. A table of acceptance criteria and the reported device performance

    | Acceptance Criteria (from Engineering Bench Testing) | Reported Device Performance |
    | --- | --- |
    | Quantitative (Image Analysis) | |
    | Reduction of Mean Absolute Error (MAE) between images with and without CleaRecon DL | A statistically significant reduction in MAE was observed between the two samples. |
    | Increase of Structural Similarity Index Measure (SSIM) between images with and without CleaRecon DL | A statistically significant increase in SSIM was observed between the two samples. |
    | Reduction of MAE (phantoms) | Reduction of MAE was observed. |
    | Reduction of Standard Deviation (SD) (phantoms) | Reduction of SD was observed. |
    | Qualitative (Clinical Evaluation) | |
    | CleaRecon DL removes streak artifacts and does not introduce other artifacts | Clinicians confirmed that CleaRecon DL removes streak artifacts and, in 489 reviews, did not identify any structure or pattern hidden or artificially created by CleaRecon DL when compared to the original reconstruction. |
    | CleaRecon DL provides a clearer image and impacts confidence in image interpretation | In 98% of cases, the CBCT images reconstructed with the CleaRecon DL option were evaluated as clearer than the conventional CBCT images. Clinicians assessed how it impacts their confidence in image interpretation (specific quantitative impact on confidence not provided, but generally positive due to clearer images). |
    | CleaRecon DL does not introduce artificial structures and/or hide important anatomical structures | Within 489 reviews, clinicians did not identify any structure or pattern hidden or artificially created by CleaRecon DL when compared to the original reconstruction. |
    | No degradation of image quality or other concerns related to safety and performance (overall) | Engineering bench testing passed predefined acceptance criteria, demonstrated performance, and "no degradation of image quality, nor other concerns related to safety and performance were observed." Clinical evaluation results "met the predefined acceptance criteria and substantiated the performance claims." |
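
    For orientation, here is a minimal sketch of how the two quantitative bench metrics above are typically computed for an image pair, using NumPy and scikit-image. It is an illustration only: the submission does not disclose the actual implementation, the reference images, or the statistical tests applied.

    ```python
    import numpy as np
    from skimage.metrics import structural_similarity

    def mae(a: np.ndarray, b: np.ndarray) -> float:
        """Mean Absolute Error between two same-shape images or volumes."""
        return float(np.mean(np.abs(a.astype(np.float64) - b.astype(np.float64))))

    def compare_pair(corrected: np.ndarray, reference: np.ndarray) -> dict:
        """Compute the two bench metrics for one (corrected, reference) pair."""
        data_range = float(reference.max() - reference.min())
        return {
            "mae": mae(corrected, reference),
            "ssim": structural_similarity(corrected, reference, data_range=data_range),
        }
    ```

    In a paired design like the one described, such per-pair values would then be compared across the two samples with a paired statistical test.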

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    • Engineering Bench Testing:
      • Test Set 1 (Image Pairs): Two samples (number not specified beyond "two samples").
      • Test Set 2 (Phantoms): Not specified beyond "phantoms."
    • Clinical Image Quality Evaluation (Retrospective):
      • Sample Size: 110 independent exams, each from a unique patient.
      • Data Provenance: Retrospectively collected from 13 clinical sites.
        • 80 patients from the US
        • 26 patients from France
        • 4 patients from Japan
      • Patient Population: Adult patients (pediatrics excluded) undergoing interventional procedures, with no restriction on age (within the adult range) or sex/gender (except for prostate cases).

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    • Engineering Bench Testing: Not applicable; ground truth was intrinsic (images with and without applying CleaRecon DL, or phantoms with reference).
    • Clinical Image Quality Evaluation:
      • Number of Experts: 13 clinicians.
      • Qualifications: "Clinicians" (specific specialties or years of experience are not mentioned, but their role implies expertise in image interpretation for interventional procedures).

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    • Clinical Image Quality Evaluation: Each of the 110 exams (with and without CleaRecon DL) was evaluated independently at least 3 times by the recruited clinicians, yielding 490 pairs of clinicians' evaluations (the stated minimum would be 110 × 3 = 330, so exams averaged roughly 4.5 reviews each). This indicates a multi-reader, independent review with subsequent aggregation of results, rather than formal consensus-based adjudication such as 2+1 or 3+1 for individual cases; the "489 reviews" and "98% of cases" figures reflect aggregated findings.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance

    • An MRMC-style study was implicitly conducted as part of the clinical image quality evaluation: 13 clinicians reviewed 110 cases, and each case was reviewed independently at least 3 times, so both the multi-reader and multi-case aspects were present.
    • Effect Size: The study focused on the impact of the image quality on interpretation, rather than a direct measure of human reader performance improvement in diagnostic accuracy or efficiency with and without AI assistance. The results indicated:
      • "In 98% of the cases, the CBCT images reconstructed with CleaRecon DL option are evaluated as clearer than the conventional CBCT images."
      • Clinicians were asked to assess "how it impacts their confidence in image interpretation," but the specific effect size or improvement in confidence wasn't quantified.
      • No hidden or artificially created structures were identified, indicating perceived safety and reliability.
      • Therefore, while it showed a significant improvement in perceived image clarity, it did not provide a quantitative effect size for human reader improvement (e.g., in AUC, sensitivity, or specificity) with AI assistance.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done

    • Yes, a standalone performance evaluation of the algorithm was done as part of the "Engineering bench testing." This involved:
      • Testing on a segregated test dataset of image pairs (with and without CleaRecon DL) where Mean Absolute Error (MAE) and Structural Similarity Index Measure (SSIM) were computed.
      • Testing on phantoms where MAE and Standard Deviation (SD) were computed relative to a reference (without simulated artifacts).
      • These tests directly assessed the algorithm's capability to reduce artifacts and improve image metrics without human interaction.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    • Engineering Bench Testing:
      • For the image pair comparison, the ground truth was essentially the "ideal" or "less artifact-laden" image derived from the paired comparison. This is a form of reference-based comparison where the output without the artifact or with the artifact corrected is the standard.
      • For phantoms, the ground truth was the known characteristics of the phantom (e.g., absence of artifacts in the reference image).
    • Clinical Image Quality Evaluation:
      • The ground truth was established through expert evaluation/consensus (13 clinicians evaluating side-by-side images). However, it was focused on subjective image quality and the presence/absence of artifacts, rather than ground truth for a specific diagnosis or outcome.

    8. The sample size for the training set

    • The text states: "The CleaRecon DL algorithm was trained and qualified using pairs of images with and without streak artifacts." However, the specific sample size of the training set is not provided.

    9. How the ground truth for the training set was established

    • The ground truth for the training set was established using "pairs of images with and without streak artifacts." This implies that for each image with streak artifacts, there was a corresponding reference image without such artifacts, which allowed the algorithm to learn how to remove them. The method by which these "pairs of images" and their respective "with/without streak artifacts" labels were generated or confirmed is not detailed. It could involve expert labeling, simulation, or other methods.
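
    The training setup described above, supervised learning on paired images with and without streak artifacts, corresponds to a standard image-to-image regression loop. A minimal PyTorch-style sketch under that assumption follows; the network, loss choice, and all names are hypothetical, since the submission gives no architectural or training details.

    ```python
    import torch
    import torch.nn as nn

    # Hypothetical stand-in network; a real system would likely use a U-Net-like model.
    model = nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.Conv2d(16, 1, kernel_size=3, padding=1),
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.L1Loss()  # L1 loss directly penalizes mean absolute error

    def training_step(with_artifacts: torch.Tensor, without_artifacts: torch.Tensor) -> float:
        """One supervised step on a batch of (streaky, clean) image pairs, shape [N, 1, H, W]."""
        optimizer.zero_grad()
        predicted_clean = model(with_artifacts)
        loss = loss_fn(predicted_clean, without_artifacts)
        loss.backward()
        optimizer.step()
        return loss.item()
    ```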

    K Number: K223490
    Date Cleared: 2023-03-21 (120 days)
    Regulation Number: 892.2050
    Reference & Predicate Devices
    Reference Devices: K181403, K110834, K081985, K041521, K092639

    Intended Use

    FlightPlan for Embolization is a post processing software package that helps the analysis of 3D X-ray angiography images. Its output is intended to be used by physicians as an adjunct means to help visualize vasculature during the planning phase of embolization procedures. FlightPlan for Embolization is not intended to be used during therapy delivery.

    The output includes segmented vasculature, and selective display of proximal vessels from a reference point determined by the user. User-defined data from the 3D X-ray angiography images may be exported for use during the guidance phase of the procedure. The injection points should be confirmed independently of FlightPlan for Embolization prior to therapy delivery.

    Device Description

    FlightPlan for Embolization is a post-processing, software-only application using 3D X-ray angiography images (CBCT) as input. It helps clinicians visualize vasculature to aid in the planning of endovascular embolization procedures throughout the body.

    A new option, AI Segmentation, was developed through modifications to the predicate device, GE HealthCare's FlightPlan for Embolization [K193261]. It includes two new algorithms. This AI Segmentation option is what triggered this 510(k) submission.

    The software processes 3D X-ray angiography images (CBCT) acquired from GE HealthCare's interventional X-ray system [K181403], operates on GEHC's Advantage Workstation (AW) [K110834] and AW Server (AWS) [K081985] platforms, and is an extension to GE HealthCare's Volume Viewer application [K041521].

    FlightPlan for Embolization is intended to be used during the planning phase of embolization procedures.

    The primary features/functions of the proposed software are:

    • Semi-automatic segmentation of vasculature from a starting point determined by the user, when the AI Segmentation option is not activated;
    • Automatic segmentation of vasculature powered by a deep-learning algorithm, when the AI Segmentation option is activated;
    • Automatic definition of the root point powered by a deep-learning algorithm, when the AI Segmentation option is activated;
    • Selective display (Live Tracking) of proximal vessels from a point determined by the user's cursor;
    • Ability to segment part of the selected vasculature;
    • Ability to mark points of interest (POI) to store cursor position(s);
    • Save results and optionally export them to other applications such as GEHC's Vision Applications [K092639] for 3D road-mapping.
    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study details for the GE Medical Systems SCS's FlightPlan for Embolization device, based on the provided text:

    Acceptance Criteria and Device Performance

    | Feature / Algorithm | Acceptance Criteria | Reported Device Performance |
    | --- | --- | --- |
    | Vessel Extraction | 90% success rate | 93.7% success rate |
    | Root Definition | 90% success rate | 95.2% success rate |
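
    The submission does not state how the 90% criterion was evaluated statistically. One conventional approach is an exact binomial test of the observed successes against the target rate; the sketch below uses SciPy with a success count back-calculated from the reported vessel-extraction rate (roughly 194 of 207), which is an assumption, not a reported figure.

    ```python
    from scipy.stats import binomtest

    # 93.7% of 207 scans corresponds to roughly 194 successes (back-calculated, not reported).
    result = binomtest(k=194, n=207, p=0.90, alternative="greater")
    print(f"observed rate: {194 / 207:.1%}, one-sided p-value vs. 90% target: {result.pvalue:.4f}")
    ```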

    Study Details

    1. Sample Size Used for the Test Set and Data Provenance:

    • Test Set Sample Size: 207 contrast-injected CBCT scans, each from a unique patient.
    • Data Provenance: The scans were acquired during the planning of embolization procedures from GE HealthCare's interventional X-ray system. The text indicates that these were from "clinical sites" and were "representative of the intended population" but does not specify countries of origin. The study appears to be retrospective, using existing scans.

    2. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications:

    • Vessel Extraction: 3 board-certified radiologists.
    • Root Definition: 1 GEHC advanced application specialist.

    3. Adjudication Method for the Test Set:

    • Vessel Extraction: Consensus of 3 board-certified radiologists. (Implies a qualitative agreement, not a specific quantitative method like 2+1).
    • Root Definition: The acceptable area was manually defined by the annotator (the GEHC advanced application specialist).

    4. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done:

    • No, an MRMC comparative effectiveness study was not explicitly described in terms of human readers improving with AI vs. without AI assistance. The non-clinical testing focused on the algorithms' performance against ground truth and the clinical assessment used a Likert scale to evaluate the proposed device with the AI option, rather than a direct comparison of human reader performance with and without AI.

    5. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done:

    • Yes, a standalone performance evaluation was conducted for both the vessel extraction and root definition algorithms. The reported success rates of 93.7% and 95.2% are measures of the algorithms' performance against established ground truth without a human in the loop for the primary performance metrics.

    6. The Type of Ground Truth Used:

    • Vessel Extraction: Expert consensus (3 board-certified radiologists).
    • Root Definition: Manual definition by an expert (GEHC advanced application specialist), defining an "acceptable area."

    7. The Sample Size for the Training Set:

    • The document states that "contrast injected CBCT scans acquired from GE HealthCare's interventional X-ray system [K181403] were used for designing and qualifying the algorithms." However, it does not specify the sample size for the training set. It only mentions that a test set of 207 scans was "reserved, segregated, and used to evaluate both algorithms."

    8. How the Ground Truth for the Training Set Was Established:

    • The document does not explicitly state how the ground truth for the training set was established. It only details the ground truth establishment for the test set.

    K Number: K210807
    Date Cleared: 2021-10-22 (219 days)
    Regulation Number: 892.2050
    Reference & Predicate Devices
    Reference Devices: GE's FlightPlan for Embolization, GE's Advantage Workstation [K110834], AW Server [K081985], GE's Volume

    Intended Use

    FlightPlan for Liver is a post processing software package that helps the analysis of 3D X-ray angiography images of the liver. Its output is intended as an adjunct means to help visualize vasculature and identify arteries leading to the vicinity of hypervascular lesions in the liver. This adjunct information may be used by physicians to aid them in their evaluation of hepatic arterial anatomy during the planning phase of embolization procedures.

    Device Description

    FlightPlan for Liver with the Parenchyma Analysis option is a post-processing, software-only application using 3D X-ray angiography images (CBCT) as input. It helps physicians visualize and analyze vasculature to aid in the planning of endovascular embolization procedures in the liver. It was developed from modifications to the predicate device, GE's FlightPlan for Liver [K121200], including the addition of two new algorithms supporting the Parenchyma Analysis option; the Parenchyma Analysis option is what triggered this 510(k). The subject device also includes a feature, Live Tracking, that was cleared in the reference device, GE's FlightPlan for Embolization. The software operates on GE's Advantage Workstation [K110834] platform and AW Server [K081985] platform and is an extension to GE's Volume Viewer application [K041521].

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study information for FlightPlan for Liver, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document states that "The test results of both of the algorithms met their predefined acceptance criteria" for the deep learning-based Liver Segmentation algorithm and the non-deep learning Virtual Parenchyma Visualization (VPV) algorithm. However, the specific quantitative acceptance criteria and their corresponding reported performance values are not explicitly detailed in the provided text.

    The clinical assessment also "demonstrated that the proposed device FlightPlan for Liver with the Parenchyma Analysis option met its predefined acceptance criteria," but again, the specifics are not provided.

    Therefore, a table cannot be fully constructed without this missing information.

    Missing Information:

    • Specific quantitative acceptance criteria for the Liver Segmentation algorithm (e.g., Dice score, IoU, boundary distance; the first two are sketched after this list).
    • Specific quantitative reported performance for Liver Segmentation algorithm.
    • Specific quantitative acceptance criteria for VPV algorithm (e.g., accuracy of distal liver region estimation).
    • Specific quantitative reported performance for VPV algorithm.
    • Specific quantitative or qualitative acceptance criteria for the clinical assessment using the 5-point Likert scale.
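
    For reference, the Dice score and IoU named in the first bullet are standard overlap metrics between a predicted segmentation mask and a ground-truth mask. A minimal NumPy sketch is below; this is a generic illustration, not the (undisclosed) method used in the submission.

    ```python
    import numpy as np

    def dice_and_iou(pred: np.ndarray, truth: np.ndarray) -> tuple[float, float]:
        """Overlap metrics between two boolean masks (assumes at least one is non-empty)."""
        pred, truth = pred.astype(bool), truth.astype(bool)
        intersection = np.logical_and(pred, truth).sum()
        dice = 2.0 * intersection / (pred.sum() + truth.sum())
        iou = intersection / np.logical_or(pred, truth).sum()
        return float(dice), float(iou)
    ```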

    2. Sample Sizes and Data Provenance

    For the Deep Learning Liver Segmentation Algorithm:

    • Test Set Sample Size: Not explicitly stated, but derived from a "database of contrast injected CBCT liver acquisitions."
    • Data Provenance: Clinical sites in the USA and France; the data was used for training and testing. Whether collection was retrospective or prospective is not stated.

    For the Non-Deep Learning Virtual Parenchyma Visualization (VPV) Algorithm:

    • Test Set Sample Size: "a test set of proximal CBCT acquisitions." The exact number is not provided.
    • Data Provenance: From the USA and France.

    For the Clinical Testing:

    • Test Set Sample Size: "A sample of 3D X-ray angiography image pairs." The exact number is not provided.
    • Data Provenance: From France and the USA.

    3. Number of Experts and Qualifications for Ground Truth

    For the Deep Learning Liver Segmentation Algorithm:

    • Number of Experts: Not specified.
    • Qualifications: Not specified. The ground truth method itself (how segmentation was established for training and testing) is not fully detailed beyond using existing "contrast injected CBCT liver acquisitions."

    For the Non-Deep Learning Virtual Parenchyma Visualization (VPV) Algorithm:

    • Number of Experts: Not specified.
    • Qualifications: Not specified.
    • Ground Truth Basis: "selective contrast injected CBCT exams from same patients." It's implied that these were expert-interpreted or based on a recognized clinical standard, but the specific expert involvement is not detailed.

    For the Clinical Testing:

    • Number of Experts: Not specified.
    • Qualifications: "interventional radiologists." No experience level (e.g., years of experience) is provided.

    4. Adjudication Method for the Test Set

    The document does not explicitly mention an adjudication method (like 2+1 or 3+1 consensus) for any of the test sets.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • The document describes a "Clinical Testing" section where "interventional radiologists using a 5-point Likert scale" assessed image pairs. This suggests a reader study.
    • However, it does not explicitly state that it was an MRMC comparative effectiveness study comparing human readers with AI assistance vs. without AI assistance. The study assessed whether the device "met its predefined acceptance criteria and helps physicians visualize and analyze... and aids in the planning..." which seems to be an evaluation of the device's utility rather than a direct comparison of reader performance with and without the AI.
    • Therefore, no effect size of human readers improving with AI vs. without AI assistance is reported.

    6. Standalone (Algorithm Only) Performance

    • Yes, standalone performance was done for both new algorithms.
      • The "deep learning-based Liver Segmentation algorithm" was evaluated, although specific performance metrics are not given.
      • The "non-deep learning Virtual Parenchyma Visualization algorithm's performance was evaluated [...] compared to selective contrast injected CBCT exams from same patients used as the ground truth."
    • This indicates that the algorithms themselves were tested for their inherent performance.

    7. Type of Ground Truth Used

    For the Deep Learning Liver Segmentation Algorithm:

    • Based on "contrast injected CBCT liver acquisitions." The precise method for establishing the "correct" segmentation (e.g., manual expert tracing, pathology, outcome data) is not detailed. It's implicitly clinical data.

    For the Virtual Parenchyma Visualization (VPV) Algorithm:

    • "selective contrast injected CBCT exams from same patients used as the ground truth." This implies a gold standard of directly observed, selective angiography, which is a clinical reference.

    For the Clinical Testing:

    • The "ground truth" here is the perception and evaluation of interventional radiologists using a 5-point Likert scale regarding the device's utility ("helps physicians visualize," "aids in the planning"). This is a form of expert consensus/subjective assessment of utility.

    8. Sample Size for the Training Set

    For the Deep Learning Liver Segmentation Algorithm:

    • "a database of contrast injected CBCT liver acquisitions from clinical sites in the USA and France was used for the training and testing." The exact sample size for training is not specified, only that a database was used.

    For the Virtual Parenchyma Visualization (VPV) Algorithm:

    • The VPV algorithm is described as "non-deep learning," so it would not have a traditional "training set" in the same way a deep learning model would. It likely relies on predefined anatomical/physiological models or rules.

    9. How Ground Truth for the Training Set Was Established

    For the Deep Learning Liver Segmentation Algorithm:

    • The document states "a database of contrast injected CBCT liver acquisitions [...] was used for the training." However, it does not explicitly detail how the ground truth labels (i.e., the correct liver segmentations) were established for these training images. This is a critical piece of information for a deep learning model. It's usually through expert manual annotation for segmentation tasks.

    K Number: K193281
    Device Name: Hepatic VCAR
    Date Cleared: 2020-03-20 (114 days)
    Regulation Number: 892.1750
    Reference & Predicate Devices
    Reference Devices: K041521, K110834, K081985

    Intended Use

    Hepatic VCAR is a CT image analysis software package that allows the analysis and visualization of liver CT data derived from DICOM 3.0 compliant CT scans. Hepatic VCAR is designed for the purpose of assessing liver morphology, including liver lesions (provided the lesion has a different CT appearance from the surrounding liver tissue), and their change over time, through automated tools for liver lobe, liver segment and liver lesion segmentation and measurement. It is intended for use by clinicians to process, review, archive, print and distribute liver CT studies.

    This software will assist the user by providing initial 3D segmentation, visualization, and quantitative analysis of liver anatomy. The user has the ability to adjust the contour and confirm the final segmentation.

    Device Description

    Hepatic VCAR is a CT image analysis software package that allows the analysis and visualization of liver CT data derived from DICOM 3.0 compliant CT scans. Hepatic VCAR was designed for the purpose of assessing liver morphology, including liver lesions (provided the lesion has a different CT appearance from the surrounding liver tissue), and their change over time, through automated tools for liver, liver lobe, liver segment and liver lesion segmentation and measurement.

    Hepatic VCAR is a post-processing software medical device built on the Volume Viewer (K041521) platform, and can be deployed on the Advantage Workstation (AW) (K110834) and AW Server (K081985) platforms, CT scanners, and PACS stations, or in the cloud in the future.
    This software will assist the user by providing initial 3D segmentation, vessel analysis, visualization, and quantitative analysis of liver anatomy. The user has the ability to adjust the contour and confirm the final segmentation.
    In the proposed device, two new algorithms utilizing deep learning technology were introduced. One such algorithm segments the liver producing a liver contour editable by the user; another algorithm segments the hepatic artery based on an initial user input point. The hepatic artery segmentation is also editable by the user.

    AI/ML Overview

    The provided text describes the 510(k) summary for Hepatic VCAR, a CT image analysis software package. The submission outlines the device's intended use and the validation performed, particularly highlighting the introduction of two new deep learning algorithms for liver and hepatic artery segmentation.

    Here's an analysis of the acceptance criteria and study proving the device meets them, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document doesn't explicitly define "acceptance criteria" through numerical thresholds for performance metrics. Instead, it states that "Verification and validation including risk mitigations have been executed with results demonstrating Hepatic VCAR met the design inputs and user needs with no unexpected results or risks."

    For the new deep learning algorithms, the performance is described qualitatively:

    | Feature/Algorithm | Acceptance Criteria (Implied) | Reported Device Performance |
    | --- | --- | --- |
    | Liver Segmentation | Produces a liver contour that is editable by the user and is capable of segmentation. | Bench tests show the algorithms performed as expected; demonstrated capability of liver segmentation utilizing the deep learning algorithm. |
    | Hepatic Artery Segmentation | Segments the hepatic artery based on an initial user input point, editable by the user, and capable of segmentation. | Bench tests show the algorithms performed as expected; demonstrated capability of hepatic artery segmentation utilizing the deep learning algorithm. |
    | Overall Software Performance | Meets design inputs and user needs, with no unexpected results or risks. | Verification and validation met design inputs and user needs with no unexpected results or risks. |
    | Usability/Clinical Acceptance | Functionality is clinically acceptable for assisting users in 3D segmentation, visualization, and quantitative analysis. | Assessed by 3 board-certified radiologists using a 5-point Likert scale, demonstrating capability. |

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size: "A representative set of clinical sample images" was used for the clinical assessment. The exact number of cases/images is not specified in the provided text.
    • Data Provenance: The provenance of the data (e.g., country of origin, retrospective or prospective) is not specified.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Number of Experts: For the clinical assessment of the deep learning algorithms, 3 board certified radiologists were used.
    • Qualifications of Experts: They are described as "board certified radiologists." The number of years of experience is not specified.
    • For the "ground truth" used in "bench tests," the text states "ground truth annotated by qualified experts," but the number and specific qualifications of these experts are not explicitly detailed beyond "qualified experts."

    4. Adjudication Method for the Test Set

    The text states that the "representative set of clinical sample images was assessed by 3 board certified radiologists using 5-point Likert scale." It does not specify an explicit adjudication method (e.g., 2+1, 3+1 consensus) for establishing the "ground truth" or assessing the device's performance based on the radiologists' Likert scale ratings. The Likert scale assessment sounds more like an evaluation of clinical acceptability/usability rather than establishing ground truth for quantitative segmentation accuracy.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

    A formal MRMC comparative effectiveness study comparing human readers with AI assistance vs. without AI assistance is not explicitly described in the provided text. The clinical assessment mentioned ("assessed by 3 board certified radiologists using 5-point Likert scale") appears to be an evaluation of the device's capability rather than a direct comparison of human performance with and without AI assistance. Therefore, no effect size for human reader improvement is provided.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    Yes, standalone performance was assessed for the algorithms. The text states:

    • "Bench tests that compare the output of the two new algorithms with ground truth annotated by qualified experts show that the algorithms performed as expected."
      This indicates an evaluation of the algorithm's direct output against an established ground truth before human interaction/adjustment.

    7. The Type of Ground Truth Used

    Based on the document:

    • For the "bench tests" of the new deep learning algorithms, the ground truth was "ground truth annotated by qualified experts." This suggests expert consensus or expert annotation was used.
    • For the clinical assessment by 3 radiologists using a Likert scale, it's more of a qualitative assessment of the device's capability rather than establishing a definitive ground truth for each case.

    8. The Sample Size for the Training Set

    The sample size for the training set (used to train the deep learning algorithms) is not specified in the provided text.

    9. How the Ground Truth for the Training Set Was Established

    The text states that the deep learning algorithms were trained, but it does not explicitly describe how the ground truth for the training set was established. It only mentions that the ground truth for bench tests was "annotated by qualified experts."


    K Number: K193261
    Date Cleared: 2020-01-24 (59 days)
    Regulation Number: 892.2050
    Reference & Predicate Devices
    Reference Devices: K110834, K081985, K041521, K092639

    Intended Use

    FlightPlan for Embolization is a post processing software package that helps the analysis of 3D X-ray angiography images. Its output is intended to be used by physicians as an adjunct means to help visualize vasculature during the planning phase of embolization procedures. FlightPlan for Embolization is not intended to be used during therapy delivery.

    The output includes segmented vasculature, and selective display of proximal vessel and distal vessels from a reference point determined by the user. User-defined data from the 3D X-ray angiography images may be exported for use during the guidance phase of the procedure. The injection points should be confirmed independently of FlightPlan for Embolization prior to therapy delivery.

    Device Description

    FlightPlan for Embolization is a post-processing software application which operates on the Advantage Workstation (AW) [K110834] platform and AW Server [K081985] platform. It is an extension to the Volume Viewer application [K041521], modified from FlightPlan for Liver (K121200), and is designed for processing 3D X-ray angiography images to help visualize vasculature.

    The primary features of the software are: semi-automatic segmentation of vascular tree from a starting point determined by the user; selective display (Live Tracking) of proximal vessel and distal vessels from a point determined by the user's cursor; ability to segment part of the vasculature; ability to mark points of interest (POI) to store cursor position; save results and export to other applications such as Vision Applications [K092639] for 3D road-mapping.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) summary for FlightPlan for Embolization:

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided document does not explicitly list quantitative acceptance criteria in a dedicated section for "acceptance criteria." However, it describes validation activities that implicitly define the performance considered acceptable. Based on the "Summary of Non-Clinical Tests" and "Summary of Clinical Tests," the following can be inferred:

    | Acceptance Criteria (Inferred from Validation Activities) | Reported Device Performance |
    | --- | --- |
    | Non-Clinical: | |
    | Capability to automatically segment and selectively display vascular structures from a single user-defined point. | "FlightPlan for Embolization algorithms' capability to automatically segment and selectively display vascular structures from a single user defined point using a database of XACT exams... established satisfactory quality for FlightPlan for Embolization usage." |
    | Compliance with NEMA PS 3.1 - 3.20 (2016) Digital Imaging and Communications in Medicine (DICOM) Set (Radiology) standard. | "The FlightPlan for Embolization complies with NEMA PS 3.1 - 3.20 (2016) Digital Imaging and Communications in Medicine (DICOM) Set (Radiology) standard." |
    | Adherence to design control testing per GE's quality system (21 CFR 820 and ISO 13485). | "FlightPlan for Embolization has successfully completed the required design control testing per GE's quality system," covering risk analysis, requirements reviews, design reviews, performance testing (verification, validation), and safety testing (verification). FlightPlan for Embolization was designed and will be manufactured under the Quality System Regulations of 21 CFR 820 and ISO 13485. |
    | Clinical: | |
    | Ability of the device to help physicians in the analysis of 3D X-ray angiography images and in the planning of embolization procedures, including the selection of embolization injection points. | "The assessment demonstrated that the proposed device (FlightPlan for Embolization) helps physicians in the analysis of 3D X-ray angiography images and in the planning of embolization procedures, including the selection of embolization injection points." |

    2. Sample Size Used for the Test Set and Data Provenance

    • Non-Clinical Test Set:

      • Sample Size: A "database of XACT exams." The specific number of cases is not provided.
      • Data Provenance: Not explicitly stated, but clinical scenarios are considered "representative" and include "consideration of acquisition parameters, image quality and anatomy." It can be inferred that these are existing clinical data, likely retrospective.
    • Clinical Test Set:

      • Sample Size: "A sample of 3D X-ray angiography images representative of clinical practice." The specific number of cases is not provided.
      • Data Provenance: Not explicitly stated, but described as "representative of clinical practice" and "most common anatomic regions where embolization procedures are performed." It can be inferred that these are retrospective clinical cases.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Clinical Test Set:
      • Number of Experts: Four
      • Qualifications: "board certified interventional radiologists." No information on years of experience is provided.

    4. Adjudication Method for the Test Set

    • Clinical Test Set: The document states that the assessment was done "using a 5-point Likert scale." This implies an individual scoring system for each radiologist, but no adjudication method (e.g., 2+1, 3+1 consensus) is specified; individual assessments appear to have been performed and then aggregated, as sketched below.
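
    A plausible aggregation of such per-reader 5-point Likert scores is sketched below. The numbers are entirely hypothetical, since the submission reports no raw ratings; the point is only to show what "individual assessments, then aggregation" can mean in practice.

    ```python
    import numpy as np

    # Hypothetical ratings: rows = 4 readers, columns = cases, values = 5-point Likert scores.
    ratings = np.array([
        [4, 5, 3, 4],
        [5, 4, 4, 4],
        [4, 4, 3, 5],
        [5, 5, 4, 4],
    ])

    per_case_mean = ratings.mean(axis=0)   # average score per case across readers
    favorable = (ratings >= 4).mean()      # overall fraction of ratings at 4 ("agree") or higher
    print(per_case_mean, f"{favorable:.0%} favorable")
    ```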

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, and Effect Size

    • No, an MRMC comparative effectiveness study was not explicitly stated to have been done in the traditional sense of comparing human readers with AI vs. without AI assistance.
      • The clinical study described is an assessment of the device's helpfulness by radiologists, rather than a comparative study measuring improvement in diagnostic accuracy or efficiency for humans using the AI vs. not using it. The statement "demonstrated that the proposed device (FlightPlan for Embolization) helps physicians" is an outcome of an assessment, not a quantitative effect size from an MRMC study comparing assisted vs. unassisted performance.

    6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) was done

    • Yes, a form of standalone performance was assessed in the "Summary of Non-Clinical Tests." The "Engineering...validated FlightPlan for Embolization algorithms' capability to automatically segment and selectively display vascular structures..." using a database of XACT exams. This implies an evaluation of the algorithm's output against some reference (likely manual segmentations or ground truth established by experts) without direct human interaction at the time of assessment.

    7. The Type of Ground Truth Used

    • Non-Clinical Test Set: The ground truth for the non-clinical validation of segmentation and display capabilities is implicitly based on expert-defined correct segmentations or "satisfactory quality" as determined by engineering validation. The document does not explicitly state "expert consensus" or "pathology" for this.
    • Clinical Test Set: The ground truth for the clinical assessment relies on the judgment of "four board certified interventional radiologists" using a 5-point Likert scale to determine if the device "helps physicians." This is a form of expert assessment/opinion as ground truth regarding the device's utility/helpfulness.

    8. The Sample Size for the Training Set

    • Not provided. The document does not disclose information about the training data used for the FlightPlan for Embolization algorithms.

    9. How the Ground Truth for the Training Set was Established

    • Not provided. Since the training set details are not mentioned, how its ground truth was established is also not available in this document.

    K Number: K163281
    Device Name: FastStroke
    Date Cleared: 2017-01-26 (66 days)
    Regulation Number: 892.1750
    Reference & Predicate Devices
    Reference Devices: K110834, K081985

    Intended Use

    FastStroke is a CT image analysis software package that assists in the analysis and visualization of CT data derived from DICOM 3.0 compliant CT scans. FastStroke is intended for the purpose of displaying vasculature of the head and neck at different time points of enhancement.
    The software will assist the user by providing optimized display settings to enable fast review of the images in synchronized formats, aligning the display of the images to the order of the scans and linking together multiple groups of scans. In addition, the software fuses the vascular information from different time points into a single colorized view. This multiphase information can aid the physician in visualizing the presence or absence of collateral vessels in the brain. Collateral vessel information may aid the physician in the evaluation of stroke patients.

    Device Description

    FastStroke is a CT image analysis software package that assists in the analysis and visualization of CT data derived from DICOM 3.0 compliant CT scans. FastStroke is intended for the purpose of displaying vasculature of the head and neck at different time points of enhancement.
    The software will assist the user by providing optimized display settings to enable fast review of the images in synchronized formats, aligning the display of the images to the order of the scans and linking together multiple groups of scans. In addition, the software fuses the vascular information from different time points into a single colorized view. This multiphase information can aid the physician in visualizing the presence or absence of collateral vessels in the brain. Collateral vessel information may aid the physician in the evaluation of stroke patients.
    The FastStroke device has been tested with DICOM images from Discovery CT750 HD and Revolution CT scanners using multi-phase CT angiography. FastStroke is based on DICOM image-based processing and would apply to any CT device able to acquire data in an equivalent multi-phase CT angiography manner (pursuant to the timing protocols in the user guide).
    FastStroke is also made available as a standalone post processing application on the AW VolumeShare workstation (K110834) and AW Server platform (K081985) that host advanced image processing applications.
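
    Because FastStroke's processing is DICOM image based, a minimal sketch of loading and ordering one CT series with pydicom may help fix ideas. The directory layout, file extension, and sort key are assumptions; the product's actual I/O stack is not described in the submission.

    ```python
    from pathlib import Path

    from pydicom import dcmread

    def load_series(series_dir: str) -> list:
        """Read all DICOM slices in a directory and sort them into acquisition order."""
        slices = [dcmread(p) for p in Path(series_dir).glob("*.dcm")]
        # InstanceNumber is a standard DICOM attribute giving each slice's position in the series.
        slices.sort(key=lambda ds: int(ds.InstanceNumber))
        return slices
    ```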

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the FastStroke device based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided text does not explicitly state quantitative acceptance criteria or direct performance metrics for the FastStroke device in a table format. Instead, it describes the purpose of the study as assessing increased diagnostic capability using Likert scales and concludes that "FastStroke aids the physician in visualizing the presence or absence of collateral vessels in the brain and is a useful tool for neuroradiologists in providing a comprehensive stroke work-up."

    | Acceptance Criteria (Implied) | Reported Device Performance |
    | --- | --- |
    | Aid physicians in visualizing the presence/absence of collateral vessels in the brain. | The study results show that FastStroke aids the physician in visualizing the presence or absence of collateral vessels in the brain. |
    | Be a useful tool for neuroradiologists in a comprehensive stroke work-up. | The study results show that FastStroke is a useful tool for neuroradiologists in providing a comprehensive stroke work-up. |
    | Compliance with NEMA PS 3.1 - 3.20 (2016) DICOM Set (Radiology) standard. | The FastStroke software complies with the NEMA PS 3.1 - 3.20 (2016) Digital Imaging and Communications in Medicine (DICOM) Set (Radiology) standard. |
    | Employ the same fundamental scientific technology as the predicate device. | The FastStroke software employs the same fundamental scientific technology as its predicate device. |
    | Use equivalent CT DICOM image data input requirements as the predicate device. | The FastStroke software uses equivalent CT DICOM image data input requirements. |
    | Have equivalent display, formatting, archiving, and visualization technologies compared to the predicate device. | It has equivalent display, formatting, archiving and visualization technologies compared to the predicate device. |
    | Utilize thresholding and fusion similar to the predicate device. | FastStroke utilizes thresholding and fusion similar to the predicate (implied). |
    | Performance substantially equivalent to the predicate device. | GE Healthcare considers the FastStroke software application to be as safe and as effective as, and its performance substantially equivalent to, the predicate device. |

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: The sample size for the clinical evaluation (test set) is not explicitly stated in the provided document. It only mentions "a retrospective clinical evaluation was conducted."
    • Data Provenance: The study was a retrospective clinical evaluation. The country of origin of the data is not specified.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications

    • Number of Experts: Three experts were used.
    • Qualifications of Experts: They were described as "board certified neuroradiologists who were considered experts."

    4. Adjudication Method for the Test Set

    The document does not explicitly state the adjudication method used for the clinical evaluation. It only mentions that the primary endpoint was assessed by three experts using multiple 5-point Likert Scales.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size

    • A Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not explicitly stated to be performed to measure improvement with AI vs. without AI assistance. The study described was a clinical evaluation by neuroradiologists assessing the device's aid in visualization. While it involved multiple readers, it wasn't framed as a direct comparison of human readers with and without AI assistance to quantify an "effect size" in reader performance.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    • A standalone performance evaluation of the algorithm without human-in-the-loop was not explicitly described or quantified in the provided text as part of the clinical study. The device is described as "assisting the user" and "aiding the physician," indicating a human-in-the-loop design.

    7. Type of Ground Truth Used

    • The ground truth for the clinical evaluation study was established through expert consensus/interpretation by the three board-certified neuroradiologists. The study assessed the device's ability to "aid the physician in visualizing the presence or absence of collateral vessels," implying that the experts' assessment of visualization aided by the device served as the "truth" for the study's purpose.

    8. Sample Size for the Training Set

    • The document does not provide any information about the sample size used for the training set of the FastStroke software.

    9. How the Ground Truth for the Training Set Was Established

    • The document does not provide any information on how the ground truth for the training set (if any) was established. It primarily focuses on the clinical evaluation used for substantial equivalence.

    K Number: K152352
    Date Cleared: 2016-01-20 (153 days)
    Regulation Number: 892.2050
    Reference & Predicate Devices
    Reference Devices: K110834, K041521

    Intended Use

    Vision 2, TrackVision 2 and HeartVision 2 software applications are intended to enable users to load 3D datasets and overlay and register in real time these 3D datasets with radioscopic or radiographic images of the same anatomy in order to support catheter/device guidance during interventional procedures.

    The Stereo 3D option enables physicians to visualize needles, points, and segments in a 3D model/space using a stereo acquisition of radioscopic or radiographic images at a significantly lower dose than use of a full cone beam CT acquisition. This information is intended to assist the physician during interventional procedures.

    Device Description

    Vision Applications (K092639) includes Vision 2, TrackVision 2 and HeartVision 2 applications. Vision Applications can load and dynamically fuse in real-time live 2D X-ray images from the X-ray system with 3D models from X-Ray (DICOM 3D XA), CT or MR system.

    Stereo 3D is a new option and the subject of this submission. The Stereo 3D option is designed to be used with the Vision 2 and TrackVision 2 applications which are part of Vision Applications. The Stereo 3D option enables the user to reconstruct 3D objects from radioscopic or radiographic images.

    The Stereo 3D option is intended to provide an alternative to intra-operative CBCT (cone beam CT) usually performed for the same purpose: to localize needles and markers within the 3D anatomy. The Stereo 3D option provides a method to reconstruct 3D contrasted objects (points and segments) from a pair of 2D X-ray images, e.g., fluoroscopic images acquired from different C-arm positions (two different projections). The reconstructed 3D objects are then fused in 3D space with the 3D model used for the fusion of the X-ray images.
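
    The submission does not disclose the reconstruction algorithm. Geometrically, recovering a 3D point from the same landmark seen in two calibrated projections is classically solved by linear (DLT) triangulation; the NumPy sketch below illustrates that standard formulation, under the assumption that calibrated 3x4 projection matrices are available for both C-arm positions.

    ```python
    import numpy as np

    def triangulate(P1: np.ndarray, P2: np.ndarray, x1: np.ndarray, x2: np.ndarray) -> np.ndarray:
        """Linear (DLT) triangulation of one 3D point from two views.

        P1, P2: 3x4 projection matrices for the two C-arm positions.
        x1, x2: (u, v) pixel coordinates of the same landmark in each image.
        """
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, vt = np.linalg.svd(A)      # least-squares null vector of A
        X = vt[-1]
        return X[:3] / X[3]              # convert from homogeneous coordinates
    ```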

    Stereo 3D has a significantly guided workflow, to support clear and easy use of the reconstruction procedure. The workflow contains the following 4 high-level steps:
    1. Image acquisition and registration adjustment
    2. Automatic or manual object identification
    3. Quality assessment of the 3D reconstruction
    4. Display of the reconstructed point(s) and segment(s) on a cross-section of the 3D model.
    The second step (object identification) can be done manually or automatically:

    Manual point(s) or segment(s) identification:
    After the acquisition and registration of the two X-ray images acquired at two different C-arm positions, the user manually selects points on the two X-ray images that correspond to the object to reconstruct (e.g., endograft markers and needles).

    Automatic mode for needles (only with TrackVision 2):
    The user first selects a planned trajectory with a needle inserted. After the acquisition of the two X-ray images and the registration adjustment phase, the needle is automatically detected and reconstructed.

    AI/ML Overview

    The provided document refers to the "Stereo 3D option for Vision Applications" (K152352). This device is an enhancement to existing GE Vision Applications (Vision 2, TrackVision 2, and HeartVision 2) and aims to reconstruct 3D objects (needles, points, and segments) from 2D X-ray images.

    Based on the provided text, the device did not involve a study to establish acceptance criteria for its performance in terms of diagnostic accuracy or reader improvement. Instead, the submission relies on non-clinical tests to demonstrate substantial equivalence to predicate devices and adherence to relevant standards.

    Here's a breakdown of the requested information based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state quantitative acceptance criteria for device performance in terms of accuracy or clinical effectiveness for the Stereo 3D option. Instead, it focuses on demonstrating compliance with standards and successful completion of various engineering and usability tests.

    | Acceptance Criteria (Implied) | Reported Device Performance |
    | --- | --- |
    | Compliance with NEMA PS 3.1 - 3.20 (2011) DICOM Set | "The Stereo 3D option for Vision Applications comply with NEMA PS 3.1 - 3.20 (2011) Digital Imaging and Communications in Medicine (DICOM) Set..." |
    | Compliance with IEC 62304 (2006) (software lifecycle processes) | "...and with voluntary standards IEC 62304 (2006) and IEC 62366 (2007)." (Implies successful adherence to software development and risk management processes for medical devices.) |
    | Compliance with IEC 62366 (2007) (usability) | "Usability validation testing is conducted to confirm that the product can be used safely and effectively." (Reported as completed and successful, with no unexpected results.) |
    | Software verification (conformance to requirements) | "Product verification ensures the software conforms to its requirements including hazard mitigations risk management requirements. The verification tests confirmed that design output meets design input requirements. The tests were executed at component, software subsystem, and system levels. Functional testing and performance testing are part of system level verification." (Reported as completed and successful.) |
    | Performance confirmation (bench testing) | "Performance has been confirmed with bench testing." "Additional bench testing was performed to substantiate Stereo 3D's product claims." "Engineering bench testing was able to be performed using existing phantoms, methods, and performance metrics. The requirements were met and there were not any unexpected results." |
    | Simulated use testing (conformance to user needs/intended uses) | "Simulated Use Testing ensured the system conforms to user needs and intended uses through simulated clinical workflows using step-by-step procedures that would be performed for representative clinical applications." (Reported as completed and successful, with no unexpected results.) |
    | Hazard mitigation | "All causes of hazard relative to the introduction of Stereo 3D option have been identified and mitigated." "Verification and Validation testing has demonstrated that the design inputs, user requirements, and risk mitigations have been met." |
    | No new issues of safety and effectiveness | "The results of design validation did not raise new issues of safety and effectiveness." "The Stereo 3D Option for Vision Applications does not raise new issues of safety and effectiveness. The Stereo 3D Option for Vision Applications does not introduce new fundamental scientific technology." (Conclusion of the submission, implying this criterion was met through the non-clinical tests and the substantial equivalence argument.) |

    2. Sample size used for the test set and the data provenance

    The document explicitly states: "The Stereo 3D option for Vision Applications did not require clinical studies to assess safety and effectiveness and, thus, to establish the substantial equivalence."

    Therefore, there is no mention of a "test set" in the context of clinical data, sample size, or data provenance (country of origin, retrospective/prospective). The assessment was based on non-clinical testing, including "bench testing" and "simulated clinical workflows using step-by-step procedures."

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    Not applicable, as no clinical test set with human expert ground truth was used for assessing the device's performance. The "usability validation testing" involved "licensed and/or clinically trained healthcare providers or users," but this was for confirming usability, not establishing ground truth for reconstructive accuracy.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    Not applicable, as no clinical test set necessitating adjudication was used.

    5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance

    No MRMC comparative effectiveness study was done. The submission explicitly states no clinical studies were required.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    The document mentions "bench testing" and "performance testing" to confirm the device's functionality and "substantiate Stereo 3D's product claims." It also notes: "Stereo 3D contains the algorithms used to detect the 2D needles on the image and to reconstruct points, needles and segments in 3D from fluoroscopic images." This implies that the algorithms themselves were tested, which can be seen as a form of standalone testing in a controlled, non-clinical environment (e.g., using phantoms). However, specific metrics of standalone algorithmic performance (e.g., accuracy of 3D reconstruction against synthetic ground truth) or detailed study designs for this have not been provided beyond general statements about "performance being confirmed."
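
    The submission gives no algorithmic detail, but reconstructing a 3D point from its 2D detections in two calibrated fluoroscopic views is classically done by linear triangulation (direct linear transform). The sketch below shows that generic textbook computation, with invented projection matrices; it should not be read as GE's actual method:

    ```python
    # Minimal sketch of linear (DLT) triangulation: recover one 3D point from
    # its 2D detections in two calibrated fluoroscopic views. This is a generic
    # textbook method, not GE's algorithm; the camera matrices are invented.
    import numpy as np

    def triangulate(P1, P2, uv1, uv2):
        """Triangulate a 3D point from two 3x4 projection matrices and the
        matching 2D detections (u, v) in each view."""
        (u1, v1), (u2, v2) = uv1, uv2
        # Each view contributes two linear constraints on the homogeneous point X.
        A = np.vstack([
            u1 * P1[2] - P1[0],
            v1 * P1[2] - P1[1],
            u2 * P2[2] - P2[0],
            v2 * P2[2] - P2[1],
        ])
        _, _, vt = np.linalg.svd(A)   # solve A @ X = 0 in the least-squares sense
        X = vt[-1]
        return X[:3] / X[3]           # de-homogenize

    def project(P, X):
        x = P @ np.append(X, 1.0)
        return x[:2] / x[2]

    # Self-check with two synthetic views: project a known point, then recover it.
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([np.eye(3), np.array([[-100.0], [0.0], [0.0]])])
    X_true = np.array([25.0, -10.0, 900.0])
    X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
    print(np.allclose(X_hat, X_true))  # True
    ```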

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    For the non-clinical "bench testing" and "performance testing," the ground truth likely involved phantom data with known 3D object positions and measurements. The document states: "engineering bench testing was able to be performed using existing phantoms, methods, and performance metrics."
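
    If the ground truth was indeed phantom markers at known 3D positions, the bench-test accuracy computation plausibly took a form like the following sketch; the coordinates and the 2.0 mm tolerance are invented for illustration, not values from the submission:

    ```python
    # Minimal sketch of a phantom-based accuracy check: compare reconstructed
    # 3D marker positions against the phantom's known (ground-truth) positions.
    # All coordinates and the 2.0 mm tolerance are illustrative assumptions.
    import numpy as np

    truth = np.array([[0.0, 0.0, 0.0],      # known phantom marker positions (mm)
                      [10.0, 0.0, 5.0],
                      [0.0, 20.0, 10.0]])
    recon = np.array([[0.3, -0.1, 0.2],     # positions output by the algorithm
                      [10.4, 0.2, 4.7],
                      [-0.2, 20.3, 10.4]])

    errors = np.linalg.norm(recon - truth, axis=1)  # per-marker 3D error (mm)
    print(f"mean error: {errors.mean():.2f} mm, max error: {errors.max():.2f} mm")
    assert errors.max() < 2.0, "accuracy requirement not met"
    ```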

    8. The sample size for the training set

    The document does not mention a "training set" in the context of machine learning or AI models with a large dataset. The "Stereo 3D option" is described as containing "algorithms used to detect the 2D needles... and to reconstruct points, needles and segments in 3D." It is based on "established GE technology" and the 3D reconstruction technology of an earlier predicate device ("Innova 3D"). This suggests that if there was any "training" in a modern AI sense, it happened as part of the development of the underlying algorithms, which are considered "established technology," and no specific training set size or methodology is provided for this submission.

    9. How the ground truth for the training set was established

    Not applicable, as no "training set" in a modern AI context is described or detailed for this submission. The technology is based on "established GE technology," implying algorithms developed and potentially validated previously, likely using phantom data or engineered models to establish ground truth for calibration and development of reconstruction techniques.


    K Number
    K142736
    Date Cleared
    2015-03-11

    (169 days)

    Product Code
    Regulation Number
    892.1650
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices :

    K141541, K132222, K023760, K110834

    Intended Use

    The Angio Workstation (XIDF-AWS801) is used in combination with an interventional angiography system (Infinix-i series systems and INFX series systems) to provide 2D and 3D imaging in selective catheter angiography procedures for the whole body (includes heart, chest, abdomen, brain and extremity).

    When XIDF-AWS801 is combined with Dose Tracking System (DTS), DTS is used in selective catheter angiography procedures for the heart, chest, abdomen, pelvis and brain.

    Device Description

    The XIDF-AWS801 Angio Workstation is used for image input from a Diagnostic Imaging System or Workstation, image processing, and display. The processed images can be output to a Diagnostic Imaging System or Workstation.

    AI/ML Overview

    This document is a 510(k) premarket notification for the Toshiba Medical Systems Corporation's XIDF-AWS801, Angio Workstation, v5.31. It describes an update to an existing device rather than a de novo submission. Therefore, it primarily focuses on demonstrating substantial equivalence to a predicate device rather than providing a detailed study proving the device meets specific acceptance criteria in the same way a new device with novel claims would.

    However, based on the provided text, I can infer and extract some information relevant to your request, particularly regarding the additional software features introduced in this version (Left Atrium Segmentation, Parametric Images, and MAR – Metal Artifact Reduction). The document states that "Testing was performed using archived clinical images, simulation testing and bench (phantom) testing."

    Here's an attempt to answer your questions based on the available information, noting where specific details are not provided in this 510(k) summary:


    1. A table of acceptance criteria and the reported device performance

    The document does not explicitly state quantitative acceptance criteria or detailed performance metrics for the new software features. The overarching "acceptance criterion" for this 510(k) submission is that the modifications (new software features) retain the safety and effectiveness of the cleared predicate device.

    Feature / Criterion | Reported Device Performance
    New Software Features
    Left Atrium Segmentation (automatic segmentation) | Testing (using archived clinical images, simulation, and bench testing) verified that the performance of the changes was within specified requirements and that the modifications retained the safety and effectiveness of the cleared device. (Specific quantitative performance metrics like accuracy, sensitivity, or precision are not provided in this summary.)
    Parametric Images | Testing (using archived clinical images, simulation, and bench testing) verified that the performance of the changes was within specified requirements and that the modifications retained the safety and effectiveness of the cleared device. (Specific quantitative performance metrics are not provided.)
    MAR (Metal Artifact Reduction) | Testing (using archived clinical images, simulation, and bench testing) verified that the performance of the changes was within specified requirements and that the modifications retained the safety and effectiveness of the cleared device. (Specific quantitative performance metrics are not provided.)
    Overall Device Safety and Effectiveness | The device modifications do not change the indications for use or intended use. Safety and effectiveness have been verified via risk management and application of design controls. Testing has verified that the changes perform as intended and include user information related to their performance. The device is substantially equivalent to the predicate.
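
    Because the summary reports no quantitative metrics for these features, the following can only illustrate the kind of scoring such segmentation bench testing commonly uses; the Dice similarity coefficient is a conventional choice for comparing an automatic mask against a reference mask, not a metric stated in this 510(k):

    ```python
    # Minimal sketch of how automatic segmentation bench testing is commonly
    # scored: Dice overlap between the algorithm's mask and a reference mask.
    # The metric is a common convention, not one reported in this 510(k).
    import numpy as np

    def dice(pred: np.ndarray, ref: np.ndarray) -> float:
        """Dice similarity coefficient between two boolean masks."""
        pred, ref = pred.astype(bool), ref.astype(bool)
        intersection = np.logical_and(pred, ref).sum()
        denom = pred.sum() + ref.sum()
        return 2.0 * intersection / denom if denom else 1.0

    # Toy 2D example (real use would score 3D volumes from archived images).
    ref = np.zeros((8, 8), bool); ref[2:6, 2:6] = True
    pred = np.zeros((8, 8), bool); pred[3:7, 2:6] = True
    print(f"Dice = {dice(pred, ref):.2f}")   # 0.75
    ```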

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    • Sample Size: Not specified in the summary. The text mentions "archived clinical images" and "data sets."
    • Data Provenance: The data used for testing included "archived clinical images" as well as "simulation testing and bench (phantom) testing." The country of origin for the clinical images is not specified. "Archived clinical images" suggests the data was retrospective.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    This information is not provided in the summary. Given the context of a 510(k) amendment for new software features that are "not intended for stand-alone use or diagnosis" but rather to provide "information that is to be used in adjunct to the normal images," it's possible that formal expert consensus for ground truth on a large test set was not a primary focus for this specific submission's documented testing, or the details were not included in the public summary.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    This information is not provided in the summary.

    5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance

    A multi-reader multi-case (MRMC) comparative effectiveness study is not mentioned in the summary. The document focuses on the technical performance verification of the new features and their substantial equivalence, not on human reader performance with or without the device. The new software is explicitly stated as "not intended for stand-alone use or diagnosis" and is instead "to provide the user with information that is to be used in adjunct to the normal images."

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    Yes, testing was done on the algorithms (the new software features). The summary states "Testing was conducted using bench (phantom) tests, simulations and archived images data sets. The results of this testing verified that the performance of the changes was within the specified requirements." This indicates a standalone evaluation of the software's output, as distinct from evaluating human performance. However, it also states the software is "not intended for stand-alone use or diagnosis."

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    The specific type of ground truth for the "archived clinical images" is not specified. For bench and simulation testing, the 'ground truth' would typically be defined by the known parameters of the phantom or simulated data.

    8. The sample size for the training set

    This information is not provided in the summary. The document mentions "archived clinical images" were used for testing, but it doesn't discuss a separate training set for the development of the algorithms.

    9. How the ground truth for the training set was established

    Since a training set size or its use isn't specified, how its ground truth was established is also not provided.


    K Number
    K141074
    Device Name
    CORTEX ID SUITE
    Date Cleared
    2014-09-18

    (146 days)

    Product Code
    Regulation Number
    892.1200
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices :

    K110834

    Intended Use

    CortexID software has been developed to aid physicians in the evaluation of patient pathologies via assessment and quantification of PET brain scans.

    The software aids in the assessment of human brain PET scans enabling automated analysis through quantification of tracer uptake and comparison with the corresponding tracer uptake in normal subjects. The resulting quantification is presented using volumes of interest, voxel-based or 3D stereotactic surface projection maps of the brain. The package allows the user to generate information regarding relative changes in PET-FDG glucose metabolism.

    CortexID Suite additionally allows the user to generate information in PET brain amyloid load between a subject's images and a normal database, which may be the result of brain neurodegeneration.

    PET co-registration and fusion display capabilities with CT and MR allow PET findings to be related to brain anatomy and offers visualization of structural abnormalities, which may result from brain injury, trauma, disorder, disease or dysfunction, such as subdural hematoma, tumor, stroke, or cerebrovascular disease, etc.

    CortexID Suite may aid physicians in the image interpretation process of PET studies conducted on patients being evaluated for cognitive impairment, or other causes of cognitive decline, and is an adjunct to other diagnostic evaluations.
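
    The co-registration and fusion capability described above is, in generic terms, multimodal rigid registration. A minimal sketch using the open-source SimpleITK library follows; the file names and parameter values are illustrative assumptions, and the submission does not describe the actual registration method used in CortexID Suite:

    ```python
    # Minimal sketch of rigid multimodal (PET-to-CT) registration with the
    # open-source SimpleITK library. File names and parameters are invented;
    # this is a generic approach, not the method used in CortexID Suite.
    import SimpleITK as sitk

    fixed = sitk.ReadImage("ct.nii.gz", sitk.sitkFloat32)
    moving = sitk.ReadImage("pet.nii.gz", sitk.sitkFloat32)

    initial = sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=2.0, minStep=1e-4, numberOfIterations=200)
    reg.SetInitialTransform(initial, inPlace=False)
    reg.SetInterpolator(sitk.sitkLinear)

    transform = reg.Execute(fixed, moving)
    # Resample the PET volume into CT space so the two can be fused for display.
    fused_pet = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
    ```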

    Device Description

    CortexID, image analysis software, has been developed to aid clinicians in the assessment and quantification of pathologies derived from PET scans. The software enables the display, co-registration, and fusion of PET images with those from other modalities. It enables automated quantitative and statistical analysis of tracer uptake by registration to a standard template space and comparing intensity values. Additionally, CortexID assists with comparison of the activity in defined brain regions of individual scans relative to normal activity values as found in normal subjects. Quantification is presented using volumes of interest, voxel-based or 3D stereotactic surface projection maps of the brain.
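
    The comparison against normal-subject values described above amounts, in generic terms, to a region-wise z-score computation. A minimal sketch follows; all region names and numbers are invented for illustration, and this is not GE's actual analysis pipeline:

    ```python
    # Minimal sketch of region-wise comparison against a normals database:
    # z = (patient_value - normal_mean) / normal_std per brain region. All
    # region names and numbers are invented; not GE's actual pipeline.
    import numpy as np

    regions = ["frontal", "parietal", "temporal", "occipital"]
    normal_mean = np.array([1.05, 1.10, 0.95, 1.20])  # normals DB mean uptake ratio
    normal_std = np.array([0.08, 0.07, 0.06, 0.09])   # normals DB std deviation
    patient = np.array([0.82, 1.08, 0.80, 1.18])      # patient uptake ratio per region

    z_scores = (patient - normal_mean) / normal_std
    for name, z in zip(regions, z_scores):
        flag = "  <-- more than 2 SD below normal" if z < -2.0 else ""
        print(f"{name:10s} z = {z:+.2f}{flag}")
    ```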

    Key features of the CortexID Suite include:
    a. Integrated platform for FDG and Beta Amyloid analysis
    b. PET-MR and PET-CT registration and fusion
    c. Automatic reorientation and standardization
    d. 3D SSP, Voxel based images and VOI statistics
    e. Comparison with normals databases
    f. Longitudinal Analysis
    g. Q.Check
    h. Summing input dynamic PET scan
    i. Exam Summary (integrated report)
    j. Easy Export
    k. Multiple Reference Regions
    l. Reorientation
    m. Region Overlay
    n. Fully Customizable Interface
    o. Preset Presentations
    p. MR Template Image

    The CortexID Suite is also made available as a standalone post-processing application on the AW VolumeShare 5 workstation (K110834), which hosts advanced image processing applications.

    AI/ML Overview

    The provided document is a 510(k) summary for the GE Medical Systems CortexID Suite software. It declares substantial equivalence to a predicate device and does not contain a study section with acceptance criteria and a detailed study proving the device meets those criteria.

    Instead, it functions as a submission to the FDA, asserting that CortexID Suite is substantially equivalent to a previously cleared device (K062393 - CortexID). The determination of substantial equivalence is based on non-clinical tests and the assertion that "The subject of this premarket submission, CortexID Suite software did not require clinical studies to support substantial equivalence since it introduces analysis of new PET image contrast agents only. The clinical utility for the analysis of the new image processing has not been studied since modification does not significantly affect the clinical safety and performance."

    Therefore, based on the provided text, I cannot extract the information required to populate the requested table and answer the specific study-related questions. The document explicitly states that clinical studies were not required for this submission.

    Summary of unavailable information:

    • Acceptance criteria and reported device performance table: Not provided in the document.
    • Sample size used for the test set and data provenance: Not applicable as no clinical test set was used for substantial equivalence.
    • Number of experts used to establish ground truth for the test set and qualifications: Not applicable.
    • Adjudication method: Not applicable.
    • MRMC comparative effectiveness study and effect size: Not applicable as no clinical studies were performed.
    • Standalone (algorithm only) performance: Not explicitly stated or evaluated in the provided text as a separate study. The document focuses on the software and its features.
    • Type of ground truth used: Not applicable.
    • Sample size for the training set: Not applicable (no explicit mention of training a new algorithm or clinical study).
    • How ground truth for the training set was established: Not applicable.

    The document focuses on:

    • Identifying the device and its predicate.
    • Listing the key features of the CortexID Suite.
    • Stating the intended use/indications for use.
    • Explaining the technological similarity to the predicate device.
    • Detailing the non-clinical tests performed (compliance with DICOM standard, risk analysis, requirements reviews, design reviews, integration testing, performance testing, safety testing).
    • Asserting that clinical studies were not required for this specific submission's substantial equivalence determination.

    K Number
    K130069
    Manufacturer
    Date Cleared
    2013-04-05

    (84 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices :

    K092639, K110834, K113034, K041521, K111200

    Intended Use

    Innova EPVision 2.0 software application is intended to enable users to load 3D datasets and overlay and register in real time these 3D datasets with radioscopic or radiographic images of the same anatomy in order to support catheter/device guidance during interventional procedures.

    Innova EPVision 2.0 software application is intended to enable users to load, overlay and register in real time 3D datasets with radioscopic or radiographic images of the same anatomy. Electrophysiological signal information is imported and used to color-code these 3D datasets in order to support catheter/device guidance during cardiac electrophysiology interventional procedures.

    Device Description

    Innova EPVision 2.0 is the new version of the Innova EPVision software, which is part of the Innova Vision Applications [K092639] software. Innova EPVision 2.0, like all Innova Vision Applications image processing algorithms, executes on the Advantage Workstation (AW) hardware platform [K110834].
    It can perform the following functions:

    • Superimpose the segmented DICOM 3D XA, CT, or MR dataset on a radioscopic or radiographic image of the same anatomy, obtained on an Innova Fluoroscopic X-ray system [K113034].
    • Register the segmented DICOM 3D XA, CT, or MR dataset with radioscopic or radiographic images obtained on an Innova Fluoroscopic X-ray system for interventional procedures.
    • Image stabilization features such as ECG-gated display or motion tracking in the image.
    • Capability to load planning data deposited on the 3D model in Volume Viewer [K041521], such as 3D landmarks and ablation lines, and to display them on the 3D-2D fused image to support the physician during procedures.
    • Marking points of interest of different sizes and colors during procedures.
    • The frequently used functions are also available from tableside on the Innova Central Touch Screen to provide an efficient workflow during interventional procedures.

    Innova EPVision 2.0 can additionally perform the following functions:

    • Import electrophysiology (EP) data digitized and processed on the CardioLab system [K111200] and use them to color-code EP recording points on the 3D model of the visualized anatomy, in order to support catheter/device guidance during cardiac electrophysiology interventional procedures.
    • Catheter tip detection to help locate the catheter tip on the 2D X-ray image. The user can modify or correct the automatically proposed tip location at any time.
    • 3D side viewer allowing the user to freely rotate the 3D model independently of the fluoro image and the gantry angulation.

    Innova EPVision 2.0, like Innova EPVision, targets clinical indications for interventional cardiology procedures, in particular cardiac electrophysiology procedures.
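
    The core of the 3D/2D overlay described in this record is the projection of registered 3D model points into the fluoroscopic image through the system's calibrated geometry. A generic pinhole-projection sketch follows; the intrinsic and extrinsic values are invented, and the submission does not describe the actual Innova calibration model:

    ```python
    # Minimal sketch of overlaying 3D model points on a 2D fluoroscopic image:
    # project each 3D point through a calibrated pinhole model. The geometry
    # values below are invented for illustration.
    import numpy as np

    K = np.array([[1200.0, 0.0, 512.0],   # intrinsics: focal lengths (px) and
                  [0.0, 1200.0, 512.0],   # principal point for a 1024x1024 image
                  [0.0, 0.0, 1.0]])
    R = np.eye(3)                          # registration rotation (3D model -> C-arm)
    t = np.array([0.0, 0.0, 800.0])        # translation: ~800 mm along the beam axis

    def project_points(pts3d: np.ndarray) -> np.ndarray:
        """Project Nx3 model points (mm) to Nx2 pixel coordinates."""
        cam = pts3d @ R.T + t                # into the C-arm camera frame
        uvw = cam @ K.T                      # apply intrinsics
        return uvw[:, :2] / uvw[:, 2:3]      # perspective divide

    model_pts = np.array([[0.0, 0.0, 0.0], [10.0, 5.0, -20.0]])
    print(project_points(model_pts))         # pixel positions for the overlay
    ```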

    AI/ML Overview

    Here's an analysis of the provided text regarding the acceptance criteria and study for the Innova EPVision 2.0 device:

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided 510(k) summary does not explicitly state specific quantitative acceptance criteria or detailed performance metrics for the Innova EPVision 2.0. Instead, it relies on demonstrating substantial equivalence to a predicate device (K092639, Innova Vision Applications) and adherence to various standards (NEMA PS 3.1 - 3.20 (2011) DICOM Set, IEC 60601-1-4 (2000), IEC 62304 (2006), IEC 62366 (2007)).

    The "reported device performance" is described in terms of verification and validation activities designed to ensure the device works as required and meets user needs and intended use.

    Summary of Device Performance (as described):

    Acceptance Criteria (Implied by Regulatory Compliance) | Reported Device Performance
    Compliance with NEMA PS 3.1 - 3.20 (DICOM Set) | Verified as compliant
    Compliance with IEC 60601-1-4 (Medical Electrical Equipment) | Verified as compliant
    Compliance with IEC 62304 (Software Life Cycle Processes) | Verified as compliant
    Compliance with IEC 62366 (Usability Engineering) | Verified as compliant
    Risk Management effectiveness | Implemented and tested
    Requirements Reviews completion | Performed
    Design Reviews completion | Performed
    Performance and Safety testing (Verification) | Performed at Unit, Integration, and System levels to check functionality and risk mitigation.
    Final Acceptance Testing (Validation) | Performed to ensure user needs, intended use, risk mitigation, and labeling are effective.
    Substantial Equivalence to Predicate Device (K092639) | Concluded to be safe, effective, and substantially equivalent.
    No new significant indications for use | Application works within predicate's intended use/indications.
    No new issues of safety and effectiveness | Verified through testing
    No new fundamental scientific technology | Application uses same fundamental technology as predicate.

    2. Sample size used for the test set and the data provenance

    The document does not specify a "test set" in the context of clinical data for performance evaluation. Instead, it discusses verification and validation tests as part of the software development lifecycle. These tests would involve a variety of inputs and scenarios, but the exact sample sizes (e.g., number of test cases, number of images) for these engineering tests are not provided.

    There is no mention of data provenance (e.g., country of origin, retrospective/prospective) because no clinical studies were deemed necessary or performed to support substantial equivalence.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    This information is not applicable or provided as no clinical studies were performed that would require expert-established ground truth on a test set. The ground truth for the engineering verification and validation tests would be the expected behavior of the software as defined by its requirements and design specifications, not clinical expert consensus.

    4. Adjudication method for the test set

    This information is not applicable or provided since no clinical validation study involving adjudication of a test set was performed.

    5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance

    No MRMC comparative effectiveness study was done. The submission explicitly states: "The subject of this premarket submission, Innova EPVision 2.0, did not require clinical studies to support substantial equivalence." Therefore, there is no reported effect size of human readers improving with or without AI assistance.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    The document describes the device as a software application that "enables users to load, overlay and register... in order to support catheter/device guidance." It also mentions "Catheter tip detection to help locate the catheter tip on the 2D X-ray image. The user can modify or correct the automatically proposed tip location anytime." This clearly indicates a human-in-the-loop design.

    While the "Catheter tip detection" component could have an internal standalone performance evaluation during development, the submission does not report a standalone (algorithm only) performance study in the context of a regulatory submission outcome. The overall device is intended to be used with a human interventionalist.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    For the engineering verification and validation tests, the ground truth would be the expected software behavior and functionality based on design input (system requirements and specifications). For example, a test for "Superimpose the segmented DICOM 3D XA, CT, MR dataset on radioscopic or radiographic image" would have a ground truth defined by the correctness of the overlay as per the design specifications. There is no mention of clinical ground truth (expert consensus, pathology, or outcomes data) being used for the regulatory submission's performance evaluation because no clinical studies were conducted.

    8. The sample size for the training set

    This information is not provided. The document makes no mention of machine learning model training or a training set. The device is described as inheriting functions and employing the "same fundamental scientific technology" as its predicate, implying a rule-based or traditional image processing approach rather than a machine learning approach requiring distinct training data that would be relevant to this 510(k) submission.

    9. How the ground truth for the training set was established

    This information is not provided as no training set is mentioned in the context of this 510(k) submission.

