Search Results

Found 7 results

510(k) Data Aggregation

    K Number
    K223490
    Date Cleared
    2023-03-21

    (120 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices :

    K181403, K110834, K081985, K041521, K092639

    Intended Use

    FlightPlan for Embolization is a post processing software package that helps the analysis of 3D X-ray angiography images. Its output is intended to be used by physicians as an adjunct means to help visualize vasculature during the planning phase of embolization procedures. FlightPlan for Embolization is not intended to be used during therapy delivery.

    The output includes segmented vasculature, and selective display of proximal vessels from a reference point determined by the user. User-defined data from the 3D X-ray angiography images may be exported for use during the guidance phase of the procedure. The injection points should be confirmed independently of FlightPlan for Embolization prior to therapy delivery.

    Device Description

    FlightPlan for Embolization is a post-processing, software-only application using 3D X-ray angiography images (CBCT) as input. It helps clinicians visualize vasculature to aid in the planning of endovascular embolization procedures throughout the body.

    A new option, called AI Segmentation, was developed through modifications to the predicate device, GE HealthCare's FlightPlan for Embolization [K193261]. It includes two new algorithms. This AI Segmentation option is what triggered this 510(k) submission.

    The software processes 3D X-ray angiography images (CBCT) acquired from GE HealthCare's interventional X-ray system [K181403], operates on GEHC's Advantage Workstation (AW) [K110834] and AW Server (AWS) [K081985] platforms, and is an extension to GE HealthCare's Volume Viewer application [K041521].

    FlightPlan for Embolization is intended to be used during the planning phase of embolization procedures.

    The primary features/functions of the proposed software are:

    • Semi-automatic segmentation of vasculature from a starting point determined by the user, when the AI Segmentation option is not activated (a simplified sketch of this style of seed-grown segmentation follows this list);
    • Automatic segmentation of vasculature powered by a deep-learning algorithm, when the AI Segmentation option is activated;
    • Automatic definition of the root point powered by a deep-learning algorithm, when AI Segmentation option is activated;
    • Selective display (Live Tracking) of proximal vessels from a point determined by the user's cursor;
    • Ability to segment part of the selected vasculature;
    • Ability to mark points of interest (POI) to store cursor position(s);
    • Save results and optionally export them to other applications such as GEHC's Vision Applications [K092639] for 3D road-mapping.
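
    The submission does not disclose GE HealthCare's algorithms. As a rough illustration of the general technique behind the first feature, here is a minimal seed-based region-growing sketch; the tolerance value and helper names are hypothetical, not device parameters.

```python
import numpy as np
from collections import deque

def region_grow(volume, seed, tolerance=0.25):
    """Grow a binary vessel mask outward from a user-selected seed voxel,
    accepting 6-connected neighbors whose intensity is close to the seed's.
    `tolerance` is an arbitrary illustrative value, not a device setting."""
    mask = np.zeros(volume.shape, dtype=bool)
    seed_val = volume[seed]
    mask[seed] = True
    queue = deque([seed])
    offsets = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2] and not mask[nz, ny, nx]
                    and abs(volume[nz, ny, nx] - seed_val) <= tolerance):
                mask[nz, ny, nx] = True
                queue.append((nz, ny, nx))
    return mask

# Example: segment from a user-defined starting point in a toy volume.
vol = np.random.rand(32, 32, 32).astype(np.float32)
vessel_mask = region_grow(vol, seed=(16, 16, 16))
```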
    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study details for the GE Medical Systems SCS's FlightPlan for Embolization device, based on the provided text:

    Acceptance Criteria and Device Performance

    Feature / Algorithm | Acceptance Criteria | Reported Device Performance
    ------------------- | ------------------- | ----------------------------
    Vessel Extraction   | 90% success rate    | 93.7% success rate
    Root Definition     | 90% success rate    | 95.2% success rate
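
    The summary gives only the aggregate rates. Assuming they are simple proportions over the 207-scan test set (raw counts are not reported), the implied counts and a Wilson 95% confidence interval can be reconstructed as a reader-side sanity check; this is not an analysis from the submission.

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

n = 207  # test scans, one per patient
for name, rate in [("Vessel Extraction", 0.937), ("Root Definition", 0.952)]:
    k = round(rate * n)  # implied success count (93.7% -> ~194, 95.2% -> ~197)
    lo, hi = wilson_ci(k, n)
    print(f"{name}: {k}/{n} = {k/n:.1%}, 95% CI ({lo:.1%}, {hi:.1%})")
```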

    Study Details

    1. Sample Size Used for the Test Set and Data Provenance:

    • Test Set Sample Size: 207 contrast-injected CBCT scans, each from a unique patient.
    • Data Provenance: The scans were acquired during the planning of embolization procedures from GE HealthCare's interventional X-ray system. The text indicates that these were from "clinical sites" and were "representative of the intended population" but does not specify countries of origin. The study appears to be retrospective, using existing scans.

    2. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications:

    • Vessel Extraction: 3 board-certified radiologists.
    • Root Definition: 1 GEHC advanced application specialist.

    3. Adjudication Method for the Test Set:

    • Vessel Extraction: Consensus of 3 board-certified radiologists. (Implies a qualitative agreement, not a specific quantitative method like 2+1).
    • Root Definition: The acceptable area was manually defined by the annotator (the GEHC advanced application specialist).

    4. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done:

    • No, an MRMC comparative effectiveness study was not explicitly described in terms of human readers improving with AI vs. without AI assistance. The non-clinical testing focused on the algorithms' performance against ground truth and the clinical assessment used a Likert scale to evaluate the proposed device with the AI option, rather than a direct comparison of human reader performance with and without AI.

    5. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done:

    • Yes, a standalone performance evaluation was conducted for both the vessel extraction and root definition algorithms. The reported success rates of 93.7% and 95.2% are measures of the algorithms' performance against established ground truth without a human in the loop for the primary performance metrics.

    6. The Type of Ground Truth Used:

    • Vessel Extraction: Expert consensus (3 board-certified radiologists).
    • Root Definition: Manual definition by an expert (GEHC advanced application specialist), defining an "acceptable area."

    7. The Sample Size for the Training Set:

    • The document states that "contrast injected CBCT scans acquired from GE HealthCare's interventional X-ray system [K181403] were used for designing and qualifying the algorithms." However, it does not specify the sample size for the training set. It only mentions that a test set of 207 scans was "reserved, segregated, and used to evaluate both algorithms."
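
    To illustrate what "reserved, segregated" typically means in practice, here is a hypothetical patient-level split using scikit-learn's GroupShuffleSplit, which guarantees no patient contributes scans to both partitions. All sizes here are invented; the submission states only the 207-scan test set.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# Hypothetical: scan-level records tagged with a patient ID.
gen = np.random.default_rng(0)
n_scans = 1000                        # invented total; the submission does not say
patient_ids = gen.integers(0, 900, size=n_scans)

# Split by patient so no patient appears in both partitions.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(splitter.split(np.zeros((n_scans, 1)), groups=patient_ids))
assert set(patient_ids[train_idx]).isdisjoint(patient_ids[test_idx])
```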

    8. How the Ground Truth for the Training Set Was Established:

    • The document does not explicitly state how the ground truth for the training set was established. It only details the ground truth establishment for the test set.

    K Number
    K210807
    Date Cleared
    2021-10-22

    (219 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices :

    GE's FlightPlan for Embolization, GE's Advantage Workstation [K110834], AW Server [K081985], GE's Volume

    Intended Use

    FlightPlan for Liver is a post processing software package that helps the analysis of 3D X-ray angiography images of the liver. Its output is intended as an adjunct means to help visualize vasculature and identify arteries leading to the vicinity of hypervascular lesions in the liver. This adjunct information may be used by physicians to aid them in their evaluation of hepatic arterial anatomy during the planning phase of embolization procedures.

    Device Description

    FlightPlan for Liver with the Parenchyma Analysis option is a post-processing, software-only application using 3D X-ray angiography images (CBCT) as input. It helps physicians visualize and analyze vasculature to aid in the planning of endovascular embolization procedures in the liver. It was developed from modifications to the predicate device, GE's FlightPlan for Liver [K121200], including the addition of two new algorithms supporting the Parenchyma Analysis option. The Parenchyma Analysis option is what triggered this 510(k). The subject device also includes a feature, Live Tracking, that was cleared in the reference device, GE's FlightPlan for Embolization. The software operates on GE's Advantage Workstation [K110834] platform and AW Server [K081985] platform and is an extension to GE's Volume Viewer application [K041521].

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study information for FlightPlan for Liver, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document states that "The test results of both of the algorithms met their predefined acceptance criteria" for the deep learning-based Liver Segmentation algorithm and the non-deep learning Virtual Parenchyma Visualization (VPV) algorithm. However, the specific quantitative acceptance criteria and their corresponding reported performance values are not explicitly detailed in the provided text.

    The clinical assessment also "demonstrated that the proposed device FlightPlan for Liver with the Parenchyma Analysis option met its predefined acceptance criteria," but again, the specifics are not provided.

    Therefore, a table cannot be fully constructed without this missing information.

    Missing Information:

    • Specific quantitative acceptance criteria for the Liver Segmentation algorithm (e.g., Dice score, IoU, boundary distance; a computation sketch for these overlap metrics follows this list).
    • Specific quantitative reported performance for Liver Segmentation algorithm.
    • Specific quantitative acceptance criteria for VPV algorithm (e.g., accuracy of distal liver region estimation).
    • Specific quantitative reported performance for VPV algorithm.
    • Specific quantitative or qualitative acceptance criteria for the clinical assessment using the 5-point Likert scale.
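
    For context on the first missing item, Dice and IoU are standard overlap metrics computed from binary masks. A generic sketch, not the submission's evaluation code:

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice coefficient: 2|A∩B| / (|A| + |B|) for boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over union: |A∩B| / |A∪B|."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

# Toy masks: two overlapping squares.
pred = np.zeros((64, 64), dtype=bool); pred[10:40, 10:40] = True
gt = np.zeros((64, 64), dtype=bool);   gt[15:45, 15:45] = True
print(f"Dice={dice(pred, gt):.3f}, IoU={iou(pred, gt):.3f}")
```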

    2. Sample Sizes and Data Provenance

    For the Deep Learning Liver Segmentation Algorithm:

    • Test Set Sample Size: Not explicitly stated, but derived from a "database of contrast injected CBCT liver acquisitions."
    • Data Provenance: Clinical sites in the USA and France. The text does not state whether the data was collected retrospectively or prospectively; the database was used for both training and testing.

    For the Non-Deep Learning Virtual Parenchyma Visualization (VPV) Algorithm:

    • Test Set Sample Size: "a test set of proximal CBCT acquisitions." The exact number is not provided.
    • Data Provenance: From the USA and France.

    For the Clinical Testing:

    • Test Set Sample Size: "A sample of 3D X-ray angiography image pairs." The exact number is not provided.
    • Data Provenance: From France and the USA.

    3. Number of Experts and Qualifications for Ground Truth

    For the Deep Learning Liver Segmentation Algorithm:

    • Number of Experts: Not specified.
    • Qualifications: Not specified. The ground truth method itself (how segmentation was established for training and testing) is not fully detailed beyond using existing "contrast injected CBCT liver acquisitions."

    For the Non-Deep Learning Virtual Parenchyma Visualization (VPV) Algorithm:

    • Number of Experts: Not specified.
    • Qualifications: Not specified.
    • Ground Truth Basis: "selective contrast injected CBCT exams from same patients." It's implied that these were expert-interpreted or based on a recognized clinical standard, but the specific expert involvement is not detailed.

    For the Clinical Testing:

    • Number of Experts: Not specified.
    • Qualifications: "interventional radiologists." No experience level (e.g., years of experience) is provided.

    4. Adjudication Method for the Test Set

    The document does not explicitly mention an adjudication method (like 2+1 or 3+1 consensus) for any of the test sets.
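
    For context, a "2+1" scheme means two primary readers label each case independently and a third reader adjudicates only the disagreements. A minimal sketch of that logic, with hypothetical labels (nothing in the submission describes this):

```python
def adjudicate_2_plus_1(reader_a, reader_b, adjudicator):
    """2+1 adjudication: accept the primary readers' label when they agree;
    otherwise the third reader's call is final. Inputs are per-case labels."""
    return [a if a == b else c for a, b, c in zip(reader_a, reader_b, adjudicator)]

# Toy binary labels for five cases:
print(adjudicate_2_plus_1([1, 0, 1, 1, 0], [1, 1, 1, 0, 0], [0, 1, 1, 1, 1]))
# -> [1, 1, 1, 1, 0]
```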

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • The document describes a "Clinical Testing" section where "interventional radiologists using a 5-point Likert scale" assessed image pairs. This suggests a reader study.
    • However, it does not explicitly state that it was an MRMC comparative effectiveness study comparing human readers with AI assistance vs. without AI assistance. The study assessed whether the device "met its predefined acceptance criteria and helps physicians visualize and analyze... and aids in the planning..." which seems to be an evaluation of the device's utility rather than a direct comparison of reader performance with and without the AI.
    • Therefore, no effect size of human readers improving with AI vs. without AI assistance is reported.

    6. Standalone (Algorithm Only) Performance

    • Yes, standalone performance was done for both new algorithms.
      • The "deep learning-based Liver Segmentation algorithm" was evaluated, although specific performance metrics are not given.
      • The "non-deep learning Virtual Parenchyma Visualization algorithm's performance was evaluated [...] compared to selective contrast injected CBCT exams from same patients used as the ground truth."
    • This indicates that the algorithms themselves were tested for their inherent performance.

    7. Type of Ground Truth Used

    For the Deep Learning Liver Segmentation Algorithm:

    • Based on "contrast injected CBCT liver acquisitions." The precise method for establishing the "correct" segmentation (e.g., manual expert tracing, pathology, outcome data) is not detailed. It's implicitly clinical data.

    For the Virtual Parenchyma Visualization (VPV) Algorithm:

    • "selective contrast injected CBCT exams from same patients used as the ground truth." This implies a gold standard of directly observed, selective angiography, which is a clinical reference.

    For the Clinical Testing:

    • The "ground truth" here is the perception and evaluation of interventional radiologists using a 5-point Likert scale regarding the device's utility ("helps physicians visualize," "aids in the planning"). This is a form of expert consensus/subjective assessment of utility.

    8. Sample Size for the Training Set

    For the Deep Learning Liver Segmentation Algorithm:

    • "a database of contrast injected CBCT liver acquisitions from clinical sites in the USA and France was used for the training and testing." The exact sample size for training is not specified, only that a database was used.

    For the Virtual Parenchyma Visualization (VPV) Algorithm:

    • The VPV algorithm is described as "non-deep learning," so it would not have a traditional "training set" in the same way a deep learning model would. It likely relies on predefined anatomical/physiological models or rules.

    9. How Ground Truth for the Training Set Was Established

    For the Deep Learning Liver Segmentation Algorithm:

    • The document states "a database of contrast injected CBCT liver acquisitions [...] was used for the training." However, it does not explicitly detail how the ground truth labels (i.e., the correct liver segmentations) were established for these training images. This is a critical piece of information for a deep learning model. It's usually through expert manual annotation for segmentation tasks.

    K Number
    K193281
    Device Name
    Hepatic VCAR
    Date Cleared
    2020-03-20

    (114 days)

    Product Code
    Regulation Number
    892.1750
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices :

    K041521, K110834, K081985

    Intended Use

    Hepatic VCAR is a CT image analysis software package that allows the analysis and visualization of Liver CT data derived from DICOM 3.0 compliant CT scans. Hepatic VCAR is designed for the purpose of assessing liver morphology, including liver lesion, provided the lesion has different CT appearance from surrounding liver tissue; and its change over time through automated tools for liver lobe, liver segments and liver lesion segmentation and measurement. It is intended for use by clinicians to process, review, archive, print and distribute liver CT studies.

    This software will assist the user by providing initial 3D segmentation, visualization, and quantitative analysis of liver anatomy. The user has the ability to adjust the contour and confirm the final segmentation.

    Device Description

    Hepatic VCAR is a CT image analysis software package that allows the analysis and visualization of Liver CT data derived from DICOM 3.0 compliant CT scans. Hepatic VCAR was designed for the purpose of assessing liver morphology, including liver lesion, provided the lesion has different CT appearance from surrounding liver tissue; and its change over time through automated tools for liver, liver lobe, liver segments and liver lesion segmentation and measurement.

    Hepatic VCAR is a post processing software medical device built on the Volume Viewer (K041521) platform, and can be deployed on the Advantage Workstation (AW) (K110834) and AW Server (K081985) platforms, CT scanners, and, in the future, PACS stations or the cloud.
    This software will assist the user by providing initial 3D segmentation, vessel analysis, visualization, and quantitative analysis of liver anatomy. The user has the ability to adjust the contour and confirm the final segmentation.
    In the proposed device, two new algorithms utilizing deep learning technology were introduced. One such algorithm segments the liver producing a liver contour editable by the user; another algorithm segments the hepatic artery based on an initial user input point. The hepatic artery segmentation is also editable by the user.

    AI/ML Overview

    The provided text describes the 510(k) summary for Hepatic VCAR, a CT image analysis software package. The submission outlines the device's intended use and the validation performed, particularly highlighting the introduction of two new deep learning algorithms for liver and hepatic artery segmentation.

    Here's an analysis of the acceptance criteria and study proving the device meets them, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document doesn't explicitly define "acceptance criteria" through numerical thresholds for performance metrics. Instead, it states that "Verification and validation including risk mitigations have been executed with results demonstrating Hepatic VCAR met the design inputs and user needs with no unexpected results or risks."

    For the new deep learning algorithms, the performance is described qualitatively:

    Liver Segmentation
    • Acceptance criteria (implied): produces a liver contour that is editable by the user and is capable of segmentation.
    • Reported performance: bench tests show the algorithm performed as expected; demonstrated capability of liver segmentation utilizing the deep learning algorithm.

    Hepatic Artery Segmentation
    • Acceptance criteria (implied): segments the hepatic artery based on an initial user input point, editable by the user, and capable of segmentation.
    • Reported performance: bench tests show the algorithm performed as expected; demonstrated capability of hepatic artery segmentation utilizing the deep learning algorithm.

    Overall Software Performance
    • Acceptance criteria (implied): meets design inputs and user needs, with no unexpected results or risks.
    • Reported performance: verification and validation met design inputs and user needs with no unexpected results or risks.

    Usability / Clinical Acceptance
    • Acceptance criteria (implied): functionality is clinically acceptable for assisting users in 3D segmentation, visualization, and quantitative analysis.
    • Reported performance: assessed by 3 board-certified radiologists using a 5-point Likert scale, demonstrating capability.

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size: "A representative set of clinical sample images" was used for the clinical assessment. The exact number of cases/images is not specified in the provided text.
    • Data Provenance: The provenance of the data (e.g., country of origin, retrospective or prospective) is not specified.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Number of Experts: For the clinical assessment of the deep learning algorithms, 3 board certified radiologists were used.
    • Qualifications of Experts: They are described as "board certified radiologists." The number of years of experience is not specified.
    • For the "ground truth" used in "bench tests," the text states "ground truth annotated by qualified experts," but the number and specific qualifications of these experts are not explicitly detailed beyond "qualified experts."

    4. Adjudication Method for the Test Set

    The text states that the "representative set of clinical sample images was assessed by 3 board certified radiologists using 5-point Likert scale." It does not specify an explicit adjudication method (e.g., 2+1, 3+1 consensus) for establishing the "ground truth" or assessing the device's performance based on the radiologists' Likert scale ratings. The Likert scale assessment sounds more like an evaluation of clinical acceptability/usability rather than establishing ground truth for quantitative segmentation accuracy.
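
    As an illustration only, Likert-based acceptability assessments of this kind are typically summarized against a pre-specified threshold, as below. The ratings, threshold, and aggregation rule here are all invented; the submission reports none of them.

```python
import numpy as np

# Hypothetical 5-point Likert ratings: 3 radiologists x 10 cases.
ratings = np.array([
    [4, 5, 4, 3, 5, 4, 4, 5, 3, 4],
    [5, 4, 4, 4, 5, 3, 4, 4, 4, 5],
    [4, 4, 5, 4, 4, 4, 5, 4, 3, 4],
])

per_case_mean = ratings.mean(axis=0)          # average across readers
acceptable = (per_case_mean >= 3.0).mean()    # fraction of cases rated >= 3
print(f"mean rating {ratings.mean():.2f}; "
      f"{acceptable:.0%} of cases at or above the hypothetical threshold of 3")
```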

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

    A formal MRMC comparative effectiveness study comparing human readers with AI assistance vs. without AI assistance is not explicitly described in the provided text. The clinical assessment mentioned ("assessed by 3 board certified radiologists using 5-point Likert scale") appears to be an evaluation of the device's capability rather than a direct comparison of human performance with and without AI assistance. Therefore, no effect size for human reader improvement is provided.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    Yes, standalone performance was assessed for the algorithms. The text states:

    • "Bench tests that compare the output of the two new algorithms with ground truth annotated by qualified experts show that the algorithms performed as expected."
      This indicates an evaluation of the algorithm's direct output against an established ground truth before human interaction/adjustment.

    7. The Type of Ground Truth Used

    Based on the document:

    • For the "bench tests" of the new deep learning algorithms, the ground truth was "ground truth annotated by qualified experts." This suggests expert consensus or expert annotation was used.
    • For the clinical assessment by 3 radiologists using a Likert scale, it's more of a qualitative assessment of the device's capability rather than establishing a definitive ground truth for each case.

    8. The Sample Size for the Training Set

    The sample size for the training set (used to train the deep learning algorithms) is not specified in the provided text.

    9. How the Ground Truth for the Training Set Was Established

    The text states that the deep learning algorithms were trained, but it does not explicitly describe how the ground truth for the training set was established. It only mentions that the ground truth for bench tests was "annotated by qualified experts."


    K Number
    K193261
    Date Cleared
    2020-01-24

    (59 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices :

    K110834, K081985, K041521, K092639

    Intended Use

    FlightPlan for Embolization is a post processing software package that helps the analysis of 3D X-ray angiography images. Its output is intended to be used by physicians as an adjunct means to help visualize vasculature during the planning phase of embolization procedures. FlightPlan for Embolization is not intended to be used during therapy delivery.

    The output includes segmented vasculature, and selective display of proximal vessel and distal vessels from a reference point determined by the user. User-defined data from the 3D X-ray angiography images may be exported for use during the guidance phase of the procedure. The injection points should be confirmed independently of FlightPlan for Embolization prior to therapy delivery.

    Device Description

    FlightPlan for Embolization is a post processing software application which operates on the Advantage Workstation (AW) [K110834] platform and AW Server [K081985] platform. It is an extension to the Volume Viewer application [K041521], modified from FlightPlan for Liver (K121200), and is designed for processing 3D X-ray angiography images to help visualize vasculature.

    The primary features of the software are: semi-automatic segmentation of vascular tree from a starting point determined by the user; selective display (Live Tracking) of proximal vessel and distal vessels from a point determined by the user's cursor; ability to segment part of the vasculature; ability to mark points of interest (POI) to store cursor position; save results and export to other applications such as Vision Applications [K092639] for 3D road-mapping.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) summary for FlightPlan for Embolization:

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided document does not explicitly list quantitative acceptance criteria in a dedicated section for "acceptance criteria." However, it describes validation activities that implicitly define the performance considered acceptable. Based on the "Summary of Non-Clinical Tests" and "Summary of Clinical Tests," the following can be inferred:

    Non-Clinical:

    • Criterion: capability to automatically segment and selectively display vascular structures from a single user-defined point.
      Performance: "FlightPlan for Embolization algorithms' capability to automatically segment and selectively display vascular structures from a single user defined point using a database of XACT exams... established satisfactory quality for FlightPlan for Embolization usage."

    • Criterion: compliance with the NEMA PS 3.1 - 3.20 (2016) Digital Imaging and Communications in Medicine (DICOM) Set (Radiology) standard.
      Performance: "The FlightPlan for Embolization complies with NEMA PS 3.1 - 3.20 (2016) Digital Imaging and Communications in Medicine (DICOM) Set (Radiology) standard."

    • Criterion: adherence to design control testing per GE's quality system (21 CFR 820 and ISO 13485).
      Performance: "FlightPlan for Embolization has successfully completed the required design control testing per GE's quality system. FlightPlan for Embolization was designed and will be manufactured under the Quality System Regulations of 21 CFR 820 and ISO 13485." The cited activities include Risk Analysis, Requirements Reviews, Design Reviews, Performance testing (Verification, Validation), and Safety testing (Verification).

    Clinical:

    • Criterion: ability of the device to help physicians in the analysis of 3D X-ray angiography images and in the planning of embolization procedures, including the selection of embolization injection points.
      Performance: "The assessment demonstrated that the proposed device (FlightPlan for Embolization) helps physicians in the analysis of 3D X-ray angiography images and in the planning of embolization procedures, including the selection of embolization injection points."

    2. Sample Size Used for the Test Set and Data Provenance

    • Non-Clinical Test Set:

      • Sample Size: A "database of XACT exams." The specific number of cases is not provided.
      • Data Provenance: Not explicitly stated, but clinical scenarios are considered "representative" and include "consideration of acquisition parameters, image quality and anatomy." It can be inferred that these are existing clinical data, likely retrospective.
    • Clinical Test Set:

      • Sample Size: "A sample of 3D X-ray angiography images representative of clinical practice." The specific number of cases is not provided.
      • Data Provenance: Not explicitly stated, but described as "representative of clinical practice" and "most common anatomic regions where embolization procedures are performed." It can be inferred that these are retrospective clinical cases.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Clinical Test Set:
      • Number of Experts: Four
      • Qualifications: "board certified interventional radiologists." No information on years of experience is provided.

    4. Adjudication Method for the Test Set

    • Clinical Test Set: The document states that the assessment was done "using a 5-point Likert scale." This implies an individual scoring system by each radiologist, but it does not specify an adjudication method (e.g., 2+1, 3+1 consensus). It sounds like individual assessments were performed and then aggregated or analyzed.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, and Effect Size

    • No, an MRMC comparative effectiveness study was not explicitly stated to have been done in the traditional sense of comparing human readers with AI vs. without AI assistance.
      • The clinical study described is an assessment of the device's helpfulness by radiologists, rather than a comparative study measuring improvement in diagnostic accuracy or efficiency for humans using the AI vs. not using it. The statement "demonstrated that the proposed device (FlightPlan for Embolization) helps physicians" is an outcome of an assessment, not a quantitative effect size from an MRMC study comparing assisted vs. unassisted performance.

    6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) was done

    • Yes, a form of standalone performance was assessed in the "Summary of Non-Clinical Tests." The "Engineering...validated FlightPlan for Embolization algorithms' capability to automatically segment and selectively display vascular structures..." using a database of XACT exams. This implies an evaluation of the algorithm's output against some reference (likely manual segmentations or ground truth established by experts) without direct human interaction at the time of assessment.

    7. The Type of Ground Truth Used

    • Non-Clinical Test Set: The ground truth for the non-clinical validation of segmentation and display capabilities is implicitly based on expert-defined correct segmentations or "satisfactory quality" as determined by engineering validation. The document does not explicitly state "expert consensus" or "pathology" for this.
    • Clinical Test Set: The ground truth for the clinical assessment relies on the judgment of "four board certified interventional radiologists" using a 5-point Likert scale to determine if the device "helps physicians." This is a form of expert assessment/opinion as ground truth regarding the device's utility/helpfulness.

    8. The Sample Size for the Training Set

    • Not provided. The document does not disclose information about the training data used for the FlightPlan for Embolization algorithms.

    9. How the Ground Truth for the Training Set was Established

    • Not provided. Since the training set details are not mentioned, how its ground truth was established is also not available in this document.

    K Number
    K172999
    Device Name
    uWS-MR
    Date Cleared
    2018-11-07

    (406 days)

    Product Code
    Regulation Number
    892.2050
    Intended Use

    uWS-MR is a software solution intended to be used for viewing, manipulation, and storage of medical images. It supports interpretation and evaluations within healthcare institutions. It has the following additional indications:

    The MR Stitching is intended to create full-format images from overlapping MR volume data sets acquired at multiple stages.

    The Dynamic application is intended to provide a general post-processing tool for time course studies.

    The Image Fusion application is intended to combine two different image series so that the displayed anatomical structures match in both series.

    MRS (MR Spectroscopy) is intended to evaluate the molecule constitution and spatial distribution of cell metabolism. It provides a set of tools to view, process, and analyze the complex MRS data. This application supports the analysis for both SVS (Single Voxel Spectroscopy) and CSI (Chemical Shift Imaging) data.

    The MAPs application is intended to provide a number of arithmetic and statistical functions for evaluating dynamic processes and images. These functions are applied to the grayscale values of medical images.

    The MR Breast Evaluation application provides the user a tool to calculate parameter maps from contrast-enhanced timecourse images.

    The Brain Perfusion application is intended to allow the visualizations in the dynamic susceptibility time series of MR datasets.

    MR Vessel Analysis is intended to provide a tool for viewing, manipulating MR vascular images. The Inner view application is intended to perform a virtual camera view through hollow structures (cavities), such as vessels.

    Device Description

    uWS-MR is a comprehensive software solution designed to process, review, and analyze MR (Magnetic Resonance Imaging) studies. It can transfer images in DICOM 3.0 format over a medical imaging network or import images from external storage devices such as CD/DVDs or flash drives. These images can be functional data as well as anatomical datasets, acquired at one or more time-points or comprising one or more time-frames. Multiple display formats, including MIP and volume rendering, and multiple statistical analyses, including mean, maximum, and minimum over a user-defined region, are supported. A trained, licensed physician can interpret these displayed images as well as the statistics as per standard practice.
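
    The workflow described maps onto routine DICOM post-processing. Below is a minimal sketch with pydicom and NumPy of the ROI statistics (mean, maximum, minimum) and MIP display mentioned above; the file path and ROI coordinates are placeholders, and this is generic code, not United Imaging's implementation.

```python
import numpy as np
import pydicom

# Load one DICOM 3.0 image (path is a placeholder).
ds = pydicom.dcmread("slice_001.dcm")
img = ds.pixel_array.astype(np.float32)

# Statistics over a user-defined rectangular region of interest.
roi = img[100:150, 120:180]   # placeholder ROI coordinates
print(f"ROI mean={roi.mean():.1f}, max={roi.max():.0f}, min={roi.min():.0f}")

# Maximum intensity projection across a stack of slices (toy example).
volume = np.stack([img] * 5)  # stand-in for a real multi-slice series
mip = volume.max(axis=0)      # project along the slice axis
```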

    AI/ML Overview

    The provided document is a 510(k) premarket notification for the uWS-MR software. It focuses on demonstrating substantial equivalence to predicate devices, rather than presenting a standalone study with detailed acceptance criteria and performance metrics for a novel AI-powered diagnostic device.

    Therefore, much of the requested information regarding specific acceptance criteria, detailed study design for proving the device meets those criteria, sample sizes for test and training sets, expert qualifications, and ground truth establishment cannot be fully extracted from this document.

    Here's what can be inferred and stated based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly list quantitative acceptance criteria in a pass/fail format nor provide specific, measurable performance metrics for the proposed device (uWS-MR) in comparison to such criteria. Instead, it relies on demonstrating substantial equivalence to predicate devices by comparing their features and functionalities. The "Remark" column consistently states "Same," indicating that the proposed device's features align with its predicates, implying it meets comparable performance.

    (The proposed-device and predicate-device entries are identical in every row of the original comparison table, so they are shown in a single column below.)

    Feature | uWS-MR (proposed) = syngo.via / reference devices (predicate) | Remark
    ------- | -------------------------------------------------------------- | ------
    General
    Device Classification Name | Picture Archiving and Communications System | Same (acceptable)
    Product Code | LLZ | Same (acceptable)
    Regulation Number | 21 CFR 892.2050 | Same (acceptable)
    Device Class | II | Same (acceptable)
    Classification Panel | Radiology | Same (acceptable)
    Specification
    Image communication | Standard network protocols like TCP/IP and DICOM. Additional fast image. | Same (acceptable)
    Hardware / OS | Windows 7 | Same (acceptable)
    Patient Administration | Display and manage image data information of all patients stored in the database. | Same (acceptable)
    Review 2D | Basic processing of 2D images (rotation, scaling, translation, windowing, measurements). | Same (acceptable)
    Review 3D | Functionalities for displaying and processing images in 3D form (VR, CPR, MPR, MIP, VOI analysis). | Same (acceptable)
    Filming | Dedicated to image printing and layout editing for single images and series. | Same (acceptable)
    Fusion | Auto registration, manual registration, spot registration. | Same (acceptable)
    Inner View | Inner view of vessel, colon, trachea. | Same (acceptable)
    Visibility | User-defined display property of fused image: adjustment of preset of T/B value; adjustment of the fused. | Same (acceptable)
    ROI/VOI | Plotting ROI/VOI; obtaining max/min/mean activity value, volume/area, max diameter, peak activity value. | Same (acceptable)
    MIP Display | Image can be displayed as MIP with rotating play. | Same (acceptable)
    Compare | Load two studies to compare. | Same (acceptable)
    Advanced Applications (Examples)
    MR Brain Perfusion: type of imaging scans | MRI | Same (acceptable)
    MR Breast Evaluation: automatic subtraction | Yes | Same (acceptable)
    MR Stitching: automatic stitching | Yes | Same (acceptable)
    MR Vessel Analysis: automatic blood vessel centerline extraction | Yes | Same (acceptable)
    MRS: single-voxel spectrum data analysis | Yes | Same (acceptable)
    MR Dynamic/MAPS: ADC and eADC calculation | Yes | Same (acceptable)
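
    For the ADC/eADC row: with a baseline signal S0 (b = 0) and a diffusion-weighted signal Sb at b-value b, the standard definitions are ADC = -ln(Sb/S0)/b and eADC = Sb/S0 = exp(-b·ADC). A minimal sketch under those textbook definitions (the submission does not describe either vendor's implementation):

```python
import numpy as np

def adc_map(s0, sb, b=1000.0, eps=1e-6):
    """Apparent diffusion coefficient from two b-values (0 and b):
    ADC = -ln(Sb/S0)/b; eADC = Sb/S0 = exp(-b*ADC). b is in s/mm^2."""
    ratio = np.clip(sb / np.maximum(s0, eps), eps, None)
    adc = -np.log(ratio) / b
    eadc = ratio
    return adc, eadc

s0 = np.full((4, 4), 1000.0)    # toy b=0 image
sb = np.full((4, 4), 400.0)     # toy b=1000 image
adc, eadc = adc_map(s0, sb)
print(adc[0, 0], eadc[0, 0])    # ~9.16e-4 mm^2/s, 0.4
```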

    2. Sample Size Used for the Test Set and the Data Provenance

    The document states: "Software verification and validation testing was provided to demonstrate safety and efficacy of the proposed device." and lists "Performance Verification" for various applications. However, it does not specify the sample size used for these performance tests (test set) or the data provenance (e.g., country of origin, retrospective/prospective nature).

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts

    This information is not provided in the document. The filing focuses on technical and functional equivalence, not on clinical performance evaluated against expert ground truth.

    4. Adjudication Method

    This information is not provided in the document.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    A multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly mentioned as being performed or required. The submission is a 510(k) for substantial equivalence, which often relies more on technical verification and comparison to predicate devices rather than new clinical effectiveness studies.

    6. Standalone Performance

    The document clearly states: "Not Applicable to the proposed device, because the device is stand-alone software." This implies that the device is intended to perform its functions (viewing, manipulation, post-processing) as a standalone software, and its performance was evaluated in this standalone context during verification and validation, aligning with the predicates which are also software solutions. However, no specific standalone performance metrics are provided.

    7. Type of Ground Truth Used

    The document does not explicitly state the "type of ground truth" used for performance verification. Given the nature of the device (image post-processing software) and the 510(k) pathway, performance verification likely involved:

    • Technical validation: Comparing outputs of uWS-MR's features (e.g., image stitching, parameter maps, ROI measurements) against known good results, simulated data, or outputs from the predicate devices.
    • Functional testing: Ensuring features operate as intended (e.g., if a rotation function rotates the image correctly).
      Pathology or outcomes data are typically used for diagnostic devices with novel clinical claims, which is not the primary focus here.

    8. Sample Size for the Training Set

    The concept of a "training set" typically applies to machine learning or AI models that learn from data. While uWS-MR is post-processing software that could in principle incorporate AI elements (nothing beyond general "post-processing" is stated), the document does not mention a training set size. This strongly suggests that a machine learning or deep learning model with a distinct training phase, as commonly understood, was not a primary component evaluated in this filing, or that its training data details were not part of this summary.

    9. How the Ground Truth for the Training Set Was Established

    As no training set is mentioned (see point 8), the establishment of its ground truth is not applicable and therefore not described in the document.


    K Number
    K173001
    Device Name
    uWS-CT
    Date Cleared
    2018-11-07

    (406 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices :

    K033326, K162025, K081985, K023785

    Intended Use

    uWS-CT is a software solution intended to be used for viewing, manipulation, and storage of medical images. It supports interpretation and evaluation of examinations within healthcare institutions. It has the following additional indications:

    The CT Oncology application is intended to support fast-tracking routine diagnostic oncology, staging, and follow-up, by providing a tool for the user to perform the segmentation of suspicious lesions in lung or liver. The CT Colon Analysis application is intended to provide the user a tool to enable easy visualization and efficient evaluation of CT volume data sets of the colon.

    The CT Dental application is intended to provide the user a tool to reconstruct panoramic and paraxial views of jaw. The CT Lung Density application is intended to provide the user a number of density parameters and structure information for evaluating tomogram scans of the lung.

    The CT Lung Nodule application is intended to provide the user a tool for the review and analysis of thoracic CT images, providing quantitative and characterizing information about nodules in the lung in a single study, or over the time course of several thoracic studies.

    The CT Vessel Analysis application is intended to provide a tool for viewing, manipulating CT vascular images.

    The Inner view application is intended to perform a virtual camera view through hollow structures (cavities), such as vessels.

    Device Description

    uWS-CT is a comprehensive software solution designed to process, review, and analyze CT studies. It can transfer images in DICOM 3.0 format over a medical imaging network or import images from external storage devices such as CD/DVDs or flash drives. These images can be functional data as well as anatomical datasets, acquired at one or more time-points or comprising one or more time-frames. Multiple display formats, including MIP and volume rendering, and multiple statistical analyses, including mean, maximum, and minimum over a user-defined region, are supported. A trained, licensed physician can interpret these displayed images as well as the statistics as per standard practice.

    AI/ML Overview

    The provided document is a 510(k) Premarket Notification from Shanghai United Imaging Healthcare Co., Ltd. for their device uWS-CT. This document outlines the device's indications for use, technological characteristics, and comparison to predicate devices, but it does not contain a detailed study demonstrating that the device meets specific acceptance criteria based on human-in-the-loop or standalone performance.

    Instead, the document primarily focuses on demonstrating substantial equivalence to predicate devices based on similar functionality and intended use, supported by software verification and validation testing, hazard analysis, and performance evaluations for various CT applications. It explicitly states that "No clinical study was required" and "No animal study was required" for this submission.

    Therefore, I cannot provide the detailed information requested in the prompt's format (acceptance criteria table, sample size, expert ground truth, MRMC study, etc.) because these types of studies were not conducted or reported in this 510(k) submission.

    The "Performance Data" section (Page 11) lists "Performance Evaluation Report For CT Lung Nodule," "Performance Evaluation Report For CT Oncology," etc., but these are internal reports that are not detailed in this public document. They likely refer to internal testing that verifies the software's functions perform as designed, rather than robust clinical performance studies against specific quantitative acceptance criteria with human readers or well-defined ground truth beyond internal validation.

    What is present in the document regarding "performance" is:

    • Software Verification and Validation: This typically involves testing that the software functions as designed, is free of bugs, and meets its specified requirements. The document mentions "hazard analysis," "software requirements specification (SRS)," "software architecture description," "software development environment description," "software verification and validation," and "cyber security documents."
    • Performance Evaluation Reports for specific applications: These are listed but not detailed (e.g., CT Lung Nodule, CT Oncology). It's implied these show the software functions correctly for those applications.

    In summary, based on the provided text, there is no information about:

    • A table of acceptance criteria with reported device performance in the context of clinical accuracy or diagnostic performance.
    • Sample sizes used for a test set in a clinical performance study.
    • Data provenance for a clinical test set.
    • Number of experts or their qualifications for establishing clinical ground truth.
    • Adjudication methods for a clinical test set.
    • Multi-Reader Multi-Case (MRMC) comparative effectiveness studies.
    • Standalone (algorithm-only) performance studies against clinical ground truth.
    • Type of clinical ground truth used (pathology, outcomes data, expert consensus from an external panel).
    • Sample size for a training set (as no AI/ML model requiring a training set is explicitly discussed in terms of its performance data; the device is described as "CT Image Post-Processing Software" with various applications.)
    • How ground truth for a training set was established.

    The closest the document comes to "acceptance criteria" and "performance" are discussions of functional equivalence to predicate devices and general software validation, stating that the proposed device performs in a "similar manner" and has a "safety and effectiveness profile that is similar to the predicate device."


    K Number
    K163281
    Device Name
    FastStroke
    Date Cleared
    2017-01-26

    (66 days)

    Product Code
    Regulation Number
    892.1750
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices :

    K110834, K081985

    Intended Use

    FastStroke is a CT image analysis software package that assists in the analysis and visualization of CT data derived from DICOM 3.0 compliant CT scans. FastStroke is intended for the purpose of displaying vasculature of the head and neck at different time points of enhancement.
    The software will assist the user by providing optimized display settings to enable fast review of the images in synchronized formats, aligning the display of the images to the order of the scans and linking together multiple groups of scans. In addition, the software fuses the vascular information from different time points into a single colorized view. This multiphase information can aid the physician in visualizing the presence or absence of collateral vessels in the brain. Collateral vessel information may aid the physician in the evaluation of stroke patients.

    Device Description

    FastStroke is a CT image analysis software package that assists in the analysis and visualization of CT data derived from DICOM 3.0 compliant CT scans. FastStroke is intended for the purpose of displaying vasculature of the head and neck at different time points of enhancement.
    The software will assist the user by providing optimized display settings to enable fast review of the images in synchronized formats, aligning the display of the images to the order of the scans and linking together multiple groups of scans. In addition, the software fuses the vascular information from different time points into a single colorized view. This multiphase information can aid the physician in visualizing the presence or absence of collateral vessels in the brain. Collateral vessel information may aid the physician in the evaluation of stroke patients.
    The FastStroke device has been tested with DICOM images from Discovery CT750 HD and Revolution CT scanners using multi-phase CT angiography. FastStroke is based on DICOM image processing and would apply to any CT device able to acquire equivalent multi-phase CT angiography data (pursuant to the timing protocols in the user guide).
    FastStroke is also made available as a standalone post processing application on the AW VolumeShare workstation (K110834) and AW Server platform (K081985) that host advanced image processing applications.
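
    The colorized multiphase fusion described above can be illustrated generically by assigning each enhancement phase to a color channel. This is a toy sketch, not GE's rendering pipeline; the phase data and normalization are invented.

```python
import numpy as np

def fuse_phases(early, mid, late):
    """Fuse three time points of vascular enhancement into one RGB view:
    early phase -> red, mid -> green, late -> blue. Inputs are 2D arrays."""
    def norm(img):
        img = img.astype(np.float32)
        span = img.max() - img.min()
        return (img - img.min()) / span if span > 0 else np.zeros_like(img)
    return np.stack([norm(early), norm(mid), norm(late)], axis=-1)

# Toy phases standing in for time-resolved CT angiography slices:
gen = np.random.default_rng(1)
rgb = fuse_phases(*(gen.random((64, 64)) for _ in range(3)))
assert rgb.shape == (64, 64, 3)
```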

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the FastStroke device based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided text does not explicitly state quantitative acceptance criteria or direct performance metrics for the FastStroke device in a table format. Instead, it describes the purpose of the study as assessing increased diagnostic capability using Likert scales and concludes that "FastStroke aids the physician in visualizing the presence or absence of collateral vessels in the brain and is a useful tool for neuroradiologists in providing a comprehensive stroke work-up."

    • Criterion (implied): aid physicians in visualizing the presence or absence of collateral vessels in the brain.
      Performance: the study results show that FastStroke aids the physician in visualizing the presence or absence of collateral vessels in the brain.

    • Criterion (implied): be a useful tool for neuroradiologists in providing a comprehensive stroke work-up.
      Performance: the study results show that FastStroke is a useful tool for neuroradiologists in providing a comprehensive stroke work-up.

    • Criterion (implied): compliance with the NEMA PS 3.1 - 3.20 (2016) DICOM Set (Radiology) standard.
      Performance: the FastStroke software complies with the NEMA PS 3.1 - 3.20 (2016) Digital Imaging and Communications in Medicine (DICOM) Set (Radiology) standard.

    • Criterion (implied): employ the same fundamental scientific technology as the predicate device.
      Performance: the FastStroke software employs the same fundamental scientific technology as its predicate device.

    • Criterion (implied): use equivalent CT DICOM image data input requirements as the predicate device.
      Performance: FastStroke software uses the equivalent CT DICOM image data input requirements.

    • Criterion (implied): have equivalent display, formatting, archiving, and visualization technologies compared to the predicate device.
      Performance: it has equivalent display, formatting, archiving, and visualization technologies compared to the predicate device.

    • Criterion (implied): utilize thresholding and fusion similar to the predicate device.
      Performance: FastStroke utilizes thresholding and fusion (implied to be similar to the predicate).

    • Criterion (implied): performance substantially equivalent to the predicate device.
      Performance: GE Healthcare considers the FastStroke software application to be as safe, as effective, and substantially equivalent in performance to the predicate device.

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: The sample size for the clinical evaluation (test set) is not explicitly stated in the provided document. It only mentions "a retrospective clinical evaluation was conducted."
    • Data Provenance: The study was a retrospective clinical evaluation. The country of origin of the data is not specified.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications

    • Number of Experts: Three experts were used.
    • Qualifications of Experts: They were described as "board certified neuroradiologists who were considered experts."

    4. Adjudication Method for the Test Set

    The document does not explicitly state the adjudication method used for the clinical evaluation. It only mentions that the primary endpoint was assessed by three experts using multiple 5-point Likert Scales.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size

    • A Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not explicitly stated to be performed to measure improvement with AI vs. without AI assistance. The study described was a clinical evaluation by neuroradiologists assessing the device's aid in visualization. While it involved multiple readers, it wasn't framed as a direct comparison of human readers with and without AI assistance to quantify an "effect size" in reader performance.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    • A standalone performance evaluation of the algorithm without human-in-the-loop was not explicitly described or quantified in the provided text as part of the clinical study. The device is described as "assisting the user" and "aiding the physician," indicating a human-in-the-loop design.

    7. Type of Ground Truth Used

    • The ground truth for the clinical evaluation study was established through expert consensus/interpretation by the three board-certified neuroradiologists. The study assessed the device's ability to "aid the physician in visualizing the presence or absence of collateral vessels," implying that the experts' assessment of visualization aided by the device served as the "truth" for the study's purpose.

    8. Sample Size for the Training Set

    • The document does not provide any information about the sample size used for the training set of the FastStroke software.

    9. How the Ground Truth for the Training Set Was Established

    • The document does not provide any information on how the ground truth for the training set (if any) was established. It primarily focuses on the clinical evaluation used for substantial equivalence.
