Search Results

Found 6 results

510(k) Data Aggregation

    K Number
    K202808
    Device Name
    Brainance MD
    Date Cleared
    2021-10-14

    (386 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    Why did this record match?
    Reference Devices :

    K090546, K162513

    Intended Use

    Brainance MD provides analysis and visualization capabilities of dynamic MRI data of the brain, presenting the derived properties and parameters in a clinically useful context.

    Device Description

    Brainance MD is a web-accessible medical viewing and post-processing software application. It offers comprehensive functionality for dynamic image analysis and visualization of brain MRI data acquired through DICOM-compliant imaging devices and modalities. The following algorithms provide the main functional analyses of the software application:

    • BOLD: BOLD fMRI analysis is used to highlight small magnetic susceptibility changes in the human brain in areas with altered blood flow resulting from neuronal activity.
    • DTI: Diffusion analysis is used to visualize local water diffusion properties from the analysis of diffusion-weighted MRI data. Fiber tracking utilizes the directional dependency of the diffusion to display the white matter structure in the brain.
    • DSC Perfusion: Calculation of perfusion-related parameters that provide information about blood vessel structure and characteristics. Examples of such maps are blood flow, time to peak, mean transit time, and leakage.

    Apart from the aforementioned functionalities, Brainance MD offers general visualization tools, data upload, data download, and a reporting feature.
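Two of the DSC perfusion maps named above, time to peak and mean transit time, reduce to simple curve summaries. A minimal illustrative sketch, not the device's algorithm: the concentration-time values are invented, and MTT is approximated here by the first moment of the curve (one common textbook estimate).

```python
# Illustrative sketch (not Brainance MD code): deriving time-to-peak (TTP)
# and a first-moment estimate of mean transit time (MTT) from a synthetic
# concentration-time curve, as produced in DSC perfusion analysis.

def time_to_peak(times, conc):
    """Time at which the concentration-time curve reaches its maximum."""
    peak_index = max(range(len(conc)), key=lambda i: conc[i])
    return times[peak_index]

def mean_transit_time(times, conc):
    """First-moment estimate: MTT ~ sum(t * C(t)) / sum(C(t))."""
    num = sum(t * c for t, c in zip(times, conc))
    den = sum(conc)
    return num / den

times = [0, 1, 2, 3, 4, 5, 6, 7, 8]       # seconds (synthetic)
conc  = [0, 1, 4, 9, 6, 3, 2, 1, 0.5]     # arbitrary units (synthetic)

print(time_to_peak(times, conc))                 # 3
print(round(mean_transit_time(times, conc), 2))  # ~3.7
```

Real implementations fit a gamma-variate model and deconvolve with an arterial input function; this sketch only shows the shape of the computation.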

    AI/ML Overview

    The provided text describes a comparative study conducted to establish the substantial equivalence of the Brainance MD device to a primary predicate device (nordicBrainEx) concerning the processing of DSC Perfusion, fMRI, and DTI sequences.

    Here's a breakdown of the requested information based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    | Acceptance Criteria (Set by Manufacturer) | Reported Device Performance |
    | --- | --- |
    | Equivalence of results between Brainance MD and the primary predicate device for DSC Perfusion, fMRI, and DTI sequences, demonstrated by: (1) ICC and Bland-Altman analysis on all valid pixel values (for processed maps); (2) Mean Relative Difference (MRD) as a percentage across each tract (for fiber tracts) | "The final results matched the criteria of acceptance/approval priorly set by the manufacturer and thus equivalent the two devices on a result level was proven." |
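The statistical comparisons named in the acceptance criteria can be illustrated numerically. A hedged sketch with invented paired values (the 510(k) summary does not disclose the actual pixel or tract data): Bland-Altman bias with 95% limits of agreement, and Mean Relative Difference as a percentage.

```python
# Sketch of the comparison statistics named in the acceptance criteria.
# The paired "device" and "predicate" values below are synthetic.
import statistics

def bland_altman(a, b):
    """Return (bias, lower 95% limit, upper 95% limit) for paired values."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

def mean_relative_difference_pct(a, b):
    """Mean of |a - b| / mean(a, b) over all pairs, as a percentage."""
    rel = [abs(x - y) / ((x + y) / 2) for x, y in zip(a, b)]
    return 100 * statistics.mean(rel)

device    = [10.1, 12.3, 9.8, 11.5]   # synthetic Brainance MD outputs
predicate = [10.0, 12.0, 10.0, 11.4]  # synthetic predicate outputs

bias, lo, hi = bland_altman(device, predicate)
print(round(bias, 3), round(mean_relative_difference_pct(device, predicate), 2))
```

ICC (intraclass correlation) would additionally quantify absolute agreement across the paired values; its exact form (e.g., ICC(2,1)) is not stated in the summary.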

    2. Sample Size for the Test Set and Data Provenance

    • Sample Size: "two sequences that were part of two different exams/subjects (either healthy control or diseased) were selected for the comparison conducted for each modality." This implies a total of 6 sequences (2 sequences * 3 modalities).
    • Data Provenance: The subjects were "all adults and either healthy controls or diseased diagnosed with metastasis or glioblastoma multiforme." The country of origin is not specified but given the submitter is from Greece, it's possible the data originated from there or a European context. The study is retrospective, as existing "exams/subjects" were selected.

    3. Number of Experts Used to Establish Ground Truth and Qualifications

    The text does not provide information on the number of experts used to establish ground truth or their qualifications. The study focused on comparing the results of the two software devices rather than establishing novel ground truth through expert consensus for each case. The "ground truth" in this context is implicitly the results generated by the predicate device, as the goal was to demonstrate equivalence.

    4. Adjudication Method for the Test Set

    The text does not mention any adjudication method like 2+1 or 3+1. The study directly compared software outputs without involving multiple human readers to resolve discrepancies in the outputs. The comparison relied on statistical methods (ICC, Bland-Altman, MRD) between the two software outputs.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No, a multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly done. The study described is a direct comparison between two software devices, and there is no mention of human readers improving with AI assistance.

    6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study

    Yes, a standalone performance study was done for the Brainance MD device in comparison to the predicate device. The study explicitly states: "The two sequences selected for each modality were processed with each one of the software (Brainance MD and primary predicate device) using the exact same processing protocol and parameters for each user defined function." This indicates an algorithm-only comparison without a human-in-the-loop component for the performance evaluation itself.

    7. Type of Ground Truth Used

    The ground truth used was the output of the primary predicate device (nordicBrainEx). The study aimed to prove that Brainance MD produces results equivalent to those of the already cleared predicate device, rather than comparing against a clinical "gold standard" like pathology or long-term outcomes.

    8. Sample Size for the Training Set

    The text does not provide information on the sample size for the training set. The descriptions focus on the performance testing carried out for regulatory submission.

    9. How the Ground Truth for the Training Set Was Established

    The text does not provide information on how the ground truth for the training set was established.

    Ask a Question

    Ask a specific question about this device

    K Number
    K183170
    Device Name
    uWS-CT
    Date Cleared
    2019-06-24

    (220 days)

    Product Code
    Regulation Number
    892.2050
    Predicate For
    Intended Use

    uWS-CT is a software solution intended to be used for viewing, manipulation, communication, and storage of medical images. It supports interpretation and evaluation of examinations within healthcare institutions. It has the following additional indications:

    The CT Oncology application is intended to support fast-tracking routine diagnostic oncology, staging, and follow-up, by providing a tool for the user to perform the segmentation and volumetric evaluation of suspicious lesions in lung or liver.
    The CT Colon Analysis application is intended to provide the user a tool to enable easy visualization and efficient evaluation of CT volume data sets of the colon.
    The CT Dental application is intended to provide the user a tool to reconstruct panoramic and paraxial views of jaw.
    The CT Lung Density Analysis application is intended to segment pulmonary lobes and airways, providing the user quantitative parameters and structure information to evaluate the lung and airway.
    The CT Lung Nodule application is intended to provide the user a tool for the review and analysis of thoracic CT images, providing quantitative and characterizing information about nodules in the lung in a single study, or over the time course of several thoracic studies.
    The CT Vessel Analysis application is intended to provide a tool for viewing, manipulating, and evaluating CT vascular images.
    The Inner view application is intended to perform a virtual camera view through hollow structures (cavities), such as vessels.
    The CT Brain Perfusion application is intended to calculate the parameters such as: CBV, CBF, etc. in order to analyze functional blood flow information about a region of interest (ROI) in brain.
    The CT Heart application is intended to segment heart and extract coronary artery. It also provides analysis of vascular stenosis, plaque and heart function.
    The CT Calcium Scoring application is intended to identify calcifications and calculate the calcium score.
    The CT Dynamic Analysis application is intended to support visualization of the CT datasets over time with the 3D/4D display modes.
    The CT Bone Structure Analysis application is intended to provide visualization and labels for the ribs and spine, and support batch function for intervertebral disk.
    The CT Liver Evaluation application is intended to provide processing and visualization for liver segmentation and vessel extraction. It also provides a tool for the user to perform liver separation and residual liver segments evaluation.
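The CT Calcium Scoring application above "identifies calcifications and calculates the calcium score". The submission does not say which scoring method is used; assuming the conventional Agatston method purely for illustration, the computation looks like this (lesion areas and HU values are invented, and this is not uWS-CT code):

```python
# Hypothetical sketch of Agatston-style calcium scoring, one common way a
# coronary "calcium score" is computed. Not the vendor's implementation.

def agatston_weight(peak_hu):
    """Conventional density weighting for lesions at or above 130 HU."""
    if peak_hu < 130:
        return 0
    if peak_hu < 200:
        return 1
    if peak_hu < 300:
        return 2
    if peak_hu < 400:
        return 3
    return 4

def agatston_score(lesions):
    """Sum of lesion area (mm^2) times density weight over all lesions."""
    return sum(area * agatston_weight(peak) for area, peak in lesions)

# (area_mm2, peak_HU) per detected lesion -- synthetic example
lesions = [(4.0, 150), (2.5, 350), (1.0, 120)]
print(agatston_score(lesions))  # 4.0*1 + 2.5*3 + 1.0*0 = 11.5
```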

    Device Description

    uWS-CT is a comprehensive software solution designed to process, review, and analyze CT studies. It can transfer images in DICOM 3.0 format over a medical imaging network or import images from external storage devices such as CD/DVDs or flash drives. These images can be functional data as well as anatomical datasets, acquired at one or more time-points or comprising one or more time-frames. Multiple display formats, including MIP and volume rendering, and multiple statistical analyses, including mean, maximum, and minimum over a user-defined region, are supported. A trained, licensed physician can interpret these displayed images as well as the statistics as per standard practice.
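Two operations from this description, a maximum intensity projection (MIP) and mean/max/min statistics over a user-defined region, reduce to simple array operations. An illustrative sketch (not vendor code) on a toy volume:

```python
# Sketch of a MIP and ROI statistics on a synthetic 3x3x3 "CT volume".
import numpy as np

volume = np.arange(27, dtype=float).reshape(3, 3, 3)

# MIP along the through-plane axis: each output pixel is the maximum
# intensity encountered along z.
mip = volume.max(axis=0)

# Statistics over a user-defined region, expressed as a boolean mask.
roi = np.zeros_like(volume, dtype=bool)
roi[0, :2, :2] = True
stats = {
    "mean": volume[roi].mean(),
    "max": volume[roi].max(),
    "min": volume[roi].min(),
}
print(mip.shape, stats)
```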

    AI/ML Overview

    The provided document, a 510(k) summary for the uWS-CT software, does not contain detailed information about specific acceptance criteria and the results of a study proving the device meets these criteria in the way typically required for AI/ML-driven diagnostics.

    The document primarily focuses on demonstrating substantial equivalence to a predicate device (uWS-CT K173001) and several reference devices for its various CT analysis applications. It lists the functions of the new and modified applications (e.g., CT Lung Density Analysis, CT Brain Perfusion, CT Heart, CT Calcium Scoring, CT Dynamic Analysis, CT Bone Structure Analysis, CT Liver Evaluation) and compares them to those of the predicate and reference devices, indicating that their functionalities are "Same."

    While the document states that "Performance data were provided in support of the substantial equivalence determination" and lists "Performance Evaluation Report" for various CT applications, it does not provide the specifics of these performance evaluations, such as:

    • Acceptance Criteria: What specific numerical thresholds (e.g., accuracy, sensitivity, specificity, Dice score for segmentation) were set for each function?
    • Reported Device Performance: What were the actual measured performance values?
    • Study Design Details: Sample size, data provenance, ground truth establishment methods, expert qualifications, adjudication methods, or results of MRMC studies.

    The document explicitly states:

    • "No clinical study was required." (Page 16)
    • Software Verification and Validation was provided, including hazard analysis, SRS, architecture description, environment description, and cyber security documents. However, these are general software development lifecycle activities and not clinical performance studies.

    Therefore, based solely on the provided text, I cannot fill out the requested table or provide the detailed study information. The document suggests that the performance verification was focused on demonstrating functional equivalence rather than presenting quantitative performance metrics against pre-defined acceptance criteria in a clinical context.

    Summary of what can be extracted and what is missing:

    1. Table of Acceptance Criteria and Reported Device Performance: This information is not provided in the document. The document states "Performance Evaluation Report" for various applications were submitted, but the content of these reports (i.e., the specific acceptance criteria and the results proving they were met) is not included in this 510(k) summary.

    2. Sample size used for the test set and the data provenance: This information is not provided. The document states "No clinical study was required." The performance evaluations mentioned are likely internal verification and validation tests whose specifics are not detailed here.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts: This information is not provided. Given that "No clinical study was required," it's unlikely a formal multi-expert ground truth establishment process for a clinical test set, as typically done for AI/ML diagnostic devices, was undertaken for this submission in a publicly available manner.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set: This information is not provided.

    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance: This information is not provided. The document explicitly states "No clinical study was required."

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done: The document states this is a "software solution intended to be used for viewing, manipulation, and storage of medical images" that "supports interpretation and evaluation of examinations within healthcare institutions." The listed applications provide "a tool for the user to perform..." or "a tool for the review and analysis..." which implies human-in-the-loop use. Standalone performance metrics are not provided.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc): This information is not provided.

    8. The sample size for the training set: This information is not provided. The document is a 510(k) summary for a software device, not a detailed technical report on an AI/ML model's development.

    9. How the ground truth for the training set was established: This information is not provided.

    In conclusion, the supplied document is a regulatory submission summary focused on demonstrating substantial equivalence based on intended use and technological characteristics, rather than a detailed technical report of performance studies for an AI/ML device with specific acceptance criteria and proven results. For this type of information, one would typically need access to the full 510(k) submission, which is not publicly available in this format, or a peer-reviewed publication based on the device's clinical performance.

    Ask a Question

    Ask a specific question about this device

    K Number
    K183164
    Device Name
    uWS-MR
    Date Cleared
    2019-03-22

    (127 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    Reference Devices :

    K133401, K090546, K143368, K151353

    Intended Use

    uWS-MR is a software solution intended to be used for viewing, manipulation, communication, and storage of medical images. It supports interpretation and evaluation of examinations within healthcare institutions. It has the following additional indications:

    The MR Stitching is intended to create full-format images from overlapping MR volume data sets acquired at multiple stages.

    The Dynamic application is intended to provide a general post-processing tool for time course studies.

    The Image Fusion application is intended to combine two different image series so that the displayed anatomical structures match in both series.

    MRS (MR Spectroscopy) is intended to evaluate the molecule constitution and spatial distribution of cell metabolism. It provides a set of tools to view, process, and analyze the complex MRS data. This application supports the analysis for both SVS (Single Voxel Spectroscopy) and CSI (Chemical Shift Imaging) data.

    The MAPs application is intended to provide a number of arithmetic and statistical functions for evaluating dynamic processes and images. These functions are applied to the grayscale values of medical images.

    The MR Breast Evaluation application provides the user a tool to calculate parameter maps from contrast-enhanced time-course images.

    The Brain Perfusion application is intended to allow the visualization of temporal variations in the dynamic susceptibility time series of MR datasets.

    MR Vessel Analysis is intended to provide a tool for viewing, manipulating, and evaluating MR vascular images.

    The Inner view application is intended to perform a virtual camera view through hollow structures (cavities), such as vessels.

    The DCE analysis is intended to view, manipulate, and evaluate dynamic contrast-enhanced MRI images.

    The United Neuro is intended to view, manipulate, and evaluate MR neurological images.

    Device Description

    uWS-MR is a comprehensive software solution designed to process, review, and analyze MR (Magnetic Resonance Imaging) studies. It can transfer images in DICOM 3.0 format over a medical imaging network or import images from external storage devices such as CD/DVDs or flash drives. These images can be functional data as well as anatomical datasets, acquired at one or more time-points or comprising one or more time-frames. Multiple display formats, including MIP and volume rendering, and multiple statistical analyses, including mean, maximum, and minimum over a user-defined region, are supported. A trained, licensed physician can interpret these displayed images as well as the statistics as per standard practice.

    This subject device contains the following modifications/improvements in comparison to the predicate device uWS-MR:

    1. Modified Indications for Use Statement;

    2. Added some advanced applications:

    • DCE Analysis
    • United Neuro
    AI/ML Overview

    The provided text from the FDA 510(k) summary (K183164) for uWS-MR describes a software solution primarily for viewing, manipulation, and storage of medical images, with added advanced applications such as DCE Analysis and United Neuro. However, the document does not provide specific acceptance criteria or detailed study data to prove the device meets those criteria in the typical format of a clinical performance study.

    The submission is for a modification to an existing device (uWS-MR, K172999) by adding two new features: DCE Analysis and United Neuro. The "study" described is primarily a performance verification demonstrating that these new features function as intended and are substantially equivalent to previously cleared reference devices with similar functionalities.

    Here's a breakdown of the requested information based on the provided document, noting where specific details are absent:


    Acceptance Criteria and Reported Device Performance

    The document does not explicitly state quantitative acceptance criteria or a table of reported device performance metrics like sensitivity, specificity, or AUC, as would be typical for an AI-driven diagnostic device. Instead, the "performance" is assessed through verification that the new features (DCE Analysis and United Neuro) operate comparably to predicate devices and meet functional and safety standards.

    The comparison tables (Table 2 for DCE Analysis and the subsequent table for United Neuro) serve as a substitute for performance criteria by demonstrating functional equivalence to reference devices.

    Implicit Acceptance Criteria (based on comparison to reference devices): The new functionalities (DCE Analysis and United Neuro) must exhibit the same core operational features (e.g., image loading, viewing, motion correction, parametric maps, ROI analysis, result saving, reporting for DCE; and motion correction, functional activation calculation, diffusion parameter analysis, display adjustment, fusion, fiber tracking, time-intensity curve, ROI statistics, result saving, reporting for United Neuro) as their respective predicate/reference devices.

    Reported Device Performance (Functional Equivalence):
    The document states that the proposed device's functionalities are "Same" or "Yes" compared to the reference devices for all listed operational features, implying that they perform equivalently in terms of available functions.

    | Application | Function Name | Proposed Device (uWS-MR) | Reference Device | Remark (from document) |
    | --- | --- | --- | --- | --- |
    | DCE Analysis | Type of imaging scans | MR | MR | Same |
    | DCE Analysis | Image Loading and Viewing | Yes | Yes | Same |
    | DCE Analysis | Motion Correction | Yes | Yes | Same |
    | DCE Analysis | Series Registration | Yes | Yes | Same |
    | DCE Analysis | Parametric Maps (Ktrans, kep, ve, vp, iAUC, Max Slop, CER) | Yes (for all) | Yes (for most) | Same (for most; "/" for others, but implied functional) |
    | DCE Analysis | ROI Analysis | Yes | Yes | / (implied Same) |
    | DCE Analysis | Result Saving | Yes | Yes | Same |
    | DCE Analysis | Report | Yes | Yes | Same |
    | United Neuro | Type of imaging scan | MR | MR | Same |
    | United Neuro | Motion correction | Yes | Yes | Same |
    | United Neuro | Functional activation calculation | Yes | Yes | Same |
    | United Neuro | Diffusion parameter analysis | Yes | Yes | Same |
    | United Neuro | Adjust display parameter | Yes | Yes | Same |
    | United Neuro | Fusion | Yes | Yes | Same |
    | United Neuro | Fiber tracking | Yes | Yes | Same |
    | United Neuro | Time-Intensity curve | Yes | Yes | Same |
    | United Neuro | ROI Statistics | Yes | Yes | Same |
    | United Neuro | Result Saving | Yes | Yes | Same |
    | United Neuro | Report | Yes | Yes | Same |

    (Note: The document uses "/" in some rows for reference devices, which generally means the feature wasn't applicable or explicitly listed for that device, but the "Remark" column still concludes "Same" or "Yes," implying that the proposed device's functionality is considered equivalent or non-inferior within the context of substantial equivalence.)
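One of the DCE parametric maps listed in the comparison, iAUC, is conceptually the initial area under the contrast-enhancement curve. A hedged sketch, with a synthetic concentration-time curve and an assumed 60-second cutoff (the document does not specify the cutoff or units), using simple trapezoidal integration:

```python
# Sketch of iAUC (initial area under the enhancement curve) for DCE MRI.
# Curve values, cutoff, and units are assumptions for illustration only.

def iauc(times, conc, cutoff):
    """Trapezoidal area under C(t) from t = 0 up to the cutoff time."""
    area = 0.0
    for i in range(1, len(times)):
        if times[i] > cutoff:
            break
        dt = times[i] - times[i - 1]
        area += 0.5 * (conc[i] + conc[i - 1]) * dt
    return area

times = [0, 10, 20, 30, 40, 50, 60, 70]                 # seconds post-injection
conc  = [0.0, 0.2, 0.6, 0.9, 1.0, 0.95, 0.9, 0.85]      # arbitrary units

print(iauc(times, conc, cutoff=60))  # 41.0
```

Quantities such as Ktrans, kep, ve, and vp additionally require fitting a pharmacokinetic model (commonly Tofts-type), which is beyond this sketch.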


    Study Details:

    1. Sample sizes used for the test set and the data provenance:
      The document mentions "Performance Evaluation Report For MR DCE" and "Performance Evaluation Report For MR United neuro" as part of the "Performance Verification." However, it does not specify any sample sizes (number of cases or images) used for these performance evaluations. It also does not specify the data provenance (e.g., country of origin, retrospective or prospective nature of the data).

    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
      The document does not provide any information regarding the number or qualifications of experts used for establishing ground truth, as it's a software verification and validation rather than a clinical reader study.

    3. Adjudication method (e.g. 2+1, 3+1, none) for the test set:
      Since no reader study or expert review for ground truth establishment is described, there's no mention of an adjudication method.

    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance:
      No MRMC comparative effectiveness study was reported or required. The submission is for a software post-processing tool, not an AI-assisted diagnostic tool that directly impacts human reader performance or diagnosis.

    5. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:
      The performance verification for the new features (DCE Analysis and United Neuro) can be considered a form of standalone testing, as it demonstrates the algorithms' functional capabilities and outputs. However, the document doesn't provide quantitative metrics like accuracy, sensitivity, or specificity for these algorithms. It focuses on functional equivalence to existing cleared devices rather than a standalone performance benchmark against a clinical ground truth.

    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
      The document does not specify the type of "ground truth" used for the performance verification of the DCE Analysis and United Neuro features. Given the nature of a 510(k) for a post-processing software with added functionalities, the "ground truth" likely refers to the expected algorithmic output or behavior as defined by engineering specifications and comparison to the outputs of the predicate/reference devices, rather than a clinical ground truth derived from expert consensus, pathology, or outcomes for disease detection.

    7. The sample size for the training set:
      The document does not mention any training set size. This is because the device is described as "MR Image Post-Processing Software" and its functionalities seem to be based on established image processing algorithms rather than a machine learning model that requires a distinct training phase.

    8. How the ground truth for the training set was established:
      As no training set is mentioned for a machine learning model, this information is not applicable/not provided.

    Ask a Question

    Ask a specific question about this device

    K Number
    K172999
    Device Name
    uWS-MR
    Date Cleared
    2018-11-07

    (406 days)

    Product Code
    Regulation Number
    892.2050
    Predicate For
    Intended Use

    uWS-MR is a software solution intended to be used for viewing, manipulation, and storage of medical images. It supports interpretation and evaluations within healthcare institutions. It has the following additional indications:

    The MR Stitching is intended to create full-format images from overlapping MR volume data sets acquired at multiple stages.

    The Dynamic application is intended to provide a general post-processing tool for time course studies.

    The Image Fusion application is intended to combine two different image series so that the displayed anatomical structures match in both series.

    MRS (MR Spectroscopy) is intended to evaluate the molecule constitution and spatial distribution of cell metabolism. It provides a set of tools to view, process, and analyze the complex MRS data. This application supports the analysis for both SVS (Single Voxel Spectroscopy) and CSI (Chemical Shift Imaging) data.

    The MAPs application is intended to provide a number of arithmetic and statistical functions for evaluating dynamic processes and images. These functions are applied to the grayscale values of medical images.

    The MR Breast Evaluation application provides the user a tool to calculate parameter maps from contrast-enhanced timecourse images.

    The Brain Perfusion application is intended to allow the visualization of temporal variations in the dynamic susceptibility time series of MR datasets.

    MR Vessel Analysis is intended to provide a tool for viewing and manipulating MR vascular images.

    The Inner view application is intended to perform a virtual camera view through hollow structures (cavities), such as vessels.

    Device Description

    uWS-MR is a comprehensive software solution designed to process, review, and analyze MR (Magnetic Resonance Imaging) studies. It can transfer images in DICOM 3.0 format over a medical imaging network or import images from external storage devices such as CD/DVDs or flash drives. These images can be functional data as well as anatomical datasets, acquired at one or more time-points or comprising one or more time-frames. Multiple display formats, including MIP and volume rendering, and multiple statistical analyses, including mean, maximum, and minimum over a user-defined region, are supported. A trained, licensed physician can interpret these displayed images as well as the statistics as per standard practice.

    AI/ML Overview

    The provided document is a 510(k) premarket notification for the uWS-MR software. It focuses on demonstrating substantial equivalence to predicate devices, rather than presenting a standalone study with detailed acceptance criteria and performance metrics for a novel AI-powered diagnostic device.

    Therefore, much of the requested information regarding specific acceptance criteria, detailed study design for proving the device meets those criteria, sample sizes for test and training sets, expert qualifications, and ground truth establishment cannot be fully extracted from this document.

    Here's what can be inferred and stated based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly list quantitative acceptance criteria in a pass/fail format nor provide specific, measurable performance metrics for the proposed device (uWS-MR) in comparison to such criteria. Instead, it relies on demonstrating substantial equivalence to predicate devices by comparing their features and functionalities. The "Remark" column consistently states "Same," indicating that the proposed device's features align with its predicates, implying it meets comparable performance.

    (For every feature below, the document lists identical functionality for the proposed device and the predicate/reference devices, so one functionality column is shown.)

    | Category | Feature | Functionality (Proposed = Predicate, Inferred) | Remark/Acceptance (Inferred) |
    | --- | --- | --- | --- |
    | General | Device Classification Name | Picture Archiving and Communications System | Same (Acceptable) |
    | General | Product Code | LLZ | Same (Acceptable) |
    | General | Regulation Number | 21 CFR 892.2050 | Same (Acceptable) |
    | General | Device Class | II | Same (Acceptable) |
    | General | Classification Panel | Radiology | Same (Acceptable) |
    | Specification | Image communication | Standard network protocols like TCP/IP and DICOM. Additional fast image. | Same (Acceptable) |
    | Specification | Hardware / OS | Windows 7 | Same (Acceptable) |
    | Specification | Patient Administration | Display and manage image data information of all patients stored in the database. | Same (Acceptable) |
    | Specification | Review 2D | Basic processing of 2D images (rotation, scaling, translation, windowing, measurements). | Same (Acceptable) |
    | Specification | Review 3D | Functionalities for displaying and processing images in 3D form (VR, CPR, MPR, MIP, VOI analysis). | Same (Acceptable) |
    | Specification | Filming | Dedicated for image printing, layout editing for single images and series. | Same (Acceptable) |
    | Specification | Fusion | Auto registration, Manual registration, Spot registration. | Same (Acceptable) |
    | Specification | Inner View | Inner view of vessel, colon, trachea. | Same (Acceptable) |
    | Specification | Visibility | User-defined display property of fused image: adjustment of preset of T/B value; adjustment of the fused. | Same (Acceptable) |
    | Specification | ROI/VOI | Plotting ROI/VOI, obtaining max/min/mean activity value, volume/area, max diameter, peak activity value. | Same (Acceptable) |
    | Specification | MIP Display | Image can be displayed as MIP and rotating play. | Same (Acceptable) |
    | Specification | Compare | Load two studies to compare. | Same (Acceptable) |
    | Advanced Applications (Examples) | MR Brain Perfusion: Type of imaging scans | MRI | Same (Acceptable) |
    | Advanced Applications (Examples) | MR Breast Evaluation: Automatic Subtraction | Yes | Same (Acceptable) |
    | Advanced Applications (Examples) | MR Stitching: Automatic Stitching | Yes | Same (Acceptable) |
    | Advanced Applications (Examples) | MR Vessel Analysis: Automatic blood vessel center lines extraction | Yes | Same (Acceptable) |
    | Advanced Applications (Examples) | MRS: Single-voxel Spectrum Data Analysis | Yes | Same (Acceptable) |
    | Advanced Applications (Examples) | MR Dynamic/MAPS: ADC and eADC Calculate | Yes | Same (Acceptable) |

    2. Sample Size Used for the Test Set and the Data Provenance

    The document states: "Software verification and validation testing was provided to demonstrate safety and efficacy of the proposed device." and lists "Performance Verification" for various applications. However, it does not specify the sample size used for these performance tests (test set) or the data provenance (e.g., country of origin, retrospective/prospective nature).

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts

    This information is not provided in the document. The filing focuses on technical and functional equivalence, not on clinical performance evaluated against expert ground truth.

    4. Adjudication Method

    This information is not provided in the document.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    A multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly mentioned as being performed or required. The submission is a 510(k) for substantial equivalence, which often relies more on technical verification and comparison to predicate devices rather than new clinical effectiveness studies.

    6. Standalone Performance

    The document states: "Not Applicable to the proposed device, because the device is stand-alone software." This implies that the device performs its functions (viewing, manipulation, post-processing) as standalone software, and that its performance was evaluated in this standalone context during verification and validation, in line with the predicates, which are also software solutions. However, no specific standalone performance metrics are provided.

    7. Type of Ground Truth Used

    The document does not explicitly state the "type of ground truth" used for performance verification. Given the nature of the device (image post-processing software) and the 510(k) pathway, performance verification likely involved:

    • Technical validation: comparing outputs of uWS-MR's features (e.g., image stitching, parameter maps, ROI measurements) against known good results, simulated data, or outputs from the predicate devices.
    • Functional testing: ensuring features operate as intended (e.g., that a rotation function rotates the image correctly).

    Pathology or outcomes data are typically used for diagnostic devices with novel clinical claims, which is not the primary focus here.
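    Technical validation of this kind usually reduces to a numeric tolerance check between a computed parametric map and a reference map. A minimal sketch of such a check (the function name, tolerances, and sample values here are hypothetical illustrations, not taken from the submission):

```python
import numpy as np

def maps_match(computed: np.ndarray, reference: np.ndarray,
               rtol: float = 0.01, atol: float = 1e-6) -> bool:
    """Return True if a computed parametric map agrees with a
    reference map voxel-by-voxel within the given tolerances."""
    if computed.shape != reference.shape:
        return False
    return bool(np.allclose(computed, reference, rtol=rtol, atol=atol))

# Hypothetical check: a recomputed map should reproduce the reference.
reference = np.array([[0.10, 0.25], [0.40, 0.55]])
computed = reference * 1.001   # within the 1% relative tolerance
print(maps_match(computed, reference))  # True
```

    A tolerance-based comparison like this is how "known good results" are typically operationalized in software verification, since floating-point pipelines rarely reproduce reference values bit-for-bit.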

    8. Sample Size for the Training Set

    The concept of a "training set" typically applies to machine learning or AI models that learn from data. The uWS-MR is post-processing software, and while it could in principle incorporate AI elements (nothing beyond general "post-processing" is stated), the document does not mention a training set size. This strongly suggests that a machine learning or deep learning model with a distinct training phase, as commonly understood, was not a primary component evaluated in this filing, or that, if present, its training data details were not part of this summary.

    9. How the Ground Truth for the Training Set Was Established

    As no training set is mentioned (see point 8), the establishment of its ground truth is not applicable and therefore not described in the document.


    K Number
    K130278
    Device Name
    MR PERMEABILITY
    Date Cleared
    2013-05-29

    (113 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    Why did this record match?
    Reference Devices :

    K090546

    Intended Use
    • MR Permeability facilitates the radiologist in visualizing and post-processing dynamic contrast-enhanced datasets. It is an optional package within IntelliSpace Portal.
    • MR Permeability can be used by the radiologist to assess the micro-vascular properties by computing vascular permeability (Ktrans), tracer efflux rate (Kep), extravascular volume fraction (Ve), plasma fraction (Vp), and Area under the curve (AUC) from T1 images of brain and prostate. The applied pharmacokinetic modeling is based on the Tofts model.
    • The results are presented back to the user in the form of a parametric map, a table of results and in a graph.
    • MR Permeability facilitates the visualization of areas with increased permeability.
    • MR Permeability facilitates the visualization of variations in permeability.
    • MR Permeability is a software tool for visualizing and post processing dynamic contrast- enhanced 3D datasets, acquired to visualize areas with abnormal vascularity.
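    For context, the extended Tofts model referenced in the intended use expresses tissue contrast concentration as a convolution of the arterial plasma input with an exponential kernel, C_t(t) = Ktrans ∫ C_p(τ) e^(−kep·(t−τ)) dτ + v_p·C_p(t), with kep = Ktrans/Ve. A minimal numerical sketch, assuming uniform time sampling and a hypothetical bi-exponential input function (this illustrates the published model, not Philips' actual implementation):

```python
import numpy as np

def tofts_tissue_curve(t, cp, ktrans, kep, vp=0.0):
    """Extended Tofts model: tissue concentration as the convolution of
    the plasma input cp(t) with an exponential kernel, plus a plasma
    term. ktrans and kep in 1/min, vp dimensionless, t in minutes."""
    dt = t[1] - t[0]                      # assumes uniform sampling
    kernel = np.exp(-kep * t)
    ct = ktrans * np.convolve(cp, kernel)[:len(t)] * dt
    return ct + vp * cp

# Hypothetical bi-exponential input function and illustrative parameters.
t = np.linspace(0.0, 5.0, 301)            # minutes
cp = 5.0 * (np.exp(-0.5 * t) - np.exp(-4.0 * t))
ct = tofts_tissue_curve(t, cp, ktrans=0.2, kep=0.5, vp=0.02)
```

    Setting vp=0 recovers the standard Tofts model; the parametric maps listed in the intended use come from fitting ktrans, kep (and hence Ve), and vp to such curves voxel by voxel.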
    Device Description

    IntelliSpace Portal is a multimodality (CT, NM, and MR) thin-client applications server that delivers full diagnostic viewing and clinical applications to the enterprise. IntelliSpace Portal is a medical software system that allows multiple users to remotely access IntelliSpace Portal from compatible computers on a network. The system allows networking, selection, processing and filming of multimodality DICOM images. Both the client and server software are only for use with off the shelf hardware technology that meets defined minimum specifications. The device is not intended for diagnosis of lossy compressed images. For other images, trained physicians may use the images as a basis for diagnosis upon ensuring that monitor quality, ambient light conditions and image compression ratios are consistent with clinical application.

    The MR Permeability integrated into the IntelliSpace portal is used for processing MR T1 images to generate parametric maps, which help in studying micro-vascular properties. It is a client-server based application developed using Philips Informatics Infrastructure (PII) platform and is integrated into IntelliSpace portal. Communication and data exchange between client-server and with other portal components use standards like TCP/IP and DICOM.

    AI/ML Overview

    Here's an analysis of the acceptance criteria and study information for the Philips MR Permeability device, based on the provided 510(k) summary:

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided document (K130278) is a 510(k) premarket notification for the Philips MR Permeability device. Unlike a Pre-Market Approval (PMA) application, a 510(k) typically doesn't present a detailed study with specific quantitative acceptance criteria for clinical performance metrics (like sensitivity, specificity, accuracy) against a defined ground truth. Instead, the primary goal of a 510(k) is to demonstrate substantial equivalence to a previously legally marketed predicate device.

    The "acceptance criteria" in this context are implicitly the features and functionalities of the predicate device, nordicICE Software, and the study's goal is to show that the new device performs as well as or better than this predicate.

    Acceptance Criteria (Implied by Predicate Comparison) and Reported Device Performance:

    Feature/Functionality | Acceptance Criteria (Predicate) | Reported Device Performance (MR Permeability)
    Intended Use | nordicICE is an image processing software for trained professionals, for viewing, processing, and analysis of medical images (DICOM). It provides viewing/analysis of functional/dynamic imaging datasets (MRI, BOLD fMRI, DWI, Fiber Tracking). Specifically, the DCE Module calculates parameters related to leakage of injected contrast material from intravascular to extracellular space. | MR Permeability package facilitates radiologists in visualizing and post-processing dynamic contrast-enhanced datasets. It assesses micro-vascular properties (Ktrans, Kep, Ve, Vp, AUC) from T1 images of the brain or prostate, based on the Tofts model. Results are presented as parametric maps, tables, graphs, and facilitate visualization of increased/variations in permeability.
    DICOM Compatibility | Yes | Yes
    Generate Various Parametric Maps | Yes, including Ktrans, Ve, AUC, Kep, Plasma Fraction. | Yes, including Ktrans, Ve, AUC, Kep, Plasma Fraction.
    Detailed analysis based on user-defined ROIs | Yes | Yes
    Input based on Dynamic T1-weighted measurements | Yes | Yes
    Underlay selection | Yes | Yes
    Calculation based on Tofts Model | Yes | Yes
    Applicable to multiple organs/tissues | Yes (general statement) | Yes - Brain and Prostate
    Graph presentation of data | Yes | Yes
    Safety and Effectiveness | (Implied: The predicate device is considered safe and effective for its stated intended uses.) | "The MR Permeability software does not introduce new indications for use, nor does the use of the device result in any new potential hazard." "The nonclinical and clinical tests have demonstrated that the device is safe and works according to its intended use."

    2. Sample Size Used for the Test Set and Data Provenance

    The document states: "The conclusion from testing the device against synthetic data as well [as] clinical data is: 'Based on the test results, the MR Permeability analysis functions according to its intended use'."

    • Sample Size (Test Set): Not explicitly stated. The document mentions "clinical data" but does not quantify the number of cases or patients used for this testing.
    • Data Provenance: Not specified. It mentions "synthetic data" and "clinical data". The origin (e.g., country of origin, specific institutions) and whether the clinical data was retrospective or prospective are not provided.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    This information is not provided in the summary. For a 510(k) focused on substantial equivalence to software, detailed expert review for ground truth establishment might not be a primary focus as it would be for novel diagnostic algorithms. The testing described appears to be more technical verification ("functions according to its intended use") rather than a clinical performance study against an expert-derived ground truth.

    4. Adjudication Method for the Test Set

    This information is not provided. Given the nature of the evaluation described (technical verification and comparison to synthetic data/unspecified clinical data), a formal adjudication method for a test set ground truth is unlikely to have been documented in this type of submission.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size

    No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study is not mentioned in this 510(k) summary. The submission focuses on demonstrating substantial equivalence to the predicate device in terms of functionality and safety, not on measuring human reader improvement with or without AI assistance.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    The document mentions "The MR Permeability verification tests were performed on the complete system relative to the verification, regression and test specifications" and "testing the device against synthetic data as well clinical data." This indicates that the algorithm's performance in generating parametric maps was tested. Since the device's output (parametric maps, tables, graphs) is consumed by a radiologist, and the testing description speaks to its functionality, it implicitly covers the standalone performance of the algorithm in generating these outputs.
    Yes, tests were performed on the device's ability to generate the specified parametric maps and analysis results.

    7. The Type of Ground Truth Used

    The ground truth implicitly used for the verification was:

    • Synthetic data: Likely mathematically derived "ideal" or known pharmacokinetic profiles to test the accuracy of the Tofts model calculations and parametric map generation.
    • Unspecified "clinical data": The nature of the ground truth for this clinical data is not specified (e.g., whether it was pathology, follow-up outcomes, or consensus readings). Given the context of a 510(k) for a post-processing tool, it's more likely that the clinical data was used to verify consistency and expected output generation rather than providing an independent 'gold standard' for disease presence/absence.

    8. The Sample Size for the Training Set

    This information is not provided. The 510(k) summary does not detail the development or training of the algorithm, as it's primarily concerned with demonstrating equivalence post-development.

    9. How the Ground Truth for the Training Set Was Established

    This information is not provided. As with the training set size, the summary does not delve into the development methodology or ground truth establishment for algorithm training.


    K Number
    K123302
    Device Name
    IB CLINIC
    Date Cleared
    2013-01-11

    (80 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    Why did this record match?
    Reference Devices :

    K080762, K090546

    Intended Use

    IB Clinic v1.0 (Clinic) is a post-processing software toolkit designed to be integrated into existing medical image visualization applications running on standard computer hardware. Clinic accepts relevant DICOM image sets, such as dynamic perfusion and diffusion image sets. Clinic generates various perfusion- and diffusion-related parameters, standardized image sets, and image intensity differences. The results are saved to a DICOM image file and may be further visualized on an imaging workstation.

    Clinic is designed to aid trained physicians in advanced image assessment, treatment consideration, and monitoring of therapeutic response. The information provided by Clinic should not be used in isolation when making patient management decisions.

    Dynamic Perfusion Analysis - Generates parametric perfusion maps used for visualization of temporal variations in dynamic datasets, showing changes in image intensity over time. These maps may aid in the assessment of the extent and type of perfusion, blood volume and vascular permeability changes.

    Dynamic Diffusion Analysis - Generates apparent diffusion coefficient maps used for the visualization of apparent water movement in soft tissue throughout the body on both voxel-by-voxel and sub-voxel bases. These images may aid in the assessment of the extent of diffusion in tissue.
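    The apparent diffusion coefficient maps described above are conventionally derived from the mono-exponential diffusion model S_b = S_0 · e^(−b·ADC). A minimal sketch of that standard calculation (the signal values and b-value are illustrative, and this is not necessarily Clinic's exact method):

```python
import numpy as np

def adc_map(s0, sb, b_value):
    """Mono-exponential apparent diffusion coefficient:
    S_b = S_0 * exp(-b * ADC)  =>  ADC = ln(S_0 / S_b) / b.
    s0, sb: signal images at b=0 and b=b_value (s/mm^2)."""
    eps = 1e-12                              # avoid divide-by-zero / log(0)
    ratio = np.clip(sb, eps, None) / np.clip(s0, eps, None)
    return -np.log(ratio) / b_value          # ADC in mm^2/s

# Hypothetical 2x2 signals at b = 0 and b = 1000 s/mm^2.
s0 = np.array([[1000.0, 800.0], [900.0, 1100.0]])
sb = s0 * np.exp(-1000 * 0.8e-3)             # uniform ADC of 0.8e-3 mm^2/s
print(adc_map(s0, sb, 1000))                 # ~0.0008 everywhere
```

    The related eADC map is simply the ratio S_b/S_0 = e^(−b·ADC), which removes T2 shine-through from the b-weighted image.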

    Image Comparison - Generates co-registered image sets. Generates standardized image sets calibrated to an arbitrary scale to facilitate comparisons between independent image sets. Generates voxel-by-voxel maps of the image intensity differences between image sets acquired at different times. Facilitates selection and DICOM export of user-selected regions of interest (ROIs). These processes may enable easier identification of image intensity differences between images and easier selection and processing of ROIs.
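    The standardize-then-compare workflow described above can be illustrated with a simple intensity rescaling followed by a voxel-wise subtraction. This sketch assumes z-score rescaling to an arbitrary common scale as the standardization method (the submission does not specify one):

```python
import numpy as np

def standardize(img, target_mean=100.0, target_std=20.0):
    """Rescale image intensities to an arbitrary common scale so that
    independently acquired images become comparable (illustrative only)."""
    z = (img - img.mean()) / img.std()
    return z * target_std + target_mean

def difference_map(img_a, img_b):
    """Voxel-by-voxel intensity difference between two (already
    co-registered) standardized images."""
    return standardize(img_a) - standardize(img_b)

# Two hypothetical co-registered acquisitions on different scanner scales.
rng = np.random.default_rng(0)
base = rng.normal(500.0, 50.0, size=(4, 4))
followup = base * 1.8 + 30.0                 # same anatomy, different scale
print(np.abs(difference_map(base, followup)).max())  # ~0: scales removed
```

    Standardizing both images first means the difference map highlights genuine intensity change rather than acquisition-dependent scale differences; real change in the follow-up would survive the rescaling and appear in the map.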

    Device Description

    Clinic is a platform-independent image processing library consisting of a set of code modules that run on standard computer hardware and compute a variety of numerical analyses, image parameter maps, and other image manipulations based on DICOM images captured via MR and CT modalities. These actions include:

    • Retrieval of MR and CT DICOM image studies from PACS and/or OS-based file storage.
    • Computation of parameter maps for:
      • DSC perfusion (based on MR and CT studies)
      • ADC diffusion (based on MR studies)
    • Image manipulations (of MR and CT studies):
      • Registration of images generated at different time points
      • Standardized scaling of image intensities
      • Comparison of registered and/or standardized images
      • Region of Interest (ROI) selection
      • Generation of ROI datasets in DICOM formats
    • Output of the above maps in DICOM format for export to PACS and/or OS file storage
    • Generation of reports summarizing the computations performed
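    As an illustration of the DSC perfusion maps listed above, two common per-voxel summaries are time-to-peak (TTP) and the area under the concentration-time curve, often used as a relative blood-volume (rCBV) surrogate. A minimal sketch with a hypothetical gamma-variate-like bolus (this is not Clinic's actual algorithm, which the submission does not describe):

```python
import numpy as np

def dsc_summary_maps(conc, times):
    """Per-voxel DSC summaries from a concentration-time series
    `conc` of shape (T, H, W): time-to-peak (TTP) and an
    area-under-curve proxy for relative blood volume (rCBV)."""
    ttp = times[np.argmax(conc, axis=0)]      # seconds to peak, per voxel
    rcbv = np.trapz(conc, times, axis=0)      # AUC as an rCBV surrogate
    return ttp, rcbv

# Hypothetical gamma-variate-like bolus curves with different arrival times.
times = np.linspace(0.0, 60.0, 121)           # 0.5 s sampling
def bolus(t0):
    dt = np.maximum(times - t0, 0.0)
    return dt ** 2 * np.exp(-dt / 4.0)        # peaks 8 s after arrival

conc = np.stack([bolus(10.0), bolus(18.0)], axis=-1)[:, None, :]  # (T,1,2)
ttp, rcbv = dsc_summary_maps(conc, times)
print(ttp)                                     # [[18. 26.]]
```

    Parameters such as mean transit time and blood flow require deconvolution with an arterial input function and are deliberately omitted here; the point is only that each map is an independent per-voxel reduction of the same dynamic series.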

    The IB Clinic code library can be used within standalone FDA cleared applications or can be "plugged in" and launched from within other FDA cleared applications such as Aycan's OsiriX PRO workstation. They are intended for distribution both in combination and in modular form, with functional subsets geared toward specific types of image analysis and marketed with corresponding names, including IB Neuro, IB Diffusion, and IB Delta Suite.

    AI/ML Overview

    Here's an analysis of the provided text regarding the acceptance criteria and study information for the IB Clinic v1.0 device:

    1. Table of Acceptance Criteria and Reported Device Performance:

    Based on the provided text, there are no explicit, quantitative acceptance criteria or numerical performance metrics for the IB Clinic v1.0 device. The submission focuses on demonstrating substantial equivalence to predicate devices, rather than meeting specific performance thresholds. The text describes the functionalities of the device but does not quantify their accuracy, precision, or efficiency.

    Acceptance Criteria (Not Explicitly Stated) | Reported Device Performance
    Implicit Criteria:
    - Ability to retrieve MR and CT DICOM images | Yes, device performs this.
    - Ability to compute DSC perfusion maps | Yes, device performs this.
    - Ability to compute ADC diffusion maps | Yes, device performs this.
    - Ability to register images | Yes, device performs this.
    - Ability to standardize image intensities | Yes, device performs this.
    - Ability to compare images | Yes, device performs this.
    - Ability to select Regions of Interest (ROIs) | Yes, device performs this.
    - Ability to generate ROI datasets in DICOM | Yes, device performs this.
    - Ability to output maps in DICOM format | Yes, device performs this.
    - Ability to generate reports | Yes, device performs this.
    - Substantial equivalence to predicate devices in intended use and performance characteristics. | Confirmed by FDA clearance.

    2. Sample Size for Test Set and Data Provenance:

    The document does not specify a sample size for a test set or the provenance of any data. The submission relies on non-clinical tests (quality assurance measures) and a comparison to predicate devices, rather than a clinical trial with a defined test set.

    3. Number of Experts and Qualifications for Ground Truth (Test Set):

    Not applicable. No clinical test set with expert-established ground truth is mentioned in the document.

    4. Adjudication Method for Test Set:

    Not applicable. As there is no described clinical test set, there is no mention of an adjudication method.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

    No. The document explicitly states: "Discussion of Clinical Tests Performed: N/A". This indicates that no MRMC or any other clinical effectiveness study involving human readers or AI assistance was conducted or reported in this submission.

    6. Standalone Performance Study (Algorithm Only):

    No. The document states "N/A" for clinical tests. While the device is a "post-processing software toolkit" and "platform independent image processing library," the submission does not present any standalone performance metrics or studies directly demonstrating the algorithm's accuracy or efficacy on a dataset. The validation described is primarily related to software development processes and comparison to predicate devices.

    7. Type of Ground Truth Used:

    Not explicitly stated for any performance evaluation. The "ground truth" for the device's functionality appears to be derived from the inherent mathematical and algorithmic correctness of the image processing operations it performs, as verified through "Performance testing (verification)" and "Product software validation" (listed under non-clinical tests). There's no mention of a clinical ground truth like pathology or outcomes data to assess the accuracy of the generated perfusion/diffusion maps.

    8. Sample Size for Training Set:

    Not applicable. As this is a software toolkit for image processing, not a machine learning model that typically requires a training set, no training set size is mentioned. The device computes parameters based on established physical models (e.g., perfusion, diffusion) rather than learning from data.

    9. How Ground Truth for Training Set Was Established:

    Not applicable. See point 8.

