
510(k) Data Aggregation

    K Number: K233186
    Device Name: uOmnispace.MR
    Date Cleared: 2024-04-17 (202 days)
    Product Code:
    Regulation Number: 892.2050
    Reference Devices: K220332, K141480, K230152, K113456

    Intended Use

    uOmnispace.MR is a software solution intended to be used for viewing, manipulating and analyzing medical images. It supports interpretation and evaluation of examinations within healthcare institutions. It has the following additional indications:

    The uOmnispace.MR Stitching is intended to create full-format images from overlapping MR volume data sets acquired at multiple stages.

    The uOmnispace.MR Dynamic application is intended to provide a general postprocessing tool for time course studies.

    The uOmnispace.MR MRS (MR Spectroscopy) is intended to evaluate the molecule constitution and spatial distribution of cell metabolism. It provides a set of tools to view, process, and analyze the complex MRS data. This application supports the analysis for both SVS (Single Voxel Spectroscopy) and CSI (Chemical Shift Imaging) data.

    The uOmnispace.MR MAPs application is intended to provide a number of arithmetic and statistical functions for evaluating dynamic processes and images. These functions are applied to the grayscale values of medical images.

    The uOmnispace.MR Breast Evaluation application provides the user a tool to calculate parameter maps from contrast-enhanced time-course images.

    The uOmnispace.MR Brain Perfusion application is intended to allow the visualization of temporal variations in the dynamic susceptibility time series of MR datasets.

    The uOmnispace.MR Vessel Analysis is intended to provide a tool for viewing, manipulating, and evaluating MR vascular images.

    The uOmnispace.MR DCE analysis is intended to view, manipulate, and evaluate dynamic contrast-enhanced MRI images.

    The uOmnispace.MR United Neuro is intended to view and manipulate MR neurological images.

    The uOmnispace.MR Cardiac Function is intended for viewing and functional analysis of cardiac MR images.

    The uOmnispace.MR Flow Analysis is intended for viewing and flow analysis of MR flow images.

    Device Description

    The uOmnispace.MR is post-processing software based on the uOmnispace platform (cleared in K230039) for viewing, manipulating, evaluating and analyzing MR images. It can run alone or together with other commercially cleared advanced applications.

    This proposed device contains the following applications:

    • uOmnispace.MR Stitching
    • uOmnispace.MR Dynamic
    • uOmnispace.MR MRS
    • uOmnispace.MR MAPs
    • uOmnispace.MR Breast Evaluation
    • uOmnispace.MR Brain Perfusion
    • uOmnispace.MR Vessel Analysis
    • uOmnispace.MR DCE Analysis
    • uOmnispace.MR United Neuro
    • uOmnispace.MR Cardiac Analysis
    • uOmnispace.MR Flow Analysis
    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    Validation Type: Dice
    Acceptance Criteria: To evaluate the automatic ventricular segmentation of the proposed device, the results were compared with those of the cardiac function application of the predicate device. The Sørensen-Dice coefficient is used to evaluate consistency; if Dice > 0.95, the two devices are considered consistent.
    Reported Device Performance: 1.00
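    As context for this criterion, the following is a minimal illustrative sketch of how a Sørensen-Dice comparison between two binary segmentation masks can be computed and checked against the Dice > 0.95 threshold. The code and the example masks are hypothetical and are not taken from the submission.

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Sørensen-Dice coefficient between two binary segmentation masks."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as fully consistent
    return float(2.0 * np.logical_and(a, b).sum() / total)

# Hypothetical example: a ventricular mask from the proposed device versus
# a mask from the predicate device, evaluated against the Dice > 0.95 criterion.
proposed = np.zeros((256, 256), dtype=bool)
predicate = np.zeros((256, 256), dtype=bool)
proposed[100:160, 90:150] = True
predicate[100:160, 92:152] = True
dice = dice_coefficient(proposed, predicate)
print(f"Dice = {dice:.3f}, consistent = {dice > 0.95}")
```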

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size: 114 samples from 114 different patients.
    • Data Provenance:
        • Gender: 35 Male, 20 Female, 59 Unknown
        • Age: 5 between 14-25, 12 between 25-40, 22 between 40-60, 13 between 60-79, 62 Unknown
        • Ethnicity: 50 Europe, 53 Asia, 11 USA
        • Scanner manufacturer: UIH (58), GE (2), Philips (2), Siemens (52)
        • Magnetic field strength: 1.5T (23), 3.0T (41), 50 Unknown
    • Retrospective vs. prospective: The text does not explicitly state whether the data was retrospective or prospective, but the mention of a "deep learning-based Automatic ventricular segmentation Algorithm for the LV&RV Contour Segmentation feature" and the statement that "The performance testing for deep learning-based Automatic ventricular segmentation Algorithm was performed on 114 subjects...during the product development" imply a retrospective study using existing data to validate the developed algorithm.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    The test set's ground truth was established by comparing the proposed device's results with those of the predicate device. The text does not explicitly state that human experts established the ground truth for the test set by manually segmenting the images for direct comparison against the algorithm's output. Instead, it seems the predicate device's output serves as the "ground truth" for the comparison of the new device's algorithm.

    However, for the training ground truth, the following was stated:

    • Number of Experts: Two cardiologists.
    • Qualifications: Both cardiologists had "more than 10 years of experience each."

    4. Adjudication Method for the Test Set

    The study does not describe an adjudication method for the test set in the conventional sense of multiple human readers independently assessing the cases. Instead, the comparison is made between the proposed device's algorithm output and the predicate device's output.

    For the training ground truth, the following adjudication method was used:

    • Manual tracing was performed by an experienced user.
    • Validation of these contours was done by two independent experts (more than 10 years experience).
    • If there was a disagreement, a consensus between the experts was reached.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size

    No MRMC comparative effectiveness study was done to assess how much human readers improve with AI vs without AI assistance. The study focuses on comparing the proposed device's algorithm performance directly against a predicate device's cardiac function application based on the Dice coefficient.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    Yes, a standalone performance study was done for the "deep learning-based Automatic ventricular segmentation Algorithm" for the LV&RV Contour Segmentation feature. The device's algorithm output was directly compared to the output of the predicate device's cardiac function application using the Dice coefficient.

    7. The Type of Ground Truth Used

    For the test set, the "ground truth" for comparison was the output of the cardiac function application of the predicate device.

    For the training set, the ground truth was expert consensus based on manual tracing by an experienced user and validated by two independent cardiologists with over 10 years of experience.

    8. The Sample Size for the Training Set

    The document states: "The training data used for the training of the cardiac ventricular segmentation algorithm is independent of the data used to test the algorithm." However, it does not provide the specific sample size for the training set.

    9. How the Ground Truth for the Training Set Was Established

    The ground truth for the training set was established through manual annotation and expert consensus:

    • It was "manually drawn on short axis slices in diastole and systole by two cardiologists with more than 10 years of experience each."
    • "Manual tracing of the cardiac was performed by an experienced user."
    • "The validation of these contours was done by two independent expert (more than 10 years) in this domain."
    • "If there is a disagreement, a consensus between the experts was done."

    K Number: K141514
    Device Name: Synapse 3D Tensor Analysis
    Date Cleared: 2014-09-03 (86 days)
    Product Code:
    Regulation Number: 892.2050
    Reference Devices: K113701, K120361, K113456

    Intended Use

    Synapse 3D Tensor Analysis is medical imaging software used with Synapse 3D Base Tools to accept, display, and process DICOM-compliant 2D and 3D medical images acquired from MR for the purpose of viewing local water diffusion properties and directional dependence of the diffusion in the white matter. It is intended to be used by trained medical professionals in reading, interpreting, reporting, screening and treatment planning.

    In addition to the general 2D and 3D image processing and measurement tools available in Synapse 3D Base Tools, Synapse 3D Tensor Analysis provides custom workflows, UI, and reporting functions for tensor analysis with neck and head MR images. It includes display of diffusion and FA color map images, white matter tractography, dynamic review in MR, and vessel and body visualization with registration of MR, CT, XA, PET and NM.

    Device Description

    Synapse 3D Tensor Analysis is an optional software module that works with Synapse 3D Base Tools, cleared by CDRH via K120361 on 04/06/2012. Synapse 3D Tensor Analysis, Synapse 3D Base Tools and other optional software modules make up the Synapse 3D product family.

    Synapse 3D is medical application software running on a standalone PC or a Windows server/client configuration installed on a commercial general-purpose Windows-compatible computer. It provides custom workflows, UI, and reporting functions for trained medical professionals to aid them in reading, interpreting, reporting, screening and treatment planning.

    Synapse 3D Tensor Analysis supports the display of diffusion and fractional anisotropy (FA) colormap images, white matter tractography, dynamic review, and vessel and body visualization with registration of MR, CT, XA, PET and NM. The Tensor Analysis tool enables tensor analysis from diffusion-weighted MR images and tractography-based extraction and observation of local water diffusion properties and directional dependence of the diffusion in the white matter. Additional images (mainly CT images) can be loaded, and skin, bone, brain parenchyma, tumor, and cerebral vessels can be extracted in craniotomy simulations.

    The main functions are shown below.

    • Display FA and diffusion colormap images
    • Extract and observe white matter
    • Calculate FA value, number of fibers, area, and volume in the specified ROI
    • Simultaneous display of white matter and skin, bone, brain parenchyma, tumor, artery, vein, and other regions
    • Craniotomy simulations involving cutting of skin and bone regions, brain surface clipping by depth, and tumor plane clipping
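
    For context on the fractional anisotropy (FA) values the module displays, the following is a minimal sketch of the standard FA formula computed from the three eigenvalues of a diffusion tensor. This is the textbook definition rather than FUJIFILM's implementation, and the example eigenvalues are hypothetical.

```python
import numpy as np

def fractional_anisotropy(eigenvalues) -> float:
    """Standard fractional anisotropy from the three eigenvalues of a diffusion tensor."""
    lam = np.asarray(eigenvalues, dtype=float)
    mean = lam.mean()
    denom = np.sqrt((lam ** 2).sum())
    if denom == 0:
        return 0.0  # no diffusion signal at this voxel
    return float(np.sqrt(1.5) * np.sqrt(((lam - mean) ** 2).sum()) / denom)

# Hypothetical eigenvalues (in 10^-3 mm^2/s): a strongly anisotropic,
# fiber-like white-matter voxel versus a nearly isotropic voxel.
print(fractional_anisotropy([1.7, 0.3, 0.2]))   # high FA (~0.84)
print(fractional_anisotropy([0.8, 0.75, 0.7]))  # low FA (~0.07)
```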
    AI/ML Overview

    The provided text is a 510(k) summary for the FUJIFILM Medical Systems U.S.A., Inc.'s Synapse 3D Tensor Analysis device. It details the device's intended use and claims substantial equivalence to predicate devices. However, the document does not include quantitative acceptance criteria or detailed study results that prove the device meets specific performance thresholds.

    Here's a breakdown of what is and is not present, based on your requested information:

    1. Table of Acceptance Criteria and Reported Device Performance:

    This information is not provided in the document. The document states:

    • "Pass/Fail criteria were based on the requirements and intended use of the product."
    • "Test results showed that all tests successfully passed."
    • "a comparative performance testing was conducted between the Synapse 3D Tensor Analysis and the predicate device, and the comparison test result supported the substantial equivalence of the devices' performance characteristics."

    However, no specific numerical acceptance criteria (e.g., minimum accuracy, sensitivity, specificity, or error rates) are listed, nor are the actual performance metrics (e.g., specific accuracy percentages, mean differences) from the tests.

    2. Sample Size Used for the Test Set and Data Provenance:

    This information is not explicitly provided in the document. The text mentions "actual clinical images" were used for bench performance testing but does not specify the number of images, cases, or their origin (country, retrospective/prospective).

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications:

    This information is not provided in the document. The general statement about testing implies that the results were evaluated against some standard, but it doesn't specify if expert consensus was used to establish ground truth for the test set or the qualifications of such experts if they were involved.

    4. Adjudication Method:

    This information is not provided.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size:

    This information is not provided. The document mentions "comparative performance testing was conducted between the Synapse 3D Tensor Analysis and the predicate device," but it does not specify if this was an MRMC study or if it involved human readers. Therefore, an effect size of human improvement with AI assistance cannot be determined from this document.

    6. If a Standalone Study Was Done:

    Yes, a standalone performance assessment was conducted for the Synapse 3D Tensor Analysis software. The document states:

    • "Testing involved system level functionality test, segmentation accuracy test, measurement accuracy test, interfacing test, usability test, serviceability test, labeling test, as well as the test for risk mitigation method analyzed and implemented in the risk management process."
    • "In addition, we conducted the bench performance testing using actual clinical images to help demonstrate that the proposed device achieved the expected accuracy performance."

    This indicates that the software's performance was evaluated on its own.

    7. The Type of Ground Truth Used:

    The type of ground truth used is not explicitly stated. However, given the nature of the device (medical imaging software for viewing water diffusion properties and tractography), potential ground truth sources could include expert consensus, follow-up imaging, or correlation with clinical outcomes, but the document does not confirm this. The phrase "expected accuracy performance" suggests a comparison to some established accurate output, but the source of that accuracy (ground truth) is not specified.

    8. The Sample Size for the Training Set:

    This information is not provided. The document describes testing and validation activities but does not mention the training of an algorithm or the size of a training set. This is consistent with a device that provides visualization and processing tools rather than an AI/ML algorithm that requires a training set.

    9. How the Ground Truth for the Training Set Was Established:

    As no training set is mentioned for an AI/ML algorithm, this information is not applicable/provided. The device is described as "medical imaging software used with Synapse 3D Base Tools to accept, display, and process DICOM compliant 2D and 3D medical images," implying a tool for visualization and analysis rather than an autonomous diagnostic AI.
