Search Results

Found 6 results

510(k) Data Aggregation

    K Number: K223091
    Manufacturer:
    Date Cleared: 2023-06-09 (252 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices
    Why did this record match? Reference Devices: K152602

    Intended Use

    CT Perfusion V1.0 is an automatic calculation tool indicated for use in radiology. The device is an image processing software allowing computation of parametric maps from CT Perfusion data and extraction of volumes of interest based on numerical thresholds applied to the aforementioned maps. Computation of mismatch between extracted volumes is automatically provided.

    The device is intended to be used by trained professionals with medical imaging education, including, but not limited to, physicians and medical technicians, in the imaging assessment workflow by extraction and communication of metrics from the CT Perfusion dataset.

    The results of CT Perfusion V1.0 are intended to be used in conjunction with other patient information and, based on professional judgment, to assist the clinician in the medical imaging assessment. Trained professionals are responsible for viewing the full set of native images per the standard of care.

    The device does not alter the original image. CT Perfusion V1.0 is not intended to be used as a standalone diagnostic device and shall not be used to make diagnostic or therapeutic decisions. Patient management decisions should not be based solely on CT Perfusion V1.0 results.

    CT Perfusion V1.0 can be integrated and deployed through technical platforms responsible for transferring, storing, converting formats, notifying of detected image variations, and displaying DICOM imaging data.

    Device Description

    The CT perfusion V1.0 application can be used to automatically compute qualitative as well as quantitative perfusion maps based on the dynamic (first-pass) effect of a contrast agent (CA). The perfusion application assumes that the input data describes a well-defined and transient signal response following rapid administration of a contrast agent.

    Olea Medical proposes CT Perfusion V1.0 as an image processing application: a Picture Archiving and Communications System (PACS) software module intended for use in a technical environment that incorporates a Medical Image Communications Device (MICD) (21 CFR 892.2020) as its technical platform.

    The CT Perfusion V1.0 image processing application is designed as a Docker container installed on a technical platform, a Medical Image Communications Device.

    The CT Perfusion V1.0 application takes as input a full CT perfusion (CTP) sequence acquired following the injection of an iodine contrast agent.

    By processing these input image series, the application provides the following outputs:

    • Parametric maps.
    • Volume 1 and Volume 2 segmentations in DICOM format. Fusion of the segmented Volume 1, Volume 2, and the CTP map can be provided as PNG and DICOM secondary captures.
    • Mismatch computation:
      • Mismatch volume = Volume 2 - Volume 1
      • Mismatch ratio = Volume 2 / Volume 1
      • Relative mismatch = (Volume 2 - Volume 1) / Volume 2 * 100

    CT Perfusion V1.0 offers automatic volume segmentations based on a set of maps and thresholds. The user can adjust these thresholds, and the maps they apply to, in the configuration files.
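    Concretely, the thresholding-plus-mismatch pipeline described above is simple arithmetic over binary masks. Below is a minimal sketch in Python, assuming NumPy arrays for the parametric maps; the map choices and threshold values are hypothetical, since the summary does not disclose the configuration defaults.

```python
import numpy as np

# Hypothetical thresholds; the real defaults live in the product's
# configuration files and are not given in the 510(k) summary.
DELAY_THRESH_S = 6.0    # Volume 2: voxels whose delay map exceeds this value
RCBF_THRESH_PCT = 30.0  # Volume 1: voxels whose relative CBF falls below this value

def mismatch_metrics(delay_map, rcbf_map, voxel_volume_ml):
    """Threshold two parametric maps and compute the summary's mismatch metrics."""
    vol2_mask = delay_map > DELAY_THRESH_S
    vol1_mask = rcbf_map < RCBF_THRESH_PCT
    v1 = vol1_mask.sum() * voxel_volume_ml
    v2 = vol2_mask.sum() * voxel_volume_ml
    return {
        "volume_1_ml": v1,
        "volume_2_ml": v2,
        "mismatch_volume_ml": v2 - v1,                              # Volume 2 - Volume 1
        "mismatch_ratio": v2 / v1 if v1 else float("inf"),          # Volume 2 / Volume 1
        "relative_mismatch_pct": (v2 - v1) / v2 * 100 if v2 else 0,
    }
```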

    AI/ML Overview

    Here's an analysis of the acceptance criteria and study for the CT Perfusion V1.0 device, based on the provided FDA 510(k) summary:

    Acceptance Criteria and Device Performance

    Acceptance Criteria | Reported Device Performance
    Parametric maps result comparison: all parametric maps (CBF, CBV, MTT, TTP, Delay, tMIP) computed with CT Perfusion V1.0 and the Olea Sphere® V3.0 predicate device must be identical. | Voxel-by-voxel value differences were equal to zero. Pearson and Spearman correlation coefficients were equal to 1.
    Volumes result comparison: segmentations (hypoperfused areas) derived from thresholds should be similar between CT Perfusion V1.0 and the predicate device. | Mean DICE index (similarity coefficient) was equal to 1 between CT Perfusion V1.0 and Olea Sphere® V3.0 predicate device segmentations. For all cases, no difference was found.
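    All three reported measures (voxel-wise differences, Pearson/Spearman correlation, and the DICE index) are standard and straightforward to reproduce. A minimal sketch, assuming the two devices' maps and masks are available as NumPy arrays:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def dice_index(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * (a & b).sum() / denom if denom else 1.0

def compare_parametric_maps(map_new, map_predicate):
    """Voxel-wise difference plus linear and rank correlation, per the table."""
    x, y = map_new.ravel(), map_predicate.ravel()
    max_abs_diff = np.abs(x - y).max()   # reported: 0
    r, _ = pearsonr(x, y)                # reported: 1
    rho, _ = spearmanr(x, y)             # reported: 1
    return max_abs_diff, r, rho
```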

    Study Details

    1. Sample size used for the test set and the data provenance: Not explicitly stated. The document mentions "all cases" for volume comparison, implying a dataset was used, but the specific number of cases is not provided. The provenance (country of origin, retrospective/prospective) is also not stated.

    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not applicable. The study compares the performance of the new device to a predicate device, Olea Sphere V3.0, not to expert-derived ground truth.

    3. Adjudication method for the test set: Not applicable, as the comparison is against a predicate device's output rather than an expert-adjudicated ground truth.

    4. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance: No. This was not an MRMC comparative effectiveness study involving human readers with and without AI assistance. The study focuses on comparing the new device's output to a predicate device's output.

    5. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done: Yes, this was a standalone comparison of the CT Perfusion V1.0 algorithm's outputs against the Olea Sphere V3.0 algorithm's outputs.

    6. The type of ground truth used: The "ground truth" for this study was the output of the predicate device, Olea Sphere V3.0.

    7. The sample size for the training set: Not applicable. The document states that "CT Perfusion V1.0 does not contain any AI-based algorithms. All calculations are based on deterministic algorithms." Therefore, there is no training set in the machine learning sense.

    8. How the ground truth for the training set was established: Not applicable, as there is no training set for a deterministic algorithm.


    K Number: K221426
    Manufacturer:
    Date Cleared: 2022-07-06 (51 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices
    Why did this record match? Reference Devices: K152602

    Intended Use

    Functional MR V1.0 is an optional image processing software application that is intended for use on the Olea Sphere® V3.0 software package. It is intended to be used by trained professionals including, but not limited to, physicians, MR technicians, and radiographers.

    Functional MR V1.0 includes a software module that computes the activation map from a BOLD sequence and supports the visualization and analysis of activation maps.

    Functional MR V1.0 can also be used to provide reproducible measurements of derived maps. These measurements include threshold modification and ROI analysis.

    Functional MR V1.0 may also be used as an image viewer of multi-modality digital images, including BOLD and DTI images.

    When interpreted by a skilled physician, Functional MR V1.0 provides information that may be used in a clinically useful context. Patient management decisions should not be based solely on the results of Functional MR V1.0.

    Device Description

    The functional MRI technique consists of analyzing blood-oxygen-level dependent (BOLD) contrast images. This type of specialized brain and body scan is used to map neural activity in the brain or spinal cord of humans by imaging the change in blood flow (hemodynamic response) related to energy use by brain cells.
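    The summary does not disclose how the activation map is computed. As a hedged illustration of the general technique, one common approach correlates each voxel's BOLD time series with a task regressor; the array shapes and names below are assumptions, not the product's documented method.

```python
import numpy as np

def activation_map(bold, design):
    """Correlate each voxel's BOLD time series with a task regressor.

    bold   : array shaped (x, y, z, t)
    design : task paradigm regressor of length t (e.g., a boxcar convolved
             with a hemodynamic response function)
    """
    x = design - design.mean()
    y = bold - bold.mean(axis=-1, keepdims=True)
    num = (y * x).sum(axis=-1)
    den = np.sqrt((y ** 2).sum(axis=-1) * (x ** 2).sum())
    with np.errstate(invalid="ignore", divide="ignore"):
        r = num / den
    return r  # threshold the correlation map to obtain "active" voxels
```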

    Olea Medical proposes Functional MR V1.0 as an optional medical viewing, analysis, and processing Picture Archiving and Communications System (PACS) software module that is intended for use with the Olea Sphere® V3.0 software package (K152602). The Functional MR V1.0 software application runs on a standard "off-the-shelf" PC workstation.

    AI/ML Overview

    The provided text describes a 510(k) premarket notification for the Olea Medical Functional MR V1.0 software. The submission aims to demonstrate substantial equivalence to a predicate device, nordicBrainEx v2.3.7. However, the document does not contain specific acceptance criteria, a detailed study proving the device meets those criteria, or the specific information requested in your numbered list regarding performance metrics, sample sizes, expert involvement, and ground truth establishment.

    The document primarily focuses on:

    • The device's intended use and functionality.
    • A comparison of technological characteristics with the predicate device.
    • A general statement about internal verification and validation testing, and a comparison study with the predicate.

    Here's a breakdown of what is available and what is missing from the provided text, in response to your request:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document provides a "Predicate Device Comparison Table" which lists functionalities and indicates whether both devices possess them. This table does not represent acceptance criteria as performance thresholds (e.g., sensitivity, specificity, accuracy). It merely confirms feature commonality.

    Functionality | Functional MR V1.0 | nordicBrainEx® (K163324) | Reported Acceptance Criteria Met (Implicit)
    Standard image viewing tools | Yes | Yes | Yes (inferred)
    Loading, post-processing and exporting of image series in DICOM format | Yes | Yes | Yes (inferred)
    Measurement tools | Yes | Yes | Yes (inferred)
    2D MPR visualization | Yes | Yes | Yes (inferred)
    3D volume rendering visualization | Yes | Yes | Yes (inferred)
    Paradigm selection and editing | Yes | Yes | Yes (inferred)
    Activation maps | Yes | Yes | Yes (inferred)
    Skull filtering | Yes | Yes | Yes (inferred)
    Automatic co-registration | Yes | Yes | Yes (inferred)
    Time intensity display | Yes | Yes | Yes (inferred)
    Motion correction | Yes | Yes | Yes (inferred)
    Slice time correction | Yes | Yes | Yes (inferred)
    Spatial filtering | Yes | Yes | Yes (inferred)
    Threshold adjusting | Yes | Yes | Yes (inferred)
    Makes outputs available for neuro-navigation systems | Yes | Yes | Yes (inferred)
    Laterality Index | Yes (unique to Functional MR V1.0) | N/A (feature not present in predicate) | N/A
    Bonferroni Correction | Yes (unique to Functional MR V1.0) | N/A (feature not present in predicate) | N/A

    Note: The "Reported Acceptance Criteria Met" column is inferred from the statement: "The result of this comparison demonstrates that Functional MR V1.0 has a safety and effectiveness profile similar to the predicate device." This implies that for all features shared with the predicate, the performance was deemed comparable. However, no quantitative performance metrics or specific acceptance criteria are provided.
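    The two features unique to Functional MR V1.0 have conventional textbook definitions, though the document does not spell out the product's exact formulas. A minimal sketch using those conventional definitions:

```python
def laterality_index(left_voxels, right_voxels):
    """Conventional LI = (L - R) / (L + R) over suprathreshold voxel counts:
    +1 is fully left-lateralized activation, -1 fully right-lateralized."""
    total = left_voxels + right_voxels
    return (left_voxels - right_voxels) / total if total else 0.0

def bonferroni_alpha(alpha, n_tests):
    """Bonferroni correction: per-voxel alpha for a familywise error rate alpha
    over n_tests independent voxel-level tests."""
    return alpha / n_tests
```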

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    • Missing Information: The document does not specify the sample size of the test set, the country of origin of the data, or whether the study was retrospective or prospective. It states only that there was "additional validation testing to compare the results of Functional MR V1.0 with the predicate."

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    • Missing Information: This information is not provided. The term "ground truth" is not explicitly mentioned in the context of the comparison study.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    • Missing Information: This information is not provided.

    5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance

    • No MRMC Study Described: The document describes a comparison of the results of Functional MR V1.0 with the predicate nordicBrainEx v2.3.7. It does not mention a multi-reader multi-case (MRMC) comparative effectiveness study evaluating human reader performance with and without AI assistance. Functional MR V1.0 is image processing software, not explicitly an AI-assisted diagnostic tool for human readers in the traditional sense discussed by MRMC studies. The device provides "information that may be used in a clinically useful context," but "Patient management decisions should not be based solely on the results of Functional MR V1.0."

    6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    • Yes (Implicit): The comparison study appears to be a standalone comparison of the software's output ("compare the results of Functional MR V1.0 with the predicate"). The document states that the testing was to "evaluate performance of BOLD sequence analysis and visualization, viewing and measurement tools..." This implies evaluating the algorithm's output directly against the predicate's output, without human interpretation as part of the primary performance metric.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    • Implicit Ground Truth: Predicate Device Output: The comparison was of the results of Functional MR V1.0 with the predicate: "nordicBrainEx v2.3.7 (NordicNeurolab AS®) was used as a comparison for Functional MR V1.0". This suggests that the output of the legally marketed predicate device served as the reference, or de facto "ground truth", for the functionalities being evaluated. There is no mention of an independent, expert-established ground truth, pathology, or outcomes data.

    8. The sample size for the training set

    • Not Applicable / Not Provided: As this device is primarily described as image processing software performing calculations and visualizations rather than a machine learning model requiring a distinct training set (though it could conceptually use ML for some features), a "training set" is not explicitly mentioned or quantified. If an ML component exists, its training set details are not provided.

    9. How the ground truth for the training set was established

    • Not Applicable / Not Provided: Similar to point 8, this information is not available given the context of the document.

    K Number: K211431
    Device Name: breastscape v1.0
    Manufacturer:
    Date Cleared: 2021-08-02 (87 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices
    Why did this record match? Reference Devices: K152602

    Intended Use

    breastscape V1.0 is an optional image processing software application that is intended for use on the Olea Sphere 3.0 software package. It is intended to be used by trained breast imaging physicians and trained MRI technologists.

    breastscape V1.0 includes a software module (BreastApp) that supports the visualization, analysis, and reporting of lesion measurements and analysis. breastscape V1.0 supports the evaluation of dynamic MR data acquired during contrast administration and the calculation of parameters related to the uptake characteristics.

    breastscape V1.0 performs other user selected processing functions (such as image subtraction, multiplanar and oblique reformats, 3D renderings).

    The resulting information can be displayed in a variety of formats, including a parametric image overlaid onto the source image.

    breastscape V1.0 can also be used to provide measurements of the segmented tissue volumes (volumes of interest) based on uptake characteristics. These measurements include volume measurement, distances of volumes of interest to anatomical landmarks, 3D longest diameter and 2D long and short axis.

    breastscape V1.0 includes the option to add annotations based on the fifth edition of the American College of Radiology's Breast Imaging Reporting and Data System (BI-RADS®) Breast Imaging Atlas.

    breastscape V1.0 may be used as an image viewer of multi-modality digital images, including ultrasound and mammography. breastscape V1.0 is not intended for primary interpretation of digital mammography images.

    breastscape V1.0 includes a software module (BreastLoc) to assist users in planning MR-guided breast interventional procedures. Using information from MR images regarding user-specified target lesion and fiducial location coordinates, the software calculates the depth of the targeted region of interest (such as a suspected lesion).

    When interpreted by a skilled physician, breastscape V1.0 provides information that may be used for screening, diagnosis, and interventional planning.

    Patient management decisions should not be based solely on the results of breastscape V1.0.

    Device Description

    breastscape V1.0 is an optional PACS software tool that is intended for use with the Olea Sphere V3.0 software package, cleared under K152602. The software accesses image series in DICOM format through Olea Sphere V3.0, which is a software package used to perform image viewing, processing and analysis of medical images.

    breastscape V1.0 is made of two software modules: BreastApp and BreastLoc.

    1. BreastApp
      BreastApp is designed to assist in the visualization, analysis and reporting of Magnetic Resonance Imaging (MRI) breast studies. This module supports the evaluation of dynamic MR breast data acquired during contrast administration (DCE-MRI), and the calculation of parameters related to the lesion uptake characteristics. This module provides semi-automatic segmentation of volumes of interest, distance measurements and lesion volume measurements.

    BreastApp provides the features below:

    • Visualization of registered MR image series. It includes well-established tools for standard image viewing, MIPs, reformats, and 3D volume rendering.
    • Visualization of Mammography and Ultrasound image series for display purpose only.
    • Evaluation of dynamic MR breast data acquired during contrast administration. It includes:
      • The computation of image subtractions (subtractions of each time point/phase image of the dynamic series with the 1st time point (baseline) image to highlight tissue with contrast enhancement).
      • The display of time intensity signal curves (kinetics curves) showing tissue contrast enhancement evolution over time.
      • The detection and display of the kinetics curve showing the worst kinetics behavior (the greatest washout among pixels whose peak enhancement exceeds the 50% enhancing threshold).
      • The computation of semi-quantitative kinetics maps that are derived from the time intensity signal curves and show uptake characteristics (e.g., Time to Maximum Contrast Enhancement, Wash-in, Washout); a sketch of these computations follows this feature list.
    • Automatic detection of breast morphological structures. It includes the automatic detection of nipple position, chest and skin border. The user can further adjust them if needed.
    • Semi-automatic lesion segmentation. It includes:
      • Highlighting tissues showing significant contrast agent uptake based on an uptake threshold.
      • Semi-automatic segmentation of the suspected lesion identified by the user. The user can further adjust the segmentation if needed or even manually segment the suspected lesion.
      • Automatic computation of the suspected lesion 2D/3D diameter and lesion volume.
    • Automatic computation of distances between the suspected lesion and the morphological structures (distances to nipple, chest and skin). The user can further adjust the distances if needed.
    • Reporting of user-selected findings and assessment through a dedicated breast report. It includes the option to add annotations based on the Fifth Edition of American College of Radiology's Breast Imaging Reporting and Data System (BI-RADS®) Breast Imaging Atlas. The software automatically reports the localization of the suspected lesion on a dedicated breast sector map. The position can be further adjusted by the user if needed.
    • Follow-up for multiple (more than two) studies from same patient. It includes tools to enhance the visualization and analysis of patient follow-up studies through the same layout.
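    The kinetics features referenced above are standard semi-quantitative DCE-MRI computations. A minimal sketch, assuming a 4-D NumPy array whose first phase is the pre-contrast baseline; the phase chosen for wash-in and all names are illustrative assumptions, not the product's documented choices.

```python
import numpy as np

def kinetics_features(dyn):
    """Semi-quantitative DCE-MRI kinetics per voxel.

    dyn : array shaped (x, y, z, phases); phase 0 is the pre-contrast baseline.
    """
    base = dyn[..., 0]
    sub = dyn - base[..., None]                  # subtraction image per phase
    with np.errstate(invalid="ignore", divide="ignore"):
        enh = 100.0 * sub / base[..., None]      # percent-enhancement curves
    peak = enh.max(axis=-1)
    ttp = enh.argmax(axis=-1)                    # phase index of peak enhancement
    wash_in = enh[..., 1]                        # early enhancement (assumed phase)
    washout = enh[..., -1] - peak                # late drop from peak (negative = washout)
    significant = peak > 50.0                    # the 50% enhancing threshold from the text
    return peak, ttp, wash_in, washout, significant
```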
    2. BreastLoc

    The BreastLoc module is designed to assist users in planning MR-guided breast interventional procedures. Based on user-specified target lesion and fiducial location coordinates, BreastLoc is used to compute and display the following features (a geometric sketch follows the list):

    • Needle insertion block position within the grid diagram;
    • Needle insertion point activation in the block;
    • Depth of introducer, representing the graduation value where to put the depth stop on the introducer sheath;
    • Needle insertion path display on native images.
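    The summary specifies BreastLoc's inputs (user-identified target and fiducial coordinates) and outputs, but not its geometry. Purely as an illustrative assumption: if the needle is constrained to travel perpendicular to the localization grid, the introducer depth reduces to a projection onto the grid normal.

```python
import numpy as np

def introducer_depth(target_xyz, fiducial_xyz, grid_normal):
    """Depth from the grid entry point to the target along the needle path.

    Assumes (not stated in the summary) that the needle travels perpendicular
    to the localization grid, so depth is the component of (target - fiducial)
    along the grid normal. Coordinates in mm from user-identified MR points.
    """
    n = np.asarray(grid_normal, dtype=float)
    n /= np.linalg.norm(n)
    return float(np.dot(np.asarray(target_xyz) - np.asarray(fiducial_xyz), n))
```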
    AI/ML Overview

    The provided text describes Olea Medical's breastscape V1.0 and its substantial equivalence to predicate and reference devices. However, it does not explicitly state "acceptance criteria" or provide a detailed "study that proves the device meets the acceptance criteria" in the format requested.

    The document focuses on demonstrating substantial equivalence through various validation and verification tests, software functionality comparisons, and a general statement about clinical performance. It mentions "performance evaluation support that the minor difference in the technological characteristics do not raise different questions of safety and effectiveness" and "software validation testing demonstrates that the device operates as safely and effectively as its predicate device and does not raise different questions of safety and effectiveness."

    While the document indicates that the device's performance was evaluated, it does not provide the specific quantitative acceptance criteria or the detailed results of a study designed to prove the device met those criteria in a structured manner.

    Therefore, the following information is extracted and inferred from the text provided. Many points cannot be fully answered due to the absence of specific details in the input text.

    1. Table of Acceptance Criteria and Reported Device Performance

    Not explicitly stated in the document. The document asserts that validation testing confirms product specifications are met and that the device operates as safely and effectively as its predicate.

    2. Sample size used for the test set and the data provenance

    • Sample Size for Test Set: Not explicitly stated. The document mentions "anonymized images from a cohort of patients" were used for additional validation testing to compare breastscape V1.0 with the predicate and reference devices. The size of this cohort is not specified.
    • Data Provenance: Not specified. The document does not mention the country of origin of the data or whether it was retrospective or prospective.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    Not explicitly stated. The document mentions "direct manual measurements (ground truth)" were used for BreastApp™, but it does not specify the number or qualifications of the experts who performed these measurements. The device is intended for use by "trained breast imaging physicians and trained MRI technologists."

    4. Adjudication method for the test set

    Not explicitly stated.

    5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance

    Not explicitly stated. The document mentions "additional validation testing to compare the results of breastscape V1.0 with the predicate and reference devices," but this test appears to be focused on comparing the device's output to the predicate/reference rather than assessing human reader performance with and without AI assistance.

    6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    Yes, in part. The document states that "direct manual measurements (ground truth)" were used to evaluate "automatically calculated metrics and parametric maps" for BreastApp™. This suggests an evaluation of the algorithm's output against a ground truth without human interpretation in the loop for that specific aspect. Further, the primary validation and verification tests for the software modules themselves would be standalone algorithm performance tests.

    7. The type of ground truth used

    • For BreastApp™: "Direct manual measurements (ground truth)" and "Kinetics plugin, viewing tools, follow-up feature, breast dedicated report, mammography loading and visualization, measurements modifications within Olea Sphere V3.0" were used. This indicates a combination of expert-derived manual measurements and reference to the established functionality of other cleared systems or modules.
    • For BreastLoc™: It was compared against MultiView™ MR Breast V4.0.3.2 (Hologic®) to evaluate "performance of the MR guided breast intervention procedural planning." This implies the predicate device's accepted output served as a reference for ground truth or comparison.

    8. The sample size for the training set

    Not explicitly stated. The document focuses on validation and verification rather than detailing the training of potential AI/ML components (though "parametric image maps" and "semi-automatic segmentation" may involve such components).

    9. How the ground truth for the training set was established

    Not explicitly stated.


    K Number: K203582
    Manufacturer:
    Date Cleared: 2021-02-04 (59 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices
    Why did this record match? Reference Devices: K152602

    Intended Use

    qp-Prostate is an image processing software package to be used by trained professionals, including radiologists specialized in prostate imaging, urologists and oncologists. The software runs on a standard "off-the-shelf" workstation and can be used to perform image viewing, processing and analysis of prostate MR images. Data and images are acquired through DICOM compliant imaging devices and modalities. Patient management decisions should not be based solely on the results of qp-Prostate. qp-Prostate does not perform a diagnostic function, but instead allows the users to visualize and analyze DICOM data.

    Device Description

    qp-Prostate is a medical image viewing, processing and analyzing software package for use by a trained user or healthcare professional, including radiologists specialized in prostate imaging, urologists and oncologists. These prostate MR images, when interpreted by a trained physician, may yield clinically useful information.

    qp-Prostate consists of a modular platform based on a plug-in software architecture. Apparent Diffusion Coefficient (ADC) post-processing and Perfusion - Pharmacokinetics post-processing (PKM) are embedded into the platform as plug-ins to allow prostate imaging quantitative analysis.

    The platform runs as a client-server model that requires a high-performance computer installed by QUIBIM inside the hospital or medical clinic network. The server communicates with the Picture Archiving and Communication System (PACS) through the DICOM protocol. qp-Prostate is accessible through the web browser (Google Chrome or Mozilla Firefox) of any standard "off-the-shelf" computer connected to the hospital/center network.

    The main features of the software are:

    1. Query/Retrieve interaction with PACS;
    2. Apparent Diffusion Coefficient (ADC) post-processing (MR imaging);
    3. Perfusion Pharmacokinetics (PKM) post-processing (MR imaging);
    4. DICOM viewer; and
    5. Structured reporting.

    The software provides MR imaging analysis plug-ins to objectively measure different functional properties in prostate images.
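    Of the two quantitative plug-ins, ADC post-processing has a standard closed form; the PKM plug-in fits a pharmacokinetic model (e.g., Tofts), whose details the document does not give. A minimal log-linear ADC fit, with illustrative signal and b-values (all names here are assumptions):

```python
import numpy as np

def adc_fit(signals, b_values):
    """Log-linear least-squares fit of ln S(b) = ln S0 - b * ADC for one voxel."""
    b = np.asarray(b_values, dtype=float)
    log_s = np.log(np.asarray(signals, dtype=float))
    slope, _intercept = np.polyfit(b, log_s, 1)
    return -slope  # ADC in mm^2/s when b is given in s/mm^2

# Illustrative two-point example: S0 = 1000, Sb = 368 at b = 1000 s/mm^2
print(adc_fit([1000.0, 368.0], [0.0, 1000.0]))  # ~1.0e-3 mm^2/s
```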

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the qp-Prostate device, based on the provided document:

    Acceptance Criteria and Device Performance

    The document does not explicitly present a table of "acceptance criteria" with numerical targets and reported performance. Instead, it describes performance testing conducted to demonstrate functional equivalence and safety and effectiveness compared to a predicate device. The performance data generally aims to show the device functions as intended and is comparable to the predicate.

    The closest we can get to a "table of acceptance criteria and reported device performance" is by looking at the types of tests done and the general conclusions.

    Implicit Acceptance Criteria (Inferred from testing and purpose):

    • Accuracy of Diffusion-ADC and Perfusion-Pharmacokinetics (PKM) calculations: The device should accurately compute these parameters.
    • Accuracy of algorithmic functions: The Spatial Smoothing, Registration, Automated Prostate Segmentation, Motion Correction, and automated AIF selection algorithms should perform correctly.
    • Equivalence to Predicate Device: The performance of qp-Prostate should be comparable to the Olea Sphere v3.0, especially in quantitative outputs.
    • Functionality as intended: All listed features (Query/Retrieve, DICOM viewer, Structured reporting, etc.) should work correctly.

    Reported Device Performance (from the document):

    • Diffusion-ADC and Perfusion-Pharmacokinetics (PKM) analysis modules: Evaluated using QIBA's Digital Reference Objects (DROs), including noise modeling, for technical performance. The document concludes that the "tests results demonstrate that qp-Prostate functioned as intended."
    • Algorithmic functions (Spatial Smoothing, Registration, Automated Prostate Segmentation, Motion Correction, AIF selection): Tested using a dataset of prostate clinical cases. The document states "performance testing with prostate MR cases" was conducted.
    • Comparison to Predicate Device: Performed using 157 clinical cases, demonstrating that the device is "as safe and effective as its predicate device, without introducing new questions of safety and efficacy."
    • Overall Conclusion: "Performance data demonstrate that qp-Prostate is as safe and effective as the OLEA Sphere v3.0. Thus, qp-Prostate is substantially equivalent."

    Since no specific numerical thresholds for accuracy or performance metrics are provided in the document, a quantitative table cannot be generated. The acceptance is based on successful completion of the described performance tests and demonstrating substantial equivalence.


    Study Details

    Here's the detailed information regarding the studies:

    1. Sample size used for the test set and the data provenance:

    • Digital Reference Object (DRO) Analysis (Bench Testing):
      • Sample Size: Not explicitly quantified as "sample size" in terms of number of patient cases, but rather refers to universally accepted digital reference objects from QIBA.
      • Provenance: These are synthetic or standardized digital objects designed for technical performance evaluation, not originating from specific patients or countries.
    • Clinical Testing of Algorithms (Motion Correction, Registration, Spatial Smoothing, AIF Selection, Prostate Segmentation):
      • Motion Correction algorithm: 155 DCE-MR and DWI-MR prostate sequences from 155 different patients.
      • Registration algorithm: 112 T2-Weighted MR, DCE-MR, and 108 DWI-MR prostate sequences from different patients.
      • Spatial Smoothing algorithm: 51 transverse T2-weighted, DCE-MR, and DWI-MR prostate sequences from 51 different patients.
      • AIF selection algorithm: 242 DCE-MR prostate sequences from 242 different patients.
      • Prostate Segmentation algorithm: 243 transverse T2-weighted MR prostate sequences from 243 different patients.
      • Provenance for these clinical cases: The document states they were "acquired from different patients in different machines with multiple acquisition protocols" and from "different three major MRI vendors: Siemens, GE and Philips, magnetic field of 3T and 1.5T cases." There is no explicit mention of the country of origin or whether the data was retrospective or prospective, though "clinical cases" used for validation often imply retrospective use of existing data.
    • Comparison to Predicate Device:
      • Sample Size: 157 T2-weighted MR, DCE-MR, and 141 DWI-MR prostate sequences from different patients.
      • Provenance: Similar to the algorithmic clinical testing, from "different patients in different machines with multiple acquisition protocols" and from "different three major MRI vendors: Siemens, GE and Philips, magnetic field of 3T and 1.5T cases."

    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • The document does not explicitly state the number of experts used or their specific qualifications (e.g., years of experience as radiologists) for establishing ground truth for the clinical test sets.
    • For the Digital Reference Object (DRO) analysis, the "ground truth" is inherent to the design of the DROs themselves (proposed by QIBA), rather than established by human experts for each test.

    3. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

    • The document does not specify any adjudication method used for establishing ground truth or for resolving discrepancies in the test sets.

    4. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance:

    • No, a multi-reader multi-case (MRMC) comparative effectiveness study assessing human reader improvement with AI assistance was not performed or reported in this document. The study compared the device's technical performance and its output with that of a predicate device, not with human readers or human readers assisted by AI.

    5. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:

    • Yes, the performance testing described is primarily standalone. The "bench testing" with DROs and the "clinical testing" of the algorithms (Motion Correction, Registration, etc.) evaluate the device's inherent performance without a human-in-the-loop scenario. The comparison to the predicate device also assesses the algorithm's output directly against the predicate's output. The device itself is described as "an image processing software package to be used by trained professionals" and "does not perform a diagnostic function, but instead allows the users to visualize and analyze DICOM data," which points to it being a tool that supports human interpretation rather than replacing it.

    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • Bench Testing (DWI & DCE): Digital Reference Objects (DROs) proposed by QIBA, which serve as a synthetic, known ground truth for quantitative accuracy.
    • Clinical Testing of Algorithms & Comparison to Predicate: The document does not explicitly define the ground truth for these clinical performance tests. Given that it's comparing the outputs of image processing algorithms, the ground truth would likely involve:
      • Reference standard values: For quantitative parameters like ADC, Ktrans, kep, ve, it would likely involve comparing the device's calculated values against a recognized reference standard or the predicate's output considered as a benchmark.
      • Visual assessment/Expert review: For the performance of segmentation, registration, and motion correction, expert radiologists would visually assess the accuracy of the algorithm's output to determine if it "functioned as intended."
      • The statement "comparison against the predicate device" implies the predicate's output is used as a reference point.

    7. The sample size for the training set:

    • The document does not provide any information on the sample size used for the training set of the algorithms. It focuses solely on the validation and verification performed post-development.

    8. How the ground truth for the training set was established:

    • Since no information on the training set or its sample size is provided, there is also no information on how the ground truth for the training set was established.

    K Number: K181247
    Manufacturer:
    Date Cleared: 2018-11-20 (194 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices
    Why did this record match? Reference Devices: K152602

    Intended Use

    Vitrea CT Brain Perfusion is a non-invasive post-processing application designed to evaluate areas of brain perfusion. The software can calculate cerebral blood flow (CBF), cerebral blood volume (CBV), local bolus timing (i.e., delay of tissue response, time to peak), and mean transit time (MTT) from dynamic CT image data acquired after the injection of contrast media. The package also allows the calculation of regions of interest and mirrored regions, as well as the visual inspection of time density curves. Vitrea CT Brain Perfusion supports the physician in visualizing the apparent blood perfusion in brain tissue affected by acute stroke. Areas of decreased perfusion, as is observed in acute cerebral infarcts, appear as areas of changed signal intensity (lower for both CBF and CBV and higher for time to peak and MTT).

    Device Description

    Vitrea CT Brain Perfusion is a noninvasive post-processing software that calculates cerebral blood flow (CBF), cerebral blood volume (CBV), local bolus timing (i.e., delay of tissue response, time to peak), and mean transit time (MTT) from dynamic CT image data. It displays time density curves, perfusion characteristics in perfusion and summary maps, as well as regions of interest and mirrored regions.
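    The submission names the deconvolution algorithms (the predicate's SVD+ and the new Bayesian method) without giving their internals. For orientation only, here is a minimal truncated-SVD deconvolution of a single voxel's tissue curve against the arterial input function, using the standard indicator-dilution relations; the truncation fraction and all names are assumptions, not the product's implementation.

```python
import numpy as np

def svd_perfusion(tissue, aif, dt, trunc=0.2):
    """Truncated-SVD deconvolution sketch for dynamic CT perfusion (one voxel).

    tissue : contrast concentration curve for the voxel
    aif    : arterial input function at the same time points
    dt     : sampling interval in seconds
    """
    n = len(aif)
    # Discrete convolution matrix built from the AIF (lower-triangular Toeplitz).
    A = dt * np.array([[aif[i - j] if i >= j else 0.0 for j in range(n)]
                       for i in range(n)])
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.where(s > trunc * s.max(), 1.0 / s, 0.0)    # regularization
    k = Vt.T @ (s_inv * (U.T @ np.asarray(tissue)))        # k(t) = CBF * R(t)
    cbf = k.max()                                          # residue function peaks at 1
    cbv = np.trapz(tissue, dx=dt) / np.trapz(aif, dx=dt)   # area ratio
    mtt = cbv / cbf if cbf > 0 else float("nan")           # central volume theorem
    ttp = dt * int(np.argmax(tissue))                      # time to peak of tissue curve
    return cbf, cbv, mtt, ttp
```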

    AI/ML Overview

    Here's an analysis of the acceptance criteria and study information for the Vitrea CT Brain Perfusion device, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided document does not explicitly state quantitative acceptance criteria or a direct performance table for the Vitrea CT Brain Perfusion device with the Bayesian algorithm. Instead, the document focuses on demonstrating substantial equivalence to a previously cleared predicate device (Vitrea CT Brain Perfusion with SVD+ algorithm, K121213) and reference device (Olea Sphere V3.0, K152602) due to the addition of a Bayesian algorithm.

    The core "acceptance" is based on the conclusion that the new device is "as safe and effective" as the predicate. This is supported by:

    • "Algorithm Testing": "The Vitrea CT Brain Perfusion Bayesian algorithm has passed all the verification and validation and is therefore considered validated and acceptable."
    • "External Validation": "Based on the scores provided by the physicians, Vital concluded the Brain Perfusion with Bayesian algorithm is as safe and effective as the already cleared Brain Perfusion with SVD+ algorithm and fulfills its intended use."

    While direct numerical performance metrics are not given, the implicit acceptance criteria are that the device's output (CBF, CBV, TTP, MTT maps and calculations) is comparable and clinically acceptable to that generated by the predicate device, especially in its ability to support the physician in visualizing perfusion in acute stroke.

    2. Sample Size Used for the Test Set and Data Provenance

    The document mentions "External Validation" where "physicians evaluated if the Brain Perfusion with Bayesian algorithm (subject device) was substantially equivalent with the Brain Perfusion with SVD+ algorithm (K121213, predicate device)." However, the specifics of the test set, including:

    • Sample size: Not explicitly stated (e.g., number of patients/cases).
    • Data provenance (country of origin, retrospective/prospective): Not explicitly stated.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    • Number of experts: Not explicitly stated how many "physicians" were involved in the "External Validation."
    • Qualifications of experts: The document only refers to them as "physicians." Specific qualifications (e.g., "radiologist with 10 years of experience") are not provided.

    4. Adjudication Method for the Test Set

    The document does not describe any specific adjudication method (e.g., 2+1, 3+1) for establishing ground truth or evaluating the physician scores in the "External Validation." It simply states that "Based on the scores provided by the physicians, Vital concluded..."

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was an MRMC study done? While an "External Validation" involving "physicians" was performed, the document does not explicitly label it as a formal MRMC comparative effectiveness study in the way this term is typically used for AI-assisted workflows (i.e., comparing human readers with and without AI assistance). The focus was on comparing the Bayesian algorithm's outputs to the predicate's SVD+ algorithm's outputs, as judged by physicians.
    • Effect size of human improvement with AI vs. without AI assistance: Not reported, as the study was not framed as a human-in-the-loop improvement study.

    6. Standalone (Algorithm Only) Performance

    • Was a standalone study done? Yes, the document heavily implies that the "Algorithm Testing" and subsequent "External Validation" primarily focused on the standalone performance of the Bayesian algorithm in generating perfusion maps and calculations. The "External Validation" specifically assessed if the "Brain Perfusion with Bayesian algorithm (subject device) was substantially equivalent with the Brain Perfusion with SVD+ algorithm (K121213, predicate device)." This indicates an evaluation of the algorithm's output itself.

    7. Type of Ground Truth Used

    The "ground truth" for the external validation appears to be the expert consensus/clinical judgment of the participating "physicians" who evaluated the outputs of the Bayesian algorithm compared to the SVD+ algorithm of the predicate device. There is no mention of pathology or outcomes data being used as ground truth for this particular evaluation.

    8. Sample Size for the Training Set

    The document does not provide any information regarding the sample size or characteristics of the training set used for the Bayesian algorithm. As this is a 510(k) for a software update (addition of a new algorithm) to an already cleared device, the submission focuses on demonstrating the safety and effectiveness of the change relative to the predicate, rather than fully detailing the original algorithm development.

    9. How Ground Truth for the Training Set Was Established

    Since information on the training set itself is not provided, the method for establishing its ground truth is also not described in this document.


    K Number: K181773
    Date Cleared: 2018-09-25 (84 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices
    Why did this record match? Reference Devices: K021099, K152602

    Intended Use

    Synapse 3D Optional Tools is medical imaging software used with Synapse 3D Base Tools that is intended to provide trained medical professionals with tools to aid them in reading, interpreting, reporting, and treatment planning. Synapse 3D Optional Tools accepts DICOM compliant medical images acquired from a variety of imaging devices, including CT and MR. This product is not intended for use with or for the primary diagnostic interpretation of Mammography images. In addition to the tools in Synapse 3D Base Tools, Synapse 3D Optional Tools provides:

    • Imaging tools for CT images including virtual endoscopic viewing.

    • Imaging tools for MR images including delayed enhancement image viewing and diffusion-weighted MRI data analysis.

    Device Description

    Synapse 3D Optional Tools is an optional software module that works with Synapse 3D Base Tools (cleared via K120361 on 04/06/2012). Synapse 3D Base Tools is connected through the DICOM standard to medical devices such as CT, MR, CR, US, NM, PT, XA, etc. and to a PACS system storing data generated by these medical devices, and retrieves image data via network communication based on the DICOM standard. The retrieved image data are stored on the local disk managed by Synapse 3D Base Tools, and the associated information is registered in the database and used for display, image processing, analysis, etc.
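    The DICOM-based query/retrieve interaction described here is a standard C-FIND exchange. A minimal study-level query sketch using the pynetdicom library; the host, port, and AE titles are placeholders, not values from the submission.

```python
from pydicom.dataset import Dataset
from pynetdicom import AE
from pynetdicom.sop_class import StudyRootQueryRetrieveInformationModelFind

ae = AE(ae_title="WORKSTATION")  # placeholder calling AE title
ae.add_requested_context(StudyRootQueryRetrieveInformationModelFind)

query = Dataset()
query.QueryRetrieveLevel = "STUDY"
query.PatientID = "12345"        # placeholder patient
query.StudyInstanceUID = ""      # empty = return this attribute in matches

assoc = ae.associate("pacs.example.org", 104, ae_title="PACS")  # placeholder peer
if assoc.is_established:
    for status, identifier in assoc.send_c_find(
            query, StudyRootQueryRetrieveInformationModelFind):
        # 0xFF00/0xFF01 are the DICOM "pending" statuses carrying a match.
        if status and status.Status in (0xFF00, 0xFF01) and identifier:
            print(identifier.StudyInstanceUID)
    assoc.release()
```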

    Synapse 3D Optional Tools provides imaging tools for CT and MR images such as a virtual endoscopic simulator (CT) (referred to as "Endoscopic Simulator"), diffusion-weighted MRI data analysis (MR) (referred to as "IVIM"), and delayed enhancement image viewing (MR) (referred to as "Delayed Enhancement"). The software can display the images on a monitor or print them as hardcopy using a DICOM printer or a Windows printer.

    Synapse 3D Optional Tools runs on Windows standalone and server/client configuration installed on a commercial general-purpose Windows-compatible computer. It offers software tools which can be used by trained professionals, such as radiologists, clinicians or general practitioners to interpret medical images obtained from various medical devices to create reports or develop treatment plans.
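    Of the three named tools, IVIM analysis has a well-known signal model: a biexponential decay that separates perfusion-related pseudo-diffusion from true diffusion. A self-contained fitting sketch on synthetic data; the parameter values, bounds, and b-values are illustrative assumptions, not the product's settings.

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim(b, f, d_star, d):
    """IVIM biexponential model: S(b)/S0 = f*exp(-b*D*) + (1-f)*exp(-b*D)."""
    return f * np.exp(-b * d_star) + (1 - f) * np.exp(-b * d)

b = np.array([0, 10, 20, 50, 100, 200, 400, 800], dtype=float)  # s/mm^2
rng = np.random.default_rng(0)
signal = ivim(b, 0.12, 0.02, 0.0012) + rng.normal(0, 0.005, b.size)  # synthetic voxel

params, _ = curve_fit(ivim, b, signal, p0=[0.1, 0.01, 0.001],
                      bounds=([0, 0.003, 0], [0.5, 0.1, 0.003]))
f, d_star, d = params
print(f"f={f:.3f}, D*={d_star:.4f}, D={d:.5f}")
```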

    AI/ML Overview

    Here's an analysis of the provided text regarding the acceptance criteria and study for the device:

    The provided document (K181773 for Synapse 3D Optional Tools) does not contain a detailed table of acceptance criteria or comprehensive study results for specific performance metrics in the way one might expect for a new AI/CAD device. Instead, it leverages its classification as a "Picture Archiving And Communications System" (PACS) and positions itself as substantially equivalent to predicate devices. This typically means that formal performance studies with detailed acceptance criteria and reported metrics demonstrating specific diagnostic accuracy are not required in the same way as a de novo device or a device making a new diagnostic claim.

    The focus is on demonstrating that the features and technical characteristics are similar to existing cleared devices, and that the software development process and risk management ensure safety and effectiveness.

    Here's a breakdown of the requested information based on the provided text:


    1. Table of acceptance criteria and the reported device performance

    As mentioned above, the document does not present a table of quantitative acceptance criteria for performance metrics (e.g., sensitivity, specificity, AUC) and corresponding reported performance values for the Synapse 3D Optional Tools. The "acceptance criteria" are implied to be fulfilled by following software development processes, risk management, and successful functional and system-level testing, which are designed to ensure the device operates as intended and is substantially equivalent to predicate devices.

    The "reported device performance" is described qualitatively as:
    "Test results showed that all tests passed successfully according to the design specifications. All of the different components of the Synapse 3D Optional Tools software have been stress tested to ensure that the system as a whole provides all the capabilities necessary to operate according to its intended use and in a manner substantially equivalent to the predicate devices."


    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    The document states:
    "benchmark performance testing was conducted using actual clinical images to help demonstrate that the semi-automatic segmentation, detection, and registration functions implemented in Synapse 3D Optional Tools achieved the expected accuracy performance."

    However, it does not specify the sample size of the clinical images used for this benchmark performance testing. It also does not specify the data provenance (e.g., country of origin, retrospective or prospective nature of the data).


    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    The document does not provide information on how ground truth was established for the "actual clinical images" used in benchmark performance testing, nor does it mention the number or qualifications of experts involved in this process.


    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    The document does not specify any adjudication method for establishing ground truth or evaluating the test set.


    5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance

    The document does not mention a multi-reader, multi-case (MRMC) comparative effectiveness study. It explicitly states: "The subject of this 510(k) notification, Synapse 3D Optional Tools did not require clinical studies to support safety and effectiveness of the software." This reinforces the idea that the submission relies on substantial equivalence and non-clinical testing rather than new clinical effectiveness studies involving human readers.


    6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    The document notes that "benchmark performance testing was conducted using actual clinical images to help demonstrate that the semi-automatic segmentation, detection, and registration functions implemented in Synapse 3D Optional Tools achieved the expected accuracy performance." This implies some form of standalone evaluation of these specific functions' accuracy. However, "standalone performance" in the context of diagnostic accuracy (e.g., sensitivity/specificity of an AI model) is not explicitly detailed or quantified. The device is described as providing "tools to aid them [trained medical professionals] in reading, interpreting, reporting, and treatment planning," indicating it's an assistive tool, not a standalone diagnostic AI.


    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    The document does not explicitly state the type of ground truth used for the "actual clinical images" in the benchmark testing. Given the general nature of the tools (segmentation, detection, registration), the ground truth for "accuracy performance" would likely involve expert-defined annotations or measurements on the images themselves, rather than pathology or outcomes data. However, this is an inference, not a stated fact in the document.


    8. The sample size for the training set

    The document does not provide information regarding a training set. This is consistent with a 510(k) for software tools that are substantially equivalent to existing PACS systems, rather than a de novo AI/ML algorithm that requires extensive training data. While it mentions "semi-automatic segmentation, detection, and registration functions," which often involve learned components, the submission focuses on the functionality of these tools as part of a PACS system rather than reporting on the underlying AI model's development details.


    9. How the ground truth for the training set was established

    Since no training set information is provided, there is no information on how ground truth for a training set was established.
