Search Results

Found 8 results

510(k) Data Aggregation

    K Number
    K230772
    Device Name
    Quantib Prostate
    Manufacturer
    Date Cleared
    2023-04-17

    (27 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    Quantib BV

    Intended Use

    Quantib Prostate is image post-processing software that provides the user with processing, visualization, and editing of prostate MRI images. The software facilitates the analysis and study review of MR data sets and provides additional mathematical and/or statistical analysis. The resulting analysis can be displayed in a variety of formats, including images overlaid onto source MRI images.

    Quantib Prostate functionality includes registered multiparametric-MRI viewing, with the option to view images combined into a single image to support visualization. The software can be used for semi-automatic segmentation of anatomical structures and provides volume computations, together with tools for manual editing. PI-RADS scoring is possible using a structured workflow.

    Quantib Prostate is intended to be used by trained medical professionals and provides information that, in a clinical setting, may assist in the interpretation of prostate MR studies. Diagnosis should not be made solely based on the analysis performed using Quantib Prostate.

    Device Description

    Quantib Prostate (QPR) is an extension to the Quantib AI Node (QBX) software platform that enables analysis of prostate MRI scans. QPR makes use of QBX functionality and includes three QPR-specific modules:

    • An automatic processing module that performs input checks, prostate (sub-region) segmentation, multi-parametric MRI (mpMRI) image registration, computation of a biparametric combination image, and optionally crops the images around the prostate.
    • A two-step user-interaction module in which the user can:
      • edit the computed prostate segmentation (QBX functionality) and determine PSA density (QPR-specific functionality).
      • view multi-parametric MRI images (QBX functionality), and segment (QBX functionality) and analyze potential lesions (QPR-specific functionality). This step also shows the prostatic sub-region segmentation and the biparametric combination image overlay (QPR-specific functionality).
    • An automatic processing module that collects all results, and creates the report and DICOM output so that they can be exported back to the user.
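    The PSA density determined in the first user-interaction module follows the standard clinical definition: serum PSA (ng/mL) divided by segmented prostate volume (mL). As a hedged sketch (the summary does not give the device's formula; the function names, voxel spacing, and PSA value below are illustrative), the computation from a binary segmentation mask could look like:

```python
# Sketch of a PSA density calculation under the standard clinical definition
# (serum PSA in ng/mL divided by prostate volume in mL). The function names,
# spacing, and PSA value are hypothetical, not taken from the 510(k) summary.
import numpy as np

def prostate_volume_ml(mask: np.ndarray, spacing_mm: tuple) -> float:
    """Volume of a binary segmentation mask, given voxel spacing in mm."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return float(mask.sum()) * voxel_mm3 / 1000.0  # 1000 mm^3 = 1 mL

def psa_density(psa_ng_ml: float, volume_ml: float) -> float:
    """PSA density = serum PSA / gland volume (ng/mL per mL)."""
    return psa_ng_ml / volume_ml

# Toy case: a 40x40x25-voxel mask at 1 mm isotropic spacing is 40 mL,
# so a serum PSA of 6.0 ng/mL gives a PSA density of 0.15.
mask = np.ones((40, 40, 25), dtype=np.uint8)
vol = prostate_volume_ml(mask, (1.0, 1.0, 1.0))
density = psa_density(6.0, vol)
```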
    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for Quantib Prostate based on the provided FDA 510(k) summary:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document primarily focuses on demonstrating substantial equivalence to the predicate device (Quantib Prostate v2.0) rather than presenting specific, quantitative acceptance criteria with corresponding performance metrics in a direct table format. However, based on the description, we can infer the performance expectations and the reported findings.

    | Acceptance Criteria Category | Specific Acceptance Criterion (Inferred) | Reported Device Performance |
    | Prostate Segmentation | Accuracy at least as good as the predicate device (Quantib Prostate v2.0) | "at least as accurate as that of the predicate device" |
    | Subregion Segmentation (Standalone) | Acceptable Dice overlap and Mean Surface Distance compared to ground truth; agreement comparable to interobserver variability | Bench testing (quantitative metrics) did not reveal issues and showed agreement comparable to interobserver measurements |
    | Subregion Segmentation (Clinical Context) | Judged at least as accurate as the predicate device by radiologists | Radiologists judged sub-regions "at least as accurate as the predicate device was" using a 5-point Likert scale |
    | ROI Localization Initiation (Clinical Context) | Judged at least as accurate as the predicate device by radiologists | Radiologists judged ROI localizations "at least as accurate as the predicate device was" using a 5-point Likert scale |

    2. Sample Size for Test Set and Data Provenance

    • Prostate and Subregion Segmentation Algorithm (Non-clinical / Bench Testing): While a specific test set sample size isn't explicitly stated for this part, the overall prostate and sub-region segmentation algorithms were improved by "updating the methodology and training it on over 400 scans." This suggests the test set for its validation would be drawn from a similar diverse dataset.
    • Clinical Performance Testing (Qualitative): The document does not specify the number of cases used in the clinical performance testing for the subregion segmentations and ROI localization initiation.
    • Data Provenance: Not explicitly stated for either the non-clinical or clinical performance testing. We cannot determine the country of origin or if the data was retrospective or prospective from this document.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    • Non-clinical Performance Testing (Prostate/Subregion Segmentation): Ground truth was established via "manual segmentation." The number and qualifications of the experts performing these manual segmentations are not specified in the document.
    • Clinical Performance Testing (Subregion Segmentation and ROI Localization Initiation): "Radiologists were asked to score the subregion segmentations and ROI initial localizations." The number of radiologists or their specific qualifications (e.g., years of experience, subspecialty) are not provided.

    4. Adjudication Method for the Test Set

    • Non-clinical Performance Testing: The document mentions "comparing the automatic segmentations to their ground truth" and comparing results with "interobserver measurements." This suggests that the ground truth itself might have involved some form of consensus or a single expert's work, but a formal adjudication method (like 2+1 or 3+1) for the ground truth establishment is not described.
    • Clinical Performance Testing: A formal adjudication method is not mentioned. Radiologists scored segmentations and ROI localizations using a 5-point Likert scale, implying individual expert review rather than a consensus process for scoring during the test.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • A formal MRMC comparative effectiveness study comparing human readers with AI assistance versus without AI assistance was not explicitly described for this submission. The clinical performance testing involved radiologists scoring the device's output, but it was not framed as a study to measure improvement in human reader performance with AI assistance. The focus was on demonstrating that the device's performance was at least as good as the predicate.

    6. Standalone Performance (Algorithm Only, Without Human in the Loop)

    • Yes, standalone performance was done for the combined prostate-subregion segmentation algorithm. This is described under "Non-clinical performance testing," where the "stand-alone performance of the sub-region segmentation algorithm" was evaluated by comparing automatic segmentations to ground truth using Dice overlap and Mean Surface Distance.
    • The "semi-automatic clinical performance test of the prostate segmentation has not been repeated" as its accuracy was found to be at least as good as the predicate in non-clinical testing, suggesting that the initial predicate validation (K221106) would have covered this aspect, potentially including standalone evaluation.
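    For reference, the two bench-test metrics named above are standard segmentation measures. A minimal sketch of both, assuming binary NumPy masks and one common formulation built on SciPy's distance transform (the summary names the metrics but not their implementation):

```python
# Sketch of Dice overlap and symmetric Mean Surface Distance for binary
# segmentation masks. This is one common formulation, assumed here; the
# 510(k) summary does not describe the exact computation used.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice_overlap(a: np.ndarray, b: np.ndarray) -> float:
    """2 * |A ∩ B| / (|A| + |B|); 1.0 when both masks are empty."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def mean_surface_distance(a: np.ndarray, b: np.ndarray,
                          spacing=(1.0, 1.0, 1.0)) -> float:
    a, b = a.astype(bool), b.astype(bool)
    surf_a = a & ~binary_erosion(a)  # boundary voxels of each mask
    surf_b = b & ~binary_erosion(b)
    # Distance from each surface voxel to the nearest surface voxel of the
    # other mask, averaged symmetrically over both surfaces.
    d_a = distance_transform_edt(~surf_b, sampling=spacing)[surf_a]
    d_b = distance_transform_edt(~surf_a, sampling=spacing)[surf_b]
    return float((d_a.sum() + d_b.sum()) / (d_a.size + d_b.size))
```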

    7. Type of Ground Truth Used

    • Non-clinical Performance Testing: "Manual segmentation" was used to establish ground truth for prostate and subregion segmentation.
    • Clinical Performance Testing: The "ground truth" here is implied to be the expert judgment of radiologists based on a 5-point Likert scale score, rather than a definitive pathological or outcomes-based truth.

    8. Sample Size for the Training Set

    • The improved prostate and sub-region segmentation algorithms were trained "on over 400 scans."

    9. How the Ground Truth for the Training Set Was Established

    • The document states the training involved "updating the methodology and training it on over 400 scans." It does not explicitly detail the process for establishing the ground truth for these 400+ training scans. However, given the context of segmentation algorithms, it is highly probable that manual segmentations by experts were used to create the ground truth annotations for training.

    K Number
    K221106
    Device Name
    Quantib Prostate
    Manufacturer
    Date Cleared
    2022-05-13

    (28 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    Quantib BV

    Intended Use

    Quantib Prostate is image post-processing software that provides the user with processing, visualization, and editing of prostate MRI images. The software facilitates the analysis and study review of MR data sets and provides additional mathematical and/or statistical analysis. The resulting analysis can be displayed in a variety of formats, including images overlaid onto source MRI images.

    Quantib Prostate functionality includes registered multiparametric-MRI viewing, with the option to view images combined into a single image to support visualization. The software can be used for semi-automatic segmentation of anatomical structures and provides volume computations, together with tools for manual editing. PI-RADS scoring is possible using a structured workflow.

    Quantib Prostate is intended to be used by trained medical professionals and provides information that, in a clinical setting, may assist in the interpretation of prostate MR studies. Diagnosis should not be made solely based on the analysis performed using Quantib Prostate.

    Device Description

    Quantib Prostate is an extension to the Quantib AI Node software platform and enables analysis of prostate MRI scans. Quantib Prostate makes use of Quantib AI Node functionality, and includes the following specific Quantib Prostate modules:

    • An automatic processing module that performs prostate and prostate sub-region segmentation, multi-parametric MRI image registration, and computation of a biparametric combination image.
    • A user-interaction module in which the user can edit and approve the computed prostate segmentation and determine PSA density.
    • A user-interaction module in which the user can view multi-parametric MRI images, segment and analyze potential lesions, and set and view PI-RADS properties. This module also shows the prostatic sub-region segmentation and biparametric combination image overlay.
    • An automatic processing module that collects all results, and creates the report and DICOM output so that they can be exported back to the user.
    AI/ML Overview

    The provided text does not contain a detailed study specifically proving the device meets acceptance criteria, nor does it specify acceptance criteria in a quantifiable table format. It describes the modifications made to the device (Quantib Prostate 2.0 from 1.0) and mentions performance testing to demonstrate substantial equivalence to its predicate.

    However, based on the information provided about the non-clinical and clinical performance testing, we can infer the intent of the acceptance criteria and the nature of the studies conducted. I will reconstruct the table and study description based on the available information, noting where specific quantifiable acceptance criteria are not explicitly stated in the document.

    Inferred Acceptance Criteria and Reported Device Performance

    | Acceptance Criteria Category | Specific Metric (Inferred) | Acceptance Threshold (Inferred) | Reported Device Performance |
    | Non-Clinical Performance (Sub-Region Segmentation) | Dice Overlap Coefficient | Sufficiently high (compared to inter-observer variability) | Not explicitly quantified; "Bench testing did not reveal any issues with the system, demonstrating that the modified device is as safe and effective as the predicate device." |
    | Non-Clinical Performance (Sub-Region Segmentation) | Mean Surface Distance | Sufficiently low (compared to inter-observer variability) | Not explicitly quantified; same bench-testing statement as above. |
    | Non-Clinical Performance (Sub-Region Segmentation) | Statistical comparison to inter-observer measurements | Comparable to inter-observer variability | "results are compared with the inter-observer measurements using statistical tests." (Specific results not given) |
    | Clinical Performance (Sub-Region Segmentation & ROI Initialization) | Qualitative Assessment (Likert Scale) | High quality (e.g., majority of scores at the higher end of the 5-point Likert scale) | "It is concluded that sub-regions and ROI localizations are judged to be of high quality." (Specific scores not given) |
    | General Safety and Effectiveness | Absence of significant issues | No issues identified | "Bench testing did not reveal any issues with the system..." and "By virtue of its intended use and physical and technological characteristics, Quantib Prostate 2.0 is substantially equivalent to a device that has been approved for marketing in the United States. The performance data show that Quantib Prostate 2.0 is as safe and effective as the predicate device." |

    Study Information Pertaining to Device Performance:

    1. Sample sizes used for the test set and the data provenance:

      • Sub-region segmentation (non-clinical testing): The exact sample size for the image dataset used for comparing automatic sub-region segmentation to ground truth is not specified.
      • Clinical performance test: The exact sample size (number of cases or radiologists) for the qualitative assessment is not specified.
      • Data Provenance: Not explicitly stated (e.g., country of origin, retrospective or prospective).
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • Ground Truth for Sub-region Segmentation: Established by "manual segmentation." The number of experts and their qualifications (e.g., "radiologist with 10 years of experience") are not specified. The text mentions "inter-observer measurements" for context of the Dice overlap and Mean Surface Distance, implying multiple human experts were involved in generating or evaluating "manual segmentations."
      • Ground Truth for Clinical Performance (Likert Scale): The "radiologists were asked to score." The number of radiologists and their specific qualifications are not specified.
    3. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

      • The document implies ground truth for quantitative metrics was manual segmentation, presumably from one or more experts. For the qualitative clinical assessment, radiologists scored the output, but there is no mention of a formal adjudication process (e.g., arbitration for discrepancies) if multiple radiologists rated the same case. It is likely a consensus or averaged score was used for "high quality" judgment, but no specific method is detailed.
    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of human reader improvement with AI versus without AI assistance:

      • No MRMC comparative effectiveness study involving human readers with vs. without AI assistance is explicitly described. The testing focuses on the performance of the algorithm additions themselves (sub-region segmentation, ROI localization initiation) and their qualitative assessment by radiologists, rather than a comparative study of radiologist performance. The software is described as "assisting" in interpretation, but no data on improved human reader performance with the assistance is provided in this document.
    5. If standalone performance testing (i.e., algorithm only, without a human in the loop) was done:

      • Yes, for the sub-region segmentation algorithm, "Bench testing of the software was done to show that the system is suitable for its intended use and to evaluate the stand-alone performance of the sub-region segmentation algorithm. This was done by comparing the automatic sub-region segmentation to a ground truth and calculating the Dice overlap and Mean Surface Distance." This is a standalone performance evaluation.
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc):

      • For the non-clinical quantitative performance of sub-region segmentation, the ground truth was manual segmentation (presumably by experts).
      • For the clinical qualitative performance, the ground truth for "high quality" was based on radiologist scoring/judgment (Likert scale).
    7. The sample size for the training set:

      • Not specified within this document. This document focuses on the performance testing of the modified device, not its development or training data.
    8. How the ground truth for the training set was established:

      • Not specified within this document as training set details are not provided.

    K Number
    K202501
    Device Name
    Quantib Prostate
    Manufacturer
    Date Cleared
    2020-10-11

    (41 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    Quantib BV

    Intended Use

    Quantib Prostate is image post-processing software that provides the user with processing, visualization, and editing of prostate MRI images. The software facilitates the analysis and study review of MR data sets and provides additional mathematical and/or statistical analysis. The resulting analysis can be displayed in a variety of formats, including images overlaid onto source MRI images.

    Quantib Prostate functionality includes registered multiparametric-MRI viewing, with the option to view images combined into a single image to support visualization. The software can be used for semi-automatic segmentation of anatomical structures and provides volume computations, together with tools for manual editing. PI-RADS scoring is possible using a structured workflow.

    Quantib Prostate is intended to be used by trained medical professionals and provides information that, in a clinical setting, may assist in the interpretation of prostate MR studies. Diagnosis should not be made solely based on the analysis performed using Quantib Prostate.

    Device Description

    Quantib Prostate is an extension to the Quantib AI Node software platform and enables analysis of prostate MRI scans. Quantib Prostate makes use of Quantib AI Node functionality, and includes the following specific Quantib Prostate modules:

    • An automatic processing module that performs prostate segmentation and multi-parametric MRI image registration.
    • A user-interaction module in which the user can edit and approve the computed prostate segmentation and determine PSA density.
    • A user-interaction module in which the user can view multi-parametric MRI images, and segment and analyze potential lesions. This extension will also apply a mathematical operation on the input images to combine information from the MRI sequences into a single combination image.
    • An automatic processing module that collects all results for exporting and transferring back to the user.
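    The "mathematical operation" that fuses the MRI sequences into a single combination image is not specified in the summary. One purely illustrative reading, assuming registered same-shape volumes, is a per-voxel blend of intensity-rescaled sequences (the sequence names, weights, and functions below are hypothetical, not Quantib's actual method):

```python
# Hypothetical sketch of a combination image: rescale two registered
# sequences (e.g. T2W and ADC) to [0, 1] and blend them per voxel.
# The actual operation used by Quantib Prostate is not disclosed here.
import numpy as np

def rescale01(img: np.ndarray) -> np.ndarray:
    """Linearly rescale an image to the [0, 1] range."""
    lo, hi = float(img.min()), float(img.max())
    if hi <= lo:
        return np.zeros_like(img, dtype=float)
    return (img - lo) / (hi - lo)

def combination_image(t2w: np.ndarray, adc: np.ndarray, w: float = 0.5) -> np.ndarray:
    """Weighted per-voxel blend of two registered, same-shape volumes."""
    assert t2w.shape == adc.shape, "sequences must be registered and same shape"
    return w * rescale01(t2w) + (1.0 - w) * rescale01(adc)
```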
    AI/ML Overview

    The provided text describes the Quantib Prostate device and its substantial equivalence to a predicate device (DynaCAD). However, it does not contain the specific details required to fully address all points of your request regarding acceptance criteria and the study that proves the device meets those criteria.

    Here's an analysis of what is available and what is missing:

    1. A table of acceptance criteria and the reported device performance

    The document states: "The testing results support that all the system requirements have met their pre-defined acceptance criteria." and "Bench testing did not reveal any issues with the system, demonstrating that the performance of Quantib Prostate is as safe and effective as its predicate devices."

    And for clinical performance: "It was concluded that a trained medical professional using Quantib Prostate performs better or equal in prostate segmentation than without use of Quantib Prostate."

    However, the document does NOT provide a table detailing specific numerical acceptance criteria (e.g., minimum accuracy, sensitivity, specificity, or volumetric error thresholds) or the reported numerical performance metrics from either the non-clinical or clinical studies.

    2. Sample sizes used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    • Test set sample size: Not specified.
    • Data provenance: Not specified.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    Not specified. The document mentions "correction by trained clinicians" for the prostate segmentation algorithm in a clinical use context for non-clinical testing, and "trained medical professional" for clinical testing, but no details on their number or qualifications for ground truth establishment.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    Not specified.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of human reader improvement with AI versus without AI assistance

    The document mentions: "comparing the performance of a clinician using Quantib Prostate to segment a prostate with a clinician not using Quantib Prostate is a reasonable way to prove safety and effectiveness of the semi-automatic segmentation algorithm." And later: "It was concluded that a trained medical professional using Quantib Prostate performs better or equal in prostate segmentation than without use of Quantib Prostate."

    This strongly suggests a comparative study was performed. However:

    • The exact design (MRMC vs. other comparative study) is not explicitly stated.
    • The effect size of improvement (e.g., specific metrics like Dice coefficient improvement, time saved, or diagnostic accuracy change) is NOT provided. The statement "better or equal" is qualitative.

    6. If standalone performance testing (i.e., algorithm only, without a human in the loop) was done

    Yes, for non-clinical performance, it states: "This included characterization of the stand-alone performance of the prostate segmentation algorithm."

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    Not explicitly stated. It refers to "correction by trained clinicians" when discussing the non-clinical performance of the prostate segmentation algorithm, which implies expert consensus or expert-corrected segmentations were used as ground truth for evaluating the algorithm's standalone performance within a clinical context.

    8. The sample size for the training set

    Not specified.

    9. How the ground truth for the training set was established

    Not specified. (Likely similar to the test set ground truth, but not confirmed).


    Summary of Available vs. Missing Information:

    | Information Requested | Available in Document? | Details |
    | 1. Acceptance criteria & reported performance table | No | States acceptance criteria were met and performance is "better or equal" to unassisted, but no specific numerical criteria or reported metrics are provided. |
    | 2. Test set sample size & data provenance | No | Not specified. |
    | 3. Number & qualifications of experts for test set ground truth | No | Mentions "trained clinicians" and "trained medical professional" but no specifics on number or qualifications for ground truth creation. |
    | 4. Adjudication method for test set | No | Not specified. |
    | 5. MRMC comparative effectiveness study & effect size | Partially (study hinted, but no effect size) | A comparative study was likely done ("clinician using Quantib Prostate... with a clinician not using"), but the specific study design (MRMC) is not confirmed, and no effect size quantification is provided beyond "better or equal." |
    | 6. Standalone (algorithm-only) performance study | Yes | "characterization of the stand-alone performance of the prostate segmentation algorithm." |
    | 7. Type of ground truth (test set) | Implicit (expert-corrected/consensus) | "correction by trained clinicians" is mentioned for non-clinical testing, implying expert-edited or expert consensus segmentations served as ground truth. |
    | 8. Training set sample size | No | Not specified. |
    | 9. Ground truth establishment for training set | No | Not specified. |

    K Number
    K200899
    Device Name
    Quantib AI Node
    Manufacturer
    Date Cleared
    2020-06-15

    (73 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    Quantib BV

    Intended Use

    Quantib AI Node is a software platform that provides visualization tools and enables external postprocessing extensions for medical images.

    The software platform is designed to support in qualitative and quantitative measurement, analysis, and reporting of clinical data.

    The software platform provides means for storing of data and transferring data from and into other systems such as PACS. The software platform provides an interface to integrate processing extensions and custom input/output modules.

    Quantib AI Node functionality includes:

    • Interface for multi-modality and multi-vendor input/output of data, such as DICOM data
    • Initiation of extensions to process the data based on properties of the data
    • Interface for extensions that provide custom data import/output, post-processing, and user interface functionality
    • User interface for visualization and annotation of medical images, and for correction and confirmation of results generated by post-processing extensions

    Quantib AI Node is intended to be used by trained medical professionals.

    Device Description

    Quantib AI Node (QBX) is a stand-alone software platform that enables external post-processing extensions for medical images and provides visualization and annotation tools. It can automatically process data received via a DICOM connection and automatically export results to external DICOM nodes. It allows for configuring workflows that can contain user-interaction steps to review and correct automatic results.

    AI/ML Overview

    The provided document is a 510(k) summary for the Quantib AI Node. It does not contain detailed information about a specific study comparing device performance against acceptance criteria with quantitative results, sample sizes, expert qualifications, or adjudication methods.

    This document describes the Quantib AI Node as a software platform that provides visualization and annotation tools and enables external post-processing extensions for medical images. It's intended to support trained medical professionals in qualitative and quantitative measurement, analysis, and reporting of clinical data. It is not an AI algorithm that performs diagnostic tasks itself, but rather a platform to integrate such extensions.

    Therefore, the typical metrics and study design elements you've requested (e.g., sensitivity, specificity, reader improvement with AI, standalone performance of an algorithm) are not applicable to this specific 510(k) submission, as it concerns a software platform, not a diagnostic AI algorithm.

    However, I can extract information related to its "performance" in the context of a software platform:

    1. Table of Acceptance Criteria and Reported Device Performance:

    Since this is a software platform, the "performance" relates to its functionality, adherence to standards, and verification/validation.

    | Acceptance Criteria Category | Reported Device "Performance" (Meeting Criteria) |
    | Non-clinical Performance / Functionality | Bench testing performed to test the functionality of the system and measurement tools. Did not reveal any issues, demonstrating performance is as safe and effective as predicate devices. |
    | Standards Met | Compliance with ANSI AAMI ISO 14971:2007/(R)2010 (Risk Management); ANSI AAMI IEC 62304:2006/A1:2016 (Software Life Cycle Processes); ANSI AAMI IEC 62366-1:2015 (Usability Engineering) |
    | Software Verification and Validation | Tested in accordance with verification and validation processes and planning. Testing results support that all system requirements have met their pre-defined acceptance criteria. |
    | Safety Implications | Differences from predicate devices do not affect safety, based on Failure Mode and Effects Analysis (FMEA) and risk category classification. Does not introduce new safety issues. |
    | Compatibility | Conforms to the NEMA PS 3.1-3.20 (2016) DICOM standard set. DICOM conformance statement included. |
    | Ruler Tool | Available. |
    | ROI Volume Measurement | Available and equal to ROI measurements in predicate devices. |
    | Required Input | DICOM-compatible data. |

    2. Sample Size Used for the Test Set and Data Provenance:

    This information is not provided in the document as it's not relevant for a software platform's 510(k) where the primary focus is on functionality, safety, and equivalence to predicates, rather than diagnostic accuracy on a specific disease with a test set of medical images. The testing described is "bench testing" of the software system.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:

    This information is not provided and is not applicable to this type of software platform submission. Ground truth established by experts is typically a requirement for AI algorithms performing interpretations.

    4. Adjudication Method for the Test Set:

    This information is not provided and is not applicable.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done:

    No, an MRMC comparative effectiveness study was not done and is not applicable for this software platform. The document focuses on demonstrating substantial equivalence in terms of functionality and safety to predicate devices, not on the clinical effectiveness or reader improvement of a specific AI algorithm.

    6. If a Standalone (algorithm only without human-in-the-loop performance) was done:

    While the document states the Quantib AI Node is a "stand-alone software platform," this refers to its independence as a software application, not a standalone diagnostic algorithm performance. The document itself makes it clear that the platform "does not make diagnoses" and provides tools for human professionals. Therefore, a standalone performance study in the context of diagnostic accuracy was not performed or needed for this submission.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    This information is not provided and is not applicable. The "ground truth" for this software platform relates to whether its functionalities (e.g., measurement tools, data handling, interfaces) perform as designed, not whether it accurately detects a disease.

    8. The Sample Size for the Training Set:

    This information is not provided and is not applicable. The Quantib AI Node is a platform; it is not itself an AI algorithm that undergoes training. Any external post-processing extensions integrated into the platform might be AI-based and would have their own training sets, but that is outside the scope of this platform's 510(k) submission.

    9. How the Ground Truth for the Training Set was Established:

    This information is not provided and is not applicable.


    K Number: K182564
    Device Name: Quantib ND
    Manufacturer (Applicant): Quantib BV
    Date Cleared: 2018-12-27 (100 days)
    Regulation Number: 892.2050

    Intended Use

    Quantib™ ND is a non-invasive medical imaging processing application that is intended for automatic labeling, visualization, and volumetric quantification of segmentable brain structures from a set of magnetic resonance (MR) images. The Quantib™ ND output consists of segmentations, visualizations and volumetric measurements of brain structures and white matter hyperintensities. Volumetric measurements may be compared to reference centile data. It is intended to provide the trained medical professional with complementary information for the evaluation and assessment of MR brain images and to aid the trained medical professional in quantitative reporting. Quantib™ ND is a software application on top of Myrian®.

    Device Description

    Quantib™ ND is a post-processing analysis module for Myrian®, which provides 3D image visualization tools that create and display user-defined views and streamlines interpretation and reporting. It is intended for automatic labeling, visualization, and volumetric quantification of identifiable brain structures from magnetic resonance images (a 3D T1-weighted MR image, with an additional T2-weighted FLAIR MR image for white matter hyperintensities (WMH) segmentation). The segmentation system relies on a number of atlases, each consisting of a 3D T1-weighted MR image and a label map dividing the MR image into different tissue segments. Quantib™ ND provides quantitative information on both the absolute and relative volume of the segmented regions. The automatic WMH segmentation is to be reviewed and, if necessary, edited by the user before validation of the segmentation, after which volumetric information is accessible. Quantib ND consists of Quantib ND Baseline, which provides analysis of images of one time-point, and Quantib ND Follow-Up, which provides longitudinal analysis of images of two time-points. Quantib ND Follow-Up can only process images that have been processed by Quantib ND Baseline. Quantib ND is intended to provide the trained medical professional with complementary information for the evaluation and assessment of MR brain images and to aid the radiology specialist in quantitative reporting.
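    The atlas-based approach described above registers several labeled atlases to the target scan and fuses the propagated label maps into one segmentation. The submission does not state which fusion rule is used, so the following is only an illustrative sketch of one common choice, per-voxel majority voting, with registration omitted and all names hypothetical:

```python
import numpy as np

def majority_vote(label_maps):
    """Per voxel, choose the label that most registered atlases agree on."""
    stacked = np.stack(label_maps)             # shape: (n_atlases, *image_shape)
    n_labels = int(stacked.max()) + 1
    # Count, for each candidate label, how many atlases voted for it per voxel
    votes = np.stack([(stacked == k).sum(axis=0) for k in range(n_labels)])
    return votes.argmax(axis=0)                # ties resolve to the lower label

# Three hypothetical atlas label maps already registered to a tiny
# 1x4 target (0 = background, 1 = grey matter, 2 = white matter):
atlases = [np.array([0, 1, 2, 2]),
           np.array([0, 1, 1, 2]),
           np.array([1, 1, 2, 2])]
print(majority_vote(atlases))  # [0 1 2 2]
```

    Real multi-atlas pipelines often weight votes by registration quality or local image similarity, but the majority rule above captures the basic fusion idea.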

    AI/ML Overview

    Here's a summary of the acceptance criteria and the study that proves the device meets them, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state acceptance criteria in the form of pre-defined thresholds for Dice index or absolute difference of relative volumes. However, the performance data is presented against manual segmentations, implying that the acceptance criteria are generally "good agreement" or "sufficient similarity" to manual segmentations, as judged by the provided metrics. For the purpose of this table, I'll assume that the reported values demonstrate that the device met an implicit acceptance standard.

    | Brain Structure / Metric | Acceptance Criteria (Implicit) | Reported Device Performance (Mean ± Std. Dev.) |
    | --- | --- | --- |
    | Brain Tissue Segmentations | | |
    | Brain Dice Index | "Good agreement" | 0.96 ± 0.01 |
    | Brain Absolute Diff. of Rel. Volumes [pp] | "Good agreement" | 1.7 ± 1.3 |
    | CSF Dice Index | "Good agreement" | 0.78 ± 0.05 |
    | CSF Absolute Diff. of Rel. Volumes [pp] | "Good agreement" | 1.8 ± 1.3 |
    | ICV Dice Index | "Good agreement" | 0.98 ± 0.01 |
    | Hippocampus Segmentations | | |
    | Hippocampus total Dice Index | "Good agreement" | 0.84 ± 0.03 |
    | Hippocampus total Absolute Diff. of Rel. Volumes [pp] | "Good agreement" | 0.03 ± 0.02 |
    | Hippocampus right Dice Index | "Good agreement" | 0.84 ± 0.03 |
    | Hippocampus right Absolute Diff. of Rel. Volumes [pp] | "Good agreement" | 0.01 ± 0.01 |
    | Hippocampus left Dice Index | "Good agreement" | 0.84 ± 0.03 |
    | Hippocampus left Absolute Diff. of Rel. Volumes [pp] | "Good agreement" | 0.01 ± 0.01 |
    | Lobe Segmentations (Dataset C) | | |
    | Frontal lobe total Dice Index | "Good agreement" | 0.95 ± 0.01 |
    | Frontal lobe total Absolute Diff. of Rel. Volumes [pp] | "Good agreement" | 1.95 ± 0.90 |
    | Occipital lobe total Dice Index | "Good agreement" | 0.88 ± 0.03 |
    | Occipital lobe total Absolute Diff. of Rel. Volumes [pp] | "Good agreement" | 0.87 ± 0.75 |
    | Parietal lobe total Dice Index | "Good agreement" | 0.89 ± 0.03 |
    | Parietal lobe total Absolute Diff. of Rel. Volumes [pp] | "Good agreement" | 2.81 ± 1.13 |
    | Temporal lobe total Dice Index | "Good agreement" | 0.91 ± 0.01 |
    | Temporal lobe total Absolute Diff. of Rel. Volumes [pp] | "Good agreement" | 1.33 ± 0.76 |
    | Cerebellum total Dice Index | "Good agreement" | 0.98 ± 0.01 |
    | Cerebellum total Absolute Diff. of Rel. Volumes [pp] | "Good agreement" | 0.47 ± 0.20 |
    | White Matter Hyperintensities (WMH) | | |
    | WMH Dice Overlap | "Good agreement" | 0.61 ± 0.13 |
    | WMH Absolute Diff. of Rel. Volumes [pp] | "Good agreement" (for non-CE cases) | 0.2 ± 0.2 |
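    The two agreement metrics reported above can be sketched as follows. The function names are illustrative, the toy 1D arrays stand in for 3D voxel masks, and relative volumes are expressed as a percentage of an intracranial volume (ICV) mask, as in the brain-volumetry comparisons:

```python
import numpy as np

def dice_index(auto_mask, manual_mask):
    """Dice = 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap."""
    a = auto_mask.astype(bool)
    b = manual_mask.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

def abs_diff_relative_volume_pp(auto_mask, manual_mask, icv_mask):
    """Absolute difference of volumes relative to ICV, in percentage points."""
    icv = icv_mask.astype(bool).sum()
    rel_auto = 100.0 * auto_mask.astype(bool).sum() / icv
    rel_manual = 100.0 * manual_mask.astype(bool).sum() / icv
    return abs(rel_auto - rel_manual)

# Toy 1D "masks" standing in for 3D voxel segmentations:
auto = np.array([1, 1, 1, 0, 0, 0, 0, 0])
manual = np.array([1, 1, 0, 0, 0, 0, 0, 0])
icv = np.ones(8)
print(dice_index(auto, manual))                        # 2*2/(3+2) = 0.8
print(abs_diff_relative_volume_pp(auto, manual, icv))  # |37.5 - 25.0| = 12.5
```

    In practice both metrics would be computed per structure (e.g. per lobe or per hippocampus side) over full 3D volumes and then averaged across the test set, yielding the mean ± standard deviation values in the table.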

    2. Sample Sizes Used for the Test Set and Data Provenance

    • Brain Tissue, CSF, ICV (Dataset A):
      • Sample Size: 33 T1w MR images.
      • Data Provenance: "carefully selected to include data from multiple vendors and a series of representative scan settings." No specific country of origin or retrospective/prospective status is mentioned, but the description implies a historical or retrospective collection.
    • Hippocampus (Dataset B):
      • Sample Size: 89 T1w images.
      • Data Provenance: Not explicitly detailed beyond being T1w images. Implied retrospective.
    • Lobes (Dataset C):
      • Sample Size: 13 T1w MR images.
      • Data Provenance: Not explicitly detailed. Implied retrospective.
    • White Matter Hyperintensities:
      • Sample Size: 45 3D T1w images (7 contrast-enhanced), each with corresponding T2w FLAIR images.
      • Data Provenance: "represented various scan settings." Implied retrospective.
      • Note: The absolute difference of relative volumes for WMH was computed over 38 cases (those without contrast-enhancement).

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    • The document states that the segmentations were compared to "manual segmentations."
    • It does not specify the number of experts who performed these manual segmentations nor their qualifications (e.g., radiologist with X years of experience).

    4. Adjudication Method for the Test Set

    • The document only mentions "manual segmentations" as the ground truth. It does not provide any information about an adjudication method (such as 2+1, 3+1, or none) for these manual segmentations. It implies a single manual segmentation was used as the reference.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

    • No, the provided text does not describe a multi-reader multi-case (MRMC) comparative effectiveness study evaluating how much human readers improve with AI vs. without AI assistance. The study focuses solely on the standalone performance of the AI algorithm compared to manual segmentations.

    6. If a Standalone Study (Algorithm Only Without Human-in-the-Loop Performance) Was Done

    • Yes, a standalone performance study was done. The "Algorithm Performance" section details the comparison of the Quantib™ ND algorithm's segmentations and volume measurements against manual segmentations, without human-in-the-loop interaction with the AI.

    7. The Type of Ground Truth Used

    • The type of ground truth used was expert manual segmentation. The text explicitly states, "To validate the quality of Quantib™ ND volume measurements and segmentations, these were compared to manual segmentations of the same scan and their derived volumes."

    8. The Sample Size for the Training Set

    • The document does not provide information regarding the sample size used for the training set of the Quantib™ ND algorithm. It only discusses the test sets used for validation.

    9. How the Ground Truth for the Training Set Was Established

    • The document does not provide information on how the ground truth for the training set was established. It only details the method for establishing ground truth for the validation/test sets (manual segmentations).

    K Number: K173939
    Device Name: Quantib Brain
    Manufacturer (Applicant): Quantib BV
    Date Cleared: 2018-03-09 (73 days)
    Regulation Number: 892.2050

    Intended Use

    Quantib™ Brain is a non-invasive medical imaging processing application that is intended for automatic labeling, visualization, and volumetric quantification of segmentable brain structures from a set of magnetic resonance (MR) images. The Quantib™ Brain output consists of segmentations, visualizations and volumetric measurements of grey matter (GM), white matter (WM), and cerebrospinal fluid (CSF). The output also visualizes and quantifies white matter hyperintensity (WMH) candidates. Users need to review and if necessary, edit WMH candidates using the provided tools, before validation of the WMHs. It is intended to provide the trained medical professional with complementary information for the evaluation and assessment of MR brain images and to aid the trained medical professional in quantitative reporting. Quantib™ Brain is a post-processing plugin for the GE Advantage Workstation (AW 4.7) or AW Server (AWS 3.2) platforms.

    Device Description

    Quantib™ Brain is post-processing analysis software for the GE Advantage Workstation (AW 4.7) and AW Server (AWS 3.2) platforms using Volume Viewer Apps. 13.0 Ext 4 (or higher). It is intended for automatic labeling, visualization, and volumetric quantification of identifiable brain structures from magnetic resonance images (a 3D T1-weighted MR image, with an additional T2-weighted FLAIR MR image for white matter hyperintensities (WMH) segmentation). The segmentation system relies on a number of atlases, each consisting of a 3D T1-weighted MR image and a label map dividing the MR image into different tissue segments. Quantib™ Brain provides quantitative information on both the absolute and relative volume of the segmented regions. The automatic WMH segmentation is to be reviewed and, if necessary, edited by the user before validation of the segmentation, after which volumetric information is accessible. Longitudinal analysis can be performed for the brain tissue segmentation and WMH segmentation in order to compare multiple exams of an individual patient. Quantib Brain is intended to provide the trained medical professional with complementary information for the evaluation and assessment of MR brain images and to aid the radiology specialist in quantitative reporting.

    AI/ML Overview

    Here's an analysis of the acceptance criteria and study details for QuantibTM Brain 1.3 based on the provided document:

    1. Table of Acceptance Criteria and Reported Device Performance:

    | Acceptance Criteria (Implicit from Study) | Reported Device Performance |
    | --- | --- |
    | Brain Volumetry (GM, WM, CSF) | |
    | Dice index closer to 1 (perfect overlap) | CSF: 0.78 ± 0.05; GM: 0.84 ± 0.02; WM: 0.86 ± 0.02 |
    | Absolute difference of relative volumes (lower is better, implied target | CSF: 1.8 ± 1.0 pp |

    K Number: K163013
    Manufacturer (Applicant): Quantib BV
    Date Cleared: 2017-01-06 (70 days)
    Regulation Number: 892.2050

    Intended Use

    Quantib™ Brain is a non-invasive medical imaging processing application that is intended for automatic labeling, visualization, and volumetric quantification of segmentable brain structures from a set of magnetic resonance (MR) images. The Quantib™ Brain output consists of segmentations, visualizations and volumetric measurements of grey matter (GM), white matter (WM), and cerebrospinal fluid (CSF). The output also visualizes white matter hyperintensity (WMH) candidates. Users need to review and if necessary, edit WMH candidates using the provided tools, before validation of the WMHs. It is intended to provide the trained medical professional with complementary information for the evaluation and assessment of MR brain images and to aid the trained medical professional in quantitative reporting. Quantib™ Brain is a post-processing plugin for the GE Advantage Workstation (AW 4.7) or AW Server (AWS 3.2) platforms.

    Device Description

    Quantib™ Brain is post-processing analysis software for the GE Advantage Workstation (AW 4.7) and AW Server (AWS 3.2) platforms using Volume Viewer Apps. 12.3 Ext 8 (or higher). It is intended for automatic labeling, visualization, and volumetric quantification of identifiable brain structures from magnetic resonance images (a 3D T1-weighted MR image, with an additional T2-weighted FLAIR MR image for white matter hyperintensities (WMH) segmentation). The segmentation system relies on a number of atlases, each consisting of a 3D T1-weighted MR image and a label map dividing the MR image into different tissue segments. Quantib™ Brain provides quantitative information on both the absolute and relative volume of the segmented regions. The automatic WMH segmentation is to be reviewed and, if necessary, edited by the user before validation of the segmentation, after which volumetric information is accessible. Longitudinal analysis can be performed for the brain tissue segmentation and WMH segmentation in order to compare multiple exams of an individual patient. Quantib Brain is intended to provide the trained medical professional with complementary information for the evaluation and assessment of MR brain images and to aid the radiology specialist in quantitative reporting.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study information for Quantib™ Brain 1.2, based on the provided document:

    1. Table of Acceptance Criteria and Reported Device Performance:

    The document primarily focuses on the validation of a newly added algorithm for classifying White Matter Hyperintensities (WMH) as consistent, new, or disappearing between longitudinal scans. The overall performance of core segmentation algorithms (brain volumetry and WMH) is stated as unchanged from the predicate device.

    | Acceptance Criteria (Implicit for new WMH labeling algorithm) | Reported Device Performance (Quantib™ Brain 1.2) |
    | --- | --- |
    | Accurate labeling of WMHs as consistent, new, or disappearing in longitudinal comparison. | The automatic labeling of WMHs was identical to manual labeling for 99.6% of the WMH volume. |
    | No impact on the safety of the device. | "The changes made in Quantib™ Brain 1.2 do not affect the safety of the device." |
    | Continued performance of existing algorithms. | "The performance of the already existing algorithms did not change." |
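    The summary does not describe how the algorithm decides that a lesion is consistent, new, or disappearing between exams. One plausible rule is overlap between connected components of the baseline and follow-up masks; the sketch below makes that assumption (it is not the cleared device's actual method) and uses scipy's connected-component labeling:

```python
import numpy as np
from scipy import ndimage

def label_wmh_changes(baseline, followup):
    """Classify each follow-up lesion as 'consistent' (overlaps a baseline
    lesion) or 'new', and each baseline lesion with no follow-up overlap
    as 'disappearing'. Returns lesion counts per category."""
    base_lbl, n_base = ndimage.label(baseline)
    fup_lbl, n_fup = ndimage.label(followup)
    consistent = new = 0
    for i in range(1, n_fup + 1):
        lesion = fup_lbl == i
        if np.any(baseline[lesion]):
            consistent += 1
        else:
            new += 1
    disappearing = sum(
        1 for j in range(1, n_base + 1)
        if not np.any(followup[base_lbl == j])
    )
    return {"consistent": consistent, "new": new, "disappearing": disappearing}

# Tiny 2D masks standing in for 3D FLAIR lesion segmentations:
baseline = np.array([[1, 1, 0, 0, 0],
                     [0, 0, 0, 0, 0],
                     [0, 0, 0, 1, 1]])
followup = np.array([[1, 1, 0, 0, 0],
                     [0, 0, 0, 0, 0],
                     [1, 0, 0, 0, 0]])
print(label_wmh_changes(baseline, followup))
# {'consistent': 1, 'new': 1, 'disappearing': 1}
```

    Note that the reported 99.6% figure is stated per WMH volume rather than per lesion count, so the actual validation likely compared labeled voxel volumes, not just component counts as in this sketch.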

    2. Sample Size Used for the Test Set and Data Provenance:

    • Test Set Sample Size: 12 datasets.
    • Data Provenance: The document states "12 datasets of different subjects, each consisting of a baseline exam and 1 to 3 follow-up exams." It does not explicitly state the country of origin or if the data was retrospective or prospective. However, given the nature of longitudinal studies, it's highly likely to be retrospective data collected over time from individual patients.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts:

    The document states "manual labeling of these WMHs" was used for comparison. It does not specify the number of experts or their qualifications (e.g., radiologist with X years of experience).

    4. Adjudication Method for the Test Set:

    The document mentions "manual labeling" for comparison but does not detail an adjudication method (e.g., 2+1, 3+1, none) among multiple experts, suggesting the ground truth was established by a single manual labeling process, or the details of such a process are not provided.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done:

    No, an MRMC comparative effectiveness study is not mentioned in the provided text. The evaluation focuses on the algorithm's standalone performance compared to manual labeling, not on human reader performance with or without AI assistance.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done:

    Yes, a standalone performance evaluation of the new WMH labeling algorithm was done. The study assessed the automatic labeling of WMHs against manual labeling, without a human-in-the-loop component.

    7. The Type of Ground Truth Used:

    The ground truth used for the WMH labeling algorithm was expert manual labeling. The document states, "The automatic labeling of WMHs was for 99.6% of the WMH volume identical to manual labeling of these WMHs."

    8. The Sample Size for the Training Set:

    The document does not provide any information regarding the training set sample size for the new WMH labeling algorithm or for the existing core algorithms.

    9. How the Ground Truth for the Training Set Was Established:

    The document does not provide any information on how the ground truth for the training set was established, as details about the training set itself are absent.


    K Number: K153351
    Device Name: Quantib Brain 1
    Manufacturer (Applicant): QUANTIB BV
    Date Cleared: 2016-06-17 (210 days)
    Regulation Number: 892.2050

    Intended Use

    Quantib™ Brain is a non-invasive medical imaging processing application that is intended for automatic labeling, visualization, and volumetric quantification of segmentable brain structures from a set of magnetic resonance (MR) images. The Quantib™ Brain output consists of segmentations, visualizations and volumetric measurements of grey matter (GM), white matter (WM), and cerebrospinal fluid (CSF). The output also visualizes and quantifies white matter hyperintensity (WMH) candidates. Users need to review and if necessary, edit WMH candidates using the provided tools, before validation of the WMHs. It is intended to provide the trained medical professional with complementary information for the evaluation and assessment of MR brain images and to aid the trained medical professional in quantitative reporting. Quantib™ Brain is a post-processing plugin for the GE Advantage Workstation (AW 4.7) or AW Server (AWS 3.2) platforms.

    Device Description

    Quantib™ Brain is post-processing analysis software for the GE Advantage Workstation (AW 4.7) or AW Server (AWS 3.2) platforms using Volume Viewer Apps. 12.3 Ext 6. It is intended for automatic labeling, visualization, and volumetric quantification of identifiable brain structures from magnetic resonance images (a 3D T1-weighted MR image, with an additional T2-weighted FLAIR MR image for white matter hyperintensities (WMH) segmentation). The segmentation system relies on a number of atlases, each consisting of a 3D T1-weighted MR image and a label map dividing the MR image into different tissue segments. Quantib™ Brain provides quantitative information on both the absolute and relative volume of the segmented regions. The automatic WMH segmentation is to be reviewed and, if necessary, edited by the user before validation of the segmentation, after which volumetric information is accessible. It is intended to provide the trained medical professional with complementary information for the evaluation and assessment of MR brain images and to aid the radiology specialist in quantitative reporting.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for Quantib™ Brain 1, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state "acceptance criteria" but rather presents a performance study comparing the device's output to manual segmentations. Therefore, the "acceptance criteria" are inferred from the reported performance values that were deemed acceptable for market clearance.

    | Metric (Inferred Acceptance Criterion) | Reported Device Performance (Mean ± Standard Deviation) |
    | --- | --- |
    | Brain Tissue Segmentation (GM, WM, CSF) | |
    | Dice Index for CSF | 0.78 ± 0.05 |
    | Dice Index for GM | 0.83 ± 0.02 |
    | Dice Index for WM | 0.86 ± 0.02 |
    | Absolute Difference of Relative CSF Volume (pp) | 1.6 ± 1.0 |
    | Absolute Difference of Relative GM Volume (pp) | 2.8 ± 1.9 |
    | Absolute Difference of Relative WM Volume (pp) | 2.6 ± 1.6 |
    | ICV Segmentation | |
    | Dice Index for ICV | 0.97 ± 0.01 |
    | White Matter Hyperintensity (WMH) Segmentation | |
    | Dice Index for WMH | 0.61 ± 0.13 |
    | Absolute Difference of Relative WMH Volume (pp) | 0.6 ± 0.7 |

    Note: The document states that the performance data "shows that Quantib™ Brain is as safe and effective as the predicate device," implying these performance metrics were sufficient to demonstrate substantial equivalence.


    2. Sample Size Used for the Test Set and Data Provenance

    • Brain Tissue Segmentation (GM, WM, CSF, ICV):

      • Sample Size: 33 T1w MR images with 6 selected slices per scan for comparison.
      • Data Provenance: The set was "carefully selected to include data from multiple vendors and a series of representative scan settings." The document does not specify the country of origin or whether the data was retrospective or prospective.
    • White Matter Hyperintensity (WMH) Segmentation:

      • Sample Size: 30 3D T1w images with corresponding T2w FLAIR images.
      • Data Provenance: This set also "represented various scan settings." The document does not specify the country of origin or whether the data was retrospective or prospective.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    The document does not explicitly state the number of experts or their qualifications for establishing the ground truth. It simply refers to "manual segmentations" for brain tissues and "manually segmented" WMHs.


    4. Adjudication Method for the Test Set

    The document does not specify any adjudication method (e.g., 2+1, 3+1) for establishing the ground truth or resolving discrepancies in manual segmentations. It merely states "manual segmentations."


    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done

    No, the document does not indicate that a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was done. The study focuses on comparing the algorithm's performance against manual segmentations, not on how human readers improve with or without AI assistance.


    6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) was done

    Yes, a standalone performance study was done. The reported "Algorithm performance" (Section VII.2) directly compares the Quantib™ Brain's automatic segmentations and volume measurements to manual segmentations for the same scans, without human intervention in the device's output during the test. The "Users need to review and if necessary, edit WMH candidates using the provided tools, before validation of the WMHs" statement in the Indications for Use refers to the intended use case, not the performance validation study's methodology. The study itself assesses the raw algorithmic output.


    7. The Type of Ground Truth Used

    The ground truth used was expert manual segmentation.

    • For brain tissue volumetry (GM, WM, CSF, ICV), the device's relative brain tissue volumes were compared to "relative volumes derived from manual segmentations."
    • For WMH, "WMHs were manually segmented on the T2w FLAIR images" and compared to the device's automatic segmentation.

    8. The Sample Size for the Training Set

    The document does not specify the sample size for the training set. It mentions that "The segmentation system relies on a number of atlases each consisting of a 3D T1-weighted MR image and a label map dividing the MR image into different tissue segments," but does not give a number for these atlases or the data used to create them or train the system.


    9. How the Ground Truth for the Training Set Was Established

    The document does not explicitly state how the ground truth for the training set was established. It mentions that the system "relies on a number of atlases each consisting of a 3D T1-weighted MR image and a label map dividing the MR image into different tissue segments." It can be inferred that these atlases contain expert-defined segmentations (label maps), but the method of their creation (e.g., by how many experts, what qualifications, adjudication) is not detailed.

