Search Results

Found 3 results

510(k) Data Aggregation

    K Number
    K230772
    Device Name
    Quantib Prostate
    Manufacturer
    Date Cleared
    2023-04-17

    (27 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Device Name: Quantib Prostate

    Intended Use

    Quantib Prostate is image post-processing software that provides the user with processing, visualization, and editing of prostate MRI images. The software facilitates the analysis and study review of MR data sets and provides additional mathematical and/or statistical analysis. The resulting analysis can be displayed in a variety of formats, including images overlaid onto source MRI images.

    Quantib Prostate functionality includes registered multiparametric-MRI viewing, with the option to view images combined into a single image to support visualization. The software can be used for semi-automatic segmentation of anatomical structures and provides volume computations, together with tools for manual editing. PI-RADS scoring is possible using a structured workflow.

    Quantib Prostate is intended to be used by trained medical professionals and provides information that, in a clinical setting, may assist in the interpretation of prostate MR studies. Diagnosis should not be made solely based on the analysis performed using Quantib Prostate.

    Device Description

    Quantib Prostate (QPR) is an extension to the Quantib AI Node (QBX) software platform that enables analysis of prostate MRI scans. QPR makes use of QBX functionality, and includes 3 specific QPR modules.

    The three specific modules of QPR are as follows:

    • An automatic processing module that performs input checks, prostate (sub-region) segmentation, multi-parametric MRI (mpMRI) image registration, computation of a biparametric combination image, and optionally crops the images around the prostate.
    • A two-step user-interaction module in which the user can:
      • edit the computed prostate segmentation (QBX functionality) and determine PSA density (QPR-specific functionality).
      • view multi-parametric MRI images (QBX functionality), and segment (QBX functionality) and analyze potential lesions (QPR-specific functionality). This extension also shows the prostatic sub-region segmentation and biparametric combination image overlay (QPR-specific functionality).
    • An automatic processing module that collects all results, and creates the report and DICOM output so that they can be exported back to the user.
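    The summary does not describe how QPR computes the prostate volume or PSA density used in the second module. Purely as a minimal, illustrative sketch of the standard definitions (gland volume from a binary voxel mask, and PSA density as serum PSA divided by gland volume), assuming a NumPy mask and known voxel spacing:

```python
import numpy as np

def prostate_volume_ml(mask: np.ndarray, voxel_spacing_mm: tuple) -> float:
    """Volume of a binary segmentation mask in millilitres (1 ml = 1000 mm^3)."""
    voxel_volume_mm3 = float(np.prod(voxel_spacing_mm))
    return float(mask.astype(bool).sum()) * voxel_volume_mm3 / 1000.0

def psa_density(serum_psa_ng_per_ml: float, volume_ml: float) -> float:
    """PSA density: serum PSA (ng/ml) divided by prostate volume (ml)."""
    return serum_psa_ng_per_ml / volume_ml

# Example: a 40 ml gland with a serum PSA of 6.0 ng/ml gives a PSA density of 0.15 (ng/ml per ml of gland volume).
```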
    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for Quantib Prostate based on the provided FDA 510(k) summary:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document primarily focuses on demonstrating substantial equivalence to the predicate device (Quantib Prostate v2.0) rather than presenting specific, quantitative acceptance criteria with corresponding performance metrics in a direct table format. However, based on the description, we can infer the performance expectations and the reported findings.

    | Acceptance Criteria Category | Specific Acceptance Criterion (Inferred) | Reported Device Performance |
    | --- | --- | --- |
    | Prostate Segmentation | Accuracy at least as good as the predicate device (Quantib Prostate v2.0) | "at least as accurate as that of the predicate device" |
    | Subregion Segmentation (Standalone) | Acceptable Dice overlap and Mean Surface Distance compared to ground truth; agreement comparable to interobserver variability | Bench testing (quantitative metrics) did not reveal issues and showed agreement comparable to interobserver measurements |
    | Subregion Segmentation (Clinical Context) | Judged at least as accurate as the predicate device by radiologists | Radiologists judged sub-regions "at least as accurate as the predicate device was" on a 5-point Likert scale |
    | ROI Localization Initiation (Clinical Context) | Judged at least as accurate as the predicate device by radiologists | Radiologists judged ROI localizations "at least as accurate as the predicate device was" on a 5-point Likert scale |

    2. Sample Size for Test Set and Data Provenance

    • Prostate and Subregion Segmentation Algorithm (Non-clinical / Bench Testing): A specific test set sample size is not stated. The summary notes only that the segmentation algorithms were improved by "updating the methodology and training it on over 400 scans," which describes the training data rather than an independent test set.
    • Clinical Performance Testing (Qualitative): The document does not specify the number of cases used in the clinical performance testing for the subregion segmentations and ROI localization initiation.
    • Data Provenance: Not explicitly stated for either the non-clinical or clinical performance testing. We cannot determine the country of origin or if the data was retrospective or prospective from this document.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    • Non-clinical Performance Testing (Prostate/Subregion Segmentation): Ground truth was established via "manual segmentation." The number and qualifications of the experts performing these manual segmentations are not specified in the document.
    • Clinical Performance Testing (Subregion Segmentation and ROI Localization Initiation): "Radiologists were asked to score the subregion segmentations and ROI initial localizations." The number of radiologists or their specific qualifications (e.g., years of experience, subspecialty) are not provided.

    4. Adjudication Method for the Test Set

    • Non-clinical Performance Testing: The document mentions "comparing the automatic segmentations to their ground truth" and comparing results with "interobserver measurements." This suggests that the ground truth itself might have involved some form of consensus or a single expert's work, but a formal adjudication method (like 2+1 or 3+1) for the ground truth establishment is not described.
    • Clinical Performance Testing: A formal adjudication method is not mentioned. Radiologists scored segmentations and ROI localizations using a 5-point Likert scale, implying individual expert review rather than a consensus process for scoring during the test.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • A formal MRMC comparative effectiveness study comparing human readers with AI assistance versus without AI assistance was not explicitly described for this submission. The clinical performance testing involved radiologists scoring the device's output, but it was not framed as a study to measure improvement in human reader performance with AI assistance. The focus was on demonstrating that the device's performance was at least as good as the predicate.
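    The summary does not state how these Likert scores were analyzed. Purely as an illustrative sketch (hypothetical scores, and a Wilcoxon signed-rank test chosen here only as one common option for paired ordinal data), a device-versus-predicate comparison might be set up as follows:

```python
import numpy as np
from scipy import stats

# Hypothetical paired 5-point Likert scores for the same cases, rated once for the subject
# device's output and once for the predicate's output (not data from the 510(k) summary).
device_scores    = np.array([5, 4, 4, 5, 3, 5, 4, 5, 5, 4, 4, 5, 4, 5, 4, 5])
predicate_scores = np.array([4, 4, 3, 4, 4, 5, 3, 4, 4, 4, 3, 4, 4, 4, 4, 4])

# A non-negative median paired difference is consistent with "at least as accurate as the
# predicate"; the signed-rank test checks whether the paired differences are centered on zero.
diff = device_scores - predicate_scores
stat, p_value = stats.wilcoxon(device_scores, predicate_scores)
print(f"median difference = {np.median(diff):.1f}, p = {p_value:.3f}")
```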

    6. Standalone Performance (Algorithm Only without Human-in-the-Loop Performance)

    • Yes, standalone performance was done for the combined prostate-subregion segmentation algorithm. This is described under "Non-clinical performance testing," where the "stand-alone performance of the sub-region segmentation algorithm" was evaluated by comparing automatic segmentations to ground truth using Dice overlap and Mean Surface Distance.
    • The "semi-automatic clinical performance test of the prostate segmentation has not been repeated" as its accuracy was found to be at least as good as the predicate in non-clinical testing, suggesting that the initial predicate validation (K221106) would have covered this aspect, potentially including standalone evaluation.

    7. Type of Ground Truth Used

    • Non-clinical Performance Testing: "Manual segmentation" was used to establish ground truth for prostate and subregion segmentation.
    • Clinical Performance Testing: The "ground truth" here is implied to be the expert judgment of radiologists based on a 5-point Likert scale score, rather than a definitive pathological or outcomes-based truth.

    8. Sample Size for the Training Set

    • The improved prostate and sub-region segmentation algorithms were trained "on over 400 scans."

    9. How the Ground Truth for the Training Set Was Established

    • The document states the training involved "updating the methodology and training it on over 400 scans." It does not explicitly detail the process for establishing the ground truth for these 400+ training scans. However, given the context of segmentation algorithms, it is highly probable that manual segmentations by experts were used to create the ground truth annotations for training.

    K Number
    K221106
    Device Name
    Quantib Prostate
    Manufacturer
    Date Cleared
    2022-05-13

    (28 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Device Name: Quantib Prostate

    Intended Use

    Quantib Prostate is image post-processing software that provides the user with processing, visualization, and editing of prostate MRI images. The software facilitates the analysis and study review of MR data sets and provides additional mathematical and/or statistical analysis. The resulting analysis can be displayed in a variety of formats, including images overlaid onto source MRI images.

    Quantib Prostate functionality includes registered multiparametric-MRI viewing, with the option to view images combined into a single image to support visualization. The software can be used for semi-automatic segmentation of anatomical structures and provides volume computations, together with tools for manual editing. PI-RADS scoring is possible using a structured workflow.

    Quantib Prostate is intended to be used by trained medical professionals and provides information that, in a clinical setting, may assist in the interpretation of prostate MR studies. Diagnosis should not be made solely based on the analysis performed using Quantib Prostate.

    Device Description

    Quantib Prostate is an extension to the Quantib AI Node software platform and enables analysis of prostate MRI scans. Quantib Prostate makes use of Quantib AI Node functionality, and includes the following specific Quantib Prostate modules:

    • An automatic processing module that performs prostate and prostate sub-region segmentation, multi-parametric MRI image registration, and computation of a biparametric combination image.
    • A user-interaction module in which the user can edit and approve the computed prostate segmentation and determine PSA density.
    • A user-interaction module in which the user can view multi-parametric MRI images, segment and analyze potential lesions, and set and view PI-RADS properties. This module also shows the prostatic sub-region segmentation and biparametric combination image overlay.
    • An automatic processing module that collects all results, and creates the report and DICOM output so that they can be exported back to the user.
    AI/ML Overview

    The provided text does not contain a detailed study specifically proving the device meets acceptance criteria, nor does it specify acceptance criteria in a quantifiable table format. It describes the modifications made to the device (Quantib Prostate 2.0 from 1.0) and mentions performance testing to demonstrate substantial equivalence to its predicate.

    However, based on the information provided about the non-clinical and clinical performance testing, we can infer the intent of the acceptance criteria and the nature of the studies conducted. I will reconstruct the table and study description based on the available information, noting where specific quantifiable acceptance criteria are not explicitly stated in the document.

    Inferred Acceptance Criteria and Reported Device Performance

    | Acceptance Criteria Category | Specific Metric (Inferred) | Acceptance Threshold (Inferred) | Reported Device Performance |
    | --- | --- | --- | --- |
    | Non-Clinical Performance (Sub-Region Segmentation) | Dice Overlap Coefficient | Sufficiently high (compared to inter-observer variability) | Not explicitly quantified, but reported that "Bench testing did not reveal any issues with the system, demonstrating that the modified device is as safe and effective as the predicate device." |
    | Non-Clinical Performance (Sub-Region Segmentation) | Mean Surface Distance | Sufficiently low (compared to inter-observer variability) | Not explicitly quantified, but reported that "Bench testing did not reveal any issues with the system, demonstrating that the modified device is as safe and effective as the predicate device." |
    | Non-Clinical Performance (Sub-Region Segmentation) | Statistical comparison to inter-observer measurements | Comparable to inter-observer variability | "results are compared with the inter-observer measurements using statistical tests." (Specific results not given) |
    | Clinical Performance (Sub-Region Segmentation & ROI Initialization) | Qualitative Assessment (Likert Scale) | High quality (e.g., majority of scores at the higher end of the 5-point Likert scale) | "It is concluded that sub-regions and ROI localizations are judged to be of high quality." (Specific scores not given) |
    | General Safety and Effectiveness | Absence of significant issues | No issues identified | "Bench testing did not reveal any issues with the system..." and "By virtue of its intended use and physical and technological characteristics, Quantib Prostate 2.0 is substantially equivalent to a device that has been approved for marketing in the United States. The performance data show that Quantib Prostate 2.0 is as safe and effective as the predicate device." |
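    The summary states only that "results are compared with the inter-observer measurements using statistical tests," without naming the test or reporting values. Purely as an illustrative sketch (hypothetical per-case Dice scores, and a Mann-Whitney U test chosen here only as one plausible option), such a comparison could look like:

```python
import numpy as np
from scipy import stats

# Hypothetical per-case Dice scores (not values from the submission):
# algorithm vs. manual ground truth, and one human reader vs. another.
algo_vs_truth    = np.array([0.91, 0.88, 0.90, 0.86, 0.92, 0.89, 0.87, 0.90])
reader_vs_reader = np.array([0.89, 0.87, 0.91, 0.85, 0.90, 0.88, 0.86, 0.89])

# If algorithm-to-truth agreement is not significantly worse than inter-observer agreement,
# the algorithm can be argued to perform within the range of human variability.
stat, p_value = stats.mannwhitneyu(algo_vs_truth, reader_vs_reader, alternative="two-sided")
print(f"algorithm median Dice = {np.median(algo_vs_truth):.2f}, "
      f"inter-observer median Dice = {np.median(reader_vs_reader):.2f}, p = {p_value:.3f}")
```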

    Study Information Pertaining to Device Performance:

    1. Sample sizes used for the test set and the data provenance:

      • Sub-region segmentation (non-clinical testing): The exact sample size for the image dataset used for comparing automatic sub-region segmentation to ground truth is not specified.
      • Clinical performance test: The exact sample size (number of cases or radiologists) for the qualitative assessment is not specified.
      • Data Provenance: Not explicitly stated (e.g., country of origin, retrospective or prospective).
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • Ground Truth for Sub-region Segmentation: Established by "manual segmentation." The number of experts and their qualifications (e.g., "radiologist with 10 years of experience") are not specified. The text mentions "inter-observer measurements" as context for the Dice overlap and Mean Surface Distance, implying that multiple human experts were involved in generating or evaluating the "manual segmentations."
      • Ground Truth for Clinical Performance (Likert Scale): The "radiologists were asked to score." The number of radiologists and their specific qualifications are not specified.
    3. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

      • The document implies ground truth for quantitative metrics was manual segmentation, presumably from one or more experts. For the qualitative clinical assessment, radiologists scored the output, but there is no mention of a formal adjudication process (e.g., arbitration for discrepancies) if multiple radiologists rated the same case. It is likely a consensus or averaged score was used for "high quality" judgment, but no specific method is detailed.
    4. Whether a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with AI versus without AI assistance:

      • No MRMC comparative effectiveness study involving human readers with vs. without AI assistance is explicitly described. The testing focuses on the performance of the algorithm additions themselves (sub-region segmentation, ROI localization initiation) and their qualitative assessment by radiologists, rather than a comparative study of radiologist performance. The software is described as "assisting" in interpretation, but no data on improved human reader performance with the assistance is provided in this document.
    5. Whether standalone (i.e., algorithm-only, without human-in-the-loop) performance testing was done:

      • Yes, for the sub-region segmentation algorithm, "Bench testing of the software was done to show that the system is suitable for its intended use and to evaluate the stand-alone performance of the sub-region segmentation algorithm. This was done by comparing the automatic sub-region segmentation to a ground truth and calculating the Dice overlap and Mean Surface Distance." This is a standalone performance evaluation.
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc):

      • For the non-clinical quantitative performance of sub-region segmentation, the ground truth was manual segmentation (presumably by experts).
      • For the clinical qualitative performance, the ground truth for "high quality" was based on radiologist scoring/judgment (Likert scale).
    7. The sample size for the training set:

      • Not specified within this document. This document focuses on the performance testing of the modified device, not its development or training data.
    8. How the ground truth for the training set was established:

      • Not specified within this document as training set details are not provided.

    K Number
    K202501
    Device Name
    Quantib Prostate
    Manufacturer
    Date Cleared
    2020-10-11

    (41 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Device Name: Quantib Prostate

    Intended Use

    Quantib Prostate is image post-processing software that provides the user with processing, visualization, and editing of prostate MRI images. The software facilitates the analysis and study review of MR data sets and provides additional mathematical and/or statistical analysis. The resulting analysis can be displayed in a variety of formats, including images overlaid onto source MRI images.

    Quantib Prostate functionality includes registered multiparametric-MRI viewing, with the option to view images combined into a single image to support visualization. The software can be used for semi-automatic segmentation of anatomical structures and provides volume computations, together with tools for manual editing. PI-RADS scoring is possible using a structured workflow.

    Quantib Prostate is intended to be used by trained medical professionals and provides information that, in a clinical setting, may assist in the interpretation of prostate MR studies. Diagnosis should not be made solely based on the analysis performed using Quantib Prostate.

    Device Description

    Quantib Prostate is an extension to the Quantib AI Node software platform and enables analysis of prostate MRI scans. Quantib Prostate makes use of Quantib AI Node functionality, and includes the following specific Quantib Prostate modules:

    • An automatic processing module that performs prostate segmentation and multiparametric MRI image registration.
    • A user-interaction module in which the user can edit and approve the computed prostate segmentation and determine PSA density.
    • A user-interaction module in which the user can view multi-parametric MRI images, and segment and analyze potential lesions. This extension will also apply a mathematical operation on the input images to combine information from the MRI sequences into a single combination image.
    • An automatic processing module that collects all results for exporting and transferring back to the user.
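    The "mathematical operation" that produces the combination image is not disclosed in this summary. Purely to illustrate the general idea of blending two co-registered, intensity-normalized MRI sequences into one display image (the helper names, inputs, and weighting below are hypothetical, not Quantib's method):

```python
import numpy as np

def minmax_normalize(img: np.ndarray) -> np.ndarray:
    """Scale an image to [0, 1]; robust variants would clip to intensity percentiles first."""
    lo, hi = float(img.min()), float(img.max())
    return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img, dtype=float)

def combine_biparametric(t2w: np.ndarray, high_b_dwi: np.ndarray, weight: float = 0.5) -> np.ndarray:
    """Voxel-wise blend of two co-registered sequences; NOT the operation used by Quantib Prostate."""
    return weight * minmax_normalize(t2w) + (1.0 - weight) * minmax_normalize(high_b_dwi)
```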
    AI/ML Overview

    The provided text describes the Quantib Prostate device and its substantial equivalence to a predicate device (DynaCAD). However, it does not contain the specific details required to fully address all points of your request regarding acceptance criteria and the study that proves the device meets those criteria.

    Here's an analysis of what is available and what is missing:

    1. A table of acceptance criteria and the reported device performance

    The document states: "The testing results support that all the system requirements have met their pre-defined acceptance criteria." and "Bench testing did not reveal any issues with the system, demonstrating that the performance of Quantib Prostate is as safe and effective as its predicate devices."

    And for clinical performance: "It was concluded that a trained medical professional using Quantib Prostate performs better or equal in prostate segmentation than without use of Quantib Prostate."

    However, the document does NOT provide a table detailing specific numerical acceptance criteria (e.g., minimum accuracy, sensitivity, specificity, or volumetric error thresholds) or the reported numerical performance metrics from either the non-clinical or clinical studies.

    2. Sample sizes used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    • Test set sample size: Not specified.
    • Data provenance: Not specified.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    Not specified. The document mentions "correction by trained clinicians" for the prostate segmentation algorithm in a clinical use context for non-clinical testing, and "trained medical professional" for clinical testing, but no details on their number or qualifications for ground truth establishment.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    Not specified.

    5. Whether a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with AI versus without AI assistance

    The document mentions: "comparing the performance of a clinician using Quantib Prostate to segment a prostate with a clinician not using Quantib Prostate is a reasonable way to prove safety and effectiveness of the semi-automatic segmentation algorithm." And later: "It was concluded that a trained medical professional using Quantib Prostate performs better or equal in prostate segmentation than without use of Quantib Prostate."

    This strongly suggests a comparative study was performed. However:

    • The exact design (MRMC vs. other comparative study) is not explicitly stated.
    • The effect size of improvement (e.g., specific metrics like Dice coefficient improvement, time saved, or diagnostic accuracy change) is NOT provided. The statement "better or equal" is qualitative.
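    No quantitative results are given for this with-tool versus without-tool comparison. Purely as an illustrative sketch of how such a paired comparison could be quantified (hypothetical per-case Dice scores against a reference segmentation, and a paired t-test chosen here only as one simple option):

```python
import numpy as np
from scipy import stats

# Hypothetical per-case Dice scores against a reference segmentation (not data from the submission):
# the same readers segmenting with Quantib Prostate assistance and without it.
dice_with_tool    = np.array([0.92, 0.90, 0.91, 0.89, 0.93, 0.90, 0.88, 0.92, 0.91, 0.90])
dice_without_tool = np.array([0.90, 0.89, 0.90, 0.88, 0.91, 0.89, 0.88, 0.90, 0.90, 0.89])

diff = dice_with_tool - dice_without_tool
t_stat, p_value = stats.ttest_rel(dice_with_tool, dice_without_tool)
print(f"mean Dice difference (with - without) = {diff.mean():.3f}, p = {p_value:.3f}")
```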

    6. Whether standalone (i.e., algorithm-only, without human-in-the-loop) performance testing was done

    Yes, for non-clinical performance, it states: "This included characterization of the stand-alone performance of the prostate segmentation algorithm."

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    Not explicitly stated. It refers to "correction by trained clinicians" when discussing the non-clinical performance of the prostate segmentation algorithm, which implies expert consensus or expert-corrected segmentations were used as ground truth for evaluating the algorithm's standalone performance within a clinical context.

    8. The sample size for the training set

    Not specified.

    9. How the ground truth for the training set was established

    Not specified. (Likely similar to the test set ground truth, but not confirmed).


    Summary of Available vs. Missing Information:

    | Information Requested | Available in Document? | Details |
    | --- | --- | --- |
    | 1. Acceptance Criteria & Reported Performance Table | No | The document states acceptance criteria were met and performance is "better or equal" to unassisted, but no specific numerical criteria or reported metrics are provided. |
    | 2. Test Set Sample Size & Data Provenance | No | Not specified. |
    | 3. Number & Qualifications of Experts for Test Set Ground Truth | No | Mentions "trained clinicians" and "trained medical professional" but no specifics on number or qualifications for ground truth creation. |
    | 4. Adjudication Method for Test Set | No | Not specified. |
    | 5. MRMC Comparative Effectiveness Study & Effect Size | Partially (study hinted, but no effect size) | A comparative study was likely done ("clinician using Quantib Prostate... with a clinician not using"), but the specific study design (MRMC) is not confirmed, and no effect size quantification is provided beyond "better or equal." |
    | 6. Standalone (Algorithm Only) Performance Study | Yes | "characterization of the stand-alone performance of the prostate segmentation algorithm." |
    | 7. Type of Ground Truth (Test Set) | Implicit (expert-corrected/consensus) | For the prostate segmentation algorithm, "correction by trained clinicians" is mentioned for non-clinical testing, implying expert-edited or expert consensus segmentations served as ground truth. |
    | 8. Training Set Sample Size | No | Not specified. |
    | 9. Ground Truth Establishment for Training Set | No | Not specified. |
