K Number: K230772
Device Name: Quantib Prostate
Manufacturer:
Date Cleared: 2023-04-17 (27 days)

Product Code:
Regulation Number: 892.2050
Panel: RA
Reference & Predicate Devices
Intended Use

Quantib Prostate is image post-processing software that provides the user with processing, visualization, and editing of prostate MRI images. The software facilitates the analysis and study review of MR data sets and provides additional mathematical and/or statistical analysis. The resulting analysis can be displayed in a variety of formats, including images overlaid onto source MRI images.

Quantib Prostate functionality includes registered multiparametric-MRI viewing, with the option to view images combined into a single image to support visualization. The software can be used for semi-automatic segmentation of anatomical structures and provides volume computations, together with tools for manual editing. PI-RADS scoring is possible using a structured workflow.

Quantib Prostate is intended to be used by trained medical professionals and provides information that, in a clinical setting, may assist in the interpretation of prostate MR studies. Diagnosis should not be made solely based on the analysis performed using Quantib Prostate.

Device Description

Quantib Prostate (QPR) is an extension to the Quantib AI Node (QBX) software platform that enables analysis of prostate MRI scans. QPR makes use of QBX functionality and adds three QPR-specific modules.

The three specific modules of QPR are as follows:

  • An automatic processing module that performs input checks, prostate (sub-region) segmentation, multi-parametric MRI (mpMRI) image registration, computation of a biparametric combination image, and optionally crops the images around the prostate.
  • A two-step user-interaction module in which the user can:
    • edit the computed prostate segmentation (QBX functionality) and determine PSA density (QPR-specific functionality).
    • view multi-parametric MRI images (QBX functionality), segment them (QBX functionality), and analyze potential lesions (QPR-specific functionality). This module also shows the prostatic sub-region segmentation and the biparametric combination image overlay (QPR-specific functionality).
  • An automatic processing module that collects all results, and creates the report and DICOM output so that they can be exported back to the user.
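The PSA density determined in the first user-interaction step is a simple ratio: serum PSA divided by the MRI-derived prostate volume (which here would come from the editable prostate segmentation). A minimal sketch, assuming nothing about Quantib's actual implementation (the function name, signature, and validation are illustrative only):

```python
def psa_density(serum_psa_ng_ml: float, prostate_volume_ml: float) -> float:
    """PSA density in ng/mL per mL: serum PSA divided by prostate volume.

    The volume would typically be computed from the (possibly user-edited)
    prostate segmentation. A density above roughly 0.15 ng/mL/cc is a
    commonly cited threshold of clinical concern in the urology literature.
    """
    if prostate_volume_ml <= 0:
        raise ValueError("prostate volume must be positive")
    return serum_psa_ng_ml / prostate_volume_ml

# Example: PSA of 6.0 ng/mL with a 40 mL prostate gives a density of 0.15.
print(psa_density(6.0, 40.0))
```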
AI/ML Overview

Here's a breakdown of the acceptance criteria and study details for Quantib Prostate based on the provided FDA 510(k) summary:

1. Table of Acceptance Criteria and Reported Device Performance

The document primarily focuses on demonstrating substantial equivalence to the predicate device (Quantib Prostate v2.0) rather than presenting specific, quantitative acceptance criteria with corresponding performance metrics in a direct table format. However, based on the description, we can infer the performance expectations and the reported findings.

| Acceptance Criteria Category | Specific Acceptance Criterion (Inferred) | Reported Device Performance |
| --- | --- | --- |
| Prostate Segmentation | Accuracy at least as good as predicate device (Quantib Prostate v2.0) | "at least as accurate as that of the predicate device" |
| Subregion Segmentation (Standalone) | Acceptable Dice overlap and Mean Surface Distance compared to ground truth; agreement comparable to interobserver variability | Bench testing (quantitative metrics) revealed no issues and showed agreement comparable to interobserver measurements |
| Subregion Segmentation (Clinical Context) | Judged at least as accurate as predicate device by radiologists | Radiologists judged sub-regions "at least as accurate as the predicate device was" on a 5-point Likert scale |
| ROI Localization Initiation (Clinical Context) | Judged at least as accurate as predicate device by radiologists | Radiologists judged ROI localizations "at least as accurate as the predicate device was" on a 5-point Likert scale |

2. Sample Size for Test Set and Data Provenance

  • Prostate and Subregion Segmentation Algorithm (Non-clinical / Bench Testing): A specific test-set sample size is not stated. The document notes only that the segmentation algorithms were improved by "updating the methodology and training it on over 400 scans"; the size and composition of the held-out set used for validation are not given.
  • Clinical Performance Testing (Qualitative): The document does not specify the number of cases used in the clinical performance testing for the subregion segmentations and ROI localization initiation.
  • Data Provenance: Not explicitly stated for either the non-clinical or clinical performance testing. We cannot determine the country of origin or if the data was retrospective or prospective from this document.

3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

  • Non-clinical Performance Testing (Prostate/Subregion Segmentation): Ground truth was established via "manual segmentation." The number and qualifications of the experts performing these manual segmentations are not specified in the document.
  • Clinical Performance Testing (Subregion Segmentation and ROI Localization Initiation): "Radiologists were asked to score the subregion segmentations and ROI initial localizations." The number of radiologists or their specific qualifications (e.g., years of experience, subspecialty) are not provided.

4. Adjudication Method for the Test Set

  • Non-clinical Performance Testing: The document mentions "comparing the automatic segmentations to their ground truth" and comparing results with "interobserver measurements." This suggests that the ground truth itself might have involved some form of consensus or a single expert's work, but a formal adjudication method (like 2+1 or 3+1) for the ground truth establishment is not described.
  • Clinical Performance Testing: A formal adjudication method is not mentioned. Radiologists scored segmentations and ROI localizations using a 5-point Likert scale, implying individual expert review rather than a consensus process for scoring during the test.

5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

  • A formal MRMC comparative effectiveness study comparing human readers with AI assistance versus without AI assistance was not explicitly described for this submission. The clinical performance testing involved radiologists scoring the device's output, but it was not framed as a study to measure improvement in human reader performance with AI assistance. The focus was on demonstrating that the device's performance was at least as good as the predicate.

6. Standalone Performance (Algorithm Only without Human-in-the-Loop Performance)

  • Yes. Standalone performance was evaluated for the combined prostate and sub-region segmentation algorithm. This is described under "Non-clinical performance testing," where the "stand-alone performance of the sub-region segmentation algorithm" was assessed by comparing automatic segmentations to ground truth using Dice overlap and Mean Surface Distance.
  • The "semi-automatic clinical performance test of the prostate segmentation has not been repeated" because its accuracy was found to be at least as good as the predicate's in non-clinical testing, suggesting that the predicate's original validation (K221106) covered this aspect, potentially including standalone evaluation.
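Dice overlap and Mean Surface Distance, the two standalone metrics named above, are standard segmentation-agreement measures. A minimal pure-Python sketch under assumed representations (masks as sets of voxel coordinates, surfaces as point sets; this is illustrative, not Quantib's implementation):

```python
import math

def dice(mask_a: set, mask_b: set) -> float:
    """Dice overlap of two binary masks, each a set of voxel coordinates.

    Ranges from 0 (no overlap) to 1 (identical masks).
    """
    return 2.0 * len(mask_a & mask_b) / (len(mask_a) + len(mask_b))

def mean_surface_distance(surface_a, surface_b) -> float:
    """Symmetric mean surface distance between two surfaces (point sets).

    For each point on one surface, take the Euclidean distance to the
    closest point on the other; average over both directions. Zero means
    the surfaces coincide.
    """
    def one_way(src, dst):
        return sum(min(math.dist(p, q) for q in dst) for p in src) / len(src)
    return 0.5 * (one_way(surface_a, surface_b) + one_way(surface_b, surface_a))

# Example: two masks sharing 3 of 4 voxels give Dice = 2*3 / (4+3) ≈ 0.857.
auto = {(0, 0), (0, 1), (1, 0), (1, 1)}
manual = {(0, 0), (0, 1), (1, 0)}
print(dice(auto, manual))
```

The brute-force nearest-point search here is O(n·m); production implementations typically use distance transforms or k-d trees, but the metrics computed are the same.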

7. Type of Ground Truth Used

  • Non-clinical Performance Testing: "Manual segmentation" was used to establish ground truth for prostate and subregion segmentation.
  • Clinical Performance Testing: The "ground truth" here is implied to be the expert judgment of radiologists based on a 5-point Likert scale score, rather than a definitive pathological or outcomes-based truth.

8. Sample Size for the Training Set

  • The improved prostate and sub-region segmentation algorithms were trained "on over 400 scans."

9. How the Ground Truth for the Training Set Was Established

  • The document states the training involved "updating the methodology and training it on over 400 scans." It does not explicitly detail the process for establishing the ground truth for these 400+ training scans. However, given the context of segmentation algorithms, it is highly probable that manual segmentations by experts were used to create the ground truth annotations for training.

§ 892.2050 Medical image management and processing system.

(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).