Quantib Prostate is image post-processing software that provides the user with processing, visualization, and editing of prostate MRI images. The software facilitates the analysis and study review of MR data sets and provides additional mathematical and/or statistical analysis. The resulting analysis can be displayed in a variety of formats, including images overlaid onto source MRI images.
Quantib Prostate functionality includes registered multiparametric-MRI viewing, with the option to view images combined into a single image to support visualization. The software can be used for semi-automatic segmentation of anatomical structures and provides volume computations, together with tools for manual editing. PI-RADS scoring is possible using a structured workflow.
Quantib Prostate is intended to be used by trained medical professionals and provides information that, in a clinical setting, may assist in the interpretation of prostate MR studies. Diagnosis should not be made solely based on the analysis performed using Quantib Prostate.
Quantib Prostate is an extension to the Quantib AI Node software platform and enables analysis of prostate MRI scans. Quantib Prostate makes use of Quantib AI Node functionality and includes the following specific Quantib Prostate modules:
- An automatic processing module that performs prostate and prostate sub-region segmentation, multi-parametric MRI image registration, and computation of a biparametric combination image.
- A user-interaction module in which the user can edit and approve the computed prostate segmentation and determine PSA density.
- A user-interaction module in which the user can view multi-parametric MRI images, segment and analyze potential lesions, and set and view PI-RADS properties. This module also shows the prostatic sub-region segmentation and biparametric combination image overlay.
- An automatic processing module that collects all results and creates the report and DICOM output so that they can be exported back to the user.
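The PSA density mentioned in the second module is conventionally the serum PSA level divided by the prostate volume derived from the approved segmentation. A minimal sketch of that calculation (the function name and unit handling are illustrative assumptions, not taken from the source document):

```python
def psa_density(psa_ng_per_ml: float, prostate_volume_ml: float) -> float:
    """PSA density = serum PSA (ng/mL) / prostate volume (mL).

    Values above roughly 0.15 ng/mL/cc are often treated as suspicious in
    the literature, but any threshold is a clinical judgment, not part of
    this sketch.
    """
    if prostate_volume_ml <= 0:
        raise ValueError("prostate volume must be positive")
    return psa_ng_per_ml / prostate_volume_ml
```

For example, a PSA of 8.0 ng/mL with a 40 mL segmented prostate yields a density of 0.2 ng/mL/cc.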
The provided text does not contain a detailed study specifically proving the device meets acceptance criteria, nor does it specify acceptance criteria in a quantifiable table format. It describes the modifications made to the device (Quantib Prostate 2.0 from 1.0) and mentions performance testing to demonstrate substantial equivalence to its predicate.
However, based on the information provided about the non-clinical and clinical performance testing, the intent of the acceptance criteria and the nature of the studies conducted can be inferred. The table and study description below are reconstructed from the available information, noting where specific quantifiable acceptance criteria are not explicitly stated in the document.
Inferred Acceptance Criteria and Reported Device Performance
| Acceptance Criteria Category | Specific Metric (Inferred) | Acceptance Threshold (Inferred) | Reported Device Performance |
|---|---|---|---|
| Non-Clinical Performance (Sub-Region Segmentation) | Dice Overlap Coefficient | Sufficiently high (compared to inter-observer variability) | Not explicitly quantified; "Bench testing did not reveal any issues with the system, demonstrating that the modified device is as safe and effective as the predicate device." |
| | Mean Surface Distance | Sufficiently low (compared to inter-observer variability) | Not explicitly quantified; same bench-testing conclusion as above. |
| | Statistical comparison to inter-observer measurements | Comparable to inter-observer variability | "results are compared with the inter-observer measurements using statistical tests" (specific results not given). |
| Clinical Performance (Sub-Region Segmentation & ROI Initialization) | Qualitative assessment (5-point Likert scale) | High quality (e.g., majority of scores at the upper end of the scale) | "It is concluded that sub-regions and ROI localizations are judged to be of high quality." (Specific scores not given.) |
| General Safety and Effectiveness | Absence of significant issues | No issues identified | "Bench testing did not reveal any issues with the system..."; Quantib Prostate 2.0 was found substantially equivalent to, and as safe and effective as, the predicate device. |
Study Information Pertaining to Device Performance:
Sample sizes used for the test set and the data provenance:
- Sub-region segmentation (non-clinical testing): The exact sample size for the image dataset used for comparing automatic sub-region segmentation to ground truth is not specified.
- Clinical performance test: The exact sample size (number of cases or radiologists) for the qualitative assessment is not specified.
- Data Provenance: Not explicitly stated (e.g., country of origin, retrospective or prospective).
Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Ground Truth for Sub-region Segmentation: Established by "manual segmentation." The number of experts and their qualifications (e.g., "radiologist with 10 years of experience") are not specified. The text mentions "inter-observer measurements" for context of the Dice overlap and Mean Surface Distance, implying multiple human experts were involved in generating or evaluating "manual segmentations."
- Ground Truth for Clinical Performance (Likert Scale): The "radiologists were asked to score." The number of radiologists and their specific qualifications are not specified.
Adjudication method (e.g. 2+1, 3+1, none) for the test set:
- The document implies ground truth for quantitative metrics was manual segmentation, presumably from one or more experts. For the qualitative clinical assessment, radiologists scored the output, but there is no mention of a formal adjudication process (e.g., arbitration for discrepancies) if multiple radiologists rated the same case. It is likely a consensus or averaged score was used for "high quality" judgment, but no specific method is detailed.
If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- No MRMC comparative effectiveness study involving human readers with vs. without AI assistance is explicitly described. The testing focuses on the performance of the algorithm additions themselves (sub-region segmentation, ROI localization initiation) and their qualitative assessment by radiologists, rather than a comparative study of radiologist performance. The software is described as "assisting" in interpretation, but no data on improved human reader performance with the assistance is provided in this document.
If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- Yes, for the sub-region segmentation algorithm, "Bench testing of the software was done to show that the system is suitable for its intended use and to evaluate the stand-alone performance of the sub-region segmentation algorithm. This was done by comparing the automatic sub-region segmentation to a ground truth and calculating the Dice overlap and Mean Surface Distance." This is a standalone performance evaluation.
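The two bench-testing metrics named here, Dice overlap and Mean Surface Distance, have standard definitions for binary segmentation masks. The sketch below is an illustrative implementation using NumPy and SciPy, not Quantib's actual evaluation code:

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice_overlap(a: np.ndarray, b: np.ndarray) -> float:
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(a, b).sum() / denom

def _surface(mask: np.ndarray) -> np.ndarray:
    """Boundary voxels: foreground voxels that touch the background."""
    mask = mask.astype(bool)
    return mask & ~binary_erosion(mask)

def mean_surface_distance(a: np.ndarray, b: np.ndarray,
                          spacing=(1.0, 1.0, 1.0)) -> float:
    """Symmetric mean surface distance between two binary masks.

    `spacing` is the voxel size per axis (e.g., in mm), so the result is
    in the same physical unit.
    """
    sa, sb = _surface(a), _surface(b)
    # Euclidean distance from every voxel to the nearest surface voxel
    # of the *other* mask.
    dist_to_b = distance_transform_edt(~sb, sampling=spacing)
    dist_to_a = distance_transform_edt(~sa, sampling=spacing)
    d_ab = dist_to_b[sa]  # distances from A's surface to B's surface
    d_ba = dist_to_a[sb]  # and vice versa
    return float(np.concatenate([d_ab, d_ba]).mean())
```

Identical masks give a Dice of 1.0 and a mean surface distance of 0.0; the document's comparison against inter-observer variability would apply the same metrics between pairs of manual segmentations.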
The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- For the non-clinical quantitative performance of sub-region segmentation, the ground truth was manual segmentation (presumably by experts).
- For the clinical qualitative performance, the ground truth for "high quality" was based on radiologist scoring/judgment (Likert scale).
The sample size for the training set:
- Not specified within this document. This document focuses on the performance testing of the modified device, not its development or training data.
How the ground truth for the training set was established:
- Not specified within this document as training set details are not provided.
§ 892.2050 Medical image management and processing system.
(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).