Search Results

Found 3 results

510(k) Data Aggregation

    K Number
    K242683
    Device Name
    QP-Prostate® CAD
    Manufacturer
    Quibim S.L.
    Date Cleared
    2025-03-18

    (193 days)

    Product Code
    Regulation Number
    892.2090
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer):

    Quibim S.L.

    Intended Use

    QP-Prostate® CAD is a Computer-Aided Detection and Diagnosis (CADe/CADx) image processing software that automatically detects and identifies suspected lesions in the prostate gland based on bi-parametric prostate MRI. The software is intended to be used as a concurrent read by physicians with proper training in a clinical setting as an aid for interpreting prostate MRI studies. The results can be displayed in a variety of DICOM outputs, including identified suspected lesions marked as an overlay onto source MR images. The output can be displayed on third-party DICOM workstations and Picture Archive and Communication Systems (PACS). Patient management decisions should not be based solely on the results of QP-Prostate® CAD.

    Device Description

    QP-Prostate® CAD is an artificial intelligence-based Computer-Aided Detection and Diagnosis (CADe/CADx) image processing software. QP-Prostate® CAD uses AI-based algorithms trained with pathology data to detect suspicious lesions for clinically significant prostate cancer. The device automatically detects and identifies suspected lesions in the prostate gland based on bi-parametric prostate MRI and provides marks over regions of the images that may contain suspected lesions. There are two possible markers, provided in different colors, suggesting different levels of suspicion of clinically significant prostate cancer (moderate or high suspicion level).

    The software is intended to be used as a concurrent read by physicians with proper training in a clinical setting as an aid for interpreting prostate MRI studies. The results can be displayed in a variety of DICOM outputs, including identified suspected lesions marked as an overlay onto source MR images. The output can be displayed on third-party DICOM workstations and Picture Archive and Communication Systems (PACS). Based on biparametric input consisting of T2W and DWI series, the analysis is run automatically, and the output in standard DICOM formats is returned to PACS.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:

    Acceptance Criteria and Reported Device Performance

    Table 1: Acceptance Criteria and Reported Device Performance (Standalone)

    Metric (lesion level) | Acceptance Criterion (Implicit) | Reported Device Performance
    AUC-ROC | Evidence of good discriminatory ability (e.g., above a certain threshold) | 0.732 (95% CI: 0.668-0.791)
    Sensitivity (high suspicion marker) | Evidence of good detection rate for clinically significant findings | 0.677 (95% CI: 0.593-0.761)
    False positive rate per case (high suspicion marker, any biopsy source) | Evidence of acceptable false positive rate | 0.417 (95% CI: 0.313-0.522)
    Sensitivity (high and moderate suspicion markers) | Evidence of good detection rate for clinically significant findings | 0.795 (95% CI: 0.722-0.861)
    False positive rate per case (high and moderate suspicion markers, any biopsy source) | Evidence of acceptable false positive rate | 0.855 (95% CI: 0.709-0.996)

    Note: The document does not explicitly state numerical acceptance criteria thresholds for the standalone performance metrics (AUC-ROC, Sensitivity, FPR). Instead, it presents the results and implies that these values "demonstrate the safety and effectiveness" in comparison to the predicate device. The general implicit acceptance criterion for these metrics would be that they exhibit performance levels indicative of a useful diagnostic aid.
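To make the lesion-level metrics concrete, here is a minimal Python sketch of how sensitivity, false positives per case, and AUC-ROC are typically computed in this kind of standalone evaluation. The arrays, threshold, and variable names are hypothetical illustrations, not Quibim's actual pipeline or data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical lesion-level results: one row per candidate lesion.
# label = 1 if clinically significant on biopsy, else 0;
# score = model suspicion score; case_id groups lesions by patient.
labels  = np.array([1, 0, 1, 1, 0, 0, 1, 0])
scores  = np.array([0.91, 0.40, 0.75, 0.62, 0.55, 0.20, 0.88, 0.70])
case_id = np.array([1, 1, 2, 3, 3, 4, 5, 5])

threshold = 0.6  # hypothetical cut-off for the "high suspicion" marker
flagged = scores >= threshold

sensitivity = flagged[labels == 1].mean()                 # TP / (TP + FN)
fp_per_case = flagged[labels == 0].sum() / np.unique(case_id).size
auc = roc_auc_score(labels, scores)

print(f"Sensitivity: {sensitivity:.3f}")
print(f"FP per case: {fp_per_case:.3f}")
print(f"AUC-ROC:     {auc:.3f}")
```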

    Table 2: Acceptance Criteria and Reported Device Performance (Multi-Reader Multi-Case Study)

    Metric | Acceptance Criterion (Explicit) | Reported Device Performance
    ΔAUC (AUC_aided - AUC_unaided) (primary endpoint) | A statistically significant improvement (p-value … | …
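The primary MRMC endpoint, ΔAUC = AUC_aided - AUC_unaided, can be illustrated with a simple reader-averaged computation. A real submission would use a dedicated MRMC variance model (e.g., Obuchowski-Rockette) to obtain the p-value; the reader scores below are invented purely for the sketch.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

truth = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # hypothetical case ground truth
unaided = {"reader1": [0.7, 0.4, 0.6, 0.5, 0.8, 0.3, 0.5, 0.6],
           "reader2": [0.6, 0.5, 0.7, 0.4, 0.7, 0.4, 0.6, 0.5]}
aided   = {"reader1": [0.8, 0.3, 0.7, 0.4, 0.9, 0.2, 0.7, 0.5],
           "reader2": [0.7, 0.4, 0.8, 0.3, 0.8, 0.3, 0.7, 0.4]}

# Reader-averaged AUC in each arm, then the aided-minus-unaided difference.
auc_unaided = np.mean([roc_auc_score(truth, s) for s in unaided.values()])
auc_aided   = np.mean([roc_auc_score(truth, s) for s in aided.values()])
print(f"dAUC = {auc_aided - auc_unaided:+.3f}")
```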

    K Number
    K232231
    Device Name
    QP-Brain®
    Manufacturer
    Quibim S.L.
    Date Cleared
    2023-12-13

    (139 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer):

    Quibim S.L.

    Intended Use

    QP-Brain® is a medical imaging processing application intended for automatic labeling and volumetric quantification of segmentable brain structures and white matter hyperintensities (WMH) from MR images of adults and adolescents 18 and older. Volumetric measurements may be compared to reference percentile data. The application is intended to be used by clinicians with proper training, as a support tool in the assessment of structural MRIs. Patient management decisions should not be based solely on the results of the device.

    Device Description

    QP-Brain® is a medical image processing and analyzing software intended for image processing to analyze brain MR imaging studies. These brain MR images, when interpreted by clinicians with proper training, may yield clinically useful information.

    QP-Brain® is an automated MR imaging post-processing medical device software that uses 3D T1-weighted (T1w) Gradient Echo structural MRI scans to provide quantitative imaging analysis and automatic segmentation of brain regions. If T2 FLAIR images are uploaded, QP-Brain® uses this sequence to automatically identify white matter hyperintensities using artificial intelligence.

    Once the T1 MR or T2 FLAIR has been uploaded, QP-Brain® will check the available sequences for compatibility before automatically launching the analysis.

    The output of the medical device consists of specific volumes with segmentation overlay as well as different reports with quantitative information. The outputs can be returned to and displayed on third-party DICOM workstations and Picture Archive and Communication Systems (PACS).

    If age and gender information is available in the study DICOM tags, the brain structure analysis module frames all quantified volumes within a normative database for comparison against cognitively normal adults of the same age and gender.
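As a rough illustration of the normative comparison described above: given an age- and gender-matched reference mean and standard deviation, a measured volume can be mapped to a percentile. A Gaussian reference is assumed here purely for the sketch; the document does not describe the form of Quibim's normative database.

```python
from scipy.stats import norm

def volume_percentile(volume_ml: float, ref_mean: float, ref_sd: float) -> float:
    """Percentile of a measured volume within an age/gender-matched
    normative distribution, assumed Gaussian for this sketch."""
    return 100.0 * norm.cdf(volume_ml, loc=ref_mean, scale=ref_sd)

# Hypothetical example: a hippocampal volume vs. a matched reference group.
print(f"{volume_percentile(3.1, ref_mean=3.5, ref_sd=0.35):.1f}th percentile")
```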

    QP-Brain® also allows for longitudinal information reporting if a patient has acquired more than one MRI over time.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for QP-Brain®:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state "acceptance criteria" as a set of predefined thresholds. Instead, it presents performance metrics for the device compared to a reference standard (manual expert segmentation). The implicit acceptance criteria appear to be the achievement of high correlation and low error rates, demonstrating that QP-Brain® functions as intended and is comparable to manual segmentation.

    Metric / Region | Reported Device Performance (Mean (95% CI or Range)) | Implicit Acceptance Threshold (Inferred)
    Brain Volumetry (T1 MRI) | |
    GM DICE Score | 0.983 (0.981-0.986) | High DICE score (e.g., > 0.95 or similar to expert inter-rater variability)
    GM Relative Volume Difference | 2.846 (2.523-3.008) | Low relative volume difference (e.g., 0.8)

    Note: The acceptance thresholds above are inferred based on the context of demonstrating substantial equivalence and acceptable clinical utility. The document itself does not explicitly list these as "acceptance criteria" with specific numerical cutoffs that were pre-defined.
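Both reported metrics are standard and straightforward to compute from binary segmentation masks. A minimal numpy sketch, with synthetic masks standing in for the device output and the manual expert segmentation:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())

def relative_volume_difference(pred: np.ndarray, ref: np.ndarray) -> float:
    """Absolute volume difference relative to the reference, in percent."""
    return 100.0 * abs(int(pred.sum()) - int(ref.sum())) / ref.sum()

# Synthetic masks: perturb a reference mask to mimic algorithm/expert disagreement.
rng = np.random.default_rng(0)
ref = rng.random((64, 64, 64)) > 0.5
pred = ref.copy()
pred[:2] = ~pred[:2]

print(f"Dice: {dice(pred, ref):.3f}")
print(f"RVD:  {relative_volume_difference(pred, ref):.2f}%")
```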

    2. Sample Size for Test Set and Data Provenance

    • Sample Size for Test Set: Not explicitly stated in the provided text. The tables present summarized metrics (e.g., mean DICE scores, mean errors, and confidence intervals), but the number of images or patients in the test set is not provided.
    • Data Provenance: Not specified. The document does not mention the country of origin of the data or whether it was retrospective or prospective.

    3. Number of Experts and Qualifications for Ground Truth

    • Number of Experts: Not explicitly stated. The document mentions "manual expert segmentation" as the reference standard but does not specify the number of experts involved.
    • Qualifications of Experts: Not specified. The document refers to them as "expert" but does not detail their specific qualifications (e.g., years of experience, board certifications).

    4. Adjudication Method for the Test Set

    • Adjudication Method: Not explicitly stated. The document refers to "manual expert segmentation" as the reference standard, which implies that expert opinion was used, but the method for resolving any potential disagreements among multiple experts (e.g., 2+1, 3+1, or simple consensus) is not mentioned. If only one expert performed the segmentation, then no adjudication would be needed.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was an MRMC study done? No. The performance data presented compares the device's output to "manual expert segmentation" as a reference standard, not to human readers' performance with or without AI assistance. Therefore, it is a standalone performance study against a reference standard, not an MRMC comparative effectiveness study.
    • Effect size of human readers improvement: Not applicable, as no MRMC study was performed.

    6. Standalone (Algorithm Only) Performance

    • Was a standalone performance study done? Yes. The "Performance Data" section describes how "QP-Brain® outputs were compared to manual expert segmentation (reference standard)." This directly assesses the algorithm's performance without human intervention after the algorithm has generated its output.

    7. Type of Ground Truth Used

    • Type of Ground Truth: Expert consensus (or expert segmentation). The document states, "For the performance evaluation, QP-Brain® outputs were compared to manual expert segmentation (reference standard)."

    8. Sample Size for the Training Set

    • Sample Size for Training Set: Not specified. The document only discusses the validation phase and its performance results. Information regarding the training set's size is not provided.

    9. How Ground Truth for the Training Set was Established

    • How Ground Truth for Training Set was Established: Not specified. While it's implied that "manual expert segmentation" would likely also be used for establishing ground truth in the training data, the document does not explicitly state this or provide details on the process for the training set.

    K Number
    K203582
    Device Name
    qp-Prostate
    Manufacturer
    QUIBIM S.L.
    Date Cleared
    2021-02-04

    (59 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer):

    QUIBIM S.L.

    Intended Use

    qp-Prostate is an image processing software package to be used by trained professionals, including radiologists specialized in prostate imaging, urologists and oncologists. The software runs on a standard "off-the-shelf" workstation and can be used to perform image viewing, processing and analysis of prostate MR images. Data and images are acquired through DICOM compliant imaging devices and modalities. Patient management decisions should not be based solely on the results of qp-Prostate. qp-Prostate does not perform a diagnostic function, but instead allows the users to visualize and analyze DICOM data.

    Device Description

    qp-Prostate is a medical image viewing, processing and analyzing software package for use by a trained user or healthcare professional, including radiologists specialized in prostate imaging, urologists and oncologists. These prostate MR images, when interpreted by a trained physician, may yield clinically useful information.

    qp-Prostate consists of a modular platform based on a plug-in software architecture. Apparent Diffusion Coefficient (ADC) post-processing and Perfusion-Pharmacokinetics (PKM) post-processing are embedded into the platform as plug-ins to allow quantitative analysis of prostate imaging.
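The ADC plug-in's underlying model is standard: signal decays mono-exponentially with diffusion weighting, S(b) = S0 * exp(-b * ADC). A minimal two-b-value sketch follows; the arrays are hypothetical, and the document does not specify Quibim's actual fitting approach (e.g., two-point vs. multi-b least squares).

```python
import numpy as np

def adc_map(s_low: np.ndarray, s_high: np.ndarray,
            b_low: float, b_high: float) -> np.ndarray:
    """Per-voxel ADC (mm^2/s) from two diffusion-weighted images:
    ADC = ln(S_low / S_high) / (b_high - b_low)."""
    eps = 1e-6  # guard against log(0) and division by zero
    return np.log((s_low + eps) / (s_high + eps)) / (b_high - b_low)

# Hypothetical signal intensities at b = 0 and b = 800 s/mm^2.
s0   = np.array([[1000.0, 900.0], [1100.0, 950.0]])
s800 = np.array([[ 450.0, 500.0], [ 400.0, 600.0]])
print(adc_map(s0, s800, b_low=0.0, b_high=800.0))
```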

    The platform runs as a client-server model that requires a high-performance computer installed by QUIBIM inside the hospital or medical clinic network. The server communicates with the Picture Archiving and Communication System (PACS) through the DICOM protocol. qp-Prostate is accessible through the web browser (Google Chrome or Mozilla Firefox) of any standard "off-the-shelf" computer connected to the hospital/center network.

    The main features of the software are:

    1. Query/Retrieve interaction with PACS;
    2. Apparent Diffusion Coefficient (ADC) post-processing (MR imaging);
    3. Perfusion Pharmacokinetics (PKM) post-processing (MR imaging);
    4. DICOM viewer; and
    5. Structured reporting.

    The software provides MR imaging analysis plug-ins to objectively measure different functional properties in prostate images.
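To illustrate feature 1, Query/Retrieve interaction with PACS, here is a hedged sketch of a Study-level C-FIND query using the open-source pynetdicom library. The host, port, AE titles, and patient ID are placeholders, and the document does not say which DICOM toolkit Quibim actually uses.

```python
from pydicom.dataset import Dataset
from pynetdicom import AE
from pynetdicom.sop_class import StudyRootQueryRetrieveInformationModelFind

ae = AE(ae_title="QP_CLIENT")  # hypothetical calling AE title
ae.add_requested_context(StudyRootQueryRetrieveInformationModelFind)

query = Dataset()
query.QueryRetrieveLevel = "STUDY"
query.PatientID = "12345"    # placeholder patient
query.StudyInstanceUID = ""  # empty = return this attribute in matches

assoc = ae.associate("pacs.example.org", 104, ae_title="PACS")  # placeholders
if assoc.is_established:
    responses = assoc.send_c_find(query, StudyRootQueryRetrieveInformationModelFind)
    for status, identifier in responses:
        if status and status.Status in (0xFF00, 0xFF01):  # pending = a match
            print(identifier.StudyInstanceUID)
    assoc.release()
```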

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the qp-Prostate device, based on the provided document:

    Acceptance Criteria and Device Performance

    The document does not explicitly present a table of "acceptance criteria" with numerical targets and reported performance. Instead, it describes performance testing conducted to demonstrate functional equivalence and safety and effectiveness compared to a predicate device. The performance data generally aims to show the device functions as intended and is comparable to the predicate.

    The closest we can get to a "table of acceptance criteria and reported device performance" is by looking at the types of tests done and the general conclusions.

    Implicit Acceptance Criteria (Inferred from testing and purpose):

    • Accuracy of Diffusion-ADC and Perfusion-Pharmacokinetics (PKM) calculations: The device should accurately compute these parameters.
    • Accuracy of algorithmic functions: The Spatial Smoothing, Registration, Automated Prostate Segmentation, Motion Correction, and automated AIF selection algorithms should perform correctly.
    • Equivalence to Predicate Device: The performance of qp-Prostate should be comparable to the Olea Sphere v3.0, especially in quantitative outputs.
    • Functionality as intended: All listed features (Query/Retrieve, DICOM viewer, Structured reporting, etc.) should work correctly.

    Reported Device Performance (from the document):

    • Diffusion-ADC and Perfusion-Pharmacokinetics (PKM) analysis modules: Evaluated using QIBA's Digital Reference Objects (DROs), including noise modeling, for technical performance. The document concludes that the "tests results demonstrate that qp-Prostate functioned as intended."
    • Algorithmic functions (Spatial Smoothing, Registration, Automated Prostate Segmentation, Motion Correction, AIF selection): Tested using a dataset of prostate clinical cases. The document states "performance testing with prostate MR cases" was conducted.
    • Comparison to Predicate Device: Performed using 157 clinical cases, demonstrating that the device is "as safe and effective as its predicate device, without introducing new questions of safety and efficacy."
    • Overall Conclusion: "Performance data demonstrate that qp-Prostate is as safe and effective as the OLEA Sphere v3.0. Thus, qp-Prostate is substantially equivalent."

    Since no specific numerical thresholds for accuracy or performance metrics are provided in the document, a quantitative table cannot be generated. The acceptance is based on successful completion of the described performance tests and demonstrating substantial equivalence.


    Study Details

    Here's the detailed information regarding the studies:

    1. Sample sizes used for the test set and the data provenance:

    • Digital Reference Object (DRO) Analysis (Bench Testing):
      • Sample Size: Not explicitly quantified as "sample size" in terms of number of patient cases, but rather refers to universally accepted digital reference objects from QIBA.
      • Provenance: These are synthetic or standardized digital objects designed for technical performance evaluation, not originating from specific patients or countries.
    • Clinical Testing of Algorithms (Motion Correction, Registration, Spatial Smoothing, AIF Selection, Prostate Segmentation):
      • Motion Correction algorithm: 155 DCE-MR and DWI-MR prostate sequences from 155 different patients.
      • Registration algorithm: 112 T2-Weighted MR, DCE-MR, and 108 DWI-MR prostate sequences from different patients.
      • Spatial Smoothing algorithm: 51 transverse T2-weighted, DCE-MR, and DWI-MR prostate sequences from 51 different patients.
      • AIF selection algorithm: 242 DCE-MR prostate sequences from 242 different patients.
      • Prostate Segmentation algorithm: 243 transverse T2-weighted MR prostate sequences from 243 different patients.
      • Provenance for these clinical cases: The document states they were "acquired from different patients in different machines with multiple acquisition protocols" and from "different three major MRI vendors: Siemens, GE and Philips, magnetic field of 3T and 1.5T cases." There is no explicit mention of the country of origin or whether the data was retrospective or prospective, though "clinical cases" used for validation often imply retrospective use of existing data.
    • Comparison to Predicate Device:
      • Sample Size: 157 T2-weighted MR, DCE-MR, and 141 DWI-MR prostate sequences from different patients.
      • Provenance: Similar to the algorithmic clinical testing, from "different patients in different machines with multiple acquisition protocols" and from "different three major MRI vendors: Siemens, GE and Philips, magnetic field of 3T and 1.5T cases."

    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • The document does not explicitly state the number of experts used or their specific qualifications (e.g., years of experience as radiologists) for establishing ground truth for the clinical test sets.
    • For the Digital Reference Object (DRO) analysis, the "ground truth" is inherent to the design of the DROs themselves (proposed by QIBA), rather than established by human experts for each test.

    3. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

    • The document does not specify any adjudication method used for establishing ground truth or for resolving discrepancies in the test sets.

    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:

    • No, a multi-reader multi-case (MRMC) comparative effectiveness study assessing human reader improvement with AI assistance was not performed or reported in this document. The study compared the device's technical performance and its output with that of a predicate device, not with human readers or human readers assisted by AI.

    5. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:

    • Yes, the performance testing described is primarily standalone. The "bench testing" with DROs and the "clinical testing" of the algorithms (Motion Correction, Registration, etc.) evaluate the device's inherent performance without a human-in-the-loop scenario. The comparison to the predicate device also assesses the algorithm's output directly against the predicate's output. The device itself is described as "an image processing software package to be used by trained professionals" and "does not perform a diagnostic function, but instead allows the users to visualize and analyze DICOM data," which points to it being a tool that supports human interpretation rather than replacing it.

    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • Bench Testing (DWI & DCE): Digital Reference Objects (DROs) proposed by QIBA, which serve as a synthetic, known ground truth for quantitative accuracy.
    • Clinical Testing of Algorithms & Comparison to Predicate: The document does not explicitly define the ground truth for these clinical performance tests. Given that it's comparing the outputs of image processing algorithms, the ground truth would likely involve:
      • Reference standard values: For quantitative parameters like ADC, Ktrans, kep, ve, it would likely involve comparing the device's calculated values against a recognized reference standard or the predicate's output considered as a benchmark.
      • Visual assessment/Expert review: For the performance of segmentation, registration, and motion correction, expert radiologists would visually assess the accuracy of the algorithm's output to determine if it "functioned as intended."
      • The statement "comparison against the predicate device" implies the predicate's output is used as a reference point.
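For reference, the quantitative perfusion parameters named above (Ktrans, kep, ve) come from the standard Tofts pharmacokinetic model, Ct(t) = Ktrans * (Cp convolved with exp(-kep * t)), with ve = Ktrans / kep. Below is a hedged curve-fitting sketch with a synthetic arterial input function; the document does not confirm which PKM variant Quibim implements.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 5, 60)         # time in minutes
dt = t[1] - t[0]
aif = 5.0 * t * np.exp(-t / 0.5)  # synthetic arterial input function Cp(t)

def tofts(t, ktrans, kep):
    """Standard Tofts model: Ct(t) = Ktrans * (Cp convolved with exp(-kep*t))."""
    return ktrans * dt * np.convolve(aif, np.exp(-kep * t))[: t.size]

# Synthetic tissue curve generated with known parameters plus noise,
# then refitted to recover Ktrans and kep.
ct = tofts(t, 0.25, 0.8) + np.random.default_rng(0).normal(0, 0.01, t.size)
(ktrans, kep), _ = curve_fit(tofts, t, ct, p0=[0.1, 0.5])
print(f"Ktrans={ktrans:.3f}/min  kep={kep:.3f}/min  ve={ktrans/kep:.3f}")
```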

    7. The sample size for the training set:

    • The document does not provide any information on the sample size used for the training set of the algorithms. It focuses solely on the validation and verification performed post-development.

    8. How the ground truth for the training set was established:

    • Since no information on the training set or its sample size is provided, there is also no information on how the ground truth for the training set was established.
