Search Results

Found 3 results

510(k) Data Aggregation

    K Number
    K191959
    Manufacturer
    Date Cleared
    2019-11-08

    (108 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Device Name:

    QuantX Breast MRI Biopsy Guidance Plugin

    Intended Use

    The QuantX Breast MRI Biopsy Guidance Plugin is a software application that assists users of the QuantX software device in planning MRI guided interventional procedures.

    Device Description

    The QuantX Breast MRI Biopsy Guidance Plugin assists users in planning MRI-guided interventional procedures. Using information from MR images regarding the coordinates of a user-specified region of interest and fiducial coordinates, the software provides an automatic calculation of the location and depth of the targeted region of interest, such as a lesion or suspected lesion. Its primary goal is to identify where and how deep a biopsy needle should be inserted into an imaged breast in order to strike a targeted lesion or region of interest, as chosen by a trained medical professional. The QuantX Breast MRI Biopsy Guidance Plugin may be used in either the SE or Advanced version of QuantX, a software program used for the display and analysis of medical images.
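    The calculation described here is essentially geometric: map a lesion's image-space coordinates into the biopsy grid's frame to obtain a grid cell and an insertion depth. The sketch below illustrates that idea in Python under stated assumptions (an 18 mm grid cell pitch, a grid plane perpendicular to the needle axis, and coordinates already in a shared frame); it is not the vendor's algorithm, and all names and values are hypothetical.

```python
import numpy as np

# Illustrative geometry for MRI-guided breast biopsy planning.
# Assumptions (hypothetical, not from the 510(k) summary):
# - the fiducial gives the grid's origin (corner) in image space, in mm
# - the grid plane is perpendicular to the needle axis (x-axis here)
# - CELL_MM is the grid cell pitch

CELL_MM = 18.0  # assumed grid cell pitch in mm


def plan_biopsy(lesion_mm: np.ndarray, grid_origin_mm: np.ndarray):
    """Return (column, row) of the grid cell and the needle depth in mm.

    lesion_mm, grid_origin_mm: (x, y, z) coordinates in a shared
    image coordinate system, with x along the needle axis.
    """
    offset = lesion_mm - grid_origin_mm
    depth_mm = float(offset[0])      # distance from grid plane to lesion
    col = int(offset[1] // CELL_MM)  # lateral cell index
    row = int(offset[2] // CELL_MM)  # vertical cell index
    return col, row, depth_mm


# Example: lesion 42.5 mm deep, in the 2nd column / 1st row of the grid.
col, row, depth = plan_biopsy(np.array([42.5, 30.0, 10.0]),
                              np.array([0.0, 0.0, 0.0]))
print(f"grid cell ({col}, {row}), insert needle to {depth:.1f} mm")
```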

    AI/ML Overview

    The provided text does not contain information about acceptance criteria or a study that specifically proves the device meets those criteria. The document is a 510(k) premarket notification summary for the QuantX Breast MRI Biopsy Guidance Plugin, which focuses on demonstrating substantial equivalence to a predicate device rather than presenting explicit acceptance criteria and corresponding performance study results in the format requested.

    However, the "Nonclinical Performance Data Testing and Reviews" section (1.7) lists various verification and validation tests performed. I can infer potential "acceptance criteria" from these tests regarding the functionality and accuracy of the device. The reported device performance is indicated by the statements that each function was "successfully tested" and that validation testing "demonstrates that the device conforms to user needs and intended use."

    Here's an attempt to structure the available information as requested, though many fields will be marked as "Not Provided" or inferred.


    1. Table of Acceptance Criteria and Reported Device Performance

    Given the nature of the document (510(k) summary demonstrating substantial equivalence), explicit numerical acceptance criteria and precise performance metrics are not detailed. The "performance" is generally described as "verification of correct" or "successful testing."

    | Acceptance Criteria (Inferred from Verification Tests) | Reported Device Performance |
    |---|---|
    | Proper activation of biopsy guidance mode and interface display | Successfully tested; achieved proper activation and display. |
    | Proper creation of needle block images | Successfully tested; achieved proper creation. |
    | Proper loading of image series for biopsy guidance | Successfully tested; achieved proper loading. |
    | Complete and correct selection of grid type variables | Successfully tested; ensured complete and correct selection. |
    | Complete and correct needle type variables | Successfully tested; ensured complete and correct selection. |
    | Correct fiducial marker image-space coordinates | Successfully tested; verified correct coordinates. |
    | Correct size of grid image overlay and location | Successfully tested; verified correct size and location. |
    | Correct lesion marker overlay display and image-space coordinates | Successfully tested; verified correct display and coordinates. |
    | Proper display of selected breast, grid cell, and block hole | Successfully tested; achieved proper display. |
    | Proper display of needle block image and needle depth | Successfully tested; achieved proper display. |
    | Correct patient orientation indicators | Successfully tested; verified correct indicators. |
    | Correct lesion depth calculation (by comparison to predicate) | Successfully tested; verified correct calculation. |
    | Correct needle block hole (by comparison to predicate) | Successfully tested; verified correct hole. |
    | Correct grid cell (by comparison to predicate) | Successfully tested; verified correct grid cell. |
    | Guidance worksheet output | Successfully tested; verified output. |
    | Conformance to user needs and intended use | Validation testing demonstrates conformance. |

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: Not provided. The document mentions "Nonclinical tests" but does not specify the number of cases or images used for testing.
    • Data Provenance: Not provided. The origin of the data (e.g., country of origin, retrospective or prospective) is not mentioned. Given it's a software for MRI guidance, the data would likely be MRI scans.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    • Number of Experts: Not provided.
    • Qualifications of Experts: Not provided. The study refers to "comparison to predicate" for some verifications, suggesting the predicate device's output or established methods served as a reference, but does not detail human expert involvement in establishing ground truth for the test data itself.

    4. Adjudication Method for the Test Set

    • Adjudication Method: Not provided. There is no mention of independent expert review, consensus, or other adjudication processes for the test results.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • MRMC Study Done: No. The document does not describe an MRMC study comparing human readers with and without AI assistance. The focus is on the software's functional correctness and substantial equivalence to a predicate device.
    • Effect Size of Human Readers Improvement with AI: Not applicable, as no MRMC study was performed or reported.

    6. Standalone (Algorithm Only) Performance Study

    • Standalone Study Done: Yes, implicitly. The listed "Nonclinical tests" appear to evaluate the algorithm's functionality and accuracy in a standalone manner (e.g., "Verification of correct lesion depth calculation," "Verification of correct fiducial marker image-space coordinates"). The results are described as successful.

    7. Type of Ground Truth Used

    • Type of Ground Truth: The ground truth for several verification steps was established by comparison to the predicate device. For other functional tests, the ground truth was implied by the Expected Results of the software's specified functionality (e.g., "proper activation," "correct display"). There is no mention of pathology, expert consensus on patient outcomes, or other clinical ground truth methods for the non-clinical tests.

    8. Sample Size for the Training Set

    • Sample Size for Training Set: Not provided. The document focuses on verification and validation of a software feature, not on machine learning model training. It's unclear if the "biopsy guidance plugin" itself involves machine learning that would require a distinct training set. If it's a computational algorithm for geometry calculations, a training set might not be conventionally applicable.

    9. How the Ground Truth for the Training Set Was Established

    • How Ground Truth Was Established for Training Set: Not provided, and likely not applicable given the apparent nature of the device as a computational guidance tool rather than a machine learning classifier.

    K Number
    DEN170022
    Device Name
    QuantX
    Date Cleared
    2017-07-19

    (103 days)

    Product Code
    Regulation Number
    892.2060
    Type
    Direct
    Reference & Predicate Devices
    N/A
    Why did this record match?
    Device Name:

    QuantX

    Intended Use

    QuantX is a computer-aided diagnosis (CADx) software device used to assist radiologists in the assessment and characterization of breast abnormalities using MR image data. The software automatically registers images, and segments and analyzes user-selected regions of interest (ROI). QuantX extracts image data from the ROI to provide volumetric analysis and computer analytics based on morphological and enhancement characteristics. These imaging (or radiomic) features are then synthesized by an artificial intelligence algorithm into a single value, the QI score, which is analyzed relative to a database of reference abnormalities with known ground truth.

    QuantX is indicated for evaluation of patients presenting for high-risk screening, diagnostic imaging workup, or evaluation of extent of known disease. Extent of known disease refers to both the assessment of the boundary of a particular abnormality as well as the assessment of the total disease burden in a particular patient. In cases where multiple abnormalities are present, QuantX can be used to assess each abnormality independently.

    This device provides information that may be useful in the characterization of breast abnormalities during image interpretation. For the QI score and component radiomic features, the QuantX device provides comparative analysis to lesions with known outcomes using an image atlas and histogram display format.

    QuantX may also be used as an image viewer of multi-modality digital images, including ultrasound and mammography. The software also includes tools that allow users to measure and document images, and output in a structured report.

    Limitations: QuantX is not intended for primary interpretation of digital mammography images.

    Device Description

    The device is a software-only post-processing system for patient breast images that includes analysis of MR images, and viewing ultrasound and mammographic images.

    MR images are acquired from a third-party acquisition device. The images can be loaded into the QuantX device manually or automatically if connected to a DICOM-compatible device. Users select and load the patient case to use the QuantX software tools in the examination of the images. Different types of MR sequences (T1, DCE, T2, DWI, etc.) can be viewed at the same time as mammography or ultrasound images from the same patient.
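    As a rough illustration of the loading path described above, the sketch below reads a directory of DICOM files with pydicom and stacks them into a volume. The directory layout, file extension, and sorting by InstanceNumber are assumptions made for the example, not details from the submission.

```python
from pathlib import Path

import numpy as np
import pydicom  # third-party DICOM parser


def load_mr_series(series_dir: str) -> np.ndarray:
    """Read all DICOM files in a directory and stack them into a volume.

    Assumes one MR series per directory, ordered by InstanceNumber; a real
    system would also group by SeriesInstanceUID and handle multi-timepoint
    DCE acquisitions.
    """
    datasets = [pydicom.dcmread(p) for p in sorted(Path(series_dir).glob("*.dcm"))]
    datasets.sort(key=lambda ds: int(ds.InstanceNumber))
    return np.stack([ds.pixel_array for ds in datasets])


# volume = load_mr_series("/data/case001/t1_dce")  # hypothetical path
```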

    QuantX includes image registration, and automated segmentation and analysis functions, based on a seed point indicated by the user. Users can select a ROI manually from the MR image, or use the automatic segmentation tool to obtain and accept a ROI, for input to the QuantX analytics. The QuantX analytics display the QI Most Enhancing Curve, the Average Enhancing Curve, and volume of the specified region.

    QuantX provides users the QI Score, based on the morphological and enhancement characteristics of the region of interest. The QuantX package provides comparative analysis for the QI score and its component element features to lesions with known ground truth (either biopsy-proven diagnosis or minimum one year follow-up negative scan for non-biopsied lesions) using an image atlas and histogram display format.

    A user experienced with the significance of such data will be able to view and interpret this additional information during the diagnosis of breast lesions.

    Users may select from a variety of information sources to make the diagnosis. The key features of the device related to the categorization of lesions include the display of similar cases and the histogram of known lesions for various analytic features (including the QI score). The QI Score is not a "probability of malignancy," but is intended for the organization of an online atlas (reference database) provided to the user as the Similar Case Database. The QI score is based on a machine learning algorithm, trained on a subset of features calculated on segmented lesions.
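    As a minimal sketch of the pattern described in this paragraph (a learned score over radiomic features plus a nearest-neighbor lookup into a reference atlas), the example below uses a placeholder logistic-regression scorer and synthetic features. The actual QuantX model, feature set, and database contents are not disclosed in the summary, so everything concrete here is an assumption.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Placeholder radiomic feature matrix for the reference database:
# rows = lesions with known ground truth, columns = morphology/enhancement
# features. 652 lesions mirrors the reported Similar Case Database size;
# the features and labels here are synthetic.
X_atlas = rng.normal(size=(652, 8))
y_atlas = rng.integers(0, 2, size=652)  # 0 = benign, 1 = malignant

scorer = LogisticRegression(max_iter=1000).fit(X_atlas, y_atlas)
index = NearestNeighbors(n_neighbors=5).fit(X_atlas)


def qi_like_score(features: np.ndarray):
    """Synthesize features into a single score and fetch similar atlas cases."""
    score = scorer.predict_proba(features.reshape(1, -1))[0, 1]
    _, neighbor_ids = index.kneighbors(features.reshape(1, -1))
    return score, neighbor_ids[0]  # score in [0, 1] + indices of similar cases


score, similar_cases = qi_like_score(rng.normal(size=8))
```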

    AI/ML Overview

    Here's a summary of the acceptance criteria and the study that proves the device meets them, based on the provided text:

    Acceptance Criteria and Device Performance for QuantX

    1. Table of Acceptance Criteria and Reported Device Performance

    The primary acceptance criteria for QuantX, as a Class II device with Special Controls, revolve around its ability to improve reader performance in diagnosing breast cancer when used as an aid compared to without it.

    Criterion 1: Improvement in Reader Performance (from Special Control 1.iii). Demonstrate that the device improves reader performance in the intended use population when used in accordance with the instructions for use, based on appropriate diagnostic accuracy measures (e.g., receiver operator characteristic plot, sensitivity, specificity, predictive value, and diagnostic likelihood ratio).

    Reported performance (primary endpoint, improvement in AUC):
    • Proper-binormal method: AUC 0.7055 (first read, without QuantX) vs. 0.7575 (second read, with QuantX); ΔAUC = 0.0520 (95% CI: [0.0022, 0.1018], p = 0.0408).
    • Trapezoidal method: AUC 0.7090 (first read, without QuantX) vs. 0.7577 (second read, with QuantX); ΔAUC = 0.0487 (95% CI: [-0.0011, 0.0985], p = 0.0550).
    • Conclusion: The study "marginally met the primary endpoint," with the proper-binormal method demonstrating a statistically significant improvement in AUC.

    Criterion 2: No Unintended Reduction in Sensitivity or Specificity (secondary analysis). Ensure there is not an unintended reduction in either sensitivity or specificity.

    Reported performance (secondary analyses, sensitivity and specificity, descriptive):
    • BI-RADS cut-point of 3 (≥3 indicates positive): sensitivity difference 3.8% (95% CI: [0.8, 7.4]); specificity difference -1.0% (95% CI: [-6.5, 4.3]).
    • BI-RADS cut-point of 4a (≥4a indicates positive): sensitivity difference 5.1% (95% CI: [-0.9, 10.9]); specificity difference -0.5% (95% CI: [-7.3, 6.0]).
    • Conclusion: Secondary analyses "suggest improved sensitivity based on BI-RADS 3 as the cut-point without decreased specificity for some readers" (not pre-specified for formal hypothesis testing).

    Criterion 3: Standalone Performance. Standalone performance testing protocols and results of the device.

    Reported performance (standalone testing):
    • On the Similar Case Database (i.e., training data): AUC = 0.86 ± 0.02 (mean ± standard error). (Note: this is not considered independent validation, but the database contains cases from important cohorts.)
    • On the Reader Study Test Database (automated segmentation): AUC = 0.75 ± 0.05 (mean ± standard error).
    • On the Reader Study Test Database (segmentation reflecting clinical reader study variability): AUC = 0.71 ± 0.05 (mean ± standard error).

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size: The reader study test database included 111 breast MRI cases, with one lesion per case used for ROC analysis. Of these, 54 were cancerous and 57 were non-cancerous.
    • Data Provenance: The data was retrospectively collected from three different institutions: an academic breast imaging center, a dedicated cancer imaging center, and a community-based imaging center. The cases were collected from the US (e.g., University of Chicago Medical Center, Memorial Sloan Kettering Cancer Center, and X-Ray Associates of New Mexico).
      • Date Range of Data Collection:
        • Philips 1.5T: 02/2009 - 01/2014
        • Philips 3T: 05/2010 - 12/2013
        • Siemens 1.5T: 10/2013 - 12/2014
        • GE 1.5T: 03/2011
        • GE 3T: 01/2009 - 09/2013

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Number of Experts: Not specified. For the Similar Case Database (which shared case inclusion criteria with the standalone test set), the text states that "lesion type had been determined by multidisciplinary review," but it does not give the number of experts involved. For the reader study test set, non-biopsied non-cancers required "clinical and radiology reports and a negative follow-up MRI study at a minimum of 12 months."
    • Qualifications of Those Experts: For biopsied cases, ground truth came directly from pathology reports, implying pathologists served as the experts. For non-biopsied non-cancers, ground truth was established from "clinical and radiology reports and a negative follow-up MRI study," suggesting radiologists and clinicians were involved. The stated reader qualifications for the MRMC study (radiologists with at least 1 year of breast MRI interpretation experience, fellowship training in breast imaging or 2 years' breast imaging experience, and MQSA qualification) indicate the expertise typically expected in such a setting.

    4. Adjudication Method for the Test Set

    The ground truth was established based on biopsy-proven diagnosis (pathology reports) for cancers and biopsied non-cancers. For non-biopsied benign lesions, it was based on clinical and radiology reports and a negative follow-up MRI study at a minimum of 12 months, along with multidisciplinary review. This implies a form of consensus or adjudicated ground truth for the non-biopsied benign cases. The document mentions concordance was defined as a tissue biopsy result being compatible with the abnormal pre-biopsy imaging appearance, indicating an adjudicating process for these specific cases.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size

    • Yes, an MRMC comparative effectiveness study was done.
    • Effect Size of Human Readers' Improvement with AI vs. Without AI Assistance:
      • Using the proper-binormal method (primary analysis): The average AUC across all readers improved by 0.0520 (from 0.7055 to 0.7575) when using QuantX.
      • Using the trapezoidal method (secondary analysis): The average AUC across all readers improved by 0.0487 (from 0.7090 to 0.7577).

    6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done

    • Yes, standalone performance testing was done.
    • Results (a bootstrap error-bar sketch follows this list):
      • On the Similar Case Database (training data): AUC = 0.86 ± 0.02.
      • On the Reader Study Test Database (with automated segmentation, no manual correction): AUC = 0.75 ± 0.05.
      • On the Reader Study Test Database (with segmentation allowing for clinical reader study variability in seed point locations): AUC = 0.71 ± 0.05.
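    The text does not say how the "± standard error" on these standalone AUCs was computed; one common approach is a case-level bootstrap, sketched below on synthetic scores. This is an assumption about methodology, not a description of the sponsor's analysis.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)

# Synthetic standalone scores for 111 cases (54 cancers, 57 non-cancers).
truth = np.array([1] * 54 + [0] * 57)
scores = truth * 0.8 + rng.normal(0, 1.0, 111)


def bootstrap_auc_se(y: np.ndarray, s: np.ndarray, n_boot: int = 2000) -> float:
    """Standard error of the empirical AUC via stratified case resampling."""
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    aucs = []
    for _ in range(n_boot):
        # Resample within each class so every replicate has both classes.
        idx = np.concatenate([rng.choice(pos, len(pos), replace=True),
                              rng.choice(neg, len(neg), replace=True)])
        aucs.append(roc_auc_score(y[idx], s[idx]))
    return float(np.std(aucs))


print(f"AUC = {roc_auc_score(truth, scores):.2f} "
      f"± {bootstrap_auc_se(truth, scores):.2f}")
```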

    7. The Type of Ground Truth Used (Expert Consensus, Pathology, Outcomes Data, Etc.)

    The ground truth for both the training and test sets was a combination of the following (a sketch encoding this hierarchy as a labeling rule appears after the list):

    • Pathology: Biopsy-proven diagnosis for cancerous lesions and biopsied non-cancerous lesions.
    • Outcomes Data/Follow-up: Negative follow-up MRI study at a minimum of 12 months for non-biopsied benign lesions.
    • Expert Consensus/Multidisciplinary Review: For non-biopsied benign lesions, "clinical and radiology reports" were considered, and for cases in the Similar Case Database, lesion type "had been determined by multidisciplinary review."
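    The hierarchy in the list above (pathology first, otherwise a 12-month negative follow-up with multidisciplinary review) can be encoded as a simple labeling rule. The sketch below is an illustrative rendering of that logic; the field names are invented for the example.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class LesionRecord:
    # Field names are hypothetical; they encode the criteria described above.
    pathology_result: Optional[str]          # "malignant" / "benign" / None if not biopsied
    negative_followup_months: Optional[int]  # months of negative MRI follow-up
    multidisciplinary_review: bool


def ground_truth_label(rec: LesionRecord) -> Optional[str]:
    """Apply the pathology-first, then follow-up, labeling hierarchy."""
    if rec.pathology_result is not None:
        return rec.pathology_result          # biopsy-proven truth
    if (rec.negative_followup_months is not None
            and rec.negative_followup_months >= 12
            and rec.multidisciplinary_review):
        return "benign"                      # non-biopsied, stable >= 12 months
    return None                              # insufficient evidence; exclude


assert ground_truth_label(LesionRecord(None, 14, True)) == "benign"
```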

    8. The Sample Size for the Training Set

    The text refers to the "Similar Case Database" as the training database (or the data on which the QI Score's machine learning algorithm was trained).

    • Training Set Sample Size: This database included a total of 652 lesions (314 benign and 338 malignant), collected across different MR system manufacturers and field strengths between 2008 and 2014. More detailed breakdown: Philips (429 lesions), GE (48 lesions), Siemens (66 lesions).

    9. How the Ground Truth for the Training Set Was Established

    The ground truth for the training set (Similar Case Database) was established using the same criteria as the test set:

    • Biopsy-proven truth: For lesions with pathology.
    • Clinical and radiology reports + negative follow-up: For non-biopsied benign lesions (minimum 12 months).
    • Multidisciplinary review: For all cases, ensuring lesion type had been determined by a multidisciplinary review.

    K Number
    K170195
    Device Name
    QuantX
    Date Cleared
    2017-05-17

    (114 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Device Name:

    QuantX

    Intended Use

    QuantX is a quantitative image analysis software device used to assist radiologists in the assessment and characterization of breast abnormalities using MR image data. The software automatically registers images, and segments and analyzes user-selected regions of interest (ROI). QuantX extracts image data from the ROI to provide volumetric and surface area analysis. These imaging features are then displayed to the user in a dedicated analysis panel on the display monitor.

    When interpreted by a skilled physician, this device provides information that may be useful for screening and diagnosis. Patient management decisions should not be made based solely on the results of QuantX analysis.

    QuantX may also be used as an image viewer of multi-modality digital images, including ultrasound and mammography. The software also includes tools that allow users to measure and document images, and output in a structured report.

    Limitations: QuantX is not intended for primary interpretation of digital mammography images.

    Device Description

    QuantX is a software program that analyzes patient breast images, and is designed to aid radiologists in the characterization of lesions. After MR images are acquired from a third-party acquisition device, they can be loaded into the QuantX database manually, or automatically if connected to a DICOM-compatible device. Users then select and load the patient case to use the QuantX software tools in the examination of the images. Different types of MR sequences (T1, DCE, T2, DWI, etc.) can be viewed at the same time as mammography or ultrasound images from the same patient.

    A variety of viewing tools are available to users. The MR images can be examined under different image planes (axial, sagittal, and coronal) as well as different image time points and slices. Users can use keyboard shortcuts or a scrolling mechanism to navigate through MR image slices. Colored axes serve as slice location markers for ease of pinpointing regions of interest (ROI). Images can be panned, changed in contrast, zoomed in or out, and measured. The Colormap feature visualizes contrast uptake (enhancement) studies, and a time intensity curve can be viewed for any location on the MR image.
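    The time intensity curve mentioned here is a standard DCE-MRI readout: signal intensity at a chosen location plotted across post-contrast time points, often normalized to the pre-contrast baseline. A minimal NumPy sketch, assuming the DCE series is already stacked as a (time, z, y, x) array:

```python
import numpy as np


def time_intensity_curve(dce: np.ndarray, z: int, y: int, x: int) -> np.ndarray:
    """Percent enhancement over time for one voxel of a (t, z, y, x) DCE volume."""
    signal = dce[:, z, y, x].astype(float)
    baseline = signal[0] if signal[0] != 0 else 1.0  # pre-contrast time point
    return 100.0 * (signal - baseline) / baseline


# Synthetic example: 6 time points of a small 4D volume, curve at one voxel.
dce = np.random.default_rng(3).uniform(100, 200, size=(6, 4, 64, 64))
curve = time_intensity_curve(dce, z=2, y=32, x=32)
```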

    QuantX includes image registration, and automated segmentation and analysis functions, based on a seed point indicated by the user. Users can select a ROI manually from the MR image, or use the automatic segmentation tool to obtain and accept a ROI, for input to the QuantX analytics.
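    Seed-point-driven automatic segmentation is commonly implemented with techniques such as region growing; the generic 3D region grower below illustrates the concept only and is not QuantX's actual segmentation method, which the summary does not describe in detail.

```python
from collections import deque

import numpy as np


def region_grow(volume: np.ndarray, seed: tuple, tol: float) -> np.ndarray:
    """Grow a 3D mask from a seed voxel, accepting 6-connected neighbors
    whose intensity is within `tol` of the seed intensity."""
    mask = np.zeros(volume.shape, dtype=bool)
    seed_val = float(volume[seed])
    queue = deque([seed])
    mask[seed] = True
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            n = (z + dz, y + dy, x + dx)
            if (all(0 <= n[i] < volume.shape[i] for i in range(3))
                    and not mask[n]
                    and abs(float(volume[n]) - seed_val) <= tol):
                mask[n] = True
                queue.append(n)
    return mask


# mask = region_grow(volume, seed=(12, 128, 140), tol=150.0)  # hypothetical values
```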

    The QuantX analytics display the QI Most Enhancing Curve, volume and surface area of the specified region. A user experienced with the significance of such data will be able to view and interpret this additional information during the diagnosis of breast lesions.

    AI/ML Overview

    The provided text describes the QuantX device and its FDA 510(k) summary. However, it does not contain the specific details required to fully address all sections of your request regarding acceptance criteria and the study that proves the device meets those criteria.

    Here's what can be extracted and what information is missing:

    1. A table of acceptance criteria and the reported device performance

    The document mentions "Verification testing of lesion segmentation on MR image data" and "Verification that all measurements and BIRADS reporting were recorded correctly," which implies that there were acceptance criteria for these functions. However, the specific numerical acceptance criteria (e.g., minimum accuracy, sensitivity, specificity, or error bounds) and the reported device performance against these criteria are not provided.

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    This information is not available in the provided document. The summary only states that nonclinical tests were performed, including "Verification testing of lesion segmentation on MR image data." It does not specify the number of cases or the nature of the data used for these verification tests.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    This information is not available in the provided document.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    This information is not available in the provided document.

    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    The document states, "QuantX is a quantitative image analysis software device used to assist radiologists... When interpreted by a skilled physician, this device provides information that may be useful for screening and diagnosis." This implies an assistive role for radiologists. However, there is no mention of a multi-reader multi-case (MRMC) comparative effectiveness study or any data on how much human readers improve with AI assistance.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    The document does not explicitly describe a standalone performance study. While it mentions "automated segmentation and analysis functions," the overall context emphasizes its role in assisting radiologists. Therefore, standalone performance details are not provided.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    The document mentions "Verification testing of lesion segmentation on MR image data" and "Verification that all measurements... were recorded correctly." This implies that there were reference standards for these tests, likely based on expert annotations or ground truth derived from clinical knowledge. However, the specific type of ground truth (e.g., expert consensus, pathology, long-term outcomes) is not explicitly stated.

    8. The sample size for the training set

    This information is not available in the provided document. The document describes the device's functions but does not delve into the details of its development or training.

    9. How the ground truth for the training set was established

    This information is not available in the provided document.


    Summary of available and missing information:

    The provided 510(k) summary focuses on establishing substantial equivalence based on technological characteristics and general non-clinical testing. It highlights the device's intended use to assist radiologists and its features like image registration, segmentation, and volumetric/surface area analysis. However, it lacks the detailed performance metrics, study designs (e.g., sample sizes, data provenance, ground truth establishment, expert qualifications, adjudication methods), and comparative effectiveness data that would be found in a comprehensive clinical validation study report. These types of details are often found in the full 510(k) submission, but not typically in the public summary document.

