The QuantX Breast MRI Biopsy Guidance Plugin is a software application that assists users of the QuantX software device in planning MRI-guided interventional procedures. Using the image-space coordinates of a user-specified region of interest and of fiducial markers from MR images, the software automatically calculates the location and depth of the targeted region of interest, such as a lesion or suspected lesion. Its primary goal is to identify where and how deep a biopsy needle should be inserted into an imaged breast in order to strike a targeted lesion or region of interest chosen by a trained medical professional. The plugin may be used in either the SE or Advanced version of QuantX, a software program for the display and analysis of medical images.
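The filing does not disclose the plugin's actual algorithm, but the targeting geometry it describes (fiducial coordinates plus a lesion coordinate yielding a grid cell and needle depth) can be sketched as follows. All names, the coordinate convention (x = needle axis, y/z = grid face), and the 20 mm cell pitch are assumptions for illustration, not values from the document.

```python
# Hypothetical sketch of grid-based MRI biopsy targeting: given the
# image-space coordinates (in mm) of a fiducial marker that anchors the
# grid plate and of the targeted lesion, compute the grid cell indices
# and the needle insertion depth. Coordinate convention and cell size
# are assumptions, not taken from the 510(k) summary.
CELL_SIZE_MM = 20.0  # assumed grid cell pitch


def plan_target(lesion_mm, fiducial_mm, cell_size=CELL_SIZE_MM):
    """Return (column, row) grid cell indices and needle depth in mm.

    lesion_mm, fiducial_mm: (x, y, z) image-space coordinates in mm,
    where x runs along the needle axis and y, z span the grid face.
    """
    dx = lesion_mm[0] - fiducial_mm[0]  # distance along the needle axis
    dy = lesion_mm[1] - fiducial_mm[1]  # horizontal offset on grid face
    dz = lesion_mm[2] - fiducial_mm[2]  # vertical offset on grid face
    column = int(dy // cell_size)       # which grid column the lesion falls in
    row = int(dz // cell_size)          # which grid row the lesion falls in
    depth = abs(dx)                     # insertion depth from grid plane
    return column, row, depth


col, row, depth = plan_target((65.0, 47.0, 33.0), (10.0, 5.0, 8.0))
```

In practice a real system would also map the within-cell offset to a specific needle block hole and account for needle throw and obturator length; those details are omitted here.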
The provided text does not contain information about acceptance criteria or a study that specifically proves the device meets those criteria. The document is a 510(k) premarket notification summary for the QuantX Breast MRI Biopsy Guidance Plugin, which focuses on demonstrating substantial equivalence to a predicate device rather than presenting explicit acceptance criteria and corresponding performance study results in the format requested.
However, the "Nonclinical Performance Data Testing and Reviews" section (1.7) lists various verification and validation tests performed. I can infer potential "acceptance criteria" from these tests regarding the functionality and accuracy of the device. The reported device performance is indicated by the statement that the tests were "successfully tested" and "demonstrates that the device conforms to user needs and intended use."
Here's an attempt to structure the available information as requested, though many fields will be marked as "Not Provided" or inferred.
1. Table of Acceptance Criteria and Reported Device Performance
Given the nature of the document (510(k) summary demonstrating substantial equivalence), explicit numerical acceptance criteria and precise performance metrics are not detailed. The "performance" is generally described as "verification of correct" or "successful testing."
| Acceptance Criteria (Inferred from Verification Tests) | Reported Device Performance |
|---|---|
| Proper activation of biopsy guidance mode and interface display | Successfully tested; achieved proper activation and display. |
| Proper creation of needle block images | Successfully tested; achieved proper creation. |
| Proper loading of image series for biopsy guidance | Successfully tested; achieved proper loading. |
| Complete and correct selection of grid type variables | Successfully tested; ensured complete and correct selection. |
| Complete and correct selection of needle type variables | Successfully tested; ensured complete and correct selection. |
| Correct fiducial marker image-space coordinates | Successfully tested; verified correct coordinates. |
| Correct size and location of grid image overlay | Successfully tested; verified correct size and location. |
| Correct lesion marker overlay display and image-space coordinates | Successfully tested; verified correct display and coordinates. |
| Proper display of selected breast, grid cell, and block hole | Successfully tested; achieved proper display. |
| Proper display of needle block image and needle depth | Successfully tested; achieved proper display. |
| Correct patient orientation indicators | Successfully tested; verified correct indicators. |
| Correct lesion depth calculation (by comparison to predicate) | Successfully tested; verified correct calculation. |
| Correct needle block hole (by comparison to predicate) | Successfully tested; verified correct hole. |
| Correct grid cell (by comparison to predicate) | Successfully tested; verified correct grid cell. |
| Guidance worksheet output | Successfully tested; verified output. |
| Conformance to user needs and intended use | Validation testing demonstrates conformance. |
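Several rows above describe verification "by comparison to predicate", i.e. the plugin's outputs checked against the predicate device's outputs. A minimal sketch of how such a comparison might be automated is shown below; the result fields, case structure, and the 0.5 mm depth tolerance are assumptions, not values from the filing.

```python
# Hypothetical verification harness: compare the plugin's targeting
# outputs against the predicate device's outputs for a set of test
# cases. Grid cell and block hole must match exactly; lesion depth
# must agree within an assumed tolerance. The tolerance value and all
# field names are illustrative assumptions.
DEPTH_TOLERANCE_MM = 0.5  # assumed agreement tolerance for depth


def verify_against_predicate(plugin_results, predicate_results,
                             tol=DEPTH_TOLERANCE_MM):
    """Return the list of case IDs whose outputs disagree."""
    failures = []
    for case_id, new in plugin_results.items():
        ref = predicate_results[case_id]
        if (new["grid_cell"] != ref["grid_cell"]
                or new["block_hole"] != ref["block_hole"]
                or abs(new["depth_mm"] - ref["depth_mm"]) > tol):
            failures.append(case_id)
    return failures


plugin = {"case1": {"grid_cell": "C4", "block_hole": 5, "depth_mm": 41.2}}
predicate = {"case1": {"grid_cell": "C4", "block_hole": 5, "depth_mm": 41.0}}
fails = verify_against_predicate(plugin, predicate)  # empty list: agreement
```

A test passes the verification only when every case agrees, which matches the summary's binary "successfully tested" reporting style.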
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: Not provided. The document mentions "Nonclinical tests" but does not specify the number of cases or images used for testing.
- Data Provenance: Not provided. The origin of the data (e.g., country of origin, retrospective or prospective collection) is not mentioned. Given that the device is software for MRI guidance, the test data would likely consist of breast MRI scans.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- Number of Experts: Not provided.
- Qualifications of Experts: Not provided. The study refers to "comparison to predicate" for some verifications, suggesting the predicate device's output or established methods served as a reference, but does not detail human expert involvement in establishing ground truth for the test data itself.
4. Adjudication Method for the Test Set
- Adjudication Method: Not provided. There is no mention of independent expert review, consensus, or other adjudication processes for the test results.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- MRMC Study Done: No. The document does not describe an MRMC study comparing human readers with and without AI assistance. The focus is on the software's functional correctness and substantial equivalence to a predicate device.
- Effect Size of Human Readers Improvement with AI: Not applicable, as no MRMC study was performed or reported.
6. Standalone (Algorithm Only) Performance Study
- Standalone Study Done: Yes, implicitly. The listed "Nonclinical tests" appear to evaluate the algorithm's functionality and accuracy in a standalone manner (e.g., "Verification of correct lesion depth calculation," "Verification of correct fiducial marker image-space coordinates"). The results are described as successful.
7. Type of Ground Truth Used
- Type of Ground Truth: The ground truth for several verification steps was established by comparison to the predicate device. For other functional tests, the ground truth was implied by the Expected Results of the software's specified functionality (e.g., "proper activation," "correct display"). There is no mention of pathology, expert consensus on patient outcomes, or other clinical ground truth methods for the non-clinical tests.
8. Sample Size for the Training Set
- Sample Size for Training Set: Not provided. The document focuses on verification and validation of a software feature, not on machine learning model training. It's unclear if the "biopsy guidance plugin" itself involves machine learning that would require a distinct training set. If it's a computational algorithm for geometry calculations, a training set might not be conventionally applicable.
9. How the Ground Truth for the Training Set Was Established
- How Ground Truth Was Established for Training Set: Not provided, and likely not applicable given the apparent nature of the device as a computational guidance tool rather than a machine learning classifier.
§ 892.2050 Medical image management and processing system.
(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).