510(k) Data Aggregation (259 days)
Quantitative Total Extensible Imaging (QTxI) is a software tool used to aid in evaluation and information management of digital medical images by trained medical professionals including, but not limited to, radiologists, nuclear medicine physicians, medical imaging technologists, dosimetrists and physicists. The medical modalities of these medical images include DICOM CT and PET as supported by ACR/NEMA DICOM 3.0.
QTxI assists in the following indications:
- Receive, store, retrieve, display and process digital medical images.
- Create, display and print reports from those images.
- Provide medical professionals with the ability to display, register, and fuse medical images.
- Identify Regions of Interest (ROIs) and perform ROI contouring allowing quantitative/statistical analysis of full or partial body scans.
- Evaluate quantitative change in ROIs (total or partial body; individual ROI within individual) with 3D interactive rendering of images with highlighted ROIs.
Quantitative Total Extensible Imaging (QTxI) is a software tool designed for use in medical imaging. It is stand-alone software that operates on Windows 7 and Windows 10. Its intended function is to provide medical professionals with the means to display, register and fuse medical images from multiple modalities, including DICOM PET and CT. Additionally, it identifies Regions of Interest (ROIs) and performs ROI contouring, allowing quantitative/statistical analysis of full or partial-body scans through registration to template space.
QTxI is designed to support multiple image analysis modules, each intended for a specific image analysis purpose. Currently QTxI includes only the Quantitative Total Bone Imaging (QTBI) module, which is designed to identify and measure hot-spots on PET scans. QTBI aids the efficiency of medical professionals through automatic quantification of ROIs and of changes in those ROIs, including 3D interactive rendering of the patient skeleton with highlighted Regions of Interest.
QTxI also functions as a Picture Archive and Communications System (PACS) intended to receive, store, retrieve, display and process digital medical images, as well as create, display and print reports from those images. It also provides platform features for security, workflow and integration.
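The submission does not describe QTxI's internal algorithms. Purely as an illustration of the kind of ROI quantification the summary describes (uptake statistics and volume within a contoured region), here is a minimal sketch assuming a NumPy array of PET SUV values and a binary ROI mask; none of these names come from the device:

```python
import numpy as np

def roi_statistics(pet_volume, roi_mask, voxel_volume_ml):
    """Summarize PET uptake inside a binary ROI mask.

    pet_volume      : 3D array of SUV (or activity) values.
    roi_mask        : boolean array of the same shape, True inside the ROI.
    voxel_volume_ml : physical volume of one voxel in millilitres.
    """
    values = pet_volume[roi_mask]
    return {
        "suv_max": float(values.max()),
        "suv_mean": float(values.mean()),
        "volume_ml": float(roi_mask.sum() * voxel_volume_ml),
    }

# Toy usage with synthetic data; the simple threshold stands in for hot-spot
# detection and is not the device's actual segmentation method.
pet = np.random.rand(64, 64, 64) * 5.0
mask = pet > 4.0
print(roi_statistics(pet, mask, voxel_volume_ml=0.064))
```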
The provided text is a 510(k) Pre-market Notification from the FDA regarding the Quantitative Total Extensible Imaging (QTxI) device. However, it does not contain the specific details about a study proving the device meets acceptance criteria as described in your request.
The document primarily focuses on establishing "substantial equivalence" of QTxI to a predicate device (Exini Diagnostics AB; EXINI, K122205) and a supporting predicate device (MIMvista Corp. MIM4.1 (Seastar), K071964). It mentions "Performance Data (Nonclinical)" but only in very general terms:
- "Software verification testing that demonstrates the device meets product performance and functional specifications."
- "Software verification testing demonstrating that DICOM information collected with medical imaging systems and transmitted through manual or virtual input are captured, transmitted, and stored properly to maintain data integrity (e.g., no loss of data)."
It concludes that "QTxI met all predetermined acceptance criteria of design verification and validation as specified by applicable standards, and test protocols." However, it does not detail these acceptance criteria or the specific study results.
Therefore, I cannot provide the requested information based solely on the provided text. The document states that performance data was submitted and implies successful verification and validation, but it does not present the data itself or the specifics of the study design.
To answer your request, if the information were present in the document, it would look something like this (speculative, based on typical FDA submissions, but not found in the provided text):
Acceptance Criteria and Device Performance Study
While the provided 510(k) summary indicates that QTxI met all predetermined acceptance criteria through non-clinical performance bench tests and simulated clinical performance tests, the specific details of these criteria and the study results are not explicitly enumerated in the document. The general nature of the "Performance Data (Nonclinical)" section suggests that detailed quantitative performance data were likely submitted separately as part of the full 510(k) application.
Based on the general statements, a hypothetical structure of the requested information, if it were present, would be:
1. Table of Acceptance Criteria and Reported Device Performance
| Performance Metric (Hypothetical) | Acceptance Criteria (Hypothetical) | Reported Device Performance (Hypothetical) | P-value/Confidence Interval (Hypothetical) |
|---|---|---|---|
| ROI Contouring Accuracy (Jaccard Index) | ≥ 0.90 (vs. Expert Consensus) | 0.92 ± 0.03 | < 0.001 |
| Quantitative Analysis Precision (CV% for ROI Volume) | ≤ 5% | 3.8% | N/A (precision measure) |
| Image Registration Accuracy (Target Registration Error) | ≤ 2 mm (mean) | 1.5 mm ± 0.5 mm | N/A (accuracy measure) |
| DICOM Data Integrity (Loss/Corruption Rate) | 0% | 0% (in 1000 transfers) | N/A |
| Software Functionality (Pass Rate of Test Cases) | 100% | 100% | N/A |
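None of the metrics in the table above appear in the submission; they are hypothetical. For readers unfamiliar with them, a minimal sketch of how the first three could be computed, assuming NumPy arrays for segmentation masks, repeated volume measurements, and landmark coordinates:

```python
import numpy as np

def jaccard_index(a, b):
    """Overlap of two boolean segmentation masks: |A ∩ B| / |A ∪ B|."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return float(np.logical_and(a, b).sum() / union) if union else 1.0

def coefficient_of_variation(volumes_ml):
    """CV% of repeated ROI volume measurements (a precision metric)."""
    volumes_ml = np.asarray(volumes_ml, dtype=float)
    return float(100.0 * volumes_ml.std(ddof=1) / volumes_ml.mean())

def target_registration_error(fixed_pts, moved_pts):
    """Mean Euclidean distance (mm) between corresponding landmarks after registration."""
    diffs = np.asarray(fixed_pts, dtype=float) - np.asarray(moved_pts, dtype=float)
    return float(np.linalg.norm(diffs, axis=1).mean())
```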
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: The document does not specify. Hypothetically, for a general-purpose imaging software device, it might be in the range of 50-200 cases per test type (e.g., for ROI contouring accuracy on different anatomies, or for image registration tasks).
- Data Provenance: The document does not specify. Hypothetically, for devices of this nature, data could be retrospectively collected clinical DICOM images from various institutions (e.g., U.S. and European hospitals) to ensure variability in scanners and patient demographics.
3. Number of Experts Used to Establish Ground Truth and Qualifications
- Number of Experts: The document does not specify. Hypothetically, 2-3 experts are typical for consensus ground truth, with more for an MRMC study.
- Qualifications of Experts: The document does not specify. Hypothetically, e.g., three board-certified radiologists with 10+ years of experience in oncological imaging and PET/CT interpretation, specializing in bone metastases.
4. Adjudication Method for the Test Set
- Adjudication Method: The document does not specify. Hypothetically, for ground truth establishment, a common method is 2+1 (two experts independently annotate, and a third adjudicates disagreements); if there are no disagreements, the consensus of the two serves as ground truth. Alternatively, 3+0 (majority vote of three independent experts) or a defined consensus meeting might be used (a minimal sketch of majority voting follows).
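As an illustration of the hypothetical 3+0 scheme mentioned above (not something described in the submission), a per-voxel majority vote over independent expert masks could be computed as follows:

```python
import numpy as np

def majority_vote(masks):
    """Consensus mask: a voxel is foreground if more than half of the readers marked it."""
    stacked = np.stack([np.asarray(m, dtype=bool) for m in masks])
    return stacked.sum(axis=0) > (len(masks) / 2)

# Hypothetical example: three readers annotating the same 32x32 slice.
readers = [np.random.rand(32, 32) > 0.5 for _ in range(3)]
consensus = majority_vote(readers)
```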
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- MRMC Study: The document does not explicitly state that an MRMC study was performed to assess human reader improvement with AI assistance. The device is described as a "software tool to aid in evaluation and information management," implying it's a tool for professionals, but the performance data presented is general "nonclinical" verification.
- Effect Size: Therefore, no effect size of human readers improving with AI vs. without AI assistance can be reported from this document.
6. Standalone (Algorithm Only) Performance
- Standalone Performance: The document describes QTxI as stand-alone software that operates on Windows 7 and Windows 10; this refers to how the software is deployed, not to standalone (algorithm-only) performance in the AI sense. The "non-clinical performance bench tests and simulated clinical performance tests" likely refer to the algorithm's performance without a human in the loop, particularly for tasks like ROI contouring and quantitative analysis, but specific metrics are not provided.
7. Type of Ground Truth Used
- Type of Ground Truth: The document does not specify the method used to establish ground truth for the "software verification testing." Hypothetically, for a device performing ROI contouring and quantitative analysis on images, the ground truth would most likely be expert consensus annotations/measurements on the medical images. Pathology or outcomes data might be relevant for clinical utility but are less direct for verifying a software's image processing capabilities.
8. Sample Size for the Training Set
- Training Set Sample Size: The document does not provide details on a training set, as this is a 510(k) for a PACS/image processing tool rather than a deep learning AI model of the kind that typically requires a distinct training set. The "software verification testing" mentioned would apply to the developed software based on its functional specifications. If QTxI uses "template space" registration as mentioned, those templates would have been developed, but not via a "training set" in the deep learning sense.
9. How Ground Truth for the Training Set Was Established
- Ground Truth for Training Set: As no explicit training set is mentioned in the context of typical AI model development, the method for establishing its "ground truth" is not applicable from this document. If elements of machine learning were present (not explicitly stated for QTxI), the ground truth for training data would similarly be established by expert annotation or curated datasets.