Search Results
Found 2 results
510(k) Data Aggregation
(133 days)
RADLogics, Inc.
The AIMI-Triage CXR PTX Application is a notification-only triage workflow tool for use by hospital networks and clinics to identify and help prioritize chest X-rays acquired in the acute setting for review by hospital radiologists. The device operates in parallel to and independent of the standard-of-care image interpretation workflow. Specifically, the device uses an artificial intelligence algorithm to analyze images for features suggestive of moderate to large sized pneumothorax; it makes case-level output available to a PACS/workstation for worklist prioritization or triage. Identification of suspected cases of moderate to large sized pneumothorax is not for diagnostic use beyond notification.
The AIMI-Triage CXR PTX Application is limited to analysis of imaging data as a guide to possible urgency of adult chest X-ray image review, and should not be used in lieu of full patient evaluation or relied upon to make or confirm diagnoses. Notified radiologists are responsible for engaging in appropriate patient evaluation as per local hospital procedure before making care-related decisions or requests. The device does not replace review and diagnosis of the X-rays by radiologists. The device is not intended to be used with plain film X-rays.
The AIMI-Triage CXR PTX provides a chest X-ray prioritization service for use by radiologists to identify features suggestive of moderate to large sized pneumothorax. The artificial intelligence algorithm, trained via pattern recognition, processes each chest X-ray and flags those that appear to contain a moderate to large sized pneumothorax for urgent radiologist review. X-rays without an identified anomaly are placed in the worklist for routine review, which is the current standard of care. The user interface is minimal, consisting of the radiologist's existing picture archiving and communication system (PACS) viewer and worklist in which positively identified images are flagged by the software to notify of the suspected anomaly. Images are not marked or otherwise altered, and no diagnoses are provided.
The device does not have any direct accessories. However, it interacts with hospital communication and database systems in order to read and analyze cases in the worklist of the hospital's PACS system in order to identify suspected abnormal findings and transmit corresponding notifications to reflect its recommended prioritization of patient examinations for radiologist review. The software output is compatible with any PACS viewer and worklist.
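The triage flow described above (score each study, flag suspected positives, and reorder the reading worklist without altering any images) can be sketched as follows. This is a minimal illustration, not the vendor's actual interface: the class names, `ptx_score` field, and `PTX_THRESHOLD` operating point are all assumptions for the example.

```python
from dataclasses import dataclass

PTX_THRESHOLD = 0.5  # assumed operating point; the device's actual threshold is not published

@dataclass
class Study:
    accession: str
    ptx_score: float   # algorithm confidence of a moderate/large pneumothorax
    flagged: bool = False

class TriageWorklist:
    """Notification-only worklist: flagged studies are surfaced first for
    urgent review; all others remain in arrival order for routine review."""

    def __init__(self) -> None:
        self.studies: list[Study] = []

    def add(self, study: Study) -> None:
        # Flag the case; the image itself is never marked or modified.
        study.flagged = study.ptx_score >= PTX_THRESHOLD
        self.studies.append(study)

    def reading_order(self) -> list[str]:
        # sorted() is stable, so within each group arrival order is preserved.
        return [s.accession for s in
                sorted(self.studies, key=lambda s: not s.flagged)]
```

For example, adding studies A1 (score 0.1), A2 (0.9), and A3 (0.2) yields the reading order A2, A1, A3: the single flagged case jumps the queue while routine cases keep their original order.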
Acceptance Criteria and Study Details for AIMI-Triage CXR PTX
1. Acceptance Criteria and Reported Device Performance
| Criteria | Accepted Performance Goal | Reported Device Performance |
|---|---|---|
| Overall AUC | Substantially equivalent to the predicate device, meeting a required performance goal (specific numerical value not explicitly stated, but implied to be in line with the predicate's performance). | 0.967 (95% CI: [0.950, 0.984]) |
| Sensitivity | Not explicitly stated as an acceptance criterion, but device performance is reported. | 92% (95% CI: [86%, 96%]) and 90% (95% CI: [84%, 95%]) for unspecified categories/cohorts within the overall dataset. |
| Time to Analyze and Notify | Substantially equivalent to the predicate device (22.1 seconds). | 20.3 seconds |
| Performance by Dataset and Region | Not explicitly stated as an acceptance criterion; detailed performance provided for evaluation. | NIH (US): Sensitivity 97.6% (93.2, 99.2), Specificity 90.8% (84.5, 94.7), AUROC 0.987 (0.973, 0.999); PADCHEST (OUS): Sensitivity 85.3% (79.0, 90.8), Specificity 89.7% (83.6, 93.9), AUROC 0.949 (0.918, 0.979) |
| Performance by Scanner Spatial Resolution | Not explicitly stated as an acceptance criterion; detailed performance provided for evaluation. | High range (0.170 lp/mm): Sensitivity 93.1% (84.8, 98.3), Specificity 91.4% (80.7, 96.5), AUROC 0.946 (0.911, 0.980) |
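Point estimates such as the sensitivity and specificity figures above, together with their 95% confidence intervals, are conventionally derived from a 2x2 confusion matrix. The sketch below uses hypothetical counts (the submission does not report the per-dataset TP/FN/TN/FP numbers) and the Wilson score interval, one common choice for binomial CIs; the sponsor's exact CI method is not stated.

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score 95% CI for a binomial proportion (z = 1.96)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Hypothetical counts for illustration only:
tp, fn = 138, 12   # pneumothorax-positive cases: detected / missed
tn, fp = 135, 15   # pneumothorax-negative cases: correctly cleared / flagged

sensitivity = tp / (tp + fn)          # 138/150 = 0.92
specificity = tn / (tn + fp)          # 135/150 = 0.90
sens_lo, sens_hi = wilson_ci(tp, tp + fn)
```

With these made-up counts the sensitivity point estimate is 92%, matching the order of magnitude in the table, and the Wilson interval brackets it on both sides.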
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: 300 frontal chest X-rays (PA/AP).
- Data Provenance: Collected from US and OUS (Outside US) patients, representative of the intended population. The datasets are specifically identified as NIH (US) and PADCHEST (OUS). The study was retrospective.
- Patient Demographics: 168 (56%) male with mean age 51.6 years (SD=18.6, range 18-91), and 132 (44%) female with mean age 51.8 years (SD=16.2, range 23-86).
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Number of Experts: 3 independent radiologists.
- Qualifications: US-board certified radiologists.
4. Adjudication Method for the Test Set
- The ground truth was established by 3 independent US-board certified radiologists.
- Each "Truther" involved in the ground truthing process was blinded to any other Truther's results, to any existing report, and to the results obtained by the AIMI-Triage CXR PTX software. This implies that the ground truth was established by individual assessment and likely involved a consensus or majority vote among the 3 radiologists, although the specific "2+1" or "3+1" method is not explicitly stated. The emphasis on independent and blinded assessment supports a robust adjudication process.
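One standard way to reduce three blinded, independent reads to a single reference label is a simple majority vote. The sketch below shows that generic rule; it is an assumption for illustration, since the summary does not state which adjudication scheme (2+1, 3+1, or otherwise) was actually used.

```python
from collections import Counter

def majority_label(reads: list[str]) -> str:
    """Collapse independent reader labels into one ground-truth label.

    Requires at least 2 of the readers to agree; with 3 readers and a
    binary label (e.g. 'PTX' vs 'normal') a majority always exists, but
    multi-class labels can tie, hence the explicit check."""
    label, count = Counter(reads).most_common(1)[0]
    if count < 2:
        raise ValueError("no majority among readers")
    return label
```

For example, reads of `["PTX", "PTX", "normal"]` resolve to `"PTX"`, while the dissenting read is simply outvoted rather than adjudicated by a fourth reader.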
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- It is not explicitly stated that an MRMC comparative effectiveness study was done to measure human reader improvement with AI assistance. The study focuses on standalone AI performance compared to a ground truth established by experts.
6. Standalone (Algorithm Only) Performance Study
- Yes, a standalone performance study was conducted. The "AIMI-Triage CXR PTX output was compared to the ground truth established by 3 independent US-board certified radiologists." This indicates the algorithm's performance without direct human-in-the-loop assistance during the evaluation.
7. Type of Ground Truth Used
- Expert Consensus: The ground truth was established by "3 independent US-board certified radiologists." While "consensus" isn't explicitly used, the involvement of multiple blinded experts suggests a robust process to define the true positive/negative cases.
8. Sample Size for the Training Set
- The document does not specify the sample size for the training set. It only mentions that the artificial intelligence algorithm was "trained via pattern recognition."
9. How the Ground Truth for the Training Set Was Established
- The document does not specify how the ground truth for the training set was established. It only indicates that the algorithm was "trained via pattern recognition."
(85 days)
RADLOGICS, INC.
The AlphaPoint software is a device that allows review, analysis, and interchange of CT chest images. It is intended for use with CT Chest images to assist medical professionals in image analysis. It is not intended to be the primary interpretation. The software provides segmentation and Hounsfield numerical analysis values which are indicative of various substances (i.e., air, lung, soft tissue, fat, water, transudate, exudate, blood, muscle and bone). The user can review, verify and correct the results of the system and generate a report of the findings.
The AlphaPoint system provides a full application framework with integration to PACS using DICOM. The system has the following functions:
- Communicates with PACS to get imaging studies for processing;
- Activates one or more applications that process the imaging data and use segmentation and Hounsfield measurement algorithms to find and measure various attributes in the images, and also identify particular slices as reference images for the findings;
- Formats the processing results for each study into a Preliminary Findings Report;
- Sends the results to PACS.

The software is written in C++, C#, and MATLAB.
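The Hounsfield analysis mentioned in the indications (values indicative of air, fat, water, soft tissue, bone, and so on) can be pictured as a lookup over HU ranges. The bins below are rough textbook approximations with simplified, non-overlapping boundaries; they are not the thresholds AlphaPoint actually uses, which the document does not disclose.

```python
# Approximate, simplified Hounsfield-unit bins (illustrative only; real
# tissue ranges overlap and vary with scanner calibration and pathology).
HU_BINS = [
    (-1024, -900, "air"),
    (-900,  -500, "lung parenchyma"),
    (-500,  -100, "mixed fat/air"),
    (-100,   -10, "fat"),
    (-10,     15, "water/transudate"),
    (15,      45, "soft tissue/exudate"),
    (45,     100, "blood/muscle (dense soft tissue)"),
    (100,   3071, "bone/calcium"),
]

def classify_hu(hu: float) -> str:
    """Map a Hounsfield value to an approximate substance label."""
    for lo, hi, label in HU_BINS:
        if lo <= hu < hi:
            return label
    return "out of range"
```

For instance, a voxel near -1000 HU classifies as air, one near 0 HU as water/transudate, and one at several hundred HU as bone/calcium, which mirrors the substance list in the indications for use.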
The provided text does not contain the detailed performance study results, acceptance criteria, or specific information about the test set (sample size, data provenance, ground truth establishment, or expert qualifications) that would allow for a comprehensive answer to your request.
The document is a 510(k) summary and FDA clearance letter for the AlphaPoint Imaging Software. It primarily focuses on:
- Device Description and Intended Use: What the device does and what it's for.
- Substantial Equivalence: How it compares to a predicate device.
- Verification and Validation Statement: A general statement that the software underwent V&V processes according to company procedures and FDA guidance, including DICOM compliance testing and internal software testing (unit tests, system testing). It mentions a Software Test Description (STD) specifying acceptance criteria and a Software Test Report (STR) documenting results, but these documents themselves are not included or summarized in detail in the provided text.
Therefore, I cannot populate the table or answer most of your specific questions about the performance study.
Here's what can be extracted and what information is missing:
Missing Information:
- Specific Acceptance Criteria: The document states that the STD "describes the test cases for the device, along with its acceptance criteria," but it does not list these criteria or the associated reported device performance.
- Detailed Device Performance: Beyond DICOM compliance, there are no specific metrics (e.g., accuracy, precision, sensitivity, specificity for segmentation or Hounsfield analysis) reported.
- Sample Size for Test Set: Not mentioned.
- Data Provenance (country, retrospective/prospective): Not mentioned.
- Number of Experts/Qualifications for Ground Truth: Not mentioned.
- Adjudication Method: Not mentioned.
- MRMC Comparative Effectiveness Study: No mention of such a study.
- Standalone Performance: While the device acts as an algorithm (software), specific standalone performance metrics (e.g., accuracy of its segmentation compared to ground truth) are not provided.
- Type of Ground Truth: Not mentioned how ground truth was established for "segmentation and Hounsfield numerical analysis" beyond general V&V.
- Sample Size for Training Set: Not mentioned.
- How Ground Truth for Training Set was Established: Not mentioned.
Available Information (or lack thereof, based on your questions):
Information Point | Details from Provided Text |
---|---|
1. Table of Acceptance Criteria and Reported Device Performance | Cannot be provided. The text states that the "Software Test Description (STD) for the Alphapoint System describes the test cases for the device, along with its acceptance criteria, and the detailed test procedure." It also mentions DICOM compliance testing, which "passed the six DICOM specific test cases." However, no specific performance metrics (e.g., accuracy for segmentation) or detailed acceptance criteria for the core functions (segmentation, Hounsfield analysis) are present in this summary. |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective) | Not specified. The document mentions "test cases for the device" and "validation test runs" but does not detail the size or nature of the dataset used for these tests. |
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts | Not specified. There is no information regarding the establishment of ground truth by experts for any dataset. |
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set | Not specified. |
5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with vs. without AI assistance | No indication that an MRMC comparative effectiveness study was done. The device's intended use states it "is not intended to be the primary interpretation" and is meant to "assist medical professionals," suggesting an assistive role, but no study comparing human performance with and without the device is mentioned. |
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done | Implicitly, yes, for certain functional aspects, but no performance metrics are given. The document states the system "Activates one or more applications that process the imaging data and use segmentation and Hounsfield measurements algorithms to find and measure various attributes in the images." Testing for DICOM compliance and internal "unit tests and system testing" would evaluate the algorithm's functionality in a standalone manner. However, specific standalone performance (e.g., accuracy of algorithm segmentation vs. expert ground truth) is not reported. |
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc) | Not specified. While the software performs "segmentation and Hounsfield numerical analysis," the method for establishing the "correct" segmentation or Hounsfield values for validation is not described. |
8. The sample size for the training set | Not applicable/Not specified. The document does not describe the device as a machine learning/AI model that undergoes a "training" phase with a separate training set. It describes algorithms for segmentation and Hounsfield measurements. If these algorithms are more traditional image processing rather than learned AI models, a training set might not exist in the conventional sense. Even if it were an AI model, the training set size is not mentioned. |
9. How the ground truth for the training set was established | Not applicable/Not specified. As above, a "training set" is not mentioned in the context of this device's development as described. |