Search Results
Found 2 results
510(k) Data Aggregation
(189 days)
DrAid™ for Liver Segmentation is a web-based, non-invasive image analysis software application designed for the visualization, evaluation, and reporting of the liver and physician-identified lesions using multiphase images (with slice thickness …)
DrAid™ for Liver Segmentation is a web-based software that processes and analyzes multiphase CT images in DICOM format. The software utilizes AI algorithms for semi-automated liver segmentation, combined with manual editing capabilities. Additionally, the device provides tools for manual segmentation with user input of seed points and boundary editing for physician-identified lesions within the liver.
Key device components:
- AI algorithm for liver segmentation
- Measurement algorithm
- DICOM Processing Module for CT images
- Liver Segmentation viewer
- Results Export Module
Device Characteristics:
- Software
Environment of Use:
- Healthcare facility/hospital
Key Features for SE/Performance:
- Visualization modes:
  - Original DICOM 2D image viewing
  - MPR visualization
- Manual correction tools:
  - Seed point placement
  - Boundary editing for lesions
  - Segmentation refinement
- Reporting tool
Energy Source:
- Web-based application running on standard hospital/clinic workstations
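To make the seed-point-and-refinement workflow above concrete, here is a minimal sketch of loading a CT series and growing a mask from a user-placed seed point. It assumes a local directory of single-frame axial DICOM slices and uses pydicom, numpy, and scikit-image purely for illustration; none of these libraries, paths, or parameter values are named in the submission.

```python
# Minimal sketch of seed-point-based segmentation refinement on a CT series.
# Assumes a directory of single-frame axial DICOM slices; pydicom, numpy, and
# scikit-image stand in for the device's proprietary modules.
from pathlib import Path

import numpy as np
import pydicom
from skimage.segmentation import flood


def load_ct_volume(series_dir: str) -> np.ndarray:
    """Read a DICOM series, sort slices by z-position, and stack into a volume in HU."""
    slices = [pydicom.dcmread(p) for p in Path(series_dir).glob("*.dcm")]
    slices.sort(key=lambda ds: float(ds.ImagePositionPatient[2]))
    volume = np.stack([ds.pixel_array.astype(np.float32) for ds in slices])
    # Convert stored values to Hounsfield units using the rescale tags.
    slope = float(slices[0].RescaleSlope)
    intercept = float(slices[0].RescaleIntercept)
    return volume * slope + intercept


def grow_from_seed(volume: np.ndarray, seed_zyx: tuple, tolerance_hu: float = 60.0) -> np.ndarray:
    """Region-grow a binary mask from a user-placed seed point (z, y, x)."""
    return flood(volume, seed_zyx, tolerance=tolerance_hu)


if __name__ == "__main__":
    ct = load_ct_volume("portal_venous_phase")   # hypothetical path
    liver_mask = grow_from_seed(ct, seed_zyx=(40, 256, 180))
    print("Segmented voxels:", int(liver_mask.sum()))
```

A Hounsfield-unit tolerance is only one simple way to bound the region growing; the actual device combines an AI-generated segmentation with manual boundary editing rather than pure region growing.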
Here's a breakdown of the acceptance criteria and the study details for the DrAid™ for Liver Segmentation device, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
| Test Performed | Acceptance Criteria | Reported Device Performance |
|---|---|---|
| Liver segmentation mask | 1) Mean Dice ≥ 0.95; 2) 95% CI lower bound of Dice scores ≥ 0.90; 3) 95% CI upper bound of HD95 score ≤ 4.0 | Dice score: mean ± std 0.9649 ± 0.0195; 95% CI 0.9649 [0.9631, 0.9667]. HD95: mean ± std 1.7061 ± 1.5800; 95% CI 1.7061 [1.5595, 1.8526] |
| Liver volume measurement | 95% CI upper bound of Volume Error ≤ 5% | NVE (Normalized Volume Error): mean ± std 2.7269% ± 3.1928%; 95% CI [2.4308%, 3.0230%] |
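For reference, the three reported metrics have straightforward definitions. The sketch below shows one way to compute Dice, HD95, and normalized volume error from a predicted and a ground-truth binary mask using numpy and scipy; it illustrates the metric definitions only and is not the sponsor's evaluation code.

```python
# Illustrative implementations of the three reported metrics (Dice, HD95, and
# normalized volume error) for a predicted vs. ground-truth binary liver mask.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt


def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum())


def hd95(pred: np.ndarray, gt: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th-percentile symmetric Hausdorff distance between mask surfaces."""
    def surface(mask):
        mask = mask.astype(bool)
        return mask & ~binary_erosion(mask)

    surf_pred, surf_gt = surface(pred), surface(gt)
    # Distance from every voxel to the nearest surface voxel of the other mask.
    dist_to_gt = distance_transform_edt(~surf_gt, sampling=spacing)
    dist_to_pred = distance_transform_edt(~surf_pred, sampling=spacing)
    all_dists = np.concatenate([dist_to_gt[surf_pred], dist_to_pred[surf_gt]])
    return float(np.percentile(all_dists, 95))


def normalized_volume_error(pred: np.ndarray, gt: np.ndarray) -> float:
    """Absolute volume difference as a percentage of the ground-truth volume."""
    return 100.0 * abs(int(pred.sum()) - int(gt.sum())) / int(gt.sum())
```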
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: 450 contrast-enhanced CT scans. These scans were from 150 patients.
- Data Provenance:
- Country of Origin: US medical institutions (USA).
- Retrospective/Prospective: Not explicitly stated, but typically these types of validation studies on existing datasets are retrospective.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
- Number of Experts: 3
- Qualifications of Experts: US board-certified radiologists.
4. Adjudication Method for the Test Set
The document mentions that the ground truth was established by "annotations provided by 3 US board-certified radiologists," but it does not specify an adjudication method (e.g., 2+1, 3+1, majority vote, etc.). It implies that the annotations from these three radiologists collectively formed the ground truth, but the process for resolving discrepancies among them is not detailed.
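If the three annotations were merged by a simple per-voxel majority vote, a voxel would be labeled foreground when at least two of the three radiologists marked it; the sketch below shows that scheme purely for illustration. The submission does not confirm that majority voting, STAPLE, or any other specific method was used.

```python
# One common way to merge three readers' annotations into a single reference
# mask is a per-voxel majority vote (2 of 3). Illustrative only; the actual
# adjudication scheme is not stated in the submission.
import numpy as np


def majority_vote(masks: list[np.ndarray]) -> np.ndarray:
    """Return a consensus mask where a voxel is foreground if most readers marked it."""
    stacked = np.stack([m.astype(np.uint8) for m in masks])
    threshold = len(masks) / 2.0           # > 1.5 votes when there are 3 readers
    return stacked.sum(axis=0) > threshold


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    readers = [rng.random((4, 8, 8)) > 0.5 for _ in range(3)]  # toy 3D masks
    consensus = majority_vote(readers)
    print(consensus.shape, consensus.dtype)
```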
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- No, an MRMC comparative effectiveness study was not done. The study focuses on evaluating the standalone performance of the AI algorithm against expert-created ground truth. There is no information provided about comparing human readers' performance with and without AI assistance or any effect size for such an improvement.
6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study
- Yes, a standalone study was performed. The reported performance metrics (Dice score, HD95, NVE) are for the DrAid™ liver segmentation algorithm itself, evaluated against the established ground truth. The device is described as having "semi-automated quantitative imaging function, utilizing an AI algorithm to generate liver segmentation that is then editable by the physician if necessary," but the performance data presented is for the initial AI segmentation without physician editing.
7. Type of Ground Truth Used
- Expert Consensus (or Expert Annotation): The ground truth was established by "annotations provided by 3 US board-certified radiologists." This falls under expert consensus/annotation.
8. Sample Size for the Training Set
- The sample size for the training set is not provided in the document. The text only describes the test set (450 CT scans from 150 patients).
9. How the Ground Truth for the Training Set Was Established
- The document does not provide information on how the ground truth for the training set was established. It only details the ground truth establishment for the independent test set.
(125 days)
DrAid™ for Radiology v1 is a radiological computer-assisted triage & notification software to aid the clinical assessment of adult chest X-ray cases with features suggestive of pneumothorax in a medical care environment. DrAid™ analyzes cases using an artificial intelligence algorithm to identify features suggestive of suspected findings. It makes case-level output available to a PACS for worklist prioritization or triage.
As a passive-notification, prioritization-only software tool operating within the standard of care workflow, DrAid™ does not send proactive alerts directly to appropriately trained medical specialists. DrAid™ is not intended to direct attention to specific portions or anomalies of an image. Its results are not intended to be used on a stand-alone basis for clinical decision-making, nor is it intended to rule out pneumothorax or otherwise preclude clinical assessment of X-ray cases.
DrAid™ for Radiology v1 (hereafter called DrAid™ or DrAid) is a radiological computer-assisted triage & notification software product that automatically identifies suspected pneumothorax on frontal chest X-ray images and notifies the PACS of the presence of pneumothorax in the scan. This notification enables prioritized review by appropriately trained medical specialists who are qualified to interpret chest radiographs. The software does not alter the order of, or remove cases from, the reading queue. The device's aim is to aid in the prioritization and triage of radiological medical images only.
Chest radiographs are automatically received from the user's image storage system (e.g., a Picture Archiving and Communication System (PACS)) or other radiological imaging equipment (e.g., X-ray systems) and processed by DrAid™ for analysis. Following receipt, the software de-identifies a copy of each chest radiograph in DICOM format (.dcm) and automatically analyzes each image to identify features suggestive of pneumothorax. Based on the analysis result, the software notifies the PACS/workstation of the presence of pneumothorax, indicating either "flag" or "(blank)". This allows appropriately trained medical specialists to group together suspicious exams that may benefit from prioritization. Chest radiographs without an identified anomaly are placed in the worklist for routine review, which is the current standard of care.
The DrAid™ device works in parallel to, and in conjunction with, the standard of care workflow. After a chest X-ray has been performed, a copy of the study is automatically retrieved and processed by the DrAid™ device; the analysis result can also be provided in the form of DICOM files containing information on the presence of suspected pneumothorax. In parallel, the algorithms produce an on-device notification in PACS indicating which cases were prioritized by DrAid™. The on-device notification does not provide any diagnostic information, and it is not intended to inform any clinical decision, prioritization, or action by those who are qualified to interpret chest radiographs. It is meant as a tool to assist in improving workload prioritization of critical cases. The final diagnosis is provided by the radiologist after reviewing the scan itself.
The following modules compose the DrAid™:
Data input and validation: Following retrieval of a study, the validation feature assesses the input data (e.g., age, modality, view) to ensure compatibility for processing by the algorithm.
AI algorithm: Once a study has been validated, the AI algorithm analyzes the frontal chest x-ray for detection of suspected pneumothorax.
API Cognitive service: The study analysis and the results of a successful study analysis are provided through an API service, to then be sent to the PACS for triaging & notification.
Error codes feature: In the case of a study failure during data validation or the analysis by the algorithm, an error is provided to the system.
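A minimal sketch of how such a validation-then-notification pipeline might look is shown below. The DICOM tags checked, the adult-age cutoff, the accepted view codes, the error-code strings, and the placeholder classifier are all assumptions for illustration; the submission only states that age, modality, and view are validated and that "flag" or "(blank)" is returned.

```python
# Minimal sketch of the data-validation and case-level notification flow.
# Tag names, thresholds, error codes, and the classifier stub are assumptions.
import pydicom

ACCEPTED_MODALITIES = {"CR", "DX"}        # assumed codes for chest radiography
ACCEPTED_VIEWS = {"AP", "PA"}             # frontal views only (assumed tag values)


def validate_study(ds: pydicom.Dataset) -> str | None:
    """Return an error code string if the study is incompatible, else None."""
    if ds.get("Modality") not in ACCEPTED_MODALITIES:
        return "ERR_MODALITY"
    if ds.get("ViewPosition") not in ACCEPTED_VIEWS:
        return "ERR_VIEW"
    age = ds.get("PatientAge", "")          # DICOM age string, e.g. "045Y"
    if not age.endswith("Y") or int(age[:-1]) < 18:
        return "ERR_AGE"
    return None


def analyze(ds: pydicom.Dataset) -> str:
    """Placeholder for the AI algorithm: return 'flag' or '(blank)' at case level."""
    suspected_pneumothorax = False          # a real model's output would go here
    return "flag" if suspected_pneumothorax else "(blank)"


def process_study(path: str) -> str:
    ds = pydicom.dcmread(path)
    error = validate_study(ds)
    return error if error else analyze(ds)
```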
Here's a breakdown of the acceptance criteria and the study proving DrAid for Radiology v1 meets them, based on the provided text:
1. Acceptance Criteria and Reported Device Performance
The document does not explicitly state "acceptance criteria" as a separate, pre-defined set of thresholds that the device must meet. Instead, it compares the performance of the DrAid™ for Radiology v1 device to its predicate device (HealthPNX, K190362) to demonstrate substantial equivalence. The predicate device's performance metrics effectively serve as the implicit "acceptance criteria" for demonstrating comparable safety and effectiveness.
Here's a table comparing the performance of DrAid™ for Radiology v1 (aggregate results) against its predicate:
Metrics | DrAid™ for Radiology v1 Performance (Mean) | DrAid™ for Radiology v1 (95% CI) | Predicate HealthPNX Performance (Mean) | Predicate HealthPNX (95% CI) |
---|---|---|---|---|
Sensitivity | 0.9461 (94.61%) | [0.9216, 0.9676] | 93.15% | [87.76%, 96.67%] |
Specificity | 0.9758 (97.58%) | [0.9636, 0.9865] | 92.99% | [90.19%, 95.19%] |
AUC | 0.9610 (96.10%) | [0.9473, 0.9730] | 98.3% | [97.40%, 99.02%] |
Timing of Notification | 3.83 minutes | N/A | 22.1 seconds | N/A |
The document concludes that the performance of DrAid™ for Radiology v1 is "substantially equivalent" to that of the predicate and satisfies the requirements for product code QFM. It notes that the timing performance, despite being longer, is also considered "substantially equivalent."
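For context, case-level metrics of this kind and their confidence intervals can be reproduced generically as sketched below, using a percentile bootstrap for the AUC interval. The sponsor's exact statistical method (e.g., how the sensitivity and specificity CIs were constructed) is not described in the text, so this is a generic illustration rather than the study's actual analysis.

```python
# Illustrative computation of case-level sensitivity, specificity, and AUC with
# a percentile-bootstrap 95% CI, on toy data. Not the sponsor's analysis code.
import numpy as np
from sklearn.metrics import roc_auc_score


def sensitivity_specificity(y_true: np.ndarray, y_flag: np.ndarray) -> tuple[float, float]:
    """Compute sensitivity and specificity from binary ground truth and binary flags."""
    tp = np.sum((y_true == 1) & (y_flag == 1))
    tn = np.sum((y_true == 0) & (y_flag == 0))
    fn = np.sum((y_true == 1) & (y_flag == 0))
    fp = np.sum((y_true == 0) & (y_flag == 1))
    return tp / (tp + fn), tn / (tn + fp)


def bootstrap_ci(metric_fn, y_true, y_pred, n_boot: int = 2000, seed: int = 0):
    """Percentile bootstrap 95% CI for any metric of (y_true, y_pred)."""
    rng = np.random.default_rng(seed)
    n = len(y_true)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        stats.append(metric_fn(y_true[idx], y_pred[idx]))
    return np.percentile(stats, 2.5), np.percentile(stats, 97.5)


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    y_true = rng.integers(0, 2, 500)                                  # toy labels
    scores = np.clip(0.35 * y_true + rng.random(500) * 0.65, 0, 1)    # toy model scores
    flags = (scores >= 0.5).astype(int)
    sens, spec = sensitivity_specificity(y_true, flags)
    auc = roc_auc_score(y_true, scores)
    auc_lo, auc_hi = bootstrap_ci(roc_auc_score, y_true, scores)
    print(f"Sensitivity {sens:.4f}, Specificity {spec:.4f}, AUC {auc:.4f} [{auc_lo:.4f}, {auc_hi:.4f}]")
```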
2. Sample Sizes Used for the Test Set and Data Provenance
The test set was composed of two separate datasets:
NIH Data Set:
- Sample Size: 565 radiographs (386 negative, 179 positive pneumothorax cases).
- Data Provenance: National Institutes of Health (NIH), implicitly US. This dataset was used to demonstrate "generalizability of the device to the demographics of the US population."
- Retrospective/Prospective: Not explicitly stated, but typically large public datasets like NIH are retrospective.
Vietnamese Data Set:
- Sample Size: 285 radiographs (110 negative, 175 positive pneumothorax cases).
- Data Provenance: Four Vietnamese hospitals (University Medical Center Hospital, Nam Dinh Lung Hospital, Hai Phong Lung Hospital, and Vinmec Hospital).
- Retrospective/Prospective: Not explicitly stated, but likely retrospective from existing hospital archives.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Number of Experts: 3
- Qualifications of Experts: US board-certified radiologists.
4. Adjudication Method for the Test Set
The adjudication method is not explicitly described beyond stating that the datasets were "truthed by a panel of 3 US board certified radiologists." This implies a consensus-based approach, but the specific dynamics (e.g., majority vote, discussion to achieve consensus, a designated tie-breaker) are not detailed. It is often referred to as an "expert consensus" ground truth.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No Multi-Reader Multi-Case (MRMC) comparative effectiveness study was mentioned. The study focused on the standalone performance of the AI algorithm. Therefore, there is no reported effect size of how much human readers improve with AI vs without AI assistance from this document.
6. Standalone Performance Study
Yes, a standalone performance study was done. The performance metrics (Sensitivity, Specificity, AUC) reported in the tables are for the algorithm only, without human-in-the-loop performance. This is further clarified by the section title "Performance Testing - Stand-Alone."
7. Type of Ground Truth Used
The ground truth used for the test sets (both NIH and Vietnamese) was expert consensus from a panel of 3 US board-certified radiologists.
8. Sample Size for the Training Set
The document mentions that the training data came from "a hospital system in Vietnam and the publicly available CheXpert data set." However, the specific sample size for the training set is not provided in the text.
9. How the Ground Truth for the Training Set Was Established
- Hospital system in Vietnam: The method for establishing ground truth for this dataset is not explicitly stated.
- Publicly available CheXpert data set: The CheXpert dataset typically derives its labels from automated natural language processing (NLP) of radiology reports, which are then often reviewed and further annotated, though the precise methodology for this specific training use is not detailed here.
The document emphasizes that there was no overlap between training and validation data sets, with data from different hospitals and no patient overlap confirmed.