DrAid™ for Liver Segmentation is a web-based, non-invasive image analysis application designed for the visualization, evaluation, and reporting of the liver and physician-identified lesions using multiphase CT images.
DrAid™ for Liver Segmentation is a web-based software that processes and analyzes multiphase CT images in DICOM format. The software utilizes AI algorithms for semi-automated liver segmentation, combined with manual editing capabilities. Additionally, the device provides tools for manual segmentation with user input of seed points and boundary editing for physician-identified lesions within the liver.
Key device components:
- AI algorithm for liver segmentation
- Measurement algorithm
- DICOM Processing Module for CT images
- Liver Segmentation viewer
- Results Export Module
Device Characteristics:
- Software
Environment of Use:
- Healthcare facility/hospital
Key Features for SE/Performance:
- Visualization modes:
  - Original DICOM 2D image viewing
  - MPR visualization
- Manual correction tools:
  - Seed point placement
  - Boundary editing for lesions
  - Segmentation refinement
- Reporting tool
Energy Source:
- Web-based application running on standard hospital/clinic workstations
Here's a breakdown of the acceptance criteria and the study details for the DrAid™ for Liver Segmentation device, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
| Test Performed | Acceptance Criteria | Reported Device Performance |
|---|---|---|
| Liver segmentation mask | 1) Mean Dice ≥ 0.95; 2) 95% CI lower bound of Dice scores ≥ 0.90; 3) 95% CI upper bound of HD95 score ≤ 4.0 | Dice score: mean ± std 0.9649 ± 0.0195; 95% CI 0.9649 [0.9631, 0.9667]. HD95: mean ± std 1.7061 ± 1.5800; 95% CI 1.7061 [1.5595, 1.8526] |
| Liver volume measurement | 95% CI upper bound of Volume Error ≤ 5% | NVE (Normalized Volume Error): mean ± std 2.7269% ± 3.1928%; 95% CI [2.4308%, 3.0230%] |
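For readers unfamiliar with these metrics, the sketch below shows how a Dice score, a Normalized Volume Error, and a normal-approximation 95% CI of the mean could be computed on binary segmentation masks. This is an illustrative reconstruction, not the manufacturer's validation code; HD95 (the 95th percentile of surface-to-surface Hausdorff distances) requires a surface-distance computation (e.g., via SciPy's distance transforms) and is omitted here for brevity.

```python
import numpy as np

def dice_score(pred, truth):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, truth = np.asarray(pred, dtype=bool), np.asarray(truth, dtype=bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

def normalized_volume_error(pred, truth, voxel_volume=1.0):
    """NVE as a percentage: |V_pred - V_truth| / V_truth * 100."""
    v_pred = np.asarray(pred, dtype=bool).sum() * voxel_volume
    v_truth = np.asarray(truth, dtype=bool).sum() * voxel_volume
    return abs(v_pred - v_truth) / v_truth * 100.0

def ci95_of_mean(samples):
    """Normal-approximation 95% CI of the mean: mean ± 1.96 * std / sqrt(n)."""
    x = np.asarray(samples, dtype=float)
    half = 1.96 * x.std(ddof=1) / np.sqrt(len(x))
    return x.mean() - half, x.mean() + half
```

The acceptance criteria above are then checks on the outputs of functions like these, computed per scan and aggregated across the test set (e.g., the lower bound returned by `ci95_of_mean` over all per-scan Dice scores must be ≥ 0.90).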
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: 450 contrast-enhanced CT scans. These scans were from 150 patients.
- Data Provenance:
- Country of Origin: US medical institutions (USA).
- Retrospective/Prospective: Not explicitly stated, but typically these types of validation studies on existing datasets are retrospective.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
- Number of Experts: 3
- Qualifications of Experts: US board-certified radiologists.
4. Adjudication Method for the Test Set
The document mentions that the ground truth was established by "annotations provided by 3 US board-certified radiologists," but it does not specify an adjudication method (e.g., 2+1, 3+1, majority vote, etc.). It implies that the annotations from these three radiologists collectively formed the ground truth, but the process for resolving discrepancies among them is not detailed.
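Although the document does not say how the three annotations were combined, a voxel-wise majority vote (one of the adjudication schemes named above) is a common choice and can be sketched as follows. This is purely illustrative of the concept, not the procedure actually used for this device.

```python
import numpy as np

def majority_vote(masks):
    """Voxel-wise majority vote over annotator masks: a voxel is
    foreground in the consensus mask when more than half of the
    annotators labeled it foreground (for 3 annotators, at least 2)."""
    stack = np.stack([np.asarray(m, dtype=bool) for m in masks])
    votes = stack.sum(axis=0)          # per-voxel count of foreground labels
    return votes > (len(masks) / 2)    # strict majority
```

With three annotators this yields a 2-of-3 consensus; other schemes (e.g., 2+1 arbitration or STAPLE) resolve disagreements differently.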
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- No, an MRMC comparative effectiveness study was not done. The study focuses on evaluating the standalone performance of the AI algorithm against expert-created ground truth. There is no information provided about comparing human readers' performance with and without AI assistance or any effect size for such an improvement.
6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study
- Yes, a standalone study was performed. The reported performance metrics (Dice score, HD95, NVE) are for the DrAid™ liver segmentation algorithm itself, evaluated against the established ground truth. The device is described as having "semi-automated quantitative imaging function, utilizing an AI algorithm to generate liver segmentation that is then editable by the physician if necessary," but the performance data presented is for the initial AI segmentation without physician editing.
7. Type of Ground Truth Used
- Expert Consensus (or Expert Annotation): The ground truth was established by "annotations provided by 3 US board-certified radiologists." This falls under expert consensus/annotation.
8. Sample Size for the Training Set
- The sample size for the training set is not provided in the document. The text only describes the test set (450 CT scans from 150 patients).
9. How the Ground Truth for the Training Set Was Established
- The document does not provide information on how the ground truth for the training set was established. It only details the ground truth establishment for the independent test set.
§ 892.2050 Medical image management and processing system.
(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).