510(k) Data Aggregation
(96 days)
DAFS is a software-only medical device intended for use by trained healthcare professionals for the automated segmentation, visualization, and quantitative analysis of anatomical structures in computed tomography (CT) images.
The software processes DICOM Part 10 formatted CT image data to identify and segment anatomical structures, including body composition tissues and internal organs, and to annotate DICOM slices by anatomical landmarks. The device provides quantitative measurements of these structures, including area, volume, and intensity values, and displays 2D and 3D visual representations.
The software provides tools for visualization, review, and manual editing of segmentation and slice annotation results and associated measurements.
DAFS is intended for use as an adjunct to clinical assessment. The device does not provide diagnostic interpretation, and all results must be reviewed and confirmed by a qualified healthcare professional.
Data Analysis Facilitation Suite (DAFS) is an image processing software-only medical device that provides automatic tissue segmentation, labeling, visualization, and quantitative characterization of CT images for specific target body tissues. Data output includes quantification of volume, cross-sectional area, and intensity for body composition tissues and internal organs.
The Data Analysis Facilitation Suite (DAFS) is a medical image processing software that provides automated tissue segmentation, labeling, visualization, and quantitative characterization of CT images for specific target body tissues. It quantifies volume, cross-sectional area, and intensity for body composition tissues and internal organs, and offers 2D and 3D visual representations. The device functions as an adjunct to clinical assessment, with all results requiring review and confirmation by a qualified healthcare professional.
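The quantitative outputs described above (volume, cross-sectional area, and intensity) can be illustrated with a minimal sketch. This is not DAFS code: the function name, the binary-mask representation, and the toy data are all hypothetical, and the sketch assumes a CT volume in Hounsfield units with known voxel spacing.

```python
import numpy as np

def quantify_structure(mask, ct, spacing_mm):
    """Compute volume, per-slice cross-sectional area, and mean intensity
    for one segmented structure.

    mask       -- boolean array (slices, rows, cols), True inside the structure
    ct         -- CT volume in Hounsfield units, same shape as mask
    spacing_mm -- (slice thickness, row spacing, col spacing) in millimetres
    """
    dz, dy, dx = spacing_mm
    voxel_vol_mm3 = dz * dy * dx
    pixel_area_mm2 = dy * dx

    volume_mm3 = mask.sum() * voxel_vol_mm3
    area_mm2_per_slice = mask.sum(axis=(1, 2)) * pixel_area_mm2  # one value per axial slice
    mean_hu = float(ct[mask].mean()) if mask.any() else float("nan")
    return volume_mm3, area_mm2_per_slice, mean_hu

# Toy example: a 2x2x2 block of soft tissue at 50 HU, 1 mm isotropic voxels
ct = np.full((4, 4, 4), -1000.0)   # air background
ct[1:3, 1:3, 1:3] = 50.0
mask = ct > -500
vol, areas, hu = quantify_structure(mask, ct, (1.0, 1.0, 1.0))
print(vol, areas.tolist(), hu)     # 8 voxels -> 8.0 mm^3; slice areas 0, 4, 4, 0 mm^2; 50.0 HU
```

In a real pipeline the mask would come from the segmentation model and the spacing from the DICOM headers; the measurement arithmetic itself is this simple.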
1. Acceptance Criteria and Reported Device Performance
The FDA clearance letter does not explicitly state numerical acceptance criteria in a table format. However, the summarized performance evaluation serves as an implicit set of acceptance criteria based on the demonstrated capabilities.
| Acceptance Criteria (Implied) | Reported Device Performance |
|---|---|
| Vertebral Annotation Accuracy (Slice-level positional difference) | Median slice error of 0 slices across vertebral levels. Majority of annotations within one slice of the reference standard. |
| Segmentation Performance (Dice Similarity Coefficient - DSC) | Most major anatomical structures achieving mean Dice scores greater than 0.90. |
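The two metrics in the table can be made concrete with a short sketch. This is illustrative code, not the sponsor's evaluation pipeline; the function names and toy arrays are assumptions, and binary masks of equal shape are presumed.

```python
import numpy as np

def dice_coefficient(pred, ref):
    """Dice Similarity Coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(pred, ref).sum() / denom

def median_slice_error(pred_slices, ref_slices):
    """Median absolute slice-index difference between predicted and
    reference vertebral landmark annotations (one index per vertebral level)."""
    diffs = np.abs(np.asarray(pred_slices) - np.asarray(ref_slices))
    return float(np.median(diffs))

# Toy example: two offset 6x6 squares, and three vertebral-level annotations
a = np.zeros((10, 10), bool); a[2:8, 2:8] = True
b = np.zeros((10, 10), bool); b[3:9, 3:9] = True
dsc = dice_coefficient(a, b)                       # overlap 5x5=25 -> 2*25/(36+36) ≈ 0.694
err = median_slice_error([40, 55, 71], [40, 55, 70])  # diffs 0, 0, 1 -> median 0.0
print(round(dsc, 3), err)
```

A reported median slice error of 0 therefore means that for at least half of the vertebral levels the automated annotation landed on exactly the same slice as the reference standard.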
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: 124 adult subjects (CT scans)
- Data Provenance:
- Origin: Multi-institutional CT studies curated by The Cancer Imaging Archive (TCIA). Primarily from U.S.-based clinical research studies supported by the National Cancer Institute.
- Retrospective/Prospective: Retrospective, as the data was "curated by The Cancer Imaging Archive" and described as an "independent internal testing dataset derived from multi-institutional CT studies."
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Number of Experts: Three distinct groups of experts were involved:
- Trained anatomy specialists: (Number not specified, but plural indicates more than one) These specialists performed the initial manual segmentation and vertebral landmark annotation.
- Experienced anatomy team leads: (Number not specified, but plural indicates more than one) These leads conducted co-rater review and approval during the quality assurance process.
- Board-certified Nuclear Medicine physician: (At least one) This physician performed the final review to ensure anatomical accuracy and consistency.
- Qualifications:
- "Trained anatomy specialists"
- "Experienced anatomy team leads"
- "Board-certified Nuclear Medicine physician"
4. Adjudication Method for the Test Set
The adjudication method used a multi-stage review process involving different levels of expertise:
- Manual segmentation and annotation by trained anatomy specialists.
- "Structured quality assurance process that included co-rater review and approval by experienced anatomy team leads."
- "Followed by final review by a board-certified Nuclear Medicine physician."
This suggests a tiered, expert consensus-based approach rather than a strict numerical adjudication scheme (e.g., "2+1"), with the Nuclear Medicine physician having the final authority.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done
- No, an MRMC comparative effectiveness study was not explicitly described in the provided text. The study focused on validating the standalone performance of the DAFS algorithms against expert-generated ground truth. The human-in-the-loop aspect is mentioned as a design feature (human review and adjustment), but not as part of a comparative effectiveness study with and without AI assistance for human readers.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was Done
- Yes, a standalone performance evaluation was completed. The study evaluated "the performance of the DAFS artificial intelligence (AI) algorithms" by comparing "automated outputs against expert-generated reference standard annotations." This demonstrates the algorithm's performance without human intervention in the segmentation and annotation process.
7. The Type of Ground Truth Used
- Expert Consensus/Expert-Generated Reference Standard: The ground truth was established through "manual segmentation and vertebral landmark annotation performed by trained anatomy specialists" which then "underwent a structured quality assurance process that included co-rater review and approval by experienced anatomy team leads, followed by final review by a board-certified Nuclear Medicine physician." This process culminated in "expert-reviewed annotations" serving as the ground truth.
8. The Sample Size for the Training Set
- The document states that the "independent internal testing dataset" consisted of 124 adult subjects "not used during algorithm training." It also mentions "The DAFS algorithms were developed using a collection of multi-institutional CT imaging datasets representing a range of anatomical presentations, imaging protocols, scanner manufacturers, and fields of view."
- However, the exact sample size for the training set is not explicitly provided in the provided text.
9. How the Ground Truth for the Training Set Was Established
- The document states, "The DAFS algorithms were developed using a collection of multi-institutional CT imaging datasets..." and "The training data included CT studies acquired for a variety of clinical indications and imaging conditions to support algorithm generalizability."
- The method for establishing ground truth for the training set is not explicitly detailed in the provided text. While the ground truth for the test set is thoroughly described, the document does not specify if the same rigorous expert-driven process was applied to every image in the training data, or if other methods were used.