aprevo® Digital Segmentation
aprevo® Digital Segmentation software is intended to be used by trained, medically knowledgeable design personnel to perform digital image segmentation of the spine, primarily lumbar anatomy. The device inputs DICOM images and outputs a 3-D model of the spine.
The device is a software medical device that uses DICOM images as input and provides a 3-D model of the spine structure. Pre-processing is performed on the uploaded DICOM files to filter soft tissue and identify the spine. Upon removal of soft tissue and identification of the spine structure, the software uses an AI-based algorithm to segment the structure and render a 3-D model as output.
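The summary does not disclose the preprocessing algorithm. A minimal sketch of the general pattern it describes (loading a CT series and suppressing soft tissue) might look like the following, where the pydicom-based loader and the 200 HU bone cutoff are illustrative assumptions, not the device's actual method:

```python
import glob

import numpy as np
import pydicom


def load_ct_volume(dicom_dir: str) -> np.ndarray:
    """Read a CT DICOM series and convert raw pixel values to Hounsfield units."""
    slices = [pydicom.dcmread(f) for f in glob.glob(f"{dicom_dir}/*.dcm")]
    slices.sort(key=lambda s: int(s.InstanceNumber))
    volume = np.stack([s.pixel_array for s in slices]).astype(np.float32)
    # Hounsfield conversion uses the rescale tags stored with the series.
    slope = float(slices[0].RescaleSlope)
    intercept = float(slices[0].RescaleIntercept)
    return volume * slope + intercept


def bone_mask(hu_volume: np.ndarray, threshold_hu: float = 200.0) -> np.ndarray:
    """Suppress soft tissue by keeping only voxels above a bone-range HU threshold.

    200 HU is an assumed, illustrative cutoff; real pipelines tune this
    and follow it with morphological cleanup and spine localization.
    """
    return hu_volume >= threshold_hu


# Usage: mask = bone_mask(load_ct_volume("series_dir/"))
```

The AI-based segmentation step that follows this filtering is proprietary and not described in the summary.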
The provided text describes the acceptance criteria for the aprevo® Digital Segmentation software and the study demonstrating that the device meets them.
Here's a breakdown of the requested information:
Acceptance Criteria and Device Performance

| Acceptance Criteria | Reported Device Performance |
|---|---|
| IOU (Intersection Over Union) score > 80% | Exceeded 80% |
| Vertebral body labeling accuracy > 90% | Exceeded 90% overall |
| Vertebral body labeling sensitivity > 80% | Exceeded 80% |
| Vertebral body labeling specificity > 80% | Exceeded 80% |
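For reference, the IOU metric in the table compares a predicted segmentation mask against a ground-truth mask. A minimal NumPy sketch (illustrative only; the submission does not disclose the evaluation code):

```python
import numpy as np


def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over Union for two boolean segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return float(intersection / union) if union else 1.0  # both empty: agreement


# The acceptance criterion above corresponds to iou(pred, truth) > 0.80.
```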
Study Details:
- Sample Size Used for the Test Set and Data Provenance:
  - Test Set Sample Size: Not explicitly stated. The text notes only that "Independent training and validation datasets were selected to ensure model performance would reflect real clinical performance" and that "Validation datasets represented diversity in populations and equipment."
  - Data Provenance: Not explicitly stated. The phrase "diversity in populations and equipment" suggests data from various sources but does not specify countries of origin. The study is described under "NON-CLINICAL TESTING," which implies retrospective data.
- Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:
  - Not specified. The document states that ground truth was used for model performance evaluation but does not detail how it was established or the number and qualifications of the experts involved.
- Adjudication Method for the Test Set:
  - Not specified.
- Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:
  - If done: No. An MRMC comparative effectiveness study was not mentioned or indicated; the document states "CLINICAL TESTING: Not applicable" and focuses solely on the standalone performance of the software.
  - Effect size of human reader improvement: Not applicable, as no MRMC study was conducted.
- Standalone Performance (Algorithm Only, Without Human-in-the-Loop):
  - If done: Yes. The "NON-CLINICAL TESTING" section describes evaluation of the software's performance using IOU and accuracy metrics for segmentation and labeling, with no human intervention in the reported metrics (see the labeling-metrics sketch after this list).
- Type of Ground Truth Used:
  - Not explicitly stated as expert consensus, pathology, or outcomes data. For segmentation and vertebral body labeling, ground truth would typically be established by expert annotation or a similar gold standard, refined through a consensus process, but this is not detailed in the document.
- Sample Size for the Training Set:
  - Not explicitly stated. The document mentions only that "Independent training and validation datasets were selected to ensure model performance would reflect real clinical performance."
- How the Ground Truth for the Training Set Was Established:
  - Not explicitly stated. The document mentions that "Independent training and validation datasets were selected" but does not elaborate on the method used to establish ground truth for the training data (e.g., expert annotations, manual segmentation).
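The summary does not describe how the standalone labeling metrics were computed. One plausible per-case formulation, treating each vertebral level as a binary detection task, is sketched below; this framing and the helper function are assumptions for illustration, not the submitter's documented protocol:

```python
def labeling_metrics(predicted: set, truth: set, all_levels: set):
    """Accuracy, sensitivity, and specificity for per-level vertebral labeling.

    predicted / truth: sets of levels the algorithm / the ground truth labeled
    in one case, e.g. {"L1", "L2", "L3"}; all_levels: every level evaluated.
    This per-level binary framing is an assumption, not the documented method.
    """
    tp = len(predicted & truth)                   # correctly labeled levels
    tn = len(all_levels - predicted - truth)      # correctly omitted levels
    fp = len(predicted - truth)                   # spurious labels
    fn = len(truth - predicted)                   # missed labels
    accuracy = (tp + tn) / len(all_levels)
    sensitivity = tp / (tp + fn) if tp + fn else 1.0
    specificity = tn / (tn + fp) if tn + fp else 1.0
    return accuracy, sensitivity, specificity


# A standalone harness would aggregate these over the validation cases and
# check accuracy > 0.90, sensitivity > 0.80, and specificity > 0.80.
```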