510(k) Data Aggregation (144 days)
MuscleView is used in adult and pediatric patients aged 18 and older to automatically segment muscle and bone structures of the lower extremities from magnetic resonance imaging using a machine learning-based approach. After segmentation, it can provide derived metrics including muscle volume, intramuscular fat percentage, and left/right asymmetry.
It is intended to be used by physicians who are trained to interpret MRI images, and serves as an initial method to segment muscle and bone structures from one or more study series. The segmentation results need to be reviewed and edited using appropriate software.
It is intended to only provide the segmentation and derived metrics for muscle and bone structures and cannot serve as direct guidance for diagnosis of any diseases. This device is not intended for use with patients who have tumors in the lower limb.
MuscleView is a software only product that uses a machine learning-based approach for the automatic segmentation of musculoskeletal structures from MRI. Based on the segmentation, metrics such as volume and length of the segmented structures are calculated.
The software has the following modules: user management, data management, image processing, AI segmentation & 3D model viewer, and metrics calculation. User management involves authentication and access to the software and its results. Data management involves medical image data and its interactions with the system workflow. Image processing involves preprocessing the DICOM data to create combined, continuous 3D volume(s) from series with similar settings for use in AI segmentation. The AI segmentation & 3D model viewer module handles the training data and algorithms used to obtain the pre-trained models, as well as the algorithms used to update those models. The metrics calculation module handles the final calculation of the relevant metrics.
Input data is preprocessed and prepared for 3D volume segmentation of the musculoskeletal structures. A library of already contoured expert cases is used to train the machine learning algorithms, specifically convolutional neural networks (CNNs), which perform the automated segmentation. This training process takes place in an auxiliary module for AI training.
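The submission does not disclose MuscleView's network architecture or framework, so the following is only a minimal illustrative sketch of CNN-based volumetric segmentation in general: a toy PyTorch model mapping a preprocessed 3D MRI volume to a per-voxel label map over 80 structures plus background. The class name, layer sizes, and tensor shapes are invented for illustration.

```python
# Minimal illustrative sketch only: the 510(k) summary does not disclose
# MuscleView's actual model. This shows the general pattern of CNN-based
# volumetric segmentation (3D MRI volume in, per-voxel label map out).
import torch
import torch.nn as nn

NUM_STRUCTURES = 80  # lower-extremity muscles and bones supported per the summary

class TinySegNet3D(nn.Module):
    """Toy 3D CNN standing in for the (undisclosed) segmentation model."""
    def __init__(self, num_classes: int = NUM_STRUCTURES + 1):  # +1 for background
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Conv3d(16, num_classes, kernel_size=1)

    def forward(self, volume: torch.Tensor) -> torch.Tensor:
        # volume: (batch, 1, depth, height, width) of preprocessed MRI intensities
        return self.classifier(self.features(volume))

model = TinySegNet3D().eval()
mri = torch.randn(1, 1, 32, 64, 64)      # placeholder preprocessed 3D volume
with torch.no_grad():
    logits = model(mri)                  # (1, 81, 32, 64, 64) class scores
    label_map = logits.argmax(dim=1)     # per-voxel structure labels
print(label_map.shape)                   # torch.Size([1, 32, 64, 64])
```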
MuscleView is intended to be used by physicians who are trained to interpret MRI images, and serves as an initial method to segment muscle and bone structures from one or more study series. The segmentation results need to be reviewed and edited using appropriate software. This device is not intended for use with patients who have tumors in the lower limb. The currently supported anatomical regions for automatic segmentation are 80 different muscles and bones of the lower extremity.
Upon segmentation, a suite of metrics regarding the segmented 3D volumes is provided. It is intended to only provide the segmentation and derived metrics for muscle and bone structures and cannot serve as direct guidance for diagnosis of any diseases. These metrics include segmentation volume, fat infiltration (if applicable), and limb side asymmetry. The metrics are provided in conjunction with an interactive visualization of the 3D segmentation results.
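To make the listed metrics concrete, here is a minimal sketch of how segmentation volume and left/right asymmetry can be derived from a labeled voxel volume and its spacing. The function names and the percent-of-mean asymmetry definition are assumptions for illustration; the document does not specify MuscleView's exact formulas, and fat infiltration is omitted because no formula is given.

```python
# Illustrative sketch of the kinds of derived metrics the summary lists
# (segmentation volume and left/right asymmetry). Names and the asymmetry
# definition are assumptions, not MuscleView's actual method.
import numpy as np

def structure_volume_ml(label_map: np.ndarray, label: int,
                        voxel_spacing_mm: tuple[float, float, float]) -> float:
    """Volume of one segmented structure: voxel count x voxel volume, in millilitres."""
    voxel_volume_mm3 = float(np.prod(voxel_spacing_mm))
    return np.count_nonzero(label_map == label) * voxel_volume_mm3 / 1000.0

def side_asymmetry_pct(left_volume_ml: float, right_volume_ml: float) -> float:
    """One common asymmetry definition: absolute percent difference relative to the mean."""
    mean = (left_volume_ml + right_volume_ml) / 2.0
    return abs(left_volume_ml - right_volume_ml) / mean * 100.0

# Toy example: a 4x4x4 label map with two labels and 1 mm isotropic voxels.
labels = np.zeros((4, 4, 4), dtype=np.int32)
labels[:, :, :2] = 1   # "left" structure
labels[:, :, 2:] = 2   # "right" structure
left = structure_volume_ml(labels, 1, (1.0, 1.0, 1.0))
right = structure_volume_ml(labels, 2, (1.0, 1.0, 1.0))
print(left, right, side_asymmetry_pct(left, right))
```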
The software is deployed within a private network on a workstation with an advanced graphic processing unit (GPU) and runs as a service. A web-based interface is used to access the service and manage the data transfer, automatic segmentation, and visualization.
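As a rough illustration of the described deployment pattern (an on-premises GPU workstation running the segmentation engine as a service behind a web-based interface), here is a hypothetical sketch using FastAPI. The endpoint name and the run_segmentation() helper are invented and do not reflect MuscleView's actual interface.

```python
# Hypothetical sketch of a web service front end for an on-premises segmentation
# engine, as described in the summary (GPU workstation, private network, web access).
# The endpoint and run_segmentation() are invented for illustration only.
from fastapi import FastAPI, File, UploadFile

app = FastAPI(title="segmentation-service-sketch")

def run_segmentation(dicom_bytes: bytes) -> dict:
    # Placeholder for the GPU-backed CNN inference and metric calculation steps.
    return {"structures_segmented": 80, "status": "complete"}

@app.post("/studies")
async def submit_study(series: UploadFile = File(...)) -> dict:
    """Accept an uploaded image series and return segmentation-derived results."""
    payload = await series.read()
    return run_segmentation(payload)
```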
Here's a breakdown of the acceptance criteria and the study that demonstrates the device meets them, based on the provided text:
Acceptance Criteria and Device Performance
Acceptance Criteria (Metric) | Acceptance Threshold (Based on Interobserver Variability) | Reported Device Performance (95% Confidence Interval) |
---|---|---|
Dice Similarity Coefficient (DSC) | Mean better than or equal to interobserver repeatability | See Table 4 for 95% CIs for all 80 musculoskeletal structures across subgroups |
Volume Difference (VDt) | Below interobserver variability | See Table 5 for 95% CIs for all 80 musculoskeletal structures across subgroups |
Note: The exact numerical threshold for interobserver variability is not explicitly stated, but the document indicates that the device's performance was compared against this threshold: a mean DSC better than or equal to the interobserver repeatability and a volume difference below the interobserver variability were required for an ROI to pass validation. The tables provide the actual 95% confidence intervals for the device's performance for each structure and subgroup, indicating that these intervals fell within the "passed validation" criteria.
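For reference, the Dice similarity coefficient has the standard definition DSC = 2|A∩B| / (|A| + |B|) for a predicted and a reference mask. The sketch below computes DSC and a relative volume difference on toy masks; the exact VDt formula used in the validation is not given in the document, so the relative form here is an assumption.

```python
# Sketch of the two reported metric types. DSC follows the standard definition
# 2|A∩B| / (|A| + |B|); the volume-difference (VDt) formula is not given in the
# document, so the relative version below is an illustrative assumption.
import numpy as np

def dice_coefficient(pred: np.ndarray, ref: np.ndarray) -> float:
    """DSC between two binary masks of the same structure."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * intersection / denom if denom else 1.0

def relative_volume_difference(pred: np.ndarray, ref: np.ndarray) -> float:
    """Absolute volume difference as a fraction of the reference volume."""
    ref_vol = ref.astype(bool).sum()
    return abs(pred.astype(bool).sum() - ref_vol) / ref_vol

# Toy masks: the prediction misses one slab of the reference structure.
ref = np.zeros((10, 10, 10), dtype=bool); ref[2:8, 2:8, 2:8] = True
pred = np.zeros_like(ref); pred[3:8, 2:8, 2:8] = True
print(round(dice_coefficient(pred, ref), 3),        # ~0.909
      round(relative_volume_difference(pred, ref), 3))  # ~0.167
```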
Study Information
Feature | Description |
---|---|
Sample Size (Test Set) | 148 unique scans, from 148 unique subjects. |
Data Provenance (Test Set) | Not explicitly stated whether retrospective or prospective. Geographic origin is also not explicitly stated beyond general "imaging centers and organizations," but the ethnicity breakdown includes "Non-Hispanic White," "Hispanic/Latino," "Black/African American," "Asian," "Australian," "American Indian / Alaska Native," "Native Hawaiian / Pacific Islander," and "Australian Aboriginal," suggesting a diverse origin. |
Number & Qualifications of Experts (Test Set Ground Truth) | Not explicitly stated but indicated as "experts" who performed "manual segmentation." The personnel "involved in establishing the reference standard for the AI were not involved in the algorithm's development" to ensure independence. |
Adjudication Method (Test Set) | Not explicitly stated. The ground truth for the test set was established by "manual segmentation performed by experts" and compared against "interobserver variability." This implies that the expert segmentations likely underwent some form of review or reconciliation to establish a robust reference, though the specific method (e.g., 2+1, 3+1) isn't detailed. |
Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study | No. The study focuses on standalone algorithm performance compared to expert consensus. |
Standalone Performance | Yes. The AI segmentation was validated against a "reference standard developed by manual segmentation performed by experts." |
Type of Ground Truth Used (Test Set) | Expert consensus (manual segmentation performed by experts). |
Sample Size (Training Set) | 1658 unique scans, from 1294 unique subjects. |
How Ground Truth for Training Set Was Established | A "library of already contoured expert cases is utilized to train the machine learning algorithms." It is also mentioned that "expert contours" were used to train the CNNs. This implies expert manual segmentation was used to create the ground truth for training. |