
510(k) Data Aggregation

    K Number: K251682
    Device Name: MuscleView 2.0
    Date Cleared: 2025-09-09 (102 days)
    Product Code:
    Regulation Number: 892.1000
    Reference & Predicate Devices
    Predicate For: N/A
    Intended Use

    MuscleView 2.0 is a magnetic resonance diagnostic software device used in adults and pediatrics aged 18 and older which automatically segments muscle, bone, fat and other anatomical structures from magnetic resonance imaging. After segmentation, it enables the generation, display and review of magnetic resonance imaging data. The segmentation results need to be reviewed and edited using appropriate software. Other physical parameters derived from the images may also be produced. This device is not intended for use with patients who have tumors in the trunk, arms and/or lower limb(s). When interpreted by a trained clinician, these images and physical parameters may yield information that may assist in diagnosis.

    Device Description

    MuscleView 2.0 is a software-only medical device which performs automatic segmentation of musculoskeletal structures. The software utilizes a locked artificial intelligence/machine learning (AI/ML) algorithm to identify and segment anatomical structures for quantitative analysis. The input to the software is DICOM data from magnetic resonance imaging (MRI), but the subject device does not directly interface with any devices. The output includes volumetric and dimensional metrics of individual and grouped regions of interest (ROIs) (such as muscles, bones and adipose tissue) and comparative analysis against a Virtual Control Group (VCG) derived from reference population data.

    MuscleView 2.0 builds upon the predicate device, MuscleView 1.0 (K241331, cleared 10/01/2024), which was cleared for the segmentation and analysis of lower extremity structures (hips to ankles). The subject device extends functionality to include:

    • Upper body regions (neck to hips)
    • Adipose tissue segmentation (subcutaneous, visceral, intramuscular, and hepatic fat)
    • Quantitative comparison with a Virtual Control Group
    • Additional derived metrics including Z-scores and composite scores (e.g., muscle-bone score)
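The Z-score comparison against the Virtual Control Group can be made concrete with a short sketch. This is the generic standardization formula, not the manufacturer's documented implementation; the function name and sample values are hypothetical.

```python
import statistics

def z_score(value: float, reference: list[float]) -> float:
    """Standardize a subject's metric against a reference population.

    `reference` stands in for the Virtual Control Group values of the
    same metric (e.g. a muscle volume in mL); all values are illustrative.
    """
    mean = statistics.mean(reference)
    sd = statistics.stdev(reference)  # sample standard deviation
    return (value - mean) / sd

# Example: a subject's muscle volume vs. a small reference sample
vcg = [1800.0, 1950.0, 2100.0, 2000.0, 1900.0]
print(round(z_score(2200.0, vcg), 2))  # → 2.24
```

A composite score such as the muscle-bone score would presumably combine several such standardized metrics, but the document does not specify the formula.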

    The submission includes a Predetermined Change Control Plan which details the procedure for retraining AI/ML algorithms or adding data to the Virtual Control Groups in order to improve performance without negatively impacting the safety or efficacy of the device.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided FDA 510(k) clearance letter for MuscleView 2.0:

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria for MuscleView 2.0 were based on the device's segmentation accuracy, measured by Dice Similarity Coefficient (DSC) and absolute Volume Difference (VDt), remaining within the interobserver variability observed among human experts. Because the text explicitly states that the AI model's performance was "consistently within these predefined interobserver ranges" and "passed validation," the reported performance met the acceptance criteria for all ROIs.

    Metric: Dice Similarity Coefficient (DSC)
    Acceptance criteria: the 95% confidence interval for each ROI (across all subgroup analyses) must indicate performance within interobserver variability (higher DSC, closer to 1.0, is better); specifically, the desired outcome was "a mean better than or equal to the acceptance criteria."
    Reported performance: consistently within the predefined interobserver ranges; passed validation for all evaluated ROIs and subgroups (see Table 1 for 95% CIs of individual ROIs across subgroups).

    Metric: Absolute Volume Difference (VDt)
    Acceptance criteria: the 95% confidence interval for each ROI (across all subgroup analyses) must indicate performance at or below interobserver variability (lower VDt, closer to 0, is better); specifically, the desired outcome was "a mean better than or equal to the acceptance criteria."
    Reported performance: consistently within the predefined interobserver ranges; passed validation for all evaluated ROIs and subgroups (see Table 2 for 95% CIs of individual ROIs across subgroups).
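Both metrics have standard textbook definitions that a few lines of code make concrete. This is a minimal sketch on toy 1-D masks, not the manufacturer's validation code; in practice the masks would be 3-D voxel arrays and the voxel volume would come from the DICOM spacing.

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice Similarity Coefficient between two binary masks (1.0 = perfect overlap)."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def abs_volume_diff(a: np.ndarray, b: np.ndarray, voxel_ml: float = 1.0) -> float:
    """Absolute volume difference between two masks, in units of voxel_ml (0.0 = perfect)."""
    return abs(float(a.sum()) - float(b.sum())) * voxel_ml

# Toy 1-D "masks": AI prediction vs. expert reference
pred = np.array([0, 1, 1, 1, 0], dtype=bool)
ref = np.array([0, 1, 1, 0, 0], dtype=bool)
print(dice(pred, ref))             # 2*2/(3+2) → 0.8
print(abs_volume_diff(pred, ref))  # → 1.0
```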

    2. Sample Sizes Used for the Test Set and Data Provenance

    AI Setting 1 (Lower Extremity): 148 unique scans from 148 unique subjects. Provenance: retrospective, "diverse population," multiple imaging sites and MRI manufacturers (GE, Siemens, Philips, Canon, Toshiba/Other). Countries of origin are not explicitly stated, but "regional demographics" are provided, implying a mix of populations.
    AI Settings 2 & 3 (Upper Extremity and Adipose Tissue): 171 unique scans from 171 unique subjects. Provenance: retrospective, "diverse population," multiple imaging sites and MRI manufacturers (GE, Siemens, Philips, Canon, Other/Unknown). Countries of origin are not explicitly stated, but "regional demographics" are provided, implying a mix of populations.
    • Overall Test Set: 148 unique subjects (for AI Setting 1) + 171 unique subjects (for AI Settings 2 & 3) = 319 unique subjects.
    • Data Provenance: Retrospective, curated collection of MRI datasets from a diverse patient population (age, BMI, biological sex, ethnicity) from multiple imaging sites and MRI manufacturers (GE, Siemens, Philips, Canon, Toshiba/Other/Unknown). Independent from training datasets. De-identified.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    • Number of Experts: Not explicitly stated, but referred to as "expert segmentation analysts" and "expert human annotation." The study mentions "consensus process by expert segmentation analysts" for training data, and for testing, "manual segmentation performed by experts" and that the "interobserver variability range observed among experts" was used as a benchmark. The document does not specify the exact number of experts or their specific qualifications (e.g., years of experience or board certification).

    4. Adjudication Method for the Test Set

    • Adjudication Method: The ground truth for both training and testing datasets was established through a "consensus process by expert segmentation analysts" for training data and "manual segmentation performed by experts" for the test set. It does not explicitly state a 2+1 or 3+1 method; rather, it implies a consensus was reached among the experts. The key here is the measurement of "interobserver variability," suggesting that multiple experts initially segmented the data, and their agreement (or discordance) defined the benchmark, from which a final consensus might have been derived.
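The benchmark described above can be sketched in a few lines: several experts segment the same ROI, their pairwise agreement defines the acceptance range, and the AI's agreement with the reference is checked against that range. All masks, values, and names here are hypothetical, illustrating the concept rather than the study's actual procedure.

```python
from itertools import combinations

def dice(a: set, b: set) -> float:
    """Dice coefficient for two masks given as sets of voxel indices."""
    return 2.0 * len(a & b) / (len(a) + len(b))

def interobserver_range(expert_masks: list[set]) -> tuple[float, float]:
    """Benchmark range: pairwise Dice values among the experts' segmentations."""
    vals = [dice(a, b) for a, b in combinations(expert_masks, 2)]
    return min(vals), max(vals)

# Three hypothetical expert segmentations of the same ROI
experts = [{1, 2, 3}, {1, 2}, {2, 3, 4}]
lo, hi = interobserver_range(experts)           # (0.4, 0.8)
ai_vs_reference = dice({1, 2, 3}, {1, 2, 3, 4})  # AI mask vs. a consensus mask
print(lo <= ai_vs_reference)  # AI agreement falls within the expert benchmark
```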

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • No MRMC study was performed. The performance testing was a standalone study comparing the AI segmentation to expert manual segmentation (ground truth) rather than comparing human readers with and without AI assistance. The text states: "Performance results demonstrated segmentation accuracy within the interobserver variability range observed among experts." This indicates a comparison of the AI's output against what multiple human experts would agree upon, not an evaluation of human performance improvement with AI.

    6. Standalone Performance Study

    • Yes, a standalone study was done. The document states: "To evaluate the performance of the MuscleView AI segmentation algorithm, a comprehensive test was conducted using a test set that was fully independent from the training set. The AI was blinded to the ground truth segmentation labels during inference, ensuring an unbiased comparison." This clearly describes a standalone performance evaluation of the algorithm.

    7. Type of Ground Truth Used

    • Expert Consensus / Expert Manual Segmentation: The ground truth was established by "manual segmentation performed by experts" and through a "consensus process by expert segmentation analysts." This is a form of expert consensus derived from detailed manual annotation. The benchmark for acceptance was the "interobserver variability range observed among experts."

    8. Sample Size for the Training Set

    AI Setting 1 (Lower Extremity): 1658 unique scans from 1294 unique subjects.
    AI Settings 2 & 3 (Upper Extremity and Adipose Tissue): 392 unique scans from 209 unique subjects.
    • Total Training Set: 1658 + 392 = 2050 unique scans.
    • Total Unique Subjects: 1294 + 209 = 1503. (Note: some subjects might be present in both sets if they had both lower and upper extremity scans, but the table specifies "unique subjects" per AI setting.)

    9. How Ground Truth for the Training Set Was Established

    • The ground truth for the training set was established through a "consensus process by expert segmentation analysts" on a "curated collection of retrospective MRI datasets."

    K Number: K241331
    Device Name: MuscleView
    Manufacturer:
    Date Cleared: 2024-10-01 (144 days)
    Product Code:
    Regulation Number: 892.1000
    Reference & Predicate Devices
    Predicate For:
    Intended Use

    MuscleView is used in adults and pediatrics aged 18 and older to automatically segment muscle and bone structures of the lower extremities from magnetic resonance imaging using a machine learning-based approach. After segmentation, it can provide derived metrics including muscle volume, intramuscular fat percentage, and left/right asymmetry.

    It is intended to be used by physicians who are trained to interpret MRI images, and serves as an initial method to segment muscle and bone structures from one or more study series. The segmentation results need to be reviewed and edited using appropriate software.

    It is intended only to provide the segmentation and derived metrics for muscle and bone structures and cannot serve as direct guidance for diagnosis of any diseases. This device is not intended for use with patients who have tumors in the lower limb.

    Device Description

    MuscleView is a software only product that uses a machine learning-based approach for the automatic segmentation of musculoskeletal structures from MRI. Based on the segmentation, metrics such as volume and length of the segmented structures are calculated.

    The software has the following modules: user management, data management, image processing, AI segmentation & 3D model viewer, and metrics calculation. User management involves authentication and access to the software and its results. Data management involves medical image data and its interactions with the system workflow. Image processing involves preprocessing the DICOM data to create combined, continuous 3D volume(s) from series with similar settings for use in AI segmentation. The AI segmentation & 3D model viewer module applies the pre-trained models to segment the volumes and displays the results. The metrics calculation module handles the final calculation of relevant metrics.

    Input data is preprocessed and prepared for 3D volume segmentation of the musculoskeletal structures. A library of already-contoured expert cases is used to train the machine learning algorithms, specifically convolutional neural networks (CNNs), which perform the automated segmentation. This training process resides in an auxiliary module for AI training.

    MuscleView is intended to be used by physicians who are trained to interpret MRI images, and serves as an initial method to segment muscle and bone structures from one or more study series. The segmentation results need to be reviewed and edited using appropriate software. This device is not intended for use with patients who have tumors in lower limb. The currently supported anatomical regions for automatic segmentation are 80 different muscles and bones of the lower extremity.

    Upon segmentation, a suite of metrics regarding the segmented 3D volumes is provided. It is intended to only provide the segmentation and derived metrics for muscle and bone structures and cannot serve as direct guidance for diagnosis of any diseases. These metrics include segmentation volume, fat infiltration (if applicable), and limb side asymmetry. The metrics are provided in conjunction with an interactive visualization of the 3D segmentation results.
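As an illustration of one of these derived metrics, left/right limb asymmetry can be computed from the two segmented volumes. The formula below (absolute difference relative to the pairwise mean) is one common definition and an assumption here, not necessarily the device's exact formula; the values are hypothetical.

```python
def limb_asymmetry_pct(left_vol: float, right_vol: float) -> float:
    """Percent left/right asymmetry of a segmented structure's volume.

    Assumed definition: absolute difference relative to the pairwise mean.
    Inputs are volumes in the same unit (e.g. mL); 0.0 means perfect symmetry.
    """
    mean = (left_vol + right_vol) / 2.0
    return abs(left_vol - right_vol) / mean * 100.0

# Hypothetical left/right quadriceps volumes in mL
print(limb_asymmetry_pct(2100.0, 1900.0))  # → 10.0
```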

    The software is deployed within a private network on a workstation with an advanced graphic processing unit (GPU) and runs as a service. A web-based interface is used to access the service and manage the data transfer, automatic segmentation, and visualization.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided text:

    Acceptance Criteria and Device Performance

    Metric: Dice Similarity Coefficient (DSC)
    Acceptance threshold (based on interobserver variability): mean better than or equal to interobserver repeatability.
    Reported performance (95% confidence interval): see Table 4 for 95% CIs for all 80 musculoskeletal structures across subgroups.

    Metric: Volume Difference (VDt)
    Acceptance threshold (based on interobserver variability): below interobserver variability.
    Reported performance (95% confidence interval): see Table 5 for 95% CIs for all 80 musculoskeletal structures across subgroups.

    Note: The exact numerical threshold for interobserver variability is not explicitly stated, but the document indicates that the device's performance was compared against this threshold, and a "mean better than or equal to the acceptance criteria" (for DSC) and "below interobserver variability" (for VDt) was desired for an ROI to pass validation. The tables provide the actual 95% confidence intervals for the device's performance for each structure and subgroup, indicating that these intervals were within the "passed validation" criteria.

    Study Information

    Sample Size (Test Set): 148 unique scans, from 148 unique subjects.
    Data Provenance (Test Set): not explicitly stated whether retrospective or prospective. Geographic origin is also not explicitly stated beyond general "imaging centers and organizations," but the ethnicity breakdown includes "Non-Hispanic White," "Hispanic/Latino," "Black/African American," "Asian," "Australian," "American Indian / Alaska Native," "Native Hawaiian / Pacific Islander," and "Australian Aboriginal," suggesting a diverse origin.
    Number & Qualifications of Experts (Test Set Ground Truth): not explicitly stated, but indicated as "experts" who performed "manual segmentation." The personnel "involved in establishing the reference standard for the AI were not involved in the algorithm's development," ensuring independence.
    Adjudication Method (Test Set): not explicitly stated. The ground truth for the test set was established by "manual segmentation performed by experts" and compared against "interobserver variability." This implies the expert segmentations likely underwent some form of review or reconciliation to establish a robust reference, though the specific method (e.g., 2+1, 3+1) is not detailed.
    Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study: no. The study focuses on standalone algorithm performance compared to expert consensus.
    Standalone Performance: yes. The AI segmentation was validated against a "reference standard developed by manual segmentation performed by experts."
    Type of Ground Truth Used (Test Set): expert consensus (manual segmentation performed by experts).
    Sample Size (Training Set): 1658 unique scans, from 1294 unique subjects.
    How Ground Truth for Training Set Was Established: a "library of already contoured expert cases is utilized to train the machine learning algorithms," and "expert contours" were used to train the CNNs, implying expert manual segmentation created the training ground truth.