
510(k) Data Aggregation

    K Number
    K250290
    Device Name
    SurgiTwin
    Manufacturer
    Date Cleared
    2025-08-29

    (210 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    N/A
    Device Attributes: AI/ML | SaMD | IVD (In Vitro Diagnostic) | Therapeutic | Diagnostic | is PCCP Authorized | Third-party | Expedited review
    Intended Use

    SurgiTwin is a web-based platform designed to help healthcare professionals carry out pre-operative planning for knee reconstruction procedures based on their patients' imported imaging studies. Experience with the system and clinical judgment are necessary for its proper use when reviewing and approving the planning output.

    The system works with a database of digital representations of surgical materials supplied by their manufacturers. SurgiTwin generates a PDF report as its output. End users of the generated SurgiTwin reports are trained healthcare professionals. SurgiTwin does not provide a diagnosis or a surgical recommendation.

    Device Description

    SurgiTwin is a semi-automated Software as a Medical Device (SaMD) that assists healthcare professionals in the pre-operative planning of total knee replacement surgery. Using a series of algorithms, the software creates 2D segmented images, a 3D model, and relevant measurements derived from the patient's pre-dimensioned medical images. The software interface allows the user to adjust the plan manually in order to verify the accuracy of the model and achieve the desired clinical targets. SurgiTwin generates a PDF report as its output and does not provide a diagnosis or surgical recommendation.

    The intended patient population is patients aged over 22 undergoing total knee replacement surgery without any existing material in the operated lower limb.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided FDA 510(k) clearance letter for SurgiTwin:

    1. Acceptance Criteria and Reported Device Performance

    The provided document specifically details acceptance criteria for the segmentation ML model. Other functions (automatic landmark function, metric generation, implant placement, osteophyte removal) are mentioned as having "predefined clinical acceptance criteria" and "all acceptance criteria were met," but the specific numeric criteria are not listed.

    Table of Acceptance Criteria (for the Segmentation ML Model) and Reported Device Performance:

    | Metric | Acceptance Criteria | Reported Device Performance |
    |---|---|---|
    | Mean DSC (Dice Similarity Coefficient) | > 0.95 | Met (implied by "met the acceptance criteria") |
    | Mean voxel-based AHD (Average Hausdorff Distance) | < 1.0 mm | Met (implied by "met the acceptance criteria") |
    | 5th percentile of the DSC | > 0.9 | Met (implied by "met the acceptance criteria") |
    | 95th percentile of the boundary-based HD95 (Hausdorff Distance, 95th percentile) | < 2.5 mm | Met (implied by "met the acceptance criteria") |
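The letter states the criteria in terms of DSC and Hausdorff-style distances but does not describe how the manufacturer computes them. As a minimal sketch, assuming binary masks and point sets in millimeter coordinates (not SurgiTwin's actual implementation), these metrics can be computed as:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice Similarity Coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * intersection / denom if denom else 1.0

def average_hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Average Hausdorff Distance between two point sets of shape (N, d) and (M, d):
    the mean of the two directed average surface distances."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

# Toy 2D masks: two overlapping 4x4 squares
pred = np.zeros((8, 8), bool); pred[2:6, 2:6] = True
gt = np.zeros((8, 8), bool); gt[2:6, 3:7] = True
print(dice_coefficient(pred, gt))  # 0.75

# Percentile-style criteria (e.g. 5th percentile of per-case DSC > 0.9)
# would then be checked with np.percentile(per_case_dsc, 5).
```

The percentile criteria in the table suggest the metrics were computed per case and then aggregated across the test set, which is what the per-case function signatures above assume.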

    2. Sample Size and Data Provenance

    The document states:

    • Test Set (Validation Dataset): Not explicitly stated; the ML model was "tested with the remaining 19%" of the dataset. The total size of the combined training and testing dataset is likewise not stated numerically (only "datasets from multiple sites").
    • Data Provenance: "Datasets from multiple sites." Institution Name and Institution Location were used as subgroup definitions, implying a variety of sources, but no specific countries are named and the retrospective or prospective nature of the test set is not stated. The "ML Model Development and Testing Information" sections generally imply retrospective data collection for development and testing.
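The 81%/19% proportions come from the letter; everything else below is an assumption for illustration. A minimal sketch of such a split over case identifiers (ignoring site- or patient-level grouping, which a real protocol would need to prevent leakage):

```python
import random

def split_dataset(case_ids, train_frac=0.81, seed=0):
    """Shuffle case IDs and split them into train/test partitions.
    train_frac=0.81 mirrors the 81%/19% split reported in the letter."""
    ids = list(case_ids)
    random.Random(seed).shuffle(ids)  # fixed seed for reproducibility
    cut = int(round(train_frac * len(ids)))
    return ids[:cut], ids[cut:]

train, test = split_dataset(range(100))
print(len(train), len(test))  # 81 19
```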

    3. Number of Experts and Qualifications for Ground Truth

    • Number of Experts: Not explicitly stated.
    • Qualifications of Experts: The reference-standard ground truth for the segmentation ML model was established through labeling, but the document only notes that the "validation dataset was labeled by different individuals from the training dataset." No specific qualifications (e.g., radiologist with X years of experience) are given for these individuals.

    4. Adjudication Method for the Test Set

    The document does not specify an adjudication method (such as 2+1 or 3+1) for the establishment of ground truth for the test set. It only mentions labeling was done by "different individuals."
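For readers unfamiliar with the "2+1" shorthand, it means two primary readers annotate independently and a third reader adjudicates only where they disagree. A hypothetical sketch of that logic (the document does not say SurgiTwin used it):

```python
def adjudicate_2plus1(label_a, label_b, adjudicator):
    """2+1 adjudication: accept the label when the two primary readers
    agree; otherwise defer to a third adjudicating reader (a callable
    that receives both candidate labels)."""
    if label_a == label_b:
        return label_a
    return adjudicator(label_a, label_b)

# Agreement: no adjudication needed
print(adjudicate_2plus1("femur", "femur", lambda a, b: a))  # femur
# Disagreement: the third reader decides
print(adjudicate_2plus1("femur", "tibia", lambda a, b: b))  # tibia
```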

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    The document does not indicate that a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was done. The performance studies described are focused on the device's standalone performance compared to ground truth or automatic vs. manual landmarking, not human reader improvement with AI assistance.

    6. Standalone Performance Study (Algorithm Only)

    Yes, a standalone performance study was clearly done for the ML model. The document states:

    • "The machine learning (ML) model incorporated into SurgiTwin was developed, trained, tested, and validated for its performance."
    • "Comparison of the performance of the segmentation ML model against the predefined ground truth met the acceptance criteria for the model performance."

    This indicates that the algorithm's performance was evaluated independently against a ground truth.

    Furthermore, studies for "automatic landmark function," "metrics generated by SurgiTwin," "default implant placement algorithm," and "osteophyte removal function" all imply standalone validation of these algorithmic components against predefined criteria or manual annotations by experts.
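The comparison of automatic against manual landmark placement is naturally scored as a per-landmark distance. A minimal sketch, assuming 3D landmark coordinates in millimeters (the document does not state the actual metric or threshold used):

```python
import numpy as np

def landmark_error_mm(auto_pts, manual_pts):
    """Per-landmark Euclidean distance (mm) between automatic and manual
    placements; both arrays have shape (num_landmarks, 3)."""
    auto_pts = np.asarray(auto_pts, dtype=float)
    manual_pts = np.asarray(manual_pts, dtype=float)
    return np.linalg.norm(auto_pts - manual_pts, axis=1)

# Two hypothetical landmarks: one off by 1 mm in z, one exact
auto = [[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]]
manual = [[0.0, 0.0, 1.0], [10.0, 0.0, 0.0]]
print(landmark_error_mm(auto, manual))  # [1. 0.]
```

A validation would then compare summary statistics of these errors (e.g., the mean or a high percentile) against a predefined clinical acceptance threshold.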

    7. Type of Ground Truth Used

    The ground truth for the segmentation ML model appears to be expert annotation (the document says the datasets were "labeled by different individuals"); it is referred to as the "predefined ground truth."

    For other functions:

    • Automatic landmark function and generated metrics were compared to "manual landmark placement by expert annotators" and "manual annotations by expert annotators," respectively, which also points to expert annotation as ground truth.
    • The clinical acceptability of implant placement and osteophyte removal functions was also validated against "predefined clinical acceptance criteria," likely based on expert consensus or established clinical standards.

    8. Sample Size for the Training Set

    The ML model was "trained with 81% of the dataset." The total size of this dataset is not explicitly stated in numerical terms.

    9. How the Ground Truth for the Training Set Was Established

    The document states: "The validation dataset was labeled by different individuals from the training dataset." This implies that the training dataset was also labeled, likely by similar "individuals" (presumed experts or annotators). However, the specific methodology for establishing this ground truth for the training set (e.g., number of annotators, adjudication) is not detailed.
