
510(k) Data Aggregation

    K Number
    K252856

    Device Name
    PeekMed web
    Manufacturer
    Date Cleared
    2025-12-22 (104 days)
    Product Code
    Regulation Number
    892.2050
    Age Range
    18 - 120
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    PeekMed web is a system designed to help healthcare professionals carry out pre-operative planning for several surgical procedures, based on their imported patients' imaging studies. Experience in usage and a clinical assessment are necessary for the proper use of the system in the revision and approval of the output of the planning. The multi-platform system works with a database of digital representations related to surgical materials supplied by their manufacturers.

    This medical device consists of a decision support tool for qualified healthcare professionals to quickly and efficiently perform pre-operative planning for several surgical procedures, using medical imaging, with the additional capability of planning in a 2D or 3D environment. The system is designed for the medical specialties within surgery, and no specific use environment is mandatory, though the typical use environment is a room with a computer. The patient target group is adult patients with a previously diagnosed injury or disability. There are no other considerations for the intended patient population.

    Device Description

    PeekMed web is a system designed to help healthcare professionals carry out pre-operative planning for several surgical procedures, based on their imported patients' imaging studies. Experience in usage and a clinical assessment are necessary for the proper use of the system in the revision and approval of the output of the planning.

    The multi-platform system works with a database of digital representations related to surgical materials supplied by their manufacturers.

    Because PeekMed web can represent medical images in a 2D or 3D environment, perform relevant measurements on those images, and add templates, it can provide a total overview of the surgery. Being software, it does not interact with any part of the body of the user or patient.

    AI/ML Overview

    The acceptance criteria and the study demonstrating that the device meets them are described below, based on the provided FDA 510(k) clearance letter for PeekMed web (K252856).

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided document lists the acceptance criteria but does not explicitly state the reported device performance for each metric from the validation studies. It only states that the efficacy results "met the acceptance criteria for ML model performance." Therefore, the "Reported Device Performance" column reflects this general statement.

    ML model | Acceptance Criteria | Reported Device Performance
    Segmentation | DICE is no less than 90%; HD-95 is no more than 8; STD DICE is between +/- 10%; Precision is more than 85%; Recall is more than 90% | Met the acceptance criteria for ML model performance
    Landmarking | MRE is no more than 7 mm; STD MRE is between +/- 5 mm | Met the acceptance criteria for ML model performance
    Classification | Accuracy is no less than 90%; Precision is no less than 85%; Recall is no less than 90%; F1 score is no less than 90% | Met the acceptance criteria for ML model performance
    Detection | mAP is no less than 90%; Precision is no less than 85%; Recall is no less than 90% | Met the acceptance criteria for ML model performance
    Reconstruction | DICE is no less than 90%; HD-95 is no more than 8; STD DICE is between +/- 10%; Precision is more than 85%; Recall is more than 90% | Met the acceptance criteria for ML model performance
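
    As a concrete illustration of how the segmentation and reconstruction thresholds above could be applied, here is a minimal Python sketch. The metric names mirror the table, but the helper functions and the aggregation across cases (mean DICE, mean HD-95) are assumptions for illustration, not the manufacturer's evaluation code.

    ```python
    # Minimal sketch (not the manufacturer's code) of applying the
    # segmentation / reconstruction thresholds from the table above.
    import numpy as np

    def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
        """Dice overlap between two binary masks."""
        pred, truth = pred.astype(bool), truth.astype(bool)
        intersection = np.logical_and(pred, truth).sum()
        denom = pred.sum() + truth.sum()
        return 2.0 * intersection / denom if denom else 1.0

    def meets_segmentation_criteria(dice_per_case, hd95_per_case, precision, recall) -> bool:
        """Check per-case results against the table's segmentation row."""
        dice = np.asarray(dice_per_case, dtype=float)
        return bool(
            dice.mean() >= 0.90                       # DICE is no less than 90%
            and float(np.mean(hd95_per_case)) <= 8.0  # HD-95 is no more than 8
            and dice.std() <= 0.10                    # STD DICE is between +/- 10%
            and precision > 0.85                      # Precision is more than 85%
            and recall > 0.90                         # Recall is more than 90%
        )
    ```

    The landmarking, classification, and detection rows would be checked analogously against their respective metrics (MRE, accuracy, precision, recall, F1 score, mAP).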

    2. Sample Size Used for the Test Set and Data Provenance

    The document distinguishes between a "testing" dataset (used for internal evaluation during development) and an "external validation" dataset. The external validation dataset serves as the independent test set for assessing final model performance.

    • Test Set (External Validation):
      • Segmentation ML model: 672 unique datasets
      • Landmarking ML model: 561 unique datasets
      • Classification ML model: 367 unique datasets
      • Detection ML model: 198 unique datasets
      • Reconstruction ML model: 87 unique datasets
    • Data Provenance: The document states that the ML models were developed with datasets "from multiple sites." It specifies neither the country of origin of the data nor whether the data was retrospective or prospective, though the statements that the "external validation datasets were collected independently of the development data" and were "labeled by a separate team" suggest a retrospective approach to data collection for the validation.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    The document states that the "External validation...was employed to provide an accurate assessment of the model's performance" and that the dataset was "labeled by a separate team." It does not specify the number of experts used or their qualifications (e.g., "radiologist with 10 years of experience").

    4. Adjudication Method for the Test Set

    The document states that the ground truth for the external validation dataset was "labeled by a separate team." It does not specify an adjudication method such as 2+1, 3+1, or if multiple experts were involved and how discrepancies were resolved.
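
    As background on the 2+1 and 3+1 notation, such schemes typically have two (or three) primary readers label each case, with an additional reader adjudicating disagreements. The sketch below is purely illustrative of a 2+1 rule; no such scheme is documented for this device.

    ```python
    # Hypothetical illustration of a 2+1 adjudication rule: two primary readers
    # label each case, and a third reader resolves disagreements only.
    def adjudicate_2_plus_1(reader_a: str, reader_b: str, adjudicator: str) -> str:
        """Return the consensus label under a 2+1 adjudication rule."""
        if reader_a == reader_b:
            return reader_a      # primary readers agree: no adjudication needed
        return adjudicator       # disagreement resolved by the third reader

    print(adjudicate_2_plus_1("fracture", "fracture", "no fracture"))    # -> "fracture"
    print(adjudicate_2_plus_1("fracture", "no fracture", "fracture"))    # -> "fracture"
    ```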

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

    No, the document does not indicate that a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was done to evaluate how much human readers improve with AI vs. without AI assistance. The testing focused on the standalone performance of the ML models against a predefined ground truth.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    Yes, a standalone performance evaluation (algorithm only without human-in-the-loop) was done. The performance data section describes the "efficacy results of the [specific] ML model using the testing and external validation datasets against the predefined ground truth," indicating an assessment of the algorithm's performance independent of human interaction during the measurement. The device is described as a "decision support tool" requiring "clinical assessment... for the proper use of the system in the revision and approval of the output," implying the algorithm provides output that a human reviews, but the performance testing described here is on the raw algorithm output.
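
    A minimal sketch of what such a standalone evaluation loop could look like is shown below; `model`, `load_case`, and `metric` are placeholder names, not interfaces described in the submission.

    ```python
    # Hypothetical outline of a standalone (algorithm-only) evaluation loop:
    # the raw model output is scored directly against the predefined ground
    # truth, with no clinician review in between.
    from typing import Any, Callable, Iterable, List, Tuple

    def evaluate_standalone(
        model: Callable[[Any], Any],
        load_case: Callable[[str], Tuple[Any, Any]],
        metric: Callable[[Any, Any], float],
        case_ids: Iterable[str],
    ) -> List[float]:
        """Return one metric value per external-validation case."""
        scores = []
        for case_id in case_ids:
            image, ground_truth = load_case(case_id)   # predefined ground truth label
            prediction = model(image)                  # raw algorithm output, no human edits
            scores.append(metric(prediction, ground_truth))
        return scores
    ```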

    7. The Type of Ground Truth Used

    The ground truth used for both the training and test sets is referred to as a "predefined ground truth," established through labeling; for the external validation sets, the labeling was performed by a "separate team." This implies a human-generated, expert annotation-based ground truth, although the specific expertise and method of consensus are not detailed. It is not explicitly stated as pathology or outcomes data.

    8. The Sample Size for the Training Set

    The ML models were developed with datasets from multiple sites totaling:

    • 2852 X-ray datasets
    • 2073 CT scans
    • 209 MRIs

    These total datasets were split as follows (a quick check of this arithmetic follows the list):

    • Training Set: 80% of the total dataset for each modality.
      • X-ray: 0.80 * 2852 = 2281.6 (approx. 2282)
      • CT scans: 0.80 * 2073 = 1658.4 (approx. 1658)
      • MRIs: 0.80 * 209 = 167.2 (approx. 167)
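
    The sketch below re-computes the 80% split counts listed above. The exact rounding rule is not stated in the document; Python's built-in round() is an assumption.

    ```python
    # Quick check of the 80/20 split arithmetic for each imaging modality.
    totals = {"X-ray": 2852, "CT": 2073, "MRI": 209}
    for modality, n_total in totals.items():
        n_train = round(0.80 * n_total)    # e.g. 0.80 * 2852 = 2281.6 -> 2282
        n_remaining = n_total - n_train    # remaining ~20% held out for evaluation
        print(f"{modality}: {n_train} training cases, {n_remaining} held out")
    ```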

    9. How the Ground Truth for the Training Set Was Established

    The document states, "ML models were developed with datasets...We trained the ML models with 80% of the dataset..." and refers to a "predefined ground truth." While it does not explicitly detail the process for the training data, it is implied that the training data also had human-generated ground truth (annotations/labels), similar to the validation data, since supervised ML models rely on labeled data. The document also notes that "leakage between development and validation data sets did not occur" and that the external validation set was "labeled by a separate team," suggesting the training data was likewise labeled by experts, possibly following the "internal procedures" mentioned for ML model development.
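
    To illustrate the quoted no-leakage claim, a simple identifier-level disjointness check could look like the sketch below; using case identifiers as the leakage key is an assumption, since the document does not describe how leakage was prevented.

    ```python
    # Illustrative disjointness check between development and validation sets.
    def assert_no_leakage(development_ids: set, validation_ids: set) -> None:
        """Raise if any case identifier appears in both data sets."""
        overlap = development_ids & validation_ids
        if overlap:
            raise ValueError(f"Leakage detected: {len(overlap)} shared cases")

    assert_no_leakage({"case_001", "case_002"}, {"case_101", "case_102"})  # passes silently
    ```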
