K Number
K240926
Device Name
PeekMed web
Manufacturer
Date Cleared
2024-12-06

(246 days)

Product Code
Regulation Number
892.2050
Panel
RA
Reference & Predicate Devices
Intended Use

PeekMed web is a system designed to help healthcare professionals carry out pre-operative planning for several surgical procedures, based on their patients' imported imaging studies. Experience in use and a clinical assessment are necessary for the proper use of the system in the review and approval of the planning.

The multi-platform system works with a database of digital representations related to surgical materials supplied by their manufacturers.

This medical device is a decision support tool that allows qualified healthcare professionals to quickly and efficiently perform pre-operative planning for several surgical procedures using medical imaging, with the additional capability of planning in a 2D or 3D environment. The system is designed for the surgical medical specialties; no specific use environment is mandatory, although the typical use environment is a room with a computer. The patient target group is adult patients with a previously diagnosed injury or disability. There are no other considerations for the intended patient population.

Device Description

PeekMed web is a system designed to help healthcare professionals carry out pre-operative planning for several surgical procedures, based on their patients' imported imaging studies. Experience in use and a clinical assessment are necessary for the proper use of the system in the review and approval of the planning output.

The multi-platform system works with a database of digital representations related to surgical materials supplied by their manufacturers.

Because PeekMed web can represent medical images in a 2D or 3D environment, perform relevant measurements on those images, and add templates, it can provide a complete overview of the planned surgery. Being software, it does not interact with any part of the body of the user or patient.

AI/ML Overview

The provided document describes the PeekMed web device and its performance testing to demonstrate substantial equivalence to a predicate device. Here's a breakdown of the acceptance criteria and the study details:

1. Table of Acceptance Criteria and Reported Device Performance

The document explicitly states the acceptance criteria for each ML model but does not provide the reported device performance values against these criteria. It only states that the device met the acceptance criteria.

| ML model | Acceptance Criteria | Reported Device Performance |
| --- | --- | --- |
| Segmentation | DICE is no less than 90%; HD-95 is no more than 8; STD DICE is between +/- 10%; precision is more than 85%; recall is more than 90% | Met acceptance criteria |
| Landmarking | MRE is no more than 7 mm; STD MRE is between +/- 5 mm | Met acceptance criteria |
| Classification | Accuracy is no less than 90%; precision is no less than 85%; recall is no less than 90%; F1 score is no less than 90% | Met acceptance criteria |
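
The submission does not include the evaluation code behind these criteria, but the segmentation thresholds map onto standard mask-overlap and surface-distance metrics. The sketch below shows one way such a check could be implemented, assuming binary NumPy masks, isotropic voxel spacing in mm, and a mask-based HD-95 approximation; the function names and the reading of "STD DICE is between +/- 10%" as a per-case standard deviation of no more than 0.10 are assumptions, not details from the document.

```python
# Illustrative sketch only: segmentation metrics of the kind listed in the acceptance
# criteria table. Assumes binary (boolean) NumPy masks; not the manufacturer's code.
import numpy as np
from scipy.ndimage import distance_transform_edt

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice coefficient between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def hd95(pred: np.ndarray, gt: np.ndarray, spacing_mm: float = 1.0) -> float:
    """95th-percentile symmetric distance (a common mask-based HD-95 approximation)."""
    d_to_gt = distance_transform_edt(~gt) * spacing_mm      # distance to nearest GT voxel
    d_to_pred = distance_transform_edt(~pred) * spacing_mm  # distance to nearest predicted voxel
    return float(np.percentile(np.concatenate([d_to_gt[pred], d_to_pred[gt]]), 95))

def precision_recall(pred: np.ndarray, gt: np.ndarray) -> tuple[float, float]:
    """Voxel-wise precision and recall."""
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    return tp / (tp + fp), tp / (tp + fn)

def meets_segmentation_criteria(per_case_dice, per_case_hd95, precision, recall) -> bool:
    """Pass/fail against the thresholds quoted in the table above (interpretation assumed)."""
    dice_arr = np.asarray(per_case_dice)
    return (dice_arr.mean() >= 0.90
            and dice_arr.std() <= 0.10
            and np.mean(per_case_hd95) <= 8.0
            and precision > 0.85
            and recall > 0.90)
```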

2. Sample Sizes and Data Provenance

  • Test Set (External Validation Dataset):

    • Segmentation ML model: 367 unique datasets
    • Landmarking ML model: 367 unique datasets
    • Classification ML model: 367 unique datasets
  • Training and Development Datasets:

    • Total CR datasets: 2852
    • Total CT scans: 1903
    • Training: 80% of the combined CR/CT datasets
    • Development: 10% of the combined CR/CT datasets
    • Testing (internal): The remaining 10%
  • Data Provenance: The document states that ML models were developed with datasets from multiple sites. It does not specify the country of origin of the data. The data used for external validation was collected independently from the development data to prevent bias, implying a prospective collection or at least a carefully curated retrospective collection to ensure independence. It states the external validation dataset was "not a sampling of the development dataset" and "leakage between development and validation data sets did not occur."
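
The document does not describe how the 80/10/10 split was performed, only that the external validation data was collected independently and that leakage did not occur. A common way to guarantee this during development is to split at the patient level before any training, as in the hypothetical sketch below; the record structure, the `patient_id` key, and the seed are illustrative assumptions, not details from the submission.

```python
# Hypothetical sketch of a leakage-free 80/10/10 split at the patient level.
# Not the manufacturer's procedure; proportions follow the summary text.
import random
from collections import defaultdict

def split_by_patient(records, seed=42, train=0.8, dev=0.1):
    """records: iterable of dicts with a 'patient_id' key.
    Returns (train, dev, internal_test) lists with no patient shared across splits."""
    by_patient = defaultdict(list)
    for r in records:
        by_patient[r["patient_id"]].append(r)

    patients = sorted(by_patient)            # deterministic order before shuffling
    random.Random(seed).shuffle(patients)

    n = len(patients)
    n_train, n_dev = int(n * train), int(n * dev)
    groups = (patients[:n_train],
              patients[n_train:n_train + n_dev],
              patients[n_train + n_dev:])
    return tuple([r for p in g for r in by_patient[p]] for g in groups)
```

Under this kind of scheme, the 367-case external validation sets would be held entirely outside the function above, which is what the summary means by the validation data not being a sampling of the development dataset.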

3. Number of Experts and Qualifications

The document states that the external validation dataset was "labeled by a separate team" to establish ground truth. However, it does not specify the number of experts used or their qualifications (e.g., radiologist with 10 years of experience).

4. Adjudication Method

The document mentions that the external validation dataset was "labeled by a separate team," implying that this team established the ground truth. However, it does not describe the specific adjudication method used (e.g., 2+1, 3+1, none).

5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

The document does not indicate that a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was done. The study focuses on the standalone performance of the ML models and their equivalency to a predicate device, not on the improvement of human readers with AI assistance.

6. Standalone (Algorithm Only) Performance

Yes, the performance evaluation described focuses on the standalone performance of the ML models (segmentation, landmarking, classification) as incorporated into the PeekMed web system. The results against the predefined acceptance criteria evaluate the algorithm's performance without specific human-in-the-loop metrics, although the overall device is intended to assist human healthcare professionals.
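
For the classification model, the standalone criteria in the table above (accuracy, precision, recall, F1) can all be derived from a confusion matrix over the external validation labels. The following is a minimal, hypothetical sketch of such a check for a binary task; it is not part of the submission, and the thresholds are simply those quoted earlier.

```python
# Illustrative standalone check of the classification acceptance criteria from
# predicted vs. true labels (binary case shown). Hypothetical, not from the submission.
import numpy as np

def classification_report(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    tn = int(np.sum((y_pred == 0) & (y_true == 0)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

def meets_classification_criteria(m: dict) -> bool:
    # Thresholds as quoted in the acceptance-criteria table above.
    return (m["accuracy"] >= 0.90 and m["precision"] >= 0.85
            and m["recall"] >= 0.90 and m["f1"] >= 0.90)
```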

7. Type of Ground Truth Used

The ground truth for the test set (external validation datasets) is described only as "predefined ground truth" that was "labeled by a separate team." These terms suggest expert consensus or a similarly careful labeling process, but the document does not specify whether the ground truth was based on pathology, outcomes data, or directly observed clinical outcomes. Given the nature of medical imaging software, it is highly probable to be expert consensus.

8. Sample Size for the Training Set

The training set comprised 80% of a total of 2852 CR datasets and 1903 CT scans.
Total datasets = 2852 (CR) + 1903 (CT) = 4755 datasets.
Training set size = 80% of 4755 = 3804 datasets.

9. How the Ground Truth for the Training Set was Established

The document states that the ML models were developed with datasets from multiple sites. It implies that these datasets, used for training, development, and internal testing, also had the established ground truth necessary for supervised learning. However, the document does not explicitly describe the process for establishing the ground truth for the training set. It only states that the external validation data was "labeled by a separate team" for independence, suggesting that a similar process might have been used for the training data, but this is not confirmed.

§ 892.2050 Medical image management and processing system.

(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).