Search Results

Found 5 results

510(k) Data Aggregation

    K Number: K251096
    Device Name: PeekMed web
    Manufacturer:
    Date Cleared: 2025-07-14 (95 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices:
    Why did this record match? Device Name: PeekMed web

    AI/ML, SaMD, IVD (In Vitro Diagnostic), Therapeutic, Diagnostic, PCCP Authorized, Third-party, Expedited review
    Intended Use

    PeekMed web is a system designed to help healthcare professionals carry out pre-operative planning for several surgical procedures, based on their imported patients' imaging studies. Experience in usage and a clinical assessment are necessary for the proper use of the system in the revision and approval of the output of the planning. The multi-platform system works with a database of digital representations related to surgical materials supplied by their manufacturers.

    This medical device consists of a decision support tool for qualified healthcare professionals to quickly and efficiently perform the pre-operative planning for several surgical procedures, using medical imaging with the additional capability of planning the 2D or 3D environment. The system is designed for the medical specialties within surgery and no specific use environment is mandatory, whereas the typical use environment is a room with a computer. The patient target group is adult patients who have an injury or disability diagnosed previously. There are no other considerations for the intended patient population.

    Device Description

    PeekMed web is a system designed to help healthcare professionals carry out pre-operative planning for several surgical procedures, based on their imported patients' imaging studies. Experience in usage and a clinical assessment are necessary for the proper use of the system in the revision and approval of the output of the planning.

    The multi-platform system works with a database of digital representations related to surgical materials supplied by their manufacturers.

    As the PeekMed web is capable of representing medical images in a 2D or 3D environment, performing relevant measurements on those images, and also capable of adding templates, it can then provide a total overview of the surgery. Being software, it does not interact with any part of the body of the user and/or patient.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device's performance, based on the provided FDA 510(k) clearance letter:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state the "reported device performance" against each acceptance criterion. It only states that the comparison of efficacy results met the acceptance criteria, so the reported-performance entries below simply reflect that qualitative statement.

    The acceptance criteria per ML model were:

    • Segmentation: DICE no less than 90%; HD-95 no more than 8; STD of DICE within +/- 10%; Precision more than 85%; Recall more than 90%
    • Landmarking: MRE no more than 7 mm; STD of MRE within +/- 5 mm
    • Classification: Accuracy no less than 90%; Precision no less than 85%; Recall no less than 90%; F1 score no less than 90%
    • Detection: mAP no less than 90%; Precision no less than 85%; Recall no less than 90%

    Reported device performance (identical statement for all four models): comparison of the efficacy results using the testing and external validation datasets against the predefined ground truth met the acceptance criteria for ML model performance, demonstrating substantial equivalence.

    Note: The document only confirms that the performance met the criteria, not the exact values achieved.
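The letter quotes thresholds but no computation details. As a purely illustrative sketch, the standard definitions behind the segmentation criteria (DICE, mask-level precision and recall) look like this; HD-95 is omitted because it requires a surface-distance computation, and the pass/fail helper is an assumption about how the thresholds combine:

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice coefficient between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def precision_recall(pred: np.ndarray, gt: np.ndarray) -> tuple[float, float]:
    """Pixel-level precision and recall of a binary mask against ground truth."""
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    return tp / (tp + fp), tp / (tp + fn)

def meets_segmentation_criteria(mean_dice, std_dice, precision, recall):
    """Thresholds quoted in the table; how they combine is an assumption."""
    return (mean_dice >= 0.90 and abs(std_dice) <= 0.10
            and precision > 0.85 and recall > 0.90)
```

Masks here are boolean NumPy arrays; HD-95 (the 95th percentile of surface distances) would typically be computed with a distance-transform or surface-distance library.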

    2. Sample Sizes and Data Provenance

    • Test Set Sample Sizes (External Validation Data):
      • Segmentation ML model: 402 unique datasets
      • Landmarking ML model: 367 unique datasets
      • Classification ML model: 347 unique datasets
      • Detection ML model: 198 unique datasets
    • Data Provenance: The document states that ML models were developed with datasets from "multiple sites." It doesn't specify the country of origin but mentions that the development, training, and testing data, as well as the external validation data, were designed to cover the intended use population while ensuring variety and diverse patient characteristics. It implies the data is retrospective as it refers to "datasets" collected for model development and validation.

    3. Number of Experts and Qualifications for Ground Truth

    The document does not specify the number of experts or their qualifications for establishing the ground truth for the test set. It only mentions that the "External validation...was collected independently of the development data to prevent bias, ensuring the reliability of the results. For the external validation, a fully independent dataset, labeled by a separate team, was employed..." The qualifications of this "separate team" are not detailed.

    4. Adjudication Method for the Test Set

    The document does not explicitly describe an adjudication method (e.g., 2+1, 3+1). It states that the external validation dataset was "labeled by a separate team." This suggests a single labeling event by that team, rather than an explicit multi-reader adjudication process.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No, an MRMC comparative effectiveness study comparing human readers with AI assistance versus without AI assistance was not reported. The study focuses purely on the standalone performance of the ML models against established ground truth.

    6. Standalone (Algorithm Only) Performance

    Yes, a standalone performance evaluation of the algorithm (ML models) was done. The performance metrics (DICE, HD-95, MRE, Accuracy, Precision, Recall, F1 score, MAP) and acceptance criteria are applied directly to the output of the ML models.
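Standalone landmarking performance is summarized by MRE. As a minimal sketch of the standard definition (assuming landmark coordinates have already been converted to millimetres using the image spacing):

```python
import numpy as np

def mean_radial_error(pred_pts: np.ndarray, gt_pts: np.ndarray) -> float:
    """Mean Euclidean distance between predicted and ground-truth landmarks.

    Both arrays have shape (N, 2) or (N, 3), in millimetres.
    """
    distances = np.linalg.norm(pred_pts - gt_pts, axis=1)
    return float(distances.mean())

def meets_landmarking_criteria(mre_mm: float, std_mm: float) -> bool:
    """Thresholds quoted in the table: MRE <= 7 mm, STD within +/- 5 mm."""
    return mre_mm <= 7.0 and abs(std_mm) <= 5.0
```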

    7. Type of Ground Truth Used

    The type of ground truth used is referred to as "predefined ground truth" which was established through a "truthing process" and labeled by a "separate team." While it doesn't explicitly state "expert consensus" or "pathology," for image segmentation and landmarking in medical imaging, ground truth is typically established by trained human experts (e.g., radiologists, orthopedic surgeons, or technicians with specific training) through manual annotation or expert review, which often involves some form of consensus. For classification and detection tasks, ground truth similarly relies on definitive labels provided by experts or established from patient records/outcomes, though the document does not elaborate on the specific method for each ML model type.

    8. Sample Size for the Training Set

    The training set comprised 80% of the total datasets available for ML model development, which included:

    • Total X-rays: 2852
    • Total CT scans: 1903
    • Total MRIs: 151

    Therefore, the approximate sample sizes for the training set are:

    • X-rays: 0.80 * 2852 = 2281.6 (approx. 2282)
    • CT scans: 0.80 * 1903 = 1522.4 (approx. 1522)
    • MRIs: 0.80 * 151 = 120.8 (approx. 121)
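The arithmetic above can be reproduced directly. The exact rounding rule is an assumption (the letter reports only the 80/10/10 percentages), so the per-split counts below are approximations:

```python
# Per-modality totals stated in the letter.
totals = {"x-ray": 2852, "ct": 1903, "mri": 151}

def split_counts(n: int, fracs=(0.80, 0.10, 0.10)) -> tuple[int, int, int]:
    """Approximate train/development/test counts for an 80/10/10 split."""
    train = round(n * fracs[0])
    dev = round(n * fracs[1])
    test = n - train - dev  # remainder, so the three parts always sum to n
    return train, dev, test

for modality, n in totals.items():
    train, dev, test = split_counts(n)
    print(f"{modality}: train={train} dev={dev} test={test}")
```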

    9. How the Ground Truth for the Training Set Was Established

    The document states, "ML models were developed with datasets from multiple sites... We trained the ML models with 80% of the datasets, developed with 10%, and tested with the remaining 10%." It also mentions that "the validation dataset...has never been used for the algorithm training or for tuning the algorithm, and leakage between development and validation data sets did not occur."

    While the process for the training set's ground truth is not explicitly detailed in the same way as the external validation "labeled by a separate team," it is implicitly established through the "development" process. Typically, for ML models of this nature, ground truth for training data would also be established through manual annotation by qualified personnel (e.g., clinicians, trained annotators) following established protocols. The document's emphasis on data independence for external validation suggests that the development/training data was also accurately labeled for its purpose.


    K Number: K250042
    Device Name: PeekMed web
    Manufacturer:
    Date Cleared: 2025-03-19 (70 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices:
    Why did this record match? Device Name: PeekMed web

    AI/ML, SaMD, IVD (In Vitro Diagnostic), Therapeutic, Diagnostic, PCCP Authorized, Third-party, Expedited review
    Intended Use

    PeekMed web is a system designed to help healthcare professionals carry out pre-operative planning for several surgical procedures, based on their imported patients' imaging studies. Experience in usage and a clinical assessment are necessary for the proper use of the system in the revision and approval of the output of the planning.

    The multi-platform system works with a database of digital representations related to surgical materials supplied by their manufacturers.

    This medical device consists of a decision support tool for qualified healthcare professionals to quickly and efficiently perform the pre-operative planning for several surgical procedures, using medical imaging with the additional capability of planning the 2D or 3D environment. The system is designed for the medical specialties within surgery and no specific use environment is mandatory, whereas the typical use environment is a room with a computer. The patient target group is adult patients who have an injury or disability diagnosed previously. There are no other considerations for the intended patient population.

    Device Description

    PeekMed web is a system designed to help healthcare professionals carry out pre-operative planning for several surgical procedures, based on their imported patients' imaging studies. Experience in usage and a clinical assessment are necessary for the proper use of the system in the revision and approval of the output of the planning.

    The multi-platform system works with a database of digital representations related to surgical materials supplied by their manufacturers.

    As PeekMed web is capable of representing medical images in a 2D or 3D environment, performing relevant measurements on those images, and adding templates, it can provide a total overview of the surgery. Being software, it does not interact with any part of the body of the user and/or patient.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study that proves the device meets them, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document provides the acceptance criteria but does not directly state the reported device performance metrics from the external validation. It only states that the efficacy results "met the acceptance criteria."

    The acceptance criteria per ML model were:

    • Segmentation: DICE no less than 90%; HD-95 no more than 8; STD of DICE within +/- 10%; Precision more than 85%; Recall more than 90%
    • Landmarking: MRE no more than 7 mm; STD of MRE within +/- 5 mm
    • Classification: Accuracy no less than 90%; Precision no less than 85%; Recall no less than 90%; F1 score no less than 90%
    • Detection: mAP no less than 90%; Precision no less than 85%; Recall no less than 90%

    Reported device performance for each model: "Met acceptance criteria" (no numerical results are given).
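The detection criteria cite mAP, precision, and recall, but the matching procedure is not described. As an illustration only, here is the fixed-threshold box-matching step that underlies detector precision/recall; the IoU threshold of 0.5 is an assumption, and full mAP would additionally require confidence-ranked average precision, which is not shown:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def detection_precision_recall(preds, gts, iou_thr=0.5):
    """Greedy one-to-one matching of predicted boxes to ground-truth boxes."""
    matched = set()
    tp = 0
    for p in preds:
        best, best_iou = None, iou_thr
        for i, g in enumerate(gts):
            overlap = iou(p, g)
            if i not in matched and overlap >= best_iou:
                best, best_iou = i, overlap
        if best is not None:
            matched.add(best)
            tp += 1
    fp = len(preds) - tp
    fn = len(gts) - tp
    return tp / max(tp + fp, 1), tp / max(tp + fn, 1)
```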

    2. Sample Sizes Used for the Test Set and Data Provenance

    • Test Set (External Validation Dataset) Sample Sizes:
      • Segmentation ML model: 375
      • Landmarking ML model: 345
      • Classification ML model: 347
      • Detection ML model: 198
    • Data Provenance: The document states "multiple sites." It does not specify the country of origin. The external validation dataset was collected "independently of the development data to prevent bias" and was a "fully independent dataset." It is not explicitly stated whether the data was retrospective or prospective, but the phrasing "collected independently" for external validation often implies existing, retrospectively collected data used for this specific purpose.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    The document states that the external validation dataset was "labeled by a separate team" to establish ground truth. It does not provide the number of experts or their specific qualifications (e.g., "radiologist with 10 years of experience").

    4. Adjudication Method for the Test Set

    The document does not explicitly describe an adjudication method (e.g., 2+1, 3+1). It only states that the ground truth was "labeled by a separate team."

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done, If So, What was the Effect Size of How Much Human Readers Improve with AI vs without AI Assistance

    No MRMC comparative effectiveness study involving human readers with and without AI assistance is described in the provided text. The study focuses solely on the standalone performance of the ML models against a predefined ground truth. The device is a "decision support tool" requiring "clinical assessment" and "revision and approval of the output of the planning" by healthcare professionals, implying a human-in-the-loop workflow, but no human performance study is detailed.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was Done

    Yes, a standalone performance evaluation of the ML models was done. The acceptance criteria and the evaluation against a "predefined ground truth" for Segmentation, Landmarking, Classification, and Detection ML models indicate standalone algorithm performance. The document states that the "efficacy results... met the acceptance criteria for ML model performance."

    7. The Type of Ground Truth Used

    The ground truth was "predefined ground truth" established by a "separate team" for the external validation dataset. While not explicitly stated as "expert consensus," this typically implies human expert review and labeling given the context of medical imaging and planning. It is not stated as pathology or outcomes data.

    8. The Sample Size for the Training Set

    The ML models were developed with datasets totaling 2852 CR datasets and 1903 CT scans.

    • Training Set: 80% of these datasets were used for training.
      • 0.80 * (2852 + 1903) = 0.80 * 4755 = 3804 datasets

    9. How the Ground Truth for the Training Set Was Established

    The document states that the ML models were "developed with datasets from multiple sites." While it mentions that "External validation datasets were collected independently of the development data... labeled by a separate team," it does not explicitly describe the methodology for establishing ground truth for the training dataset. However, it's generally inferred in such contexts that training data also requires labeled ground truth, likely established by human annotators or experts, but the specifics are not provided in this document.


    K Number: K240926
    Device Name: PeekMed web
    Manufacturer:
    Date Cleared: 2024-12-06 (246 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices:
    Why did this record match? Device Name: PeekMed web

    AI/ML, SaMD, IVD (In Vitro Diagnostic), Therapeutic, Diagnostic, PCCP Authorized, Third-party, Expedited review
    Intended Use

    PeekMed web is a system designed to help healthcare professionals carry out pre-operative planning for several surgical procedures, based on their imported patients' imaging studies. Experience in usage and a clinical assessment are necessary for the proper use of the system in the revision and approval of the planning.

    The multi-platform system works with a database of digital representations related to surgical materials supplied by their manufacturers.

    This medical device consists of a decision support tool for qualified healthcare professionals to quickly and efficiently perform the pre-operative planning for several surgical procedures, using medical imaging with the additional capability of planning the 2D or 3D environment. The system is designed for the medical specialties within surgery and no specific use environment is mandatory, whereas the typical use environment is a room with a computer. The patient target group is adult patients who have an injury or disability diagnosed previously. There are no other considerations for the intended patient population.

    Device Description

    PeekMed web is a system designed to help healthcare professionals carry out pre-operative planning for several surgical procedures, based on their imported patients' imaging studies. Experience in usage and a clinical assessment are necessary for the proper use of the system in the revision and approval of the output of the planning.

    The multi-platform system works with a database of digital representations related to surgical materials supplied by their manufacturers.

    As PeekMed web is capable of representing medical images in a 2D or 3D environment, performing relevant measurements on those images, and adding templates, it can provide a total overview of the surgery. Being software, it does not interact with any part of the body of the user and/or patient.

    AI/ML Overview

    The provided document describes the PeekMed web device and its performance testing to demonstrate substantial equivalence to a predicate device. Here's a breakdown of the acceptance criteria and the study details:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document explicitly states the acceptance criteria for each ML model but does not provide the reported device performance values against these criteria. It only states that the device met the acceptance criteria.

    The acceptance criteria per ML model were:

    • Segmentation: DICE no less than 90%; HD-95 no more than 8; STD of DICE within +/- 10%; Precision more than 85%; Recall more than 90%
    • Landmarking: MRE no more than 7 mm; STD of MRE within +/- 5 mm
    • Classification: Accuracy no less than 90%; Precision no less than 85%; Recall no less than 90%; F1 score no less than 90%

    Reported device performance for each model: "Met acceptance criteria".
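The classification thresholds use the standard accuracy/precision/recall/F1 definitions. A minimal binary-classification sketch follows; the letter does not say whether the task is binary or multi-class, so the positive-class convention here is an assumption:

```python
def classification_metrics(y_true, y_pred, positive=1):
    """Accuracy, precision, recall, and F1 for a binary label list."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t == positive and p == positive)
    fp = sum(1 for t, p in pairs if t != positive and p == positive)
    fn = sum(1 for t, p in pairs if t == positive and p != positive)
    accuracy = sum(1 for t, p in pairs if t == p) / len(pairs)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1
```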

    2. Sample Sizes and Data Provenance

    • Test Set (External Validation Dataset):

      • Segmentation ML model: 367 unique datasets
      • Landmarking ML model: 367 unique datasets
      • Classification ML model: 367 unique datasets
    • Training and Development Datasets:

      • Total CR datasets: 2852
      • Total CT scans: 1903
      • Training: 80% of the combined CR/CT datasets
      • Development: 10% of the combined CR/CT datasets
      • Testing (internal): The remaining 10%
    • Data Provenance: The document states that ML models were developed with datasets from multiple sites. It does not specify the country of origin of the data. The data used for external validation was collected independently from the development data to prevent bias, implying a prospective collection or at least a carefully curated retrospective collection to ensure independence. It states the external validation dataset was "not a sampling of the development dataset" and "leakage between development and validation data sets did not occur."
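The "no leakage" claim can be made mechanically checkable when every study carries a unique anonymized identifier. The sketch below assumes such identifiers exist; the submission does not describe its actual mechanism:

```python
def assert_no_leakage(train_ids, dev_ids, val_ids):
    """Raise if any study identifier appears in more than one split."""
    train, dev, val = set(train_ids), set(dev_ids), set(val_ids)
    overlap = (train & dev) | (train & val) | (dev & val)
    if overlap:
        raise ValueError(f"IDs present in more than one split: {sorted(overlap)}")
```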

    3. Number of Experts and Qualifications

    The document states that the external validation dataset was "labeled by a separate team" to establish ground truth. However, it does not specify the number of experts used or their qualifications (e.g., radiologist with 10 years of experience).

    4. Adjudication Method

    The document mentions that the external validation dataset was "labeled by a separate team," implying that this team established the ground truth. However, it does not describe the specific adjudication method used (e.g., 2+1, 3+1, none).

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    The document does not indicate that a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was done. The study focuses on the standalone performance of the ML models and their equivalency to a predicate device, not on the improvement of human readers with AI assistance.

    6. Standalone (Algorithm Only) Performance

    Yes, the performance evaluation described focuses on the standalone performance of the ML models (segmentation, landmarking, classification) as incorporated into the PeekMed web system. The results against the predefined acceptance criteria evaluate the algorithm's performance without specific human-in-the-loop metrics, although the overall device is intended to assist human healthcare professionals.

    7. Type of Ground Truth Used

    The ground truth for the test set (external validation datasets) is described only as a "predefined ground truth." The phrases "predefined ground truth" and "labeled by a separate team" suggest expert consensus or a similarly meticulous labeling process, but the document does not specify whether pathology, outcomes data, or directly observed clinical outcomes were used. Given the nature of medical imaging software, expert consensus is the most likely basis.

    8. Sample Size for the Training Set

    The training set comprised 80% of a total of 2852 CR datasets and 1903 CT scans.
    Total datasets = 2852 (CR) + 1903 (CT) = 4755 datasets.
    Training set size = 80% of 4755 = 3804 datasets.

    9. How the Ground Truth for the Training Set was Established

    The document states that the ML models were developed with datasets from multiple sites. It implies that these datasets, used for training, development, and internal testing, also had established ground truth necessary for supervised learning. However, the document does not explicitly describe the process for establishing the ground truth specifically for the training set. It only states that the external validation data was "labeled by a separate team" for independence, suggesting a similar process might have been used for the training data but this is not confirmed.


    K Number: K222767
    Device Name: PeekMed web (v1)
    Manufacturer:
    Date Cleared: 2022-12-30 (108 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices:
    Why did this record match? Device Name: PeekMed web (v1)

    AI/ML, SaMD, IVD (In Vitro Diagnostic), Therapeutic, Diagnostic, PCCP Authorized, Third-party, Expedited review
    Intended Use

    PeekMed® web is a system designed to help healthcare professionals carry out pre-operative planning for several surgical procedures, based on their imported patients' imaging studies. Experience in usage and a clinical assessment are necessary for the proper use of the system in the revision and approval of the output of the planning.

    The multi-platform system works with a database of digital representations related to surgical materials supplied by their manufacturers.

    Device Description

    PeekMed® web is a system designed to help healthcare professionals carry out pre-operative planning for several surgical procedures, based on their imported patients' imaging studies. Experience in usage and a clinical assessment are necessary for proper use of the system in the revision and approval of the output of the planning.

    The multi-platform system works with a database of digital representations related to surgical materials supplied by their manufacturers.

    As PeekMed® web is capable of representing the medical images in a 2D or 3D environment, performing relevant measurements on those images, and adding templates, it can provide a total overview of the surgery. Being software, it does not interact with any part of the body of the user and/or patient.

    AI/ML Overview

    The provided text describes PeekMed web (v1), a medical image management and processing system for pre-operative planning, and its substantial equivalence to a predicate device, PeekMed. The document focuses on regulatory compliance and the differences between the new device and its predicate.

    However, the furnished text does not contain the specific details required to answer all parts of your request, particularly regarding the acceptance criteria, the study design for proving the device meets them, and detailed performance metrics of the ML models. The document states that "ML models incorporated into PeekMed web were also trained, tested and validated for their performance," and that the "measuring function of the software was verified and validated... in order to assure the safety and correct performance of the device compared to the predicate." It also mentions fulfilling "previously defined accuracy and precision specifications." This implies that such studies were performed, but the results themselves and the specific criteria are not provided in this document.

    Therefore, the following response is based only on the information explicitly available in the provided text, and I will highlight where the requested information is not present.


    Device: PeekMed web (v1)

    Acceptance Criteria and Reported Device Performance

    The document broadly states that the device was validated to ensure it "fulfills the previously defined accuracy and precision specifications" for its measuring function and that the "ML models... were also trained, tested and validated for their performance." However, specific numerical acceptance criteria or reported performance metrics (e.g., sensitivity, specificity, or error margins for measurements) are NOT provided in the text.

    Acceptance criteria (e.g., performance thresholds for the ML models, measurement accuracy/precision): not explicitly stated in the provided text. The document refers only to "previously defined accuracy and precision specifications" for the measuring function and to general validation of ML model performance.

    Reported device performance: not explicitly stated in the provided text. The document states that "it was confirmed that it fulfills the previously defined accuracy and precision specifications" for the measuring function and that the ML models were "validated for their performance."

    Study Information

    1. Sample Size Used for the Test Set and Data Provenance:

      • Sample Size: Not specified in the provided text. The document mentions "All anatomical areas were tested, as well as other main areas of the software, such as the planning final report, and saved planning, ML models, among others." However, the number of cases/images used for testing is not detailed.
      • Data Provenance: Not specified (e.g., country of origin, retrospective/prospective).
    2. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts:

      • Number of Experts: Not specified. The text mentions that it is mandatory that the qualified user validates each automatically positioned landmark, and that "qualified users (trained surgeons) can perform activities related to the approval of clinical and critical information." This suggests expert involvement in ground truth establishment for validation/review, but the specific number and their qualifications for formal test set ground truth are not detailed.
      • Qualifications of Experts: The text refers to "qualified medical specialist (user)" and "trained surgeons." Specific experience levels (e.g., "10 years of experience") are not provided.
    3. Adjudication Method for the Test Set:

      • Not specified. The document states that "An automatic plan is always reviewed and validated by the qualified medical specialist" and that the qualified user must validate each automatically positioned landmark. This describes the workflow of the device requiring user validation, but not a formal adjudication method (e.g., 2+1, 3+1) for establishing ground truth during the validation studies.
    4. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, If so, What Was the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance:

      • No, an MRMC comparative effectiveness study is not explicitly mentioned or detailed in the provided text. The document describes the automatic planning and landmarking features as "designed to improve and accelerate the user planning experience," but no results from MRMC studies showing an effect size of human reader improvement with AI assistance are provided.
    5. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done:

      • The document implies that standalone performance of the ML models was evaluated, as it states: "ML models incorporated into PeekMed web were also trained, tested and validated for their performance." However, the specific metrics and the definition of "standalone" in this context are not detailed. The device's use case heavily emphasizes human validation of the AI's output, suggesting standalone performance serves as a component of the overall system validation rather than a primary use case.
    6. The Type of Ground Truth Used (Expert Consensus, Pathology, Outcomes Data, etc.):

      • The text suggests that for the automatic planning and landmarking features, the ground truth or the corrective mechanism involves validation by "qualified medical specialists" or "trained surgeons." For the measuring function, it was verified to "effectively and repeatedly match the real dimensions," implying comparison against a known or established standard/reference. There is no mention of pathology or outcomes data as ground truth.
    7. The Sample Size for the Training Set:

      • Not specified in the provided text. The document mentions that "For the development of these 2 ML models, it was verified that no pediatric images were used." This pertains to the exclusion criteria for the training data, but not its size.
    8. How the Ground Truth for the Training Set Was Established:

      • Not explicitly detailed in the provided text. It is implied that for the ML models, the training data would have had some form of annotated ground truth, likely established by experts, given the nature of pre-operative planning. However, the specific methodology (e.g., single expert, consensus) is not described.

    K Number: K182464
    Device Name: PeekMed
    Manufacturer:
    Date Cleared: 2018-10-25 (45 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices:
    Why did this record match? Device Name: PeekMed

    AI/MLSaMDIVD (In Vitro Diagnostic)TherapeuticDiagnosticis PCCP AuthorizedThirdpartyExpeditedreview
    Intended Use

    PeekMed is a software system designed to help surgical specialists carry out pre-operative planning promptly and efficiently for several surgical procedures, based on their patients' imaging studies.

    The software imports diagnostic imaging studies such as X-rays, CT, or magnetic resonance imaging (MRI). The import process can retrieve files from a CD-ROM, a local folder, or the PACS. In parallel, there is a database of digital representations of prosthetic materials supplied by their manufacturers.

    PeekMed allows health professionals to perform surgical planning digitally without adding any steps to that process. The software requires no specific image-acquisition protocol. Experience in usage and a clinical assessment are necessary for proper use of the software.

    Device Description

    PeekMed is a standalone software application that helps specialist doctors and surgeons perform pre-surgical planning for different procedures quickly and effectively, based on patients' imaging studies.

    PeekMed is 3D pre-operative surgical planning software. It allows surgeons to plan a procedure in a hybrid (2D/3D), 3D-only, or 2D-only environment.

    The software imports diagnostic imaging studies such as X-rays, Computed Tomography (CT), or Magnetic Resonance Imaging (MRI). The import process can retrieve files from a local folder or the Picture Archiving and Communication System (PACS) of the hospital/health center. In parallel, there is a database containing digital representations of prosthetic materials supplied by their respective manufacturers. This makes it possible to insert templates of the materials to be used in the surgery alongside the measurements, providing a complete overview of the surgery.

    In a 3D or hybrid environment, the surgeon can use a 3D model generated from a previous imaging study of the patient, together with 3D digital representations of the prosthetic material to be used during surgery (e.g., screws, fixation plates, or full prostheses) from several manufacturers.

    AI/ML Overview

    The PeekMed device is a software system designed for pre-operative planning in various surgical procedures, based on patient imaging studies.

    Here's an analysis of the acceptance criteria and the study that proves the device meets them:

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided document describes the validation activities and states that "Acceptance criteria were achieved for all tests." However, it does not explicitly list the quantitative acceptance criteria for each specific performance metric. Instead, it describes the type of validation performed and the goals of that validation.

    Based on the information provided, we can infer the following:

    Acceptance Criteria (Implied) and Reported Device Performance:

      • Software Functionality: All specified software functions operate as intended.
        Reported performance: "Verification testing consisted of specific software functionalities testing and system level testing. Acceptance criteria, defined in the product requirements, were met for each verification test and are described in JIRA." This indicates that key functionalities, such as importing images, digital templating, measurement tools (ruler, angle), and 2D/3D planning environments, were tested and met their predefined criteria.
      • Measurement Accuracy and Repeatability: Lengths and angles measured by the software accurately and repeatedly match real dimensions.
        Reported performance: "To reassure that the lengths and angles measured with the internal functions of 'ruler' and 'angle' of the PeekMed software effectively and repeatedly match the real dimensions, validation consists the measuring of images of three implantable medical devices (prosthesis), CE marked and with strictly defined dimensions. Validation phase ensures that all product requirements have been fulfilled, meets the end-users needs, and ensure the safety and proper performance of the device."
      • Fulfillment of Product Requirements: All defined product requirements are met.
        Reported performance: "Validation phase ensures that all product requirements have been fulfilled..."
      • End-User Needs: The software meets the needs of end-users (surgeons).
        Reported performance: "...meets the end-users needs..." and "Also, satisfaction questionnaires were made to assess the usability of the PeekMed software when comparing with others in the market, and also to make sure that the device operates as intended during the design stage."
      • Safety and Proper Performance: The device operates safely and performs properly.
        Reported performance: "...and ensure the safety and proper performance of the device."
      • Usability: The software is usable for its intended purpose.
        Reported performance: "Also, satisfaction questionnaires were made to assess the usability of the PeekMed software when comparing with others in the market..."
      • Effectiveness of 3D and Hybrid Planning (compared to 2D only): The new 3D and hybrid planning features do not raise new safety or effectiveness concerns.
        Reported performance: The "Significant Differences" section of the "Comparison of Characteristics" table states: "In more to 2D, PeekMed can offer a 3D pre-surgical planning or a hybrid 2D/3D environment in addition to isolated 2D and 3D. PeekMed has been tested and validated for 3D and hybrid planning." For the additional feature of allowing intersection of models: "The additional feature from PeekMed of allowing the intersection of the models has been tested and validated and does not raise different questions of safety or effectiveness."
      • Support for Diversified Orthopedic Subspecialties: The procedures not common to both PeekMed and the predicate have been tested and validated.
        Reported performance: The "Significant Differences" table notes: "The procedures that are not common to both devices have been tested and validated through PeekMed development. It does not raise different questions of safety or effectiveness."

    2. Sample Size Used for the Test Set and Data Provenance

    The document does not specify a distinct "test set" with a particular sample size in the context of a clinical study for performance evaluation. Instead, the validation involved:

    • Internal validation: "measuring of images of three implantable medical devices (prosthesis), CE marked and with strictly defined dimensions." This sample size (n=3) is for validating measurement accuracy.
    • External validation/Follow-up: "continuous follow-up from the Marketing and Sales team, follow-up on events registered in the platform mixpanel and user/customer surveys."
    • Usability questionnaires: Sample size is not specified but implies a group of users.

    Data Provenance: The document does not explicitly state the country of origin for the data used in validation. It mentions "images of three implantable medical devices (prosthesis), CE marked," which are likely standardized devices rather than patient data. The overall development and submission are from Portugal. The nature of the validation implies retrospective testing on the selected images and prospective feedback (user/customer surveys, event follow-up).

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    The document states that validation was "performed internally prior to the release to the market by qualified personnel (personnel with background in anatomy and biomedical field)". It does not specify a number of experts who established ground truth for the test set (the three implantable devices). For these devices, the "strictly defined dimensions" act as the inherent ground truth, meaning no external expert adjudication was needed to establish it.

    For overall clinical assessment and user feedback, it mentions "surgeons' specialists" as the intended users and that "Experience in usage and a clinical assessment are necessary for a proper use of the software," implying that the feedback and assessment come from qualified medical professionals, but a specific number and detailed qualifications (e.g., years of experience) are not provided.

    4. Adjudication Method for the Test Set

    For the measurement validation using the three implantable devices, no adjudication method (like 2+1 or 3+1) is indicated because the "strictly defined dimensions" of the CE-marked prostheses served as the objective ground truth. The software's measurements were compared against these known dimensions.
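    As an illustrative sketch only (not drawn from the 510(k) summary), a measurement-validation check of this kind compares repeated software measurements against a known reference dimension for both accuracy and repeatability. All values, variable names, and the 0.5 mm / 0.3 mm thresholds below are hypothetical assumptions.

```python
# Hypothetical sketch of validating a "ruler" measurement against a known
# dimension. The reference length, sample measurements, and tolerances are
# illustrative assumptions, not figures from the submission.
from statistics import mean, stdev

KNOWN_LENGTH_MM = 120.0   # strictly defined dimension of the prosthesis (assumed)
TOLERANCE_MM = 0.5        # assumed accuracy acceptance threshold
MAX_SPREAD_MM = 0.3       # assumed repeatability threshold (standard deviation)

# Repeated measurements of the same image using the software's measuring tool.
measurements_mm = [119.8, 120.1, 120.0, 119.9, 120.2]

accuracy_error = abs(mean(measurements_mm) - KNOWN_LENGTH_MM)  # mean bias vs. reference
repeatability = stdev(measurements_mm)                         # spread across repeats

passes = accuracy_error <= TOLERANCE_MM and repeatability <= MAX_SPREAD_MM
print(f"mean error: {accuracy_error:.3f} mm, spread: {repeatability:.3f} mm")
print("PASS" if passes else "FAIL")
```

    With an objective reference like this, no expert adjudication is needed: the pass/fail decision follows directly from the comparison against the known dimension.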

    For broader validation including user satisfaction and functionality, adjudication methods are not typically applicable in the same way as for diagnostic accuracy studies.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No MRMC comparative effectiveness study is explicitly mentioned in the provided text. The submission focuses on demonstrating substantial equivalence to a predicate device (TraumaCad version 2.0) based on similar indications for use, technological characteristics, and performance testing, rather than a comparative effectiveness study showing improvement with AI assistance.

    6. Standalone Performance (Algorithm Only without Human-in-the-Loop Performance)

    The device is described as "a software system designed to help surgeons' specialists carry out the pre-operative planning." It "allows health professional to digitally perform the surgical planning," and "human intervention for image interpretation" is explicitly listed as "Yes." This indicates that the device is an aid to a human professional and not intended for standalone use without a human in the loop for interpretation and decision-making. Therefore, a standalone (algorithm only) performance study as typically understood for diagnostic AI is not explicitly described or claimed. The validation focuses on the tools it provides to the surgeon.

    7. Type of Ground Truth Used

    • For the core measurement accuracy validation: Known physical dimensions of CE-marked implantable medical devices (prostheses).
    • For overall functionality and user experience: Implied expert consensus/feedback from qualified personnel with anatomy/biomedical background during internal validation, and feedback from surgeons via satisfaction questionnaires and follow-up.
    • For comparing to predicate: The predicate device's established performance records.

    8. Sample Size for the Training Set

    The document does not describe the use of machine learning or AI models in a way that would require a distinct "training set" for an algorithm. It is presented as a software tool for pre-operative planning. Therefore, a training set size is not applicable or provided.

    9. How the Ground Truth for the Training Set Was Established

    Since no training set for an AI/ML algorithm is mentioned, this point is not applicable. The device is a "Picture Archiving And Communications System" with image processing capabilities facilitating human planning, rather than an autonomous diagnostic AI.
