K Number
K141669
Device Name
SURGIMAP
Manufacturer
Date Cleared
2014-09-19

(88 days)

Product Code
Regulation Number
892.2050
Panel
RA
Reference & Predicate Devices
Intended Use

The Surgimap software assists healthcare professionals in viewing and measuring images as well as planning orthopedic surgeries. The device allows service providers to perform generic as well as specialty measurements of the images and to plan surgical procedures. The device also includes tools for measuring anatomical components for placement of surgical implants, and offers online synchronization of the database with the possibility to share data among Surgimap users. Clinical judgment and experience are required to properly use the software.

Device Description

Surgimap is software developed for the medical community. It is intended to be used to view, store, and transport images as well as to perform generic or specialty measurements and to plan or simulate aspects of surgical procedures. The supported image formats encompass standard formats (JPEG, TIFF, PNG, etc.) as well as DICOM images. Images can be stored in the Surgimap database, and measurements (generic or specialty-specific) can be overlaid on each image. Surgimap also offers the end user the ability to plan, or simulate, aspects of certain surgical procedures such as osteotomies and implant templating (including but not limited to screws, interbody cages, and rods). Via an internet connection during use of the software application, the database can be synchronized online. An optional feature organizes patient information into cases, with the possibility to share these cases among Surgimap users.
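The 510(k) does not disclose how Surgimap implements its measurements, but the generic-measurement idea it describes — converting pixel geometry into physical units via the DICOM PixelSpacing attribute, and computing an angle between two annotated lines (the form used for Cobb-style spinal measurements) — can be sketched as follows. The function names and the two-line angle formulation are illustrative assumptions, not the vendor's API:

```python
import math

def distance_mm(p1, p2, pixel_spacing):
    """Physical distance between two (row, col) pixel points.

    pixel_spacing is (row_spacing_mm, col_spacing_mm), as stored in the
    DICOM PixelSpacing attribute (0028,0030).
    """
    d_row = (p2[0] - p1[0]) * pixel_spacing[0]
    d_col = (p2[1] - p1[1]) * pixel_spacing[1]
    return math.hypot(d_row, d_col)

def line_angle_deg(p1, p2):
    """Orientation of the line through p1 and p2, in degrees."""
    return math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))

def angle_between_lines(line_a, line_b):
    """Acute angle (degrees) between two lines, each given as a point pair.

    This is the form used for Cobb-style angle measurements between two
    vertebral endplate lines.
    """
    diff = abs(line_angle_deg(*line_a) - line_angle_deg(*line_b)) % 180.0
    return min(diff, 180.0 - diff)

# Example: 0.5 mm isotropic pixels, a 3-4-5 pixel triangle -> 2.5 mm
print(distance_mm((0, 0), (3, 4), (0.5, 0.5)))
# A horizontal line vs. a 45-degree line -> 45.0 degrees
print(angle_between_lines(((0, 0), (10, 0)), ((0, 0), (10, 10))))
```

Note that PixelSpacing gives row spacing before column spacing, and that projection radiographs may instead carry ImagerPixelSpacing, which reflects detector geometry rather than true anatomy — one reason the intended use stresses that clinical judgment is required.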

AI/ML Overview

The provided text describes Surgimap 2.0, a Picture Archiving and Communication System (PACS) software. However, it does not contain specific acceptance criteria, study details, or performance metrics for the device. Instead, it focuses on regulatory submission information, comparison to predicate devices, and software development guidelines.

Therefore, I cannot fulfill your request to describe the acceptance criteria and the study that proves the device meets them from the provided text.

Here's a breakdown of why I cannot answer each point based on the given information:

  1. A table of acceptance criteria and the reported device performance: The document does not list any specific acceptance criteria (e.g., accuracy thresholds, precision values, usability metrics) or reported performance data of the device in terms of these criteria.
  2. Sample size used for the test set and the data provenance: There is no mention of a test set, its sample size, or the provenance of the data used for any performance evaluation.
  3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts: As no test set performance study is described, there's no information on experts or ground truth establishment.
  4. Adjudication method (e.g., 2+1, 3+1, none) for the test set: Not applicable as no test set study is detailed.
  5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance: The document does not describe any MRMC study or any study involving human readers or an AI-assistance effect size. It states that "Human Intervention for interpretation and manipulation of images" is "Required," implying the device is a tool for human professionals, not one intended to replace or quantifiably improve their diagnostic accuracy through AI.
  6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done: The document describes the device as assisting healthcare professionals and requiring clinical judgment and experience. It does not mention any standalone algorithm performance testing.
  7. The type of ground truth used (expert consensus, pathology, outcomes data, etc): Not applicable as no performance study is detailed.
  8. The sample size for the training set: There is no mention of a training set or its sample size. The device is described as software for viewing, measuring, and planning, not explicitly as a machine learning/AI model that requires training data in the traditional sense.
  9. How the ground truth for the training set was established: Not applicable as there's no mention of a training set.

The document primarily focuses on demonstrating substantial equivalence to predicate devices based on intended use, technological characteristics, and adherence to software development and risk management standards (e.g., EN ISO 14971, IEC 62304). It confirms that "Software verification and validation testing were conducted," but provides no details about the methodology, results, or specific acceptance criteria met by these tests beyond implying that they satisfy regulatory guidance for software with a "Moderate" level of concern.

§ 892.2050 Medical image management and processing system.

(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).
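The DICOM standard cited in the special controls defines, in Part 10 (PS3.10), a file format in which a conforming file begins with a 128-byte preamble followed by the four-byte prefix "DICM". A minimal file-type check along those lines — illustrative only, not Surgimap's implementation — looks like this:

```python
def looks_like_dicom(path):
    """Return True if the file carries a DICOM Part 10 header.

    Per DICOM PS3.10, a conforming file starts with a 128-byte preamble
    followed by the 4-byte prefix b"DICM".
    """
    with open(path, "rb") as f:
        header = f.read(132)
    return len(header) == 132 and header[128:] == b"DICM"
```

In practice, viewers often fall back to parsing data elements directly when this check fails, since some legacy files omit the preamble and prefix entirely.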