K Number
K123269
Manufacturer
Date Cleared
2013-01-29

(102 days)

Product Code
Regulation Number
892.2050
Panel
RA
Reference & Predicate Devices
Intended Use

The PROcedure Rehearsal Studio software is intended for use as a software interface and image segmentation system for the transfer of imaging information from a medical scanner such as a CT scanner to an output file. It is also intended as pre-operative software for simulating/evaluating surgical treatment options.
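
As a rough illustration of what "transfer of imaging information from a medical scanner such as a CT scanner to an output file" typically involves, the sketch below reads a CT DICOM series and writes it out as a single volume file using SimpleITK. This is a generic example under assumed file paths and output format, not the PROcedure Rehearsal Studio implementation, which is not described in the submission.

```python
# Generic illustration only: read a CT DICOM series and export it as a single
# volume file. The directory path and output format (NIfTI) are assumed
# placeholders, not anything specified for this device.
import SimpleITK as sitk

def export_ct_series(dicom_dir: str, output_path: str) -> None:
    """Load a DICOM series from a directory and write it to one output file."""
    reader = sitk.ImageSeriesReader()
    slice_files = reader.GetGDCMSeriesFileNames(dicom_dir)  # sorted slice filenames
    reader.SetFileNames(slice_files)
    volume = reader.Execute()                                # 3D image volume
    sitk.WriteImage(volume, output_path)

if __name__ == "__main__":
    export_ct_series("/path/to/ct_series", "patient_volume.nii.gz")
```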

Device Description

The Simbionix PROcedure Rehearsal Studio software allows clinicians to create a patient-specific 3D anatomical model based on a patient's CT for the purpose of simulating, analyzing, and evaluating preoperative surgical treatment options. Once the 3D segmentation model has been exported to the Simbionix ANGIO Mentor Simulator Practice Environment, the physician can use it to create a library of modules for training and postoperative debriefing. The modifications subject to this Special 510(k) submission are: (1) Graphical User Interface changes in various locations; (2) functional changes in various locations, which include the addition of a TEVAR module that allows the software to create 3D models of chest scans in addition to the EVAR and carotid options that were previously cleared.
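
The submission does not describe the segmentation algorithm itself. For context, a minimal sketch of a generic CT-to-surface-mesh pipeline (intensity thresholding followed by marching cubes) is shown below; the Hounsfield-unit threshold and the synthetic test volume are assumed illustrative values, not the device's method.

```python
# Generic sketch of thresholding + marching cubes for building a 3D surface
# from a CT volume. The HU threshold and synthetic volume are assumptions
# for illustration only.
import numpy as np
from skimage import measure

def segment_to_mesh(ct_volume: np.ndarray, hu_threshold: float = 200.0):
    """Binarize a CT volume at a threshold and extract a triangular surface mesh."""
    mask = (ct_volume > hu_threshold).astype(np.float32)    # crude intensity segmentation
    verts, faces, _normals, _values = measure.marching_cubes(mask, level=0.5)
    return verts, faces                                      # mesh usable for 3D visualization

# Synthetic stand-in for a real CT volume
fake_ct = np.zeros((64, 64, 64), dtype=np.float32)
fake_ct[20:40, 20:40, 20:40] = 400.0                         # bright block mimicking a contrast-filled structure
verts, faces = segment_to_mesh(fake_ct)
print(f"{len(verts)} vertices, {len(faces)} triangles")
```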

AI/ML Overview

No, this device is not an AI/ML device. It is software for creating patient-specific 3D anatomical models from CT scans for surgical simulation and evaluation. The document does not provide a table of acceptance criteria or details of a study with performance metrics in the way typically expected for AI/ML device evaluations (e.g., accuracy, sensitivity, specificity, AUC).

Here's an analysis based on the provided text, focusing on why it doesn't fit the requested format for AI/ML device performance and what information is available:

1. A table of acceptance criteria and the reported device performance

The document does not provide a table of acceptance criteria with specific quantitative performance metrics (e.g., accuracy, sensitivity, specificity, F1 score) that are typical for an AI/ML device.

Instead, the "Performance Data" section states:

  • "The verification stage of the software consisted of tests performed for each phase of the user work flow, verifying: Correct functionality of each of the software features, which are part of this work phase and Correct UI."
  • "The validation stage consisted of a high level integration test of the device module and included a run through of 10 additional datasets, verifying work flow of all software components."
  • "The testing activities were conducted according to the following phases of the user work flow: Importing Patient Data, Segmentation and Centerlines. All testing met the Pass criteria."

This describes a software validation process for functionality and user interface, rather than a clinical performance study with statistical endpoints for an algorithm's diagnostic or predictive capabilities. The "Pass criteria" are mentioned, but not specified in detail or linked to quantitative performance.
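
To make the described verification/validation structure concrete, the sketch below shows one way a phase-by-phase workflow check over multiple datasets with simple pass criteria could be organized. The phase functions, dataset names, and pass checks are hypothetical placeholders; the actual test procedures are not detailed in the summary.

```python
# Hypothetical sketch of a phase-by-phase validation run (Importing Patient Data,
# Segmentation, Centerlines) over 10 datasets with pass/fail recording. The
# phase checks and dataset names are placeholders, not the actual test protocol.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class PhaseResult:
    dataset: str
    phase: str
    passed: bool

def run_validation(datasets: List[str],
                   phases: List[Tuple[str, Callable[[str], bool]]]) -> List[PhaseResult]:
    """Run every workflow phase against every dataset and record pass/fail."""
    return [PhaseResult(ds, name, check(ds)) for ds in datasets for name, check in phases]

# Placeholder checks standing in for the real software features in each phase
phases = [
    ("Importing Patient Data", lambda ds: True),
    ("Segmentation",           lambda ds: True),
    ("Centerlines",            lambda ds: True),
]
datasets = [f"dataset_{i:02d}" for i in range(10)]           # "10 additional datasets"
results = run_validation(datasets, phases)
print("All testing met the Pass criteria:", all(r.passed for r in results))
```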

2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

  • Test Set Sample Size: "10 additional datasets" were used for the validation stage.
  • Data Provenance: Not specified (e.g., country of origin, retrospective/prospective). The document only refers to "patient's CT" for creating models.

3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)

Not applicable or not specified. Since the validation was focused on software functionality and workflow, rather than diagnostic accuracy against a ground truth, this information is not provided. The device aids clinicians in creating models, but its own "performance" isn't measured against expert consensus for disease detection or measurement.

4. Adjudication method (e.g., 2+1, 3+1, none) for the test set

Not applicable or not specified. This is relevant for studies where multiple readers interpret cases and their interpretations are adjudicated to establish ground truth or evaluate reader performance. This document describes software workflow verification.

5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

No MRMC comparative effectiveness study was done or reported. This device is not an AI performing automated analysis; it's a tool for clinicians to create 3D models. The focus is on the software's ability to create these models and support a workflow, not on improving human reader performance with AI assistance.

6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

Yes, in a sense, the "performance data" describes the standalone performance of the software in terms of its functionality and workflow. However, it's not "standalone" in the context of an AI algorithm making independent decisions. The device is a "software interface and image segmentation system" which implies it's a tool used by a human, not an independent decision-maker. The validation tested whether "all software components" functioned correctly.

7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

Not applicable directly in the context of diagnostic accuracy. The "ground truth" for the validation described appears to be the expected functional behavior and correct UI representation of the software features, as determined by the software's specifications and design documents. It's not a clinical ground truth like pathology for disease presence.

8. The sample size for the training set

Not applicable. This device is not an AI/ML system that undergoes a "training" phase with data in the typical sense. It is deterministic software for segmentation and 3D model creation.

9. How the ground truth for the training set was established

Not applicable, as there is no mention of a training set or AI/ML model training.

§ 892.2050 Medical image management and processing system.

(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).