K Number
K213684
Device Name
SurgiCase Viewer
Manufacturer
Date Cleared
2022-06-15

(205 days)

Product Code
Regulation Number
892.2050
Panel
RA
Reference & Predicate Devices
K170419 (predicate), K183105 (reference)
Intended Use

SurgiCase Viewer is intended to be used as a software interface to assist in visualization of treatment options.

Device Description

SurgiCase Viewer provides functionality to allow visualization of 3D data and to perform measurements on these 3D data, which should allow a clinician to evaluate and communicate about treatment options.

SurgiCase Viewer is intended for use by people active in the medical sector. When used to review and validate treatment options, SurgiCase Viewer is intended to be used in conjunction with other diagnostic tools and expert clinical judgment.

The SurgiCase Viewer can be used by a medical device/service manufacturer/provider or hospital department to visualize 3D data during the manufacturing process of the product/service for the end-user who is ordering the device/service. This allows the end-user to evaluate and provide feedback on proposals or intermediate steps in the manufacture of the device or service.

The SurgiCase Viewer is to be integrated with an online Medical Device Data System which is used to process the medical device or service and which is responsible for case management, user management, authorization, authentication, etc.
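
As a rough illustration of this kind of integration, the sketch below shows a viewer-side client fetching case data from an online Medical Device Data System. The endpoint path, field names, and bearer-token authentication are assumptions for illustration only, not the actual SurgiCase or MDDS API.

```python
# Hypothetical sketch of a viewer pulling case data from an online Medical
# Device Data System (MDDS). Endpoint, fields, and auth scheme are assumed.
import json
import urllib.request


def fetch_case(base_url: str, case_id: str, token: str) -> dict:
    """Retrieve case meta-data (including links to 3D data) from the MDDS."""
    req = urllib.request.Request(
        f"{base_url}/cases/{case_id}",                 # hypothetical endpoint
        headers={"Authorization": f"Bearer {token}"},  # authentication handled by the MDDS
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))


# Illustrative usage: the viewer only reads what the MDDS exposes; case,
# user, and permission management remain the MDDS's responsibility.
# case = fetch_case("https://mdds.example.com/api", "CASE-123", token="...")
# stl_assets = [a for a in case.get("assets", []) if a.get("format") == "stl"]
```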

The data visualized in the SurgiCase Viewer is controlled by the medical device manufacturer using the SurgiCase Viewer in its process. The device manufacturer creates the 3D data to be visualized to the end-user and exports it to one of the dedicated formats supported by the SurgiCase Viewer. Each of these formats describes the 3D data in STL format with additional meta-data on the 3D models. The SurgiCase Viewer does not alter the 3D data it imports, and its functioning is independent of the specific medical indication/situation or product/service it is used for. It is the responsibility of the medical device company using the SurgiCase Viewer to comply with the applicable medical device regulations.
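
As a loose sketch of what importing such a format could look like, the snippet below reads a binary STL file plus a hypothetical JSON side-car holding the additional meta-data; the side-car layout is an assumption for illustration, not one of the actual SurgiCase formats.

```python
# Minimal sketch: binary STL geometry plus a hypothetical JSON meta-data side-car.
import json
import struct
from pathlib import Path


def read_binary_stl(path: str) -> list:
    """Return a list of triangles, each as three (x, y, z) vertex tuples."""
    data = Path(path).read_bytes()
    (n_triangles,) = struct.unpack_from("<I", data, 80)  # 80-byte header, then triangle count
    triangles = []
    offset = 84
    for _ in range(n_triangles):
        # 12 little-endian floats per record: normal (3) + three vertices (9); 2 attribute bytes follow
        values = struct.unpack_from("<12f", data, offset)
        triangles.append((values[3:6], values[6:9], values[9:12]))
        offset += 50
    return triangles


def read_case(stl_path: str, meta_path: str) -> tuple:
    """Load geometry and its accompanying meta-data; the geometry itself is not altered."""
    return read_binary_stl(stl_path), json.loads(Path(meta_path).read_text())
```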

AI/ML Overview

The provided text describes the 510(k) submission for the "SurgiCase Viewer" device (K213684). However, it does not contain all of the specific details needed to fully address the questions below on acceptance criteria, test set specifics, expert ground truth establishment, MRMC studies, or training set details. This document primarily focuses on demonstrating substantial equivalence to a predicate device.

The study presented here is a non-clinical performance evaluation comparing the new SurgiCase Viewer with its predicate (K170419) and a secondary reference device (K183105).

Here's a breakdown of what can be extracted and what is missing for each question:

1. Table of Acceptance Criteria and Reported Device Performance

The document does not explicitly present a table of acceptance criteria with numerical performance metrics. Instead, it states that the device was validated to determine substantial equivalence based on:

  • Intended Use: "Both the subject device as well as the predicate device have the same intended use; They are both intended to be used as a software interface to assist in visualization and communication of treatment options."
  • Device Functionality: The new device was compared to the predicate in terms of features like 3D view navigation, visualization options, measuring, and annotations. For new functionalities (medical image visualization, VR visualization), it states "The abovementioned technological differences do not impact the safety and effectiveness of the subject device for the proposed intended use as is demonstrated by the verification and validation plan."
  • Medical Images Functionality (compared to Mimics Medical K183105): "Both functionality produce the same results in: Contrast adjustments, Interactive image reslicing, 3D contour overlay on images."
  • Measurement functionality: "Measurement functionality on images was compared with already existing functionality on the 3D models and shown to provide correct results both on images and 3D."
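
To make the measurement comparison concrete, here is a minimal sketch of the kind of check that last statement implies: the same two landmarks measured in voxel coordinates (scaled by voxel spacing) and directly on the 3D model should yield the same physical distance. The spacing values and points are invented for illustration and do not come from the submission.

```python
# Sketch: the same length measured on the image (voxel indices * spacing) and on the 3D model.
import math


def distance_mm_3d(p, q):
    """Euclidean distance between two points already expressed in millimetres."""
    return math.dist(p, q)


def distance_mm_image(i, j, spacing):
    """Distance between two voxel-index points, scaled by voxel spacing (mm per voxel)."""
    return math.dist([a * s for a, s in zip(i, spacing)],
                     [b * s for b, s in zip(j, spacing)])


# Both routes should report the same physical length (here about 11.18 mm):
spacing = (0.5, 0.5, 1.0)                                   # mm per voxel along x, y, z
on_model = distance_mm_3d((0.0, 0.0, 0.0), (5.0, 0.0, 10.0))
on_image = distance_mm_image((0, 0, 0), (10, 0, 10), spacing)
assert abs(on_model - on_image) < 1e-9
```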

2. Sample size used for the test set and the data provenance:

  • Sample Size: Not explicitly stated. The document refers to "verification and validation" and "performance testing" but does not provide details on the number of cases or images used in these tests.
  • Data Provenance: Not explicitly stated (e.g., country of origin). It refers to "medical images functionality" and "3D models" but doesn't specify if these were from retrospective patient data, simulated data, etc. The study is described as "non-clinical testing."

3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

  • Experts: Not explicitly stated. The validation involved "end-users," but their specific number, roles, or qualifications are not provided.
  • Ground Truth Establishment: Not explicitly detailed. The comparison against the predicate and reference device functionalities implies that their established performance served as a form of "ground truth" for the new device's functions.

4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

  • Not explicitly stated. There is no mention of a formal reader adjudication process.

5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI assistance versus without it:

  • No MRMC study described. This submission focuses on the device's substantial equivalence in functionality and safety, not on human reader performance improvement with AI assistance. The device's stated indication is "to assist in visualization of treatment options," implying a tool for clinicians, but not an AI-driven diagnostic aid that would typically undergo MRMC studies.

6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:

  • The context suggests a standalone functional assessment of the software's capabilities (e.g., whether it correctly performs contrast adjustments, measurement calculations, etc.) in comparison to the predicate and reference device. It's not an AI algorithm with a distinct "performance" metric like sensitivity/specificity, but rather a functional software application.

7. The type of ground truth used (expert consensus, pathology, outcomes data, etc):

  • For the functional comparison: The "ground truth" seems to be the established, correct functioning of the predicate and reference devices for equivalent features, and the defined requirements for new features. For instance, if the Mimics Medical device correctly performs "contrast adjustments," the SurgiCase Viewer needs to produce the "same results." For measurements, it needs to provide "correct results." This isn't a traditional clinical ground truth like pathology for a diagnostic AI.
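
As a hedged illustration of this "ground truth by equivalence" idea, the sketch below compares two stand-in implementations of a window/level contrast adjustment and requires identical output. Neither function represents the actual subject or reference device, and the test data are synthetic; the point is only the shape of such a functional check.

```python
# Equivalence-style functional check: two implementations of the same
# window/level mapping must produce identical display values.
import numpy as np


def window_level(image: np.ndarray, center: float, width: float) -> np.ndarray:
    """Vectorised window/level mapping of intensities to the 0-255 display range."""
    lo, hi = center - width / 2.0, center + width / 2.0
    return ((np.clip(image, lo, hi) - lo) / (hi - lo) * 255.0).astype(np.uint8)


def window_level_reference(image: np.ndarray, center: float, width: float) -> np.ndarray:
    """Per-pixel implementation standing in for the reference device's output."""
    lo, hi = center - width / 2.0, center + width / 2.0
    out = np.empty(image.shape, dtype=np.uint8)
    for idx, v in np.ndenumerate(image):
        v = min(max(v, lo), hi)
        out[idx] = int((v - lo) / (hi - lo) * 255.0)
    return out


def test_contrast_adjustment_equivalence() -> None:
    image = np.random.default_rng(0).integers(-1000, 2000, size=(32, 32)).astype(np.float64)
    subject = window_level(image, center=300.0, width=1500.0)
    reference = window_level_reference(image, center=300.0, width=1500.0)
    assert np.array_equal(subject, reference)  # "same results" for the shared functionality
```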

8. The sample size for the training set:

  • Not applicable / Not mentioned. This device description does not indicate the use of machine learning or AI models that require a "training set" in the conventional sense. It's described as a software interface for visualization and measurements.

9. How the ground truth for the training set was established:

  • Not applicable. (See point 8).

In summary, the provided document demonstrates that the SurgiCase Viewer is substantially equivalent to existing cleared devices based on a functional and software validation process. It asserts that new functionalities do not negatively impact safety or effectiveness and that shared functionalities perform comparably. However, it does not detail the type of rigorous clinical performance study (e.g., with patient data, expert readers, and quantitative statistical metrics) that would be common for AI/ML-driven diagnostic devices.

§ 892.2050 Medical image management and processing system.

(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).