K Number: K232339
Manufacturer: Sira Medical, Inc.
Date Cleared: 2024-02-01 (181 days)
Regulation Number: 892.2050
Predicate For: N/A
Intended Use

Sira Medical Augmented Reality Application is intended as a medical imaging system that allows the processing, review, analysis, and communication of augmented reality images acquired from the same data used to generate conventional CT scans and MRIs. Sira Medical software is designed for preoperative surgical planning and intended to be used by trained and qualified clinicians who are responsible for making patient management decisions.

Device Description

The Sira Medical Augmented Reality Application is used for viewing and manipulating, in Augmented Reality (AR) on a Head Mounted Display (HMD), 3D models created by the Sira 3D Image Preparation Service from customer-supplied anonymized (de-identified) imaging. The application allows the user to manipulate one or more 3D models in real time in the AR environment: the user can view a model and adjust its orientation, and can scale, rotate, and position models within the AR visual space. Models can be sliced to create separate objects or merged to create a single object. These models are used for preoperative surgical planning but are not used intraoperatively (during surgical procedures).
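The manipulation described above (scaling, rotating, and positioning models in the AR visual space) is conventionally implemented by composing 4x4 homogeneous transforms and applying them to the model's vertices. The following is a minimal illustrative sketch in numpy, not Sira's actual implementation; all function names and values are invented for the example:

```python
import numpy as np

def translation(tx, ty, tz):
    """4x4 homogeneous translation matrix."""
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

def scaling(s):
    """Uniform scale about the origin."""
    m = np.eye(4)
    m[:3, :3] *= s
    return m

def rotation_z(theta):
    """Rotation about the z-axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    m = np.eye(4)
    m[:2, :2] = [[c, -s], [s, c]]
    return m

def apply(transform, vertices):
    """Apply a 4x4 transform to an (N, 3) vertex array."""
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])
    return (homo @ transform.T)[:, :3]

# Place a model in the scene: scale 2x, rotate 90 degrees, move to (1, 0, 0).
# Rightmost transform is applied first.
pose = translation(1, 0, 0) @ rotation_z(np.pi / 2) @ scaling(2.0)
verts = np.array([[1.0, 0.0, 0.0]])
print(apply(pose, verts))  # the vertex lands at (1, 2, 0)
```

Composing the matrices once and applying the product to every vertex is what makes real-time manipulation of large anatomical meshes feasible on an HMD.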

AI/ML Overview

The provided text does not contain detailed information about specific acceptance criteria, a study proving the device meets those criteria, or quantitative performance metrics for the Sira Medical Augmented Reality Application.

The document discusses the device's intended use, its substantial equivalence to a predicate device (Aibolit Technologies, LLC, Aibolit 3D+), and general non-clinical performance testing. However, it does not provide the data needed to complete the requested table or to answer the questions about sample sizes, ground truth establishment, or multi-reader multi-case studies.

Here's a breakdown of what is and is not available in the provided text:

Information Available:

  • Acceptance Criteria/Performance (General): The document states that "The results demonstrated that the Sira Medical, Inc. Augmented Reality Application performs according to its specifications and functions as intended." This is a general statement of performance but lacks specific, quantifiable acceptance criteria or reported device performance metrics.
  • Study Types: Non-clinical performance testing included:
    • Software verification and validation testing (IEC 62304).
    • Human factors testing (IEC 62366-1).
  • Ground Truth Type (Implied for training or preparation): The process involves a radiologist annotating (segmenting) images and identifying organs, which forms a kind of expert-derived ground truth for the 3D model generation. This isn't for testing the AI, but rather for the workflow of creating the augmented reality models.

Information NOT Available (and thus cannot be answered from the text):

  • A table of specific acceptance criteria and reported device performance.
  • Sample sizes used for test sets.
  • Data provenance (country of origin, retrospective/prospective).
  • Number of experts used for ground truth or their qualifications for a test set.
  • Adjudication method for a test set.
  • Multi-Reader Multi-Case (MRMC) comparative effectiveness study information (effect size, human reader improvement with/without AI).
  • Standalone (algorithm-only) performance data.
  • Specific ground truth type used for testing the device's performance against quantifiable criteria.
  • Sample size for the training set.
  • How the ground truth for the training set was specifically established beyond "Radiologist annotates sample (segments) images" (e.g., how consistency was ensured, or what software was used for ground-truth labeling if an AI segmentation component existed).

Based on the available text, here is what can be inferred or directly stated, and where information is missing:

1. Table of Acceptance Criteria and Reported Device Performance

Acceptance Criteria | Reported Device Performance
Not specified quantitatively in the provided text. | "The results demonstrated that the Sira Medical, Inc. Augmented Reality Application performs according to its specifications and functions as intended." (No quantitative metrics provided.)

2. Sample size used for the test set and the data provenance

  • Sample Size (Test Set): Not specified in the provided text.
  • Data Provenance: Not specified in the provided text. The device processes "customer-supplied anonymized (de-identified) imaging."

3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

  • Number of Experts: Not specified for a test set.
  • Qualifications: For the workflow description (how the 3D models are created), it states "Radiologist annotates sample (segments) images" and "Radiologist reviews images generated by imaging technician." This implies medical doctors, specifically radiologists, are involved in defining anatomical structures.

4. Adjudication method for the test set

  • Not specified in the provided text, as specific test set details are absent.

5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

  • Not mentioned or detailed in the provided text. The device is for "preoperative surgical planning" and helps clinicians, but there's no mention of a comparative effectiveness study with AI assistance on human reader performance.
    • Note: The subject device's workflow states "Radiologist annotates sample (segments) images," while the predicate's workflow mentions "AI software facilitates annotation of available images under guidance and control by the Radiologist." This difference might imply the predicate has an AI component for annotation, but the subject device does not explicitly mention AI for this specific function in its workflow description.

6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

  • Not mentioned or detailed in the provided text. The device is described as "designed for preoperative surgical planning and intended to be used by trained and qualified clinicians who are responsible for making patient management decisions," implying a human-in-the-loop context.

7. The type of ground truth used

  • For the generation of 3D models (part of the device's function): Expert consensus/annotation by Radiologists (MDs) is used for image segmentation and organ identification. This forms the basis for the 3D models viewed in AR. This isn't a "ground truth" for testing the device's accuracy against a gold standard, but rather how the input data for the AR visualization is prepared.
  • For performance testing of the device itself: Not explicitly stated. The "Software verification and validation testing" and "Human factors testing" are mentioned, but the nature of the ground truth or criteria for these tests isn't detailed.

8. The sample size for the training set

  • Not specified in the provided text.

9. How the ground truth for the training set was established

  • Not explicitly stated for a "training set." However, for the process of creating the 3D models viewed by the device, the text states that image segmentation is performed "By Radiologist (MD) - Manual annotation is done for all image data" and that organ identification is "By Radiologist." This indicates that expert manual annotation by radiologists ("Radiologist annotates sample (segments) images") establishes the anatomical definitions used by the system; if an AI segmentation component existed, this manual annotation would likely serve as its training ground truth.

§ 892.2050 Medical image management and processing system.

(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).