K Number
K211443
Device Name
AIBOLIT 3D+
Date Cleared
2022-01-07

(242 days)

Product Code
Regulation Number
892.2050
Intended Use

Aibolit 3D+ is intended as a medical imaging system that allows the processing, review, analysis, communication and media interchange of multi-dimensional digital images acquired from CT imaging devices. Aibolit 3D+ is intended as software for preoperative surgical planning, training, patient information and as software for the intraoperative display of the multidimensional digital images. Aibolit 3D+ is designed for use by health care professionals and is intended to assist the clinician who is responsible for making all final patient management decisions.

Device Description

Aibolit 3D+ is a web-based, stand-alone application that can be accessed from any computer connected to the internet. Once the enhanced images are created, they can be used by the physician for case review, patient education, professional training and intraoperative reference.

Aibolit 3D+ is a software-only device that processes a patient's CT images to create 3-dimensional images that may be manipulated to view the anatomy from virtually any perspective. The software also allows for transparent viewing of anatomical structures and artifacts inside organs, such as ducts, vessels, lesions and entrapped calcifications (stones). Anatomical structures are identified by name and by differential coloration to highlight them within the region of interest.

The software may help facilitate the surgeon's decision-making during the planning, review and conduct of surgical procedures and hence may help decrease or prevent errors caused by misidentification of anatomical structures and their positional relationships.
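The core pipeline described above (stacking CT slices into a 3D volume and highlighting named structures with differential coloration) can be sketched in a few lines of NumPy. This is purely illustrative: the summary does not disclose Aibolit 3D+'s actual algorithms, and the HU thresholds and colors below are assumptions chosen for the example.

```python
import numpy as np

def stack_slices(slices):
    """Stack 2D CT slices (in Hounsfield units) into a 3D volume, axial axis first."""
    return np.stack(slices, axis=0)

def highlight_structure(volume, hu_min, hu_max, rgb):
    """Render the volume as grayscale RGB and paint voxels whose HU value
    falls in [hu_min, hu_max] with a solid color -- a toy version of the
    'differential coloration' of a segmented structure."""
    gray = np.clip((volume + 1000.0) / 2000.0, 0.0, 1.0)  # map ~[-1000, 1000] HU to [0, 1]
    out = np.repeat(gray[..., None], 3, axis=-1)          # grayscale -> RGB
    mask = (volume >= hu_min) & (volume <= hu_max)
    out[mask] = np.asarray(rgb, dtype=float)
    return out

# Synthetic example: 4 slices of soft tissue (~40 HU) with a calcified
# "stone" (~400 HU) embedded in the middle slice.
slices = [np.full((8, 8), 40.0) for _ in range(4)]
slices[2][3:5, 3:5] = 400.0
vol = stack_slices(slices)
colored = highlight_structure(vol, 200.0, 3000.0, rgb=(1.0, 0.8, 0.0))
print(colored.shape)  # (4, 8, 8, 3) -- one RGB triple per voxel
```

In a real pipeline the mask would come from segmentation (manual or AI-assisted, per the workflow described later), not from a fixed HU window.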

AI/ML Overview

The following analysis is based on the provided 510(k) summary, which focuses on substantial equivalence rather than a detailed performance study report. Some of the requested information (e.g., specific performance metrics against acceptance criteria, detailed ground truth establishment for a test set, MRMC study results) is therefore not explicitly present.

Device Name: Aibolit 3D+
Manufacturer: Aibolit Technologies, LLC
K Number: K211443
Predicate Device: Ceevra Reveal 2.0 Image Processing System [510(k) K173274]


Acceptance Criteria and Reported Device Performance

The provided 510(k) summary focuses on demonstrating substantial equivalence to a predicate device, rather than presenting a performance study against specific, quantitative acceptance criteria. Therefore, there isn't a direct "table of acceptance criteria and reported device performance" in the traditional sense of metrics like sensitivity, specificity, accuracy, or volume measurement error for a standalone deep learning algorithm.

Instead, the "acceptance criteria" for a 510(k) are generally met by demonstrating that the new device is as safe and effective as a legally marketed predicate device. This is achieved by comparing their Indications for Use, technological characteristics, and performance. The document implies that the device "meets" its "criteria" by being substantially equivalent to the Ceevra Reveal 2.0 system.

The "Performance Testing" section lists various documentation submitted (Hardware Requirements, Software Description, etc.), but does not specify quantitative performance metrics or the results of comparative studies with the predicate for individual features. The "Conclusion" explicitly states: "AIBOLIT 3D+ is substantially equivalent to the previously cleared Ceevra Reveal 2.0 Image Processing System with respect to intended use, principle of operation, general technological characteristics and performance."

Therefore, for the requested table, we can infer the implicit acceptance criteria based on the comparison to the predicate. The "reported device performance" is essentially the claim of "substantial equivalence" across these aspects.

| Acceptance Criterion (implicitly based on predicate comparison) | Reported Device Performance (claimed) |
| --- | --- |
| Intended Use: processing, review, analysis, communication and media interchange of multi-dimensional digital images from CT devices; preoperative surgical planning, training, patient information, intraoperative display | Substantially equivalent (same Indications for Use) |
| Principle of Operation: software-based capture and enhancement of DICOM images; conversion to manipulable 2D/3D images of anatomical structures | Substantially equivalent (same mechanism of action) |
| Technological Characteristics (Input): DICOM images from CT | Substantially equivalent (DICOM from CT) |
| Technological Characteristics (Functions): generation of 2D/3D images, organ segmentation/identification, dimensional/volume references, multi-axis rotation, organ transparency | Substantially equivalent (includes all predicate functions, plus "organ retraction animation") |
| Technological Characteristics (Image output): high-definition digital images | Substantially equivalent (up to 4K, vs. predicate's "high-definition") |
| Technological Characteristics (Security): data coded and HIPAA compliant | Substantially equivalent (data coded and HIPAA compliant) |
| Performance (General): safe and effective for intended use | Substantially equivalent (claimed based on the full submission) |
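The "organ transparency" function listed among the technological characteristics can be illustrated with a toy front-to-back alpha-compositing pass. The summary does not describe Aibolit 3D+'s actual rendering method; this is only a minimal sketch of how semi-transparent structures reveal what lies behind them.

```python
import numpy as np

def composite(volume_rgb, alpha):
    """Front-to-back alpha compositing of an RGB volume along axis 0:
    each slice contributes color weighted by its opacity and by the
    transmittance remaining after the slices in front of it."""
    img = np.zeros(volume_rgb.shape[1:], dtype=float)        # (H, W, 3) accumulated color
    remaining = np.ones(volume_rgb.shape[1:3], dtype=float)  # (H, W) remaining transmittance
    for rgb_slice, a_slice in zip(volume_rgb, alpha):
        img += remaining[..., None] * a_slice[..., None] * rgb_slice
        remaining *= 1.0 - a_slice
    return img

# A semi-transparent red "organ wall" in front of an opaque blue "vessel":
front = np.tile([1.0, 0.0, 0.0], (2, 2, 1))
back = np.tile([0.0, 0.0, 1.0], (2, 2, 1))
vol = np.stack([front, back])                          # shape (2, 2, 2, 3)
alpha = np.stack([np.full((2, 2), 0.5), np.ones((2, 2))])
img = composite(vol, alpha)  # each pixel blends 50% red with 50% blue
```

With the front layer at 50% opacity, the blue structure behind it remains visible through the red wall, which is the effect the "transparent viewing" feature provides.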

Study Details (Based on available information)

Given this is a 510(k) for substantial equivalence, the "study" described is a comparison to a predicate, not necessarily a de novo clinical trial with a traditional test set and ground truth.

  1. Sample Size used for the Test Set and Data Provenance:

    • The document does not explicitly state a quantitative "test set" sample size in terms of number of cases or scans used for validation testing of the algorithm's performance (e.g., for segmentation accuracy or other quantitative metrics).
    • It mentions "Performance Testing" by listing documentation submitted, such as "Software Validation Report" and "Usability Evaluation." These documents would contain details about the types and number of cases used, but this information is not directly provided in the summary.
    • Data Provenance: Not specified in the provided text (e.g., country of origin, retrospective/prospective).
  2. Number of Experts used to establish the Ground Truth for the Test Set and Qualifications of those Experts:

    • The document states, for the AIBOLIT 3D+ workflow, under "Image Segmentation": "By Radiologist (MD) – Manual annotation is done for all CT slices with optional use of AI/ML algorithms as determined by Radiologist and with Radiologist's approval."
    • Under "Organ identification": "By Radiologist."
    • This indicates that Radiologists are involved in the segmentation and identification process for the device's operation, and implicitly for any internal validation or ground truth generation.
    • The number of radiologists/experts and their specific qualifications (e.g., years of experience) used for establishing a test set ground truth are not explicitly stated in this 510(k) summary. It only indicates that "Radiologist (MD)" performs these tasks.
  3. Adjudication Method (e.g., 2+1, 3+1, none) for the Test Set:

    • The document describes a user interface and system workflow where a "Radiologist reviews images generated by imaging technician and returns output file to requesting physician."
    • However, a specific "adjudication method" involving multiple experts resolving discrepancies for a test set ground truth is not described in the provided text. The workflow suggests a single Radiologist's approval of the imaging technician's work, but not a consensus process for validation per se.
  4. If a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was done, what was the effect size of human reader improvement with AI vs. without AI assistance:

    • No MRMC comparative effectiveness study is described in this 510(k) summary. The device's substantial equivalence is based on its similarity to the predicate device in functionality and intended use. The device is intended to "assist the clinician who is responsible for making all final patient management decisions," but no data on human reader improvement with the AI assistance is provided.
  5. If a Standalone study (i.e., algorithm-only performance without human-in-the-loop) was done:

    • The device workflow clearly involves a human-in-the-loop (Radiologist) for "Manual annotation" and "approval" even with the optional use of AI/ML algorithms. The AI/ML is stated to "facilitate annotation," suggesting it is a tool for the radiologist, not a fully standalone diagnostic or analytical algorithm.
    • The summary does not explicitly describe a standalone performance study of the algorithm component without human interaction.
  6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • The ground truth for the operation of the device itself (i.e., for segmentation and organ identification within the Aibolit 3D+ workflow) is stated to be established "By Radiologist (MD) – Manual annotation." This implies expert (Radiologist) annotation/consensus is the basis for the data processed by the system.
    • For any underlying software validation/training data:
      • For segmentation, it's explicitly stated to be "By Radiologist (MD) – Manual annotation."
      • For organ identification, "By Radiologist."
    • It is not stated if this "ground truth" was verified by pathology or outcomes data.
  7. The Sample Size for the Training Set:

    • The document does not specify the sample size for any training set. It mentions the "optional use of AI/ML algorithms" to "facilitate annotation," suggesting that AI/ML components exist, which would imply a training phase. However, no details on the training data size are provided.
  8. How the Ground Truth for the Training Set was Established:

    • While a specific "training set" is not detailed, the general method for annotation and identification within the Aibolit 3D+ workflow is described as being performed "By Radiologist (MD) – Manual annotation" and "By Radiologist" for organ identification. This suggests that any ground truth used for training would also be based on manual annotations by medical professionals (Radiologists).
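The human-in-the-loop workflow running through items 2-8 (AI/ML proposals that enter the output only after a radiologist's approval) can be sketched as a simple gate. The class and function names here are hypothetical and are not taken from the submission.

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    structure: str        # e.g. "hepatic artery"
    source: str           # "AI/ML proposal" or "manual"
    approved: bool = False
    approved_by: str = ""

def approve(ann: Annotation, radiologist: str) -> Annotation:
    """Record a radiologist's sign-off on an annotation."""
    ann.approved = True
    ann.approved_by = radiologist
    return ann

def export_case(annotations):
    """Only radiologist-approved annotations are released to the
    requesting physician -- AI/ML proposals alone never leave the system."""
    return [a for a in annotations if a.approved]

case = [Annotation("hepatic artery", "AI/ML proposal"),
        Annotation("bile duct", "AI/ML proposal")]
approve(case[0], "Radiologist (MD)")
released = export_case(case)  # only the approved annotation is released
```

The point of the sketch is the gate in export_case: the AI/ML component facilitates annotation, but nothing reaches the output file without the radiologist's approval, matching the workflow quoted above.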

§ 892.2050 Medical image management and processing system.

(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).
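As a concrete touchpoint for the DICOM special control cited in (b): a DICOM Part 10 file is recognizable by its 128-byte preamble followed by the 4-byte magic string "DICM". This is a property of the DICOM standard itself, not of this device.

```python
def looks_like_dicom(data: bytes) -> bool:
    """Check the DICOM Part 10 signature: a 128-byte preamble
    (content unspecified by the standard) followed by b'DICM'."""
    return len(data) >= 132 and data[128:132] == b"DICM"

print(looks_like_dicom(b"\x00" * 128 + b"DICM"))  # True
print(looks_like_dicom(b"\xff\xd8\xff"))          # False (JPEG magic, too short)
```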