Search Results

Found 2 results

510(k) Data Aggregation

    K Number
    K222458
    Device Name
    AIBOLIT 3D+
    Date Cleared
    2023-01-12

    (150 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    Aibolit 3D+ is intended as a medical imaging system that allows the processing, review, analysis, communication and media interchange of multi-dimensional digital images acquired from CT and MRI imaging devices. Aibolit 3D+ is intended as software for preoperative surgical planning, patient information and as software for the intraoperative display of the multidimensional digital images. Aibolit 3D+ is designed for use by health care professionals and is intended to assist the clinician who is responsible for making all final patient management decisions.

    Device Description

    Aibolit 3D+ is a web-based stand-alone application that can be presented on a computer connected to the internet. Once the enhanced images are created, they can be used by the physician for case review, patient education, professional training and intraoperative reference.

    Aibolit 3D+ is a software-only device, which processes CT and MR images from a patient to create 3-dimensional images that may be manipulated to view the anatomy from virtually any perspective. The software also allows for transparent viewing of anatomical structures and artifacts inside organs, such as ducts, vessels, lesions and entrapped calcifications (stones). Anatomical structures are identified by name and differential coloration to highlight them within the region of interest.

    The software may help to facilitate the surgeon's decision-making during planning, review and conduct of surgical procedures and, hence, may potentially help them to decrease or prevent possible errors caused by the misidentification of anatomical structures and their positional relationship.

    AI/ML Overview

    Here's a summary of the acceptance criteria and the study details for the AIBOLIT 3D+ device, extracted from the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria are generally focused on the validation of the software's ability to segment anatomical structures accurately and generate 3D models with conservation of shape dimensions and volume. The reported performance indicates that the device met these criteria through validation studies.

    | Acceptance Criteria Category | Specific Criteria | Reported Device Performance/Validation |
    |---|---|---|
    | Software Validation | Software functions as intended and meets user needs. | Software verification and validation performed against defined requirements and user needs. |
    | Segmentation Validation | Accurate segmentation of organs/structures. | Segmentation validation of the Customize software performed. R&R study on segmentation of multiple internal organ/structure anatomies performed. AI-based algorithm demonstrated identification of organs/structures based on trained dataset. |
    | 3D Model Generation Accuracy | Accurate generation of 3D models from segmented data. | Accuracy study on 3D model generation for multiple organ structures performed. Validation demonstrated conservation of shape dimensions and volume of structures when compared to a "ground truth" accepted standard. |
    | MRI Validation | Performance maintained when using MRI images. | Expansion of software validation to include MRI validation using multiple organ structures, multiple radiologists, and multiple view perspectives. Conducted per written protocol with pre-determined acceptance criteria. |
    | Conservation of Shape/Volume | 3D models accurately represent original dimensions/volume. | The MRI validation demonstrated "conservation of shape dimensions, volume of the structures in a side-by-side testing comparison with a 'ground truth' accepted standard independent of radiologist, organ structure and view perspective." |
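The summary quotes "conservation of shape dimensions and volume" but never names the metric used. As an illustration only (the metrics and toy data below are hypothetical, not drawn from the submission), segmentation and volume agreement of this kind is commonly quantified with the Dice overlap coefficient and a relative volume difference against the ground-truth mask:

```python
# Hypothetical sketch: common agreement metrics for comparing a generated
# segmentation/3D model against a ground-truth mask. Masks are represented
# here as collections of voxel coordinates.

def dice(pred, truth):
    """Dice overlap between two voxel sets (1.0 = identical)."""
    pred, truth = set(pred), set(truth)
    if not pred and not truth:
        return 1.0
    return 2 * len(pred & truth) / (len(pred) + len(truth))

def relative_volume_diff(pred, truth):
    """Signed volume difference as a fraction of ground-truth volume."""
    return (len(set(pred)) - len(set(truth))) / len(set(truth))

# Toy "masks": ground truth has 100 voxels, prediction recovers 90 of them.
truth = [(x, y, 0) for x in range(10) for y in range(10)]
pred = [(x, y, 0) for x in range(10) for y in range(9)]

print(round(dice(pred, truth), 3))                  # 0.947
print(round(relative_volume_diff(pred, truth), 2))  # -0.1
```

A real validation would compute these per organ structure, per radiologist, and per view perspective, then test each against a pre-specified threshold.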

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Segmentation Training/Evaluation: A dataset of 108 anatomical structures was used for training the AI-based algorithm, obtained from medical images (MRI scans) and their corresponding segmentation. While this is referred to as "trained to identify organs/structures," the subsequent statement about "evaluated from 3 perspectives by 4 radiologists" suggests this dataset may also have served as a test/evaluation set for the AI component. It's not explicitly stated if a separate, distinct test set was used solely for performance evaluation post-training.
    • Data Provenance: Not explicitly stated (e.g., country of origin, retrospective or prospective). The images are MRI scans. The study involved "multiple radiologists and multiple view perspectives," suggesting multi-center or otherwise varied data collection, though specifics are missing.
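The train/test ambiguity noted above matters because evaluating on training cases inflates performance. Standard practice is to hold out evaluation cases before any training begins; a minimal sketch of such a split (hypothetical, the submission does not describe one) looks like:

```python
import random

def split_cases(case_ids, test_fraction=0.2, seed=42):
    """Hold out a test set *before* training so that evaluation cases
    are never seen by the model (the practice the summary cannot confirm)."""
    rng = random.Random(seed)  # fixed seed for a reproducible split
    ids = list(case_ids)
    rng.shuffle(ids)
    n_test = max(1, int(len(ids) * test_fraction))
    return ids[n_test:], ids[:n_test]  # (train, test)

# e.g., the 108 anatomical structures mentioned in the summary
train, test = split_cases(range(108))
print(len(train), len(test))  # 87 21
```

Splitting by patient (rather than by structure) would further guard against leakage when several structures come from the same scan.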

    3. Number of Experts and Qualifications for Ground Truth

    • Number of Experts: 4 radiologists were involved in evaluating the AI-based algorithm's segmentations. For the "final patient management decisions" and the manual annotation process, the text specifies "Radiologist (MD)" and "Radiologist."
    • Qualifications of Experts: All experts are identified as radiologists. No specific years of experience or subspecialty are mentioned.

    4. Adjudication Method for the Test Set

    • The AI-based algorithm's segmentations were "evaluated from 3 perspectives by 4 radiologists." This implies a review process.
    • After the AI system produces additional segmentations, a radiologist reviews them.
    • For image segmentation, the "Radiologist (MD) – Manual annotation is done for all CT and MRI slices with optional use of software as determined by Radiologist and with Radiologist's approval and control." This indicates that the radiologists act as the final decision-makers and can modify annotations.

    The precise adjudication method (e.g., majority vote, or whether disagreements were resolved by a super-reviewer) is not explicitly detailed for the evaluation phase. However, the overall process shows a human-in-the-loop approach where radiologists have final say over segmentations.
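Since the adjudication scheme is not detailed, the following is only one plausible interpretation: a per-voxel majority vote across the readers' segmentations, a common way to combine multiple expert annotations into a single reference. The reader masks below are invented for illustration:

```python
from collections import Counter

def majority_vote(reader_masks, threshold=None):
    """Per-voxel majority vote across reader segmentations.

    reader_masks: list of sets of voxel coordinates, one per reader.
    A voxel is kept if at least `threshold` readers marked it
    (default: strict majority).
    """
    n = len(reader_masks)
    if threshold is None:
        threshold = n // 2 + 1
    counts = Counter(v for mask in reader_masks for v in mask)
    return {v for v, c in counts.items() if c >= threshold}

# Four hypothetical readers (e.g., the 4 radiologists mentioned above)
readers = [
    {(0, 0), (0, 1), (1, 0)},
    {(0, 0), (0, 1)},
    {(0, 0), (1, 1)},
    {(0, 0), (0, 1), (1, 1)},
]
print(sorted(majority_vote(readers)))  # [(0, 0), (0, 1)]
```

Other schemes (e.g., a super-reviewer resolving disagreements, or STAPLE-style probabilistic fusion) are equally consistent with the text.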

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Not explicitly stated. The document describes a validation study focused on the device's accuracy against a ground truth and its substantial equivalence to predicate devices. It mentions that "multiple radiologists and multiple view perspectives" were used in the MRI validation, but it does not describe a comparative effectiveness study measuring the improvement of human readers with AI assistance versus without AI assistance. The device is described as assisting the clinician, but no quantitative measure of this assistance's effect on human reader performance is provided.

    6. Standalone Performance Study (Algorithm Only)

    • Yes, a standalone performance aspect is implied. The AI-based algorithm is described as being "trained to identify organs/structures" and then "produces additional segmentations for review by the radiologist." This indicates that the algorithm itself performs segmentation, which is then subject to human review. The "segmentation validation of the Customize software" and "accuracy study on 3D model generation" would also likely assess the algorithm's performance in isolation before human review. The validation demonstrated "conservation of shape dimensions, volume...independent of radiologist," which points to the intrinsic accuracy of the software's processing.

    7. Type of Ground Truth Used

    • Expert Consensus / Accepted Standard:
      • For the AI training/evaluation, ground truth was "corresponding segmentation" from the MRI scans. This usually implies expert-labeled segmentations.
      • For the MRI validation, "conservation of shape dimensions, volume of the structures in a side-by-side testing comparison with a 'ground truth' accepted standard independent of radiologist, organ structure and view perspective" was used. This suggests an established reference or gold standard for anatomical dimensions and volumes.

    8. Sample Size for the Training Set

    • The AI-based algorithm was trained using a dataset of 108 anatomical structures obtained from MRI scans.

    9. How Ground Truth for the Training Set Was Established

    • The ground truth for the training set (the "corresponding segmentation") was likely established by experts, as the device's workflow involves radiologists making annotations: "After a radiologist establishes contours, the system produces additional segmentations for review by the radiologist." The expert radiologists are central to the process of creating and validating segmented structures, which would form the basis of the ground truth for training.

    K Number
    K211443
    Device Name
    AIBOLIT 3D+
    Date Cleared
    2022-01-07

    (242 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    Intended Use

    Aibolit 3D+ is intended as a medical imaging system that allows the processing, review, analysis, communication and media interchange of multi-dimensional digital images acquired from CT imaging devices. Aibolit 3D+ is intended as software for preoperative surgical planning, training, patient information and as software for the intraoperative display of the multidimensional digital images. Aibolit 3D+ is designed for use by health care professionals and is intended to assist the clinician who is responsible for making all final patient management decisions.

    Device Description

    Aibolit 3D+ is a web-based stand-alone application that can be presented on a computer connected to the internet. Once the enhanced images are created, they can be used by the physician for case review, patient education, professional training and intraoperative reference.

    Aibolit 3D+ is a software-only device, which processes CT images from a patient to create 3-dimensional images that may be manipulated to view the anatomy from virtually any perspective. The software also allows for transparent viewing of anatomical structures and artifacts inside organs, such as ducts, vessels, lesions and entrapped calcifications (stones). Anatomical structures are identified by name and differential coloration to highlight them within the region of interest.

    The software may help to facilitate the surgeon's decision-making during planning, review and conduct of surgical procedures and, hence, may potentially help them to decrease or prevent possible errors caused by the misidentification of anatomical structures and their positional relationship.

    AI/ML Overview

    Here's an analysis of the provided text, noting that the document is a 510(k) summary focused on substantial equivalence rather than a detailed performance study report. Some of the requested information (e.g., specific performance metrics against acceptance criteria, detailed ground truth establishment for a test set, MRMC study results) is therefore not explicitly present.

    Device Name: Aibolit 3D+
    Manufacturer: Aibolit Technologies, LLC
    K Number: K211443
    Predicate Device: Ceevra Reveal 2.0 Image Processing System [510(k) K173274]


    Acceptance Criteria and Reported Device Performance

    The provided 510(k) summary focuses on demonstrating substantial equivalence to a predicate device, rather than presenting a performance study against specific, quantitative acceptance criteria. Therefore, there isn't a direct "table of acceptance criteria and reported device performance" in the traditional sense of metrics like sensitivity, specificity, accuracy, or volume measurement error for a standalone deep learning algorithm.

    Instead, the "acceptance criteria" for a 510(k) are generally met by demonstrating that the new device is as safe and effective as a legally marketed predicate device. This is achieved by comparing their Indications for Use, technological characteristics, and performance. The document implies that the device "meets" its "criteria" by being substantially equivalent to the Ceevra Reveal 2.0 system.

    The "Performance Testing" section lists various documentation submitted (Hardware Requirements, Software Description, etc.), but does not specify quantitative performance metrics or the results of comparative studies with the predicate for individual features. The "Conclusion" explicitly states: "AIBOLIT 3D+ is substantially equivalent to the previously cleared Ceevra Reveal 2.0 Image Processing System with respect to intended use, principle of operation, general technological characteristics and performance."

    Therefore, for the requested table, we can infer the implicit acceptance criteria based on the comparison to the predicate. The "reported device performance" is essentially the claim of "substantial equivalence" across these aspects.

    | Acceptance Criterion (Implicitly Based on Predicate Comparison) | Reported Device Performance (Claimed) |
    |---|---|
    | Intended Use: processing, review, analysis, communication, and media interchange of multi-dimensional digital images from CT devices; preoperative surgical planning, training, patient information, intraoperative display. | Substantially equivalent (same Indications for Use) |
    | Principle of Operation: software-based capture and enhancement of DICOM images; conversion to manipulable 2D/3D anatomical-structure images. | Substantially equivalent (same mechanism of action) |
    | Technological Characteristics — Input: DICOM images from CT | Substantially equivalent (DICOM from CT) |
    | Technological Characteristics — Functions: generation of 2D/3D images, organ segmentation/identification, dimensional/volume references, multi-axis rotation, organ transparency | Substantially equivalent (includes all predicate functions, plus "organ retraction animation") |
    | Technological Characteristics — Image output: high-definition digital images | Substantially equivalent (up to 4K, vs. predicate's "high-definition") |
    | Technological Characteristics — Security: data coded and HIPAA compliant | Substantially equivalent (data coded and HIPAA compliant) |
    | Performance (General): safe and effective for intended use | Substantially equivalent (claimed based on the full submission) |
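Among the functions compared above, the "dimensional/volume references" are the most directly computable: a segmented structure's physical volume follows from the voxel count and the DICOM geometry tags (PixelSpacing and SliceThickness). The sketch below illustrates that arithmetic only; the spacing values and voxel count are made up for the example:

```python
def voxel_volume_mm3(pixel_spacing, slice_thickness):
    """Physical volume of one voxel from DICOM geometry:
    PixelSpacing (row, col, in mm) times SliceThickness (mm)."""
    row_mm, col_mm = pixel_spacing
    return row_mm * col_mm * slice_thickness

def structure_volume_ml(n_voxels, pixel_spacing, slice_thickness):
    """Segmented-structure volume in millilitres (1 mL = 1000 mm^3)."""
    return n_voxels * voxel_volume_mm3(pixel_spacing, slice_thickness) / 1000.0

# Example: 0.7 x 0.7 mm pixels, 1.0 mm slices, 250,000 segmented voxels
print(round(structure_volume_ml(250000, (0.7, 0.7), 1.0), 1))  # 122.5
```

Volume accuracy claims in such devices ultimately reduce to segmentation accuracy plus this fixed geometric conversion.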

    Study Details (Based on available information)

    Given this is a 510(k) for substantial equivalence, the "study" described is a comparison to a predicate, not necessarily a de novo clinical trial with a traditional test set and ground truth.

    1. Sample Size used for the Test Set and Data Provenance:

      • The document does not explicitly state a quantitative "test set" sample size in terms of number of cases or scans used for validation testing of the algorithm's performance (e.g., for segmentation accuracy or other quantitative metrics).
      • It mentions "Performance Testing" by listing documentation submitted, such as "Software Validation Report" and "Usability Evaluation." These documents would contain details about the types and number of cases used, but this information is not directly provided in the summary.
      • Data Provenance: Not specified in the provided text (e.g., country of origin, retrospective/prospective).
    2. Number of Experts used to establish the Ground Truth for the Test Set and Qualifications of those Experts:

      • The document states, for the AIBOLIT 3D+ workflow, under "Image Segmentation": "By Radiologist (MD) – Manual annotation is done for all CT slices with optional use of AI/ML algorithms as determined by Radiologist and with Radiologist's approval."
      • Under "Organ identification": "By Radiologist."
      • This indicates that Radiologists are involved in the segmentation and identification process for the device's operation, and implicitly for any internal validation or ground truth generation.
      • The number of radiologists/experts and their specific qualifications (e.g., years of experience) used for establishing a test set ground truth are not explicitly stated in this 510(k) summary. It only indicates that "Radiologist (MD)" performs these tasks.
    3. Adjudication Method (e.g., 2+1, 3+1, none) for the Test Set:

      • The document describes a user interface and system workflow where a "Radiologist reviews images generated by imaging technician and returns output file to requesting physician."
      • However, a specific "adjudication method" involving multiple experts resolving discrepancies for a test set ground truth is not described in the provided text. The workflow suggests a single Radiologist's approval of the imaging technician's work, but not a consensus process for validation per se.
    4. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study (effect size of human-reader improvement with vs. without AI assistance):

      • No MRMC comparative effectiveness study is described in this 510(k) summary. The device's substantial equivalence is based on its similarity to the predicate device in functionality and intended use. The device is intended to "assist the clinician who is responsible for making all final patient management decisions," but no data on human reader improvement with the AI assistance is provided.
    5. Standalone Performance (algorithm only, without human-in-the-loop):

      • The device workflow clearly involves a human-in-the-loop (Radiologist) for "Manual annotation" and "approval" even with the optional use of AI/ML algorithms. The AI/ML is stated to "facilitate annotation," suggesting it is a tool for the radiologist, not a fully standalone diagnostic or analytical algorithm.
      • The summary does not explicitly describe a standalone performance study of the algorithm component without human interaction.
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

      • The ground truth for the operation of the device itself (i.e., for segmentation and organ identification within the Aibolit 3D+ workflow) is stated to be established "By Radiologist (MD) – Manual annotation." This implies expert (Radiologist) annotation/consensus is the basis for the data processed by the system.
      • For any underlying software validation/training data:
        • For segmentation, it's explicitly stated to be "By Radiologist (MD) – Manual annotation."
        • For organ identification, "By Radiologist."
      • It is not stated if this "ground truth" was verified by pathology or outcomes data.
    7. The Sample Size for the Training Set:

      • The document does not specify the sample size for any training set. It mentions the "optional use of AI/ML algorithms" to "facilitate annotation," suggesting that AI/ML components exist, which would imply a training phase. However, no details on the training data size are provided.
    8. How the Ground Truth for the Training Set was Established:

      • While a specific "training set" is not detailed, the general method for annotation and identification within the Aibolit 3D+ workflow is described as being performed "By Radiologist (MD) – Manual annotation" and "By Radiologist" for organ identification. This suggests that any ground truth used for training would also be based on manual annotations by medical professionals (Radiologists).