Search Results

Found 2 results

510(k) Data Aggregation

    K Number
    K211443
    Device Name
    AIBOLIT 3D+
    Date Cleared
    2022-01-07

    (242 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices:

    K182643

    Intended Use

    Aibolit 3D+ is intended as a medical imaging system that allows the processing, review, analysis, communication and media interchange of multi-dimensional digital images acquired from CT imaging devices. Aibolit 3D+ is intended as software for preoperative surgical planning, training, patient information and as software for the intraoperative display of the multidimensional digital images. Aibolit 3D+ is designed for use by health care professionals and is intended to assist the clinician who is responsible for making all final patient management decisions.

    Device Description

    Aibolit 3D+ is a web-based stand-alone application that can be presented on a computer connected to the internet. Once the enhanced images are created, they can be used by the physician for case review, patient education, professional training and intraoperative reference.

    Aibolit 3D+ is a software-only device that processes CT images from a patient to create 3-dimensional images that may be manipulated to view the anatomy from virtually any perspective. The software also allows for transparent viewing of anatomical structures and artifacts inside organs, such as ducts, vessels, lesions, and entrapped calcifications (stones). Anatomical structures are identified by name and differential coloration to highlight them within the region of interest.

    The software may help to facilitate the surgeon's decision-making during planning, review and conduct of surgical procedures and, hence, may potentially help them to decrease or prevent possible errors caused by the misidentification of anatomical structures and their positional relationship.
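    The summary does not disclose how this CT-to-3D processing is implemented. Purely as an illustration of what such a step typically involves (not Aibolit 3D+ code), the following Python sketch uses pydicom and numpy to load a CT DICOM series, convert it to Hounsfield units, and stack it into a 3D volume; the directory path and HU threshold are hypothetical.

```python
# Illustrative sketch only -- not Aibolit 3D+ code.
# Loads a CT DICOM series, converts stored pixel values to Hounsfield units
# (HU), and stacks the slices into a 3D volume that a renderer or
# segmentation step could consume.
import glob
import numpy as np
import pydicom

def load_ct_volume(dicom_dir):
    # Read every slice in the series and order them along the z-axis.
    slices = [pydicom.dcmread(p) for p in glob.glob(f"{dicom_dir}/*.dcm")]
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))

    # Apply the DICOM rescale slope/intercept to obtain HU, then stack
    # the 2D slices into a (z, y, x) array.
    return np.stack([
        s.pixel_array.astype(np.float32) * float(s.RescaleSlope)
        + float(s.RescaleIntercept)
        for s in slices
    ])

if __name__ == "__main__":
    volume = load_ct_volume("/path/to/ct_series")  # hypothetical path
    dense = volume > 300                           # crude HU threshold, illustrative only
    print(volume.shape, int(dense.sum()), "voxels above 300 HU")
```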

    AI/ML Overview

    This document is a 510(k) summary focused on demonstrating substantial equivalence rather than a detailed performance study report. As a result, some of the information addressed below (e.g., specific performance metrics against acceptance criteria, detailed ground truth establishment for a test set, MRMC study results) is not explicitly present in the summary.

    Device Name: Aibolit 3D+
    Manufacturer: Aibolit Technologies, LLC
    K Number: K211443
    Predicate Device: Ceevra Reveal 2.0 Image Processing System [510(k) K173274]


    Acceptance Criteria and Reported Device Performance

    The provided 510(k) summary focuses on demonstrating substantial equivalence to a predicate device, rather than presenting a performance study against specific, quantitative acceptance criteria. Therefore, there isn't a direct "table of acceptance criteria and reported device performance" in the traditional sense of metrics like sensitivity, specificity, accuracy, or volume measurement error for a standalone deep learning algorithm.

    Instead, the "acceptance criteria" for a 510(k) are generally met by demonstrating that the new device is as safe and effective as a legally marketed predicate device. This is achieved by comparing their Indications for Use, technological characteristics, and performance. The document implies that the device "meets" its "criteria" by being substantially equivalent to the Ceevra Reveal 2.0 system.

    The "Performance Testing" section lists various documentation submitted (Hardware Requirements, Software Description, etc.), but does not specify quantitative performance metrics or the results of comparative studies with the predicate for individual features. The "Conclusion" explicitly states: "AIBOLIT 3D+ is substantially equivalent to the previously cleared Ceevra Reveal 2.0 Image Processing System with respect to intended use, principle of operation, general technological characteristics and performance."

    Therefore, for the requested table, we can infer the implicit acceptance criteria based on the comparison to the predicate. The "reported device performance" is essentially the claim of "substantial equivalence" across these aspects.

    Acceptance Criterion (Implicitly Based on Predicate Comparison) → Reported Device Performance (Claimed)

    • Intended Use: processing, review, analysis, communication, and media interchange of multi-dimensional digital images from CT devices; preoperative surgical planning, training, patient information, intraoperative display → Substantially equivalent (same Indications for Use)
    • Principle of Operation: software-based capture and enhancement of DICOM images; conversion to manipulable 2D/3D images of anatomical structures → Substantially equivalent (same mechanism of action)
    • Technological Characteristics:
      - Input: DICOM images from CT → Substantially equivalent (DICOM from CT)
      - Functions: generation of 2D/3D images, organ segmentation/identification, dimensional/volume references, multi-axis rotation, organ transparency → Substantially equivalent (includes all predicate functions, plus "organ retraction animation")
      - Image output: high-definition digital images → Substantially equivalent (up to 4K, vs. the predicate's "high-definition")
      - Security: data coded and HIPAA compliant → Substantially equivalent (data coded and HIPAA compliant)
    • Performance (general): safe and effective for intended use → Substantially equivalent (claimed based on the full submission)

    Study Details (Based on available information)

    Given this is a 510(k) for substantial equivalence, the "study" described is a comparison to a predicate, not necessarily a de novo clinical trial with a traditional test set and ground truth.

    1. Sample Size used for the Test Set and Data Provenance:

      • The document does not explicitly state a quantitative "test set" sample size in terms of number of cases or scans used for validation testing of the algorithm's performance (e.g., for segmentation accuracy or other quantitative metrics).
      • It mentions "Performance Testing" by listing documentation submitted, such as "Software Validation Report" and "Usability Evaluation." These documents would contain details about the types and number of cases used, but this information is not directly provided in the summary.
      • Data Provenance: Not specified in the provided text (e.g., country of origin, retrospective/prospective).
    2. Number of Experts used to establish the Ground Truth for the Test Set and Qualifications of those Experts:

      • The document states, for the AIBOLIT 3D+ workflow, under "Image Segmentation": "By Radiologist (MD) – Manual annotation is done for all CT slices with optional use of AI/ML algorithms as determined by Radiologist and with Radiologist's approval."
      • Under "Organ identification": "By Radiologist."
      • This indicates that Radiologists are involved in the segmentation and identification process for the device's operation, and implicitly for any internal validation or ground truth generation.
      • The number of radiologists/experts and their specific qualifications (e.g., years of experience) used for establishing a test set ground truth are not explicitly stated in this 510(k) summary. It only indicates that "Radiologist (MD)" performs these tasks.
    3. Adjudication Method (e.g., 2+1, 3+1, none) for the Test Set:

      • The document describes a user interface and system workflow where a "Radiologist reviews images generated by imaging technician and returns output file to requesting physician."
      • However, a specific "adjudication method" involving multiple experts resolving discrepancies for a test set ground truth is not described in the provided text. The workflow suggests a single Radiologist's approval of the imaging technician's work, but not a consensus process for validation per se.
    4. Whether a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance:

      • No MRMC comparative effectiveness study is described in this 510(k) summary. The device's substantial equivalence is based on its similarity to the predicate device in functionality and intended use. The device is intended to "assist the clinician who is responsible for making all final patient management decisions," but no data on human reader improvement with the AI assistance is provided.
    5. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:

      • The device workflow clearly involves a human-in-the-loop (Radiologist) for "Manual annotation" and "approval" even with the optional use of AI/ML algorithms. The AI/ML is stated to "facilitate annotation," suggesting it is a tool for the radiologist, not a fully standalone diagnostic or analytical algorithm.
      • The summary does not explicitly describe a standalone performance study of the algorithm component without human interaction.
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

      • The ground truth for the operation of the device itself (i.e., for segmentation and organ identification within the Aibolit 3D+ workflow) is stated to be established "By Radiologist (MD) – Manual annotation." This implies expert (Radiologist) annotation/consensus is the basis for the data processed by the system.
      • For any underlying software validation/training data:
        • For segmentation, it's explicitly stated to be "By Radiologist (MD) – Manual annotation."
        • For organ identification, "By Radiologist."
      • It is not stated if this "ground truth" was verified by pathology or outcomes data.
    7. The Sample Size for the Training Set:

      • The document does not specify the sample size for any training set. It mentions the "optional use of AI/ML algorithms" to "facilitate annotation," suggesting that AI/ML components exist, which would imply a training phase. However, no details on the training data size are provided.
    8. How the Ground Truth for the Training Set was Established:

      • While a specific "training set" is not detailed, the general method for annotation and identification within the Aibolit 3D+ workflow is described as being performed "By Radiologist (MD) – Manual annotation" and "By Radiologist" for organ identification. This suggests that any ground truth used for training would also be based on manual annotations by medical professionals (Radiologists).

    K Number
    K202370
    Date Cleared
    2020-11-16

    (89 days)

    Product Code
    Regulation Number
    874.4680
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices:

    K182643

    Intended Use

    The Ion™ Endoluminal System (Model IF1000) assists the user in navigating a catheter and endoscopic tools in the pulmonary tract using endoscopic visualization of the tracheobronchial tree for diagnostic and therapeutic procedures. The Ion™ Endoluminal System enables fiducial marker placement. It does not make a diagnosis and is not for pediatric use.

    The Flexision™ Biopsy Needle is used with the Ion™ Endoluminal System to biopsy tissue from a target area in the lung.

    The PlanPoint™ Software uses patient CT scans to create a 3D plan of the lung and navigation pathways for use with the Ion™ Endoluminal System.

    Device Description

    The Ion™ Endoluminal System, Model IF1000, is a software-controlled, electromechanical system designed to assist qualified physicians to navigate a catheter and endoscopic tools in the pulmonary tract using endoscopic visualization of the tracheobronchial tree for diagnostic and therapeutic procedures. It consists of a Planning Laptop with PlanPoint™ Software, a System Cart with System Software, a Controller, Instruments, and Accessories. The IF1000 Instruments include the Ion™ Fully Articulating Catheter, the Ion™ Peripheral Vision Probe, and the Flexision™ Biopsy Needles.

    The Planning Laptop is a separate computer from the System Cart and Controller. A 3D airway model is generated from the patient's chest CT scan using the PlanPoint™ Software.
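    The summary does not describe PlanPoint's segmentation method. As a rough, hypothetical illustration of one way an airway model can be extracted from a chest CT, the sketch below applies a naive air threshold plus connected-component selection in Python (scipy); the seed coordinate and HU threshold are assumptions, not the product's algorithm.

```python
# Illustrative sketch only -- not PlanPoint(TM) code.
# Naive airway segmentation: threshold air-filled voxels in a Hounsfield-unit
# CT volume, then keep the connected component containing a seed point
# placed inside the trachea.
import numpy as np
from scipy import ndimage

def segment_airway(volume_hu, seed_zyx, air_threshold=-400):
    # Air is strongly negative in Hounsfield units.
    air_mask = volume_hu < air_threshold
    # Label connected air regions; the trachea and bronchi form a component
    # reachable from the seed, distinct from the air surrounding the patient.
    labels, _ = ndimage.label(air_mask)
    return labels == labels[tuple(seed_zyx)]

# Example with a hypothetical seed placed in the trachea:
# airway_mask = segment_airway(volume_hu, seed_zyx=(10, 256, 256))
```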

    The System Cart contains the Instrument Arm, electronics for the slave portion of the servomechanism, and two monitors. The System Cart allows the user to navigate the Catheter Instrument with the Controller, which represents the master-slave relationship. For optimal viewing, the physician can position the monitors along both the vertical and horizontal axes.

    The Controller is the user input device on the Ion™ Endoluminal System. It provides the controls to command insertion, retraction, and articulation of the Catheter. The Controller also has buttons to operate the Catheter control states.

    The Ion™ Endoluminal System enables automatic device logs retrieval via the network and connects to a hospital networked Picture Archiving and Communication System (PACS).
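    For context only, the sketch below shows what a generic DICOM query against a hospital PACS looks like using the open-source pynetdicom library; it is not the Ion system's implementation, and the host, port, AE titles, and patient ID are hypothetical.

```python
# Illustrative sketch only -- a generic DICOM C-FIND (study query) against a
# PACS, not the Ion(TM) Endoluminal System's implementation.
from pydicom.dataset import Dataset
from pynetdicom import AE
from pynetdicom.sop_class import StudyRootQueryRetrieveInformationModelFind

ae = AE(ae_title="PLANNING_SCU")  # hypothetical calling AE title
ae.add_requested_context(StudyRootQueryRetrieveInformationModelFind)

# Build the query identifier: CT studies for one (hypothetical) patient.
query = Dataset()
query.QueryRetrieveLevel = "STUDY"
query.PatientID = "12345"
query.ModalitiesInStudy = "CT"
query.StudyInstanceUID = ""  # empty = ask the PACS to return this attribute

assoc = ae.associate("pacs.hospital.example", 104, ae_title="PACS")  # hypothetical host/port
if assoc.is_established:
    responses = assoc.send_c_find(query, StudyRootQueryRetrieveInformationModelFind)
    for status, identifier in responses:
        # 0xFF00/0xFF01 are DICOM "pending" statuses carrying a match.
        if status and status.Status in (0xFF00, 0xFF01) and identifier:
            print(identifier.StudyInstanceUID)
    assoc.release()
```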

    AI/ML Overview

    The provided document describes a 510(k) submission for the Ion™ Endoluminal System, Model IF1000, which has undergone modifications from its predicate device (K182188). The key changes relate to networking capabilities for automatic log retrieval and accessing CT scans directly from hospital networked PACS.

    Here's an analysis of the acceptance criteria and supporting studies based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state quantitative acceptance criteria or assign a specific "device performance" metric for the modified features in a structured table. Instead, it relies on demonstrating compliance with standards and successful completion of verification and validation activities. The primary acceptance criteria for the modifications (network communication and PACS access) are implied to be functional correctness, reliability, and security, as demonstrated through various tests.

    Acceptance Criterion (Implied) → Reported Device Performance

    • Functional correctness of network communication and PACS access: "Software for the Ion™ Endoluminal System underwent verification and validation testing. Results demonstrate the System meets design specifications and user needs." "The System Cart and Planning Laptop were subjected to bench testing and results confirm the design outputs for the System Cart and the Planning Laptop meet the design input requirements." "The device modifications made to enable the network communication and direct patient CT scan images download via PACS server on the hospital network have been evaluated and do not raise different questions of safety or effectiveness." "The performance testing data confirmed that the device performs as intended to its specifications and meets its intended use."
    • Electromagnetic compatibility (EMC): "The subject device (Ion Endoluminal System) was tested for compliance with the new 4th edition version of the standard, IEC 60601-1-2: 2014." "The 4th edition EMC testing of the subject device also verified hardware modifications made to the subject device."
    • Electrical safety: "The electrical safety testing performed for compliance with IEC 60601-1 and IEC 60601-2-18 on predicate device (K182188) remains valid for the subject device because the scope of changes for this submission do not impact the electrical safety of the Ion Endoluminal System."
    • Cybersecurity: "The Ion™ Endoluminal System was subjected to Cybersecurity verification and validation testing. Cybersecurity was evaluated per FDA's Draft Guidance 'Content of Premarket Submissions for Management of Cybersecurity in Medical Devices' (October 18, 2018). ... The cybersecurity verification and validation test results demonstrate the adequacy of the implemented cybersecurity controls."
    • System performance (under simulated use conditions, animal testing): "For system design validation, animal testing was performed under simulated use conditions to assess the system performance. Test results demonstrated the Ion™ Endoluminal System performs according to its intended use."
    • Usability: "Changes made to the subject device do not affect previously identified critical tasks, and no new critical tasks were identified. Therefore, data collected during the previous summative usability study for the predicate device... remains valid, and no additional testing was required."

    2. Sample Size Used for the Test Set and Data Provenance

    • Software Verification and Validation, Bench Testing, Cybersecurity Testing: The document does not specify a numerical sample size for these tests. The nature of these tests (e.g., unit tests, integration tests, system tests, penetration tests for cybersecurity) typically involves comprehensive coverage rather than a statistical sample size of cases in the way that clinical studies do.
    • Animal Testing: The document mentions "animal testing was performed," but does not specify the number of animals or the type of animal used. The provenance is implied to be experimental (simulated use conditions).
    • The document does not mention the use of retrospective or prospective data sets from human patients for the evaluation of the new features. The evaluation focuses on technical performance and safety of the modified system.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    The document does not describe the use of experts to establish "ground truth" for the test sets in the context of diagnostic accuracy, as the modifications are related to system functionality, networking, and safety rather than a diagnostic algorithm. The ground truth for functional tests would be the expected system behavior, and for cybersecurity, it would be adherence to security best practices and robustness against identified threats.

    4. Adjudication Method for the Test Set

    Not applicable, as no human-expert-based ground truth establishment or adjudication is described for the technical tests performed for this submission.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No MRMC comparative effectiveness study was mentioned in this document. The submission focuses on demonstrating substantial equivalence of the modified system, not on comparing reader performance with and without AI assistance.

    6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study

    While the PlanPoint™ Software uses CT scans to create 3D plans and navigation pathways, this document primarily discusses the system's ability to access these CT scans via PACS and automatic log retrieval. It does not provide details about a standalone performance study of the PlanPoint™ Software's algorithmic accuracy in path planning or 3D reconstruction. The animal testing assessed "system performance" under simulated use, which would include the integrated functionality, but not an isolated algorithm-only performance metric.

    7. Type of Ground Truth Used

    • For Software Verification and Validation, Bench Testing: The ground truth is implicit in the design specifications and user needs. The tests verified that the system's outputs matched the expected behavior defined by these specifications.
    • For Cybersecurity Testing: Ground truth is defined by established cybersecurity standards, best practices, and the identified vulnerability landscape.
    • For Animal Testing: The ground truth is the successful demonstration of the "Ion™ Endoluminal System performs according to its intended use" in a simulated environment.
    • No pathology, clinical outcomes data, or expert consensus (in a diagnostic sense) ground truth is described as being used for the evaluation of the modifications in this submission.

    8. Sample Size for the Training Set

    The document does not describe any machine learning or AI models being "trained" as part of the modifications being evaluated. Therefore, there is no mention of a training set sample size. The PlanPoint™ Software uses patient CT scans for planning, but it's not described as a learning algorithm that requires a "training set" in the conventional AI/ML sense.

    9. How the Ground Truth for the Training Set Was Established

    Not applicable, as no training set for a machine learning model is described in this document.

