K Number
K233572
Device Name
Autofuse
Manufacturer
Date Cleared
2024-03-06 (121 days)

Product Code
Regulation Number
892.2050
Panel
RA
Reference & Predicate Devices
Predicate For
N/A
Intended Use

Autofuse is a software package that provides physicians a means for comparison of medical data including imaging data that is DICOM compliant. It allows the display, annotation, volume operation, volume rendering, registration and fusion of medical images as an aid during use by diagnostic radiology, oncology, radiation therapy planning, and other medical specialties. Autofuse is not intended for mammography.

Device Description

Autofuse is a software application providing relevant tools for radiotherapy professionals to use while creating treatment plans.

The Autofuse device is a Picture Archiving and Communication System (medical imaging software). Autofuse provides medical image processing designed to facilitate the oncology or other clinical specialty workflow by allowing the comparison of medical imaging data from different modalities, points in time, and/or scanning protocols. The product provides users with the means to display, co-register and analyze medical images from multiple modalities including PET, CT, RT Dose and MR; draw Regions of Interest (ROI); and import/export results to/from commercially available radiation treatment planning systems and PACS devices. Co-registration includes deformable registration technology that can be applied to DICOM data including diagnostic and planning image volumes, structures, and dose.

Autofuse is used as a stand-alone application on recommended Off-The-Shelf (OTS) computers supplied by the company or by the end-user.
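
As context for the registration and fusion capabilities described above, the following is a minimal sketch of a multi-modality rigid-then-deformable registration pipeline using the open-source SimpleITK library. It is illustrative only and is not Autofuse's implementation; the file names and parameter values are placeholder assumptions.

```python
import SimpleITK as sitk

# Load a planning CT (fixed) and a diagnostic MR (moving); paths are placeholders.
fixed = sitk.Cast(sitk.ReadImage("planning_ct.nii.gz"), sitk.sitkFloat32)
moving = sitk.Cast(sitk.ReadImage("diagnostic_mr.nii.gz"), sitk.sitkFloat32)

# Step 1: rigid alignment with a mutual-information metric (suited to multi-modality data).
rigid_init = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)
reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
reg.SetOptimizerScalesFromPhysicalShift()
reg.SetInterpolator(sitk.sitkLinear)
reg.SetInitialTransform(rigid_init, inPlace=False)
rigid = reg.Execute(fixed, moving)

# Step 2: deformable refinement with a B-spline transform on the rigidly aligned image.
moving_rigid = sitk.Resample(moving, fixed, rigid, sitk.sitkLinear, 0.0)
bspline = sitk.BSplineTransformInitializer(fixed, transformDomainMeshSize=[8, 8, 8])
reg2 = sitk.ImageRegistrationMethod()
reg2.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg2.SetOptimizerAsLBFGSB(gradientConvergenceTolerance=1e-5, numberOfIterations=100)
reg2.SetInterpolator(sitk.sitkLinear)
reg2.SetInitialTransform(bspline, inPlace=True)
deformable = reg2.Execute(fixed, moving_rigid)

# The resulting transforms can be applied to resample associated volumes
# (e.g., a dose grid or structure masks) into the fixed image's frame of reference.
fused_mr = sitk.Resample(moving_rigid, fixed, deformable, sitk.sitkLinear, 0.0)
```

The rigid step handles gross alignment between modalities; the B-spline step then models local anatomical deformation, and the resulting transforms can be reused to bring associated objects (structures, dose grids) into the same frame of reference, which is the general pattern the device description refers to.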

AI/ML Overview

Here's a breakdown of the acceptance criteria and study information for the "Autofuse" device, based on the provided FDA 510(k) submission:

Acceptance Criteria and Device Performance

The submission primarily demonstrates substantial equivalence to a predicate device ("Velocity AIS") by comparing technological characteristics and asserting similar or better performance based on non-clinical testing. It does not provide a table of quantitative acceptance criteria with corresponding measured performance for Autofuse itself. Instead, the acceptance criteria are implied by the features and performance of the predicate, and the reported device performance is a general statement of conformance based on software verification and validation.

Implied Acceptance Criteria and Reported Device Performance (Derived from Comparison to Predicate):

| Feature/Characteristic (Implied Acceptance Criteria from Predicate) | Autofuse Performance (Reported) |
| --- | --- |
| **General Functionality** | |
| Display, annotation, volume operation, volume rendering, registration, fusion of medical images as an aid during diagnostic radiology, oncology, radiation therapy planning, and other medical specialties. (Not for mammography) | Performs all these functions. |
| DICOM compliant data handling | DICOM compliant. |
| Image Study Importation (CT/MR/PET/Dose/SPECT) | Imports CT/MR/PET and DICOM-RT Dose. (Note: Autofuse sees Dose as an object associated with a scan, not an imaging modality itself, but this is deemed not to raise new safety/efficacy questions.) The predicate supports SPECT, which Autofuse does not explicitly list for import, but the overall import capability is "Similar" and the difference is deemed not to raise new safety/efficacy questions. |
| Image Structure Import, Save & Export to Treatment Planning Systems | Yes |
| Volume Rendering | Yes |
| Advanced Visualization and Navigation Features | Yes |
| Volume Operations | No (same as predicate) |
| Diagnostic Image Registration | Yes |
| Image Fusion (Anatomical and Functional) | Yes |
| ROI & Contouring | Yes |
| Manual Contouring Tools | Yes |
| Image Analysis | Yes |
| Plan Review of Imported Plans or Created Dose Composites | No (same as predicate) |
| Oncology Workflow Automation | No (same as predicate) |
| Image/ROI Export to DICOM RT | Yes |
| User Scaling of Image Volumes | No (same as predicate) |
| Biological Effective Dose (BED) Scaling | No (same as predicate) |
| Y-90 Microsphere Dosimetry | No (same as predicate) |
| Navigator Semi-Automated Workflows | No (image registration is fully automated with no user-facing workflow; same as predicate) |
| Adaptive Navigators to Assist in Offline Dose Review | No (same as predicate) |
| **Automated Image-Based Registration** | |
| Manual Registration Editing | No (predicate has it). Autofuse does not allow manual registration editing, citing AAPM TG-132; this difference is deemed not to raise new questions of safety or efficacy. |
| Auto Rigid Registration | Yes |
| Deformable Registration | Yes |
| Inverse Deformable Registration | No (same as predicate) |
| Structure Guided Deformable Registration | No (same as predicate) |
| **Segmentation** | |
| Atlas Auto-Segmentation | No (predicate has it). Autofuse does not contain or use atlases for auto-segmentation; this difference is deemed not to raise new questions of safety or efficacy. |
| **Image Analysis with Volumetric Graphs** | |
| Histograms and Voxel Assessment Graphs | Yes |
| DVH Statistics Display | Yes (see the sketch following this table) |
| **Security/Platform** | |
| Secure Login and Data Storage | No (predicate has application-level login). Autofuse runs on a secure workstation, logging OS-level information, with storage and authentication handled by the OS; this difference is deemed not to raise new questions of safety or efficacy. |
| Logging of Database Activity | No (predicate logs database activity). Autofuse's database activity is timestamped, tagged by user, and logged for all transactions; this difference is deemed not to raise new questions of safety or efficacy. (Note: the Autofuse entry says "No" while the remark describes logging functionality, an apparent discrepancy between the Yes/No column and the descriptive remark.) |
| Operating System Platform | Ubuntu 22.04 (predicate: Microsoft Windows, Mac OS X). A different (Linux-based) platform, deemed not to raise new questions of safety or efficacy. |
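
As a point of reference for the "Histograms and Voxel Assessment Graphs" and "DVH Statistics Display" rows above, the sketch below shows what a basic cumulative dose-volume histogram (DVH) computation involves, using NumPy over a dose grid and a binary structure mask. This is a generic, assumption-laden illustration (the synthetic dose values, mask geometry, and statistic choices are hypothetical), not Autofuse's algorithm.

```python
import numpy as np

def cumulative_dvh(dose, mask, bin_width=0.1):
    """Cumulative DVH: fraction of the structure receiving at least each dose level.

    dose: 3-D array of dose values in Gy (e.g., a resampled RT Dose grid).
    mask: boolean array of the same shape, True inside the structure (ROI).
    """
    roi_dose = dose[mask]
    edges = np.arange(0.0, roi_dose.max() + bin_width, bin_width)
    # Fraction of ROI voxels with dose >= each bin edge.
    volume_fraction = np.array([(roi_dose >= d).mean() for d in edges])
    return edges, volume_fraction

def dvh_statistics(dose, mask):
    """Common DVH summary statistics for one structure."""
    roi_dose = dose[mask]
    return {
        "Dmin_Gy": float(roi_dose.min()),
        "Dmax_Gy": float(roi_dose.max()),
        "Dmean_Gy": float(roi_dose.mean()),
        # D95: minimum dose received by the best-covered 95% of the volume,
        # i.e. the 5th percentile of the ROI dose distribution.
        "D95_Gy": float(np.percentile(roi_dose, 5)),
        # V20: fraction of the structure receiving at least 20 Gy.
        "V20_fraction": float((roi_dose >= 20.0).mean()),
    }

# Hypothetical example: a synthetic dose grid and a spherical ROI mask.
rng = np.random.default_rng(0)
dose = rng.gamma(shape=8.0, scale=5.0, size=(64, 64, 64))  # placeholder dose in Gy
zz, yy, xx = np.ogrid[:64, :64, :64]
mask = (zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 <= 15 ** 2
edges, vol = cumulative_dvh(dose, mask)
print(dvh_statistics(dose, mask))
```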

Study Information:

  1. Sample sizes used for the test set and the data provenance:

    • The submission does not specify a distinct "test set" sample size in terms of number of cases or patients.
    • The non-clinical validation testing was performed in a "Staging Environment (SE), which consists of a network of virtual machines (VMs) within a Kernel Virtual Machine (KVM) hypervisor." This environment was "designed to mimic a typical clinical set up" and included a departmental CT scanner and a radiology PACS (see the illustrative receive-node sketch after this list).
    • No information is provided regarding the country of origin of the data used for testing, or whether it was retrospective or prospective. It is implied to be simulated or example data suitable for verification and validation.
  2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • The submission does not mention the use of experts to establish ground truth for a test set. The validation is focused on software verification against specifications and intended use in a simulated environment, rather than clinical performance against expert-defined ground truth.
  3. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

    • No information on adjudication methods is provided, as the study described is software verification and validation, not a reader-based clinical study.
  4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI assistance versus without:

    • No MRMC comparative effectiveness study was done or reported in this submission.
  5. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done:

    • Yes, the primary performance data presented is "Non-clinical performance Data" based on "Software Verification and Validation Testing." This represents the standalone performance of the algorithm against its documented specifications and user needs in a simulated environment. The device is described as "a stand-alone application."
  6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • For the non-clinical performance data, the "ground truth" is effectively the device specifications, user needs, and applicable requirements and standards against which the software was tested. It is not clinical ground truth like pathology or expert consensus on patient data.
  7. The sample size for the training set:

    • The submission does not explicitly describe a "training set" or "training process" for the Autofuse software in the context of machine learning, as it is characterized as medical image management and processing software with deformable registration technology, rather than a deep learning-based diagnostic algorithm requiring extensive training data. Therefore, no sample size for a training set is provided.
  8. How the ground truth for the training set was established:

    • As no training set is described in the context of machine learning, no information is provided on how ground truth for a training set was established.
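
Regarding the staging environment described in item 1 above (virtual machines standing in for a departmental CT scanner and a radiology PACS), the sketch below shows one generic way a stand-in PACS receive node can be run during verification testing, using the open-source pynetdicom library. The AE title, port, and storage path are hypothetical assumptions; this is not a description of the manufacturer's actual test harness.

```python
from pathlib import Path
from pynetdicom import AE, evt, AllStoragePresentationContexts

STORAGE_DIR = Path("received_dicom")  # hypothetical destination for incoming objects
STORAGE_DIR.mkdir(exist_ok=True)

def handle_store(event):
    """Write each received DICOM object (CT, MR, PET, RT Dose, ...) to disk."""
    ds = event.dataset
    ds.file_meta = event.file_meta
    ds.save_as(STORAGE_DIR / f"{ds.SOPInstanceUID}.dcm", write_like_original=False)
    return 0x0000  # Success status

ae = AE(ae_title="TEST_PACS")  # hypothetical AE title
ae.supported_contexts = AllStoragePresentationContexts
handlers = [(evt.EVT_C_STORE, handle_store)]

# Listen for C-STORE requests from the simulated scanner / workstation VMs.
ae.start_server(("0.0.0.0", 11112), block=True, evt_handlers=handlers)
```

Objects pushed from a simulated scanner VM, or exported from the application under test, would land in the storage directory, where they can be checked against the expected DICOM content as part of verification.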

§ 892.2050 Medical image management and processing system.

(a) Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.

(b) Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).