510(k) Data Aggregation (121 days)
Autofuse is a software package that provides physicians a means for comparison of medical data including imaging data that is DICOM compliant. It allows the display, annotation, volume operation, volume rendering, registration and fusion of medical images as an aid during use by diagnostic radiology, oncology, radiation therapy planning, and other medical specialties. Autofuse is not intended for mammography.
Autofuse is a software application providing relevant tools for Radiotherapy professionals to use while creating treatment plans.
The Autofuse device is a Picture Archiving and Communication System (medical imaging software). Autofuse provides medical image processing designed to facilitate the oncology or other clinical specialty workflow by allowing the comparison of medical imaging data from different modalities, points in time, and / or scanning protocols. The product provides users with the means to display, co-register and analyze medical images from multiple modalities including PET, CT, RT Dose and MR; draw Regions of Interest (ROI); and import / export results to/from commercially available radiation treatment planning systems and PACS devices. Co-registration includes deformable registration technology that can be applied to DICOM data including diagnostic and planning image volumes, structures, and dose.
Autofuse is used as a stand-alone application on recommended Off-The-Shelf (OTS) computers supplied by the company or by the end-user.
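To make the registration pipeline described above concrete, the sketch below chains an automatic rigid alignment into a deformable (demons) refinement using the open-source SimpleITK toolkit. This is not Autofuse's implementation: the toolkit choice, file names, and parameter values are all assumptions, and the demons step presumes two mono-modal volumes (e.g., CT-to-CT at different points in time) already exported from DICOM.

```python
import SimpleITK as sitk

# Hypothetical inputs: two CT volumes of the same patient at different
# time points (file names are placeholders).
fixed = sitk.ReadImage("ct_planning.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("ct_followup.nii.gz", sitk.sitkFloat32)

# Step 1: automatic rigid registration with a mutual-information metric.
initial_tx = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)
reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsRegularStepGradientDescent(
    learningRate=1.0, minStep=1e-4, numberOfIterations=200)
reg.SetInitialTransform(initial_tx, inPlace=False)
reg.SetInterpolator(sitk.sitkLinear)
rigid_tx = reg.Execute(fixed, moving)

# Resample the moving volume onto the fixed grid with the rigid result.
moving_rigid = sitk.Resample(moving, fixed, rigid_tx,
                             sitk.sitkLinear, 0.0, moving.GetPixelID())

# Step 2: deformable refinement with a demons filter; the output is a
# dense displacement field.
demons = sitk.FastSymmetricForcesDemonsRegistrationFilter()
demons.SetNumberOfIterations(50)
demons.SetStandardDeviations(1.5)  # Gaussian smoothing of the field
displacement = demons.Execute(fixed, moving_rigid)
deform_tx = sitk.DisplacementFieldTransform(displacement)
moving_deformed = sitk.Resample(moving_rigid, fixed, deform_tx,
                                sitk.sitkLinear, 0.0)
```

Applying the resulting displacement field to associated ROI masks and dose grids, not just to the image, mirrors the submission's statement that the deformable registration "can be applied to DICOM data including diagnostic and planning image volumes, structures, and dose."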
Here's a breakdown of the acceptance criteria and study information for the "Autofuse" device, based on the provided FDA 510(k) submission:
Acceptance Criteria and Device Performance
The submission primarily focuses on demonstrating substantial equivalence to a predicate device ("Velocity AIS") by comparing technological characteristics and asserting similar or better performance based on non-clinical testing. There isn't a table explicitly listing "acceptance criteria" with numerical targets and then "reported device performance" against those targets in a quantitative sense for Autofuse itself. Instead, the "acceptance criteria" are implied by the features and performance of the predicate, and the "reported device performance" is a general statement of conformance based on software verification and validation.
Implied Acceptance Criteria and Reported Device Performance (Derived from Comparison to Predicate):
| Feature/Characteristic (Implied Acceptance Criterion from Predicate) | Autofuse Performance (Reported) |
|---|---|
| **General Functionality** | |
| Display, annotation, volume operation, volume rendering, registration, and fusion of medical images as an aid during diagnostic radiology, oncology, radiation therapy planning, and other medical specialties (not for mammography) | Performs all of these functions. |
| DICOM-compliant data handling | DICOM compliant. |
| Image Study Importation (CT/MR/PET/Dose/SPECT) | Imports CT, MR, and PET studies plus DICOM-RT Dose. Autofuse treats Dose as an object associated with a scan rather than an imaging modality in its own right, and, unlike the predicate, does not explicitly list SPECT import; the overall import capability is rated "Similar," and these differences are deemed not to raise new questions of safety or efficacy. |
| Image Structure Import, Save & Export to Treatment Planning Systems | Yes |
| Volume Rendering | Yes |
| Advanced Visualization and Navigation Features | Yes |
| Volume Operations | No (same as predicate) |
| Diagnostic Image Registration | Yes |
| Image Fusion (Anatomical and Functional) | Yes |
| ROI & Contouring | Yes |
| Manual Contouring Tools | Yes |
| Image Analysis | Yes |
| Plan Review of Imported Plans or Created Dose Composites | No (same as predicate) |
| Oncology Workflow Automation | No (same as predicate) |
| Image/ROI Export to DICOM-RT | Yes |
| User Scaling of Image Volumes | No (same as predicate) |
| Biological Effective Dose (BED) Scaling | No (same as predicate) |
| Y-90 Microsphere Dosimetry | No (same as predicate) |
| Navigator Semi-Automated Workflows | No; image registration is fully automated with no user-facing workflow (same as predicate). |
| Adaptive Navigators to Assist in Offline Dose Review | No (same as predicate) |
| **Automated Image-Based Registration** | |
| Manual Registration Editing | No (the predicate has it). Autofuse does not allow manual registration editing, citing AAPM TG-132; this difference is deemed not to raise new questions of safety or efficacy. |
| Auto Rigid Registration | Yes |
| Deformable Registration | Yes |
| Inverse Deformable Registration | No (same as predicate) |
| Structure-Guided Deformable Registration | No (same as predicate) |
| **Segmentation** | |
| Atlas Auto-Segmentation | No (the predicate has it). Autofuse does not contain or use atlases for auto-segmentation; this difference is deemed not to raise new questions of safety or efficacy. |
| **Image Analysis with Volumetric Graphs** | |
| Histograms and Voxel Assessment Graphs | Yes |
| DVH Statistics Display | Yes (a computational sketch follows this table) |
| **Security/Platform** | |
| Secure Login and Data Storage | No application-level login (the predicate has one). Autofuse runs on a secure workstation, logging OS-level information, with storage and authentication handled by the OS; this difference is deemed not to raise new questions of safety or efficacy. |
| Logging of Database Activity | Listed as "No" (the predicate logs database activity), yet the accompanying remark states that Autofuse's database activity is timestamped, tagged by user, and logged for all transactions. The "Yes/No" entry and the descriptive remark appear to be inconsistent. The difference is deemed not to raise new questions of safety or efficacy. |
| Operating System Platform | Ubuntu 22.04 (predicate: Microsoft Windows, Mac OS X). A different, Linux-based platform, but deemed not to raise new questions of safety or efficacy. |
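For readers unfamiliar with the DVH statistics row above, the following is a minimal numpy sketch of how a cumulative dose-volume histogram can be computed. It is illustrative only, not taken from the submission: it assumes the RT Dose grid has already been resampled onto the same voxel grid as a boolean ROI mask, and every name in it is hypothetical.

```python
import numpy as np

def cumulative_dvh(dose, roi_mask, bin_width=0.1):
    """Cumulative DVH for one ROI.

    dose     : 3-D array of dose values in Gy (e.g., a resampled RT Dose grid)
    roi_mask : boolean array of the same shape, True inside the structure
    Returns (dose_bins, volume_pct), where volume_pct[i] is the percentage
    of the ROI volume receiving at least dose_bins[i] Gy.
    """
    roi_dose = dose[roi_mask]
    edges = np.arange(0.0, roi_dose.max() + bin_width, bin_width)
    hist, _ = np.histogram(roi_dose, bins=edges)
    # Reverse cumulative sum turns the differential histogram cumulative.
    volume_pct = 100.0 * np.cumsum(hist[::-1])[::-1] / roi_dose.size
    return edges[:-1], volume_pct

# Hypothetical usage: D95, the highest dose received by >= 95% of the ROI.
# bins, vol = cumulative_dvh(dose_grid, ptv_mask)
# d95 = bins[vol >= 95.0][-1]
```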
Study Information:
Sample sizes used for the test set and the data provenance:
- The submission does not specify a distinct "test set" sample size in terms of number of cases or patients.
- The non-clinical validation testing was performed in a "Staging Environment (SE), which consists of a network of virtual machines (VMs) within a Kernel Virtual Machine (KVM) hypervisor." This environment was "designed to mimic a typical clinical set up" and included a departmental CT scanner and a radiology PACS; a sketch of the kind of connectivity check such an environment exercises follows this item.
- No information is provided regarding the country of origin of the data used for testing, nor whether it was retrospective or prospective. The data are implied to be simulated or example data suitable for verification and validation.
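As an illustration of the kind of connectivity check a staging environment like this would exercise, here is a minimal sketch using the open-source pynetdicom library to send a DICOM C-ECHO to a PACS node. Nothing here is from the submission; the AE title, host, and port are invented placeholders.

```python
from pynetdicom import AE
from pynetdicom.sop_class import Verification

# Verify basic DICOM connectivity to a (hypothetical) staging PACS.
ae = AE(ae_title="AUTOFUSE_TEST")
ae.add_requested_context(Verification)

assoc = ae.associate("staging-pacs.local", 104)  # placeholder host/port
if assoc.is_established:
    status = assoc.send_c_echo()
    print(f"C-ECHO status: 0x{status.Status:04X}")  # 0x0000 means success
    assoc.release()
else:
    print("Association rejected, aborted, or never connected")
```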
Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- The submission does not mention the use of experts to establish ground truth for a test set. The validation is focused on software verification against specifications and intended use in a simulated environment, rather than clinical performance against expert-defined ground truth.
Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- No information on adjudication methods is provided, as the study described is software verification and validation, not a reader-based clinical study.
If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI versus without AI assistance:
- No MRMC comparative effectiveness study was done or reported in this submission.
If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- Yes, the primary performance data presented is "Non-clinical performance Data" based on "Software Verification and Validation Testing." This represents the standalone performance of the algorithm against its documented specifications and user needs in a simulated environment. The device is described as "a stand-alone application."
The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- For the non-clinical performance data, the "ground truth" is effectively the device specifications, user needs, and applicable requirements and standards against which the software was tested. It is not clinical ground truth like pathology or expert consensus on patient data.
The sample size for the training set:
- The submission does not explicitly describe a "training set" or "training process" for the Autofuse software in the context of machine learning, as it is characterized as medical image management and processing software with deformable registration technology, rather than a deep learning-based diagnostic algorithm requiring extensive training data. Therefore, no sample size for a training set is provided.
How the ground truth for the training set was established:
- As no training set is described in the context of machine learning, no information is provided on how ground truth for a training set was established.