Ablation Confirmation™ (AC) is a Computed Tomography (CT) image processing software package available as an optional feature for use with the NEUWAVE Microwave Ablation System. AC is controlled by the user via an independent user interface on a second monitor separate from the NEUWAVE Microwave Ablation System user interface. AC imports images from CT scanners and facility PACS systems for display and processing during ablation procedures. AC assists physicians in identifying ablation targets, assessing proper ablation probe placement, and confirming ablation zones. The software is not intended for diagnosis.
AC is resident on the NEUWAVE Microwave Ablation System and is accessible to the physicians via a second, dedicated monitor with its own user interface separate from the ablation user interface. AC functions are controlled via a USB connected mouse. AC connects to a facility PACS system and CT scanner and receives and sends CT, fused PET and MR images via the DICOM protocol.
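Since AC receives CT, fused PET, and MR images over DICOM, an importer would need to gate incoming series on their DICOM Modality codes. The following is a minimal, hypothetical sketch of such a gate (the function name and supported set are assumptions for illustration, not taken from the submission); note that DICOM uses `PT` as the modality code for PET.

```python
# Hypothetical import gate (not from the 510(k) summary): accept only the
# series types the software is described as handling.
SUPPORTED_MODALITIES = {"CT", "PT", "MR"}  # DICOM modality codes; PT = PET

def accept_series(modality: str) -> bool:
    """Return True if a series with this DICOM Modality code may be imported."""
    return modality.strip().upper() in SUPPORTED_MODALITIES
```

A real implementation would read the Modality attribute (tag 0008,0060) from the received DICOM dataset before applying such a check.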
AC contains a wide range of image processing tools, including:
- 2D image manipulation
- 3D image generation (from 2D images)
- 3D image manipulation
- Region of interest (ROI) identification, segmentation and measurement
- Automatic identification of ablation probes
- Registration of multiple images into a single view
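Two of the tools above, 3D image generation from 2D slices and ROI measurement, can be sketched in a few lines of NumPy. This is an illustrative sketch only (function names are hypothetical, and the real product's algorithms are not described at this level of detail): a 3D volume is formed by stacking equally sized 2D slices, and a segmented ROI's physical volume follows from its voxel count and voxel spacing.

```python
import numpy as np

def build_volume(slices: list[np.ndarray]) -> np.ndarray:
    """Stack equally sized 2-D slices along a new axis to form a 3-D volume."""
    return np.stack(slices, axis=0)

def roi_volume_mm3(mask: np.ndarray, spacing_mm: tuple[float, float, float]) -> float:
    """Physical volume of a binary ROI mask, given voxel spacing (z, y, x) in mm."""
    voxel_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    return float(mask.sum()) * voxel_mm3
```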
Here's a breakdown of the acceptance criteria and study information for the NeuWave Medical, Inc.'s Ablation Confirmation device, based on the provided FDA 510(k) summary:
This device is primarily image processing software, and the documentation focuses on its functionality and user interface rather than a clinical outcome study. Therefore, the "performance data" section emphasizes verification and validation testing against a test plan and pre-determined acceptance criteria, rather than a traditional clinical trial or MRMC study.
1. Table of Acceptance Criteria and Reported Device Performance
The provided document doesn't explicitly list a table of quantitative acceptance criteria and their corresponding performance metrics as would be seen for an AI diagnostic device. Instead, it states:
| Acceptance Criteria Category | Reported Device Performance |
|---|---|
| Overall Functionality | "Ablation Confirmation™ was tested in accordance with a test plan that fully evaluated all functions performed by the software." |
| Meeting Pre-determined Criteria | "The system passed all pre-determined acceptance criteria identified in the test plan." |
| Compliance with Regulations/Guidance | "Verification and validation testing were completed in accordance with the company's Design Control process in compliance with 21 CFR Part 820.30, which included testing that fulfills the requirements of FDA “Guidance on Software Contained in Medical Devices”." |
| Risk Mitigation | "Potential risks arising from the new or updated features were analyzed and satisfactorily mitigated in the device design and labeling." |
| Substantial Equivalence (Safety & Effectiveness) | "This version of the AC software does not present any new questions of safety or effectiveness." |
The document details numerous "Modifications" and compares "Feature/Specification" between the subject device and the predicate device. These comparisons implicitly define the acceptance criteria, which seem to be primarily functional and qualitative:
- Improved automatic probe detection feature: Expected to perform better or at least as well as the predicate.
- New feature for manual probe definition: Expected to work as intended.
- Network communication monitoring: Expected to function as a troubleshooting aid.
- Improvements to target/ablation zone edit tools: Expected to allow selection for single or multiple slices.
- Undo/redo capability for segmentation operations: Expected to function correctly.
- Importation of fused PET (and MR) scan for comparison: Expected to perform this display function without manipulation, processing, or registration.
- Viewing Set Up scan as a comparison scan: Expected to function as a comparison option.
- Rendering objects as semi-transparent: Expected to provide this new visualization option.
- Image registration improvements (manual registration, undo): Expected to function as described and improve user experience.
- New function to measure distance between probe tips: Expected to perform this measurement.
- Displaying diameter of sphere during placement/sizing: Expected to show this information.
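The last two features in the list are simple geometric read-outs. As a hedged sketch (function names are hypothetical, not from the submission), the probe-tip distance is a Euclidean distance between two 3D coordinates, and the sphere read-out is just its diameter:

```python
import math

Point = tuple[float, float, float]

def probe_tip_distance(tip_a: Point, tip_b: Point) -> float:
    """Euclidean distance between two probe-tip coordinates (e.g., in mm)."""
    return math.dist(tip_a, tip_b)

def sphere_diameter(radius_mm: float) -> float:
    """Diameter displayed while the user places and sizes a sphere."""
    return 2.0 * radius_mm
```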
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: The document does not specify a numerical sample size for "test sets" in terms of cases or patient data. The "Performance Data" section broadly refers to tests performed on "all functions." This suggests the testing was more akin to software verification and validation, likely using diverse test cases rather than a statistically powered clinical dataset.
- Data Provenance: Not explicitly stated. Given it's a software update (510k for modifications), it's highly probable that existing CT images (potentially from a variety of sources/countries, retrospective) were used for testing various functionalities. No indication of prospective data collection for this submission.
3. Number of Experts and Qualifications for Ground Truth
- The document does not mention the use of experts to establish ground truth for this specific 510(k) submission. The device is described as assisting physicians, and its functions (segmentation, probe detection, registration) are user-controlled or semi-automatic with user adjustment. The "ground truth" for verifying its functions would likely be defined by internal testing against expected algorithmic outputs or manual verification by engineers against known inputs.
4. Adjudication Method for the Test Set
- No adjudication method is mentioned. This is consistent with the nature of the submission being about software updates and functional verification, not a clinical diagnostic assessment requiring expert consensus.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- No MRMC comparative effectiveness study was performed or mentioned. The device's indications for use emphasize "assisting physicians" and "not intended for diagnosis," positioning it as a tool to enhance existing procedures rather than a standalone diagnostic or a system intended to directly replace human interpretation. The claim is about functional equivalence and improvement, not comparative reader performance.
6. Standalone (Algorithm Only) Performance
- The document does not present specific "standalone" performance metrics (e.g., sensitivity, specificity, or object detection accuracy) for its automatic features (like automatic probe detection, or segmentation algorithms). The software is designed to be "user-controlled" and provide "assistance." For instance, for segmentation, "The physician initiates the segmentation with tools provided on the screen. AC then uses segmentation algorithms to construct a 2-D visualization of the target lesion selected. The physician can accept the initial segmentation results or use AC tools to manually adjust the defined target lesion." This implies a human-in-the-loop design where the final decision and potentially the refinement of the algorithm's output rest with the user.
7. Type of Ground Truth Used
- Given the nature of the software (image processing and visualization), the "ground truth" for its testing would effectively be the expected output or behavior of the software for given inputs. For example:
- Functional Ground Truth: Does the "undo" button correctly undo the last action? Does the 3D rendering rotate as expected?
- Algorithmic Ground Truth: Does the automatic probe detection correctly identify probes in a test image (verified, for example, manually by an engineer or against a simulated ground truth)?
- User Experience Ground Truth: Are the "improvements to the target/ablation zone edit tools" functioning as intended for user adjustment?
There is no mention of "pathology" or "outcomes data" as ground truth, as the device is not intended for diagnosis or determining clinical outcomes.
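A functional ground-truth check of the kind described above, for example "does undo correctly restore the prior state?", can be illustrated with a toy edit-history stack (this is a generic sketch, not the vendor's code):

```python
class EditHistory:
    """Toy undo/redo stack for segmentation edit states."""

    def __init__(self, initial):
        self._states = [initial]
        self._index = 0

    @property
    def current(self):
        return self._states[self._index]

    def apply(self, state):
        # A new edit discards any redo states beyond the current position.
        del self._states[self._index + 1:]
        self._states.append(state)
        self._index += 1

    def undo(self):
        if self._index > 0:
            self._index -= 1
        return self.current

    def redo(self):
        if self._index < len(self._states) - 1:
            self._index += 1
        return self.current
```

Verification here amounts to asserting that sequences of apply/undo/redo reproduce the expected states, which matches the "expected output or behavior for given inputs" notion of ground truth used in the submission.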
8. Sample Size for the Training Set
- The document does not provide any information regarding a "training set" or its size. This device is presented more as a deterministic image processing and visualization tool with semi-automatic features, rather than a deep learning/AI model that typically requires large training datasets. While some "segmentation algorithms" and "automatic probe detection" features might have involved some form of machine learning or rule-based training during their initial development (prior to this 510(k) in 2019), this documentation focuses on the modifications and their functional testing, not the underlying model development.
9. How Ground Truth for the Training Set Was Established
- As no training set is mentioned, there's no information on how its ground truth might have been established.