Ablation Confirmation™ (AC) is a Computed Tomography (CT) image processing software package available as an optional feature for use with the Certus® 140 2.45 GHz Ablation System. AC is controlled by the user via an independent user interface on a second monitor, separate from the Certus 140 user interface. AC imports images from CT scanners and facility PACS systems for display and processing during ablation procedures. AC assists physicians in identifying ablation targets, assessing proper ablation probe placement, and confirming ablation zones. The software is not intended for diagnosis.
AC is resident on the Certus® 140 system and is accessible to physicians via a second, dedicated monitor with its own user interface, separate from the ablation user interface. AC functions are controlled via a USB-connected mouse. AC connects to a facility PACS system and CT scanner, and receives and sends CT and MR images via the DICOM protocol.
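As an aside on the DICOM protocol mentioned above: a DICOM Part 10 file starts with a 128-byte preamble followed by the 4-byte magic value `DICM`. The sketch below is purely illustrative (it is not part of AC and says nothing about AC's implementation); it shows the standard check a receiver can apply before parsing.

```python
def is_dicom_file(raw: bytes) -> bool:
    """Return True if the byte stream carries the DICOM Part 10 magic:
    a 128-byte preamble followed by the ASCII marker 'DICM'."""
    return len(raw) >= 132 and raw[128:132] == b"DICM"

# Synthetic example: a zeroed preamble with the magic in place.
fake_header = bytes(128) + b"DICM" + b"\x02\x00"
print(is_dicom_file(fake_header))      # True
print(is_dicom_file(b"not a dicom"))   # False
```

Real transfers between a scanner, PACS, and a workstation use DICOM network services (C-STORE, C-FIND, C-MOVE) on top of this file format.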
AC contains a wide range of image processing tools, including:
- 2D image manipulation
- 3D image generation (from 2D images)
- 3D image manipulation
- Region of interest (ROI) identification, segmentation and measurement
- Automatic identification of ablation probes
- Registration of multiple images into a single view
Prior to an ablation procedure, physicians can use AC to semi-automatically segment and visualize ablation target lesions in soft tissue, including the liver, lung, and kidney. The physician initiates the segmentation with tools provided on the screen. AC then uses segmentation algorithms to construct a 2D visualization of the selected target lesion. The physician can accept the initial segmentation results or use AC tools to manually adjust the defined target lesion. Once accepted, the identified target is rendered into a 3D image.
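Semi-automatic segmentation of this kind typically starts from a physician-placed seed point. A classic textbook approach is region growing; the toy sketch below illustrates the general idea on a small 2D grid and is an assumption for illustration only, not NeuWave's actual algorithm.

```python
from collections import deque

def region_grow(image, seed, tolerance):
    """Grow a region from a seed pixel, accepting 4-connected neighbors
    whose intensity lies within `tolerance` of the seed intensity."""
    rows, cols = len(image), len(image[0])
    sr, sc = seed
    base = image[sr][sc]
    region = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in region
                    and abs(image[nr][nc] - base) <= tolerance):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region

# Toy 2D "CT slice": a bright lesion (9s) on a dark background (1s).
slice_ = [
    [1, 1, 1, 1, 1],
    [1, 9, 9, 1, 1],
    [1, 9, 9, 1, 1],
    [1, 1, 1, 1, 1],
]
lesion = region_grow(slice_, seed=(1, 1), tolerance=2)
print(len(lesion))  # 4
```

The physician's "accept or manually adjust" step described above would then operate on such a candidate region.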
After the ablation probes are placed and a new CT scan is acquired and imported, AC can process the image and identify up to three ablation probes. AC can then register the initial CT scan, containing the identified target, with the second scan containing the ablation probe(s) in place. The resulting image allows the physician to visualize the ablation probe(s) in relation to the identified target, enabling confirmation of probe placement before the ablation is started.
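Registration of the two scans means computing a spatial transform that brings corresponding anatomy into alignment. As a minimal illustration (an assumption for exposition, not AC's registration method), the sketch below aligns two 3D landmark sets by the simplest possible rigid transform: the translation between their centroids.

```python
def centroid(points):
    """Mean position of a list of 3D points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def translation_register(fixed, moving):
    """Translation mapping the moving point cloud's centroid onto the
    fixed point cloud's centroid (a toy, translation-only registration)."""
    cf, cm = centroid(fixed), centroid(moving)
    return tuple(f - m for f, m in zip(cf, cm))

def apply_translation(points, t):
    return [tuple(p[i] + t[i] for i in range(3)) for p in points]

# Landmarks in the target scan, and the same landmarks shifted by
# (3, 1, 2) in the probe-placement scan (pure translation for illustration).
fixed  = [(10.0, 20.0, 5.0), (12.0, 18.0, 6.0), (11.0, 22.0, 7.0)]
moving = [(13.0, 21.0, 7.0), (15.0, 19.0, 8.0), (14.0, 23.0, 9.0)]
t = translation_register(fixed, moving)
print(t)                                    # (-3.0, -1.0, -2.0)
print(apply_translation(moving, t) == fixed)  # True
```

Clinical registration additionally estimates rotation (and possibly deformation), but the principle of mapping one scan's coordinates into the other's is the same.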
Following the ablation procedure and a post-procedure CT scan, AC allows the physician to semi-automatically segment and visualize the ablation zone using the same process as in the initial target segmentation. AC then registers the initial CT scan, containing the identified target, with the final contrast-enhanced CT (CECT) scan containing the segmented ablation zone. The physician also has the option to evaluate the effect of potential tissue contraction to help determine the technical success of the ablation procedure (i.e., that the ablation zone covers the target lesion with the desired margin).
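The "technical success" criterion above (ablation zone covers the target with a desired margin) can be stated precisely: every target voxel must lie inside the ablation zone, at least the margin distance away from any voxel outside it. The sketch below is an illustrative formalization on a toy 2D voxel grid, not AC's actual computation.

```python
import math

def covers_with_margin(target, zone, margin, grid):
    """True if every target voxel lies inside `zone` and is at least
    `margin` voxel-units away from every voxel outside the zone."""
    outside = [v for v in grid if v not in zone]
    for t in target:
        if t not in zone:
            return False  # target not fully covered
        if outside and min(math.dist(t, o) for o in outside) < margin:
            return False  # covered, but margin too thin
    return True

# Toy 7x7 grid; the ablation zone is the inner 5x5 block.
grid = [(x, y) for x in range(7) for y in range(7)]
zone = {(x, y) for x in range(1, 6) for y in range(1, 6)}

print(covers_with_margin({(3, 3)}, zone, 1.5, grid))  # True: 3 voxels of margin
print(covers_with_margin({(1, 1)}, zone, 1.5, grid))  # False: only 1 voxel of margin
```

Tissue contraction matters here because the post-ablation scan may show the zone smaller than the tissue it actually treated, which is why AC offers the contraction-evaluation option.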
All AC processing and viewing is accomplished at the Certus® 140 Ablation System without the physician having to leave the procedure area to utilize separate image processing tools.
Additionally, AC allows for the images to be viewed by a remote physician for time-saving clinical consultation on the current procedure.
The provided text does not contain detailed information about the acceptance criteria or a specific study proving the device meets the acceptance criteria, as one might find for a clinical performance study of an AI/ML medical device. This document is a 510(k) summary, which focuses on demonstrating substantial equivalence to a predicate device rather than presenting extensive performance study data with specific metrics.
However, based on the available information, we can infer some aspects and highlight what is explicitly stated:
Overall Statement on Performance Data:
The document states: "Ablation Confirmation™ was tested in accordance with a test plan that fully evaluated all functions performed by the software. The system passed all pre-determined acceptance criteria identified in the test plan." and "Verification and validation testing were completed in accordance with the company's Design Control process in compliance with 21 CFR Part 820, which included testing that fulfills the requirements of FDA "Guidance on Software Contained in Medical Devices"."
This indicates that internal testing was conducted against a set of acceptance criteria, but the specific metrics, thresholds, and study design details (like sample size, ground truth establishment, etc.) are not included in this 510(k) summary. The focus is on functional testing and compliance with design controls rather than a clinical multi-reader, multi-case (MRMC) or standalone performance study.
Given this limitation, I will address the requested points by stating what is present, inferred, or explicitly missing from the provided text.
1. Table of acceptance criteria and the reported device performance
| Acceptance Criteria (Inferred/Stated) | Reported Device Performance |
|---|---|
| All functions evaluated | "The system passed all pre-determined acceptance criteria identified in the test plan." |
| Compliance with 21 CFR Part 820.30 (Design Controls) | "Verification and validation testing were completed in accordance with the company's Design Control process..." |
| Fulfillment of FDA "Guidance on Software Contained in Medical Devices" | "...which included testing that fulfills the requirements of FDA 'Guidance on Software Contained in Medical Devices'." |
| Satisfactory mitigation of risks from tissue contraction feature expansion | "Potential risks arising from the expansion of the Tissue Contraction feature were analyzed and satisfactorily mitigated in the device design and labeling." |
| Software's ability to support specific workflows (e.g., semi-automatic segmentation, identification of ablation probes, image registration) | Implied as "functions performed by the software" that were "fully evaluated". No specific quantitative performance metrics (e.g., accuracy or precision of segmentation) are provided. |
2. Sample size used for the test set and the data provenance
- Sample Size: Not specified in the provided text. The document refers to "a test plan," but does not detail the number of cases or scans used in this testing.
- Data Provenance: Not specified. It's likely internal testing data, but the country of origin, or whether it was retrospective or prospective data, is not mentioned.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- This information is not provided. The document describes software segmentation where "The physician can accept the initial segmentation results or use AC tools to manually adjust the defined target lesion." This suggests user interaction for defining targets/zones, but it doesn't describe a formal expert-driven ground truth establishment process for a test set.
4. Adjudication method for the test set
- This information is not provided. As a formal clinical performance study with expert readers is not detailed, an adjudication method would not be relevant in the context of the testing described.
5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done, what was the effect size of the improvement in human reader performance with AI versus without AI assistance
- No, an MRMC comparative effectiveness study is not described or referenced in this 510(k) summary. The device "assists physicians," but no study on the impact of this assistance on human reader performance (e.g., diagnostic accuracy, time saving) is presented.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
- The document implies that the software includes "segmentation algorithms" and "automatic identification of ablation probes." It states "Prior to an ablation procedure, physicians can use AC to semi-automatically segment..." and "AC then uses segmentation algorithms to construct a 2-D visualization..." and "AC can process the image and identify up to three ablation probes." While these algorithms likely underwent internal standalone testing for functionality and accuracy, the details of such standalone performance (e.g., specific metrics like Dice coefficient for segmentation, sensitivity/specificity for probe detection) are not provided in this summary. The summary focuses on the end-user workflow involving physician interaction ("semi-automatically segment," "physician can accept... or manually adjust").
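For context on the metrics named above: the Dice coefficient measures overlap between an algorithm's segmentation and a reference segmentation, on a 0-to-1 scale where 1.0 is perfect agreement. The sketch below is a generic illustration of the metric on toy voxel sets; the sets are hypothetical and no such data appears in the 510(k) summary.

```python
def dice(a, b):
    """Dice similarity coefficient between two voxel sets:
    2|A ∩ B| / (|A| + |B|), with 1.0 meaning identical sets."""
    if not a and not b:
        return 1.0  # two empty segmentations agree perfectly
    return 2 * len(a & b) / (len(a) + len(b))

auto   = {(0, 0), (0, 1), (1, 0), (1, 1)}  # hypothetical algorithm output
manual = {(0, 1), (1, 0), (1, 1), (2, 1)}  # hypothetical expert reference
print(dice(auto, manual))  # 0.75
```

A standalone study would report such metrics over a defined test set; as noted, nothing of that kind is presented in this summary.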
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- For the internal "test plan" mentioned, the specific type of ground truth used to evaluate the software's performance (e.g., for segmentation accuracy or probe identification) is not explicitly stated. Given the device's function, anatomical landmarks and physician-defined regions (either through manual outlining or verification of semi-automatic results as described) would likely serve as a practical form of ground truth for functional verification. Pathology or outcomes data are generally not applicable for confirming image processing and segmentation accuracy.
8. The sample size for the training set
- This information is not provided. The document does not describe a machine learning training process with a distinct training set. It refers to "segmentation algorithms" but does not detail their development or the data used to train them.
9. How the ground truth for the training set was established
- This information is not provided, as details about a distinct training set and its ground truth establishment are absent from the summary.
§ 892.1750 Computed tomography x-ray system.
(a) Identification. A computed tomography x-ray system is a diagnostic x-ray system intended to produce cross-sectional images of the body by computer reconstruction of x-ray transmission data from the same axial plane taken at different angles. This generic type of device may include signal analysis and display equipment, patient and equipment supports, component parts, and accessories.
(b) Classification. Class II.