510(k) Data Aggregation
Applicant: R.A.W. Srl (510(k) review time: 351 days)
Ablation-fit is a medical imaging application available for use with liver ablation procedures.
Ablation-fit is used to assist physicians in planning by permitting the graphical display of the anatomy involved in the procedure, ablation targets, and ablation needle placement.
Ablation-fit is used to assist physicians in confirming ablation zones during follow-up.
The software is not intended for diagnosis. The software is not intended to predict ablation volumes or predict ablation success.
Ablation-fit is a stand-alone medical imaging software that integrates Reconstruction, Segmentation, Registration, and Visualization algorithms into a user interface to support physicians during liver ablation treatment planning and follow-up.
Ablation-fit allows the user to perform the entire workflow from DICOM (Digital Imaging and Communications in Medicine) images to 3D reconstruction of volumes of interest, ablation probe placement, and treatment outcome verification.
Specifically, Ablation-fit's main functionalities include:
- image loading from different sources (including PACS),
- DICOM image handling and visualization in axial, sagittal, and coronal views (see the loading and viewing sketch after this list),
- image segmentation,
- tools for manual editing of segmentations,
- 3D visualization,
- virtualization of ablation probe placement,
- pre- and post-treatment image registration.
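The loading and multiplanar-viewing step can be pictured with a minimal sketch. This is not Ablation-fit's code: it assumes SimpleITK, an illustrative local directory of CT DICOM files, and plain array slicing for the axial, coronal, and sagittal views.

```python
# Hypothetical sketch: load a DICOM series and extract axial/coronal/sagittal
# slices through a chosen voxel. Directory path and index are illustrative.
import SimpleITK as sitk
import numpy as np


def load_dicom_series(directory: str) -> sitk.Image:
    """Read a DICOM series from a folder into a single 3D volume."""
    reader = sitk.ImageSeriesReader()
    file_names = reader.GetGDCMSeriesFileNames(directory)
    reader.SetFileNames(file_names)
    return reader.Execute()


def multiplanar_slices(volume: sitk.Image, index_zyx):
    """Return axial, coronal, and sagittal slices through a voxel index."""
    array = sitk.GetArrayFromImage(volume)   # numpy array, shape (z, y, x)
    z, y, x = index_zyx
    axial = array[z, :, :]
    coronal = array[:, y, :]
    sagittal = array[:, :, x]
    return axial, coronal, sagittal


if __name__ == "__main__":
    ct = load_dicom_series("./ct_series")                 # illustrative path
    size_xyz = tuple(ct.GetSize())                        # (x, y, z)
    mid_zyx = (size_xyz[2] // 2, size_xyz[1] // 2, size_xyz[0] // 2)
    ax, cor, sag = multiplanar_slices(ct, mid_zyx)
    print(ax.shape, cor.shape, sag.shape)
```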
The software permits segmentation and 3D reconstruction of volumes of interest, contouring this anatomic information not only in the axial, sagittal, and coronal planes for 2D visualization but also three-dimensionally. Every computed segmentation can be manually modified in the 2D axial view, and the three-dimensional rendering of the scan then changes accordingly.
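As an illustration of how a 2D edit can propagate to the 3D view, the sketch below modifies one axial slice of a binary mask and rebuilds the surface mesh with marching cubes. The shapes, spacings, and editing function are assumptions for illustration; the summary does not describe Ablation-fit's actual reconstruction method.

```python
# Illustrative sketch (not Ablation-fit's code): edit one axial slice of a
# segmentation mask, then re-extract the 3D surface so the change shows up
# in the three-dimensional view. Assumes numpy and scikit-image.
import numpy as np
from skimage.measure import marching_cubes


def edit_axial_slice(mask: np.ndarray, z: int, rows: slice, cols: slice,
                     value: bool) -> np.ndarray:
    """Manually set part of one axial slice of a binary mask."""
    edited = mask.copy()
    edited[z, rows, cols] = value
    return edited


def rebuild_surface(mask: np.ndarray, spacing=(1.0, 1.0, 1.0)):
    """Extract a triangle mesh from the binary mask via marching cubes."""
    verts, faces, normals, _ = marching_cubes(mask.astype(np.uint8),
                                              level=0.5, spacing=spacing)
    return verts, faces, normals


if __name__ == "__main__":
    # Toy mask: a cube inside a 64^3 volume stands in for a segmented lesion.
    mask = np.zeros((64, 64, 64), dtype=bool)
    mask[20:40, 20:40, 20:40] = True
    # "Manual edit": erase a small region on one axial slice.
    mask = edit_axial_slice(mask, z=30, rows=slice(20, 25), cols=slice(20, 25),
                            value=False)
    verts, faces, _ = rebuild_surface(mask, spacing=(2.5, 0.8, 0.8))
    print(f"{len(verts)} vertices, {len(faces)} triangles")
```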
Ablation-fit lets the user simulate virtual needle insertion and shows the desired ablation ellipsoid.
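One simple way to picture that ellipsoid is as the set of voxels within given radii along and across the virtual needle axis. The sketch below voxelises such an ellipsoid; the tip position, needle direction, and radii are purely illustrative values, not device parameters from the summary.

```python
# Hypothetical sketch: voxelise an ablation ellipsoid centred on a virtual
# needle tip and aligned with the needle direction, so it can be overlaid on
# the CT volume. All numbers are illustrative.
import numpy as np


def ablation_ellipsoid_mask(shape, spacing, tip_mm, axis_mm, radii_mm):
    """Binary mask of an ellipsoid around the needle tip.

    radii_mm = (radius along the needle, transverse radius).
    """
    axis = np.asarray(axis_mm, dtype=float)
    axis /= np.linalg.norm(axis)
    # Voxel-centre coordinates in millimetres, (z, y, x) ordering.
    zz, yy, xx = np.meshgrid(*[np.arange(n) * s for n, s in zip(shape, spacing)],
                             indexing="ij")
    offsets = np.stack([zz, yy, xx], axis=-1) - np.asarray(tip_mm, dtype=float)
    # Split each offset into components along and across the needle axis.
    along = offsets @ axis
    across = np.linalg.norm(offsets - along[..., None] * axis, axis=-1)
    a, b = radii_mm
    return (along / a) ** 2 + (across / b) ** 2 <= 1.0


if __name__ == "__main__":
    mask = ablation_ellipsoid_mask(shape=(64, 128, 128),
                                   spacing=(2.5, 0.8, 0.8),
                                   tip_mm=(80.0, 50.0, 50.0),
                                   axis_mm=(1.0, 0.0, 0.0),
                                   radii_mm=(20.0, 15.0))
    voxel_volume_mm3 = 2.5 * 0.8 * 0.8
    print("predicted ablation volume (ml):", mask.sum() * voxel_volume_mm3 / 1000.0)
```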
Once the ablation procedure has been performed, the pre- and post-treatment scans are registered. The software can then verify whether the ablation zone entirely surrounds the lesion and the safety margin.
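Once the two scans are in the same space, the coverage check can be sketched as a mask operation: dilate the lesion mask by the safety margin and test whether the ablation-zone mask contains the result. This is an assumption-laden illustration (numpy/scipy, an example 5 mm margin), not the method described in the summary.

```python
# Illustrative coverage check, registration assumed already done: does the
# ablation-zone mask cover the lesion plus a safety margin?
import numpy as np
from scipy import ndimage


def spherical_structuring_element(margin_mm: float, spacing_mm) -> np.ndarray:
    """Voxel kernel equivalent to a sphere of radius margin_mm in millimetres."""
    radii_vox = [int(np.ceil(margin_mm / s)) for s in spacing_mm]
    grids = np.meshgrid(*[np.arange(-r, r + 1) * s
                          for r, s in zip(radii_vox, spacing_mm)],
                        indexing="ij")
    dist2 = sum(g ** 2 for g in grids)
    return dist2 <= margin_mm ** 2


def margin_covered(lesion: np.ndarray, ablation: np.ndarray,
                   spacing_mm, margin_mm: float = 5.0) -> bool:
    """True if the lesion plus safety margin lies entirely inside the ablation zone."""
    structure = spherical_structuring_element(margin_mm, spacing_mm)
    lesion_plus_margin = ndimage.binary_dilation(lesion, structure=structure)
    return bool(np.all(ablation[lesion_plus_margin]))


if __name__ == "__main__":
    lesion = np.zeros((40, 40, 40), dtype=bool)
    lesion[18:22, 18:22, 18:22] = True
    ablation = np.zeros_like(lesion)
    ablation[10:30, 10:30, 10:30] = True
    print(margin_covered(lesion, ablation, spacing_mm=(1.0, 1.0, 1.0)))
```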
Here's a breakdown of the acceptance criteria and the study details for the Ablation-fit device, based on the provided FDA 510(k) summary:
Acceptance Criteria and Device Performance
The document doesn't present a direct table of specific acceptance criteria with corresponding performance metrics in a single, clear format. However, it states that "software testing using retrospective image data of ablation procedures" was conducted to evaluate "the accuracy of Ablation-fit in assessing the outcome of lesion percutaneous thermal ablations and the accuracy of the automatically performed segmentations." It also mentions "Bench tests that compare the output of all segmentation and registration processes with ground truth annotated by qualified experts show that the algorithms performed as expected."
Based on these statements, we can infer the following general acceptance criteria and reported performance:
| Acceptance Criteria Category | Reported Device Performance |
|---|---|
| Segmentation Accuracy | Algorithms performed as expected when compared to ground truth annotated by qualified experts; retrospective evaluation showed accuracy of the automatically performed segmentations. |
| Registration Accuracy | Algorithms performed as expected when compared to ground truth annotated by qualified experts. |
| Assessment of Ablation Outcome Accuracy | Retrospective evaluation showed accuracy in assessing the outcome of percutaneous thermal ablations of lesions. |
| Measurement Accuracy | A Measurement Accuracy Test evaluated the accuracy of measurements carried out with Ablation-fit on CT images (specific metrics not provided in the summary). |
| Functionality (User Interaction) | All semi-automatic functionalities were tested by three radiologists to account for variability resulting from user interaction; the system satisfied user demands and requirements (specific metrics not provided). |
| Compliance with Standards & Requirements | Designed and developed according to ANSI AAMI IEC 62304:2006/A1:2016; software verification and validation testing conducted according to FDA guidance; user acceptance test performed according to ANSI AAMI IEC 62366-1:2015+AMD1:2020. |
| Safety and Effectiveness Equivalence | Functions at least as safely and effectively as the designated predicate device, is essentially equivalent to it, and does not introduce any new potential safety risks. |
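For context, bench comparisons of automatic segmentations against expert-annotated ground truth are commonly scored with an overlap metric such as the Dice coefficient. The summary does not name the metric or an acceptance threshold, so the sketch below, including its 0.90 pass threshold, is purely illustrative.

```python
# Minimal sketch of a ground-truth bench comparison: score an automatic
# segmentation against an expert-annotated mask with the Dice coefficient.
# The 0.90 threshold is an example, not a value from the 510(k) summary.
import numpy as np


def dice_coefficient(prediction: np.ndarray, ground_truth: np.ndarray) -> float:
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    prediction = prediction.astype(bool)
    ground_truth = ground_truth.astype(bool)
    intersection = np.logical_and(prediction, ground_truth).sum()
    denominator = prediction.sum() + ground_truth.sum()
    return 2.0 * intersection / denominator if denominator else 1.0


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = rng.random((32, 32, 32)) > 0.7    # stand-in expert annotation
    auto = truth.copy()
    auto[:2] = ~auto[:2]                      # simulate algorithm errors
    score = dice_coefficient(auto, truth)
    print(f"Dice = {score:.3f}, passes 0.90 threshold: {score >= 0.90}")
```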
Study Details
Here's the breakdown of the study information:
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: The exact number of cases or images in the "retrospective image data of ablation procedures" used for software testing is not specified in this document.
- Data Provenance: The data used for testing was "retrospective image data of ablation procedures." The country of origin for the data is not specified.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
- Number of Experts: "Qualified experts" were used to annotate ground truth for segmentation and registration processes. In addition, "three radiologists" performed testing for semi-automatic functionalities to account for user interaction variability.
- Qualifications of Experts: The specific qualifications (e.g., years of experience, subspecialty) of the "qualified experts" and the "three radiologists" are not specified beyond their profession.
4. Adjudication Method for the Test Set
- The document states that ground truth was "annotated by qualified experts." For the "semi-automatic functionalities," testing involved "three radiologists" to account for user variability. There is no explicit mention of an adjudication method (e.g., 2+1 or 3+1 consensus) for establishing the ground truth, particularly for resolving discrepancies between experts. It is possible that annotation by a single "qualified expert" was considered ground truth, or that an unstated consensus method was used.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
- No, an MRMC comparative effectiveness study was not explicitly mentioned or described. The testing involved "three radiologists" testing "semi-automatic functionalities to account for variability resulting from user interaction," but this appears to be part of validating the device's interaction and robustness, not a comparative effectiveness study of human readers with vs. without AI assistance.
- Effect Size of Human Readers Improvement with AI vs. without AI assistance: This information is not provided as an MRMC study was not described.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
- Yes, a standalone performance assessment was conducted for some aspects. The document states: "Bench tests that compare the output of all segmentation and registration processes with ground truth annotated by qualified experts show that the algorithms performed as expected." This implies an evaluation of the algorithm's performance in these tasks independent of a human user's interaction in the final output generation.
- Additionally, "the accuracy of the automatically performed segmentations" was evaluated, which is a standalone assessment.
7. The Type of Ground Truth Used
- The primary type of ground truth used was expert consensus / expert annotation. Specifically, "ground truth annotated by qualified experts" was used for segmentation and registration processes.
8. The Sample Size for the Training Set
- The document does not specify the sample size used for the training set.
9. How the Ground Truth for the Training Set Was Established
- The document does not explicitly state how the ground truth for the training set was established. It only discusses the ground truth for the "test set" or "bench tests."