Search Results
Found 2 results
510(k) Data Aggregation
(25 days)
VisAble.IO
VisAble.IO is a Computed Tomography (CT) and Magnetic Resonance (MR) image processing software package available for use with liver ablation procedures.
VisAble.IO is controlled by the user via a user interface.
VisAble.IO imports images from CT and MR scanners and facility PACS systems for display and processing during liver ablation procedures.
VisAble.IO is used to assist physicians in planning ablation procedures, including identifying ablation targets and virtual ablation needle placement. VisAble.IO is used to assist physicians in confirming ablation zones.
The software is not intended for diagnosis. The software is not intended to predict ablation volumes or predict ablation success.
VisAble.IO is a stand-alone software application with tools and features designed to assist users in planning ablation procedures, as well as tools for treatment confirmation. The use environment for VisAble.IO is the Operating Room and other hospital healthcare environments, such as the interventional radiology control room.
VisAble.IO has five distinct workflow steps:
- Data Import
- Anatomic Structures Segmentation (Liver, Hepatic Vein, Portal Vein, Ablation Target)
- Instrument Placement (Needle Planning)
- Ablation Zone Segmentation
- Treatment Confirmation (Registration of Pre- and Post-Interventional Images; Quantitative Analysis)
Of these workflow steps, two (Anatomic Segmentation, and Instrument Placement) make use of the planning image. These workflow steps contain features and tools designed to support the planning of ablation procedures. The other two (Ablation Zone Segmentation, and Treatment Confirmation) make use of the confirmation image volume. These workflow steps contain features and tools designed to support the evaluation of the ablation procedure's technical performance in the confirmation image volume.
Key features of the VisAble.IO Software include:
- Workflow steps availability
- Manual and automated tools for anatomic structures and ablation zone segmentation
- Overlaying and positioning virtual instruments (ablation needles) and user-selected estimates of the ablation regions onto the medical images
- Image fusion and registration
- Compute achieved margins and missed volumes to help the user assess the coverage of the ablation target by the ablation zone
- Data saving and secondary capture generation
The software components provide functions for performing operations related to image display, manipulation, analysis, and quantification, including features designed to facilitate segmentation of the ablation target and ablation zones.
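To make the margin and missed-volume quantification concrete, here is a minimal sketch of how such coverage metrics can be computed from binary voxel masks. This is an illustration only, not the device's actual algorithm; the function name `coverage_metrics`, the masks, and the voxel spacing are hypothetical, and the margin is approximated from voxel-center distances.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def coverage_metrics(target, zone, spacing=(1.0, 1.0, 1.0)):
    """Missed volume (mm^3) and minimal achieved margin (mm).

    target, zone: boolean voxel masks on the same image grid
    (ablation target and segmented ablation zone).
    spacing: voxel size in mm along each axis.
    """
    target = target.astype(bool)
    zone = zone.astype(bool)
    voxel_vol = float(np.prod(spacing))

    # Missed volume: target voxels not covered by the ablation zone.
    missed = np.logical_and(target, ~zone)
    missed_volume = float(missed.sum() * voxel_vol)

    # Distance from every in-zone voxel to the nearest out-of-zone voxel;
    # the minimum over the target approximates the achieved margin
    # (0 if any part of the target lies outside the zone).
    dist_to_zone_edge = distance_transform_edt(zone, sampling=spacing)
    min_margin = float(dist_to_zone_edge[target].min()) if target.any() else 0.0
    return missed_volume, min_margin

# Toy example: a 3x3x3 target fully inside a 5x5x5 ablation zone.
t = np.zeros((9, 9, 9), dtype=bool); t[3:6, 3:6, 3:6] = True
z = np.zeros((9, 9, 9), dtype=bool); z[2:7, 2:7, 2:7] = True
print(coverage_metrics(t, z))  # (0.0, 2.0): nothing missed, ~2 mm margin
```

A fully covered target yields a zero missed volume and a positive margin; any uncovered target voxel drives the margin to zero, which is the kind of signal the confirmation workflow surfaces to the user.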
The software system runs on a dedicated computer and is intended for the display and processing of Computed Tomography (CT) and Magnetic Resonance (MR) images, including contrast-enhanced images.
The system can be used on patient data for any patient demographic chosen to undergo the ablation treatment.
VisAble.IO uses several algorithms to perform operations that present information to the user in order for them to evaluate the planned and post-ablation zones. These include:
- Segmentation
- Image Registration
- Measurement and Quantification
VisAble.IO is intended to be used for ablations with the following ablation instruments:
For needle planning, the software currently supports the following needle models:
- Medtronic: Emprint Antenna 15CM, 20CM, 30CM
- NeuWave Medical: PR Probe 15CM, 20CM; PR XT Probe 15CM, 20CM; LK Probe 15CM, 20CM; LK XT Probe 15CM, 20CM
- H.S. Hospital Service: AMICA Probe 15 CM, 20 CM, 27 CM
For treatment confirmation (including segmentation and registration), the software is compatible with all ablation devices, as these functions are independent of probes/power settings.
Here's a summary of the acceptance criteria and study details for the Techsomed VisAble.IO device, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
| Algorithm | Performance Goal (Acceptance Criteria) | Reported Performance |
|---|---|---|
| **CT Processing** | | |
| Liver Segmentation | Mean DICE = 0.92 | Mean DICE = 0.98 |
| Ablation Target Segmentation | Mean DICE = 0.70 | Mean DICE = 0.82 |
| Ablation Zone Segmentation | Mean DICE = 0.70 | Mean DICE = 0.88 |
| Liver Vessels Segmentation | Mean DICE = 0.70 | Mean DICE = 0.72 |
| **MR Processing** | | |
| Liver Segmentation | Mean DICE = 0.92 | Mean DICE = 0.93 |
| Ablation Target Segmentation | Mean DICE = 0.70 | Mean DICE = 0.76 |
| **Image Registration** | | |
| Pre-ablation CT to Post-ablation CT Image Registration | MCD* = 6.06 mm | MCD* = 4.09 mm |
| Pre-ablation MR to Post-ablation CT Image Registration | MCD* = 6.06 mm | MCD* = 4.72 mm |
| Pre-ablation MR to Pre-ablation CT Image Registration | MCD* = 7.90 mm | MCD* = 5.10 mm |
*MCD = Mean Corresponding Distance
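The DICE values above are Dice similarity coefficients, a standard measure of volumetric overlap between an algorithmic segmentation and a reference segmentation (1.0 means perfect overlap). A minimal sketch of the metric, assuming binary voxel masks; the masks below are toy data, not from the submission:

```python
import numpy as np

def dice(seg: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    seg = seg.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(seg, gt).sum()
    total = seg.sum() + gt.sum()
    return float(2.0 * intersection / total) if total else 1.0

# Toy 2-D masks standing in for 3-D liver segmentations.
a = np.zeros((10, 10), dtype=bool); a[2:8, 2:8] = True  # 36 voxels
b = np.zeros((10, 10), dtype=bool); b[3:8, 2:8] = True  # 30 voxels
print(round(dice(a, b), 3))  # 2*30 / (36+30) ≈ 0.909
```

Against this metric, a reported mean of 0.98 for liver segmentation indicates near-complete overlap with the reference; the lower goals (0.70) for targets, zones, and vessels reflect that small or thin structures are harder to segment.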
Note on Segmentation and Registration Accuracy: The document explicitly states:
- "The use of the segmentation tools to achieve a satisfactory delineation of ablation target or ablation zone is a user operation and the clinical accuracy of segmentation is the responsibility of the user and not a VisAble.IO function."
- "Final accuracy of registration is dependent on user assessment and manual modification of the registration prior to acceptance, and not a VisAble.IO function."
This suggests that while the algorithms perform well against the statistical metrics, the final clinical accuracy is attributed to the user.
2. Sample Sizes Used for the Test Set and Data Provenance
| Algorithm | N (Sample Size) | Data Provenance (Countries/Regions) | Retrospective/Prospective |
|---|---|---|---|
| **CT Processing** | | | |
| Liver Segmentation | 50 | US: 32, OUS: 18 | Not specified (implied retrospective from clinical sites) |
| Ablation Target Segmentation | 59 | US: 30, OUS: 29 | Not specified (implied retrospective from clinical sites) |
| Ablation Zone Segmentation | 59 | US: 30, OUS: 29 | Not specified (implied retrospective from clinical sites) |
| Liver Vessels Segmentation | 100 | US: 72, OUS: 28 | Not specified (implied retrospective from clinical sites) |
| **MR Processing** | | | |
| Liver Segmentation | 25 | US: 25 | Not specified (implied retrospective from clinical sites) |
| Ablation Target Segmentation | 50 | US: 46, OUS: 4 | Not specified (implied retrospective from clinical sites) |
| **Image Registration** | | | |
| Pre-ablation CT to Post-ablation CT Image Registration | 46 | US: 13, OUS: 33 | Not specified (implied retrospective from clinical sites) |
| Pre-ablation MR to Post-ablation CT Image Registration | 25 | US: 25 | Not specified (implied retrospective from clinical sites) |
| Pre-ablation MR to Pre-ablation CT Image Registration | 18 | US: 14, OUS: 4 | Not specified (implied retrospective from clinical sites) |
3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications
The document does not explicitly state the "number of experts used to establish the ground truth for the test set" or their specific "qualifications." It generally refers to "performance data demonstrate that the VisAble.IO (V 1.4) is as safe and effective as the cleared VisAble.IO (K223693)," but does not detail the specific ground truth generation process for the reported performance metrics.
4. Adjudication Method for the Test Set
The document does not specify an adjudication method (e.g., 2+1, 3+1, none) for the test set.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size
No MRMC comparative effectiveness study is mentioned in the provided text, nor is there any discussion of human reader improvement with or without AI assistance.
6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done
Yes, the performance data presented in the table (DICE scores, MCD) are for the algorithms themselves, indicating a standalone performance evaluation. The document highlights that "VisAble.IO uses several algorithms to perform operations to present information to the user in order for them to evaluate the planned and post ablation zones," and then presents the algorithmic validation results. However, it also clarifies that the final clinical accuracy of segmentations and registrations is dependent on user actions.
7. The Type of Ground Truth Used
The ground truth for the algorithmic performance (e.g., DICE scores for segmentation, MCD for registration) is implicitly expert-derived segmentation and registration. While the document doesn't explicitly detail the process, DICE scores and Mean Corresponding Distances are calculated by comparing algorithmic outputs to a pre-established "true" segmentation or correspondence, which in medical imaging is typically generated by human experts (e.g., radiologists, experienced technicians).
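Consistent with this reading, the Mean Corresponding Distance (MCD) quoted for registration can be understood as the mean Euclidean distance between matched landmark pairs after one image is registered into the frame of the other. A minimal sketch, assuming explicit point correspondences are available; the landmark coordinates below are invented for illustration:

```python
import numpy as np

def mean_corresponding_distance(pts_a: np.ndarray, pts_b: np.ndarray) -> float:
    """Mean Euclidean distance (mm) between corresponding landmark pairs.

    pts_a, pts_b: (N, 3) arrays of matched landmark coordinates, e.g. the
    same anatomical points identified in the pre- and post-ablation images
    after registration.
    """
    return float(np.linalg.norm(pts_a - pts_b, axis=1).mean())

# Toy landmarks with a few mm of residual misalignment per point.
pre = np.array([[10.0, 20.0, 30.0], [15.0, 25.0, 35.0]])
post = pre + np.array([[3.0, 0.0, 0.0], [0.0, 4.0, 0.0]])
print(mean_corresponding_distance(pre, post))  # (3 + 4) / 2 = 3.5
```

Under this interpretation, the reported MCDs of roughly 4–5 mm describe the typical residual misalignment of the algorithmic registration before any manual refinement by the user.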
8. The Sample Size for the Training Set
- CT Processing - Liver Segmentation Algorithm: N = 1091 contrast-enhanced CT images
- CT Processing - Liver Vessel Segmentation Algorithm: N = 393 contrast-enhanced CT images
- MR Processing - Liver Segmentation AI algorithm: N = 418 MR images
9. How the Ground Truth for the Training Set Was Established
The document provides details on the characteristics of the training datasets but does not explicitly state how the ground truth for these training sets was established. It describes the data as "contrast-enhanced CT images taken for diagnostic reading" or "MR images taken for diagnostic reading," suggesting that these were real-world clinical images, but the manual annotation or expert review process for creating the ground truth for training is not described.
(266 days)
VisAble.IO
VisAble.IO is a Computed Tomography (CT) image processing software package available for use with liver ablation procedures.
VisAble.IO is controlled by the user via a user interface.
VisAble.IO imports images from CT scanners and facility PACS systems for display and processing during liver ablation procedures.
VisAble.IO is used to assist physicians in planning liver ablation procedures, including identifying ablation targets and virtual ablation needle placement. VisAble.IO is used to assist physicians in confirming ablation zones.
The software is not intended for diagnosis. The software is not intended to predict ablation volumes or predict ablation success.
VisAble.IO is a stand-alone software application with tools and features designed to assist users in planning ablation procedures, as well as tools for treatment confirmation. The use environment for VisAble.IO is the Operating Room and other hospital healthcare environments, such as the interventional radiology control room.
VisAble.IO has five distinct workflow steps:
- Data Import
- Anatomic Structures Segmentation (Liver, Hepatic Vein, Portal Vein, Ablation Target)
- Instrument Placement (Needle Planning)
- Ablation Zone Segmentation
- Treatment Confirmation (Registration of Pre- and Post-Interventional Images; Quantitative Analysis)
Of these workflow steps, two (Anatomic Segmentation and Instrument Placement) make use of the planning image. These workflow steps contain features and tools designed to support the planning of ablation procedures. The other two (Ablation Zone Segmentation and Treatment Confirmation) make use of the confirmation image volume. These workflow steps contain features and tools designed to support the evaluation of the ablation procedure's technical performance in the confirmation image volume.
Key features of the VisAble.IO Software include:
- Workflow steps availability
- Manual and automated tools for anatomic structures and ablation zone segmentation
- Overlaying and positioning virtual instruments (ablation needles) and user-selected estimates of the ablation regions onto the medical images
- Image fusion and registration
- Compute achieved margins and missed volumes to help the user assess the coverage of the ablation target by the ablation zone
- Data saving and secondary capture generation
The software components provide functions for performing operations related to image display, manipulation, analysis, and quantification, including features designed to facilitate segmentation of the ablation target and ablation zones.
The software system runs on a dedicated computer and is intended for the display and processing of Computed Tomography (CT) images, including contrast-enhanced images.
The system can be used on patient data for any patient demographic chosen to undergo the ablation treatment.
VisAble.IO uses several algorithms to perform operations that present information to the user in order for them to evaluate the planned and post-ablation zones. These include:
- Segmentation
- Image Registration
- Measurement and Quantification
VisAble.IO is intended to be used for ablations with the following ablation instruments:
For needle planning, the software currently supports the following needle models:
- Medtronic: Emprint Antenna 15CM, 20CM, 30CM
- NeuWave Medical: PR Probe 15CM, 20CM; PR XT Probe 15CM, 20CM; LK Probe 15CM, 20CM; LK XT Probe 15CM, 20CM
For treatment confirmation (including segmentation and registration), the software is compatible with all ablation devices, as these functions are independent of probes/power settings.
The provided text describes the VisAble.IO device and its performance testing for FDA 510(k) clearance. Here's a breakdown of the requested information based on the document:
1. A table of acceptance criteria and the reported device performance
The document uses "Primary Performance Goal" as the acceptance criterion and "Primary Endpoint" as the reported device performance.
| Algorithm | Primary Performance Goal (Acceptance Criteria) | Primary Endpoint (Reported Performance) |
|---|---|---|
| Liver Segmentation | Mean DICE = 0.92 | Mean DICE = 0.98 |
| Ablation Target Segmentation | Mean DICE = 0.70 | Mean DICE = 0.80 |
| Ablation Zone Segmentation | Mean DICE = 0.70 | Mean DICE = 0.86 |
| Liver Vessels Segmentation | Mean DICE = 0.70 | Mean DICE = 0.72 |
| PrePost Ablation Image Registration | MCD* = 6.06 mm | MCD* = 4.11 mm |

*MCD = Mean Corresponding Distance
Note: The document states that segmentation tools provide manual and semi-automated segmentation, and post-processing. The clinical accuracy of segmentation is referred to as "a user operation and the clinical accuracy of segmentation is the responsibility of the user and not a VisAble.IO function." Similarly, for registration, it states "Final accuracy of registration is dependent on user assessment and manual modification of the registration prior to acceptance, and not a VisAble.IO function." This suggests that the reported performance metrics (DICE scores and MCD) likely reflect the algorithm's capability to provide good initial segmentations and registrations for user refinement.
2. Sample sizes used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
The sample sizes for the test sets are provided in the table. The provenance for the training/validation datasets is described generally as:
- Liver Segmentation Algorithm Test Set Size: N = 50
  - Provenance for training/validation (not explicitly test set data): 1091 contrast-enhanced CT images from arterial or venous phases.
  - Location of clinical sites: Germany, France, Turkey, Japan, Israel, Netherlands, Canada, USA, UK (38 clinical sites)
- Ablation Target Segmentation Test Set Size: N = 59
- Ablation Zone Segmentation Test Set Size: N = 59
- Liver Vessels Segmentation Test Set Size: N = 100
  - Provenance for training/validation (not explicitly test set data): N = 393 contrast-enhanced CT images from the portal-venous or late venous phases.
  - Location of clinical sites: Central Europe, North America, East Asia (36 clinical sites)
- PrePost Ablation Image Registration Test Set Size: N = 46
The document doesn't explicitly state whether the test set data was retrospective or prospective. Given that it's performance data for a 510(k) submission, it is typically retrospective data collected for validation.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
The document does not specify the number of experts used to establish the ground truth for the test set or their qualifications.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
The document does not specify the adjudication method used for the test set, nor does it explicitly mention a process of expert adjudication for the ground truth.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
The document does not mention a multi-reader multi-case (MRMC) comparative effectiveness study or any effect size related to human reader improvement with AI assistance. The study focuses on the standalone algorithmic performance.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
Yes, the performance data presented (DICE scores and MCD) are for the standalone algorithmic performance. The text explicitly states that the "clinical accuracy of segmentation is the responsibility of the user and not a VisAble.IO function" and "final accuracy of registration is dependent on user assessment and manual modification... and not a VisAble.IO function," suggesting the provided metrics are for the initial algorithmic output prior to user intervention.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The document does not explicitly state the type of ground truth used (e.g., expert consensus, pathology, etc.) for the segmentation and registration algorithms. It implies that the "Primary Performance Goal" was set for these algorithms, suggesting a pre-defined or expert-derived ground truth was used for comparison, but the methodology for establishing it is not detailed.
8. The sample size for the training set
The document provides the sample sizes for the training and model validation datasets as:
- Liver Segmentation Algorithm: 1091 contrast-enhanced CT images.
- Liver Vessel Segmentation Algorithm: N=393 contrast-enhanced CT images.
- The sample sizes for training of Ablation Target Segmentation, Ablation Zone Segmentation, and PrePost Ablation Image Registration algorithms are not explicitly stated in the provided text.
9. How the ground truth for the training set was established
The document does not explicitly describe how the ground truth for the training set was established. It only mentions the characteristics of the images used for training (e.g., contrast-enhanced CT, arterial/venous phases, age/sex distribution, location of clinical sites, imaging procedure).