510(k) Data Aggregation
(76 days)
BioTraceIO Vision is a Computed Tomography (CT) and Magnetic Resonance (MR) image processing software package available for use with ablation procedures.
BioTraceIO Vision is controlled by the user via a user interface.
BioTraceIO Vision imports images from CT and MR scanners and facility PACS systems for display and processing during ablation procedures.
BioTraceIO Vision is used to assist physicians in planning ablation procedures, including identifying ablation targets and virtual ablation needle placement. BioTraceIO Vision is used to assist physicians in confirming ablation zones.
The software is not intended for diagnosis. The software is not intended to predict ablation volumes or predict ablation success.
BioTraceIO Vision is a stand-alone software application with tools and features designed to assist users in planning ablation procedures, as well as tools for treatment confirmation. The use environment for BioTraceIO Vision is the operating room and hospital healthcare environments such as the interventional radiology control room.
BioTraceIO Vision has six distinct workflow steps:
- Data Import
- Anatomic Structures Segmentation (Liver, Kidney, Hepatic Vein, Portal Vein, Ablation Target)
- Instrument Placement (Needle Planning) for Microwave Ablation (MWA) or Cryoablation (Cryo) Procedures
- Ablation Zone/Ice ball Segmentation
- Registration of Pre-Procedure Images
- Treatment Confirmation (Registration of Pre- and Post-Interventional Images; Quantitative Analysis)
Of these workflow steps, four (Anatomic Segmentation, Ablation Target Segmentation, Registration of Pre-Procedure Images, and Instrument Placement) make use of the planning image; they contain features and tools designed to support the planning of ablation procedures. The other two (Ablation Zone Segmentation and Treatment Confirmation) make use of the confirmation image volume; they contain features and tools designed to support the evaluation of the ablation procedure's technical performance in that volume.
Key features of the BioTraceIO Vision Software include:
- Availability of distinct workflow steps
- Manual and automated tools for anatomic structures and ablation target/zone segmentation
- Overlaying and positioning virtual instruments (ablation needles) and user-selected estimates of the ablation regions onto the medical images
- Image registration
- Computation of achieved margins and missed volumes to help the user assess the coverage of the ablation target by the ablation zone
- Data saving and secondary capture generation
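The margin/missed-volume feature listed above can be illustrated with a toy computation on boolean voxel masks. This is a simplified sketch, not BioTraceIO Vision's actual algorithm; the function and variable names (`coverage_metrics`, `target_mask`, `ablation_mask`) are hypothetical:

```python
import numpy as np

def coverage_metrics(target_mask, ablation_mask, voxel_volume_mm3=1.0):
    """Toy coverage assessment on boolean voxel masks.

    target_mask   -- voxels belonging to the planned ablation target
    ablation_mask -- voxels belonging to the achieved ablation zone
    Returns the missed volume (target voxels not covered by the
    ablation zone) in mm^3 and the covered fraction of the target.
    """
    target = target_mask.astype(bool)
    ablation = ablation_mask.astype(bool)
    missed = target & ~ablation
    missed_volume = missed.sum() * voxel_volume_mm3
    covered_fraction = 1.0 - missed.sum() / max(target.sum(), 1)
    return missed_volume, covered_fraction

# Tiny example: a 4x4x4 target; the ablation zone is shifted by one voxel
target = np.zeros((8, 8, 8), dtype=bool)
target[2:6, 2:6, 2:6] = True
ablation = np.zeros_like(target)
ablation[3:7, 2:6, 2:6] = True
missed_mm3, covered = coverage_metrics(target, ablation)
print(missed_mm3, covered)  # 16.0 0.75 -- one 4x4 slab of the target is missed
```

A real implementation would also compute the minimum margin (distance from the target surface to the ablation-zone surface), which this sketch omits.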
The software components provide functions for performing operations related to image display, manipulation, analysis, and quantification, including features designed to facilitate segmentation of the ablation target and ablation zones.
The software system runs on a dedicated computer and is intended for the display and processing of Computed Tomography (CT) and Magnetic Resonance (MR) images, including contrast-enhanced images.
The system can be used on patient data from any patient demographic chosen to undergo ablation treatment.
BioTraceIO Vision uses several algorithms to present information that helps the user evaluate the planned and post-ablation zones. These include:
- Segmentation
- Image Registration
- Measurement and Quantification
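The submission does not disclose how BioTraceIO Vision's registration algorithm works. As a generic illustration of the rigid point-based case, here is the classical Kabsch algorithm, which finds the least-squares rotation and translation aligning paired 3D landmarks (a sketch only; names like `rigid_register` are hypothetical):

```python
import numpy as np

def rigid_register(source, target):
    """Least-squares rigid (rotation + translation) alignment of
    paired 3D points (Kabsch algorithm). Inputs are N x 3 arrays;
    returns R, t such that target ~= source @ R.T + t."""
    sc, tc = source.mean(axis=0), target.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (source - sc).T @ (target - tc)
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tc - R @ sc
    return R, t

# Points related by a pure 5 mm translation along x
src = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
tgt = src + np.array([5., 0., 0.])
R, t = rigid_register(src, tgt)
print(np.allclose(R, np.eye(3)), np.allclose(t, [5., 0., 0.]))  # True True
```

Clinical image registration for soft-tissue organs such as the liver is typically deformable rather than rigid, so this should be read only as the simplest member of the algorithm family.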
BioTraceIO Vision is intended to be used for ablations with the following ablation instruments. For needle planning, the software currently supports the following needle models:
- Microwave ablation
- AngioDynamics: Solero Probe 14CM, 19CM, 29CM
- HS HOSPITAL SERVICE: Amica Probe (16G) 15CM, 20CM, 27CM; Amica Probe (14G) 15CM, 20CM, 27CM
- Medtronic Covidien: Emprint Antenna 15CM, 20CM, 30CM
- NeuWave Medical: LK Probe 15CM, 20CM; LK XT Probe 15CM, 20CM; PR Probe 15CM, 20CM; PR XT Probe 15CM, 20CM
- Varian Medical Systems: Ximitry Probe 15CM, 20CM, 27CM
- Cryo ablation
- Boston Scientific: IceForce 2.1 CX, CX L; IcePearl 2.1 CX, CX L; IceRod 1.5 CX; IceSeed 1.5 CX, CX S; Ice Sphere 1.5 CX
- IceCure Medical: ProSense 10G Spheric, Elliptic 14CM, Elliptic 18.5CM; ProSense 13G Spheric, Elliptic
- Varian Medical Systems: ISOLIS 2.1 E Probe 15CM, 20CM; ISOLIS 2.1 S Probe 15CM, 20CM; RA Slimline 1.7 15CM, 20CM; RA Slimline 1.7 Round 15CM, 7CM; RA Slimline 2.4 15CM, 23CM; RA 3.8 13CM, 28CM; V-Probe
For treatment confirmation (including segmentation and registration), the software is compatible with all ablation devices, as these functions are independent of probe models and power settings.
Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided FDA 510(k) clearance letter for BioTraceIO Vision (V1.7):
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state "acceptance criteria" with numerical thresholds against which the performance metrics are directly compared. Instead, it presents performance metrics from various algorithmic tests, implying that the achieved performance was deemed acceptable for clearance. Based on the provided performance data, here's a table that summarizes the key metrics:
| Imaging Modality | Algorithm | Metric | Reported Device Performance | Implied Acceptance Criteria (Minimum Threshold for Clearance) |
|---|---|---|---|---|
| CT | Liver Segmentation | Mean DICE | 0.98 | Implied: High DICE score (e.g., >0.90 for organ segmentation) |
| MR | Liver Segmentation | Mean DICE | 0.93 | Implied: High DICE score (e.g., >0.90 for organ segmentation) |
| CT | Kidney Segmentation | Mean DICE | 0.91 | Implied: High DICE score (e.g., >0.90 for organ segmentation) |
| CT | Liver Ablation Target Segmentation | Mean DICE | 0.82 | Implied: Good DICE score (e.g., >0.75 for target segmentation) |
| CT | Kidney Ablation Target Segmentation (1, 2, 3 strokes) | Mean DICE | 0.79 | Implied: Good DICE score (e.g., >0.75 for target segmentation) |
| MR | Liver Ablation Target Segmentation | Mean DICE | 0.76 | Implied: Good DICE score (e.g., >0.75 for target segmentation) |
| CT | Liver Ablation Zone Segmentation | Mean DICE | 0.88 | Implied: Good DICE score (e.g., >0.85 for ablation zone) |
| CT | Kidney Ablation Zone Segmentation (1, 2, 3 strokes) | Mean DICE | 0.76, 0.77, 0.78 | Implied: Good DICE score (e.g., >0.75 for ablation zone) |
| CT | Kidney Ice Ball Segmentation (1, 2, 3 strokes) | Mean DICE | 0.80, 0.81, 0.83 | Implied: Good DICE score (e.g., >0.80 for ice ball) |
| CT | Liver Vessels Segmentation (HV+PV) | Mean DICE | 0.72 | Implied: Acceptable DICE score (e.g., >0.70 for vessels) |
| CT | Liver Vessels Segmentation (HV+PV) | Mean Centerline DICE | 0.76 | Implied: Acceptable Centerline DICE score (e.g., >0.75 for vessels) |
| CT/MR | Liver Registration Pre-ablation MR – Pre-ablation CT | MCD | 5.04 mm | Implied: Acceptable registration accuracy (e.g., < 5-10 mm) |
| CT | Kidney Registration Diagnostic CT – Pre-ablation CT | MCD | 4.61 mm | Implied: Acceptable registration accuracy (e.g., < 5 mm) |
| CT | Liver Registration Pre-ablation CT – Post-ablation CT | MCD | 4.09 mm | Implied: Acceptable registration accuracy (e.g., < 5 mm) |
| CT | Kidney Registration Pre-ablation CT – Post-ablation CT | MCD | 3.06 mm | Implied: Acceptable registration accuracy (e.g., < 5 mm) |
| CT/MR | Liver Registration Pre-ablation MR – Post-ablation CT | MCD | 4.75 mm | Implied: Acceptable registration accuracy (e.g., < 5 mm) |
Note: The "Implied Acceptance Criteria" are inferred from the successful clearance and commonly accepted performance benchmarks in medical imaging. The FDA clearance document itself does not explicitly list these thresholds.
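For reference, the two metrics reported in the table can be computed as follows. This is a minimal sketch on synthetic arrays reflecting the standard definitions of DICE and mean corresponding distance; the submission does not disclose the exact implementation used:

```python
import numpy as np

def dice(a, b):
    """DICE similarity coefficient between two boolean masks:
    2*|A intersect B| / (|A| + |B|). 1.0 means perfect overlap."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * (a & b).sum() / denom if denom else 1.0

def mean_corresponding_distance(points_a, points_b):
    """Mean Euclidean distance between paired corresponding points
    (N x 3 arrays in mm) -- a simple reading of the MCD metric."""
    return float(np.linalg.norm(points_a - points_b, axis=1).mean())

# Two 6x6 squares offset by one voxel: 25 of 36 voxels overlap
a = np.zeros((10, 10), dtype=bool); a[2:8, 2:8] = True
b = np.zeros((10, 10), dtype=bool); b[3:9, 3:9] = True
print(round(dice(a, b), 3))  # 2*25/72 -> 0.694

# Landmarks displaced by a uniform (3, 4, 0) mm shift
pa = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
pb = pa + np.array([3.0, 4.0, 0.0])
print(mean_corresponding_distance(pa, pb))  # 5.0
```

Note that DICE is sensitive to structure size (small targets score lower for the same boundary error), which is consistent with the lower DICE values reported for ablation targets versus whole organs in the table.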
2. Sample Sizes Used for the Test Set and the Data Provenance
The document provides details of the training sets for different algorithms, but it does not explicitly state the sample size or provenance of the test set used for the performance validation summarized in the table. The performance data section refers to "validation results" without describing the test cohort separately from the training cohort or its characteristics (country of origin, retrospective vs. prospective collection). This is common in 510(k) summaries, which often highlight training data but may not provide granular details on validation sets.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts
The document does not specify the number or qualifications of experts used to establish the ground truth for the test set. It mentions that ground truth for the training set was established, but not for the validation exercises specifically.
4. Adjudication Method for the Test Set
The document does not describe any adjudication method (e.g., 2+1, 3+1) used for establishing the ground truth for the algorithms' test sets.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs without AI Assistance
The provided text does not indicate that a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was performed. The performance data focuses on algorithmic performance (DICE, MCD) in a standalone manner, not on human reader improvement with AI assistance. The device's intended use is to "assist physicians" and the clinical accuracy of segmentation/registration is stated as "the responsibility of the user." This suggests the focus was on the algorithm's accuracy as a tool rather than a comprehensive human-AI interaction study.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
Yes, a standalone performance study was done. The performance data presented (Mean DICE and Mean Corresponding Distance) directly reflect the accuracy of the algorithms (segmentation, registration) in isolation, without a human in the loop during the measurement of these metrics. For example, for Ablation Target Segmentation the document reports the algorithm's DICE score, not a human reader's improvement when using the tool. However, for Ablation Target Segmentation (Kidney and Liver) and for Kidney Ablation Zone and Ice Ball Segmentation, results are broken out by 1, 2, or 3 user strokes, indicating that some user input still contributes to the algorithm's final output for these functionalities. The metrics themselves (Mean DICE, MCD) measure the algorithm's output against ground truth.
7. The Type of Ground Truth Used (Expert Consensus, Pathology, Outcomes Data, etc.)
The document does not explicitly state the type of ground truth used for validation (e.g., expert consensus, pathology, surgical findings). For the training set, it can be inferred that expert annotations were used, but this is not explicitly stated for the validation.
8. The Sample Size for the Training Set
The training set sample sizes are provided for the AI algorithms:
- Liver Segmentation Algorithm for CT Processing: 1091 contrast-enhanced CT images.
- Liver Segmentation for MR Processing: 418 MR images.
- Liver Vessel Segmentation Algorithm for CT processing: 393 contrast-enhanced CT images.
- Kidney Segmentation for CT processing: 300 contrast-enhanced preoperative CT images.
9. How the Ground Truth for the Training Set Was Established
The document does not explicitly detail the method for establishing the ground truth for the training set. It mentions the imaging procedure (e.g., "Contrast-enhanced CT images taken for diagnostic reading") which implies that clinical images with pre-existing or subsequently generated expert annotations would have been used for training. However, the exact process (e.g., number of annotators, their qualifications, adjudication) is not specified.