K Number
K222998
Device Name
F3
Date Cleared
2023-06-20

(265 days)

Product Code
Regulation Number
892.1650
Panel
RA
Reference & Predicate Devices
Predicate For
N/A
Intended Use

F3 provides image guidance by overlaying a previously constructed preoperative vessel anatomy, from a previously acquired contrast-enhanced, diagnostic CT scan, onto live fluoroscopic images in order to assist in the positioning of guidewires, catheters and other endovascular devices.

F3 is intended to assist fluoroscopy-guided endovascular procedures in the thorax. Suitable procedures include endovascular aortic aneurysm repair (AAA and mid-distal TAA) and angioplasty.

F3 is not intended for use in X-ray-guided procedures in the liver, kidneys, or pelvic organs.

Device Description

The purpose of F3 is to assist the user with the visual evaluation, comparison, and merging of information between anatomical and functional images from a single patient. The user needs to take into consideration the product's limitations and accuracy when integrating the information from the registration results for final interpretation. F3 does not replace the usual procedures for visual comparison of datasets by a user. Fusion images are intended to provide additional information to a user's existing workflow for patient evaluation.

F3 offers:

  • Visualization of multi-modality image data
  • Automatic registration
  • Import of DICOM data
  • Capture of fluoroscopic image frames
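The "rigid 6-parameter registration" evaluated later in this summary refers to a transform with three rotations and three translations that maps preoperative CT coordinates into the live fluoroscopy frame. The following is a minimal illustrative sketch of such a transform, not the F3 implementation; the function name and parameter conventions are hypothetical:

```python
import numpy as np

def rigid_transform(points, rx, ry, rz, tx, ty, tz):
    """Apply a rigid 6-parameter transform (three rotations in radians,
    three translations) to an (N, 3) array of landmark coordinates."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    # Elementary rotations about the x, y, and z axes.
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    R = Rz @ Ry @ Rx          # composed rotation
    t = np.array([tx, ty, tz])
    return points @ R.T + t   # rotate, then translate
```

Because the transform is rigid, distances between landmarks are preserved; only their pose relative to the fluoroscopy frame changes.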
AI/ML Overview

Based on the provided text, the document describes the non-clinical performance testing of the F3 device, focusing on its registration accuracy and qualitative performance.

Here's a breakdown of the requested information:

1. Table of acceptance criteria and the reported device performance:

| Acceptance Criteria (Defined by Predicate) | Reported Device Performance (F3) |
| --- | --- |
| Clinically acceptable accuracy for rigid 6-parameter registration and dynamic panning: 3 mm Target Registration Error (TRE) on clinical thoracic images captured from an F3-configured setup. | F3 produces clinically acceptable accuracy as defined by the predicate device (3 mm) on clinical thoracic images captured from an F3-configured setup. F3 also produces TRE similar to the predicate device on a thoracic phantom for both rigid 6-parameter registration and dynamic panning. |
| Qualitative preference: results are comparable to, or preferred by, a group of board-certified radiologists. | F3 creates results that are qualitatively preferred by a group of board-certified radiologists. |
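Target Registration Error is conventionally computed as the Euclidean distance between corresponding landmarks after registration. A minimal sketch against the 3 mm threshold cited above; the landmark arrays and function name are hypothetical, not the study's actual evaluation code:

```python
import numpy as np

def target_registration_error(registered_pts, reference_pts):
    """Mean Euclidean distance (here in mm) between registered landmark
    positions and their ground-truth reference positions."""
    return float(np.mean(np.linalg.norm(registered_pts - reference_pts, axis=1)))

# Hypothetical example: two landmarks, each offset by 2 mm along x.
ref = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
reg = ref + np.array([2.0, 0.0, 0.0])
tre = target_registration_error(reg, ref)
# tre == 2.0 mm, within the 3 mm acceptance threshold
```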

2. Sample size used for the test set and the data provenance:

  • Test Set Sample Size: 20 clinical cases.
  • Data Provenance:
    • Country of Origin: Not explicitly stated, but the context implies it's likely a US-based study given the FDA submission.
    • Retrospective or Prospective: Retrospective. The cases were collected "retrospectively over a 2 year period."

3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

  • Number of Experts: "a group of board certified radiologists." The exact number is not specified beyond "a group."
  • Qualifications of Experts: Board certified radiologists. No further details on their experience (e.g., years of experience) are provided.

4. Adjudication method for the test set:

  • Not explicitly mentioned for establishing ground truth or for the qualitative preference assessment beyond "a group of board certified radiologists" preferring the results. The phrasing "expert identified results" for the training set suggests expert consensus, but details for the test set ground truth are limited.

5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:

  • No, a multi-reader multi-case (MRMC) comparative effectiveness study assisting human readers was not conducted. The study evaluated the device's standalone accuracy (technical performance) and qualitative preference by radiologists for the device's output. There's no mention of a study comparing human reader performance with and without F3 assistance.

6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:

  • Yes, standalone performance was evaluated. The primary focus of the non-clinical testing was on the algorithm's registration accuracy (TRE) and its output's qualitative preference, without human interaction as part of the performance metric. The statement "Our registration engines differ... In this work we tested it on hundreds of image pairs to demonstrate its accuracy relative to expert identified results" confirms an algorithm-only evaluation for accuracy.

7. The type of ground truth used:

  • For accuracy assessment (TRE):
    • Phantom data: For the thoracic phantom, the ground truth would be precise, known measurements or landmarks within the phantom.
    • Clinical data: For "clinically acceptable accuracy," the ground truth was defined by the predicate device's acceptable TRE of 3mm. This implies a reference or established method for measuring TRE on clinical images. The text also mentions "accuracy relative to expert identified results" when discussing the intensity-based registration engine more generally.
  • For qualitative preference: Expert consensus/opinion from "a group of board certified radiologists."

8. The sample size for the training set:

  • The training set size is not explicitly stated. However, for the intensity-based registration engine, it mentions, "In this work we tested it on hundreds of image pairs to demonstrate its accuracy relative to expert identified results." This "hundreds of image pairs" likely refers to the development/testing of the engine, which could encompass training, validation, and internal testing. But a specific "training set" size distinct from the "test set" (20 cases) is not provided.

9. How the ground truth for the training set was established:

  • For the intensity-based registration engine, the ground truth was "expert identified results." This implies human experts (likely radiologists or other medical imaging specialists) manually established accurate registrations or measurements that the algorithm was trained and/or validated against. Details on the number of experts or their qualifications for the training set ground truth are not provided.

§ 892.1650 Image-intensified fluoroscopic x-ray system.

(a) Identification. An image-intensified fluoroscopic x-ray system is a device intended to visualize anatomical structures by converting a pattern of x-radiation into a visible image through electronic amplification. This generic type of device may include signal analysis and display equipment, patient and equipment supports, component parts, and accessories.

(b) Classification. Class II (special controls). An anthrogram tray or radiology dental tray intended for use with an image-intensified fluoroscopic x-ray system only is exempt from the premarket notification procedures in subpart E of part 807 of this chapter subject to the limitations in § 892.9. In addition, when intended as an accessory to the device described in paragraph (a) of this section, the fluoroscopic compression device is exempt from the premarket notification procedures in subpart E of part 807 of this chapter subject to the limitations in § 892.9.