K Number
K244010
Device Name
ExamVue Apex
Date Cleared
2025-02-24

(60 days)

Product Code
Regulation Number
892.1680
Panel
RA
Reference & Predicate Devices
Intended Use

The ExamVue Apex flat panel x-ray detector system is indicated for use in general radiology including podiatry, orthopedic, and other specialties, and in mobile x-ray systems. The ExamVue Apex flat panel x-ray detector system is not indicated for use in mammography.

Device Description

The ExamVue Apex flat panel x-ray detector system consists of a line of three solid-state x-ray detector models of differing size and characteristics, combined with a single controlling software application designed for use by radiologists and radiology technicians to acquire digital x-ray images. The ExamVue Apex flat panel x-ray detector system captures digital images of anatomy by converting x-rays to electronic signals, eliminating the need for film or chemical processing to create a hard-copy image. The ExamVue Apex flat panel x-ray detector system incorporates the ExamVue Duo software, which performs the processing, presentation, and storage of the image in DICOM format. All models of the ExamVue Apex flat panel x-ray detector system use a Si TFTD to collect the light generated by a CsI scintillator for the purpose of creating a digital x-ray image.

The three available models are:

EVA 14W, with a 14x17in (35x43cm) wireless cassette sized panel
EVA 17W, with a 17x17in (43x43cm) wireless cassette sized panel
EVA 10W, with a 10x12in (24x30cm) wireless cassette sized panel
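
The 510(k) summary says only that the ExamVue Duo software handles processing, presentation, and storage of images in DICOM format; it gives no implementation details. Purely as an illustration of what storing a captured detector frame as a DICOM object involves, the following is a minimal sketch using the open-source pydicom library. The UIDs, demographics, pixel data, and file name are hypothetical placeholders and do not come from the submission.

```python
import datetime

import numpy as np
from pydicom.dataset import FileDataset, FileMetaDataset
from pydicom.uid import ExplicitVRLittleEndian, generate_uid

# Stand-in for a frame read out from the detector (random values, 14-bit depth assumed).
frame = np.random.randint(0, 2**14, size=(2048, 2048), dtype=np.uint16)

# File meta information: Digital X-Ray Image Storage - For Presentation SOP class.
file_meta = FileMetaDataset()
file_meta.MediaStorageSOPClassUID = "1.2.840.10008.5.1.4.1.1.1.1"
file_meta.MediaStorageSOPInstanceUID = generate_uid()
file_meta.TransferSyntaxUID = ExplicitVRLittleEndian

ds = FileDataset("captured_frame.dcm", {}, file_meta=file_meta, preamble=b"\x00" * 128)
ds.is_little_endian = True
ds.is_implicit_VR = False

ds.SOPClassUID = file_meta.MediaStorageSOPClassUID
ds.SOPInstanceUID = file_meta.MediaStorageSOPInstanceUID
ds.Modality = "DX"
ds.PatientName = "TEST^PHANTOM"   # placeholder demographics
ds.PatientID = "PHANTOM-001"
ds.StudyDate = datetime.date.today().strftime("%Y%m%d")

# Image pixel module describing the stored frame.
ds.Rows, ds.Columns = int(frame.shape[0]), int(frame.shape[1])
ds.SamplesPerPixel = 1
ds.PhotometricInterpretation = "MONOCHROME2"
ds.BitsAllocated = 16
ds.BitsStored = 14
ds.HighBit = 13
ds.PixelRepresentation = 0        # unsigned integer pixel values
ds.PixelData = frame.tobytes()

ds.save_as("captured_frame.dcm")
```

The resulting file can be opened with any DICOM viewer or re-read with pydicom; an actual acquisition workstation would, of course, populate the demographics, acquisition parameters, and UIDs from the study context rather than placeholders.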

AI/ML Overview

The provided text describes the regulatory clearance of a digital X-ray detector system, the "ExamVue Apex," and its substantial equivalence to a predicate device. However, it does not contain specific acceptance criteria or an analytical study proving the device meets those criteria, as one would typically find for an AI/ML medical device submission with defined performance metrics (e.g., sensitivity, specificity, AUC).

Instead, the submission focuses on demonstrating substantial equivalence through:

  • Bench Testing: Comparing engineering specifications like resolution, sensitivity, and dynamic range to the predicate device.
  • Software Validation: Ensuring the software adheres to relevant standards (IEC 62304) and performs expected functions.
  • Clinical Testing: An ABR-certified radiologist visually evaluating image quality as equivalent to or better than that of the predicate device.

Therefore, many of the requested fields regarding a detailed statistical study (sample size, ground truth, expert adjudication, MRMC study, standalone performance) cannot be filled from the provided text, because such a study, with quantitative acceptance criteria, does not appear to have been performed or reported in this 510(k) summary. The submission relies on demonstrating equivalence through technical specifications and expert opinion on image quality rather than on rigorous statistical performance criteria for an AI algorithm.

Here's a breakdown of what can be extracted and what cannot:

1. A table of acceptance criteria and the reported device performance

The provided text does not define explicit quantitative acceptance criteria for device performance in the typical AI/ML sense (e.g., a target sensitivity of X% or specificity of Y%). Instead, the "acceptance criteria" are implied by demonstrating "similar or greater imaging characteristics" compared to the predicate device and that the software "performs the same required basic functions."

Acceptance Criteria (Implied) and Reported Device Performance:

Bench Testing (Comparison to Predicate):
  a. Resolution equivalent or greater: "product performs with similar or greater imaging characteristics" (general statement). Specific comparison metrics for ExamVue Apex vs. predicate: pixel pitch (99um vs 143/140/143um), DQE @ 0 lp/mm (73% @ 6 μGy / 70% @ 2 μGy vs 57%/60%/58%), MTF @ 1 lp/mm (68% vs 63%/68%/65%), all indicating equal or improved performance for the Apex.
  b. Sensitivity equivalent or greater: "product performs with similar or greater imaging characteristics" (general statement).
  c. Dynamic range in image acquisition equivalent or greater: "product performs with similar or greater imaging characteristics" (general statement).
  d. Software performs the same basic functions: "software performs the same required basic functions as the predicate device."

Software Validation:
  a. Designed and developed according to IEC 62304: "The software was designed and developed according to a software development process in compliance with IEC 62304."
  b. Performs all functions of the predicate's software: "tested to show that it performs all the functions of the software in the predicate device." The software "performs the same functions as the software for the predicate device with some additional features."

Clinical Testing:
  a. Image quality equivalent or better than the predicate device: "The images were evaluated by an ABR certified radiologist who evaluated the image quality to be of equivalent or better to the predicate device."
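
As a quick sanity check on the pixel-pitch figures quoted in the comparison above, the limiting (Nyquist) spatial frequency a sampled detector can represent is 1 / (2 × pixel pitch). The sketch below applies that standard relationship to the 99um pitch of the ExamVue Apex and the 143/140/143um pitches attributed to the predicate models; which pitch belongs to which predicate model is not stated in the summary, so the labels here are placeholders.

```python
def nyquist_lp_per_mm(pixel_pitch_um: float) -> float:
    """Highest spatial frequency (line pairs per mm) resolvable at the given pixel pitch."""
    pitch_mm = pixel_pitch_um / 1000.0
    return 1.0 / (2.0 * pitch_mm)

# Pitches quoted in the 510(k) comparison; model labels are illustrative only.
pitches_um = {
    "ExamVue Apex (99 um)": 99.0,
    "Predicate model A (143 um)": 143.0,
    "Predicate model B (140 um)": 140.0,
    "Predicate model C (143 um)": 143.0,
}

for label, pitch in pitches_um.items():
    print(f"{label}: {nyquist_lp_per_mm(pitch):.2f} lp/mm")

# Approximate output:
#   ExamVue Apex (99 um): 5.05 lp/mm
#   Predicate model A (143 um): 3.50 lp/mm
#   Predicate model B (140 um): 3.57 lp/mm
#   Predicate model C (143 um): 3.50 lp/mm
```

The smaller pitch of the Apex therefore supports a higher sampling-limited resolution, which is consistent with the summary's claim of "similar or greater imaging characteristics," though actual resolution also depends on the scintillator and readout (reflected in the MTF and DQE figures).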

2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

  • Sample Size: Not specified. The clinical testing merely states "Clinical data was provided with the submission to demonstrate equivalence with the predicate device. This data includes images of all the relevant ROI." It doesn't quantify the number of images or patients.
  • Data Provenance: Not specified (country, retrospective/prospective).

3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)

  • Number of Experts: "an ABR certified radiologist" (the singular phrasing implies a single evaluator).
  • Qualifications: "ABR certified radiologist." No mention of years of experience.

4. Adjudication method (e.g., 2+1, 3+1, none) for the test set

  • Adjudication Method: Not specified. Since only one radiologist is mentioned, it's likely "none" in the sense of a consensus or adjudication process among multiple readers. The single ABR-certified radiologist provided the evaluation.

5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

  • MRMC Study: No, an MRMC comparative effectiveness study was not explicitly stated or described. This submission is for a general X-ray detector system, not specifically an AI-powered diagnostic algorithm designed to assist human readers. The "AI" mentioned is the software components related to image acquisition, processing, and management, not a diagnostic AI.

6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

  • Standalone Performance: Not applicable in the context of an AI diagnostic algorithm. This device is a digital X-ray detector system. Its "performance" refers to image quality and functionality, not a diagnostic output from an algorithm that would then require standalone performance metrics (e.g., sensitivity/specificity for disease detection). The software handles image processing, presentation, and storage, not automated diagnosis.

7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

  • Type of Ground Truth: The "ground truth" for the clinical testing was the visual evaluation of image quality by an ABR-certified radiologist. It's not a diagnostic ground truth (like pathology for cancer detection) but rather an assessment of whether the images produced by the new device are diagnostically acceptable and equivalent/superior to those from the predicate device.

8. The sample size for the training set

  • Sample Size for Training Set: Not applicable/not specified. This device is an X-ray detector and associated software for image acquisition and processing, not a deep learning AI model that requires a "training set" to learn features for interpretation. The software's development (as per IEC 62304) involves validation and verification, but not "training" in the machine learning sense.

9. How the ground truth for the training set was established

  • Ground Truth for Training Set: Not applicable. (See #8).

§ 892.1680 Stationary x-ray system.

(a) Identification. A stationary x-ray system is a permanently installed diagnostic system intended to generate and control x-rays for examination of various anatomical regions. This generic type of device may include signal analysis and display equipment, patient and equipment supports, component parts, and accessories.

(b) Classification. Class II (special controls). A radiographic contrast tray or radiology diagnostic kit intended for use with a stationary x-ray system only is exempt from the premarket notification procedures in subpart E of part 807 of this chapter subject to the limitations in § 892.9.