Search Results
Found 2 results
510(k) Data Aggregation
(60 days)
The ExamVue Apex flat panel x-ray detector system is indicated for use in general radiology including podiatry, orthopedic, and other specialties, and in mobile x-ray systems. The ExamVue Apex flat panel x-ray detector system is not indicated for use in mammography.
The ExamVue Apex flat panel x-ray detector system consists of a line of 3 different models of solid state x-ray detectors, of differing size and characteristics, combined with a single controlling software designed for use by radiologists and radiology technicians for the acquisition of digital x-ray images. The ExamVue Apex flat panel x-ray detector system captures digital images of anatomy through the conversion of x-rays to electronic signals, eliminating the need for film or chemical processing to create a hard copy image. The ExamVue Apex flat panel x-ray detector system incorporates the ExamVue Duo software, which performs the processing, presentation, and storage of the image in DICOM format. All models of the ExamVue Apex flat panel x-ray detector system use a Si TFTD for the collection of light generated by a CsI scintillator, for the purpose of creating a digital x-ray image.
The three available models are:
EVA 14W, with a 14x17in (35x43cm) wireless cassette sized panel
EVA 17W, with a 17x17in (43x43cm) wireless cassette sized panel
EVA 10W, with a 10x12in (24x30cm) wireless cassette sized panel
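As an aside on the DICOM storage step mentioned in the description above, here is a minimal, hypothetical sketch of how acquisition software might wrap a detector frame in a DICOM file using the open-source pydicom library (pydicom 2.x conventions assumed). Nothing here reflects the actual ExamVue software; the tag values and the `get_detector_frame()` placeholder are illustrative assumptions.

```python
# Minimal, hypothetical sketch: wrapping a detector frame as a DICOM Secondary Capture
# file with pydicom (2.x conventions). get_detector_frame() is a placeholder, not part
# of any ExamVue API.
import datetime
import numpy as np
from pydicom.dataset import Dataset, FileMetaDataset
from pydicom.uid import ExplicitVRLittleEndian, generate_uid

SECONDARY_CAPTURE_SOP_CLASS = "1.2.840.10008.5.1.4.1.1.7"  # Secondary Capture Image Storage

def get_detector_frame() -> np.ndarray:
    """Placeholder for a 14-bit panel readout; a real system reads from hardware."""
    return np.random.randint(0, 2**14, size=(3072, 3072), dtype=np.uint16)

frame = get_detector_frame()

file_meta = FileMetaDataset()
file_meta.MediaStorageSOPClassUID = SECONDARY_CAPTURE_SOP_CLASS
file_meta.MediaStorageSOPInstanceUID = generate_uid()
file_meta.TransferSyntaxUID = ExplicitVRLittleEndian

ds = Dataset()
ds.file_meta = file_meta
ds.is_little_endian = True      # pydicom 2.x encoding flags
ds.is_implicit_VR = False
ds.SOPClassUID = SECONDARY_CAPTURE_SOP_CLASS
ds.SOPInstanceUID = file_meta.MediaStorageSOPInstanceUID
ds.Modality = "DX"
ds.PatientName = "Test^Patient"
ds.PatientID = "000000"
ds.StudyDate = datetime.date.today().strftime("%Y%m%d")
ds.Rows, ds.Columns = frame.shape
ds.SamplesPerPixel = 1
ds.PhotometricInterpretation = "MONOCHROME2"
ds.BitsAllocated = 16
ds.BitsStored = 14
ds.HighBit = 13
ds.PixelRepresentation = 0      # unsigned integer pixel data
ds.PixelData = frame.tobytes()

ds.save_as("frame.dcm", write_like_original=False)
```

A production acquisition chain would use the full Digital X-Ray Image Storage IOD with many more mandatory attributes; the Secondary Capture object is used here only to keep the sketch short.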
The provided text describes the regulatory clearance of a digital X-ray detector system, the "ExamVue Apex," and its substantial equivalence to a predicate device. However, it does not contain specific acceptance criteria or an analytical study proving the device meets those criteria, as one would typically find for an AI/ML medical device submission with defined performance metrics (e.g., sensitivity, specificity, AUC).
Instead, the submission focuses on demonstrating substantial equivalence through:
- Bench Testing: Comparing engineering specifications like resolution, sensitivity, and dynamic range to the predicate device.
- Software Validation: Ensuring the software adheres to relevant standards (IEC 62304) and performs expected functions.
- Clinical Testing: An ABR-certified radiologist visually evaluated image quality as equivalent to or better than that of the predicate device.
Therefore, many of the requested fields regarding a detailed statistical study (sample size, ground truth, expert adjudication, MRMC study, standalone performance) cannot be filled from the provided text because such a study, with quantitative acceptance criteria, does not appear to have been performed or reported in this 510(k) summary. The submission relies on demonstrating equivalence through technical specifications and expert opinion on image quality rather than on rigorous statistical performance criteria for an AI algorithm.
Here's a breakdown of what can be extracted and what cannot:
1. A table of acceptance criteria and the reported device performance
The provided text does not define explicit quantitative acceptance criteria for device performance in the typical AI/ML sense (e.g., a target sensitivity of X% or specificity of Y%). Instead, the "acceptance criteria" are implied by demonstrating "similar or greater imaging characteristics" compared to the predicate device and that the software "performs the same required basic functions."
| Acceptance Criteria (Implied) | Reported Device Performance |
|---|---|
| Bench Testing (Comparison to Predicate): | |
| a. Resolution equivalent or greater | "product performs with similar or greater imaging characteristics" (general statement). Specific comparison metrics for ExamVue Apex vs. predicate: pixel pitch (99 μm vs 143/140/143 μm), DQE @ 0 lp/mm (73% @ 6 μGy / 70% @ 2 μGy vs 57%/60%/58%), MTF @ 1 lp/mm (68% vs 63%/68%/65%) - all indicate equal or improved performance for Apex (see the comparison sketch following this table). |
| b. Sensitivity equivalent or greater | "product performs with similar or greater imaging characteristics" (general statement). |
| c. Dynamic range in image acquisition equivalent or greater | "product performs with similar or greater imaging characteristics" (general statement). |
| d. Software performs the same basic functions | "software performs the same required basic functions as the predicate device." |
| Software Validation: | |
| a. Designed and developed according to IEC 62304 | "The software was designed and developed according to a software development process in compliance with IEC 62304." |
| b. Performs all functions of the predicate's software | "tested to show that it performs all the functions of the software in the predicate device." The software "performs the same functions as the software for the predicate device with some additional features." |
| Clinical Testing: | |
| a. Image quality equivalent or better than predicate device. | "The images were evaluated by an ABR certified radiologist who evaluated the image quality to be of equivalent or better to the predicate device." |
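To make the implied "equivalent or greater" rule concrete, here is a small, hypothetical sketch that checks the Apex bench figures from the table above against the corresponding predicate values. The pass rule (at least as good as every predicate model, with smaller-is-better handling for pixel pitch) is an assumption about how such a comparison might be expressed in code, not a criterion stated in the submission.

```python
# Hypothetical sketch of the "equivalent or greater" comparison implied by the bench
# figures above. The numeric values are copied from the 510(k) summary text; the pass
# rule itself is an assumption, not an FDA-defined acceptance criterion.

# (metric, apex_value, predicate_values_per_model, higher_is_better)
bench_metrics = [
    ("Pixel pitch (um)",  99.0, [143.0, 140.0, 143.0], False),  # smaller pitch = finer sampling
    ("DQE @ 0 lp/mm (%)", 73.0, [57.0, 60.0, 58.0],    True),   # Apex figure at 6 uGy
    ("MTF @ 1 lp/mm (%)", 68.0, [63.0, 68.0, 65.0],    True),
]

for name, apex, predicates, higher_is_better in bench_metrics:
    # "Equal or better than all predicate models" means matching or beating the best of them.
    best_predicate = max(predicates) if higher_is_better else min(predicates)
    meets = apex >= best_predicate if higher_is_better else apex <= best_predicate
    print(f"{name}: Apex {apex} vs predicates {predicates} -> equal or better than all: {meets}")
```

Run as written, all three checks print True, which matches the summary's general claim of equal or improved performance for the Apex.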
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
- Sample Size: Not specified. The clinical testing merely states "Clinical data was provided with the submission to demonstrate equivalence with the predicate device. This data includes images of all the relevant ROI." It doesn't quantify the number of images or patients.
- Data Provenance: Not specified (country, retrospective/prospective).
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)
- Number of Experts: One, based on the singular phrasing "an ABR certified radiologist."
- Qualifications: "ABR certified radiologist." No mention of years of experience.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
- Adjudication Method: Not specified. Since only one radiologist is mentioned, it's likely "none" in the sense of a consensus or adjudication process among multiple readers. The single ABR-certified radiologist provided the evaluation.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
- MRMC Study: No, an MRMC comparative effectiveness study was not explicitly stated or described. This submission is for a general X-ray detector system, not specifically an AI-powered diagnostic algorithm designed to assist human readers. The "AI" mentioned is the software components related to image acquisition, processing, and management, not a diagnostic AI.
6. If standalone performance testing (i.e., algorithm only, without a human in the loop) was done
- Standalone Performance: Not applicable in the context of an AI diagnostic algorithm. This device is a digital X-ray detector system. Its "performance" refers to image quality and functionality, not a diagnostic output from an algorithm that would then require standalone performance metrics (e.g., sensitivity/specificity for disease detection). The software handles image processing, presentation, and storage, not automated diagnosis.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- Type of Ground Truth: The "ground truth" for the clinical testing was the visual evaluation of image quality by an ABR-certified radiologist. It's not a diagnostic ground truth (like pathology for cancer detection) but rather an assessment of whether the images produced by the new device are diagnostically acceptable and equivalent/superior to those from the predicate device.
8. The sample size for the training set
- Sample Size for Training Set: Not applicable/not specified. This device is an X-ray detector and associated software for image acquisition and processing, not a deep learning AI model that requires a "training set" to learn features for interpretation. The software's development (as per IEC 62304) involves validation and verification, but not "training" in the machine learning sense.
9. How the ground truth for the training set was established
- Ground Truth for Training Set: Not applicable. (See #8).
(149 days)
ExamVue Duo is software for the acquisition, processing, storage and viewing of digital radiology studies. ExamVue Duo is intended for use by a qualified/trained doctor or technician on both adult and pediatric subjects. ExamVue Duo is indicated for use in general imaging including podiatry, orthopedic, and other specialties, and in mobile x-ray systems.
ExamVue Duo is not indicated for use in mammography.
The ExamVue Duo software is designed for use by radiologists and radiology technicians for the acquisition of digital x-ray images. It interfaces with 3rd party digital x-ray detectors and (optionally) generators and manufacturer supplied software for the acquisition and storage of digital x-ray studies. The ExamVue Duo software then provides a user interface for the viewing, annotating, and other workstation functions.
ExamVue DR includes the ability to receive patient information and send studies to remote destinations using the DICOM 3.0 protocol.
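As background on what "send studies to remote destinations using the DICOM 3.0 protocol" typically involves, the following is a minimal, hypothetical sketch of a DICOM C-STORE push using the open-source pydicom and pynetdicom libraries. The remote host, port, AE titles, and file name are placeholders; none of this is taken from the ExamVue implementation.

```python
# Hypothetical sketch of a DICOM C-STORE push ("send studies to remote destinations").
# The remote host, port, AE titles, and file name are placeholders, not ExamVue configuration.
from pydicom import dcmread
from pynetdicom import AE

ds = dcmread("frame.dcm")                 # placeholder path to the study/image to transmit

ae = AE(ae_title="EXAMPLE_SCU")           # local Application Entity (the sender)
ae.add_requested_context(ds.SOPClassUID)  # request a presentation context matching the file

assoc = ae.associate("pacs.example.org", 104, ae_title="REMOTE_PACS")
if assoc.is_established:
    status = assoc.send_c_store(ds)       # peer replies with a Status element (0x0000 = success)
    if status:
        print(f"C-STORE status: 0x{status.Status:04X}")
    else:
        print("No response from the peer")
    assoc.release()
else:
    print("Association rejected, aborted, or never connected")
```

Requesting the presentation context from the file's own SOPClassUID keeps the sketch agnostic to the type of image being pushed.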
The provided text from the FDA 510(k) Premarket Notification for the ExamVue Duo device does not include the specific details of a clinical study or performance data that would enable me to fulfill your request.
The document is a 510(k) summary, which focuses on demonstrating substantial equivalence to a predicate device (ExamVue DR) rather than providing detailed acceptance criteria and a standalone study proving the device meets those criteria.
Here's why I cannot provide the requested information based on the given text:
- No Acceptance Criteria Table or Reported Performance: The document lists characteristics for comparison between the proposed device (ExamVue Duo) and the predicate (ExamVue DR), but it doesn't define quantitative acceptance criteria for parameters like sensitivity, specificity, accuracy, or other performance metrics. It primarily states "SAME" or lists technical specifications (e.g., RAM, Processor) rather than clinical performance data.
- Absence of a Clinical Study Description: The document states "Bench tests reports and clinical data have been provided, detailing the direct comparison of functions between the ExamVue Duo and predicate device" (Page 4). However, the actual reports or detailed methodology of such "clinical data" are not included in this 510(k) summary. It vaguely refers to "direct comparison of functions," which likely means comparing the implementation of features rather than a clinical outcome study.
- No mention of AI/Algorithm-Specific Performance: The document describes ExamVue Duo as "software for the acquisition, processing, storage and viewing of digital radiology studies." While it mentions "image processing," there is no indication that the device includes an AI/ML algorithm that would necessitate specific performance metrics like those typically associated with AI-driven diagnostic aids (e.g., AUC, sensitivity, specificity for disease detection). The "added functions" like "Auto Stitching" and "Bone Suppression (option)" appear to be standard image processing features, not necessarily AI (a brief sketch after this list illustrates a classical, non-AI approach to auto stitching).
- No Details on Ground Truth Establishment, Experts, Adjudication, or MRMC Studies: Since no clinical study evaluating specific performance metrics is described in the provided text, there are no details about sample size, data provenance, number or qualifications of experts, adjudication methods, MRMC studies, or how ground truth was established for a test set.
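To illustrate the point that auto stitching can be done with classical image processing rather than AI, here is a small, hypothetical sketch that estimates the vertical overlap between two radiograph tiles by normalized cross-correlation and combines them. This is purely illustrative and is not the ExamVue Duo algorithm.

```python
# Hypothetical sketch of a classical (non-AI) auto-stitching step: estimate the vertical
# overlap between two radiograph tiles via normalized cross-correlation of candidate
# overlap strips, then combine them. Purely illustrative; not the ExamVue Duo algorithm.
import numpy as np

def stitch_vertical(top: np.ndarray, bottom: np.ndarray, max_overlap: int = 200) -> np.ndarray:
    """Stitch two tiles vertically, estimating their row overlap by correlation."""
    best_overlap, best_score = 1, -np.inf
    for overlap in range(2, max_overlap + 1):
        a = top[-overlap:, :].astype(float).ravel()
        b = bottom[:overlap, :].astype(float).ravel()
        a -= a.mean()
        b -= b.mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        score = float(a @ b) / denom if denom > 0 else -np.inf
        if score > best_score:
            best_score, best_overlap = score, overlap
    # Drop the duplicated rows from the bottom tile and stack the two tiles.
    return np.vstack([top, bottom[best_overlap:, :]])

# Usage example: split a synthetic 400x300 image into two tiles with 50 rows of true overlap.
rng = np.random.default_rng(0)
full = rng.integers(0, 4096, size=(400, 300)).astype(np.uint16)
top_tile, bottom_tile = full[:230, :], full[180:, :]
stitched = stitch_vertical(top_tile, bottom_tile)
print(stitched.shape)  # (400, 300) once the 50-row overlap is recovered
```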
In summary, the provided document is a 510(k) submission showing substantial equivalence, not a detailed clinical study report. It compares the technical specifications and general functions of the new device to a previously cleared one. Therefore, I cannot extract the specific information you are looking for regarding acceptance criteria and a study proving those criteria are met for an AI/CAD/similar device.