Intended for use by a qualified/trained doctor or technologist on both adult and pediatric patients for taking diagnostic radiographic exposures of the skull, spinal column, chest, abdomen, extremities, and other body parts. Applications can be performed with the patient sitting, standing, or lying in the prone or supine position. Not for mammography.
The MobileDiagnost wDR 2.0 system is a motorized mobile radiographic system consisting of a mobile base unit and a user interface (computer, keyboard, display, mouse), combined with a flat solid-state X-ray detector. It is used by the operator to generate, process, and handle digital X-ray images. The MobileDiagnost wDR 2.0 integrates a new generation of wireless portable X-ray detectors (SkyPlate) to replace the former WPD FD-W17 detector cleared under the predicate submission (K111725).
Unfortunately, based solely on the provided text, I cannot provide a detailed table of acceptance criteria and reported device performance with the specific metrics you requested (e.g., sample sizes, number of experts, adjudication methods, MRMC study details, ground truth types for test and training sets).
The document is a 510(k) summary for the MobileDiagnost wDR 2.0, which focuses on demonstrating substantial equivalence to a predicate device rather than providing a comprehensive report of a standalone clinical study with detailed performance metrics against predefined acceptance criteria.
However, I can extract and infer some information that partially addresses your request, particularly concerning the non-clinical and "clinical image concurrence" studies mentioned.
Here's an attempt to answer your questions based on the available text:
1. Table of Acceptance Criteria and Reported Device Performance
The document primarily states that the device's non-clinical performance values are "basically equal or better than the predicate device." Specific acceptance criteria or targets are not explicitly listed in a table format in the provided text. The performance is reported in comparison to the predicate device.
| Metric (Non-Clinical) | Acceptance Criteria (Inferred from "equal or better than predicate") | Reported Device Performance (MobileDiagnost wDR 2.0) |
|---|---|---|
| Modulation Transfer Function (MTF) | Equal to or better than predicate (60% to 15%) | Ranging from 61% to 14% (implies some values are better than the predicate) |
| Detective Quantum Efficiency (DQE) | Equal to or better than predicate (66% to 22%) | Ranging from 66% to 24% (implies some values are better than the predicate) |
| Other non-clinical tests | Compliance with standards and satisfactory results | Complies with listed international and FDA-recognized consensus standards (IEC, AAMI, ISO) |
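For context only (this formulation is not given in the 510(k) summary itself), detector MTF and DQE for digital X-ray detectors are conventionally measured per IEC 62220-1, where DQE at spatial frequency \(u\) is typically computed from the measured MTF and the normalized noise power spectrum:

```latex
% Conventional DQE formulation (IEC 62220-1 style); symbols are assumptions,
% not values or definitions taken from the 510(k) summary:
%   u        - spatial frequency (cycles/mm)
%   q        - incident photon fluence (photons per unit area), the ideal SNR^2 input
%   NNPS(u)  - normalized noise power spectrum of the detector output
\[
\mathrm{DQE}(u) \;=\; \frac{\mathrm{MTF}^{2}(u)}{q \cdot \mathrm{NNPS}(u)}
\]
```

Under this convention, the percentage ranges in the table (e.g., DQE from 66% down to 24%) would correspond to DQE evaluated from low to high spatial frequency, which is why values decline across the range.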
2. Sample Size Used for the Test Set and Data Provenance
- Non-Clinical Tests: Sample sizes are not specified for the non-clinical tests (DQE, MTF, Aliasing, etc.). The provenance is not explicitly stated but is implicitly from laboratory testing of the device itself.
- Clinical Image Concurrence Study: The sample size for the test set is not specified. The data provenance is not explicitly stated; the summary says only that "clinical images were collected and analyzed," which suggests a retrospective collection. No country of origin is specified.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Non-Clinical Tests: Not applicable, as these are objective physical measurements of device characteristics.
- Clinical Image Concurrence Study: The text mentions a "single blinded, concurrence study" to ensure images meet "user needs" and provide "equivalent diagnostic capability" to the predicate. The number of experts is not specified, nor are their qualifications (e.g., "radiologist with 10 years of experience"). The wording implies that expert readers assessed the images.
4. Adjudication Method for the Test Set
- Non-Clinical Tests: Not applicable.
- Clinical Image Concurrence Study: The term "concurrence study" is used, implying agreement among readers or against a standard. However, the specific adjudication method (e.g., 2+1, 3+1, none) is not detailed in the provided text.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size
- The document describes a "single blinded, concurrence study" for clinical image analysis. While it compares the new device's images to the predicate's, it does not state that an MRMC comparative effectiveness study was performed in the usual sense of measuring human reader improvement with vs. without AI assistance. This device is an X-ray imaging system, not an AI diagnostic tool, so that concept does not apply directly to its evaluation as presented. No effect size is reported.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
- This question is not directly applicable as the MobileDiagnost wDR 2.0 is an X-ray imaging system, not an AI algorithm. The performance evaluation focuses on the image quality and physical characteristics of the imaging system itself, not an algorithm's diagnostic capabilities in isolation. The non-clinical performance data represents the standalone performance of the device's imaging capabilities.
7. The Type of Ground Truth Used
- Non-Clinical Tests: The "ground truth" for these tests are objective physical standards and measurements (e.g., how a detector should perform according to physics principles and consensus standards).
- Clinical Image Concurrence Study: The "ground truth" was established by expert assessment for "equivalent diagnostic capability" and meeting "user needs." This is a form of expert consensus on image quality and diagnostic utility, comparing images from the new device to those from the predicate.
8. The Sample Size for the Training Set
- Not applicable / Not specified. The document describes performance testing of an imaging device, not a machine learning algorithm that requires a training set. The "clinical images" mentioned were for validation, not training.
9. How the Ground Truth for the Training Set Was Established
- Not applicable. As above, there is no mention of a training set for an algorithm in this context.
In summary: The provided 510(k) summary focuses on demonstrating that the MobileDiagnost wDR 2.0 is substantially equivalent to its predicate device through non-clinical (technical) performance criteria and a clinical image concurrence study. It highlights compliance with recognized standards and claims that the new device's technical performance is "equal or better" than the predicate and that its images provide "equivalent diagnostic capability." However, it lacks the fine-grained details about sample sizes, expert qualifications, and specific adjudication methods for the clinical image study that would be typical for evaluating a new diagnostic AI algorithm.
§ 892.1720 Mobile x-ray system.
(a) Identification. A mobile x-ray system is a transportable device system intended to be used to generate and control x-rays for diagnostic procedures. This generic type of device may include signal analysis and display equipment, patient and equipment supports, component parts, and accessories.

(b) Classification. Class II.