Search Results
Found 3 results
(28 days)
MobileDiagnost wDR 2.2
Intended for use by a qualified/trained doctor or technologist on both adult and pediatric patients for taking diagnostic radiographic exposures of the skull, spinal column, chest, abdomen, extremities, and other body parts. Applications can be performed with patient sitting, standing or lying in the prone or supine positions. Not for mammography.
The MobileDiagnost wDR 2.2 is a motorized mobile radiographic system consisting of a mobile base unit and a user interface (Eleva Workspot) combined with flat solid-state X-ray detectors, used to operate, generate, process, and handle digital X-ray images. The MobileDiagnost wDR 2.2 integrates a new wireless portable detector (SkyPlate E). The family of SkyPlate detectors (Large and Small) has already been integrated into the MobileDiagnost wDR 2.0 based on the K141736 pre-market submission.
The provided text describes a 510(k) premarket notification for the MobileDiagnost wDR 2.2, a mobile X-ray system. The focus of the submission is to demonstrate substantial equivalence to a predicate device, the MobileDiagnost wDR 2.0 (K141895), with several modifications. The document does not contain a clinical study to prove the device meets acceptance criteria in terms of diagnostic effectiveness or a "performance study" in the typical sense of evaluating a new diagnostic algorithm's accuracy. Instead, the submission relies on demonstrating substantial equivalence through non-clinical verification and validation tests against established standards.
Therefore, the requested information needs to be framed within this context of demonstrating substantial equivalence, rather than a traditional diagnostic performance study.
Here's the breakdown of the information based on the provided text:
Acceptance Criteria and Device Performance (within the context of Substantial Equivalence)
The acceptance criteria are implicitly defined by the compliance with recognized international and FDA consensus standards and the outcome of the comparison to the predicate device, showing that modifications do not raise new questions of safety or effectiveness. The device performance is assessed against these standards and through direct comparison of technical characteristics to the predicate.
1. Table of Acceptance Criteria and Reported Device Performance
Because this is a substantial equivalence submission relying on technical changes and compliance with standards, the "acceptance criteria" amount to compliance with those standards, and the "reported device performance" is the verification that the standards are met and that the technical characteristics of the modified device are acceptably equivalent to the predicate.
I. Compliance with international and FDA-recognized consensus standards and FDA guidance documents

Non-clinical verification and validation tests demonstrate compliance with:

- IEC 60601-1 (Edition 3.1)
- IEC 60601-1-2 (Edition 4.0)
- IEC 60601-1-3 (Edition 2.1)
- IEC 60601-1-6 (Edition 3.1)
- IEC 60601-2-54 (Edition 1.1)
- IEC 62304 (Edition 1.0, 2006)
- IEC 62366 (Edition 1.0, 2015)
- ISO 14971 (Edition 2.0, 2007)
- ISO 10993-1 (Edition 4.0, 2009)
- IEC 60601-2-28 (Edition 2.0, 2010-03)
- IEC 62220-1 (Edition 1.0, 2015-03)
- NEMA PS 3.1 - 3.20 (DICOM)
- FDA guidance: "Guidance for the Submission of 510(k)s for Solid State X-Ray Imaging Devices" (Sept 1, 2016)
- FDA guidance: "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices" (May 11, 2005)
- FDA guidance: "Pediatric Information for X-ray Imaging Device Premarket Notifications" (Nov 28, 2017)

Conclusion: the device meets the acceptance criteria and is adequate for its intended use; the non-clinical information was deemed sufficient to support substantial equivalence.

II. Technical characteristics equivalence (MobileDiagnost wDR 2.2 vs. MobileDiagnost wDR 2.0)

| Feature Area | Reported Device Performance (MobileDiagnost wDR 2.2) | Conclusion (vs. Predicate) |
|---|---|---|
| Base Unit Type, X-ray Tube rotational capabilities, Mains Supply, Mode of Exposure, Available Exposure Methods, X-ray Absorber, Installation type, Readout Mechanism, Maximum X-ray Dose for Linear Response, Maximum Usable Dose, Maximum Lifetime Dose, Image Processing (Eleva Workstation), ADC Digitisation, Signal to Electronic Noise Ratio (SENR), Data Interface to Workstation, X-ray Tube type, material, maximum voltage, nominal focal spot, anode type, nominal anode input power, Generator configurations, Collimator operation mode, beam shape, External Connectivity (DICOM), Software Platform (Eleva WorkSpot with SkyFlow). | Same/Identical to predicate. | Identical; thus, demonstrating Substantial Equivalence (SE). |
| Dimensions (overall, transport, source-floor distance) | Minor differences in mm measurements. | Differences do not impact safety or effectiveness. Thus, demonstrating SE. |
| Detector Models | Addition of SkyPlate E Large (Trixell 3543DR). Previously cleared SkyPlate Large/Small retained. | Addition of SkyPlate E detector does not impact safety or effectiveness. Thus, demonstrating SE. All technical detector characteristics influencing image quality assessed per FDA guidance. |
| Detector Weight | Max 3.1 kg (vs. Max 3 kg for predicate). | Difference has no impact on clinical workflow, safety, or effectiveness. Thus, demonstrating SE. |
| Image Size (X-ray field) | 345.0 mm x 426.0 mm (vs. 344.8 mm x 421.2 mm for predicate). | Difference does not impact clinical Image Quality, safety, or effectiveness. Thus, demonstrating SE. |
| Pixel Size | 160 µm (vs. 148 µm for predicate). | The 12 µm difference in pixel size does not impact image resolution "to an extent that can impact the clinical image quality," safety, or effectiveness. Thus, demonstrating SE. |
| Image matrix size (Number of pixels) | 2156 x 2662 pixels (vs. 2330 x 2846 pixels for predicate). | The reduction in pixel count (described as an "infinitesimal change"), a consequence of the 160 µm pixel size, does not impact clinical Image Quality, safety, or effectiveness. Thus, demonstrating SE. |
| Nyquist Frequency | 3.125 lp/mm (vs. 3.38 lp/mm for predicate). | Difference does not impact clinical Image Quality, safety, or effectiveness. Thus, demonstrating SE. (Both values follow directly from the pixel pitch; see the check after this table.) |
| Modulation Transfer Function (MTF) & Detective Quantum Efficiency (DQE) | Slightly different typical values reported (e.g., MTF at 1 lp/mm: 62% vs. 61%; DQE at 3 lp/mm: 22% vs. 29%). | Differences do not impact clinical Image Quality, safety, or effectiveness. Thus, demonstrating SE. |
| Grids | Addition of new large grids for SkyPlate E. Previously cleared grids retained. | Addition of new grids has no impact on clinical workflow, safety, or effectiveness. Thus, demonstrating SE. |
| Column Configuration | Sliding Column (vs. Standard/Short Column for predicate). | Sliding Column provides better viewing; its introduction does not impact safety or effectiveness. Thus, demonstrating SE. |
| System Power ON/OFF | Keyless user access (vs. Physical Key for predicate). | Keyless user access provides authenticated access; its introduction does not impact safety or effectiveness. Thus, demonstrating SE. |
| Exposure @ zero degree column position | Exposures allowed (vs. No Exposure allowed for predicate). | Allows examinations in space-constrained situations; difference does not impact safety or effectiveness. Thus, demonstrating SE. |
| Exposure during Charging | Exposures allowed (vs. Not allowed for predicate). | Allows charging during exam preparation; exposure energy still drawn from generator batteries, not mains. Does not impact safety or effectiveness. Thus, demonstrating SE. |
| Handles on Collimator | Handles can be used to move both Tube head and Collimator (vs. only Tube head for predicate). | Provides ease of use; difference does not impact safety or effectiveness. Thus, demonstrating SE. |
| Image Processing Algorithm | UNIQUE 2 (vs. UNIQUE for predicate). | UNIQUE 2 provides improved image processing (reduced noise, improved contrast) and was already cleared under K182973. Upgrading does not alter clinical workflow, safety, or effectiveness. Thus, demonstrating SE. |
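Several of the detector figures in the table are internally consistent and can be reproduced from the quoted pixel pitches alone. The short check below is a reviewer's calculation, not part of the submission; the inputs are the pixel pitches and X-ray field sizes quoted above.

```python
# Reviewer's check: Nyquist frequency and matrix size follow from pixel pitch.
# Inputs are the values quoted in the comparison table above.

def nyquist_lp_per_mm(pitch_um: float) -> float:
    """Nyquist frequency in line pairs per mm: 1 / (2 * pixel pitch)."""
    return 1.0 / (2.0 * pitch_um / 1000.0)

def pixels_across(field_mm: float, pitch_um: float) -> int:
    """Pixel count spanning one field dimension."""
    return round(field_mm / (pitch_um / 1000.0))

# SkyPlate E (wDR 2.2): 160 um pitch, 345.0 mm x 426.0 mm field
print(nyquist_lp_per_mm(160))                                # 3.125 lp/mm, as tabulated
print(pixels_across(345.0, 160), pixels_across(426.0, 160))  # 2156 2662

# Predicate SkyPlate (wDR 2.0): 148 um pitch, 344.8 mm x 421.2 mm field
print(round(nyquist_lp_per_mm(148), 2))                      # 3.38 lp/mm, as tabulated
print(pixels_across(344.8, 148), pixels_across(421.2, 148))  # 2330 2846
```

The agreement confirms that the lower Nyquist frequency and smaller matrix size of the SkyPlate E are direct geometric consequences of the larger 160 µm pixel, not independent changes.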
Study Details (as applicable for a Substantial Equivalence submission based on non-clinical data)
The provided text clearly states: "The MobileDiagnost wDR 2.2 does not require clinical study since substantial equivalence to the primary currently marketed and predicate device MobileDiagnost wDR 2.0 (K141895- September 18, 2014) was demonstrated with the following attributes: Indication for use; Technological characteristics; Non-clinical performance testing; and Safety and effectiveness."
This means there was no "study" in the sense of a clinical trial or a diagnostic performance study to evaluate accuracy, sensitivity, specificity, etc., with human or AI readers. The "study" here refers to the comprehensive non-clinical verification and validation process against relevant standards.
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
- Sample Size: Not applicable. No clinical test set of patient data was used for a diagnostic performance evaluation. The "tests" were non-clinical verification and validation tests on the device itself (e.g., electrical safety, electromagnetic compatibility, radiation protection, software validation, usability, risk management).
- Data Provenance: Not applicable. No patient data (retrospective or prospective, from any country) was used for evaluating the device's diagnostic performance for the purpose of this 510(k) submission.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)
- Not applicable. There was no diagnostic test set requiring ground truth established by experts.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Not applicable. There was no diagnostic test set requiring adjudication.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
- No MRMC comparative effectiveness study was done. This device is not an AI-assisted diagnostic product but rather an X-ray imaging system with updated components and software functionalities.
6. If a standalone study (i.e., algorithm-only performance, without a human in the loop) was done
- Not applicable. This device is an X-ray system, not a standalone diagnostic algorithm.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Not applicable. There was no diagnostic performance study requiring ground truth. Ground truth for the non-clinical tests would be the established scientific/engineering principles, specifications, and regulatory standards.
8. The sample size for the training set
- Not applicable. This device is an X-ray system, not a machine learning model that requires a "training set" in the context of AI. The software (UNIQUE 2) was previously cleared (K182973), indicating its own development and validation, but details of its training are not provided here.
9. How the ground truth for the training set was established
- Not applicable, as there was no training set for the device itself. For the incorporated UNIQUE 2 image processing algorithm, its ground truth establishment (if it involved machine learning) would have been part of its original 510(k) (K182973), but those details are not in this document.
(66 days)
MOBILEDIAGNOST WDR
Intended for use by a qualified/trained doctor or technologist on both adult and pediatric patients for taking diagnostic radiographic exposures of the skull, spinal column, chest, abdomen, extremities, and other body parts. Applications can be performed with patient sitting, standing or lying in the prone or supine positions. Not for mammography.
The MobileDiagnost wDR 2.0 system is a motorized mobile radiographic system consisting of a mobile base unit, and a user interface (computer, keyboard, display, mouse), combined with a flat solid state X-ray detector. It is used by the operator to generate, process and handle digital X-ray images. The MobileDiagnost wDR 2.0 integrates a new generation of wireless portable x-ray detectors (SkyPlate) to replace the former detector WPD FD-W17 cleared under the predicate submission (K111725).
Unfortunately, based solely on the provided text, I cannot provide a detailed table of acceptance criteria and reported device performance with the specific metrics you requested (e.g., sample sizes, number of experts, adjudication methods, MRMC study details, ground truth types for test and training sets).
The document is a 510(k) summary for the MobileDiagnost wDR 2.0, which focuses on demonstrating substantial equivalence to a predicate device rather than providing a comprehensive report of a standalone clinical study with detailed performance metrics against predefined acceptance criteria.
However, I can extract and infer some information that partially addresses your request, particularly concerning the non-clinical and "clinical image concurrence" studies mentioned.
Here's an attempt to answer your questions based on the available text:
1. Table of Acceptance Criteria and Reported Device Performance
The document primarily states that the device's non-clinical performance values are "basically equal or better than the predicate device." Specific acceptance criteria or targets are not explicitly listed in a table format in the provided text. The performance is reported in comparison to the predicate device.
| Metric (Non-Clinical) | Acceptance Criteria (inferred from "equal or better than predicate") | Reported Device Performance (MobileDiagnost wDR 2.0) |
|---|---|---|
| Modulation Transfer Function (MTF) | Equal to or better than predicate (60% to 15%) | Ranges from 61% to 14% (vs. 60% to 15% for the predicate); characterized as basically equal or better |
| Detective Quantum Efficiency (DQE) | Equal to or better than predicate (66% to 22%) | Ranges from 66% to 24% (vs. 66% to 22% for the predicate); equal to or better at the reported frequencies (see the sketch after this table) |
| Other Non-Clinical Tests | Compliance with standards and satisfactory results | Complies with the listed international and FDA-recognized consensus standards (IEC, AAMI, ISO) |
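For context, MTF and DQE are bench measurements made under IEC 62220-1, not clinical endpoints. A minimal sketch of the commonly quoted relationship DQE(f) = MTF(f)² / (q · NNPS(f)) follows; the frequency grid, MTF and NNPS arrays, and fluence value are hypothetical placeholders for illustration, not data from this submission.

```python
import numpy as np

# Sketch of the DQE relationship used with IEC 62220-1 bench data:
#   DQE(f) = MTF(f)^2 / (q * NNPS(f))
# q: photon fluence at the detector (photons/mm^2); NNPS: normalized noise
# power spectrum (mm^2). All values below are hypothetical placeholders,
# not figures from the 510(k).

freqs = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])      # spatial frequency, lp/mm
mtf = np.array([0.90, 0.75, 0.61, 0.48, 0.36, 0.27])  # presampled MTF (hypothetical)
nnps = np.array([4.1e-5, 3.1e-5, 2.5e-5, 1.9e-5, 1.4e-5, 1.1e-5])  # mm^2 (hypothetical)
q = 30_000.0                                          # photons/mm^2 (hypothetical)

dqe = mtf**2 / (q * nnps)
for f, d in zip(freqs, dqe):
    print(f"{f:.1f} lp/mm: DQE ~ {d:.2f}")
```

Both quantities come from bench exposures (an edge target for MTF, flat-field images for the noise power spectrum), which is why they can be reported without any patient data.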
2. Sample Size Used for the Test Set and Data Provenance
- Non-Clinical Tests: Sample sizes are not specified for the non-clinical tests (DQE, MTF, Aliasing, etc.). The provenance is not explicitly stated but is implicitly from laboratory testing of the device itself.
- Clinical Image Concurrence Study: The sample size for the test set is not specified. The data provenance is not explicitly stated beyond the statement that "clinical images were collected and analyzed"; that phrasing suggests a retrospective design, though this is not confirmed. No country of origin is specified.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Non-Clinical Tests: Not applicable, as these are objective physical measurements of device characteristics.
- Clinical Image Concurrence Study: The text mentions a "single blinded, concurrence study" to ensure images meet "user needs" and provide "equivalent diagnostic capability" to the predicate. The number of experts is not specified, nor are their specific qualifications (e.g., "radiologist with 10 years of experience"). It implies expert readers assessed the images.
4. Adjudication Method for the Test Set
- Non-Clinical Tests: Not applicable.
- Clinical Image Concurrence Study: The term "concurrence study" is used, implying agreement among readers or against a standard. However, the specific adjudication method (e.g., 2+1, 3+1, none) is not detailed in the provided text.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size
- The document describes a "single blinded, concurrence study" for clinical image analysis. Although this compares the new device's images to the predicate's, it is not an MRMC comparative effectiveness study in the sense of measuring how much human readers improve with vs. without AI assistance; the device is an X-ray system, not an AI diagnostic tool, so that concept does not apply here. No effect size is mentioned.
6. If a Standalone Study (i.e., Algorithm-Only, Without Human-in-the-Loop Performance) Was Done
- This question is not directly applicable as the MobileDiagnost wDR 2.0 is an X-ray imaging system, not an AI algorithm. The performance evaluation focuses on the image quality and physical characteristics of the imaging system itself, not an algorithm's diagnostic capabilities in isolation. The non-clinical performance data represents the standalone performance of the device's imaging capabilities.
7. The Type of Ground Truth Used
- Non-Clinical Tests: The "ground truth" for these tests are objective physical standards and measurements (e.g., how a detector should perform according to physics principles and consensus standards).
- Clinical Image Concurrence Study: The "ground truth" was established by expert assessment for "equivalent diagnostic capability" and meeting "user needs." This is a form of expert consensus on image quality and diagnostic utility, comparing images from the new device to those from the predicate.
8. The Sample Size for the Training Set
- Not applicable / Not specified. The document describes performance testing of an imaging device, not a machine learning algorithm that requires a training set. The "clinical images" mentioned were for validation, not training.
9. How the Ground Truth for the Training Set Was Established
- Not applicable. As above, there is no mention of a training set for an algorithm in this context.
In summary: The provided 510(k) summary focuses on demonstrating that the MobileDiagnost wDR 2.0 is substantially equivalent to its predicate device through non-clinical (technical) performance criteria and a clinical image concurrence study. It highlights compliance with recognized standards and claims that the new device's technical performance is "equal or better" than the predicate and that its images provide "equivalent diagnostic capability." However, it lacks the fine-grained details about sample sizes, expert qualifications, and specific adjudication methods for the clinical image study that would be typical for evaluating a new diagnostic AI algorithm.
(29 days)
MOBILEDIAGNOST WDR
Intended for use by a qualified/trained doctor or technologist on both adult and pediatric patients for taking diagnostic radiographic exposures of the skull, spinal column, chest, abdomen, extremities, and other body parts. Applications can be performed with patient sitting, standing or lying in the prone or supine positions. Not intended for mammography.
This device is simply the combination of two cleared devices: the Wireless Portable Detector FD-W17 (K090625) marketed by Philips Medical Systems and the Easy Moving Plus, Mobile Diagnostic X-Ray (K090322) made by Sedecal. The x-ray source is a motor-driven mobile x-ray unit and the x-ray receptor panel is a digital wireless unit. The Wireless Portable Detector FD-W17 consists of three main parts: the portable radiography detector (the x-ray-sensitive part), a docking station directly connected to the radiographic workstation, and a backup cable that can connect the detector to the docking station if the wireless connection cannot be used. Detector size: 35 x 43 cm (14 x 17"). Image matrix size: 3000 x 2400 pixels. Pixel size: 144 µm. Image resolution: up to 3.5 lp/mm.
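As a quick cross-check (a reviewer's calculation, not content from the submission), the quoted 144 µm pixel pitch is consistent with both the stated resolution limit and the detector size:

```python
# Cross-check of the FD-W17 figures quoted above (not from the submission itself).
pitch_mm = 0.144                         # 144 um pixel pitch

print(1 / (2 * pitch_mm))                # 3.47 lp/mm -> consistent with "up to 3.5 lp/mm"
print(3000 * pitch_mm, 2400 * pitch_mm)  # 432.0 mm x 345.6 mm -> ~43 x 35 cm detector
```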
Here's an analysis of the provided text regarding the acceptance criteria and study for the MobileDiagnost wDR device:
This 510(k) summary focuses on demonstrating substantial equivalence for the MobileDiagnost wDR as a combination of two already cleared predicate devices. It does not contain a traditional clinical study with detailed performance metrics against specific acceptance criteria for a novel algorithm. Instead, the "study" is a comparison to predicate devices, focusing on the absence of significant differences.
1. A table of acceptance criteria and the reported device performance
The submission does not explicitly define acceptance criteria as quantitative performance metrics for the MobileDiagnost wDR itself (e.g., sensitivity, specificity, accuracy). Instead, the acceptance is based on demonstrating substantial equivalence to its predicate devices, implying that if it performs comparably to already cleared devices, it meets the required safety and effectiveness.
| Characteristic | Acceptance Criteria (implied by predicate) | Reported Device Performance (MobileDiagnost wDR) |
|---|---|---|
| Intended Use | Same as Sedecal Easy Moving Digital (K090322) | SAME |
| Configuration | Battery or line operated mobile | SAME |
| Performance Standard | 21 CFR 1020.30 | SAME |
| Generator | High frequency, made by Sedecal | SAME |
| Generator power levels | 20 to 50 kW (4 models) | 20 to 50 kW (4 models) |
| Collimator | Ralco R221 DHHS | Ralco 108F DHHS (equivalent) |
| Image acquisition | Digital Canon CXDI-50G (K031447) | Philips Wireless Portable Detector FD-W17 (K090625) |
| Electrical safety | Electrical safety per IEC 60601, UL listed | SAME |
| Clinical Images | No significant differences compared to predicate | No significant differences |
| Software Validation | All tests passed | Tests performed; results indicate safety and effectiveness |
| Risk Analysis | Acceptable risk profile | Analysis performed |
| Hardware Compliance | US Performance Standards, CSA Certified | Conforms to US Performance Standards, CSA Certified |
2. Sample size used for the test set and the data provenance
- Sample Size: The document states "Clinical images were acquired and compared to our predicate images." It does not specify the number of clinical images acquired.
- Data Provenance: Not explicitly stated (e.g., retrospective/prospective, country of origin). It's implied that the images were taken for the purpose of this comparison.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
Not applicable. This was a direct comparison of images from the new device to those from the predicate devices, likely a visual assessment of image quality rather than a diagnostic-accuracy evaluation against an expert-established ground truth. Assessing the clinical images for "significant differences" would typically be done by personnel qualified to judge diagnostic image quality, but no number or qualifications of experts are mentioned for this task.
4. Adjudication method for the test set
Not applicable. There's no mention of an adjudication process, as the goal was a direct comparison of image quality between two systems, not a diagnostic accuracy assessment requiring consensus on diagnoses.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done
No. An MRMC study was not performed. The study described is a comparison of "Clinical images... acquired and compared to our predicate images," indicating a technical comparison rather than a human reader performance study.
6. If a standalone study (i.e., algorithm-only performance, without a human in the loop) was done
Yes, in a limited sense. The "algorithm" here is the device's overall image acquisition and processing chain. The comparison of clinical images and the software validation assess the device's standalone performance in producing images and functioning correctly, without a human-reader performance component in the measurement.
7. The type of ground truth used
Not explicitly stated. For the "clinical images were acquired and compared to our predicate images" statement, the "ground truth" would be the expected image quality and diagnostic information provided by the predicate device. The comparison aimed to show that the new device produced images of essentially the same quality and diagnostic utility as the predicate. It did not involve comparing readings against a histopathological diagnosis or clinical outcomes.
8. The sample size for the training set
Not applicable. The device is a combination of existing cleared devices (a mobile X-ray unit and a digital detector). This is not an AI/ML algorithm that requires a separate "training set." The software validation refers to functional testing, not model training.
9. How the ground truth for the training set was established
Not applicable, as there is no training set for an AI/ML algorithm mentioned in this submission.
Ask a specific question about this device