
510(k) Data Aggregation

    K Number
    K162971
    Date Cleared
    2016-11-22

    (28 days)

    Product Code
    Regulation Number
    892.1680
    Reference & Predicate Devices
    Reference Devices :

    K133259

    Intended Use

    Multix Fusion Max system is a radiographic system used in hospitals, clinics, and medical practices. Multix Fusion Max enables radiographic exposures of the whole body, including the skull, chest, abdomen, and extremities, and may be used on pediatric, adult, and bariatric patients. Exposures may be taken with the patient sitting, standing, or in the prone position. The Multix Fusion Max system is not meant for mammography.

    Multix Fusion Max uses a mobile (wired), fixed (integrated), or wireless digital detector to generate diagnostic images by converting X-rays into image signals. The Multix Fusion Max is also designed to be used with conventional film/screen or Computed Radiography (CR) cassettes.

    Device Description

    The Multix Fusion Max radiography X-ray system is a modular system of X-ray components (ceiling suspension with X-ray tube, Bucky wall stand, Bucky table, X-ray generator, portable wireless and integrated detectors), the same as the predicate, the Multix Fusion. This 510(k) submission describes modifications to the predicate device, the Multix Fusion, cleared via K142049.

    AI/ML Overview

    The provided text is a 510(k) summary for the Siemens Multix Fusion Max X-ray system. This document is a premarket notification to the FDA to demonstrate that the new device is substantially equivalent to a legally marketed predicate device. As such, the "acceptance criteria" and "study that proves the device meets the acceptance criteria" refer to demonstrating substantial equivalence, not typically a clinical performance study with specific metrics like sensitivity or specificity for an AI-powered diagnostic device.

    This document describes the device as a radiographic system with various hardware and software modifications compared to its predicate. The "acceptance criteria" here are primarily about demonstrating that the new modifications do not introduce new safety concerns and that the device performs functionally similarly to its predicates.

    Here's an analysis of the provided information, framed within your request, but acknowledging the nature of a 510(k) for a general radiography system rather than an AI diagnostic algorithm:

    1. Table of Acceptance Criteria (as implied by a 510(k) for an X-ray system) and Reported Device Performance

    For an X-ray system like the Multix Fusion Max, "acceptance criteria" are defined by its substantial equivalence to predicate devices, focusing on technical specifications, safety, and operational performance. The "performance" is the demonstration that these are met or that differences are not significant and do not raise new safety/effectiveness questions.

    | Acceptance Criteria (Implied by the 510(k) Process for an X-ray System) | Reported Device Performance |
    | --- | --- |
    | Similar indications for use | "Same" as the predicate (Multix Fusion, K142049) and within the same classification regulation. |
    | Similar technological characteristics (hardware) | Many components are "same" (e.g., tube, generator, ceiling-mounted support, imaging system). New components (detectors, Bucky wall stand, table, grid) have comparable specifications (e.g., DQE and MTF values are "Difference not significant" or "Same"). The wireless Bucky wall stand offers only motorized height adjustment (versus the predicate's motor/manual option), a functional difference that does not affect safety or fundamental performance. |
    | Similar technological characteristics (software) | Operating system upgraded from Windows XP to Windows 7; software version VE21. Compliance with software guidance documents is stated. Tested for "continued conformance with special controls for medical devices containing software." |
    | Safety and effectiveness not compromised | Non-clinical verification and validation tests were conducted. Conformance to IEC and ISO standards (e.g., IEC 60601-1:2012, ISO 14971:2007 for risk management). Risk analysis was completed and risk controls implemented. Software testing demonstrated that all specifications met acceptance criteria. Visual and audible warnings are incorporated, and error messages are displayed if an error occurs. |
    | Image quality (for new detectors) | Comparative tables show "Difference not significant" or "Same" for key image quality metrics (DQE, MTF, pixel size, resolution) between new and predicate detectors. Functional tests included exposure workflow, detector sharing, and image resend; image quality tests included flat-field uniformity. |
    | No new safety risks introduced | "Siemens is of the opinion that the Multix Fusion Max does not introduce any new potential safety risk and is substantially equivalent to and performs as well as the predicate device." |
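    For context, the MTF values quoted in such comparison tables are typically derived from a measured line spread function (LSF). The sketch below is illustrative only and is not taken from the submission; real detector measurements follow standardized methods such as the slanted-edge technique of IEC 62220-1. It shows how an MTF curve falls out of an LSF, and why a sharper detector (narrower LSF) keeps more modulation at low spatial frequencies.

```python
import numpy as np

# Illustrative sketch: presampled MTF estimated from a synthetic LSF.
# Not from the submission; real measurements follow IEC 62220-1.

def mtf_from_lsf(lsf: np.ndarray) -> np.ndarray:
    """MTF estimate: magnitude of the LSF's FFT, normalized to 1 at DC."""
    spectrum = np.abs(np.fft.rfft(lsf))
    return spectrum / spectrum[0]

# Two synthetic Gaussian LSFs: a sharper detector has a narrower LSF.
x = np.linspace(-5, 5, 256)
sharp_lsf = np.exp(-x**2 / (2 * 0.5**2))
blurry_lsf = np.exp(-x**2 / (2 * 1.5**2))

mtf_sharp = mtf_from_lsf(sharp_lsf)
mtf_blurry = mtf_from_lsf(blurry_lsf)

# The sharper detector retains more modulation at low spatial frequencies.
assert np.all(mtf_sharp[1:10] >= mtf_blurry[1:10])
```

    Comparing two detectors' MTF curves frequency-by-frequency in this way is what underlies a "Difference not significant" or "Same" entry in the table.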

    2. Sample Size Used for the Test Set and the Data Provenance

    This is an X-ray system submission, not a study of an AI-powered diagnostic device. Therefore, there is no "test set" in the sense of a dataset of patient images for an AI algorithm. The testing described is primarily non-clinical, focusing on system function, image quality in a phantom/test environment, and compliance with safety standards.

    • Sample Size: Not applicable in the context of clinical "test set" for an AI algorithm. Non-clinical tests were conducted at the product development stage, but specific "sample sizes" of components or tests are not detailed.
    • Data Provenance: Not applicable. The "data" are internal testing results, not patient data from a specific country or collected retrospectively/prospectively.
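    Non-clinical image quality tests of the kind cited here (for example, the flat-field uniformity test listed for the detectors) reduce to simple statistics over phantom exposures rather than patient data. The sketch below is hypothetical; the `uniformity` function and the 5% pass threshold are illustrative assumptions, not figures from the submission.

```python
import numpy as np

# Hypothetical flat-field uniformity check. The metric and the 5%
# acceptance threshold are assumptions for illustration only.

def uniformity(flat_field: np.ndarray) -> float:
    """Relative spread of a flat-field exposure: (max - min) / mean."""
    return float((flat_field.max() - flat_field.min()) / flat_field.mean())

rng = np.random.default_rng(1)
# Simulated detector response to a uniform exposure: mean 1000 counts
# with small pixel-to-pixel noise.
image = 1000.0 + rng.normal(0.0, 3.0, size=(64, 64))

assert uniformity(image) < 0.05  # assumed 5% acceptance threshold
```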

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts

    Not applicable. There is no mention of expert readers or ground truth establishment in this type of 510(k) submission for a general X-ray system. The performance demonstration is based on technical specifications and engineering tests.

    4. Adjudication Method for the Test Set

    Not applicable.

    5. If a Multi-Reader, Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and If So, the Effect Size of Human Reader Improvement with vs. Without AI Assistance

    Not applicable. This is not an AI-powered diagnostic device, and no MRMC study is mentioned or required for this type of submission.

    6. If a Standalone Study (i.e., Algorithm Only, Without Human-in-the-Loop Performance) Was Done

    Not applicable. This is an X-ray imaging system, not an AI algorithm.

    7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.)

    Not applicable in the context of clinical ground truth. The "ground truth" for this device's performance demonstration relies on engineering specifications, adherence to international standards (e.g., IEC, ISO), and performance metrics like DQE and MTF measured in a controlled environment, not clinical outcomes or expert labels.

    8. The Sample Size for the Training Set

    Not applicable. This is an X-ray system, not an AI algorithm requiring a "training set."

    9. How the Ground Truth for the Training Set was Established

    Not applicable.

    In summary, the provided document describes a 510(k) premarket notification for a conventional X-ray system, not an AI-powered diagnostic device. Therefore, the questions related to AI algorithm development, clinical datasets, expert ground truth, and reader studies are not addressed and are not applicable to this type of device submission. The acceptance criteria are based on demonstrating substantial equivalence to predicate devices through non-clinical performance testing and compliance with recognized standards.


    K Number
    K141895
    Manufacturer
    Date Cleared
    2014-09-18

    (66 days)

    Product Code
    Regulation Number
    892.1720
    Reference & Predicate Devices
    Reference Devices :

    K133259, K141381, K141736

    Intended Use

    Intended for use by a qualified/trained doctor or technologist on both adult and pediatric patients for taking diagnostic radiographic exposures of the skull, spinal column, chest, abdomen, extremities, and other body parts. Applications can be performed with patient sitting, standing or lying in the prone or supine positions. Not for mammography.

    Device Description

    The MobileDiagnost wDR 2.0 system is a motorized mobile radiographic system consisting of a mobile base unit, and a user interface (computer, keyboard, display, mouse), combined with a flat solid state X-ray detector. It is used by the operator to generate, process and handle digital X-ray images. The MobileDiagnost wDR 2.0 integrates a new generation of wireless portable x-ray detectors (SkyPlate) to replace the former detector WPD FD-W17 cleared under the predicate submission (K111725).

    AI/ML Overview

    Unfortunately, based solely on the provided text, I cannot provide a detailed table of acceptance criteria and reported device performance with the specific metrics you requested (e.g., sample sizes, number of experts, adjudication methods, MRMC study details, ground truth types for test and training sets).

    The document is a 510(k) summary for the MobileDiagnost wDR 2.0, which focuses on demonstrating substantial equivalence to a predicate device rather than providing a comprehensive report of a standalone clinical study with detailed performance metrics against predefined acceptance criteria.

    However, I can extract and infer some information that partially addresses your request, particularly concerning the non-clinical and "clinical image concurrence" studies mentioned.

    Here's an attempt to answer your questions based on the available text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document primarily states that the device's non-clinical performance values are "basically equal or better than the predicate device." Specific acceptance criteria or targets are not explicitly listed in a table format in the provided text. The performance is reported in comparison to the predicate device.

    | Metric (Non-Clinical) | Acceptance Criteria (Inferred: "equal or better than predicate") | Reported Device Performance (MobileDiagnost wDR 2.0) |
    | --- | --- | --- |
    | Modulation Transfer Function (MTF) | Equal to or better than predicate (60% to 15%) | 61% to 14% (some values better than the predicate's) |
    | Detective Quantum Efficiency (DQE) | Equal to or better than predicate (66% to 22%) | 66% to 24% (some values better than the predicate's) |
    | Other non-clinical tests | Compliance with standards and satisfactory results | Complies with listed international and FDA-recognized consensus standards (IEC, AAMI, ISO) |
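    DQE, quoted above, is defined as SNR_out² / SNR_in². For an idealized photon-counting detector, DQE at zero spatial frequency equals the detector's quantum absorption efficiency. The Monte Carlo sketch below checks that identity; the 0.7 efficiency and photon counts are invented for illustration and have no connection to the devices in this submission.

```python
import numpy as np

# DQE(0) sketch for an idealized photon-counting detector: Poisson
# incident quanta, binomial detection. Efficiency value is invented.

rng = np.random.default_rng(0)
mean_photons = 10_000.0
eta = 0.7  # assumed quantum absorption efficiency

# Incident quanta per pixel are Poisson; detection is binomial thinning,
# so the detected counts are Poisson with mean eta * mean_photons.
incident = rng.poisson(mean_photons, size=200_000)
detected = rng.binomial(incident, eta)

snr_in_sq = mean_photons                     # Poisson: SNR^2 = mean
snr_out_sq = detected.mean() ** 2 / detected.var()

dqe_zero_freq = snr_out_sq / snr_in_sq
assert abs(dqe_zero_freq - eta) < 0.02       # recovers eta, ~0.7
```

    Measured DQE curves additionally fold in the MTF and noise power spectrum across spatial frequencies, which is why the table reports a range rather than a single number.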

    2. Sample Size Used for the Test Set and Data Provenance

    • Non-Clinical Tests: Sample sizes are not specified for the non-clinical tests (DQE, MTF, Aliasing, etc.). The provenance is not explicitly stated but is implicitly from laboratory testing of the device itself.
    • Clinical Image Concurrence Study: The sample size for the test set is not specified. Data provenance is not explicitly stated beyond the statement that "clinical images were collected and analyzed," which suggests, but does not confirm, a retrospective collection. No country of origin is specified.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Non-Clinical Tests: Not applicable, as these are objective physical measurements of device characteristics.
    • Clinical Image Concurrence Study: The text mentions a "single blinded, concurrence study" to ensure images meet "user needs" and provide "equivalent diagnostic capability" to the predicate. The number of experts is not specified, nor are their specific qualifications (e.g., "radiologist with 10 years of experience"). It implies expert readers assessed the images.

    4. Adjudication Method for the Test Set

    • Non-Clinical Tests: Not applicable.
    • Clinical Image Concurrence Study: The term "concurrence study" is used, implying agreement among readers or against a standard. However, the specific adjudication method (e.g., 2+1, 3+1, none) is not detailed in the provided text.
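    Although this submission reports no such statistic, agreement in a reader concurrence study of this kind is commonly summarized with a chance-corrected measure such as Cohen's kappa. A self-contained sketch with invented reader labels:

```python
# Cohen's kappa: chance-corrected agreement between two readers.
# The labels below are invented for illustration; the submission
# reports no agreement statistic.

def cohens_kappa(a: list[int], b: list[int]) -> float:
    """Kappa = (observed agreement - chance agreement) / (1 - chance)."""
    n = len(a)
    labels = set(a) | set(b)
    p_obs = sum(x == y for x, y in zip(a, b)) / n
    p_exp = sum((a.count(k) / n) * (b.count(k) / n) for k in labels)
    return (p_obs - p_exp) / (1 - p_exp)

# Two readers rating ten images as diagnostically acceptable (1) or not (0)
reader1 = [1, 1, 1, 0, 1, 1, 0, 1, 1, 0]
reader2 = [1, 1, 1, 0, 1, 0, 0, 1, 1, 1]
kappa = cohens_kappa(reader1, reader2)  # moderate agreement
```

    An adjudication scheme such as 2+1 or 3+1 would resolve the cases where readers disagree; the summary gives no indication of whether one was used.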

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size

    • The document describes a "single blinded, concurrence study" for clinical image analysis. While it compares the new device's images to the predicate, it doesn't explicitly state it was an MRMC comparative effectiveness study in the typical sense of measuring human reader improvement with AI vs. without AI assistance. This device is an X-ray system, not an AI diagnostic tool. Therefore, the concept of "human readers improve with AI vs without AI assistance" does not apply directly to this device's evaluation as presented. No effect size is mentioned.

    6. If a Standalone Study (i.e., Algorithm Only, Without Human-in-the-Loop Performance) Was Done

    • This question is not directly applicable as the MobileDiagnost wDR 2.0 is an X-ray imaging system, not an AI algorithm. The performance evaluation focuses on the image quality and physical characteristics of the imaging system itself, not an algorithm's diagnostic capabilities in isolation. The non-clinical performance data represents the standalone performance of the device's imaging capabilities.

    7. The Type of Ground Truth Used

    • Non-Clinical Tests: The "ground truth" for these tests are objective physical standards and measurements (e.g., how a detector should perform according to physics principles and consensus standards).
    • Clinical Image Concurrence Study: The "ground truth" was established by expert assessment for "equivalent diagnostic capability" and meeting "user needs." This is a form of expert consensus on image quality and diagnostic utility, comparing images from the new device to those from the predicate.

    8. The Sample Size for the Training Set

    • Not applicable / Not specified. The document describes performance testing of an imaging device, not a machine learning algorithm that requires a training set. The "clinical images" mentioned were for validation, not training.

    9. How the Ground Truth for the Training Set Was Established

    • Not applicable. As above, there is no mention of a training set for an algorithm in this context.

    In summary: The provided 510(k) summary focuses on demonstrating that the MobileDiagnost wDR 2.0 is substantially equivalent to its predicate device through non-clinical (technical) performance criteria and a clinical image concurrence study. It highlights compliance with recognized standards and claims that the new device's technical performance is "equal or better" than the predicate and that its images provide "equivalent diagnostic capability." However, it lacks the fine-grained details about sample sizes, expert qualifications, and specific adjudication methods for the clinical image study that would be typical for evaluating a new diagnostic AI algorithm.

