510(k) Data Aggregation

Search results: 8 found

    K Number: K211790
    Date Cleared: 2021-07-30 (50 days)
    Regulation Number: 892.1680
    Device Name: DX-D Imaging Package with XD Detectors

    Intended Use

    Agfa's DX-D Imaging Package with XD Detectors is indicated for use in general projection radiographic applications to capture for display diagnostic quality radiographic images of the human anatomy. The DX-D Imaging Package with XD Detectors may be used wherever conventional screen-film systems may be used.

    Agfa's DX-D Imaging Package with XD Detectors is not indicated for use in mammography.

    Device Description

    The DX-D Imaging Package, previously cleared under K142184, is a solid state x-ray system, a direct radiography (DR) system (product code MQB) intended to capture general radiographic images of the human body. It is a combination of Agfa's NX workstation with MUSICA™ image processing and one or more flat-panel detectors of the scintillator-photodetector type (Cesium Iodide - CsI or Gadolinium Oxysulfide - GOS). It is capable of replacing other direct radiography systems, as well as computed radiography systems with conventional or phosphor film cassettes.

    This submission adds the XD Detectors (XD 10/10+, XD 14/14+ and XD 17/17+) Flat Panel Detectors to Agfa's DX-D Imaging Package portfolio. Agfa's XD Detectors are currently marketed by Vieworks Co. Ltd. as the FXRD-4343VAW/VAW Plus, FXRD-3643VAW/VAW Plus, and FXRD-2530VAW/VAW Plus, which serve as the predicate for this submission (K200418).

    The optional image processing allows users to conveniently select image processing settings for different patient sizes and examinations. The image processing algorithms in the new device are identical to those previously cleared in the DX-D Imaging Package - DX-D 40 (K142184, the reference device) and other devices in Agfa's current radiography portfolio. The added offline workflow is identical to that of the Vivix-S VW (K200418) predicate device.

    Principles of operation and technological characteristics of the new, predicate and reference devices are the same. There are no changes to the intended use/indications of the device. The new device is physically and electronically similar to the predicate device (K200418), including the addition of an offline workflow capable of storing up to 200 images on the flat-panel detector for later viewing. It uses the same NX workstation with MUSICA™ image processing as the reference device (K142184) and the same flat panel detectors of the scintillator-photodetector type (Cesium Iodide - CsI or Gadolinium Oxysulfide - GOS) to capture and digitize the images as the predicate device (K200418). Laboratory data and image quality evaluations conducted with internal specialists confirm that performance is equivalent to the predicate. Differences between the devices do not alter the intended diagnostic effect, nor do they impact the safety and effectiveness of the device.

    AI/ML Overview

    The provided text describes a 510(k) premarket notification for Agfa’s DX-D Imaging Package with XD Detectors. The submission focuses on demonstrating substantial equivalence to a predicate device (Vieworks Vivix-S VW, K200418) and a reference device (Agfa DX-D Imaging Package - DX-D 40, K142184) rather than establishing new performance criteria or conducting extensive clinical trials of the AI component.

    Key takeaway: This submission primarily focuses on the physical components of the imaging system (detectors, workstation, image processing) and their equivalence to existing cleared devices. It does not contain details about specific acceptance criteria or performance metrics for an AI algorithm in the way one might expect for a novel AI-driven diagnostic device. The "AI" mentioned is related to existing "MUSICA image processing" which is identical to previously cleared versions and is referred to as "image processing algorithms", rather than a new AI model with specific diagnostic performance targets.

    Therefore, many of the requested details, especially those related to AI-specific performance criteria, ground truth establishment for a test set, and multi-reader studies, are not explicitly present in the provided document. The performance data presented refers to the physical detector characteristics (Spatial Resolution, DQE, MTF).

    Here's an attempt to answer the questions based on the provided text, highlighting where information is absent:


    1. A table of acceptance criteria and the reported device performance

    The document does not define explicit "acceptance criteria" in terms of clinical performance metrics for an AI algorithm. Instead, it demonstrates performance equivalence of the new detector models to existing ones.

    | Performance Characteristic | DX-D 40 Flat-Panel Detector (K142184) (Reference) | XD 10/10+ Wireless Detector (New Device) | XD 14/14+ Wireless Detector (New Device) | XD 17/17+ Wireless Detector (New Device) |
    |---|---|---|---|---|
    | Spatial Resolution | 3.5 lp/mm (for 6007/100 & 6007/200) | 4.0 lp/mm | 3.5 lp/mm | 3.5 lp/mm |
    | DQE @ 1 lp/mm | CsI: 0.494, GOS: 0.259 | XD: 0.500, XD+: 0.587 | XD: 0.425, XD+: 0.587 | XD: 0.412, XD+: 0.587 |
    | DQE @ 2 lp/mm | CsI: 0.379, GOS: 0.157 | XD: 0.401, XD+: 0.445 | XD: 0.321, XD+: 0.399 | XD: 0.345, XD+: 0.407 |
    | DQE @ 3 lp/mm | CsI: 0.215, GOS: 0.061 | XD: 0.288, XD+: 0.316 | XD: 0.206, XD+: 0.257 | XD: 0.220, XD+: 0.280 |
    | MTF @ 1 lp/mm | CsI: 0.685, GOS: 0.589 | XD: 0.729, XD+: 0.650 | XD: 0.751, XD+: 0.635 | XD: 0.726, XD+: 0.656 |
    | MTF @ 2 lp/mm | CsI: 0.386, GOS: 0.266 | XD: 0.424, XD+: 0.315 | XD: 0.446, XD+: 0.302 | XD: 0.428, XD+: 0.311 |
    | MTF @ 3 lp/mm | CsI: 0.209, GOS: 0.115 | XD: 0.236, XD+: 0.157 | XD: 0.247, XD+: 0.152 | XD: 0.231, XD+: 0.161 |

    The "acceptance criteria" appear to be met by demonstrating that the new detectors (XD series) have comparable or superior technical performance characteristics (Spatial Resolution, DQE, MTF) to the existing reference devices, and that the image processing ("MUSICA™ image processing") is identical to previously cleared devices. The document states: "The results of these tests fell within the acceptance criteria for the DX-D Imaging Package with XD Detectors." However, the quantitative thresholds for these "acceptance criteria" are not specified beyond the presented performance values.
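
    For context, the relationship between these metrics (as defined in the detector-physics standards commonly used for such measurements, e.g. IEC 62220-1, and not taken from the submission itself) is:

    $$\mathrm{DQE}(u) \;=\; \frac{\mathrm{MTF}^2(u)}{q \cdot \mathrm{NNPS}(u)}, \qquad \mathrm{NNPS}(u) \;=\; \frac{\mathrm{NPS}(u)}{\bar{S}^{\,2}},$$

    where $u$ is spatial frequency (lp/mm), $q$ is the incident photon fluence per unit area for the stated beam quality, $\mathrm{NPS}$ is the output noise power spectrum and $\bar{S}$ is the mean detector signal. A higher DQE at a given frequency means more of the incident X-ray information is preserved, i.e. comparable image quality at equal or lower dose, which is why the new detectors matching or exceeding the reference DQE values supports the equivalence argument.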

    2. Sample size used for the test set and the data provenance:

    • Test Set (for image quality evaluation): "anthropomorphic adult and pediatric phantoms".
      • Data Provenance: Not explicitly stated (likely internal laboratory data, given "internal specialists"). This was bench testing, not clinical data from patients.
    • Software Test Iterations (NX 23): Two software iterations were tested.
    • Performance Functionality Evaluation: Not a sample size of data, but related to the number of experts (see below).

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Image Quality Evaluations: "qualified internal experts" (radiographers) evaluated the overall image quality using phantoms. The exact number of radiographers is not specified.
    • Performance Functionality Evaluations: "four qualified experts". Their specific qualifications (e.g., radiologist, years of experience) are not detailed beyond "qualified experts".
    • Ground Truth: For the phantom studies, the ground truth is inherently defined by the known properties of the phantoms (e.g., specific structures, resolution targets).

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

    Not explicitly stated. The evaluations seem to be internal assessments for comparison rather than a formal human reader study with adjudication for a clinical diagnostic task.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance:

    • No MRMC study was performed. The document explicitly states: "No clinical trials were performed in the device. No animal or clinical studies were performed in the development of the new device. No patient treatment was provided or withheld."
    • The "image processing algorithms in the new device are identical to those previously cleared." This suggests that the "AI" (MUSICA image processing) is not a new or modified component requiring a new MRMC study to demonstrate clinical impact or improvement for human readers.

    6. If a standalone (i.e. algorithm-only, without human-in-the-loop) performance assessment was done:

    • Performance data (Spatial Resolution, DQE, MTF) are presented for the detectors themselves, which can be considered "standalone" technical performance metrics of the hardware.
    • For the MUSICA image processing software, its "standalone" performance is implied to be equivalent to its previously cleared versions since it is "identical." No new quantitative standalone performance metrics for the algorithm itself are provided in this submission beyond this statement of identity.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • For the technical performance data (Spatial Resolution, DQE, MTF) related to the detectors: The ground truth is based on standard metrology and engineering principles for measuring detector performance using physical test objects (e.g., bar patterns, uniform fields) and laboratory equipment according to established standards (a simplified sketch of one common MTF estimation approach follows this list).
    • For the image quality evaluations: The ground truth was based on "anthropomorphic adult and pediatric phantoms," meaning the content and structures within the phantoms served as the reference.
    • For the software testing: Ground truth for software verification and validation is against pre-defined requirements and design specifications, with "deviations or variances...documented in a defect database and addressed."
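
    As an illustration of the standard metrology referred to in the first bullet above, a simplified edge-method MTF estimate might look like the sketch below. This is not taken from the submission; the actual measurements would follow the applicable IEC procedures, which add oversampled slanted-edge handling and noise corrections not shown here.

```python
# Simplified 1-D edge-method MTF estimate (illustrative only).
import numpy as np

def mtf_from_edge(esf, pixel_pitch_mm):
    """Estimate the MTF from a sampled edge spread function (ESF)."""
    lsf = np.gradient(esf)                         # line spread function
    lsf = lsf * np.hanning(lsf.size)               # window to reduce spectral leakage
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                                  # normalize to 1 at zero frequency
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch_mm)  # cycles/mm == lp/mm
    return freqs, mtf

# Synthetic blurred edge sampled at a hypothetical 0.148 mm pixel pitch.
x = np.arange(-64, 64) * 0.148
esf = 1.0 / (1.0 + np.exp(-x / 0.2))
freqs, mtf = mtf_from_edge(esf, pixel_pitch_mm=0.148)
```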

    8. The sample size for the training set:

    The document mentions "MUSICA™ image processing" which contains algorithms. However, this submission states these algorithms are "identical to those previously cleared" and does not describe the development or training of any new AI models. Therefore, information about a training set for a novel AI algorithm is not applicable to this 510(k) submission, as it is leveraging previously cleared technology.

    9. How the ground truth for the training set was established:

    As no new AI model training is described in this submission, information on how a training set's ground truth was established is not applicable.


    K Number: K172784
    Date Cleared: 2017-10-13 (28 days)
    Regulation Number: 892.1680
    Device Name: DX-D Imaging Package

    Intended Use

    Agfa's DX-D Imaging Package is indicated for use in general projection radiographic applications to capture for display diagnostic quality radiographic images of human anatomy. The DX-D Imaging Package may be used wherever conventional screen-film systems may be used.

    Agfa's DX-D Imaging Package is not indicated for use in mammography.

    Device Description

    Agfa's DX-D Imaging Package is a solid state flat panel x-ray system, a direct radiography (DR) system (product code MQB) intended to capture general radiography images of the human body. It is a combination of Agfa's NX workstation and one or more flat-panel detectors.

    This submission is to add the DR14e and DR17e Flat Panel Detectors to Agfa's DX-D Imaging Package portfolio. Agfa's DR 14e and DR 17e wireless panels are currently marketed by Innolux as RIC 35C/G and RIC 43C/G, which is one of the predicates for this submission.

    Principles of operation and technological characteristics of the new and predicate devices are the same. There are no changes to the intended use/indications of the device. The new device is physically and electronically identical to both predicates, K161368 and K162344. It uses the same workstation as predicate K161368 and the same flat panel detectors to capture and digitize the images as predicate K162344.

    AI/ML Overview

    1. A table of acceptance criteria and the reported device performance

    | Performance Characteristics | Acceptance Criteria (Implicit: Equivalence to Predicates) | Reported Device Performance (DR 14e & DR 17e) |
    |---|---|---|
    | Image Quality | At least equivalent to other Agfa flat-panel detectors currently on the market (DX-D 10 and DX-D 20), including the predicate K161368. | At least the same if not better image quality than other flat-panel detectors currently on the market (DX-D 10 and DX-D 20). |
    | Usability/Functionality | Supports a radiographic workflow including calibration, compatibility, linear dose, and dynamic ranges. | Results fell within the acceptance criteria for all flat-panel detectors, supporting a radiographic workflow. |
    | Grid Evaluation | Consistent with other Agfa HealthCare flat-panel detectors currently on the market, including the predicate K161368, and fulfills intended use. | Results remained consistent with other Agfa HealthCare flat-panel detectors, including the predicate K161368. Intended use fulfilled. |
    | Software Validation | No risks identified in the "Not Acceptable Region" after mitigation for XRDi18 and NX9000. Benefits of the device outweigh residual risks. | For XRDi18: zero risks identified in the Not Acceptable Region. For NX9000: no identified residual risks in the ALARP region, only three in the Broadly Acceptable Region. Device assumed safe; benefits outweigh residual risks. |
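
    The Software Validation row above uses ISO 14971 risk-region terminology (Not Acceptable, ALARP, Broadly Acceptable). Purely as an illustration of that concept, a risk-region lookup could be sketched as below; the actual severity/probability scales and region boundaries live in the manufacturer's risk management file and are not disclosed in the summary.

```python
# Hypothetical ISO 14971-style risk-region classification (illustrative only).
from enum import Enum

class RiskRegion(Enum):
    BROADLY_ACCEPTABLE = "Broadly Acceptable"
    ALARP = "ALARP (As Low As Reasonably Practicable)"
    NOT_ACCEPTABLE = "Not Acceptable"

def classify_risk(severity: int, probability: int) -> RiskRegion:
    """Map severity and probability of harm (each rated 1-5) to a risk region."""
    score = severity * probability
    if score >= 15:
        return RiskRegion.NOT_ACCEPTABLE
    if score >= 6:
        return RiskRegion.ALARP
    return RiskRegion.BROADLY_ACCEPTABLE

# Example: a low-severity, unlikely residual software risk is broadly acceptable.
assert classify_risk(severity=2, probability=2) is RiskRegion.BROADLY_ACCEPTABLE
```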

    2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

    • Test Set Sample Size: Not explicitly stated with a specific number of images or cases. The document mentions "anthropomorphic phantoms" for Image Quality Validation and that "When patient images were utilized, they were first anonymized." This suggests a mix of phantom images and anonymized retrospective patient images, but no specific count is provided for either.
    • Data Provenance: Not specified. The document does not mention the country of origin for any patient data. The studies appear to be internal laboratory tests.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)

    • Image Quality Validation: Evaluated by "qualified independent radiographers." No specific number of experts or their years of experience are provided.
    • Usability and Functionality Evaluations: Conducted with a "qualified internal radiographer." No specific number of experts or their years of experience are provided.
    • Grid Evaluation: Conducted with a "qualified internal radiographer." No specific number of experts or their years of experience are provided.
    • Software Validation (Risk Analysis): Performed by a "risk management team." No specific number or qualifications are given for this team.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set

    No formal adjudication method (like 2+1 or 3+1 consensus) is described for any of the evaluations. The image quality and usability/functionality tests were evaluated by "qualified independent radiographers" or "qualified internal radiographer," suggesting individual assessment rather than a multi-reader consensus process.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance

    No MRMC comparative effectiveness study was conducted. The study performed was for substantial equivalence with predicate devices, focusing on technical performance and image quality compared to existing devices, not on human reader improvement with or without AI assistance. The device is a digital radiography imaging package, not explicitly described as an AI-powered diagnostic aid.

    6. If a standalone (i.e. algorithm-only, without human-in-the-loop) performance assessment was done

    Yes, the primary evaluation was a standalone assessment of the device's image acquisition and processing capabilities. The studies focused on the performance of the DX-D Imaging Package (DR 14e & DR 17e detectors in combination with the NX workstation) itself, confirming its image quality, usability, and functionality against established internal benchmarks and predicate devices. While radiographers were involved in evaluating image quality, their role was to assess the output of the device, not to measure their own diagnostic performance with or without the device.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    • Image Quality Validation: The ground truth for image quality was based on expert assessment by "qualified independent radiographers" comparing images from the new device to established quality standards and images from existing Agfa flat-panel detectors (DX-D 10 and DX-D 20, and predicate K161368). The phrasing "at least the same if not better image quality" implies a subjective expert evaluation against a benchmark.
    • Usability and Functionality: Ground truth was adherence to predefined acceptance criteria for workflow, calibration, compatibility, linear dose, and dynamic ranges, assessed by a "qualified internal radiographer."
    • Grid Evaluation: Ground truth was consistency with other Agfa HealthCare flat-panel detectors and fulfillment of intended use, assessed by a "qualified internal radiographer."
    • Software Validation: Ground truth for software safety and risk was established against internal risk management frameworks and relevant product/quality management standards (IEC 60601-1, ISO 14971, ISO 13485, ISO 62366, ISO 62304).

    8. The sample size for the training set

    The document states, "No animal or clinical studies were performed in the development of the new device." This implies that there wasn't a separate training set of clinical images in the context of developing a new algorithm or AI model for diagnostic purposes. The device is hardware (detectors) and associated software, and development would involve engineering and performance testing rather than machine learning training sets.

    9. How the ground truth for the training set was established

    Not applicable, as no specific training set for an AI algorithm appears to have been used or described for this device submission. The device is described as an imaging package with new detectors integrating into an existing workstation, not a novel AI diagnostic tool requiring extensive clinical training data.


    Device Name: DX-D Imaging Package (DR 10s), DX-D Imaging Package (DR 14s), DX-D Imaging Package (DR 14s)

    Intended Use

    Agfa's DX-D Imaging Package is indicated for use in general projection radiographic applications to capture for display diagnostic quality radiographic images of human anatomy. The DX-D Imaging Package may be used wherever conventional screen-film systems may be used.

    Agfa's DX-D Imaging Package is not indicated for use in mammography.

    Device Description

    Agfa's DX-D Imaging Package is a solid state flat panel x-ray system, a direct radiography (DR) system (product code MQB) intended to capture images of the human body. It is a combination of Agfa's NX workstation and one or more flat-panel detectors.

    This submission is to add the DR10s and DR14s Flat Panel Detectors to Agfa's DX-D Imaging Package portfolio. The DX-D Imaging Package with the DR 10s and DR 14s wireless panels will be labeled as the Pixium 2430EZ and Pixium 3543EZ. DR 10s and DR 14s are commercial trade names used by Agfa HealthCare for marketing purposes only.

    Principles of operation and technological characteristics of the new and predicate device are the same. There are no changes to the intended use/indications of the device. The new device is physically and electronically identical to the predicate, K142184. It uses the same workstation and the same scintillator-photodetector flat panel detectors to capture and digitize the images as predicate K142184.

    AI/ML Overview

    This document describes the 510(k) summary for Agfa's DX-D Imaging Package, focusing on the newly added DR10s and DR14s Flat Panel Detectors. The submission aims to demonstrate substantial equivalence to a predicate device (K142184).

    Here's an analysis of the acceptance criteria and the study that proves the device meets them, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly present a dedicated "acceptance criteria" table with specific quantitative thresholds. Instead, the acceptance criteria are implied to be equivalence to the predicate device (K142184) and performance falling within expected parameters for radiographic systems. The reported device performance is presented through comparison with other Agfa flat-panel detectors on the market, including the predicate.

    Below is a table summarizing the performance characteristics of the new detectors (DR 10s, DR 14s) and the predicate (represented by DX-D 10, DX-D 20, DX-D 40 from the comparison table, as the predicate K142184's individual detector specs aren't explicitly broken out separately):

    | Characteristic | DX-D 10 Flat-Panel Detector (Predicate Example) | DR 10s Wireless Detector (New Device) | DR 14s Wireless Detector (New Device) | Acceptance Criteria (Implied) | Reported Device Performance (Summary) |
    |---|---|---|---|---|---|
    | Scintillator | CsI, GOS | CsI | CsI, GOS | Equivalent to predicate (CsI, GOS) | DR 10s uses CsI, DR 14s uses CsI, GOS. Deemed equivalent. |
    | Cassette size | 35x43cm/14x17in | 24x30cm | 35x43cm/14x17in | Appropriate for general radiography. | Different sizes, but appropriate for general radiography. |
    | Pixel Size | 139 µm | 148 µm | 148 µm | Comparable to predicate (139-140 µm). | Slightly larger pixel size but deemed equivalent. |
    | A/D Conversion | 14 bits | 16 bits | 16 bits | Comparable to predicate (14 bits). | Higher (16 bits) – considered an improvement. |
    | Interface | Ethernet | AED & Synchronized | AED & Synchronized | Reliable interface. | AED & Synchronized. |
    | Communication | Tethered | Wireless | Wireless | Reliable communication. | Wireless (new feature). |
    | Power | I/O Interface Box: 100-240 VAC, 47-63 Hz | Battery: replaceable & rechargeable | Battery: replaceable & rechargeable | Reliable power. | Battery-powered for wireless operation. |
    | Weight | 3.9 kg (8.6 lbs) | 1.6 kg (3.53 lbs) | 2.8 kg (6.17 lbs) | Ergonomically acceptable. | Lighter than predicate examples (due to wireless nature). |
    | DQE @ 1 lp/mm | 0.530/0.608 | 0.523 | 0.521/0.292 | Equivalent to predicate. | Comparable values, "equivalent to other flat-panel detectors." |
    | DQE @ 2 lp/mm | 0.219/0.298 | 0.476 | 0.449/0.189 | Equivalent to predicate. | Comparable values, "equivalent to other flat-panel detectors." |
    | DQE @ 3 lp/mm | 0.092/0.147 | 0.295 | 0.296/0.071 | Equivalent to predicate. | Comparable values, "equivalent to other flat-panel detectors." |
    | MTF @ 1 lp/mm | 0.205/0.456 | 0.637 | 0.638/0.526 | Equivalent to predicate. | Comparable values, "equivalent to other flat-panel detectors." |
    | MTF @ 2 lp/mm | 0.106/0.304 | 0.360 | 0.363/0.208 | Equivalent to predicate. | Comparable values, "equivalent to other flat-panel detectors." |
    | MTF @ 3 lp/mm | 0.092/0.147 | 0.199 | 0.198/0.081 | Equivalent to predicate. | Comparable values, "equivalent to other flat-panel detectors." |
    | Image Acquisition/hr. | 150 | 240 | 240 | At least equivalent to predicate (150). | Higher (240) – considered an improvement. |
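
    One way to read the Pixel Size and MTF/DQE rows together (a standard sampling relationship, not stated in the submission): the highest spatial frequency a detector with pixel pitch $p$ can represent is the Nyquist frequency,

    $$f_{\mathrm{Nyquist}} = \frac{1}{2p}, \qquad \frac{1}{2 \times 0.148\ \mathrm{mm}} \approx 3.4\ \mathrm{lp/mm}, \qquad \frac{1}{2 \times 0.139\ \mathrm{mm}} \approx 3.6\ \mathrm{lp/mm},$$

    which is why the comparison reports DQE and MTF only up to about 3 lp/mm for these detectors.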

    The overall acceptance criteria for the study is "Substantial Equivalence" to the predicate device (K142184), demonstrated through:

    • Identical Indications for Use.
    • Same principles of operation and technological characteristics (despite some hardware differences).
    • Performance data (laboratory and clinical evaluations) ensuring equivalence.

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size:
      • For laboratory image quality (DQE, MTF) comparisons and grid evaluation: The document does not specify a numerical sample size in terms of images or measurements. It states "equivalent test protocols as used for the cleared detectors" were used and the results "confirmed that the DX-D Imaging Package with DR 10sC, DR14sC, and DR14sG flat-panel detectors was equivalent to other flat-panel detectors Agfa currently markets including the predicate (K142184)."
      • For usability and functionality evaluations: Not specified.
      • For Image Quality Validation testing (using anthropomorphic phantoms): Not specified.
      • For in-hospital image quality comparisons ("clinical evaluations"): "anonymized" patient images were utilized, but the number of images or cases is not specified.
    • Data Provenance: The data appears to be retrospective (for human image data, implied from "anonymized to remove all identifying patient information" and "No animal or clinical studies were performed in the development of the new device. No patient treatment was provided or withheld.") and laboratory-generated (for DQE, MTF, grid, usability, and phantom studies). The country of origin is not explicitly stated.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    • For laboratory image quality, grid evaluation, usability/functionality:
      • "qualified individuals employed by the sponsor" conducted these evaluations.
      • "qualified independent radiographers" conducted usability and functionality evaluations.
      • "a qualified internal radiographer" conducted the grid evaluation.
      • "qualified independent radiographers" evaluated anthropomorphic phantoms.
    • For in-hospital image quality comparisons:
      • "qualified independent radiologists" conducted these comparisons.
    • Qualifications: "Qualified independent radiographers" and "qualified independent radiologists" are mentioned. Specific experience levels (e.g., "10 years of experience") are not provided. The term "qualified" implies they possess the necessary expertise for the task.

    4. Adjudication Method for the Test Set

    The document does not explicitly state an adjudication method (e.g., 2+1, 3+1). The "clinical evaluations" and "in-hospital image quality comparisons" mention "qualified independent radiologists" in plural, suggesting a consensus or comparison approach among them, but details are not provided.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • No MRMC comparative effectiveness study was done in the sense of measuring human reader improvement with vs. without AI assistance.
    • The studies conducted were focused on demonstrating that the new device's image quality and performance were equivalent to the predicate device and other established systems, meaning human readers would perform similarly with the new device as with the predicate.

    6. Standalone (Algorithm Only) Performance Study

    • No standalone algorithm performance (AI-only) study was done for diagnostic interpretation. The device is an imaging package (hardware detectors and workstation) for capturing and displaying images, not an AI diagnostic algorithm.

    7. Type of Ground Truth Used

    • For laboratory image quality (DQE, MTF, grid): The "ground truth" is based on physical measurements and standardized test protocols.
    • For usability and functionality: The "ground truth" is based on expert assessment by radiographers against pre-defined workflow and compatibility requirements.
    • For image quality validation (phantoms): The "ground truth" is based on expert assessment by radiographers of the generated images, likely comparing features to expected phantom characteristics and established image quality standards.
    • For in-hospital image quality comparisons: The "ground truth" is implicitly based on radiological expert consensus (potentially with existing patient reports as a reference, though this is not specified), primarily for qualitative comparison against images produced by predicate devices.

    8. Sample Size for the Training Set

    The document does not mention a training set, as this device (an X-ray imaging package) is not an AI diagnostic device that requires a training set in the typical machine learning sense. The "software validation testing" refers to verification and validation of the software components against predefined requirements, not training a machine learning model.

    9. How the Ground Truth for the Training Set Was Established

    Not applicable, as no training set for an AI algorithm is mentioned or implied.


    K Number: K142184
    Date Cleared: 2014-10-16 (69 days)
    Regulation Number: 892.1680
    Device Name: DX-D IMAGING PACKAGE

    Intended Use

    Agfa's DX-D Imaging Package is indicated for use in general projection radiographic applications to capture for display diagnostic quality radiographic images of human anatomy. The DX-D Imaging Package may be used wherever conventional screen-film systems may be used.

    Agfa's DX-D Imaging Package is not indicated for use in mammography.

    Device Description

    Agfa's DX-D Imaging Package is a solid state flat panel x-ray system, a direct radiography (DR) system (product code MQB) intended to capture images of the human body. It is a combination of Agfa's NX workstation and one or more flat-panel detectors.

    This submission is to add the DX-D40C/G Flat Panel Detector to Agfa's DX-D Imaging Package portfolio. Agfa's DX-D40C/G is currently marketed by Vieworks as the ViVIX-S Wireless Panel (K122865), which is one of the predicates for this submission.

    Principles of operation and technological characteristics of the new and predicate devices are the same. The new device is physically and electronically identical to both predicates, K121095 and K122865. It uses the same workstation as predicate K121095 and the same scintillator-photodetector flat panel detectors to capture and digitize the images as predicate K122865.

    AI/ML Overview

    The document describes the new device, Agfa's DX-D Imaging Package, as substantially equivalent to two predicate devices (K121095 and K122865) and does not provide explicit acceptance criteria with specific numerical thresholds for performance metrics, so the requested acceptance-criteria table cannot be populated with quantitative values. The document also states that "image quality clinical evaluations" were done but does not give the details of such a study. Without additional information, the acceptance criteria and the study demonstrating that the device meets them cannot be described.


    K Number: K141602
    Date Cleared: 2014-09-12 (88 days)
    Regulation Number: 892.1680
    Device Name: DX-D IMAGING PACKAGE

    Intended Use

    Agfa HealthCare's DX-D Imaging Package is indicated for use in general projection radiographic applications to capture for display diagnostic quality radiographic images of human anatomy for adult, pediatric and neonatal examinations. The DX-D Imaging Package may be used wherever conventional screen-film systems, CR or DR systems may be used.

    Agfa HealthCare's DX-D Imaging Package is not indicated for use in mammography.

    Device Description

    Agfa HealthCare's DX-D Imaging Package is a solid state x-ray system, a direct radiography (DR) system (product code MQB) intended to capture images of the human body. It is a combination of Agfa HealthCare's NX Workstation, one or more flat-panel detectors, and needle-phosphor detectors for direct radiography (DR) applications.

    DX-D Imaging Package uses the NX Workstation to process data utilizing Agfa HealthCare's MUSICA image processing software, which includes optional image processing algorithms for adult, pediatric and neonatal images that were previously cleared for use in Agfa HealthCare's DX-D Imaging Package (K122736). The acronym MUSICA stands for Multi-Scale-Image-Contrast-Amplification. MUSICA acts on the acquired images to preferentially enhance the diagnostically relevant, moderate and subtle contrasts.

    This submission is to obtain clearance for Agfa HealthCare to market the DX-D Imaging Package using a minimum of 50% dose reduction marketing claims.

    Principles of operation and technological characteristics of the DX-D Imaging Package and predicate devices are the same. The DX-D Imaging Package is physically and electronically identical to the predicate K122736 since it is the same device; however, Agfa HealthCare would like to include a minimum of 50% dose reduction claims for marketing purposes. It uses the same workstation and same scintillator-photodetector flat panel detectors, needle-phosphor detectors and cassettes, or photo-stimulable imaging plates to capture and digitize the image.

    AI/ML Overview

    The provided text describes the acceptance criteria and the study conducted for the Agfa HealthCare DX-D Imaging Package, focusing on its dose reduction claims.

    Here's a breakdown of the requested information:

    1. Table of Acceptance Criteria and Reported Device Performance

    | Acceptance Criteria | Reported Device Performance |
    |---|---|
    | Minimum 50% dose reduction (primary endpoint) | 58% dose reduction with the DX-D30C DR detector; 60% dose reduction with HD5.0 CR plates |
    | Image quality equivalent to predicate devices | Confirmed by laboratory data and image quality evaluations with board-certified radiologists |

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size:
      • Images: 195 images (13 exposures for each of 5 phantoms using 3 detector types).
      • Phantoms: 5 different anatomical phantoms (skull, chest, abdomen, hand, neonatal).
    • Data Provenance: The study was a laboratory evaluation using phantoms, not human patient data. Therefore, country of origin or retrospective/prospective classification isn't directly applicable in the conventional sense for patient data. It's an experimental study conducted by the sponsor.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • Number of Experts: Five (5)
    • Qualifications of Experts: Board-certified radiologists.

    4. Adjudication Method for the Test Set

    The radiologists were asked to match test images to a reference image by scrolling through a series of test images until the closest match was found. There was a consistency check where image positions were reversed and a second reading session was done with a different reference image. However, the text does not describe an adjudication method for reconciling disagreements among multiple readers; rather, it implies individual assessment and an average calculation of dose reduction.
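
    Based on that matching procedure, the averaged dose-reduction figure could be derived along the following lines. This is a hypothetical reconstruction with invented per-reader numbers; the submission reports only the averaged results (58% for the DR detector, 60% for the CR plates).

```python
# Hypothetical reconstruction of the dose-reduction calculation (illustrative only).
import numpy as np

reference_dose_uGy = 10.0                  # detector dose of the reference image
# Reduced dose judged "visually equivalent" by each of 5 readers for 5 phantoms (µGy).
matched_dose_uGy = np.array([
    [4.2, 4.0, 4.5, 3.9, 4.3],
    [4.1, 4.4, 4.0, 4.2, 4.6],
    [3.8, 4.3, 4.1, 4.4, 4.0],
    [4.5, 4.2, 3.9, 4.1, 4.3],
    [4.0, 4.1, 4.4, 4.2, 3.9],
])
dose_reduction = 1.0 - matched_dose_uGy / reference_dose_uGy
print(f"mean dose reduction: {dose_reduction.mean():.0%}")   # ~58% with these numbers
```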

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

    This was not a MRMC comparative effectiveness study in the typical sense of evaluating human reader performance with and without AI assistance on diagnostic tasks. Instead, it was an image quality equivalency study where radiologists evaluated the visual equivalence of images generated at reduced doses compared to standard doses.

    • Effect Size: Not applicable as it was not a human-in-the-loop diagnostic study with AI assistance. The study focused on whether images produced with reduced dose were deemed visually equivalent to standard dose images by radiologists.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    Yes, a standalone technical evaluation of image quality was performed using Detective Quantum Efficiency (DQE). This is a measure of the imaging device's ability to preserve the signal-to-noise ratio. The DQE for the CsBr CR and CsI DR was found to be more than double that of the predicate device (CR MD4.0), indicating superior dose efficiency.

    7. The Type of Ground Truth Used

    • For the image quality evaluation by radiologists, the ground truth was expert consensus (visual equivalence) against a reference image exposed at a fixed condition. The "ground truth" for the dose reduction claim was established by the radiologists determining when a reduced-dose image was visually equivalent to a reference image, and then calculating the dose difference.

    8. The Sample Size for the Training Set

    The document does not specify a training set sample size. The study described is a performance evaluation of the imaging system and its dose reduction capability, not an AI algorithm that requires a separate training set. The "MUSICA image processing software" is mentioned as having "optional image processing algorithms" that were "previously cleared," but no details about their training are provided in this document.

    9. How the Ground Truth for the Training Set Was Established

    As no training set is described for the dose reduction study, this information is not provided in the document. If referring to the MUSICA image processing software, the document states these algorithms were "previously cleared," implying their ground truth and development were established in prior submissions, but details are not given here.


    K Number: K122736
    Date Cleared: 2013-03-11 (173 days)
    Regulation Number: 892.1680
    Device Name: DX-D IMAGING PACKAGE

    Intended Use

    Agfa's DX-D Imaging Package is indicated for use in general projection radiographic applications to capture for display diagnostic quality radiographic images of human anatomy for adult, pediatric and neonatal examinations. The DX-D Imaging Package may be used wherever conventional screen-film systems, CR or DR systems may be used.

    Agfa's DX-D Imaging Package is not indicated for use in mammography.

    Device Description

    The device is a direct radiography imaging system of similar design and construction to the original (predicate) version of the device. Agfa's DX-D Imaging Package uses the company's familiar NX workstation with MUSICA2™ image processing and flat panel detectors of the scintillator-photodetector type. Flat panel detectors with scintillators of both Cesium Iodide (CsI) and Gadolinium Oxysulfide (GOS) are available. The device is used to capture and directly digitize x-ray images without a separate digitizer. This new version includes optional image processing algorithms for adult, pediatric and neonatal images that were previously cleared for use in Agfa's computed radiography systems.

    The device uses a direct conversion process to convert x-rays into a digital signal. X-rays incident on the scintillator layer of the detector generate light that is absorbed by photo-detectors, converted to a digital signal and sent to the workstation, where the data is processed by Agfa's MUSICA image processing software. The acronym MUSICA stands for Multi-Scale-Image-Contrast-Amplification. MUSICA acts on the acquired images to preferentially enhance the diagnostically relevant, moderate and subtle contrasts.
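
    The proprietary details of MUSICA are not disclosed in the summary. Purely as a generic illustration of the multi-scale contrast-amplification idea it describes (decompose the image into band-pass detail layers and boost subtle contrasts more than strong ones), a sketch might look like the following; this is not Agfa's algorithm, and the parameters are invented.

```python
# Generic multi-scale contrast amplification sketch (NOT Agfa's MUSICA algorithm).
# Assumes a 2-D float image normalized to [0, 1]; all parameters are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_contrast_amplify(image, levels=4, gain=1.5, p=0.7):
    """Boost subtle contrasts via band-pass decomposition and a compressive gain."""
    blurred = [image]
    for k in range(1, levels + 1):
        blurred.append(gaussian_filter(image, sigma=2.0 ** k))
    out = blurred[-1].copy()                      # coarse residual
    for k in range(levels):
        detail = blurred[k] - blurred[k + 1]      # band-pass detail layer
        # A power law with p < 1 amplifies small (subtle) contrasts relatively more.
        out += gain * np.sign(detail) * np.abs(detail) ** p
    return np.clip(out, 0.0, 1.0)

# Example on a synthetic low-contrast image.
rng = np.random.default_rng(0)
img = 0.5 + 0.05 * rng.standard_normal((256, 256))
enhanced = multiscale_contrast_amplify(img)
```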

    Principles of operation and technological characteristics of the new and predicate devices are the same.

    AI/ML Overview

    While the provided text mentions that "Performance data including laboratory image quality measurements and image comparison studies by independent radiologists are adequate to ensure equivalence," it does not provide specific acceptance criteria or detailed results of these studies. Therefore, a table of acceptance criteria and reported device performance cannot be generated with the given information.

    Here's an analysis of what can be extracted and what is missing:


    Acceptance Criteria and Study Details (Based on Provided Text)

    The document states that the performance data from "laboratory image quality measurements" and "image comparison studies by independent radiologists" were "adequate to ensure equivalence." However, no specific metrics, targets, or results for these studies are provided.

    1. Table of Acceptance Criteria and Reported Device Performance

    | Acceptance Criteria | Reported Device Performance |
    |---|---|
    | Not specified in the document. The document states that performance data, including laboratory image quality measurements and image comparison studies by independent radiologists, were "adequate to ensure equivalence," but it does not detail these criteria or the specific performance results against them. | Not specified in the document. The document attests to the adequacy of the data without providing quantitative results or metrics. |

    2. Sample Size and Data Provenance for Test Set

    • Sample Size: Not specified.
    • Data Provenance: The document mentions "In-hospital image quality comparisons," implying diagnostic images from a clinical setting. It does not specify the country of origin.
    • Retrospective/Prospective: Not specified.

    3. Number of Experts and Qualifications for Ground Truth

    • Number of Experts: Not specified. The document mentions "independent radiologists."
    • Qualifications of Experts: The document states "qualified independent radiologists." Specific experience (e.g., "10 years of experience") is not provided.

    4. Adjudication Method for Test Set

    • Adjudication Method: Not specified.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was it done?: Not explicitly stated as an MRMC study in the standard terminology, but the document mentions "image comparison studies by independent radiologists" and "In-hospital image quality comparisons have been conducted with qualified independent radiologists." This suggests a human reader component. However, it's unclear if this was a comparative effectiveness study involving AI assistance.
    • Effect Size of Human Readers with AI vs. Without AI Assistance: Not applicable, as the document doesn't describe AI-assisted reading or its effect size. The new version includes "optional image processing algorithms," but the study described is for the device's imaging quality, not human-in-the-loop performance with new algorithms.

    6. Standalone (Algorithm Only) Performance Study

    • Was it done?: No. The document describes the device as an imaging system, not a standalone AI algorithm for interpretation. The "optional image processing algorithms" are part of the overall imaging package, and the performance validation is for the "complete system," not the algorithm in isolation for diagnostic accuracy.

    7. Type of Ground Truth Used

    • Type of Ground Truth: Implied to be expert consensus/radiologist interpretation. The document refers to "image comparison studies by independent radiologists" and "in-hospital image quality comparisons." Pathology or outcomes data are not mentioned.

    8. Sample Size for Training Set

    • Sample Size: Not applicable. This device is an X-ray imaging system, not a machine learning algorithm that requires a "training set" in the conventional sense for diagnostic image analysis. The "new version includes optional image processing algorithms for adult, pediatric and neonatal images that were previously cleared." These algorithms would have been developed and validated, but the document doesn't provide details on their training data.

    9. How Ground Truth for Training Set Was Established

    • How Ground Truth for Training Set Was Established: Not applicable (see point 8).

    K Number: K121095
    Date Cleared: 2012-08-16 (127 days)
    Regulation Number: 892.1680
    Device Name: DX-D IMAGING PACKAGE

    Intended Use

    Agfa's DX-D Imaging Package is indicated for use in general projection radiographic applications to capture for display diagnostic quality radiographic images of human anatomy. The DX-D Imaging Package may be used wherever conventional screen-film systems may be used.

    Agfa's DX-D Imaging Package is not indicated for use in mammography.

    Device Description

    The device is a direct radiography imaging system of similar design and construction to the predicate. Agfa's DX-D Imaging Package uses the company's familiar NX workstation with MUSICA2™ image processing and flat panel detectors of the scintillator-photodetector type. Flat panel detectors with scintillators of both Cesium Iodide (CsI) and Gadolinium Oxysulfide (GOS) are available. The device is used to capture and directly digitize x-ray images without a separate digitizer common to computed radiography systems. This new version uses a previously cleared detector with wireless communication capability.

    The device uses a direct conversion process to convert x-rays into a digital signal. X-rays incident on the scintillator layer of the detector generate light that is absorbed by photo-detectors, converted to a digital signal and sent to the workstation, where the data is processed by Agfa's MUSICA image processing software. The acronym MUSICA stands for Multi-Scale-Image-Contrast-Amplification. MUSICA acts on the acquired images to preferentially enhance the diagnostically relevant, moderate and subtle contrasts.

    AI/ML Overview

    The provided 510(k) summary does not contain specific acceptance criteria for device performance, nor does it detail a study proving the device meets particular quantitative metrics. Instead, it focuses on demonstrating substantial equivalence to a predicate device.

    Here's an analysis based on the available information:

    1. Table of Acceptance Criteria and Reported Device Performance

    As noted, the document does not specify quantitative acceptance criteria or performance metrics in the way your request implies (e.g., sensitivity, specificity, accuracy). The evaluation is based on comparison to a predicate device and adherence to industry standards for safety and image quality.

    | Acceptance Criteria | Reported Device Performance | Comments |
    |---|---|---|
    | Substantial Equivalence (General) | "Descriptive characteristics and performance data are adequate to ensure equivalence." | The primary "acceptance criterion" is demonstrating equivalence to the predicate device (K092669) in terms of intended use, technological characteristics, and safety. |
    | Image Quality | "Image quality measurements have been completed. Image quality comparisons between the new and predicate devices have been performed as well. Sample images have been provided." | No specific quantitative metrics (e.g., CNR, MTF, DQE) are provided in this summary, but the general statement indicates evaluation was done. |
    | System Performance Validation | "Performance of the complete system has been validated." | Broad statement indicating functional validation, but no specific performance targets are given. |
    | Conformance to Product Standards | Conforms to IEC 60601-1, IEC 60601-1-2, ACR/NEMA PS3.1-3.18 (DICOM). | Device adheres to relevant industry standards for medical electrical equipment safety, EMC, and digital imaging communication. |
    | Conformance to Management Standards | Conforms to ISO 14971, ISO 13485. | Device development and manufacturing processes conform to risk management and quality management system standards. |
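
    The conformance row above cites the ACR/NEMA DICOM standard (PS3.1-3.18). As a minimal, hypothetical illustration of what DICOM interchange of a digital X-ray (DX) object looks like in practice, using the open-source pydicom library (not part of this submission; the file name is an example only):

```python
# Read a DICOM digital X-ray object and inspect basic attributes (illustrative).
import pydicom

ds = pydicom.dcmread("example_dx_image.dcm")   # hypothetical file path
print(ds.SOPClassUID)                          # storage SOP class of the object
print(ds.Modality, ds.Rows, ds.Columns)        # e.g. "DX" plus image dimensions
pixels = ds.pixel_array                        # pixel data as a NumPy array
```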

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size: Not specified. The document mentions "Image quality measurements have been completed" and "Image quality comparisons between the new and predicate devices have been performed as well. Sample images have been provided." This implies a set of images was used for comparison, but the size is not disclosed.
    • Data Provenance: Not specified. Given the nature of a 510(k) demonstrating substantial equivalence for an imaging system, these would likely be technical image quality test images (e.g., phantoms) rather than clinical patient data. The summary states, "No clinical testing was performed in the development of the DX-D Imaging Package."

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    This information is not provided in the 510(k) summary. Since "No clinical testing was performed," it is unlikely that human experts were involved in establishing ground truth for a clinical test set from patient data. The "ground truth" for technical image quality assessments would be derived from the known properties of the phantoms used and objective image quality metrics.

    4. Adjudication Method for the Test Set

    This information is not provided. Given the lack of clinical testing and expert involvement, an adjudication method for a clinical test set is not applicable here.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done

    No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done. The document explicitly states: "No clinical testing was performed in the development of the DX-D Imaging Package." Therefore, no effect size of AI assistance could be reported.

    6. If a Standalone (algorithm only without human-in-the-loop performance) Study was Done

    The device is an imaging system (hardware and software for image acquisition and processing), not an AI algorithm intended for diagnostic interpretation. Therefore, a "standalone algorithm only" performance study in the context of interpretation accuracy (e.g., sensitivity/specificity for disease detection) is not applicable as it's outside the scope of this device's intended use or claim. The "performance" assessment focuses on image quality and system functionality.

    7. The Type of Ground Truth Used

    Based on "No clinical testing was performed," the ground truth for any image quality measurements would likely be based on physical phantom measurements and objective image quality metrics (e.g., spatial resolution, contrast-to-noise ratio, modulation transfer function, detective quantum efficiency) rather than expert consensus, pathology, or outcomes data from human subjects.

    8. The Sample Size for the Training Set

    This information is not applicable/not provided. The device is an X-ray imaging system with image processing (MUSICA2). While MUSICA2 is an image processing algorithm, the document describes it as enhancing contrast rather than performing diagnostic interpretation based on a trained model in the current AI sense. There is no mention of a "training set" for a machine learning model. The focus is on the physics of image acquisition and standard image processing.

    9. How the Ground Truth for the Training Set was Established

    This information is not applicable/not provided for the same reasons as #8. If MUSICA2 involves learned parameters, their derivation is not disclosed, but it's unlikely to involve a "ground truth" in the diagnostic context.


    K Number: K092669
    Date Cleared: 2009-11-06 (67 days)
    Regulation Number: 892.1680
    Device Name: DX-D IMAGING PACKAGE

    Intended Use

    Agfa's DX-D Imaging Package is indicated for use in providing diagnostic quality images to aid the physician with diagnosis. Systems can be used with MUSICA2 image processing to create radiographic images of the skeleton (including skull, spinal column and extremities), chest, abdomen and other body parts. Agfa's DX-D Imaging Package is not indicated for use in mammography.

    Device Description

    The new device is a direct radiography imaging system of similar design and construction to the predicates. Agfa's DX-D Imaging Package uses Agfa's familiar NX workstation with MUSICA2™ image processing and flat panel detectors of the scintillator-photodetector type. Flat panel detectors with scintillators of both Cesium Iodide (CsI) and Gadolinium Oxysulfide (GOS) are available. The device is used to capture and digitize x-ray images without a separate digitizer common to computed radiography systems.

    AI/ML Overview

    Please note: The provided text is a 510(k) summary for a radiographic imaging system. It primarily focuses on demonstrating substantial equivalence to predicate devices and does not contain detailed information about specific acceptance criteria, a dedicated study proving performance against those criteria, or the methodology typically associated with AI/CAD device evaluation (e.g., sample sizes for test/training sets, ground truth establishment, expert adjudication, or MRMC studies).

    Therefore, many of the requested sections will be marked as "Not provided in the document."

    Here's the breakdown of the information based on the provided text:


    1. Table of Acceptance Criteria and Reported Device Performance

    Not provided in the document. The 510(k) summary focuses on substantial equivalence to predicate devices based on similar technology and intended use, rather than presenting specific quantitative acceptance criteria and device performance metrics against them. It mentions "laboratory testing and an image comparison study" but does not detail their results or acceptance thresholds.


    2. Sample Size Used for the Test Set and Data Provenance

    Not provided in the document. The document refers to "laboratory testing and an image comparison study," but no details are given regarding the sample size of images/cases used in these studies, nor their provenance (e.g., country of origin, retrospective/prospective nature).


    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    Not provided in the document. The document does not describe the establishment of a ground truth for a test set, nor the involvement or qualifications of experts for such a purpose.


    4. Adjudication Method for the Test Set

    Not provided in the document. No information on adjudication methods for a test set is present.


    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size

    Not provided in the document. The text does not mention an MRMC comparative effectiveness study or any effect size related to human readers improving with AI assistance. The device described is a direct radiography imaging system, not an AI/CAD system.


    6. If a Standalone (Algorithm Only) Performance Study Was Done

    Not applicable. The device is an imaging system (hardware and associated software for image processing), not a standalone algorithm or AI. The document refers to "image processing" (MUSICA2™) but not as an AI-driven diagnostic algorithm that would require standalone performance evaluation.


    7. The Type of Ground Truth Used

    Not provided in the document. No information on the type of ground truth used is present. The document focuses on demonstrating that the new imaging package produces "diagnostic quality images," implying clinical interpretability, but does not detail how this was quantitatively assessed with a ground truth.


    8. The Sample Size for the Training Set

    Not applicable/Not provided. The device is a direct radiography imaging system. While its image processing (MUSICA2™) would have been developed and "trained" in a broader sense, the document does not describe it as an AI/machine learning model with a distinct "training set" in the context of device approval. The 510(k) relies on substantial equivalence to existing technology, not on novel AI performance evaluation.


    9. How the Ground Truth for the Training Set Was Established

    Not applicable/Not provided. As above, the device's nature and the content of the 510(k) summary do not include details about ground truth establishment for a training set in the context of AI approval.
