Search Results

Found 26 results

510(k) Data Aggregation

    K Number
    K180589
    Date Cleared
    2018-04-05

    (30 days)

    Product Code
    Regulation Number
    892.1650
    Why did this record match?
    Applicant Name (Manufacturer) :

    Agfa HealthCare N.V.

    Intended Use

    The DR 800 system is indicated for performing dynamic imaging examinations (fluoroscopy and/or rapid sequence) of the following anatomies/procedures:

    • Positioning fluoroscopy procedures
    • Gastro-intestinal examinations
    • Urogenital tract examinations
    • Angiography

    It is intended to replace fluoroscopic images obtained through image intensifier technology. In addition, the system is intended for projection radiography of all body parts.

    The DR 800 is not intended for mammography applications.

    Device Description

    Agfa HealthCare's DR 800 is an image-intensified fluoroscopic x-ray system (product code JAA) intended to capture images of the human body. The DR 800 is a floor-mounted R/F system that consists of a tube and operator console with a motorized tilting patient table and bucky, with an optional wall stand, FLFS overlay and ceiling suspension. The new device uses Agfa's NX workstation with MUSICA Dynamic™ image processing and flat-panel detectors for digital, wide-dynamic-range image capture. It is capable of replacing other direct radiography systems, image intensifying tubes and TV cameras, including computed radiography systems with conventional or phosphor film cassettes.
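
MUSICA Dynamic itself is proprietary and its internals are not described in the submission. As general context only, multiscale contrast processing of the kind the MUSICA family is known for can be sketched as a band-pass decomposition whose detail layers are remapped with a compressive nonlinearity. The sketch below is a generic, hypothetical illustration, not Agfa's algorithm:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_contrast_amplify(image, n_bands=4, gamma=0.7):
    """Generic multiscale contrast amplification (illustrative only).

    Splits the image into band-pass detail layers using Gaussian filters
    of increasing width, boosts low-amplitude detail with a power-law
    remapping, then recombines the layers with the coarse residual.
    """
    img = image.astype(np.float64)
    bands, previous = [], img
    for level in range(n_bands):
        smoothed = gaussian_filter(img, sigma=2.0 ** (level + 1))
        bands.append(previous - smoothed)  # detail lost at this scale
        previous = smoothed
    out = previous  # coarse base layer
    for band in bands:
        peak = float(np.abs(band).max()) or 1.0
        # gamma < 1 boosts subtle detail relative to strong edges
        out = out + np.sign(band) * peak * (np.abs(band) / peak) ** gamma
    return out
```

With gamma set to 1 the layers recombine to the original image exactly, which makes the decomposition easy to verify before tuning the nonlinearity.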

    AI/ML Overview

    Here's an analysis of the acceptance criteria and the supporting study described in the submission:

    1. Table of Acceptance Criteria and Reported Device Performance

| Acceptance Criteria Category | Specific Acceptance Criteria | Reported Device Performance |
| --- | --- | --- |
| Usability and Functionality | Support a radiographic and fluoroscopic workflow, including dynamic and static imaging, continuous and rapid sequence exams, calibration, and positioning. | "The results of these tests fell within the acceptance criteria for the DR 800 R/F X-ray system and some improvements will be implemented based on these results; the DR 800 supports a radiographic and fluoroscopic workflow including dynamic and static imaging, continuous and rapid sequence exams, calibration, and positioning." |
| Full Leg Full Spine (FLFS) | Mount stitch grid, imaging ranges of a certain tolerance, transversal collimation, and medical ruler exposure. The FLFS software for NX Luna should be "equal to or better" than current FLFS software on the market. FLFS landscape functional design meets user needs. | "The FLFS clinical validation for the mount stitch grid, imaging ranges of a certain tolerance and transversal collimation, and medical ruler exposure fulfilled the acceptance criteria and passed the assessment with minor fails that will be solved." "The results of the FLFS comparison test for NX Luna concluded that the FLFS software is equal to or better than the current FLFS software currently on the market. The results of the FLFS landscape validation for the NX Luna concluded that the FLFS landscape functional design meets user needs." |
| Dose Control | None of the detector doses may measure higher than the DIN-norm or exceed the dose limit curve for adult and pediatric phantoms with pulsed and continuous fluoroscopy exams. | "The results fulfilled the acceptance criteria that none of the detector doses would measure higher than the DIN-norm or exceed the dose limit curve." |
| Image Quality (Dynamic) | Pulsed and continuous fluoroscopy imaging with MUSICA Dynamic should be rated between "good and excellent" and pass acceptance criteria. | "The test results indicated that the pulsed and continuous fluoroscopy imaging of the DR 800 R/F X-ray system with MUSICA Dynamic was between good and excellent and passed the acceptance criteria." |
| Image Quality (Static) | MUSICA3 Abdomen+ images should be suitable for diagnosis with overall higher image quality. Static images made with the R/F flat-panel detector (FL4343) should demonstrate clinical acceptability. | "The test results showed MUSICA3 Abdomen+ images were suitable for diagnosis with an overall higher image quality. The test results proved clinical acceptability for static images made with the R/F flat-panel detector (FL4343)." |
| Software Risk (NX4.0) | No risks identified in the "Not Acceptable Region"; medical risk no greater than conventional x-ray film. | "For the NX4.0 (NX Luna) there are a total of 274 risks in the broadly acceptable region and 26 risks in the ALARP region, with only three of these identified as residual risks. Zero risks were identified in the Not Acceptable Region. Therefore, the device is assumed to be safe, the benefits of the device are assumed to outweigh the residual risk. The medical risk is no greater than with conventional x-ray film previously released to the field." (Note: this is a statement of compliance with risk assessment findings rather than a performance metric.) |
| Electrical Safety & EMC | Compliance with IEC 60601-1, IEC 60601-1-2, IEC 60601-1-3, IEC 60601-2-54, and FDA Subchapter J (21 CFR 1020.30 – 1020.32). | "The DR 800 with MUSICA Dynamic is compliant to the FDA Subchapter J mandated performance standard 21 CFR 1020.30 – 1020.32." (Implied compliance with IEC standards through testing.) |
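
The dose-control criterion above is a curve comparison: every measured detector dose must sit on or under a limit curve. A minimal sketch of such a check follows; the thicknesses and limit values are invented placeholders, not the actual DIN-norm figures:

```python
import numpy as np

# Hypothetical limit curve: allowed detector dose rate (µGy/s) as a
# function of phantom thickness (cm). Placeholder values only; the
# real DIN-norm limits differ.
limit_thickness_cm = np.array([5.0, 10.0, 15.0, 20.0, 25.0])
limit_dose_uGy_s = np.array([0.2, 0.4, 0.8, 1.6, 3.2])

def dose_within_limit(thickness_cm, measured_dose_uGy_s):
    """True if a measured detector dose stays on or under the limit curve."""
    allowed = np.interp(thickness_cm, limit_thickness_cm, limit_dose_uGy_s)
    return measured_dose_uGy_s <= allowed

# Example: a 12 cm phantom measured at 0.5 µGy/s
print(dose_within_limit(12.0, 0.5))  # interpolated limit ≈ 0.56 -> True
```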

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size: The document does not specify exact numerical sample sizes for the test sets used in the various evaluations (usability, FLFS, dose control, image quality). It mentions the use of "anthropomorphic phantoms" for image quality testing.
    • Data Provenance: The data is described as "Laboratory data" and "image quality evaluations conducted with independent radiologists."
      • Country of Origin: Not explicitly stated, but the company is Agfa HealthCare N.V. (Belgium), and the submission is to the FDA (USA), implying development might be international but for the US market. The use of "DIN-norm" suggests European origin or influence for some standards.
      • Retrospective or Prospective: Not explicitly stated. The nature of the "bench testing" and "clinical image quality evaluations" using phantoms suggests a controlled, prospective testing environment rather than retrospective analysis of patient data. "No clinical trials were performed in the device. No animal or clinical studies were performed in the development of the new device. No patient treatment was provided or withheld." confirms prospective, non-clinical study design.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Number of Experts:
      • Usability and Functionality: "qualified independent radiographers and internal experts." (Number not specified)
      • FLFS Clinical Validation: "qualified internal radiographer." (One identified)
      • FLFS Comparison Test: "several qualified internal experts." (Number not specified, but more than one)
      • Dose Control Validation: "qualified internal expert." (One identified)
      • Image Quality Validation: "qualified independent radiographers and internal experts." (Number not specified)
    • Qualifications of Experts: The experts are consistently referred to as "qualified independent radiographers" or "qualified internal experts" (or radiographer). Specific years of experience are not provided.

    4. Adjudication Method for the Test Set

    The document does not describe a formal adjudication method (e.g., 2+1, 3+1) for resolving discrepancies among expert opinions. Evaluations often involved "qualified independent radiographers and internal experts," implying consensus or individual assessment, but a specific arbitration process is not detailed.
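
For reference, the 2+1 scheme named above has two primary readers label each case independently, with a third reader consulted only on disagreement. A minimal sketch, with hypothetical label values:

```python
def adjudicate_2plus1(reader1, reader2, adjudicator):
    """2+1 adjudication: the third reader breaks ties between the
    two primary readers; agreement stands on its own."""
    if reader1 == reader2:
        return reader1
    return adjudicator

# Example: the primary readers disagree, so the adjudicator decides.
print(adjudicate_2plus1("positive", "negative", "positive"))  # -> positive
```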

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • MRMC Study: No, a multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly stated as being performed to compare human readers with and without AI assistance.
    • The study focused on showing equivalence or improvement of the new device (DR 800 with MUSICA Dynamic) compared to reference images and predicate devices/software, and its ability to meet acceptance criteria for performance, not on AI-assisted human reading performance. The "MUSICA Dynamic" is described as software for image processing, not necessarily an AI for diagnostic assistance to human readers.

    6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study

    Yes, the studies described are primarily standalone evaluations of the device's technical performance and image quality. These evaluations assessed the output of the DR 800 system with MUSICA Dynamic processing (which is an algorithm) directly, using phantoms and comparing its output to reference images and predicate devices. There is no indication of a human-in-the-loop performance study where the algorithm's output is then interpreted by a human and compared to human interpretation without the algorithm.

    7. Type of Ground Truth Used

    The ground truth used appears to be:

    • Expert Consensus/Opinion: For image quality, usability, and FLFS validation, the assessments by "qualified independent radiographers and internal experts" served as the ground truth for determining acceptability.
    • Technical Standards/Benchmarks: For dose control, "DIN-norm" and "dose limit curve" served as the objective ground truth.
    • Reference Images/Predicate Device Performance: For image quality validation, comparisons were made to "reference images using anonymized phantoms" and against predicate device performance, implying these served as a comparative ground truth.
    • Pre-defined Requirements: For usability and functionality, the 'acceptance criteria' themselves served as the ground truth of what the device needed to achieve.

    There is no mention of pathology or long-term outcomes data as ground truth.

    8. Sample Size for the Training Set

    • The document does not provide information on the sample size for a training set. This is likely because the device is an X-ray imaging system with image processing software (MUSICA Dynamic), and the focus of the 510(k) submission is on demonstrating substantial equivalence and validation of its performance, not on a machine learning model that would require a distinct training set. The "MUSICA Dynamic" algorithms are described as "similar to those previously cleared" or "identical to the predicate device" in terms of dynamic image processing.

    9. How the Ground Truth for the Training Set Was Established

    • As no training set is explicitly mentioned or detailed for a machine learning model, there is no information on how its ground truth would have been established. The image processing algorithms are likely based on established signal processing techniques, rather than learned directly from a labeled dataset in the way a modern AI model might be.

    K Number
    K172784
    Date Cleared
    2017-10-13

    (28 days)

    Product Code
    Regulation Number
    892.1680
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    Agfa HealthCare N.V.

    Intended Use

    Agfa's DX-D Imaging Package is indicated for use in general projection radiographic applications to capture for display diagnostic quality radiographic images of human anatomy. The DX-D Imaging Package may be used wherever conventional screen-film systems may be used.

    Agfa's DX-D Imaging Package is not indicated for use in mammography.

    Device Description

    Agfa's DX-D Imaging Package is a solid state flat panel x-ray system, a direct radiography (DR) system (product code MQB) intended to capture general radiography images of the human body. It is a combination of Agfa's NX workstation and one or more flat-panel detectors.

    This submission is to add the DR14e and DR17e Flat Panel Detectors to Agfa's DX-D Imaging Package portfolio. Agfa's DR 14e and DR 17e wireless panels are currently marketed by Innolux as RIC 35C/G and RIC 43C/G, which is one of the predicates for this submission.

    Principles of operation and technological characteristics of the new and predicate devices are the same. There are no changes to the intended use/indications of the device. The new device is physically and electronically identical to both predicates, K161368 and K162344. It uses the same workstation as predicate K161368 and the same flat panel detectors to capture and digitize the images as predicate K162344.

    AI/ML Overview

    1. A table of acceptance criteria and the reported device performance

| Performance Characteristics | Acceptance Criteria (Implicit: Equivalence to Predicates) | Reported Device Performance (DR 14e & DR 17e) |
| --- | --- | --- |
| Image Quality | At least equivalent to other Agfa flat-panel detectors currently on the market (DX-D 10 and DX-D 20), including the predicate K161368. | At least the same if not better image quality than other flat-panel detectors currently on the market (DX-D 10 and DX-D 20). |
| Usability/Functionality | Supports a radiographic workflow including calibration, compatibility, linear dose, and dynamic ranges. | Results fell within the acceptance criteria for all flat-panel detectors, supporting a radiographic workflow. |
| Grid Evaluation | Consistent with other Agfa HealthCare flat-panel detectors currently on the market, including the predicate K161368, and fulfills intended use. | Results remained consistent with other Agfa HealthCare flat-panel detectors, including the predicate K161368. Intended use fulfilled. |
| Software Validation | No risks identified in the "Not Acceptable Region" after mitigation for XRDi18 and NX9000. Benefits of the device outweigh residual risks. | For XRDi18: zero risks identified in the Not Acceptable Region. For NX9000: no identified residual risks in the ALARP region, only three in the Broadly Acceptable Region. Device assumed safe; benefits outweigh residual risks. |

    2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

    • Test Set Sample Size: Not explicitly stated with a specific number of images or cases. The document mentions "anthropomorphic phantoms" for Image Quality Validation and that "When patient images were utilized, they were first anonymized." This suggests a mix of phantom images and anonymized retrospective patient images, but no specific count is provided for either.
    • Data Provenance: Not specified. The document does not mention the country of origin for any patient data. The studies appear to be internal laboratory tests.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)

    • Image Quality Validation: Evaluated by "qualified independent radiographers." No specific number of experts or their years of experience are provided.
    • Usability and Functionality Evaluations: Conducted with a "qualified internal radiographer." No specific number of experts or their years of experience are provided.
    • Grid Evaluation: Conducted with a "qualified internal radiographer." No specific number of experts or their years of experience are provided.
    • Software Validation (Risk Analysis): Performed by a "risk management team." No specific number or qualifications are given for this team.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set

    No formal adjudication method (like 2+1 or 3+1 consensus) is described for any of the evaluations. The image quality and usability/functionality tests were evaluated by "qualified independent radiographers" or "qualified internal radiographer," suggesting individual assessment rather than a multi-reader consensus process.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with AI vs. without AI assistance

    No MRMC comparative effectiveness study was conducted. The study performed was for substantial equivalence with predicate devices, focusing on technical performance and image quality compared to existing devices, not on human reader improvement with or without AI assistance. The device is a digital radiography imaging package, not explicitly described as an AI-powered diagnostic aid.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done

    Yes, the primary evaluation was a standalone assessment of the device's image acquisition and processing capabilities. The studies focused on the performance of the DX-D Imaging Package (DR 14e & DR 17e detectors in combination with the NX workstation) itself, confirming its image quality, usability, and functionality against established internal benchmarks and predicate devices. While radiographers were involved in evaluating image quality, their role was to assess the output of the device, not to measure their own diagnostic performance with or without the device.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    • Image Quality Validation: The ground truth for image quality was based on expert assessment by "qualified independent radiographers" comparing images from the new device to established quality standards and images from existing Agfa flat-panel detectors (DX-D 10 and DX-D 20, and predicate K161368). The phrasing "at least the same if not better image quality" implies a subjective expert evaluation against a benchmark.
    • Usability and Functionality: Ground truth was adherence to predefined acceptance criteria for workflow, calibration, compatibility, linear dose, and dynamic ranges, assessed by a "qualified internal radiographer."
    • Grid Evaluation: Ground truth was consistency with other Agfa HealthCare flat-panel detectors and fulfillment of intended use, assessed by a "qualified internal radiographer."
    • Software Validation: Ground truth for software safety and risk was established against internal risk management frameworks and relevant product/quality management standards (IEC 60601-1, ISO 14971, ISO 13485, ISO 62366, ISO 62304).

    8. The sample size for the training set

    The document states, "No animal or clinical studies were performed in the development of the new device." This implies that there wasn't a separate training set of clinical images in the context of developing a new algorithm or AI model for diagnostic purposes. The device is hardware (detectors) and associated software, and development would involve engineering and performance testing rather than machine learning training sets.

    9. How the ground truth for the training set was established

    Not applicable, as no specific training set for an AI algorithm appears to have been used or described for this device submission. The device is described as an imaging package with new detectors integrating into an existing workstation, not a novel AI diagnostic tool requiring extensive clinical training data.


    K Number
    K170434
    Date Cleared
    2017-07-03

    (140 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    Agfa HealthCare N.V.

    Intended Use

    Agfa HealthCare's Enterprise Imaging XERO Viewer 8.1 is a software application used for reference and diagnostic viewing of multi-specialty medical imaging and non-imaging data with associated reports and, as such, fulfills a key role in the Enterprise Imaging solution. XERO Viewer 8.1 enables healthcare professionals, including (but not limited to) physicians, surgeons, nurses, and administrators to receive and view patient images, documents and data from multiple departments and organizations within one multi-disciplinary viewer. XERO Viewer 8.1 allows users to perform image manipulations (including window/level, markups, 3D visualization) and measurements.

    When images are reviewed and used as an element of diagnosis, it is the responsibility of the trained physician to determine if the image quality is suitable for their clinical application. Lossy compressed mammography images and digitized film images should not be used for primary image interpretation. Uncompressed "for presentation" images may be used for diagnosis or screening on monitors that are FDA-cleared for their intended use.

    XERO Viewer 8.1 can optionally be configured for Full Fidelity Mobile, which is intended for mobile diagnostic use, review and analysis of CR, DX, CT, MR, US, ECG images and medical reports. XERO Viewer Full Fidelity Mobile is not intended to replace full diagnostic workstations and should only be used when there is no access to a workstation. XERO Viewer Full Fidelity Mobile is not intended for the display of mammography images for diagnosis.
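
The window/level manipulation named in the indications is, at its core, a linear grayscale remapping driven by a window center and width (in DICOM, typically taken from the WindowCenter and WindowWidth attributes). The sketch below is a simplified variant of that mapping with illustrative values; the DICOM standard's exact formula differs slightly (it offsets by half a level):

```python
import numpy as np

def window_level(pixels, center, width):
    """Map raw pixel values to an 8-bit display range.

    Values below center - width/2 clip to black, values above
    center + width/2 clip to white, and values in between map linearly.
    """
    lo = center - width / 2.0
    hi = center + width / 2.0
    scaled = (pixels.astype(np.float64) - lo) / (hi - lo)
    return (np.clip(scaled, 0.0, 1.0) * 255.0).astype(np.uint8)

# Example: a 400-wide window centered at 2048 applied to 12-bit data
raw = np.array([[100, 2048, 4000]])
print(window_level(raw, center=2048, width=400))  # -> [[  0 127 255]]
```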

    Device Description

    Enterprise Imaging XERO Viewer is a web based software application used for reference and diagnostic viewing of multi-specialty medical imaging and non-imaging data with associated reports and documents. It is a picture archiving and communication system (PACS), product code LLZ, intended to provide an interface for the display, annotation, review, printing, storage and distribution of multimodality medical images, reports and demographic information for review and diagnostic purposes within the system and across computer networks. XERO Viewer enables authenticated users to search for and display patient studies (reports and images) using a web browser; users do not need to download or install any additional software or plug-ins to use XERO Viewer.

    It is the successor to Agfa's ICIS View predicate (K143397) and adds the following new functionality: Xtend(ed) study viewing, desktop diagnostic support for additional modalities, Xtend 3D visualization, and mobile (Full Fidelity) ECG diagnostic support utilizing the newer iPad version.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details from the provided document:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not specify quantitative acceptance criteria (e.g., minimum accuracy percentages, specific measurement tolerances) typically found in performance studies for diagnostic devices. Instead, the "acceptance criteria" are implied to be the successful demonstration of equivalence in image quality and functionality compared to predicate devices, as assessed by qualified medical professionals.

| Acceptance Criterion (Implied) | Reported Device Performance |
| --- | --- |
| Equivalent Diagnostic Image Quality | Qualified radiologists provided an overall score evaluating diagnostic image quality as equivalent when comparing XERO Viewer to the Enterprise Imaging Diagnostic Desktop (K142316). |
| Equivalent ECG Viewing and Functionality on Mobile (iPad) | Validation and non-clinical (bench) testing confirmed that ECG viewing (DICOM and PDF), layout changes, waveform adjustments, and tools (zoom, measurements) performed consistently between the desktop and the iPad mobile platform. |
| Met Performance, Safety, Usability, and Security Requirements | Verification and validation testing confirmed the device meets these requirements. (No specific metrics provided.) |
| Conformance to Standards (ISO, IEC, ACR/NEMA) | Agfa's in-house standard operating procedures, used for development, conform to the listed standards (ISO 13485:2003, ISO 14971:2012, ISO 27001:2013, ISO 62366:2007, IEC 62304:2006, ACR/NEMA PS3.1-3.20: 2011 DICOM). |

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Image Quality Evaluation: An average of 5 or 6 imaging studies per modality was evaluated for comparing diagnostic image quality. The modalities (CR, DX, CT, MR, US, ECG) are implied by the device's stated capabilities and predicate comparisons.
    • Sample Size for ECG Viewing: Not specified beyond "users were asked to display ECG's."
    • Data Provenance: Not explicitly stated, but it is implied to be retrospective as the evaluation involved existing "imaging studies" and the document explicitly states "No animal or clinical studies were performed in the development of the new device. No patient treatment was provided or withheld."

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Number of Experts: Not explicitly stated how many "qualified radiologists" or "qualified medical professionals" were involved. The phrasing "qualified radiologists were asked to provide an overall score" suggests multiple, but a specific number is not given.
    • Qualifications of Experts: Described as "qualified medical professionals" and "qualified radiologists." No specific years of experience or sub-specialties are mentioned.

    4. Adjudication Method for the Test Set

    • The document states that "Qualified radiologists were asked to provide an overall score when comparing the diagnostic image quality..." It does not describe an explicit adjudication method (e.g., 2+1, 3+1). The "overall score" suggests a consensus or individual assessment without a detailed adjudication process mentioned for resolving discrepancies.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done, and Effect Size

    • No, an MRMC comparative effectiveness study was not explicitly done in the traditional sense. The study compared the image quality of the XERO Viewer against a predicate diagnostic desktop using qualified radiologists, but it was to establish equivalence, not to measure an improvement in human reader performance with AI assistance versus without AI assistance. The device itself is a viewer, not an AI-assisted diagnostic tool. Therefore, there is no effect size reported for human readers improving with AI vs. without AI assistance.

    6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) was Done

    • Not applicable in the context of typical AI device standalone performance. The device is a "Picture archiving and communications system (PACS)" and a "software application used for reference and diagnostic viewing." It's an image viewer, not a standalone diagnostic algorithm that would typically generate its own interpretations without human intervention. The performance evaluations focused on the quality and functionality of the viewer itself for human use.

    7. The Type of Ground Truth Used

    • The "ground truth" for the image quality evaluation was based on the expert judgment of qualified radiologists comparing the diagnostic image quality between the new viewer and an existing FDA-cleared diagnostic desktop. For the ECG viewing, the ground truth was implied by the consistent performance and functionality when compared to the desktop version. No pathology, outcomes data, or other objective "ground truth" standards were mentioned.

    8. The Sample Size for the Training Set

    • Not applicable. The document explicitly states, "No clinical trials were performed in the development of the device," and "No animal or clinical studies were performed in the development of the new device." This device is a PACS viewer, not an AI/machine learning algorithm that typically requires a discrete training set.

    9. How the Ground Truth for the Training Set Was Established

    • Not applicable. As a viewing device and not an AI/ML algorithm requiring a training set, the concept of establishing ground truth for a training set does not apply here.

    Why did this record match?
    Applicant Name (Manufacturer) :

    Agfa HealthCare N.V.

    Intended Use

    Agfa's DX-D Imaging Package is indicated for use in general projection radiographic applications to capture for display diagnostic quality radiographic images of human anatomy. The DX-D Imaging Package may be used wherever conventional screen-film systems may be used.

    Agfa's DX-D Imaging Package is not indicated for use in mammography.

    Device Description

    Agfa's DX-D Imaging Package is a solid state flat panel x-ray system, a direct radiography (DR) system (product code MQB) intended to capture images of the human body. It is a combination of Agfa's NX workstation and one or more flat-panel detectors.

    This submission is to add the DR10s and DR14s Flat Panel Detectors to Agfa's DX-D Imaging Package portfolio. The DX-D Imaging Package with the DR 10s and DR 14s wireless panels will be labeled as the Pixium 2430EZ and Pixium 3543EZ. DR 10s and DR 14s are commercial trade names used by Agfa HealthCare for marketing purposes only.

    Principles of operation and technological characteristics of the new and predicate device are the same. There are no changes to the intended use/indications of the device. The new device is physically and electronically identical to the predicate, K142184. It uses the same workstation and the same scintillator-photodetector flat-panel detectors to capture and digitize the images as predicate K142184.

    AI/ML Overview

    This document describes the 510(k) summary for Agfa's DX-D Imaging Package, focusing on the newly added DR10s and DR14s Flat Panel Detectors. The submission aims to demonstrate substantial equivalence to a predicate device (K142184).

    Here's an analysis of the acceptance criteria and the study that proves the device meets them, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly present a dedicated "acceptance criteria" table with specific quantitative thresholds. Instead, the acceptance criteria are implied to be equivalence to the predicate device (K142184) and performance falling within expected parameters for radiographic systems. The reported device performance is presented through comparison with other Agfa flat-panel detectors on the market, including the predicate.

    Below is a table summarizing the performance characteristics of the new detectors (DR 10s, DR 14s) and the predicate (represented by DX-D 10, DX-D 20, DX-D 40 from the comparison table, as the predicate K142184's individual detector specs aren't explicitly broken out separately):

| Characteristic | DX-D 10 Flat-Panel Detector (Predicate Example) | DR 10s Wireless Detector (New Device) | DR 14s Wireless Detector (New Device) | Acceptance Criteria (Implied) | Reported Device Performance (Summary) |
| --- | --- | --- | --- | --- | --- |
| Scintillator | CsI, GOS | CsI | CsI, GOS | Equivalent to predicate (CsI, GOS) | DR 10s uses CsI, DR 14s uses CsI, GOS. Deemed equivalent. |
| Cassette size | 35x43 cm / 14x17 in | 24x30 cm | 35x43 cm / 14x17 in | Appropriate for general radiography. | Different sizes, but appropriate for general radiography. |
| Pixel size | 139 µm | 148 µm | 148 µm | Comparable to predicate (139-140 µm). | Slightly larger pixel size but deemed equivalent. |
| A/D conversion | 14 bits | 16 bits | 16 bits | Comparable to predicate (14 bits). | Higher (16 bits); considered an improvement. |
| Interface | Ethernet | AED & Synchronized | AED & Synchronized | Reliable interface. | AED & Synchronized. |
| Communication | Tethered | Wireless | Wireless | Reliable communication. | Wireless (new feature). |
| Power | I/O Interface Box: 100-240 VAC, 47-63 Hz | Battery: replaceable & rechargeable | Battery: replaceable & rechargeable | Reliable power. | Battery-powered for wireless operation. |
| Weight | 3.9 kg (8.6 lbs) | 1.6 kg (3.53 lbs) | 2.8 kg (6.17 lbs) | Ergonomically acceptable. | Lighter than predicate examples (due to wireless design). |
| DQE @ 1 lp/mm | 0.530/0.608 | 0.523 | 0.521/0.292 | Equivalent to predicate. | Comparable values, "equivalent to other flat-panel detectors." |
| DQE @ 2 lp/mm | 0.219/0.298 | 0.476 | 0.449/0.189 | Equivalent to predicate. | Comparable values, "equivalent to other flat-panel detectors." |
| DQE @ 3 lp/mm | 0.092/0.147 | 0.295 | 0.296/0.071 | Equivalent to predicate. | Comparable values, "equivalent to other flat-panel detectors." |
| MTF @ 1 lp/mm | 0.205/0.456 | 0.637 | 0.638/0.526 | Equivalent to predicate. | Comparable values, "equivalent to other flat-panel detectors." |
| MTF @ 2 lp/mm | 0.106/0.304 | 0.360 | 0.363/0.208 | Equivalent to predicate. | Comparable values, "equivalent to other flat-panel detectors." |
| MTF @ 3 lp/mm | 0.092/0.147 | 0.199 | 0.198/0.081 | Equivalent to predicate. | Comparable values, "equivalent to other flat-panel detectors." |
| Image acquisitions/hr | 150 | 240 | 240 | At least equivalent to predicate (150). | Higher (240); considered an improvement. |
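
For context on the DQE and MTF rows (standard definitions, not taken from the submission): DQE measures how much of the input signal-to-noise ratio the detector preserves at each spatial frequency, and the pixel pitch caps the useful frequency range at the Nyquist limit, which is why values are quoted only up to 3 lp/mm for these panels.

```latex
\mathrm{DQE}(f) \;=\; \frac{\mathrm{SNR}_{\mathrm{out}}^{2}(f)}{\mathrm{SNR}_{\mathrm{in}}^{2}(f)}
\;=\; \frac{\mathrm{MTF}^{2}(f)}{\bar{q}\,\mathrm{NNPS}(f)},
\qquad
f_{N} \;=\; \frac{1}{2a} \;=\; \frac{1}{2 \times 0.148\ \mathrm{mm}} \;\approx\; 3.4\ \mathrm{lp/mm}
```

Here NNPS is the noise power spectrum normalized by the squared mean signal, \bar{q} is the incident photon fluence, and a is the 148 µm pixel pitch of the new detectors.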

    The overall acceptance criterion for the study is "Substantial Equivalence" to the predicate device (K142184), demonstrated through:

    • Identical Indications for Use.
    • Same principles of operation and technological characteristics (despite some hardware differences).
    • Performance data (laboratory and clinical evaluations) ensuring equivalence.

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size:
      • For laboratory image quality (DQE, MTF) comparisons and grid evaluation: The document does not specify a numerical sample size in terms of images or measurements. It states "equivalent test protocols as used for the cleared detectors" were used and the results "confirmed that the DX-D Imaging Package with DR 10sC, DR14sC, and DR14sG flat-panel detectors was equivalent to other flat-panel detectors Agfa currently markets including the predicate (K142184)."
      • For usability and functionality evaluations: Not specified.
      • For Image Quality Validation testing (using anthropomorphic phantoms): Not specified.
      • For in-hospital image quality comparisons ("clinical evaluations"): "anonymized" patient images were utilized, but the number of images or cases is not specified.
    • Data Provenance: The data appears to be retrospective (for human image data, implied from "anonymized to remove all identifying patient information" and "No animal or clinical studies were performed in the development of the new device. No patient treatment was provided or withheld.") and laboratory-generated (for DQE, MTF, grid, usability, and phantom studies). The country of origin is not explicitly stated.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    • For laboratory image quality, grid evaluation, usability/functionality:
      • "qualified individuals employed by the sponsor" conducted these evaluations.
      • "qualified independent radiographers" conducted usability and functionality evaluations.
      • "a qualified internal radiographer" conducted the grid evaluation.
      • "qualified independent radiographers" evaluated anthropomorphic phantoms.
    • For in-hospital image quality comparisons:
      • "qualified independent radiologists" conducted these comparisons.
    • Qualifications: "Qualified independent radiographers" and "qualified independent radiologists" are mentioned. Specific experience levels (e.g., "10 years of experience") are not provided. The term "qualified" implies they possess the necessary expertise for the task.

    4. Adjudication Method for the Test Set

    The document does not explicitly state an adjudication method (e.g., 2+1, 3+1). The "clinical evaluations" and "in-hospital image quality comparisons" mention "qualified independent radiologists" in plural, suggesting a consensus or comparison approach among them, but details are not provided.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • No MRMC comparative effectiveness study was done in the sense of measuring human reader improvement with vs. without AI assistance.
    • The studies conducted were focused on demonstrating that the new device's image quality and performance were equivalent to the predicate device and other established systems, meaning human readers would perform similarly with the new device as with the predicate.

    6. Standalone (Algorithm Only) Performance Study

    • No standalone algorithm performance (AI-only) study was done for diagnostic interpretation. The device is an imaging package (hardware detectors and workstation) for capturing and displaying images, not an AI diagnostic algorithm.

    7. Type of Ground Truth Used

    • For laboratory image quality (DQE, MTF, grid): The "ground truth" is based on physical measurements and standardized test protocols.
    • For usability and functionality: The "ground truth" is based on expert assessment by radiographers against pre-defined workflow and compatibility requirements.
    • For image quality validation (phantoms): The "ground truth" is based on expert assessment by radiographers of the generated images, likely comparing features to expected phantom characteristics and established image quality standards.
    • For in-hospital image quality comparisons: The "ground truth" is implicitly based on radiological expert consensus (potentially with existing patient reports as a reference, though this is not specified), primarily for qualitative comparison against images produced by predicate devices.

    8. Sample Size for the Training Set

    The document does not mention a training set, as this device (an X-ray imaging package) is not an AI diagnostic device that requires a training set in the typical machine learning sense. The "software validation testing" refers to verification and validation of the software components against predefined requirements, not training a machine learning model.

    9. How the Ground Truth for the Training Set Was Established

    Not applicable, as no training set for an AI algorithm is mentioned or implied.


    K Number
    K161061
    Date Cleared
    2016-06-22

    (68 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    AGFA HEALTHCARE N.V.

    Intended Use

    The Volume Viewing software is a visualization package for PACS workstations. It is intended to support the medical professional in the reading, analysis and diagnosis of DICOM compliant volumetric medical datasets. The software is intended as a general purpose digital image processing tool, with optional functionality to facilitate visualization and measurement of vessel features.

    Other optional functionality is intended for the registration of anatomical (CT) on a second CT dataset or on functional volumetric image data (MR) to facilitate the comparison of various lesions. Volume and distance measurements are intended for evaluation and quantification of tumour measurements, and evaluation of both hard and soft tissues. The software also supports interactive segmentation of a region of interest (ROI), has a dedicated tool set for lung lesion segmentation, quantification and follow-up of lesions selected by the user and provides tools to define and edit paths such as centerlines through structures, which may be used to analyze cross-sections of structures, or to provide flythrough visualizations rendered along such centerline.

    Caution: Web-browser access is available for review purposes. Images accessed through a web-browser (via a mobile device or by other means) should not be used to create a diagnosis, treatment plan, or other decision that may affect patient care.
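
The volume measurements described above reduce, in the simplest case, to counting segmented voxels and scaling by the voxel size. A minimal sketch, with a hypothetical mask and spacing:

```python
import numpy as np

def lesion_volume_mm3(mask, spacing_mm):
    """Volume of a segmented region: voxel count times voxel volume.

    mask       -- boolean 3D array marking the segmented lesion
    spacing_mm -- (dz, dy, dx) voxel spacing in millimetres
    """
    voxel_volume = float(np.prod(spacing_mm))
    return mask.sum() * voxel_volume

# Example: a 10x10x10-voxel cube at 0.5 x 0.5 x 1.0 mm spacing
mask = np.zeros((64, 64, 64), dtype=bool)
mask[10:20, 10:20, 10:20] = True
print(lesion_volume_mm3(mask, (1.0, 0.5, 0.5)))  # -> 250.0 mm^3
```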

    Device Description

    IMPAX Volume Viewing is a general purpose medical image processing tool for the reading and analysis of 3D image datasets. It is also intended for the registration of anatomical (CT) image data onto functional (MR) data to facilitate the comparison of various lesions. Volume and distance measurements facilitate the quantification of lesions and the analysis of both soft and hard tissue.

    A variant of the software also provides web-browser access for review purposes. Images accessed through a web-browser (via a mobile device or by other means) should not be used to create a diagnosis, treatment plan, or other decision that may affect patient care.

    The new device is similar to the predicate devices. All are PACS system accessories that allow the user to view and manipulate 3D image data sets. This new version adds a dedicated tool set for lesion management and flythrough visualizations rendered along a centerline for endoscopic view of vessels and airways.

    Principles of operation and technological characteristics of the new and predicate devices are the same.

    AI/ML Overview

    The provided text describes the acceptance criteria and the study that proves the device meets those criteria for the IMPAX Volume Viewing 4.0 system.

    1. Table of Acceptance Criteria and Reported Device Performance:

| Feature/Aspect | Acceptance Criteria | Reported Device Performance |
| --- | --- | --- |
| Measurement Accuracy (diameters, areas, volumes) | +/- scanner resolution (for dataset uncertainty) | Results met the established acceptance criteria of +/- scanner resolution (for dataset uncertainty). |
| Crosshair Position Checks (viewport linking) | Half a voxel (for rounding differences across graphic video cards) | Results met the established acceptance criteria of half a voxel (for rounding differences across graphic video cards). |
| New Functionality Evaluation (Endoscopic Viewing, Lung Nodule Segmentation, Lesion Management Module) | Clinical utility and performance deemed adequate by experts; substantially equivalent to predicate devices. | Endoscopic viewing of tubular structures (vessels and airways) was found to be substantially equivalent to the predicate iNtuition 4.4.11. The accuracy of lung nodule segmentation and the capabilities of the lesion management module were found adequate to segment lesions, analyze them, and follow up on their growth over time. |
| General Performance, Safety, Usability, Security | Meets requirements established by in-house SOPs conforming to various ISO and IEC standards. | Verification and validation testing confirmed the device meets performance, safety, usability, and security requirements. (Specific metrics are not detailed beyond "met requirements" but are implicitly covered by the standards listed: ISO 13485, ISO 14971, ISO 27001, ISO 62366, IEC 62304.) |
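
The half-voxel crosshair criterion in the table is a tolerance on linked-viewport positions. A minimal sketch of such a check, with hypothetical coordinates:

```python
import numpy as np

def crosshairs_linked(pos_a_vox, pos_b_vox, tol_vox=0.5):
    """Check that two linked viewports place the crosshair at the same
    voxel position, within the half-voxel rounding tolerance."""
    diff = np.abs(np.asarray(pos_a_vox) - np.asarray(pos_b_vox))
    return np.max(diff) <= tol_vox

# Example: a 0.4-voxel discrepancy from video-card rounding still passes.
print(crosshairs_linked((120.0, 88.0, 40.0), (120.4, 88.0, 40.0)))  # -> True
```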

    2. Sample size used for the test set and the data provenance:

    • Sample Size: The document states that "representative clinical datasets" were selected and loaded by the radiologists. The exact number of cases/datasets in the test set is not specified.
    • Data Provenance: The radiologists were invited to Agfa's facilities, implying the clinical datasets were likely from retrospective cases, although this is not explicitly stated. The country of origin of the data is not specified but given the location of the testing (Belgium), it is plausible the data also originated from Belgium or nearby European countries.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Number of Experts: 3 radiologists
    • Qualifications of Experts: The document states "3 radiologists from several Belgian hospitals." Specific years of experience or subspecialty certification (e.g., neuroradiologist, interventional radiologist) are not provided.

    4. Adjudication method for the test set:

    • The document states that the radiologists "executed typical workflows and scored the features under investigation. A scoring scale was implemented and acceptance criteria established." This implies a consensus-based scoring or independent scoring followed by a determination of whether acceptance criteria were met. An explicit adjudication method like "2+1" or "3+1" is not mentioned.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with AI vs. without AI assistance:

    • No, an MRMC comparative effectiveness study was not done. The study involved radiologists evaluating the functionality of the device, rather than comparing human reader performance with and without AI assistance on specific diagnostic tasks. The focus was on demonstrating the functionality and subjective adequacy of the new features and equivalence to predicate devices, not on improving human reader performance. This device is primarily a visualization and processing tool, not an AI-powered diagnostic aid that would typically be evaluated in an MRMC study for reader improvement.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done:

    • Yes, in part. The "Verification" section describes tests on the algorithms themselves:
      • Regression testing on measurement algorithms to ensure they provide the same output as the previous version.
      • Crosshair position checks to verify viewport linking.
      • The "accuracy of the lung nodule segmentation" was scored, suggesting a standalone performance aspect of this algorithm was evaluated against some reference.
    • The "Validation" section involved human readers evaluating the new functionality, but the underlying measurements and segmentations are performed by the algorithms. So, the algorithms' standalone performance was assessed for accuracy and functionality, and then confirmed by human interaction during validation.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • For Measurement Accuracy: "Reference values" were used, which implies a pre-established, highly accurate measurement for the specific datasets (e.g., from a precisely measured phantom or a highly accurate prior measurement). The exact nature of these reference values is not explicitly stated.
    • For Crosshair Position Checks: The ground truth was based on expected precise pixel/voxel alignment ("half a voxel").
    • For New Functionality (Endoscopic Viewing, Lung Nodule Segmentation, Lesion Management Module): The ground truth was expert evaluation/consensus by the 3 radiologists regarding the adequacy and equivalence of the functionality.

    8. The sample size for the training set:

    • The document does not provide any information about a training set. This device is presented more as an advanced image processing and visualization tool rather than a machine learning/AI diagnostic algorithm that typically requires a large training set. While some algorithms (like segmentation) may inherently involve learned parameters, no details on their training are given.

    9. How the ground truth for the training set was established:

    • As no information on a training set is provided, the method for establishing its ground truth is also not specified.

    K Number
    K152639
    Device Name
    DR 600
    Date Cleared
    2015-12-11

    (87 days)

    Product Code
    Regulation Number
    892.1680
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    AGFA HEALTHCARE N.V.

    Intended Use

    The DR 600 is a GenRad X-ray imaging system used in hospitals, clinics and medical practices by radiographers, radiologists and physicists to make, process and view static X-ray radiographic images of the skeleton (including skull, spinal column and extremities), chest, abdomen and other body parts on adults and pediatric patients. Applications can be performed with the patient in sitting, standing or lying position.

    The DR 600 is not intended for use in Mammography applications.

    Device Description

    Agfa's DR 600 is a solid state x-ray system, a direct radiography (DR) system (product code MQB) intended to capture images of the human body. The device is a combination of a conventional x-ray system with digital image capture. The DR 600 is a ceiling-mounted tube and operator console with a motorized patient table and/or wall stand. The DR 600 uses Agfa's NX workstation with MUSICA2™ image processing and flat-panel detectors of the scintillator-photodetector type (Cesium Iodide - CsI or Gadolinium Oxysulfide - GOS). It is compatible with Agfa's computed radiography systems as well.

    AI/ML Overview

    The provided document is a 510(k) premarket notification for the Agfa DR 600, a stationary X-ray system. The core of this submission is to demonstrate substantial equivalence to previously cleared devices, not to demonstrate clinical efficacy or a specific performance benchmark for an AI model.

    Therefore, the acceptance criteria and study proving an AI device meets those criteria, as typically understood in Machine Learning or AI product development, are not explicitly present in this document. The document focuses on demonstrating that the DR 600, as an imaging system, performs equivalently to its predicates based on physical and technological characteristics, and that its image quality is comparable.

    However, I can extract the closest analogous information to the requested points, interpreting "acceptance criteria" through the lens of demonstrating "substantial equivalence" and "device performance" in terms of physical and image quality comparisons.

    Here's how the document addresses the spirit of these questions, adapted for a medical imaging device without a specific AI component being evaluated for a diagnostic task:


    Acceptance Criteria and Study for Agfa DR 600 (Interpreted for a Standard X-Ray System)

    The primary "acceptance criterion" for this 510(k) submission is demonstrating substantial equivalence to legally marketed predicate devices. This means proving that the DR 600 is as safe and effective as the predicate devices and does not raise different questions of safety and effectiveness.

    1. Table of Acceptance Criteria and Reported Device Performance

    Since this is a substantial equivalence claim for an X-ray system, the "acceptance criteria" are implied by comparing the new device's technical specifications and image quality to well-established predicates. No numerical "performance metrics" in the typical AI sense (e.g., AUC, sensitivity, specificity) for a diagnostic task are provided, as the device itself is the imaging system, not a diagnostic algorithm.

| Parameter/Characteristic | Acceptance Criteria (Implied by Predicates) | Reported DR 600 Performance |
| --- | --- | --- |
| Image Quality | Equivalent to or better than predicate devices (DR 400 & DX-M) based on visual assessment by qualified radiologists. No artifacts influencing image quality. | A laboratory image quality comparison of the DR 600 (with flat-panel & CR cassettes), the DR 400 (predicate K141192), and the DX-M (CR system) using anthropomorphic phantoms was performed. "The study confirmed that the Agfa DR 600 system... was equivalent to or better in performance than the DR 400 and DX-M." No artifacts were detected using the DR 600 that could influence image quality. |
| Usability & Functionality | Supports radiographic workflow, design, functionality, and usability within a hospital environment. FLFS workflow rated positive. Fulfilled intended use. | Usability and functionality evaluations were conducted. "The results of the usability test fell within the acceptance criteria for all components; therefore, the DR 600 supports a radiographic workflow. The usability and functionality of Full Leg Full Spine (FLFS) workflow for DR was rated positive as well. The intended use is fulfilled using different flat-panel detectors." |
| Grid Tests Consistency | Consistent with predicate DR 400 results. Varian (DX-D 10) and Vieworks (DX-D 40) detectors work well. | Flat field, chest, and skull phantoms were created using all grids for the DR 600. "The results of the grid tests remained consistent with the DR 400 (predicate K141192) results. The Varian (DX-D 10) and Vieworks (DX-D 40) detectors worked well and had positive results. The intended use is fulfilled using different flat-panel detectors and/or CR cassettes and plates." |
| Technological Characteristics | "Same" or "similar" to predicates for key components (e.g., communication, detector material, pixel size, dynamic range, workstation, image processing). | "Principles of operation and technological characteristics of the new and predicate devices are the same." (Detailed comparisons on pages 6-7.) Minor differences, such as a ceiling- vs. floor-mounted tube and specific generator/tube models, are deemed not to alter the intended diagnostic effect. |
| Regulatory Compliance | Meets relevant safety and performance standards (e.g., IEC 60601 series, ISO 14971, ISO 13485). | Compliance demonstrated through laboratory testing and software verification/validation against standards such as IEC 60601-1, IEC 60601-1-2, IEC 60601-1-3, IEC 60601-2-28, IEC 60601-2-54, ACR/NEMA PS3.1-3.20 (DICOM), ISO 14971, and ISO 13485 (page 8). |
| Risk Assessment | Risks are broadly acceptable or ALARP (As Low As Reasonably Practicable), with zero in the "Not Acceptable Region." | "For the DR 600 there are a total of 97 risks in the broadly acceptable region and eight risks in the ALARP region. Zero risks were identified in the Not Acceptable Region." (Page 8.) |

    2. Sample Size and Data Provenance for Test Set

    • Sample Size: The document mentions "anthropomorphic phantoms" for image quality comparison and "all grids" for grid tests. It does not specify a specific number of phantom images or actual patient data. For usability, "qualified independent radiographers" and "qualified internal radiographer" participated.
    • Data Provenance: The data appears to be prospective laboratory testing using phantoms rather than retrospective patient data. The country of origin for the data is not specified but would presumably be Agfa's testing facilities (likely in Belgium or the US, given the submission details).

    3. Number of Experts and Qualifications for Ground Truth

    • Number of Experts:
      • Image Quality: "Qualified independent radiologists" (plural, so at least two).
      • Usability & Functionality: "Qualified independent radiographers" (plural, so at least two).
      • Grid Tests: "Qualified internal radiographer" (singular, presumably one).
    • Qualifications: "Qualified" is a general term. Specific years of experience or board certifications are not provided in this document.

    4. Adjudication Method for the Test Set

    • Image Quality: "Performed in pairs by the qualified independent radiologists." This suggests a consensus or comparison method, possibly side-by-side. The specific adjudication rule (e.g., majority vote, forced consensus) is not detailed.
    • Usability & Functionality / Grid Tests: No specific adjudication method is mentioned beyond the conduct of the studies by qualified individuals.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • No, an MRMC comparative effectiveness study was not done in the context of human readers improving with AI vs. without AI assistance. This document describes an X-ray imaging system, not an AI software intended for diagnostic assistance. The "image quality evaluations conducted with independent radiologists" were for demonstrating primary image quality equivalence, not for evaluating radiologist performance with or without an AI.

    6. Standalone Performance (Algorithm Only)

    • Not applicable in the context of an AI algorithm. This device is a complete X-ray imaging system. Its performance (image quality, technical specifications) is evaluated as a standalone product. The "laboratory image quality comparison" can be considered as the "algorithm only" type of evaluation in the sense that the system's output (images) were assessed directly.

    7. Type of Ground Truth Used

    • For image quality, the ground truth appears to be expert visual consensus/comparison on anthropomorphic phantom images.
    • For usability and functionality, the ground truth is based on user feedback and assessment against predefined usability criteria.
    • For grid tests, the ground truth is the consistency of the results with predicate device expectations and visual assessment by an expert.
    • No pathology or outcomes data were used as ground truth, as this is a technical equivalence submission for an imaging device, not a diagnostic AI.

    8. Sample Size for the Training Set

    • Not applicable. This document describes a medical device, not a machine learning model. There is no concept of a "training set" for the DR 600 system itself. Its internal image processing (MUSICA2™) is mentioned as being identical to that used in the predicate devices, implying it's a pre-existing, validated algorithm rather than one trained specifically for this submission.

    9. How Ground Truth for the Training Set Was Established

    • Not applicable. As there is no training set for the DR 600 as a whole, this question is not addressed. The MUSICA2™ image processing is a pre-existing component validated with prior devices.

    Summary: This 510(k) submission successfully demonstrates substantial equivalence of the DR 600 to its predicate devices through rigorous bench testing, including technical specifications comparison, image quality evaluation with phantoms by qualified radiologists, usability studies, and compliance with relevant safety and performance standards. The evaluations performed align with the requirements for showing that a new hardware medical imaging device is functionally and safely comparable to existing ones, rather than testing a novel AI diagnostic algorithm.


    K Number
    K143397
    Device Name
    ICIS View
    Date Cleared
    2015-06-01

    (187 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    Agfa HealthCare N.V.

    Intended Use

    ICIS® View is a software application used for reference viewing of medical images and associated reports and, as such, fulfills a key role in Agfa HealthCare's Imaging Clinical Information System (ICIS). ICIS® View enables healthcare professionals, including (but not limited to) physicians, surgeons, nurses, and administrators to receive and view patient images and data from multiple departments and organizations within one multi-disciplinary viewer.

    Users may access the product directly via a web-browser, select mobile devices, healthcare portal or within the Electronic Medical Record (EMR). ICIS® View allows users to perform basic image manipulations and measurements (for example window/level, rotation, zoom, and markups).
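
    The submission does not describe how ICIS® View implements these manipulations. As a rough illustration only, a linear window/level (VOI) mapping of the kind commonly applied when displaying DICOM images might look like the sketch below; the function name, parameter values, and random stand-in data are illustrative assumptions, not Agfa's code.

    ```python
    import numpy as np

    def window_level(pixels: np.ndarray, center: float, width: float) -> np.ndarray:
        """Linear window/level transform: values below the window clip to
        black, values above clip to white, values inside map linearly to
        the 8-bit display range."""
        lo = center - width / 2.0
        hi = center + width / 2.0
        out = np.clip((pixels.astype(np.float64) - lo) / (hi - lo), 0.0, 1.0)
        return (out * 255.0).round().astype(np.uint8)

    # Example: a typical CT soft-tissue window (center 40 HU, width 400 HU).
    ct_slice = np.random.randint(-1000, 1000, size=(512, 512))  # stand-in data
    display = window_level(ct_slice, center=40, width=400)
    ```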

    ICIS® View can optionally be configured for Full Fidelity mode, which is intended for diagnostic use, review and analysis of CR, DX, CT, MR, US images and medical reports. ICIS® View Full Fidelity is not intended to replace full diagnostic workstations and should only be used when there is no access to a workstation. ICIS® View full fidelity is not intended for the display of digital mammography images for diagnosis.

    Device Description

    Agfa's ICIS® View system is a picture archiving and communication system (PACS), product code LLZ, intended to provide an interface for the display, annotation, review, printing, storage and distribution of multimodality medical images, reports and demographic information for review and diagnostic purposes within the system and across computer networks.

    The new device is substantially equivalent to the predicate devices (K103785, K022292, & K111164). It is a multidisciplinary viewer that allows the user to securely access patient images and reports from any PACS or vendor-neutral archive. Images and reports can be viewed directly via a web-browser, select mobile devices, healthcare portal or Electronic Medical Record (EMR). The new device includes some of the clinical tools of the predicate devices, specifically the functionality to retrieve original lossless renditions of stored images for diagnostic purposes.

    The optional Full Fidelity functionality allows the retrieval of original lossless renditions of stored CR, DX, CT, MR, and US images for diagnostic purposes on select mobile devices or FDA cleared display monitors when there is no access to a full workstation.

    AI/ML Overview

    The provided text describes the 510(k) submission for AGFA Healthcare's ICIS® View device. This device is a Picture Archiving and Communications System (PACS) software intended for reference viewing of medical images and associated reports. The submission aims to demonstrate substantial equivalence to previously marketed predicate devices.

    Here's an analysis of the acceptance criteria and the study that proves the device meets them, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly present a formal "acceptance criteria table" with specific quantitative metrics (e.g., sensitivity, specificity, or numerical performance thresholds for image quality). Instead, the acceptance is based on demonstrating substantial equivalence to predicate devices, primarily through qualitative evaluations of image quality and functional parity.

    The core acceptance criterion is that the ICIS® View system performs equivalently to the predicate devices in terms of diagnostic image quality when used in "Full Fidelity" mode.

    Acceptance Criterion (Implicitly from substantial equivalence)Reported Device Performance
    Diagnostic Image Quality equivalent to predicate PACS workstation (IMPAX 6.6.1) for CR, DX, CT, MR, US modalities on both desktop and mobile (iPad® 3, iPad® 4).Qualified radiologists evaluated a sample set of images across 3 platforms: ICIS® View Desktop (with FDA cleared diagnostic monitor), ICIS® View Mobile (with calibrated iPad® 3 and iPad® 4), and Agfa's full diagnostic PACS workstation IMPAX 6.6.1 (predicate). They provided an "acceptable" or "unacceptable" score when comparing diagnostic quality, including evaluations of contrast, sharpness, artifacts, and overall image quality. The overall finding was that "Performance data including resolution testing and image quality evaluations by qualified radiologists are adequate to ensure equivalence to the predicates." This implies the performance was "acceptable" and equivalent.
    Compliance with TG18 Image Quality Assessment Parameters.The "Assessment of Display Performance for Medical Imaging Systems" (TG18-QC, TG18-BR, TG18-LP) was used for display device assessment. "All results met acceptance criteria."
    Functional Equivalence/Parity with Predicate Devices.The device demonstrates functional parity with predicate devices in terms of communication (DICOM), no mammographic use, support for CR, DX, CT, MR, US modalities, operating systems (Windows & iOS), mobile device support, transfer/storage/display of images, network access, user authentication, window/level, rotate/pan/zoom, measurements, and annotations. The differences (server vs. app-based) do not alter the intended diagnostic effect.
    Product and manufacturing processes conform to relevant standards.The product, manufacturing, and development processes conform to ACR/NEMA PS3.1-3.20: 2011 DICOM, ISO 14971:2012 (Risk Management), and ISO 13485:2003 (Quality Management Systems).
    Risk assessment demonstrates acceptable residual risk."During the final risk analysis meeting, the risk management team concluded that the medical risk is no greater than with a conventional PACS system previously released to the field. For ICIS® View there are a total of 20 risks in the broadly acceptable region and two risks in the ALARP region. Zero risks were identified in the Not Acceptable Region." The overall benefits are determined to outweigh the residual risks.

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size: A "sample set of an average of 6 imaging studies per modality (CR, DX, CT, MR, US)" was evaluated. This means approximately 30 studies in total (6 studies * 5 modalities).
    • Data Provenance: The document does not explicitly state the country of origin or whether the data was retrospective or prospective. It refers to "laboratory data" and "image quality evaluations conducted with qualified radiologists," suggesting existing image datasets were used for testing, which typically points to retrospective data.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    • Number of Experts: The document states "qualified radiologists" were used. The exact number of radiologists is not specified.
    • Qualifications of Experts: The experts are described as "qualified radiologists," indicating they are medical professionals specializing in radiology. No specific experience levels (e.g., "10 years of experience") are provided in the text.

    4. Adjudication Method for the Test Set

    The document states that qualified radiologists "were asked to provide an acceptable or unacceptable score when comparing the diagnostic quality...to the IMPAX predicate." This suggests a comparative evaluation rather than adjudication of a "ground truth" such as disease presence. No multi-reader adjudication method (e.g., 2+1, 3+1) is described for establishing a definitive diagnosis for each case, because the assessment compared display quality against a predicate rather than measuring diagnostic accuracy against a reference standard.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done

    • The study described is a comparative evaluation between the new device and a predicate device by "qualified radiologists." While radiologists evaluated multiple cases, the study's primary goal was to establish display equivalence for diagnostic purposes, rather than a formal MRMC study aimed at quantifying the effect size of AI assistance on human reader performance.
    • Effect Size of AI vs. Without AI Assistance: This study did not involve AI assistance. The ICIS® View device is a PACS viewer, not an AI diagnostic tool. Therefore, an MRMC comparative effectiveness study regarding AI assistance was not applicable and not conducted. The comparison was between two different display systems (ICIS® View vs. IMPAX Workstation).

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was Done

    • As ICIS® View is a PACS viewer software, not a diagnostic algorithm, the concept of "standalone performance" in the context of an algorithm's diagnostic accuracy is not directly applicable.
    • The performance evaluation focused on the visual equivalence of the images displayed by the device compared to a predicate, as interpreted by human radiologists. It is a human-in-the-loop assessment of the display quality.

    7. The Type of Ground Truth Used

    • The "ground truth" in this context is not a pathological diagnosis or clinical outcome, but rather the "diagnostic quality" provided by the predicate system (Agfa's IMPAX 6.6.1 workstation). Radiologists were asked to assess whether the ICIS® View's display quality was "acceptable" when compared to the established diagnostic quality of the predicate. This is a form of expert consensus/comparison against an established standard (the predicate's display) for the purpose of demonstrating substantial equivalence of image rendering.

    8. The Sample Size for the Training Set

    The document does not mention a "training set" for the ICIS® View device, as it is a PACS viewing software, not a machine learning or AI model that requires a data-driven training phase. The development and testing focused on software functionality and image display fidelity.

    9. How the Ground Truth for the Training Set was Established

    Since there is no mention of a training set or an AI/ML component, this question is not applicable based on the provided text.


    K Number
    K142316
    Device Name
    IMPAX Agility
    Date Cleared
    2015-01-06

    (140 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    AGFA HEALTHCARE N.V.

    Intended Use

    IMPAX Agility is a Picture Archiving and Communications System (PACS). It provides an interface for the acquisition, display, digital processing, annotation, review, printing, storage and distribution of multimodality medical images, reports and demographic information for diagnostic purposes within the system and across computer networks. IMPAX Agility is intended to be used by trained healthcare professionals including, but not limited to, physicians, radiologists, orthopaedic surgeons, cardiologists, mammographers, technologists, and clinicians for diagnosis and treatment planning using DICOM compliant medical images and other healthcare data.

    MPR, MIP and 3D rendering functionality allows the user to view image data from perspectives different from that in which it was acquired. Other digital image processing functionality, such as multi-scale window leveling and image registration, can be used to enhance image viewing. Automatic spine labeling provides the capability to semi-automatically label vertebrae or disks.

    As a comprehensive imaging suite, IMPAX Agility integrates with servers, archives, Radiology Information Systems (RIS), Hospital Information Systems (HIS), reporting and 3rd party applications for customer specific workflows.

    Lossy compressed mammography images and digitized film images should not be used for primary image interpretation. Uncompressed or non-lossy compressed "for presentation" images may be used for diagnosis or screening on monitors that are FDA-cleared for mammographic use.

    Device Description

    Agfa's IMPAX Agility system is a picture archiving and communication system (PACS), product code LLZ, intended to provide an interface for the acquisition, digital processing, annotation, review, printing, storage and distribution of multimodality medical images, reports and demographic information for diagnostic purposes within the system and across computer networks.

    The new device is substantially equivalent to the predicate devices (K111945, K133135, & K123920). It is a comprehensive PACS system that allows the user to view and manipulate 3D image data sets. The new device includes some of the clinical tools of the predicate devices, specifically the functionality to perform image registration and automatic spine labeling.

    The image registration functionality allows comparison studies to be registered with active study data to align them for reading. Registration only works for volumetric CT and MR data.
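
    The submission does not disclose the registration algorithm. As a hedged sketch of how a prior volumetric study is commonly aligned rigidly to an active one, the following uses SimpleITK's mutual-information registration; the file names and parameter choices are assumptions, not details from the 510(k).

    ```python
    import SimpleITK as sitk

    # Hypothetical file names; the submission does not identify the datasets.
    fixed = sitk.ReadImage("active_ct.nii.gz", sitk.sitkFloat32)
    moving = sitk.ReadImage("prior_ct.nii.gz", sitk.sitkFloat32)

    # Start from a center-of-geometry alignment, then refine a rigid
    # (rotation + translation) transform by maximizing mutual information.
    initial = sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200)
    reg.SetInitialTransform(initial, inPlace=False)
    reg.SetInterpolator(sitk.sitkLinear)

    transform = reg.Execute(fixed, moving)

    # Resample the prior study into the active study's geometry for reading.
    aligned = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
    ```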

    Segmentation of volumetric datasets allows the automatic removal of bones and the CT table. Bone and table removal is only available for CT datasets. Users can also manually define parts of the volume which should be removed, as well as highlight certain structures in volumes.
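
    The segmentation method is likewise not described. A minimal sketch of the standard baseline for CT bone removal, thresholding in Hounsfield units followed by morphological clean-up, is shown below; the threshold and component-size values are illustrative assumptions.

    ```python
    import numpy as np
    from scipy import ndimage

    def bone_mask(ct_hu: np.ndarray, threshold: float = 300.0) -> np.ndarray:
        """Crude CT bone segmentation: threshold in Hounsfield units, then
        close small gaps and drop tiny connected components."""
        mask = ndimage.binary_closing(ct_hu > threshold, iterations=2)
        labels, n = ndimage.label(mask)
        sizes = ndimage.sum(mask, labels, range(1, n + 1))
        big = np.nonzero(sizes > 500)[0] + 1  # label ids of large components
        return np.isin(labels, big)

    def remove_bones(ct_hu: np.ndarray, fill_value: float = -1000.0) -> np.ndarray:
        """Return a copy of the volume with bone voxels replaced by air,
        roughly what a bone-removal step does before 3D rendering."""
        out = ct_hu.copy()
        out[bone_mask(ct_hu)] = fill_value
        return out
    ```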

    Automatic spine labeling tools provide the ability to label the vertebrae or the intervertebral discs of the spine. Automatic spine labeling automatically calculates the position of the vertebrae or discs after the user selects and labels an initial starting point. The user is required to confirm the automatic placement of the labels.
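
    The vertebra detection step is not described in the submission. Assuming detected vertebra centroids are already available and sorted cranio-caudally, the labeling behavior described above (the user seeds one label, the rest are computed and must be confirmed) can be sketched as follows; everything here is a hypothetical illustration.

    ```python
    # Cranio-caudal vertebra label sequence (cervical, thoracic, lumbar).
    VERTEBRAE = ([f"C{i}" for i in range(1, 8)]
                 + [f"T{i}" for i in range(1, 13)]
                 + [f"L{i}" for i in range(1, 6)])

    def propagate_labels(centroids, seed_index, seed_label):
        """Label every centroid by walking the anatomical sequence away
        from the user-selected seed in both directions."""
        start = VERTEBRAE.index(seed_label)
        labels = {}
        for i, c in enumerate(centroids):
            j = start + (i - seed_index)
            labels[tuple(c)] = VERTEBRAE[j] if 0 <= j < len(VERTEBRAE) else None
        return labels

    # Example: the user labels the third detected vertebra as "T12".
    points = [(0, 0, 310), (0, 0, 285), (0, 0, 260), (0, 0, 235)]
    print(propagate_labels(points, seed_index=2, seed_label="T12"))
    # Yields T10, T11, T12, L1 for the four centroids; the user then
    # confirms or corrects the placement, as the device requires.
    ```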

    Principles of operation and technological characteristics of the new and predicate devices are the same. There is no change to the intended use of the device vs. the predicate devices. Laboratory data, stability and performance assessments, usability tests, and functionality evaluations conducted with qualified radiologists confirm that performance is equivalent to the predicates.

    AI/ML Overview

    The provided document is a 510(k) summary for the IMPAX Agility Picture Archiving and Communications System (PACS). This document details the product's features and its substantial equivalence to predicate devices, but it does not contain the specific detailed acceptance criteria or a comprehensive study report with quantitative performance metrics for the new features (automatic spine labeling, segmentation, image registration).

    The document focuses on demonstrating that the new features are substantially equivalent to those of the predicate devices and that the overall device performs as expected for a PACS system.

    Here's a breakdown of the information that is available related to your request:


    1. Table of acceptance criteria and the reported device performance

    The document mentions that "All results met acceptance criteria" for the tests conducted. However, the specific quantitative acceptance criteria are not explicitly detailed in the provided text. The performance is reported qualitatively as meeting these unspecified criteria.

    Feature AreaAcceptance Criteria (Not explicitly stated in document)Reported Device Performance
    Segmentation Accuracy(Implied: equivalent to predicate K133135)All results met acceptance criteria.
    Automatic Spine Labeling(Implied: accurate placement, user confirmation ability)All results met acceptance criteria.
    3D Registration(Implied: linked viewports, aligned data, linked navigation)All results met acceptance criteria.

    2. Sample size used for the test set and the data provenance

    • Segmentation: Not explicitly stated. The algorithm was reused from a predicate device (K133135), and "a simple regression test to confirm the algorithm was integrated correctly" was performed. No sample size for images or cases is given for this regression test.
    • Automatic Spine Labeling and 3D Registration: Not explicitly stated. The testing involved "anonymized studies" but the number of studies or images is not provided.
    • Data Provenance: The studies used for testing were "anonymized studies" and "Laboratory data." The country of origin for the data is not specified beyond "Agfa's testing lab in Belgium" for the spine labeling and 3D registration tests. The data appears to be retrospective due to the use of "anonymized studies" for validation.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

      • Segmentation: Regression testing for bone removal was performed by "an Agfa HealthCare employee who is a qualified medical professional." The singular phrasing implies a single reviewer.
    • Automatic Spine Labeling and 3D Registration: "Validation was carried out by three medical professionals at Agfa's testing lab in Belgium." Their specific qualifications (e.g., "radiologist with 10 years of experience") are not detailed beyond "medical professionals."

    4. Adjudication method for the test set

    The document does not describe a formal adjudication method (like 2+1 or 3+1 consensus). The testing for spine labeling and 3D registration involved three medical professionals, but it doesn't specify if their results were adjudicated in case of discrepancies. The segmentation validation mentions a single "qualified medical professional" performing the regression test, implying no multi-reader adjudication for that specific test.


    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs. without AI assistance

    The document does not describe an MRMC comparative effectiveness study where human readers' performance with and without AI assistance was evaluated. The studies mentioned are primarily focused on validating the functionality of the new features (image registration, segmentation, automatic spine labeling) themselves rather than physician performance improvement.


    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    Yes, standalone performance was assessed for the new features.

    • Segmentation: The "accuracy of segmentation bone removal" was compared to the predicate device. This implies an evaluation of the algorithm's output.
    • Automatic Spine Labeling: The "accuracy of the semi-automatically placed spine labels" was evaluated. This is a standalone assessment of the algorithmic output, noting the user's requirement to confirm.
    • 3D Registration: The evaluation focused on the technical performance of the registration (e.g., whether viewports link and data aligns), which is a standalone algorithm assessment.

    7. The type of ground truth used

    The ground truth implicitly used for the validation of the new features appears to be:

    • Segmentation Accuracy: The performance of the predicate device's (K133135) segmentation algorithm.
    • Automatic Spine Labeling: The accurate placement as determined by the "medical professionals" performing the validation. This is expert consensus/judgment acting as the ground truth.
    • 3D Registration: The expected technical behavior as defined by the product requirements and judged by the "medical professionals."

    The document states "No animal or clinical studies were performed," "No patient treatment was provided or withheld," and refers to "laboratory data" and "anonymized studies." This suggests that ground truth was established by expert review of existing anonymized medical images, rather than pathology, follow-up outcomes data, or prospective clinical trials.


    8. The sample size for the training set

    The document does not provide any information about the sample size used for training the algorithms. It mostly refers to "reused" algorithms or functionality validation.


    9. How the ground truth for the training set was established

    The document does not provide any information on how the ground truth for the training set was established. It primarily focuses on the validation of the new features post-development.


    K Number
    K142184
    Date Cleared
    2014-10-16

    (69 days)

    Product Code
    Regulation Number
    892.1680
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    AGFA HEALTHCARE N.V.

    Intended Use

    Agfa's DX-D Imaging Package is indicated for use in general projection radiographic applications to capture for display diagnostic quality radiographic images of human anatomy. The DX-D Imaging Package may be used wherever conventional screen-film systems may be used.

    Agfa's DX-D Imaging Package is not indicated for use in mammography.

    Device Description

    Agfa's DX-D Imaging Package is a solid state flat panel x-ray system, a direct radiography (DR) system (product code MQB) intended to capture images of the human body. It is a combination of Agfa's NX workstation and one or more flat-panel detectors.

    This submission is to add the DX-D40C/G Flat Panel Detector to Agfa's DX-D Imaging Package portfolio. Agfa's DX-D40C/G is currently marketed by Vieworks as the ViVIX-S Wireless Panel (K122865), which is one of predicates for this submission.

    Principles of operation and technological characteristics of the new and predicate devices are the same. The new device is physically and electronically identical to both predicates, K121095 and K122865. It uses the same workstation as predicate K121095 and the same scintillator-photodetector flat panel detectors to capture and digitize the images as predicate K122865.

    AI/ML Overview

    The document does not provide explicit acceptance criteria with numerical performance thresholds; it establishes that the new device, Agfa's DX-D Imaging Package, is substantially equivalent to two predicate devices (K121095 and K122865). It states that "image quality clinical evaluations" were performed but gives no details of that study (sample size, readers, ground truth, or adjudication). Accordingly, the standard study questions (sample sizes, experts, adjudication method, ground truth, training data) cannot be answered from this record.


    K Number
    K141602
    Date Cleared
    2014-09-12

    (88 days)

    Product Code
    Regulation Number
    892.1680
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    AGFA HEALTHCARE N.V.

    Intended Use

    Agfa HealthCare's DX-D Imaging Package is indicated for use in general projection radiographic applications to capture for display diagnostic quality radiographic images of human anatomy for adult, pediatric and neonatal examinations. The DX-D Imaging Package may be used wherever conventional screen-film systems, CR or DR systems may be used.

    Agfa HealthCare's DX-D Imaging Package is not indicated for use in mammography.

    Device Description

    Agfa HealthCare's DX-D Imaging Package is a solid state x-ray system, a direct radiography (DR) system (product code MQB) intended to capture images of the human body. It is a combination of Agfa HealthCare's NX Workstation and one or more flat-panel detectors or needle-phosphor detectors for direct radiography (DR) applications.

    DX-D Imaging Package uses the NX Workstation to process data utilizing Agfa HealthCare's MUSICA image processing software, which includes optional image processing algorithms for adult, pediatric and neonatal images that were previously cleared for use in Agfa HealthCare's DX-D Imaging Package (K122736). The acronym MUSICA stands for Multi-Scale-Image-Contrast-Amplification. MUSICA acts on the acquired images to preferentially enhance the diagnostically relevant, moderate and subtle contrasts.
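
    Agfa's MUSICA algorithm is proprietary and the submission gives no implementation detail. Purely as a generic sketch of the multi-scale contrast-amplification idea, a band-pass decomposition whose detail layers are boosted by a compressive nonlinearity so that subtle contrasts gain relatively more than strong ones, one might write something like the following; it is not a reconstruction of MUSICA, and all parameters are illustrative.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def multiscale_enhance(img: np.ndarray, levels: int = 4,
                           gain: float = 1.5, p: float = 0.7) -> np.ndarray:
        """Split the image into band-pass detail layers plus a low-pass
        residual, amplify detail with a compressive power law (p < 1 boosts
        small amplitudes relatively more), then recombine."""
        base = img.astype(np.float64)
        details = []
        for k in range(levels):
            blurred = gaussian_filter(base, sigma=2.0 ** k)
            details.append(base - blurred)  # detail at scale k
            base = blurred                  # residual low-pass
        out = base
        for d in details:
            out += gain * np.sign(d) * np.abs(d) ** p
        return out
    ```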

    This submission is to obtain clearance for Agfa HealthCare to market the DX-D Imaging Package with claims of a minimum 50% dose reduction.

    Principles of operation and technological characteristics of the DX-D Imaging Package and predicate devices are the same. The DX-D Imaging Package is physically and electronically identical to the predicate K122736 since it is the same device; however, Agfa HealthCare would like to include a minimum of 50% dose reduction claims for marketing purposes. It uses the same workstation and same scintillator-photodetector flat panel detectors, needle-phosphor detectors and cassettes, or photo-stimulable imaging plates to capture and digitize the image.

    AI/ML Overview

    The provided text describes the acceptance criteria and the study conducted for the Agfa HealthCare DX-D Imaging Package, focusing on its dose reduction claims.

    Here's a breakdown of the requested information:

    1. Table of Acceptance Criteria and Reported Device Performance

    Acceptance CriteriaReported Device Performance
    Minimum 50% dose reduction (primary endpoint)58% dose reduction with DX-D30C DR detector
    60% dose reduction with HD5.0 CR plates
    Image quality equivalent to predicate devicesConfirmed by laboratory data and image quality evaluations with board-certified radiologists

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size:
      • Images: 195 images (13 exposures for each of 5 phantoms using 3 detector types; see the arithmetic check after this list).
      • Phantoms: 5 different anatomical phantoms (skull, chest, abdomen, hand, neonatal).
    • Data Provenance: The study was a laboratory evaluation using phantoms, not human patient data. Therefore, country of origin or retrospective/prospective classification isn't directly applicable in the conventional sense for patient data. It's an experimental study conducted by the sponsor.
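
    As a quick check, the stated counts are consistent:

    $$13 \ \text{exposures} \times 5 \ \text{phantoms} \times 3 \ \text{detector types} = 195 \ \text{images}$$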

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • Number of Experts: Five (5)
    • Qualifications of Experts: Board-certified radiologists.

    4. Adjudication Method for the Test Set

    The radiologists were asked to match test images to a reference image by scrolling through a series of test images until the closest match was found. There was a consistency check where image positions were reversed and a second reading session was done with a different reference image. However, the text does not describe an adjudication method for reconciling disagreements among multiple readers; rather, it implies individual assessment and an average calculation of dose reduction.
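
    From the procedure described, each reading implies a dose reduction equal to the relative difference between the reference dose and the matched dose, averaged across readers (and phantoms). A minimal sketch with hypothetical dose values follows; the submission reports only the averaged results (58% for DR, 60% for CR).

    ```python
    def percent_dose_reduction(reference_dose: float, matched_dose: float) -> float:
        """Dose reduction implied when a reader judges the image acquired
        at `matched_dose` visually equivalent to the reference image."""
        return 100.0 * (reference_dose - matched_dose) / reference_dose

    # Hypothetical matched doses for one phantom across five readers.
    reference = 10.0
    matches = [4.0, 4.5, 4.2, 3.8, 4.4]
    per_reader = [percent_dose_reduction(reference, m) for m in matches]
    print(f"average dose reduction: {sum(per_reader) / len(per_reader):.0f}%")
    ```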

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

    This was not an MRMC comparative effectiveness study in the typical sense of evaluating human reader performance with and without AI assistance on diagnostic tasks. Instead, it was an image quality equivalency study in which radiologists evaluated the visual equivalence of images generated at reduced doses compared to standard doses.

    • Effect Size: Not applicable as it was not a human-in-the-loop diagnostic study with AI assistance. The study focused on whether images produced with reduced dose were deemed visually equivalent to standard dose images by radiologists.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    Yes, a standalone technical evaluation of image quality was performed using Detective Quantum Efficiency (DQE). This is a measure of the imaging device's ability to preserve the signal-to-noise ratio. The DQE for the CsBr CR and CsI DR was found to be more than double that of the predicate device (CR MD4.0), indicating superior dose efficiency.
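
    The document does not define DQE; the standard definition (as used in IEC 62220-1 style measurements) is the frequency-dependent ratio of output to input squared signal-to-noise ratio:

    $$\mathrm{DQE}(f) = \frac{\mathrm{SNR}_{\mathrm{out}}^{2}(f)}{\mathrm{SNR}_{\mathrm{in}}^{2}(f)}, \qquad \mathrm{SNR}_{\mathrm{in}}^{2} = \bar{q} \ \text{for an ideal Poisson-limited beam of mean fluence } \bar{q}.$$

    Since $\mathrm{SNR}_{\mathrm{out}}^{2} \propto \mathrm{DQE} \cdot \bar{q}$, a detector with double the DQE can in principle reach the same output SNR at roughly half the dose, which is consistent with the minimum 50% dose reduction claim.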

    7. The Type of Ground Truth Used

    • For the image quality evaluation by radiologists, the ground truth was expert consensus (visual equivalence) against a reference image exposed at a fixed condition. The "ground truth" for the dose reduction claim was established by the radiologists determining when a reduced-dose image was visually equivalent to a reference image, and then calculating the dose difference.

    8. The Sample Size for the Training Set

    The document does not specify a training set sample size. The study described is a performance evaluation of the imaging system and its dose reduction capability, not an AI algorithm that requires a separate training set. The "MUSICA image processing software" is mentioned as having "optional image processing algorithms" that were "previously cleared," but no details about their training are provided in this document.

    9. How the Ground Truth for the Training Set Was Established

    As no training set is described for the dose reduction study, this information is not provided in the document. If referring to the MUSICA image processing software, the document states these algorithms were "previously cleared," implying their ground truth and development were established in prior submissions, but details are not given here.

