510(k) Data Aggregation — Search Results

Found 51 results

    K Number: K251168
    Device Name: Image Suite
    Date Cleared: 2025-09-04 (142 days)
    Product Code:
    Regulation Number: 892.2050
    Reference & Predicate Devices: N/A
    Applicant (Manufacturer): Carestream Health Inc.


    K Number: K241505
    Date Cleared: 2024-12-10 (196 days)
    Product Code:
    Regulation Number: 892.1720
    Applicant (Manufacturer): Carestream Health Inc.

    Intended Use

    The device is designed to perform radiographic x-ray examinations on all pediatric and adult patients, in all patient treatment areas.

    Device Description

    The DRX-Revolution Mobile X-ray System is a mobile diagnostic x-ray system that uses digital technology for bedside and portable exams. Key components are the x-ray generator; a tube head assembly (x-ray tube and collimator) that allows multiple axes of movement; a maneuverable drive system; and touchscreen user interface(s) for user input. The system ships with installable software for acquiring and processing medical diagnostic images outside a standard stationary x-ray room, and is intended to generate and control x-rays for examination of various anatomical regions.

    AI/ML Overview

    The provided text describes a 510(k) premarket notification for the DRX-Revolution Mobile X-ray System, which includes changes such as the addition of Smart Noise Cancellation (SNC) functionality and compatibility with a new detector (Lux 35). The study focuses on demonstrating the substantial equivalence of the modified device to a previously cleared predicate device (DRX-Revolution Mobile X-ray System, K191025).

    Here's an analysis of the acceptance criteria and the study that proves the device meets them, based on the provided information:

    1. A table of acceptance criteria and the reported device performance

    Acceptance Criteria (for SNC) — Reported Device Performance:

    • At least 99% of all image pixels within ±1 pixel value — Achieved: the results demonstrated that at least 99% of all image pixels were within ±1 pixel value.
    • Absolute maximum difference across all test images ≤ 10 pixel values — Achieved: the absolute maximum difference seen across all test images was 3 pixel values, meeting the acceptance criterion of a maximum allowable difference of 10 pixel values.
    • Noise ratio values computed for every pixel of the test images should be

    K Number: K233381
    Date Cleared: 2024-03-12 (162 days)
    Product Code:
    Regulation Number: 892.1680
    Reference & Predicate Devices:
    Applicant (Manufacturer): Carestream Health, Inc.

    Intended Use

    The DRX-Evolution Plus is a permanently installed diagnostic x-ray system for general radiographic x-ray imaging. This device also supports Dual Energy chest imaging. The Dual Energy feature is not to be used for imaging pediatric patients.

    Device Description

    The DRX-Evolution Plus is a general purpose x-ray system used for acquiring radiographic images of various portions of the human body. The system consists of a combination of components including various models of high voltage x-ray generators, control panels or workstation computers, various models of patient support tables, wall-mounted image receptors/detectors for upright imaging, various models of tube support devices, x-ray tube, and collimator (beam-limiting device). In addition to general radiography applications, the system also includes the optional Dual Energy functionality. The DRX-Evolution Plus can be used with digital radiography (DR) and computed radiography (CR) receptors. "Smart" Features are added to the DRX-Evolution Plus system to provide remote exam set-up capabilities for existing functions of the DRX-Evolution Plus system. These remote capabilities simplify exam set up and improve workflow for the operator while preparing for the patient exposure. The "smart" features, described below, are designed to reduce the technologist's manual tasks and to speed up workflow for existing features of the system. Implementation of these features does not change the intended use of the system and does not affect the Dual Energy functionality.

    AI/ML Overview

    The provided FDA 510(k) document for the Carestream Health, Inc. DRX-Evolution Plus System (K233381) does not contain the detailed information required to describe the acceptance criteria and the study that proves the device meets those criteria, specifically regarding AI/algorithm performance.

    The document discusses the substantial equivalence of the DRX-Evolution Plus system to a predicate device (K190330), focusing on hardware components, new integrated detectors, and workflow enhancements referred to as "Smart" features (Real-time Video Assistance, Long Length Imaging, Collimation from User Interface, Patient Picture).

    The "Smart" features described are workflow improvements that seem to involve remote control and visualization, not an AI/algorithm that performs diagnostic or detection tasks requiring rigorous performance criteria and clinical validation studies per the questions asked. The document explicitly states: "The 'smart' features, described below, are designed to reduce the technologist's manual tasks and to speed up workflow for existing features of the system. Implementation of these features does not change the intended use of the system and does not affect the Dual Energy functionality."

    Therefore, I cannot extract the information requested about acceptance criteria for an AI/algorithm's diagnostic performance, sample sizes used for test sets, expert ground truth establishment, MRMC studies, or standalone algorithm performance from this specific document.

    The document indicates:

    • Non-clinical testing was performed for the "Smart" Feature user options, and these tests "indicated that the subject device as described in this submission meets the predetermined safety and effectiveness criteria." However, it does not specify what those criteria were for these workflow enhancements beyond general safety and effectiveness.
    • Detector integration testing involved "functional testing, installation testing, media verification tests, performance tests, regression tests, risk mitigation testing, and serviceability testing." For the Lux 35 detector, "comprehensive image quality tests, vacuum testing to validate its liquid ingress (IP57) requirement, and Dual Energy functionality and performance testing" were done.

    Given the nature of the device (a general diagnostic X-ray system with workflow enhancements), it's highly probable that the acceptance criteria and validation studies are related to hardware performance, image quality, electrical safety, usability, and compliance with recognized standards (IEC, ISO), rather than the diagnostic accuracy of an AI algorithm.

    In summary, the provided text does not contain the information requested to answer the questions about AI/algorithm acceptance criteria and performance studies because the "Smart" features described are workflow enhancements, not diagnostic AI algorithms.


    K Number: K223842
    Device Name: DRX-Compass
    Date Cleared: 2023-01-20 (29 days)
    Product Code:
    Regulation Number: 892.1680
    Applicant (Manufacturer): Carestream Health, Inc.

    Intended Use

    The device is indicated for use in obtaining diagnostic images to aid the physician with diagnosis. The system can be used to perform radiographic imaging of various portions of the human body, including the skull, spinal column, extremities, chest, abdomen and other body parts. The device is not indicated for use in mammography.

    Device Description

    The DRX-Compass System is a general purpose x-ray system used for acquiring radiographic images of various portions of the human body. The system consists of a combination of components including various models of high voltage x-ray generators, control panels or workstation computers, various models of patient support tables, wall-mounted image receptors/detectors for upright imaging, various models of tube support devices, x-ray tube, and collimator (beam-limiting device). The DRX-Compass can be used with digital radiography (DR) and computed radiography (CR) receptors. Smart Features are added to the DRX-Compass system to provide remote capabilities for existing functions of the DRX-Compass system. These remote capabilities simplify exam set up and improve workflow for the operator while preparing for the patient exposure. The "smart features", described below, are designed to reduce the technologist's manual tasks and to speed up workflow for existing features of the system. These improvements are referred to as "smart features" in the product documentation. Implementation of these "smart features" does not change the intended use of the system.

    AI/ML Overview

    The provided text does not contain detailed information about specific acceptance criteria and a study that comprehensively proves the device meets those criteria for the DRX-Compass system. The document is a 510(k) summary for the FDA, which focuses on demonstrating substantial equivalence to a predicate device rather than a comprehensive efficacy study for new features.

    However, based on the information provided, I can extract the relevant details that are present and explain why some requested information is not available in this document.

    Here's a breakdown of what can be inferred and what is missing:


    1. Table of Acceptance Criteria and Reported Device Performance

    The document mentions that "Predefined acceptance criteria were met and demonstrated that the device is as safe, as effective, and performs as well as or better than the predicate device." However, the specific acceptance criteria themselves (e.g., specific thresholds for DQE/MTF, or performance metrics for the "smart features") are not explicitly detailed in this 510(k) summary. Similarly, the reported device performance values against those specific criteria are also not provided.

    The closest information related to performance is:

    Acceptance Criteria (Inferred/General) — Reported Device Performance (Inferred/General):

    • Image quality of additional detectors equivalent to predicate — Flat-panel detector DQE/MTF data show the additional detectors (DRX Plus 2530, Focus HD 35, Focus HD 43, Lux 35) are equivalent in image quality to the DRX Plus detectors cleared with the predicate.
    • Compliance with electrical safety standards (IEC 60601-1, IEC 60601-1-2, IEC 60601-2-54) — Device complies with the listed electrical safety standards.
    • Compliance with usability standards (IEC 60601-1-6, IEC 62366) — Device complies with the listed usability standards.
    • No new risks identified that raise additional questions of safety and performance (ISO 14971) — All product risks have been mitigated; no changes to risk control measures; testing indicates substantial equivalence.
    • "Smart Features" (Real-time Video, LLI, Collimation, Patient Picture) simplify exam setup and improve workflow without changing intended use — These features are designed to reduce manual tasks and speed up workflow (no specific quantitative performance metrics provided in this document).
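For context, the MTF cited above is commonly estimated from an edge image: differentiate the edge spread function (ESF) into a line spread function (LSF), then take the normalized magnitude of its Fourier transform. The sketch below uses a synthetic logistic edge and an assumed pixel pitch; it is a generic illustration, not data or code from the submission:

```python
import numpy as np

def mtf_from_esf(esf, pixel_pitch_mm):
    """Estimate MTF from a 1-D edge spread function (ESF):
    LSF = d(ESF)/dx, MTF = |FFT(LSF)| normalized to 1 at zero
    frequency. Returns spatial frequencies (cycles/mm) and MTF."""
    lsf = np.gradient(esf)
    mtf = np.abs(np.fft.rfft(lsf))
    mtf = mtf / mtf[0]
    freqs = np.fft.rfftfreq(len(esf), d=pixel_pitch_mm)
    return freqs, mtf

# Synthetic edge: step blurred over ~2 pixels (illustrative values)
x = np.arange(256)
esf = 1.0 / (1.0 + np.exp(-(x - 128) / 2.0))
freqs, mtf = mtf_from_esf(esf, pixel_pitch_mm=0.139)  # assumed pitch
```

A real measurement would use a slanted-edge phantom and oversampled ESF per the detector DQE/MTF standards, but the pipeline is the same.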

    2. Sample Size Used for the Test Set and Data Provenance

    This information is not provided in the 510(k) summary. The document states "Non-clinical testing such as standards testing are the same as that of the predicate. The verification and validation testing of the modified device demonstrates that the modified device performs as well as the predicate and is substantially equivalent." without detailing the specific sample sizes or data provenance for these tests. For imaging performance, it mentions DQE/MTF data for detectors, but not the sample size of images or patients used for performance evaluation of the overall system or its new "smart features."

    3. Number of Experts Used to Establish Ground Truth and Qualifications

    This information is not provided in the 510(k) summary. The document focuses on technical verification and validation, and comparison to a predicate device, rather than a clinical study requiring expert consensus on ground truth.

    4. Adjudication Method for the Test Set

    This information is not provided in the 510(k) summary. Given the absence of specific clinical study details or expert ground truth establishment, no adjudication method would be mentioned.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    An MRMC comparative effectiveness study is not mentioned in this document. The submission's focus is on demonstrating substantial equivalence through technical testing and compliance with recognized standards, particularly for the "smart features," which are described as workflow enhancements rather than diagnostic AI tools requiring reader performance studies. There is no mention of AI assistance for human readers or associated effect sizes.

    6. Standalone (Algorithm Only) Performance Study

    A standalone performance study of an algorithm without human-in-the-loop is not explicitly mentioned in this document. The "smart features" are described as functionalities to assist the operator, implying human-in-the-loop operation, rather than a standalone diagnostic algorithm. The document mentions "Flat panel detector DQE/MTF data shows that the additional detectors supported by the modified device (DRX-Compass) are equivalent in image quality to that of the DRX Plus detectors cleared with the predicate," which is a technical performance metric for the detector component, not an algorithm's diagnostic performance.

    7. Type of Ground Truth Used

    The type of ground truth used for any performance evaluation is not explicitly stated. For the detector performance, DQE/MTF data refers to physical image quality metrics rather than a diagnostic ground truth (like pathology or clinical outcomes). For the "smart features," their evaluation appears to be based on functional verification and validation of their workflow enhancement capabilities, rather than comparison to a ground truth for diagnostic accuracy.

    8. Sample Size for the Training Set

    This information is not provided in the 510(k) summary. The document does not describe the use of machine learning algorithms that would typically require a training set. The "smart features" appear to be rule-based or real-time processing functionalities rather than learning algorithms.

    9. How Ground Truth for the Training Set Was Established

    Since there is no mention of a training set or machine learning, details on establishing its ground truth are not provided.


    In summary, the 510(k) submission for the DRX-Compass focuses on demonstrating substantial equivalence to a predicate device by:

    • Ensuring the modified device's indications for use are identical.
    • Confirming compliance with recognized electrical safety and performance standards (AAMI ES60601-1, IEC 60601-1-6, IEC 60601-1-3, IEC 60601-2-54, IEC 62366).
    • Applying risk management (ISO 14971) to ensure no new risks are introduced.
    • Showing that new components (e.g., additional detectors) maintain equivalent image quality (e.g., DQE/MTF data).
    • Asserting that new "smart features" improve workflow without changing the device's intended use or safety profile.

    The document does not provide the kind of detailed clinical study data often found for AI/ML-based diagnostic devices, including specific acceptance criteria values, sample sizes for test or training sets, expert qualifications, or adjudication methods, as these may not be typically required for modifications to a stationary X-ray system primarily focused on workflow enhancements and component upgrades.


    K Number: K213568
    Date Cleared: 2022-03-23 (134 days)
    Product Code:
    Regulation Number: 892.1720
    Applicant (Manufacturer): Carestream Health

    Intended Use

    The device is designed to perform radiographic x-ray examinations on all pediatric and adult patients, in all patient treatment areas.

    Device Description

    The DRX-Rise Mobile X-ray System is a diagnostic mobile X-ray system utilizing digital radiography technology. The DRX-Rise consists of a self-contained X-ray generator, image receptor(s), imaging display and software for acquiring medical diagnostic images outside of a standard stationary X-ray room. These components are mounted on a motorized cart that is battery powered to enable the device to be driven from location to location by user interaction. The DRX-Rise system incorporates a flat-panel detector that can be used wirelessly for exams such as in-bed chest projections. The device acquires images using Carestream's clinical acquisition software platform (ImageView) and digital flat panel detectors. ImageView is considered software that is of Moderate Level of Concern and not intended for manipulation of medical images. The DRX-Rise Mobile X-ray System is designed for digital radiography (DR) with Carestream detectors.

    AI/ML Overview

    The provided document is a 510(k) premarket notification for the DRX-Rise Mobile X-ray System, asserting its substantial equivalence to a predicate device (DRX-Revolution Mobile X-ray System, K191025). The document does not describe a study involving acceptance criteria for an AI/CADe device's performance when assisting human readers or evaluating standalone AI performance.

    Instead, the document focuses on demonstrating that the DRX-Rise Mobile X-ray System itself, as a physical medical device, is substantially equivalent to an already cleared device. This is achieved through comparisons of technological characteristics and compliance with consensus standards.

    Therefore, I cannot provide the requested information regarding acceptance criteria and studies for an AI/CADe device's performance (points 2-9) because the submission does not pertain to such a device or study.

    Here's a breakdown of what can be extracted from the provided text, related to the device itself:

    1. A table of acceptance criteria and the reported device performance:

    The document doesn't present acceptance criteria in the typical "performance target" vs. "achieved performance" format for an AI/CADe. Instead, it compares the modified device's specifications to the predicate device's specifications, arguing that any differences do not impact safety or performance.

    Feature — Predicate (DRX-Revolution Mobile X-ray System, K191025) vs. Modified (DRX-Rise Mobile X-ray System, K213568), with the implicit acceptance assessment:

    • Indications for Use — Predicate: radiographic x-ray examinations on all pediatric and adult patients, in all patient treatment areas. Modified: same. Substantially equivalent (same indications for use is an explicit statement of acceptance).
    • Imaging Device Compatibility — Predicate: Digital Radiography (DR). Modified: same. Substantially equivalent.
    • Digital Radiography Imaging Device (Detector) — Predicate: DRX Plus Detectors (K150766, K153142, K183245). Modified: same. Substantially equivalent.
    • X-ray Generator Rating — Predicate: 32 kW. Modified: same. Substantially equivalent.
    • mAs Range (Generator) — Predicate: 0.1-320 mAs. Modified: 0.1-630 mAs. The DRX-Rise provides more power in generator output; no impact to safety/performance.
    • X-ray Tube Voltage Range — Predicate: 40-150 kV (1 kV steps). Modified: 40-125 kV (1 kV steps). 40-125 kV is the most commonly used kV range in clinical imaging; no impact to safety/performance.
    • X-ray Tube Model — Predicate: Canon/XRR-3336X. Modified: Canon/E7242 (X / FX / GX). Same supplier but different tube model; no impact to safety/performance.
    • X-ray Tube Focal Spot Size — Predicate: 0.6 mm and 1.2 mm. Modified: 0.6 mm and 1.5 mm. Small focal spot is the same as the predicate; the large focal spot is 20% larger but within the expected range for clinical imaging; no impact to safety/performance or image quality.
    • System Power for Charging — Predicate: single-phase AC, 50/60 Hz, 1440 VA, 100-240 V. Modified: same. Substantially equivalent.
    • Application System Software (Operator Console X-ray Control) — Predicate: Carestream ImageView system software with image processing capability (K191025). Modified: same. Substantially equivalent.
    • Collapsible Column — Predicate: yes. Modified: no (the column is fixed on the modified device); no impact to safety/performance.
    • Column Height — Predicate: 1390-2193 mm. Modified: 1930 mm (fixed column); no impact to safety/performance.
    • Column Rotation Range — Predicate: ±270 degrees. Modified: same. Substantially equivalent.
    • Travel Method — Predicate: electric motor (battery powered). Modified: same. Substantially equivalent.

    2. Sample size used for the test set and the data provenance: Not applicable. This submission concerns a hardware medical device, not a performance study on a test set of images. The "test set" in this context refers to the device itself being tested against its specifications and existing standards.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not applicable. Ground truth for image interpretation by experts is not relevant to this submission.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set: Not applicable.

    5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of reader improvement with vs. without AI assistance: Not applicable. No AI assistance is mentioned.

    6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done: Not applicable.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc): Not applicable. The "ground truth" for this device's acceptance is its compliance with recognized consensus standards and its functional equivalence to a predicate device.

    8. The sample size for the training set: Not applicable. There is no mention of a training set as this is not an AI/ML device submission.

    9. How the ground truth for the training set was established: Not applicable.


    K Number: K213307
    Date Cleared: 2022-01-14 (102 days)
    Product Code:
    Regulation Number: 892.1680
    Reference & Predicate Devices:
    Applicant (Manufacturer): Carestream Health, Inc.

    Intended Use

    The software performs digital enhancement of a radiographic image generated by an x-ray device. The software can be used to process adult and pediatric x-ray images. This excludes mammography applications.

    Device Description

    Eclipse software runs inside the ImageView product application software (it is not considered stand-alone software). Smart Noise Cancellation is an optional feature (module) that enhances projection radiography acquisitions captured from digital radiography imaging receptors (Computed Radiography (CR) and Digital Radiography (DR)). Eclipse II with Smart Noise Cancellation supports the Carestream DRX family of detectors, which includes all CR and DR detectors.

    The Smart Noise Cancellation module consists of a Convolutional Neural Network (CNN) trained using clinical images with added simulated noise to represent reduced signal-to-noise acquisitions.

    Eclipse II with Smart Noise Cancellation incorporates enhanced noise reduction prior to executing the Eclipse image processing software. The software can lower dose by up to 50% when images are processed through Eclipse II with SNC, with improved image quality. A 50% dose reduction for CsI panel images and a 40% dose reduction for GOS panel images, when processed with Eclipse II and SNC, results in image quality as good as or better than nominal-dose images.
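One common way to build such noisy/clean training pairs (not necessarily Carestream's exact method, which the summary does not detail) is to simulate a reduced-dose acquisition by injecting signal-dependent Poisson noise into a near-noise-free clinical image, then train the CNN to map the noisy version back to the clean one:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_low_dose(clean, dose_fraction):
    """Simulate a reduced-dose version of a 'clean' image by treating
    pixel values as expected photon counts, drawing Poisson counts at
    the reduced dose, and rescaling back to the original intensity
    range. Illustrative sketch only."""
    counts = np.maximum(clean, 0.0) * dose_fraction
    return rng.poisson(counts).astype(np.float64) / dose_fraction

# Hypothetical clean image; (noisy, clean) is one training pair
clean = rng.uniform(50.0, 1000.0, size=(64, 64))
noisy = simulate_low_dose(clean, dose_fraction=0.5)  # simulate 50% dose
```

Because Poisson noise scales with signal, halving the simulated dose roughly increases relative noise by sqrt(2), which is the regime the denoiser must learn to invert.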

    AI/ML Overview

    The provided document describes the modification of the Eclipse II software to include a Smart Noise Cancellation (SNC) module. The primary goal of this modification is to enable lower radiation doses while maintaining or improving image quality. The study discussed is a "concurrence study" involving board-certified radiologists to evaluate diagnostic image quality.

    Here's the breakdown of the acceptance criteria and study details:

    1. Table of Acceptance Criteria and Reported Device Performance:

    The document doesn't explicitly state "acceptance criteria" in a table format with specific numerical thresholds for image quality metrics. Instead, it describes the objective of the study which effectively serves as the performance goal for the device.

    Acceptance Criterion (Implicit Performance Goal) — Reported Device Performance:

    • Diagnostic-quality images at reduced dose — Statistical test results and graphical summaries demonstrate that the software delivers diagnostic-quality images at a 50% dose reduction for CsI panel images and a 40% dose reduction for GOS panel images.
    • Image quality at reduced dose — Image quality with reduced radiation doses is equivalent to or exceeds the quality of nominal-dose images of exams.

    2. Sample Size Used for the Test Set and Data Provenance:

    • Sample Size for Test Set: Not explicitly stated. The document mentions "clinical images" and "exams, detector types and exposure levels" were used, but a specific number of images or cases for the test set is not provided.
    • Data Provenance: Not explicitly stated. The document refers to "clinical images," but there is no information about the country of origin or whether the data was retrospective or prospective.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts:

    • Number of Experts: Not explicitly stated. The study was performed by "board certified radiologists." The number of radiologists is not specified.
    • Qualifications of Experts: "Board certified radiologists." No information is given regarding their years of experience.

    4. Adjudication Method for the Test Set:

    • Adjudication Method: Not explicitly stated. The document mentions a "5-point visual difference scale (-2 to +2) tied to diagnostic confidence" and a "4-point RadLex scale" for evaluating overall diagnostic capability. However, it does not describe how multiple expert opinions were combined or adjudicated if there were disagreements (e.g., 2+1, 3+1).

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance:

    • MRMC Study: The study appears to be a multi-reader study as it was "performed by board certified radiologists." However, it is not a comparative effectiveness study comparing human readers with AI assistance vs. without AI assistance. The study's aim was to determine if the software itself (Eclipse II with SNC) could produce diagnostic quality images at reduced dose, assessed by human readers. It's evaluating the output of the software, not the improvement of human readers using the software as an assistance tool.
    • Effect Size: Not applicable, as it's not an AI-assisted human reading study.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done:

    • Standalone Performance: No, a standalone (algorithm only) performance evaluation was not done. The evaluation involved "board certified radiologists" assessing the diagnostic quality of the images processed by the software. This is a human-in-the-loop assessment of the processed images, not a standalone performance of the algorithm making diagnoses.

    7. The Type of Ground Truth Used:

    • Type of Ground Truth: The ground truth for image quality and diagnostic capability was established by expert consensus (or at least expert assessment), specifically "board certified radiologists," using a 5-point visual difference scale and a 4-point RadLex scale. This is a subjective assessment by experts, rather than an objective ground truth like pathology or outcomes data.

    8. The Sample Size for the Training Set:

    • Sample Size for Training Set: Not explicitly stated. The document mentions that the Convolutional Network (CNN) was "trained using clinical images with added simulated noise." However, no specific number of images or cases used for training is provided.

    9. How the Ground Truth for the Training Set Was Established:

    • Ground Truth for Training Set: The document states the CNN was "trained using clinical images with added simulated noise to represent reduced signal-to-noise acquisitions." This implies that the ground truth for training likely revolved around distinguishing actual image data from added simulated noise. This is an intrinsic ground truth generated by the method of simulating noise on known clean clinical images, rather than a clinical ground truth established by expert review for diagnostic purposes.
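As a generic illustration of how paired reader scores on a -2 to +2 visual-difference scale might be analyzed (the summary does not describe the actual statistical method; the scores, noninferiority margin, and bootstrap approach below are all assumptions for the sketch):

```python
import numpy as np

rng = np.random.default_rng(7)

def bootstrap_lower_bound(scores, n_boot=10_000, alpha=0.05):
    """One-sided bootstrap lower confidence bound on the mean of
    per-image difference scores (-2..+2; positive = reduced-dose
    image rated at least as good). Illustrative sketch only."""
    resamples = rng.choice(scores, size=(n_boot, scores.size), replace=True)
    return float(np.quantile(resamples.mean(axis=1), alpha))

# Hypothetical difference scores from one reader over 40 image pairs
scores = rng.integers(-1, 2, size=40).astype(float)  # values in {-1, 0, 1}
lower_95 = bootstrap_lower_bound(scores)
margin = -0.5            # assumed noninferiority margin, not from the source
noninferior = lower_95 > margin
```

If the lower confidence bound on the mean difference score stays above the margin, the reduced-dose images would be judged noninferior under this (assumed) design.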

    K Number: K202441
    Date Cleared: 2021-04-02 (219 days)
    Product Code:
    Regulation Number: 892.1680
    Applicant (Manufacturer): Carestream Health, Inc.

    Intended Use

    The software performs digital enhancement of a radiographic image generated by an x-ray device. The software can be used to process adult and pediatric x-ray images. This excludes mammography applications.

    Device Description

    Eclipse software runs inside the ImageView product application software (also called console software). The Eclipse image processing software II with Smart Noise Cancellation is similar to the predicate Eclipse image processing software (K180809). Eclipse with Smart Noise Cancellation is an optional feature that enhances projection radiography acquisitions captured from digital radiography imaging receptors (Computed Radiography (CR) and Direct Radiography (DR)). The modified software is considered an extension of the predicate software (it is not stand-alone and is to be used only with the console software) and supports the Carestream DRX family of detectors, which includes all CR and DR detectors. The primary difference between the predicate and the subject device is the addition of a Smart Noise Cancellation module, which consists of a Convolutional Neural Network (CNN) trained using clinical images with added simulated noise to represent reduced signal-to-noise acquisitions. Eclipse with Smart Noise Cancellation (the modified device) incorporates enhanced noise reduction prior to executing the Eclipse II image processing software.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:

    Based on the provided text, the device Eclipse II with Smart Noise Cancellation is considered substantially equivalent to its predicate Eclipse II (K180809) due to modifications primarily centered around an enhanced noise reduction feature. The acceptance criteria and the study that proves the device meets these criteria are inferred from the demonstrated equivalence to the predicate device and the evaluation of the new Smart Noise Cancellation module.

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria are implicitly tied to the performance of the predicate device and the new feature's ability to maintain or improve upon key image quality attributes without introducing new safety or effectiveness concerns.

    Acceptance Criteria (Implied) | Reported Device Performance
    Diagnostic Quality Preservation/Improvement: The investigational software (Eclipse II with Smart Noise Cancellation) must deliver diagnostic quality images equivalent to or exceeding the predicate software (Eclipse II). | Clinical Evaluation: "The statistical test results and graphical summaries demonstrate that the investigational software delivers diagnostic quality images that exceed the quality of the predicate software over a range of exams, detector types and exposure levels."
    No Substantial Residual Image Artifacts: The noise reduction should not introduce significant new artifacts. | Analysis of Difference Images: "The report focused on the analysis of the residual image artifacts. In conclusion, the images showed no substantial residual edge information within regions of interest."
    Preservation/Improvement of Detectability: The detectability of lesions should not be negatively impacted and ideally improved. | Ideal Observer Evaluation: "The evaluation demonstrated that detectability is preserved or improved with the investigational software for all supported detector types and exposure levels tested."
    No New Questions of Safety & Effectiveness: The modifications should not raise new safety or effectiveness concerns. | Risk Assessment: "Risks were assessed in accordance to ISO 14971 and evaluated and reduced as far as possible with risk mitigations and mitigation evidence." Overall conclusion: "The differences within the software do not raise new or different questions of safety and effectiveness."
    Same Intended Use: The device must maintain the same intended use as the predicate. | Indications for Use: "The software performs digital enhancement of a radiographic image generated by an x-ray device. The software can be used to process adult and pediatric x-ray images. This excludes mammography applications." (Stated as "same" for both predicate and modified device in the comparison chart)

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: Not explicitly stated. The text mentions "a range of exams, detector types and exposure levels" for the clinical evaluation, and "clinical images with added simulated noise" for the CNN training.
    • Data Provenance: Not explicitly stated. The text mentions "clinical images," implying real-world patient data, but does not specify the country of origin or whether it was retrospective or prospective.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Number of Experts: Not explicitly stated. The text mentions a "clinical evaluation was performed by board certified radiologists." It does not specify the number involved.
    • Qualifications of Experts: "Board certified radiologists." No specific years of experience are provided.

    4. Adjudication Method for the Test Set

    • Adjudication Method: Not explicitly stated. The text mentions images were evaluated using a "5-point visual difference scale (-2 to +2) tied to diagnostic confidence" and a "4-point RadLex scale" for overall diagnostic capability. It does not describe a method for resolving discrepancies among multiple readers, such as 2+1 or 3+1.
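    The submission does not disclose which statistical test was applied to these ratings. A one-sided sign test is one common nonparametric way to check whether ratings on a -2 to +2 difference scale significantly favor one arm; the sketch below uses entirely hypothetical rating data:

```python
from math import comb

def sign_test_p(ratings):
    """One-sided sign test on difference-scale ratings: the probability of
    seeing at least the observed number of positive ratings among the
    nonzero ones, if positive and negative were equally likely (H0)."""
    pos = sum(1 for r in ratings if r > 0)
    neg = sum(1 for r in ratings if r < 0)
    n = pos + neg
    return sum(comb(n, k) for k in range(pos, n + 1)) / 2 ** n

# Hypothetical ratings on the -2..+2 visual difference scale
ratings = [1, 2, 0, 1, -1, 2, 1, 0, 1, 2, -1, 1]
p_value = sign_test_p(ratings)  # small p => ratings favor the investigational arm
```

    Zero ratings ("no visible difference") carry no sign information and are discarded, which is the standard convention for the sign test.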

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done

    • MRMC Comparative Effectiveness Study: Yes, a clinical evaluation was performed by board-certified radiologists comparing the investigational software to the predicate software. While it doesn't explicitly use the term "MRMC," the description of a clinical evaluation by multiple radiologists comparing two versions of software suggests this type of study was conducted.
    • Effect Size of Human Readers Improvement with AI vs. without AI Assistance: The text states, "The statistical test results and graphical summaries demonstrate that the investigational software delivers diagnostic quality images that exceed the quality of the predicate software over a range of exams, detector types and exposure levels." This indicates an improvement in diagnostic image quality with the new software (which incorporates AI - the CNN noise reduction), suggesting that human readers benefit from this enhancement. However, a specific effect size (e.g., AUC improvement, percentage increase in accuracy) is not provided in the summary.

    6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) was done

    • Standalone Performance: Partially. The "Ideal Observer Evaluation" seems to be a more objective, algorithm-centric assessment of detectability, stating that "detectability is preserved or improved with the investigational software." Also, the "Analysis of the Difference Images" checked for artifacts without human interpretation as the primary outcome. However, the overall "diagnostic quality" assessment was clinical, involving human readers.
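    The submission does not describe the ideal observer model used. For the textbook case of a known signal in white Gaussian noise, the ideal observer reduces to a matched filter whose detectability index is d' = ||s|| / sigma, which makes plain why noise reduction preserves or improves detectability. A minimal sketch with a hypothetical disc lesion (all parameters illustrative):

```python
import numpy as np

def ideal_observer_dprime(signal, noise_std):
    """Detectability index of the ideal observer for a known signal in
    white Gaussian noise: d' = ||s|| / sigma (matched-filter SNR)."""
    return float(np.sqrt(np.sum(signal ** 2)) / noise_std)

# Hypothetical disc "lesion" with contrast 5 on a 32x32 patch
y, x = np.mgrid[:32, :32]
lesion = (((x - 16) ** 2 + (y - 16) ** 2) <= 16).astype(float) * 5.0

d_low_noise = ideal_observer_dprime(lesion, noise_std=10.0)
d_high_noise = ideal_observer_dprime(lesion, noise_std=20.0)
# Halving the noise standard deviation doubles d': in this model,
# noise reduction directly improves (or at worst preserves) detectability.
```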

    7. The Type of Ground Truth Used

    • Type of Ground Truth: The text implies a human expert consensus/evaluation as the primary ground truth for diagnostic quality. The "5-point visual difference scale" and "4-point RadLex scale" evaluated by "board certified radiologists" serve as the basis for assessing diagnostic image quality. For the "Ideal Observer Evaluation," the ground truth likely involved simulated lesions.

    8. The Sample Size for the Training Set

    • Training Set Sample Size: Not explicitly stated. The text mentions "clinical images with added simulated noise" were used to train the convolutional neural network (CNN).

    9. How the Ground Truth for the Training Set Was Established

    • Ground Truth for Training Set: The ground truth for training the Smart Noise Cancellation module (a convolutional neural network) was established using "clinical images with added simulated noise to represent reduced signal-to-noise acquisitions." This suggests that the model was trained to learn the relationship between noisy images (simulated low SNR) and presumably clean or less noisy versions of those clinical images to perform noise reduction. The text doesn't specify how the "clean" versions were obtained or verified, but it implies a supervised learning approach where the desired noise-free output served as the ground truth.
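    A supervised setup of this kind can be sketched as follows. The Poisson dose-scaling noise model, the exposure factors, and all names below are illustrative assumptions; the submission does not describe the actual noise simulation:

```python
import numpy as np

def simulate_low_snr(clean, exposure_factor, rng):
    """Simulate a reduced signal-to-noise acquisition from a clean image.

    The clean image is treated as an expected photon-count map; scaling the
    counts by `exposure_factor` and drawing Poisson samples mimics a
    lower-dose exposure, and rescaling restores the original intensity range.
    """
    counts = np.clip(clean, 0, None) * exposure_factor
    return rng.poisson(counts).astype(np.float64) / exposure_factor

rng = np.random.default_rng(0)
# Stand-in for a "clean" clinical image (arbitrary count levels)
clean = rng.uniform(500.0, 1500.0, size=(64, 64))

# Supervised training pairs: (noisy input, clean target) at several dose levels.
# A denoising CNN would be trained to map each noisy input back to its target.
pairs = [(simulate_low_snr(clean, f, rng), clean) for f in (1.0, 0.5, 0.25)]
```

    Lower exposure factors yield noisier inputs, so a network trained on such pairs sees the range of reduced-SNR conditions it is meant to correct.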

    K Number
    K203159
    Device Name
    Lux 35 Detector
    Manufacturer
    Date Cleared
    2020-12-02

    (40 days)

    Product Code
    Regulation Number
    892.1680
    Why did this record match?
    Applicant Name (Manufacturer) :

    Carestream Health

    Intended Use

    The device is intended to capture for display radiographic images of human anatomy including both pediatric and adult patients. The device is intended for use in general projections wherever conventional screen-film systems or CR systems may be used. Excluded from the indications for use are mammography, fluoroscopy, and angiography applications

    Device Description

    The modified DRX Plus 3543C is a scintillator-photodetector device (Solid State X-ray Imager) utilizing an amorphous silicon flat panel image sensor. The modified detector is redesigned with the intent to reduce weight and increase durability, while utilizing a non-glass substrate material and cesium iodide scintillator. The modified detector, like the predicate is designed to interact with Carestream's DRX-1 System (K090318).

    The modified DRX Plus 3543C Detector, like the predicate, creates a digital image from the x-rays incident on the input surface during an x-ray exposure. The flat panel imager absorbs incident x-rays and converts the energy into visible light photons. These light photons are converted into electrical charge and stored in structures called "pixels." The digital value in each pixel of the image is directly related to the intensity of the incident x-ray flux at that particular location on the surface of the detector. Image acquisition software is used to correct the digital image for defective pixels and lines on the detector, perform gain and offset correction and generate sub-sampled preview images
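    The gain and offset correction described here is the standard flat-field correction for flat-panel detectors. A minimal sketch under the usual assumptions (a dark frame supplies the offset, a uniform flat exposure supplies the gain map); this is illustrative, not the product's actual implementation:

```python
import numpy as np

def flat_field_correct(raw, dark, flat):
    """Standard gain/offset (flat-field) correction: subtract the dark
    (offset) frame, then divide by a gain map normalized to unit mean,
    derived from a uniform (flat) calibration exposure."""
    gain = flat - dark
    gain = gain / gain.mean()
    return (raw - dark) / gain

rng = np.random.default_rng(1)
gain_true = rng.uniform(0.8, 1.2, size=(8, 8))   # per-pixel sensitivity
dark = rng.uniform(90.0, 110.0, size=(8, 8))     # per-pixel offset
raw = dark + gain_true * 1000.0                  # uniform 1000-count exposure
flat = dark + gain_true * 500.0                  # calibration flat exposure

corrected = flat_field_correct(raw, dark, flat)  # near-uniform result
```

    After correction, the fixed-pattern variation from per-pixel gain and offset cancels, leaving only the (here absent) quantum noise.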

    AI/ML Overview

    The provided text describes a 510(k) submission for a medical device, the Lux 35 Detector, which is a digital X-ray flat panel detector. The submission aims to demonstrate substantial equivalence to a predicate device (DRX Plus 3543 Detector). The information focuses on design modifications and non-clinical testing.

    Here's an analysis of the acceptance criteria and study details based on the provided text, highlighting where information is present and where it is not:

    Device: Lux 35 Detector (Carestream Health, Inc.)

    Study Type: Non-clinical (bench) testing, specifically a Phantom Image Study, to demonstrate substantial equivalence of image quality to a predicate device.

    1. Table of Acceptance Criteria and Reported Device Performance:

    The document doesn't explicitly state "acceptance criteria" for image quality in a tabular format with pass/fail thresholds. Instead, it provides a qualitative comparison of image attributes. The closest interpretation of "acceptance criteria" is that the modified device's image quality needed to be "equivalent to just noticeably better than" the predicate.

    Acceptance Criterion (Inferred) | Reported Device Performance (Lux 35 Detector vs. Predicate)
    Image Detail Performance | Ratings for detail were "significantly greater than 0," indicating images were equivalent to or better than the predicate.
    Image Sharpness Performance | Ratings for sharpness were "significantly greater than 0," indicating images were equivalent to or better than the predicate.
    Image Noise Performance | Ratings for noise were "significantly greater than 0," indicating images were equivalent to or better than the predicate.
    Appearance of Artifacts | Qualitative assessment; results not numerically quantified but implied to be equivalent or better given the overall conclusion.
    DQE (Detective Quantum Efficiency) | 55% (RQA-5, 1 cycle/mm, 2.5 µGy) for the Lux 35 vs. 26% (RQA-5, 1 cycle/mm, 3.1 µGy) for the predicate. This represents "improved image quality."
    MTF (Modulation Transfer Function) | 62% (RQA-5, 1 cycle/mm) for the Lux 35 vs. 54% (RQA-5, 1 cycle/mm) for the predicate. This represents "improved image quality."
    Overall Image Quality Comparison | "Greater than 84% of all responses were rated 0 or higher in favor of the modified DRX Plus 3543C panel." "All ratings for the attributes (detail contrast, sharpness and noise) were significantly greater than 0 indicating that the modified DRX Plus 3543C images were equivalent to just noticeably better than the predicate images." "The image quality of the modified device is at least as good as or better than that of the predicate device."
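    MTF figures like those above are conventionally estimated from an edge measurement: the edge spread function (ESF) is differentiated to obtain the line spread function (LSF), whose Fourier magnitude gives the MTF. The sketch below uses a synthetic edge and an assumed 0.1 mm pixel pitch; it is a simplified illustration, not the full standards-grade slanted-edge procedure used for regulatory measurements:

```python
import numpy as np

def mtf_from_esf(esf, pixel_pitch_mm):
    """Estimate MTF from an edge spread function: differentiate to get the
    line spread function (LSF), then take the Fourier magnitude, normalized
    to 1 at zero frequency. Returns (frequencies in cycles/mm, MTF)."""
    lsf = np.diff(esf)
    mtf = np.abs(np.fft.rfft(lsf))
    mtf = mtf / mtf[0]
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch_mm)
    return freqs, mtf

# Synthetic blurred edge profile sampled at a 0.1 mm pixel pitch
x = (np.arange(256) - 128) * 0.1
esf = 0.5 * (1.0 + np.tanh(x / 0.2))
freqs, mtf = mtf_from_esf(esf, 0.1)
```

    A sharper detector response yields a narrower LSF and therefore a higher MTF at a given spatial frequency, which is the sense in which the Lux 35's 62% at 1 cycle/mm improves on the predicate's 54%.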

    2. Sample Size Used for the Test Set and Data Provenance:

    • Sample Size: Not explicitly stated. The text mentions "a Phantom Image Study" but does not quantify the number of images or runs.
    • Data Provenance: This was a non-clinical bench testing study using phantoms. Therefore, there is no patient data or geographical provenance. The study was likely conducted at Carestream's facilities. It is a prospective study in the sense that the testing was performed specifically for this submission.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications of Experts:

    • Number of Experts: Not specified. The text mentions "Greater than 84% of all responses were rated 0 or higher," implying a group of evaluators, but their number is not provided.
    • Qualifications of Experts: Not specified. It's unclear if these were radiologists, imaging scientists, or other relevant personnel.

    4. Adjudication Method for the Test Set:

    • Adjudication Method: Not specified. The phrase "Greater than 84% of all responses were rated 0 or higher" suggests individual ratings were collected, but how conflicts or multiple ratings were aggregated or adjudicated is not detailed.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done:

    • Answer: No. The study was a "Phantom Image Study" focused on technical image quality attributes, not human reader performance.
    • Effect Size of Human Readers: Not applicable, as no MRMC study was performed.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done:

    • Answer: Yes, in a sense. The evaluation of DQE and MTF are standalone technical performance metrics of the detector itself, independent of human interpretation. The "Phantom Image Study" also evaluates the output of the device (images) based on technical attributes, rather than a human diagnostic task.

    7. The Type of Ground Truth Used:

    • Type of Ground Truth: For the phantom image study, the "ground truth" for evaluating image quality attributes (detail, sharpness, noise, artifacts) is based on technical image quality metrics (DQE, MTF) and potentially expert consensus on visual assessments of phantom images against known ideal phantom characteristics. It is not based on patient outcomes, pathology, or clinical diagnoses.

    8. The Sample Size for the Training Set:

    • Sample Size for Training Set: Not applicable. This device is a hardware component (X-ray detector) and the study described is a non-clinical evaluation of its image quality, not an AI/algorithm that requires a training set of data.

    9. How the Ground Truth for the Training Set Was Established:

    • Ground Truth Establishment for Training Set: Not applicable, as this is not an AI/algorithm that requires a training set.

    K Number
    K201373
    Device Name
    DRX-Compass
    Date Cleared
    2020-06-26

    (31 days)

    Product Code
    Regulation Number
    892.1680
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    Carestream Health, Inc

    Intended Use

    The device is indicated for use in obtaining diagnostic images to aid the physician with diagnosis. The system can be used to perform radiographic imaging of various portions of the human body, including the skull, spinal column, extremities, chest, abdomen and other body parts. The device is not indicated for use in mammography.

    Device Description

    The DRX-Compass System is a general purpose x-ray system used for acquiring radiographic images of various portions of the human body. The system consists of a combination of components including various models of high voltage x-ray generators, control panels or workstation computers, various models of patient support tables, wallmounted image receptors/detectors for upright imaging, a ceiling mounted tube support, x-ray tube, and collimator (beam-limiting device).

    The DRX-Compass can be used with digital radiography (DR) and computed radiography (CR) receptors. Systems equipped with DR or CR receptors can also be configured to include a workstation computer that is fully integrated with the x-ray generator.

    The modified (subject) device, DRX-Compass, is the previously cleared Q-Rad System stationary x-ray system which has been modified as follows:

    • New marketing names DRX-Compass and DR-Fit will be used depending upon regional marketing strategies.
    • Implementation of a new wall stand that provides options for automated vertical motion and vertical to horizontal manual tilt (90 degrees).
    • Implementation of a different Overhead Tube Crane (OTC): This OTC is ceiling suspended and provides x-y movement capability for the tube head with respect to the detector. The tube head is capable of three options for alignment with the image acquisition device (detector) as follows: 1) manual alignment by moving the x-ray tube support, 2) manual alignment using the "tube-up/tube-down" switch on the tube support, or 3) automatic alignment using the "Auto Position" switch to activate motors on the tube support in x, y, z, and alpha directions
    • Focus 35C and Focus 43C Detectors are added as additional optional detector selections for customers ordering a DRX-Compass system.
    • X-Ray Generator: Several Carestream designed generators are available with the system depending on power requirements and regional configurations. These generators are functionally identical to the generators currently offered for sale with the Q-Rad System.
    AI/ML Overview

    This looks like a 510(k) summary for a medical device called DRX-Compass, an X-ray system. The document does not contain the acceptance criteria or results of a study (like an AI model performance study) that would typically involve statistical metrics, ground truth establishment, or expert reviews.

    Instead, this document describes:

    • Device Name: DRX-Compass
    • Regulatory Information: Product Code, Regulation Number, Class, etc.
    • Predicate Device: Q-Rad System (K193574)
    • Device Description: Components of the DRX-Compass system, including generator models, patient support tables, wall-mounted receptors, ceiling-mounted tube support, X-ray tube, and collimator. It also mentions the new additions/modifications compared to the predicate device (new marketing names, new wall stand, different Overhead Tube Crane (OTC), added detectors, and available generators).
    • Indications for Use: Obtaining diagnostic images for various body parts.
    • Substantial Equivalence: The primary claim is that the DRX-Compass is substantially equivalent to the predicate Q-Rad System, stating that modifications do not raise new issues of safety and effectiveness.
    • Discussion of Testing: It briefly mentions "non-clinical (bench) testing" to evaluate performance, workflow, function, verification, and validation, and that "Predefined acceptance criteria were met." However, it does not specify what those acceptance criteria were or how they were met in terms of specific performance metrics. It's focused on demonstrating equivalence to the predicate device, not on proving performance against a detailed set of criteria that would typically be described for an AI/CAD device.

    Therefore, based only on the provided text, I cannot extract the detailed information requested in the prompt. The document is a regulatory submission summary, not a clinical or performance study report.

    If this were a submission for an AI/CAD device, the "Discussion of Testing" section would typically elaborate on a clinical study including:

    1. A table of acceptance criteria and the reported device performance: This would list metrics like sensitivity, specificity, AUC, etc., and the target performance values.
    2. Sample size used for the test set and the data provenance: Details on number of cases, patient demographics, and origin of data.
    3. Number of experts used to establish the ground truth for the test set and their qualifications: Information about the radiologists/pathologists.
    4. Adjudication method: How disagreements among experts were resolved.
    5. Multi-reader multi-case (MRMC) comparative effectiveness study: If conducted, the effect size (e.g., improvement in reader performance with AI).
    6. Standalone performance: The algorithmic performance without human intervention.
    7. Type of ground truth used: e.g., pathology, clinical follow-up.
    8. Sample size for the training set: Number of cases used for model development.
    9. How the ground truth for the training set was established: Similar to the test set, but for the training data.

    In summary, the provided document does not contain the information requested because it pertains to a traditional X-ray system's substantial equivalence claim, not the performance evaluation of an AI/CAD (Computer-Aided Detection/Diagnosis) algorithm.


    K Number
    K192894
    Date Cleared
    2020-02-18

    (130 days)

    Product Code
    Regulation Number
    892.1680
    Reference & Predicate Devices
    Why did this record match?
    Applicant Name (Manufacturer) :

    Carestream Health, Inc.

    Intended Use

    The Vita Flex CR System is intended for digital radiography using a phosphor storage screen for standard radiographic diagnostic images. The LLI is indicated for Long Length Imaging examinations of long areas of anatomy such as the leg and spine.

    Device Description

    The Vita Flex CR System with LLI is a Computed Radiography (CR) acquisition scanner, which includes a mechanical and software interface to the LLI cassette. The device is constructed from a Man Machine Interface panel, a CR scanner, and infrastructure that enables connection to external applications, i.e. to import command messages, export images, and provide status messages. The LLI is a CR cassette used for Long Length Imaging X-ray examinations of long areas of anatomy.

    The Vita Flex CR system with LLI accepts an x-ray cassette with a screen. An x-ray cassette is a light-resistant container that protects the screen from exposure to daylight and allows the passage of x-rays through the front cover onto the phosphor layer of the screen. When struck by radiation, the intensifying screen fluoresces, emitting light that creates the image.

    The Vita Flex CR system takes a cassette as input, extracts the exposed screen, and scans the image off the screen. The image is stored on the computer system attached to the Vita Flex CR system. Once the scan is complete, the screen data is erased and the screen is placed back inside the cassette to be used again by the customer.

    When a cassette is properly inserted into the scanner, the scanner will lock the cassette in place. Once locked into place the cassette door can be opened to allow the scanner to feed the screen into the unit.

    The operation of the scanning of the LLI cassette and screen will be done exactly as the predicate. Since the size of a long length imaging screen and cassette is large, the operation consists of 2 scans – scanning one half of the image, then turning the cassette around and scanning the second half of the image.
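    The two-scan workflow described above implies the console must join the half-images, with the second half rotated because the cassette is turned around between scans. A toy sketch of that joining step (purely illustrative; the actual registration and blending are not described in the submission):

```python
import numpy as np

def stitch_lli_halves(first_half, second_half):
    """Join two half-scans of a long-length screen into one image. The
    cassette is turned around between scans, so the second half is rotated
    180 degrees before being appended below the first."""
    return np.vstack([first_half, np.rot90(second_half, 2)])

top = np.arange(12, dtype=float).reshape(3, 4)
# The lower anatomy as captured after turning the cassette (rotated 180 degrees)
bottom_as_scanned = np.rot90(np.arange(12, 24, dtype=float).reshape(3, 4), 2)

full = stitch_lli_halves(top, bottom_as_scanned)  # 6x4 composite image
```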

    AI/ML Overview

    The document describes the regulatory submission for the Vita Flex CR System with LLI, a digital radiography system. The key argument for its clearance is its substantial equivalence to a previously cleared predicate device. Therefore, the "acceptance criteria" and "study that proves the device meets the acceptance criteria" are framed within the context of demonstrating this substantial equivalence through non-clinical testing, rather than a clinical trial with human subjects.

    Here's the breakdown of the information requested:

    1. A table of acceptance criteria and the reported device performance

    The acceptance criteria are implicitly defined by the demonstration of equivalent or improved performance compared to the predicate device across various features and operational parameters. The reported device performance is presented as a comparison between the modified device (Vita Flex CR System with LLI) and the predicate device (Point of Care including LLI).

    Feature / Acceptance Criteria Category | Predicate Device (Performance Baseline) | Vita Flex CR System with LLI (Reported Performance) | Met/Exceeds Criteria (Demonstrates Substantial Equivalence or Improvement)
    Intended Use / Indications for Use | "digital radiography using a phosphor storage screen for standard radiographic diagnostic images. The LLI is indicated for Long Length Imaging examinations of long areas of anatomy such as the leg and spine." | Identical | Met - Unchanged
    Safety Standards | IEC60601-1, IEC60601-1-2, IEC 60825-1 (Class 1 Laser) | IEC60601-1, IEC60601-1-2, IEC 60825-1 (Class 1 Laser) | Met - Conformance verified by an OSHA approved test lab
    Working Environment | Ambient: +10 to +40°C, RH: 30-70% | Ambient: +5 to +45°C, RH: 25-81%, Atmospheric pressure: 700-1060 hPa | Exceeds/Broader - Improved operational range
    Physical Size | 658mm x 735mm x 358mm, 45 kg weight | 668mm x 675mm x 385mm, 30 kg weight | Different but within acceptable range for function; lighter weight (Improvement)
    Power Input | Multiple profiles (90-250VAC, 50/60Hz) | Unified profile (100-240VAC, 50/60Hz, 1.5A) | Improvement - Streamlined power input
    Power Module | Internal AC/DC converter | External AC/DC converter | Different - No impact on safety or effectiveness
    Cassette Loading | Manual loading | Manual loading | Met - Unchanged
    Screen Access | Autofeed by Driving Roller in Screen Transportation Unit | Autofeed by Driving Roller in Screen Transportation Unit | Met - Unchanged
    Imaging Module | Laser Platen Scanning (Vertical & Horizontal Direction) | Laser Platen Scanning (Vertical & Horizontal Direction) | Met - Unchanged fundamental technology
    Laser Beam Wavelength | Red Light: 655 ± 10 nm | Red Light: 660 ± 7 nm | Met - "Negligible difference," "no impact to safety or effectiveness"
    Laser Output Power (mW) | 22~25 | 30 ± 2 | Met - "Slight increase," "no impact to safety or effectiveness"
    Laser Level | Class 3B | Class 3B | Met - Unchanged
    Screen Erase Module | Achromatic Light Eraser, Fluorescent Lamps | Monochromatic Light Eraser, Red LED Light Source | Improvement - "More stable over longer period," "no impact to safety or effectiveness"
    Console Connector | USB 2.0 | USB 2.0 | Met - Unchanged
    Software Development Kit | Ultra Lite SDK | Ultra Lite SDK | Met - Unchanged
    Long Length Imaging Software | CR Long-Length Imaging System (K021829) | DR Long Length Imaging Software (K130567) (FDA cleared, K100094, for Carestream Image Suite Software) | Met - Uses newer, also cleared, software; deemed "no impact to safety or effectiveness"
    DICOM | 3.0 | 3.0 | Met - Unchanged
    Image Pixel Depth (Bit) | 12 | 12 | Met - Unchanged
    Phosphor Screen & Cassette Spec. | 14x17", 10x12", 8x10", 24x30cm, 14x14", 14x33" (LLI), 15x30cm (Dental); some sizes not supported (10x10", 9.5x9.5") | Same supported sizes, plus 10x10" (Dental Vet) newly supported; 9.5x9.5" still not supported | Exceeds/Improvement - Broader compatibility with some cassette sizes
    Throughput, Tolerance ±5% (PPH) | Example values (e.g., 14x17" @ 21; 14x33" @ 2.5) | Example values (e.g., 14x17" @ 30 and higher; 14x33" @ 2.5) | Exceeds/Improvement - Higher PPH for some configurations
    Max Spatial Resolution (LP/mm) | Example values (e.g., 8x10" @ 4.2; 10x12" @ 3.5) | Example values (e.g., 8x10" @ 4.2; 10x12" @ 4.2) | Exceeds/Improvement - Higher resolution for some configurations
    Min Pixel Pitch (µm) | Example values (e.g., 14x33" @ 173; 8x10" @ 100) | Example values (e.g., 14x33" @ 160; 8x10" @ 86) | Exceeds/Improvement - Smaller pixel pitch for some configurations

    2. Sample size used for the test set and the data provenance

    The document explicitly states: "Given the differences from the predicate device, clinical testing is not necessary for the subject device. Bench testing alone is sufficient to demonstrate substantial equivalence."

    Therefore, there was no "test set" in the sense of a dataset of patient images. The evaluation was based on bench testing of the device's hardware and software components. The "sample size" would refer to the number of devices tested, or the number of tests performed on a device, not a patient image sample size. No specific numbers are provided for the quantity of bench tests or units tested, beyond the general statement that "Bench testing was performed."

    Data Provenance: Not applicable as no clinical or image data was used for testing.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    Not applicable. As no clinical testing or image-based test set was used, there was no need for expert radiologists to establish ground truth.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    Not applicable. No image-based test set where adjudication would be relevant.

    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    No. An MRMC study was not performed. The device is a digital radiography system, not an AI-powered diagnostic aid meant to assist human readers. The submission focuses on the safety and performance of the imaging equipment itself in comparison to its predicate.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    No. This describes the performance of the imaging system and its included components, not a standalone algorithm.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    Not applicable. The "ground truth" for this submission was the established performance and safety characteristics of the predicate device and relevant industry standards (IEC, etc.). The modified device was evaluated against these benchmarks using non-clinical (bench) testing.

    8. The sample size for the training set

    Not applicable. This device is a hardware system with integrated software, not a machine learning model that requires a "training set."

    9. How the ground truth for the training set was established

    Not applicable. No training set was used.

