Search Results

Found 6 results

510(k) Data Aggregation

    K Number: K183245
    Date Cleared: 2019-02-08 (79 days)
    Regulation Number: 892.1680
    Device Name: Carestream DRX-1 System with DRX Plus 2530 Detectors

    Intended Use

    The device is intended to capture for display radiographic images of human anatomy including both pediatric and adult patients. The device is intended for use in general projection radiographic applications wherever conventional screen-film systems or CR systems may be used. Excluded from the indications for use are mammography, fluoroscopy, and angiography applications.

    Device Description

    The Carestream DRX-1 System is a diagnostic imaging system utilizing digital radiography (DR) technology that is used with diagnostic x-ray systems. The system consists of the Carestream DRX-1 System Console (operator console), flat panel digital imager (detector), and optional tether interface box. The system can be configured to register and use either of the two new DRX Plus 2530 and DRX Plus 2530C Detectors. Images captured with a flat panel digital detector can be communicated to the operator console via tethered or wireless connection.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document mentions that predefined acceptance criteria were met for a range of aspects. While specific numeric targets for all criteria are not explicitly stated, the summary indicates successful performance against these criteria.

    Non-Clinical (Bench) Testing
    • Weight, pixel size, resolution, pixel pitch, total pixel area, usable pixel area, MTF (at various spatial resolutions), ghosting, boot-up time, operating temperature, exposure latitude, signal uniformity, and dark noise (ADC): Met predefined acceptance criteria; demonstrated safety, effectiveness, and performance as good as or better than the predicate.
    • DQE (at various spatial resolutions) and sensitivity: Met predefined acceptance criteria; demonstrated to deliver quality images equivalent to the predicate.
    • Image quality: Demonstrated to deliver quality images equivalent to the predicate.
    • Intended use, workflow-related performance, shipping performance, and general functionality and reliability (hardware and software): Conformed to specifications.

    Clinical Study (Reader Study)
    • Diagnostic image quality (RadLex rating): The mean RadLex rating for both subject devices and the predicate device was "Diagnostic (3)" with very little variability.
    • Equivalence to predicate device: Statistical tests confirmed equivalence between the mean ratings of the subject devices and the predicate, and equivalence between beam detect modes ("On" and "Off"); a sketch of such an equivalence test follows this table.
    • Percentage of Diagnostic/Exemplary images (subject devices): 98% of DRX Plus 2530 responses and 96% of DRX Plus 2530C responses were Diagnostic (3) or Exemplary (4).
    • Comparative performance to predicate: 71% of DRX Plus 2530 responses and 68% of DRX Plus 2530C responses were equivalent to or favored the predicate.

    Regulatory Compliance & Safety
    • Conformance to specifications: Conformed to its specifications.
    • Safety and effectiveness: Demonstrated to be as safe and effective as the predicate.
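
    The submission does not name the statistical procedure behind these equivalence claims. Purely as an illustration, equivalence of mean ratings is often assessed with a two one-sided tests (TOST) procedure; the Python sketch below uses made-up paired RadLex ratings and an assumed equivalence margin of 0.5 points, neither of which comes from the document.

        import numpy as np
        from scipy import stats

        def tost_paired(diffs, margin):
            """Two one-sided tests (TOST) for equivalence of paired mean ratings.
            Equivalence is declared when both one-sided p-values are small, i.e.
            the mean paired difference lies within +/- margin."""
            n = len(diffs)
            d = np.mean(diffs)
            se = np.std(diffs, ddof=1) / np.sqrt(n)
            p_lower = stats.t.sf((d + margin) / se, n - 1)   # H0: mean diff <= -margin
            p_upper = stats.t.cdf((d - margin) / se, n - 1)  # H0: mean diff >= +margin
            return d, max(p_lower, p_upper)

        # Hypothetical per-image ratings on the 4-point RadLex scale
        # (1 = non-diagnostic ... 4 = exemplary); 162 images, paired by image.
        rng = np.random.default_rng(0)
        subject = rng.choice([2, 3, 4], size=162, p=[0.02, 0.68, 0.30])
        predicate = rng.choice([2, 3, 4], size=162, p=[0.03, 0.70, 0.27])
        mean_diff, p = tost_paired(subject - predicate, margin=0.5)
        print(f"mean rating difference = {mean_diff:+.3f}, TOST p = {p:.4f} "
              f"(equivalent at the 0.05 level if p < 0.05)")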

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: 162 acquired images (cadaver and pediatric phantom).
    • Data Provenance: The images were acquired at the University of Rochester Medical Center in Rochester, NY. The study used adult cadavers (2) and pediatric phantoms. This indicates a prospective data acquisition specifically for the study.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Number of Experts: Three (3)
    • Qualifications: Board certified radiologists. (No specific years of experience are mentioned).

    4. Adjudication Method for the Test Set

    The document states that the images were "evaluated by three (3) board certified radiologists using a graduated 4 point scale based on diagnostic image quality." However, it does not explicitly describe an adjudication method (such as 2+1 or 3+1 consensus). It appears that individual ratings were collected and the mean RadLex rating was then calculated, implying that each individual rating contributed rather than a formal consensus being reached for each image.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size

    • Was an MRMC study done? Yes, a reader study was performed comparing the investigational devices (DRX Plus 2530 and DRX Plus 2530C Detectors) to the predicate device (DRX Plus 3543 Detector) using three board-certified radiologists.
    • Effect size of human reader improvement with AI vs. without AI assistance: Not applicable, as the study described is for a digital radiography detector system, not an AI-assisted diagnostic tool. The study focuses on comparing the image quality of different hardware detectors.

    6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done

    This is not applicable as the device is a hardware detector system, not a standalone algorithm. The study evaluated the image quality produced by the hardware, which then humans interpret.

    7. The Type of Ground Truth Used

    The ground truth for the clinical study was established through expert consensus (implicit in the rating by multiple radiologists) on the "diagnostic image quality" using a RadLex rating scale. It's important to note this is not "pathology" or "outcomes data" but rather a subjective assessment of image quality by qualified experts.

    8. The Sample Size for the Training Set

    The document does not provide information regarding a training set size. This is expected as the submission is for a hardware device (detector), and the testing described focuses on its performance characteristics and image quality, not the training of an AI algorithm.

    9. How the Ground Truth for the Training Set Was Established

    Since no training set is mentioned (as this is a hardware device submission), this information is not applicable.


    K Number: K183474
    Date Cleared: 2019-01-16 (30 days)
    Regulation Number: 892.1680
    Device Name: Carestream DRX-1 System with DRX Core Detectors

    Intended Use

    The device is intended to capture for display radiographic images of human anatomy including both pediatric and adult patients. The device is intended for use in general projection radiographic applications wherever conventional screen-film systems or CR systems may be used. Excluded from the indications for use are mammography, fluoroscopy, and angiography applications.

    Device Description

    Carestream Health, Inc. is submitting this Special 510(k) premarket notification for a modification to the cleared Carestream DRX-1 System with DRX Plus 3543 Detectors (K150766). The product will be marketed as the Carestream DRX-1 System with DRX Core Detectors.

    Consistent with the original system, the DRX Core 3543 Detectors are flat panel digital imagers used with a stationary digital radiography (DR) x-ray system.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided text:

    Important Note: The provided text is a 510(k) summary for a modified device, where the primary change is the detector. The study focuses on demonstrating equivalence to a predicate device, rather than establishing absolute performance targets for a new invention. Therefore, the "acceptance criteria" here largely revolve around demonstrating equivalent or superior image quality compared to the predicate.


    1. Table of Acceptance Criteria and the Reported Device Performance

    • Bench testing conformance: Predefined acceptance criteria were met, demonstrating that the device conforms to its specifications, is as safe and as effective as the predicate device, and performs as well as or better than it in terms of workflow, performance, function, shipping, verification, validation, and reliability.
    • Image quality equivalence: The average preference for all image quality attributes (detail contrast, sharpness, and noise) demonstrates that the image quality of the investigational device (acquired with 30% less exposure) is the same as that of the predicate device.
    • Beam detect mode equivalence: Two-sample equivalence tests confirm that the beam detect mode has no effect on preference (indicating equivalent performance whether beam detect is used or not).
    • Artifact absence: No unique artifacts associated with either detector were evident in the resultant images during image comparison. (Common artifacts attributed to external factors such as dust were deemed inconsequential, as they appeared on both.)
    • Compliance with standards: The device was tested and found compliant with IEC 60601-1, IEC 60601-1-2, and IEC 62321. Adherence to FDA guidance documents for solid state imaging and pediatric information was also stated.

    2. Sample Size Used for the Test Set and the Data Provenance

    • Sample Size for Test Set: Thirty image pairs were obtained for the phantom imaging study (investigational vs. predicate). This involved adult and pediatric anthropomorphic phantoms.
    • Data Provenance: The data were generated through an experimental phantom imaging study in a controlled environment, comparing the investigational device with the predicate device. The text does not specify the country of origin, but given that this is an FDA submission from a US company, the data were likely US-based or internally generated by the manufacturer. It was a prospective study designed specifically for this comparison.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts

    • Number of Experts: Three expert observers.
    • Qualifications of Experts: "former radiographic technologists with extensive experience in resolving image quality related service issues."

    4. Adjudication Method for the Test Set

    The observers evaluated images in a blinded pairwise study using a 5-point preference scale for image quality attributes. The text does not specify a formal adjudication method like "2+1" or "3+1" for discordant readings. The "average preference" suggests that individual expert preferences were aggregated rather than going through a consensus-building process for each image pair.
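
    Purely as an illustration of how such per-pair preferences might be aggregated (the -2 to +2 coding, the observer-by-attribute layout, and the test against the neutral midpoint are assumptions, not details from the submission), a minimal Python sketch:

        import numpy as np
        from scipy import stats

        # Hypothetical blinded pairwise-preference data: 30 image pairs x 3 observers,
        # coded -2..+2, where negative favors the predicate, 0 means "no preference",
        # and positive favors the investigational detector.
        rng = np.random.default_rng(1)
        attributes = ["detail contrast", "sharpness", "noise"]
        scores = {a: rng.integers(-1, 2, size=(30, 3)) for a in attributes}

        for attr, s in scores.items():
            per_pair = s.mean(axis=1)                     # average the three observers per pair
            t_stat, p = stats.ttest_1samp(per_pair, 0.0)  # compare against the neutral midpoint
            print(f"{attr:15s} mean preference = {per_pair.mean():+.2f}  p vs. neutral = {p:.3f}")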


    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    • No MRMC study was done in the context of AI assistance. This submission is for a hardware modification (detector change) within an existing X-ray system, not for an AI device. The study performed was a phantom imaging study with human observers assessing image quality attributes, primarily to demonstrate equivalence between two detector types.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done

    • N/A. This submission is not for an AI algorithm. The device is an X-ray system with a digital detector. The "performance" being evaluated relates to the image quality produced by the hardware, not an algorithm's diagnostic capabilities.

    7. The Type of Ground Truth Used

    • "Ground truth" was implicitly established by comparing the investigational device's image output against the predicate device's image output, as evaluated by expert observers using a preference scale. This is a comparative "ground truth" based on expert perception of image quality. Since it's a phantom study, there isn't disease pathology involved in the same way as a clinical study. The phantoms themselves represent anatomical structures.

    8. The Sample Size for the Training Set

    • N/A. The provided text does not describe a "training set" in the context of machine learning or AI. This is a hardware device 510(k) submission, not an AI/ML device. The "training" for the device would be its engineering design and manufacturing processes, not data training.

    9. How the Ground Truth for the Training Set Was Established

    • N/A. As above, there is no mention of a training set or its associated ground truth in this submission, as it's not an AI/ML device.

    K Number: K153142
    Date Cleared: 2015-11-25 (26 days)
    Regulation Number: 892.1680
    Device Name: Carestream DRX-1 System with DRX Plus 4343 Detectors

    Intended Use

    The device is intended to capture for display radiographic images of human anatomy including both pediatric and adult patients. The device is intended for use in general projection radiographic applications wherever conventional screen-film systems or CR systems may be used. Excluded from the indications for use are mammography, fluoroscopy, and angiography applications.

    Device Description

    The Carestream DRX-1 System is a diagnostic imaging system utilizing digital radiography (DR) technology that is used with diagnostic x-ray systems. The system consists of the Carestream DRX-1 System Console (operator console), flat panel digital imager (detector), and optional tether interface box. The system can be configured to register and use either of the two new DRX Plus 4343 and DRX Plus 4343C Detectors. Images captured with a flat panel digital detector can be communicated to the operator console via tethered or wireless connection.

    AI/ML Overview

    The provided text describes the Carestream DRX-1 System with DRX Plus 4343 Detectors, which is a diagnostic imaging system utilizing digital radiography (DR) technology. The submission is a 510(k) premarket notification, indicating the device is substantially equivalent to a predicate device and therefore does not require a full PMA application.

    Here's an analysis of the acceptance criteria and study information provided:

    Acceptance Criteria and Reported Device Performance

    1. Table of Acceptance Criteria and Reported Device Performance:

    The document states that "Acceptance criteria were identified for weight, pixel size, resolution, pixel pitch, total pixel area, usable pixel area, MTF (at various spatial resolutions), DQE (at various spatial resolutions), sensitivity, ghosting, boot-up time, operating temperature, exposure latitude, signal uniformity, and dark noise (ADC)."

    However, the document does not provide a specific table detailing the acceptance values for these criteria and the exact reported performance values. It only states that "Predefined acceptance criteria were met and demonstrated that the device is as safe, as effective, and performs as well as or better than the predicate device."

    Without the specific numerical values for acceptance criteria and reported performance, a precise table cannot be generated. The text indicates a qualitative assessment of meeting acceptance criteria.
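
    For context on what these bench metrics measure (standard detector-physics definitions, not figures from this submission): MTF is the normalized magnitude of the Fourier transform of the detector's line-spread function, and DQE compares the output signal-to-noise ratio with the ideal quantum-limited input, commonly written in terms of the normalized noise power spectrum (NNPS) and the incident photon fluence q̄:

        \[
          \mathrm{MTF}(f) = \frac{\left|\mathcal{F}\{\mathrm{LSF}(x)\}(f)\right|}{\left|\mathcal{F}\{\mathrm{LSF}(x)\}(0)\right|},
          \qquad
          \mathrm{DQE}(f) = \frac{\mathrm{SNR}_{\mathrm{out}}^{2}(f)}{\mathrm{SNR}_{\mathrm{in}}^{2}(f)}
                          = \frac{\mathrm{MTF}^{2}(f)}{\bar{q}\,\mathrm{NNPS}(f)}.
        \]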

    Study Information Summary

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective):

    • Sample Size: Not explicitly mentioned in terms of the number of images or cases. The testing appears to be non-clinical (bench testing) rather than a study involving a patient test set.
    • Data Provenance: The study was non-clinical bench testing. There is no mention of patient data, country of origin, or whether it was retrospective or prospective.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Not applicable. This was non-clinical bench testing, not a study evaluating diagnostic performance against expert-established ground truth.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

    • Not applicable. This was non-clinical bench testing.

    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance:

    • No. The document explicitly states: "Clinical studies were not submitted because the non-clinical data was sufficient to demonstrate substantial equivalence to the predicate device." The system is a DR imaging system, not an AI-assisted diagnostic tool for interpretation.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:

    • Yes, implicitly. The testing described is "non-clinical (bench) testing" focused on the physical and image quality characteristics of the device itself (detector and system), not its performance in a diagnostic workflow with human readers. The performance evaluation is of the device's technical specifications.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc):

    • For the non-clinical bench testing, the "ground truth" was established by "desired performance with respect to image quality, intended use, workflow related performance, shipping performance, and general functionality and reliability, including both hardware and software requirements." These are based on industry standards and specified device requirements, rather than clinical ground truth (like pathology or expert consensus on patient cases).

    8. The sample size for the training set:

    • Not applicable. This document describes the testing of a medical imaging device (DR system), not an AI algorithm that requires a training set.

    9. How the ground truth for the training set was established:

    • Not applicable. (See answer to #8).

    K Number: K150766
    Date Cleared: 2015-06-24 (92 days)
    Regulation Number: 892.1680
    Device Name: Carestream DRX-1 System

    Intended Use

    The device is intended to capture for display radiographic images of human anatomy including both pediatric and adult patients. The device is intended for use in general projection radiographic applications wherever conventional screen-film systems or CR systems may be used. Excluded from the indications for use are mammography, fluoroscopy, and angiography applications.

    Device Description

    The Carestream DRX-1 System is a diagnostic imaging system utilizing digital radiography (DR) technology that is used with diagnostic x-ray systems. The system consists of the Carestream DRX-1 System Console (operator console), flat panel digital imager (detector), and optional tether interface box. The system can operate with the Carestream DRX-1 System Detector (GOS), the DRX-2530C Detector (CsI), the DRX Plus 3543 (GOS) Detector, or the DRX Plus 3543C (CsI) Detector, and can be configured to register and use any of the detectors. Images captured with a flat panel digital detector can be communicated to the operator console via tethered or wireless connection.

    AI/ML Overview

    The provided text describes the Carestream DRX-1 System with DRX Plus Detectors, which is a diagnostic imaging system utilizing digital radiography (DR) technology. The document indicates that this system is substantially equivalent to the predicate device, the Carestream DRX-1 System (with DRX 2530C Detector), based on non-clinical testing and a clinical image concurrence study.

    Here's the breakdown of the information you requested based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document describes acceptance criteria but does not provide a direct table of each criterion alongside its specific performance value. Instead, it states that "Predefined acceptance criteria were met and demonstrated that the device is as safe, as effective, and performs as well as or better than the predicate device."

    It lists the types of criteria considered for non-clinical testing:

    • Image quality
    • Intended use
    • Workflow related performance
    • Shipping performance
    • General functionality and reliability (including both hardware and software requirements)

    It also details specific parameters for which acceptance criteria were identified:

    • Image Quality (MTF at various spatial resolutions, DQE at various spatial resolutions, sensitivity, ghosting, exposure latitude, signal uniformity, dark noise (ADC), resolution, pixel pitch, total pixel area, usable pixel area): Met predefined criteria; statistically equivalent to or better than the predicate device. Image quality parameters such as DQE, sensitivity, and MTF of the DRX Plus detectors demonstrate this performance.
    • Physical/Technical (weight, pixel size, boot-up time, operating temperature): Met predefined criteria; the new detectors are lighter and thinner (1.47 cm vs. 1.55 cm).
    • Environmental (liquid resistance): Improved from IPX1 to IPX7 liquid resistance.
    • General (functionality, reliability of hardware and software, shipping performance, workflow-related performance): Met predefined criteria; works as intended.

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: The document mentions that a "concurrence study of clinical image pairs" was performed but does not specify the sample size (number of images or cases) used for this study.
    • Data Provenance: The document does not specify the country of origin for the data. The study involved "clinical image pairs," implying the use of patient data; the use of existing image pairs suggests a retrospective design.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • The document implies a "Reader Study" was conducted but does not specify the number of experts or the "readers" involved.
    • Qualifications of Experts: The document does not specify the qualifications of the experts/readers.

    4. Adjudication Method for the Test Set

    • The document does not specify the adjudication method used (e.g., 2+1, 3+1, none) for the test set in the reader study.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done, and Effect Size

    • MRMC Study: A "Reader Study" was performed, which is typically a form of MRMC study, as it involved human readers assessing images. The text states: "Results of the Reader Study indicated that the diagnostic capability of the Carestream DRX-1 System with DRX Plus Detectors is statistically equivalent to or better than that of the predicate device."
    • Effect Size: The document states the diagnostic capability is "statistically equivalent to or better than" the predicate device, but it does not provide a specific effect size or quantifiable improvement of human readers with AI (as this is a detector, not an AI system) or relative to the predicate device. It evaluates the detector's impact on diagnostic capability.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was Done

    • This device is an imaging detector system, not an AI algorithm. Therefore, the concept of "standalone (algorithm only)" performance without human-in-the-loop is not directly applicable in the same way it would be for an AI diagnostic tool. However, the non-clinical (bench) testing, which evaluated various technical parameters like MTF, DQE, sensitivity, etc., can be considered the "standalone" or intrinsic performance evaluation of the device itself, independent of human interpretation.

    7. The Type of Ground Truth Used (Expert Consensus, Pathology, Outcomes Data, etc.)

    • For the clinical image concurrence study, the ground truth source is not explicitly stated. It refers to "clinical image pairs" and the "diagnostic capability," suggesting that the ground truth would inherently be based on established clinical diagnoses or reference standards against which the image interpretations were compared.

    8. The Sample Size for the Training Set

    • This document describes a new detector system and a comparative study against a predicate device. It is not an AI model that requires a "training set" in the conventional sense. The "training" here refers to the engineering, design, and calibration processes during the detector's development, for which a sample size is not specified or relevant in the context of typical AI model training.

    9. How the Ground Truth for the Training Set Was Established

    • As this is a hardware device (detector) and not an AI model requiring a training phase with labeled data, the concept of "ground truth for the training set" is not applicable. The "ground truth" for the device's design and manufacturing would be its engineering specifications and the physical laws of X-ray detection.

    K Number: K130464
    Date Cleared: 2013-06-07 (105 days)
    Regulation Number: 892.1680
    Device Name: CARESTREAM DRX-1 SYSTEM WITH DRX 2530C DETECTOR

    Intended Use

    The device is intended to capture for display radiographic images of human anatomy including both pediatric and adult patients. The device is intended for use in general projection radiographic applications wherever conventional screen-film systems or CR systems may be used. Excluded from the indications for use are mammography, fluoroscopy, and angiography applications.

    Device Description

    The Carestream DRX-1 System is a diagnostic imaging system utilizing digital radiography (DR) technology that is used with diagnostic x-ray systems. The system consists of the Carestream DRX-1 System Console (operator console), flat panel digital imager (detector), and optional tether interface box. The system can operate with either the Carestream DRX-1 System Detector (GOS) or the DRX-2530C Detector (CsI) and can be configured to register and use both detectors. Images captured with the flat panel digital detector can be communicated to the operator console via tethered or wireless connection.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study information based on the provided text for K130464:

    Please note that the provided 510(k) summary is for a device upgrade (new detector for an existing system) and focuses on demonstrating substantial equivalence to the predicate device. Therefore, a full-blown AI performance study with detailed metrics like sensitivity, specificity, or AUC for a diagnostic AI algorithm is not present, as the device itself is an imaging acquisition system, not an AI diagnostic tool. The "Reader Study" mentioned is a comparative effectiveness study to show that the new detector produces diagnostically equivalent images to the predicate.


    Acceptance Criteria and Reported Device Performance

    • Non-clinical (bench) testing: "The performance characteristics and operation / usability of the Carestream DRX-1 System with DRX 2530C Detector were evaluated in non-clinical (bench) testing. These studies have demonstrated the intended workflow, related performance, overall function, shipping performance, verification and validation of requirements for intended use, and reliability of the system including both software and hardware requirements. Non-clinical test results have demonstrated that the device conforms to its specifications. Predefined acceptance criteria were met and demonstrated that the device is as safe, as effective, and performs as well as or better than the predicate device."
    • Clinical concurrence (diagnostic capability): "A concurrence study of clinical image pairs was performed... to demonstrate the diagnostic capability of the Carestream DRX-1 System with DRX 2530C Detector. Results of the Reader Study indicated that the diagnostic capability of the Carestream DRX-1 System with DRX 2530C Detector is statistically equivalent to or better than that of the predicate device. These results support a substantial equivalence determination."

    Study Details

    Detailed information regarding sample sizes, expert qualifications, and adjudication methods for the clinical study is very limited in this 510(k) summary. The document primarily states that a "concurrence study of clinical image pairs" and a "Reader Study" were conducted to demonstrate diagnostic equivalence.

    1. Sample Size Used for the Test Set and Data Provenance:

      • Sample Size: Not explicitly stated in the provided document. The text mentions "clinical image pairs."
      • Data Provenance: Not explicitly stated (e.g., country of origin). The study implicitly uses "clinical image pairs," suggesting prospective or retrospective collection of patient images. The nature of a "concurrence study" implies that images were acquired using both the new device and the predicate device (or comparable standard) for direct comparison.
    2. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:

      • Number of Experts: Not explicitly stated. The term "Reader Study" implies multiple readers.
      • Qualifications of Experts: Not explicitly stated (e.g., specific experience or subspecialty).
    3. Adjudication Method for the Test Set:

      • Not explicitly stated. A "concurrence study" and "Reader Study" typically involve readers independently evaluating images, and then their findings are compared to each other, to predefined criteria, or to a ground truth. Common adjudication methods (like 2+1 or 3+1) are not detailed in this summary.
    4. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

      • Was it done? Yes, a "Reader Study" was performed. While not explicitly termed "MRMC," the description "Results of the Reader Study indicated that the diagnostic capability... is statistically equivalent to or better than that of the predicate device" strongly suggests a comparative study involving multiple readers assessing images. The purpose was to show diagnostic capability of the new detector in comparison to the predicate.
      • Effect size of human readers improvement with AI vs without AI assistance: This information is not applicable as this study is not evaluating an AI diagnostic tool or AI assistance. It is evaluating the diagnostic equivalence of an imaging acquisition device (a new X-ray detector). The "improvement" is in the context of the new detector's image quality being diagnostically equivalent or better than the predicate detector, not AI assistance.
    5. Standalone Performance (algorithm only without human-in-the-loop performance):

      • Was it done? This is not applicable in the context of this 510(k). The device is an X-ray detector, which is an image acquisition component, not a standalone algorithm. Its performance is intrinsically linked to human interpretation of the images it produces.
    6. Type of Ground Truth Used:

      • The document implies that the ground truth for the "concurrence study" would have been established by expert interpretation of the images (either from the new device or the predicate) to determine if the new device's images offered equivalent diagnostic information. Pathology or outcomes data are not mentioned. It's likely based on expert consensus or comparison against established diagnostic images from the predicate.
    7. Sample Size for the Training Set:

      • Not applicable. This submission is for an imaging acquisition device (X-ray detector), not a machine learning algorithm. Therefore, there is no "training set" in the AI/ML sense. The device's performance is driven by its physical and electronic characteristics, and its "training" would be its design, engineering, and manufacturing process.
    8. How the Ground Truth for the Training Set Was Established:

      • Not applicable, as there is no training set for an AI/ML algorithm.

    K Number: K090318
    Date Cleared: 2009-04-06 (56 days)
    Regulation Number: 892.1680
    Device Name: CARESTREAM DRX-1 SYSTEM

    Intended Use

    The Carestream DRX-1 System is intended to capture for display radiographic images of human anatomy. It is intended for use in general projection radiographic applications wherever conventional screen-film systems or CR systems may be used. Excluded from the indications for use are mammography, fluoroscopy, tomography and angiography applications.

    Device Description

    The Carestream DRX-1 System is a digital imaging system to be used with diagnostic x-ray systems. It includes a Carestream DRX-1 System Detector (flat panel digital detector), Carestream DRX-1 System Console (operator console) and Carestream DRX-1 System Interface Box (generator interface or Interface Box). Images captured with the flat panel digital detector can be communicated to the operator console via tethered connection or wireless.

    AI/ML Overview

    The provided document is limited in the detail it offers regarding acceptance criteria and the comprehensive study design. However, I can extract the information that is present.

    Here's a breakdown based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document states: "Performance testing was conducted to verify the design input requirements and to validate the Carestream DRX-1 System conformed to the defined user needs and intended uses. Predefined acceptance criteria was met and demonstrated that the Carestream DRX-1 System is as safe, as effective, and performs as well as or better than the predicate device." And for clinical testing: "Results of clinical testing demonstrated there were no significant differences observed between the Kodak DirectView CR 850 System and Carestream DRX-1 System with respect to clinical acceptance or the ability to diagnose."

    However, the specific "predefined acceptance criteria" (e.g., specific metrics like SNR, spatial resolution improvements, or diagnostic accuracy thresholds) and their corresponding numerical results for the Carestream DRX-1 System are not detailed in this document. The reported performance is a general statement of meeting criteria and no significant difference from the predicate.

    • Safety: Met (as safe as predicate)
    • Effectiveness: Met (as effective as predicate)
    • Performance (bench testing): Met (performs as well as or better than predicate)
    • Clinical acceptance: No significant difference from predicate
    • Ability to diagnose: No significant difference from predicate

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: Not specified in the provided text.
    • Data Provenance: The document does not specify the country of origin of the data or whether it was retrospective or prospective. It only mentions "Clinical Testing" was performed.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    • Number of Experts: Not specified. The term "clinical testing" implies expert evaluation, but the number or specific roles (e.g., radiologists) are not detailed.
    • Qualifications of Experts: Not specified.

    4. Adjudication Method for the Test Set

    • Adjudication Method: Not specified.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was an MRMC study done? Implied by the "no significant differences observed... with respect to clinical acceptance or the ability to diagnose" statement when comparing to the predicate, which suggests human readers evaluated both systems. However, a formal MRMC study design (e.g., number of readers, specific protocols) and its details are not explicitly described.
    • Effect size of human readers with AI vs. without AI assistance: Not applicable/Not mentioned. This device is a digital radiography system, not an AI-assisted diagnostic tool. The comparison is between two digital imaging systems.

    6. Standalone (Algorithm Only) Performance Study

    • Was a standalone study done? Not applicable. The Carestream DRX-1 System is described as a complete digital imaging system (detector, console, interface). It's not an algorithm intended for standalone performance evaluation in the context of AI. The performance testing was for the integrated system.

    7. Type of Ground Truth Used

    • Type of Ground Truth: The document refers to "clinical acceptance" and "ability to diagnose," implying a clinical assessment by human experts (likely radiologists) as the ground truth for comparison. There is no mention of pathology or outcomes data.

    8. Sample Size for the Training Set

    • Sample Size for Training Set: Not applicable/Not specified. This device is a digital radiography system, not a machine learning algorithm that requires a separate training set. Performance was evaluated for its imaging capabilities.

    9. How the Ground Truth for the Training Set Was Established

    • How Ground Truth for Training Set Was Established: Not applicable. See point 8.