
510(k) Data Aggregation

    K Number
    K141837
    Device Name
    DRX-EVOLUTION
    Date Cleared
    2015-03-11

    (247 days)

    Product Code
    Regulation Number
    892.1680
    Reference & Predicate Devices
    Predicate For
    Intended Use

    The device is a permanently installed diagnostic x-ray system for general radiographic x-ray imaging including tomography. The tomography feature is not to be used for imaging pediatric patients.

    Device Description

    The DRX-Evolution is a diagnostic x-ray system utilizing digital radiography (DR) technology. The DRX-Evolution is designed for horizontal and upright projection exams. The system consists of a high voltage x-ray generator, overhead tube crane with x-ray tube assembly, radiographic table with detector tray, Bucky image receptor on an upright wall stand, and x-ray controls containing a power distribution unit and operator PC (user interface).

    AI/ML Overview

    This document describes the Carestream DRX-Evolution, a diagnostic x-ray system. The modifications to the device include firmware and mechanical changes to facilitate linear tomography exams, the addition of a new generator option, and software updates such as Bone Suppression and Pneumothorax Visualization.

    Here's an analysis of the acceptance criteria and the studies that prove the device meets them, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    • Overall Safety & Effectiveness
      • Acceptance Criteria: Device is as safe, as effective, and performs as well as or better than the predicate device.
      • Reported Performance: "Predefined acceptance criteria were met and demonstrated that the device is as safe, as effective, and performs as well as or better than the predicate device."
    • Workflow, Performance, Function, Verification/Validation
      • Acceptance Criteria: Intended workflow, related performance, overall function, verification and validation of requirements for intended use, shipping performance, and reliability of the DRX-Evolution system (both software and hardware) are demonstrated.
      • Reported Performance: "These studies demonstrated the intended workflow, related performance, overall function, verification and validation of requirements for intended use, shipping performance, and reliability of the DRX-Evolution system including both software and hardware requirements. Nonclinical test results have demonstrated that the device conforms to its specifications."
    • Linear Tomography Accuracy
      • Acceptance Criteria: Accurate movement of the x-ray tube head with respect to the image capture device.
      • Reported Performance: "Test results using the tool phantom were as expected, demonstrating accuracy of the tube head movement with respect to the capture device."
    • Linear Tomography Diagnostic Quality
      • Acceptance Criteria: Linear tomography images acquired are of acceptable diagnostic quality.
      • Reported Performance: "Linear tomography images were generated using four different anthropomorphic phantoms (chest, hand, knee and pelvis). The images were evaluated by a board-certified radiologist for diagnostic quality. Results of this evaluation demonstrated that the linear tomography images acquired using the DRX-Evolution system are of acceptable diagnostic quality."
    • DRX 2530C Detector Image Quality
      • Acceptance Criteria: Equivalent or superior image quality to the DRX-1 Detector (predicate device).
      • Reported Performance: "Results of these studies demonstrated equivalent or superior image quality to the DRX-1 Detector (predicate device)."
    • DRX-1C Detector Image Quality
      • Acceptance Criteria: Equivalent or superior image quality to the DRX-1 Detector (predicate device).
      • Reported Performance: "Results of these studies demonstrated equivalent or superior image quality to the DRX-1 Detector (predicate device)."
    • DR Long Length Imaging Software Diagnostic Capability
      • Acceptance Criteria: Produces LLI images with statistically equivalent or better diagnostic capability than the predicate software.
      • Reported Performance: "Results of the DR Long Length Imaging study demonstrated that the investigational software produced LLI images with statistically equivalent or better diagnostic capability to the predicate software."
    • Bone Suppression Software Effectiveness
      • Acceptance Criteria: Generates a companion image that, when presented to the physician along with the standard-of-care image, is rated substantially equivalent or improved as compared to that of the predicate product (standard-of-care image without the bone-suppressed companion image).
      • Reported Performance: "Results of the Bone Suppression clinical study demonstrated that the software generates a companion image that, when presented to the physician along with the standard-of-care image, is rated substantially equivalent or improved as compared to that of the predicate product (standard-of-care image without the bone-suppressed companion image)."
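    Claims such as "statistically equivalent or better" are typically supported by a paired non-inferiority analysis of per-case reader scores. The summary does not disclose the actual endpoint, scale, or margin, so the sketch below is illustrative only; the function name, scores, and margin are hypothetical.

```python
import math
import statistics

def noninferiority(invest, predicate, margin=0.5):
    """Paired non-inferiority check on per-case quality scores.

    Hypothetical analysis: tests H0 "investigational is worse than
    predicate by more than `margin`" with a one-sided z criterion.
    Returns (mean difference, test statistic, non-inferior?).
    """
    diffs = [a - b for a, b in zip(invest, predicate)]
    mean = statistics.fmean(diffs)
    se = statistics.stdev(diffs) / math.sqrt(len(diffs))
    stat = (mean + margin) / se
    z_crit = 1.645  # one-sided 5% critical value, normal approximation
    return mean, stat, stat > z_crit
```

    In a real submission the margin would be prespecified and clinically justified, and the analysis would account for reader and case variability rather than treating scores as independent.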

    2. Sample Size Used for the Test Set and Data Provenance

    • Linear Tomography Accuracy (Bench Testing):
      • Sample Size: Not explicitly stated, but it involved "a tool phantom" and "four different anthropomorphic phantoms (chest, hand, knee and pelvis)."
      • Data Provenance: Bench testing, likely conducted internally by Carestream Health, Inc. The country of origin is not specified, but the company is based in Rochester, New York, USA. The testing was performed for the purpose of this regulatory submission.
    • Detector Image Quality (Clinical Studies - DRX 2530C and DRX-1C):
      • Sample Size: Not explicitly stated in this document ("Results of these studies demonstrated..."). These studies refer to K130464 for DRX 2530C and K120062 for DRX-1C, where detailed sample sizes would be found.
      • Data Provenance: Clinical studies, in accordance with FDA guidance. Prospective studies are typical for such evaluations. The country of origin is not specified.
    • DR Long Length Imaging Software Diagnostic Capability (Clinical Study):
      • Sample Size: Not explicitly stated in this document ("Results of the DR Long Length Imaging study demonstrated..."). It refers to K130567 for more details.
      • Data Provenance: Clinical study, likely prospective. The country of origin is not specified.
    • Bone Suppression Software Effectiveness (Clinical Study):
      • Sample Size: Not explicitly stated in this document ("Results of the Bone Suppression clinical study demonstrated..."). It refers to K133442 for more details.
      • Data Provenance: Clinical study, likely prospective. The country of origin is not specified.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • Linear Tomography Diagnostic Quality (Bench Testing):

      • Number of Experts: A single expert ("evaluated by a board-certified radiologist").
      • Qualifications: "board-certified radiologist."
    • For other clinical studies (Detectors, LLI, Bone Suppression): The number and qualifications of experts are not detailed in this summary; they would be in the referenced 510(k) documents (K130464, K120062, K130567, K133442).

    4. Adjudication Method for the Test Set

    • Linear Tomography Diagnostic Quality: No formal adjudication method is described beyond a single board-certified radiologist's evaluation.
    • For other clinical studies (Detectors, LLI, Bone Suppression): Adjudication methods are not detailed in this summary; they would be in the referenced 510(k) documents.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs Without AI Assistance

    • No explicit MRMC comparative effectiveness study involving AI assistance for human readers is described in this document. The clinical studies mentioned for Bone Suppression and DR Long Length Imaging software compare the investigational software/images to a predicate or standard-of-care, but they don't explicitly state an "AI vs without AI assistance" MRMC study for improved human reader performance with an effect size.
      • The Bone Suppression study mentions a "companion image that, when presented to the physician along with the standard-of-care image, is rated substantially equivalent or improved." This implies a reader study where human performance (or perception of image utility) is assessed with and without the software-generated companion image, but it does not specify an MRMC design or quantify an "effect size" in terms of improved diagnostic accuracy for human readers.
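    For context, an MRMC effect size is usually reported as the average per-reader change in AUC between aided and unaided reading. A minimal sketch of that computation follows; it is purely illustrative, since no MRMC data are reported in this 510(k), and the function names and data layout are hypothetical.

```python
def auc(pos_scores, neg_scores):
    """Empirical AUC via the Mann-Whitney U statistic (ties count 0.5)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

def mean_reader_auc_gain(readers):
    """Average per-reader AUC gain of aided over unaided reading.

    `readers` maps a reader id to ((unaided_pos, unaided_neg),
    (aided_pos, aided_neg)) score lists -- a hypothetical structure.
    """
    gains = [auc(a_pos, a_neg) - auc(u_pos, u_neg)
             for (u_pos, u_neg), (a_pos, a_neg) in readers.values()]
    return sum(gains) / len(gains)
```

    Formal MRMC analyses (e.g., the Dorfman-Berbaum-Metz approach) additionally model reader and case variance to produce confidence intervals on this gain, which simple averaging does not.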

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    • Yes, standalone performance was evaluated for certain aspects:
      • Linear Tomography Accuracy: The initial bench testing using a "tool phantom" evaluated the mechanical accuracy of the device's linear tomography function in a standalone manner (without human diagnostic interpretation), confirming "accuracy of the tube head movement."
      • Image Quality Metrics: The statement that DRX-1C and DRX 2530C detectors "provide equal or superior image quality with respect to noise and spatial resolution at equivalent doses" suggests that standalone technical image quality metrics were assessed, and these would be algorithm-only or device-only measurements.
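    Standalone image-quality metrics of the kind referenced (noise at equivalent dose) are commonly derived from uniform-exposure regions of interest. A hypothetical sketch of two such measurements (the function names and ROI data are illustrative, not Carestream's test procedure):

```python
import statistics

def relative_noise(roi):
    """Relative noise (sigma / mean signal) of a uniform-exposure ROI."""
    return statistics.pstdev(roi) / statistics.fmean(roi)

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio between a target ROI and its background."""
    contrast = statistics.fmean(signal_roi) - statistics.fmean(background_roi)
    return abs(contrast) / statistics.pstdev(background_roi)
```

    Detector submissions typically go further, reporting MTF and DQE as functions of spatial frequency, which these scalar metrics only summarize.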

    7. The Type of Ground Truth Used

    • Linear Tomography Accuracy: The ground truth for mechanical accuracy was based on the "expected" results from a "tool phantom," implying known physical measurements or standards.
    • Linear Tomography Diagnostic Quality: The ground truth was based on expert consensus/opinion by a "board-certified radiologist" regarding "acceptable diagnostic quality" of anthropomorphic phantom images.
    • Detector Image Quality, DR Long Length Imaging Software, Bone Suppression Software: For these, the ground truth was generally based on comparison to a predicate device or standard-of-care, with evaluation typically performed by qualified experts (e.g., physicians for diagnostic capability). While not explicitly stated as "ground truth," the predicate/standard-of-care serves as the reference for equivalence or improvement claims.

    8. The Sample Size for the Training Set

    • The document does not explicitly state the sample size for any training sets.
    • The software features (Bone Suppression, Pneumothorax Visualization) likely involved machine learning/AI models that would require training data. However, this information is not provided in the 510(k) summary. References to K133442 (Bone Suppression) might contain this information.

    9. How the Ground Truth for the Training Set Was Established

    • The document does not provide details on how ground truth for any training sets was established. Since training set details are absent, the method for establishing their ground truth is also not mentioned.

    K Number
    K132824
    Date Cleared
    2014-02-06

    (150 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    Intended Use

    The CARESTREAM Vue PACS is an image management system whose intended use is to provide completely scalable local and wide area PACS solutions for hospital and related institutions/sites, which will archive, distribute, retrieve and display images and data from all hospital modalities and information systems.

    The system contains interactive tools in order to ease the process of analyzing and comparing three dimensional (3D) images. It is a single system that integrates review, dictation and reporting tools to create a productive work environment for the radiologists and physicians.

    The Vue Motion software program is used for patient management by clinicians in order to access and display patient data, medical reports, and medical images for diagnosis from different modalities, including CR, DR, CT, MR, NM and US.

    Vue Motion provides wireless and portable access to medical images for remote reading or referral purposes from web browsers including usage with validated mobile devices. This device is not intended to replace full workstations and should be used only when there is no access to a workstation. For primary interpretation and review of mammography images, only use display hardware that is specifically designed for and cleared by the FDA for mammography.

    Device Description

    CARESTREAM Vue PACS is an image management system whose intended use is to provide completely scalable local and wide area PACS solutions for hospital and related institutions/sites, which will archive, distribute, retrieve and display images and data from all hospital modalities and information systems.

    It is a software-only solution that contains interactive tools to ease the process of analyzing and comparing three-dimensional (3D) images. It is a single system that integrates review, dictation and reporting tools to create a productive work environment for radiologists and physicians.

    Vue PACS provides functionality to allow remote-site access to image and patient data, enabling diagnostic reading through industry-standard interfaces. It is designed using an open architecture that allows various proprietary and off-the-shelf software components to be integrated with off-the-shelf hardware components and configured to meet the user's specific needs in a single-site or multi-site environment.

    CARESTREAM Vue Motion is a Light Viewer designed to provide wireless and portable access to medical images for remote reading or referral purposes from web browsers, including enterprise distribution of radiology images and related data. The need to deliver real-time imaging results and imaging-related data to enterprise users demands that imaging solutions have a simple distribution mechanism built on simple and broadly used technology. The patient portfolio is made available to physicians from their offices within their EMR, from home on local PCs, or remotely through tablets and other devices. With integration into EMR systems, Vue Motion helps hospital users and healthcare facilities enhance patient care by bringing the complete patient imaging record and supporting data into the healthcare enterprise. Image storage, viewing and distribution become a seamless part of the EMR.

    A "patient search page," including smart Google-like search capabilities, is also available for users who have no local EMR/HIS integration.

    Vue Motion is offered as an option for the PACS, Vue Archive (onsite or via Vue Cloud) or the Carestream Vendor Neutral Archive, and provides a zero-footprint imaging viewer that can be deployed on the fly and accessed by the right user from anywhere, on virtually any operating system or browser-enabled device. The software technology uses HTML5, which allows any browser-enabled device to run the software application.

    CARESTREAM Vue Motion has a simpler GUI for viewing, including zoom, pan, windowing, basic measurements, cine, etc. It works on any operating system and with virtually any browser-enabled device, such as PCs, iPads, and mobile phones. It is self-deployable and performs well over low-bandwidth networks. It supports collaboration with other users through the sticky-notes mechanism.

    AI/ML Overview

    The provided text describes the CARESTREAM Vue PACS v11.4 Vue Motion, an image management system with mobile access capabilities. It outlines the product's description, intended use, and technological characteristics, as well as its substantial equivalence to a predicate device.

    However, the document does not explicitly state specific acceptance criteria in a quantitative or pass/fail manner, nor does it detail a formal study with a defined test set, ground truth establishment, or expert involvement. Instead, it broadly describes testing activities.

    Here's an analysis based on the information available:


    1. Table of Acceptance Criteria and Reported Device Performance

    As specific quantitative acceptance criteria are not explicitly stated, the table below presents the types of performance aspects tested and the overall reported outcome.

    • Bench performance (on approved devices): luminance response, device and display settings, optimal viewing angle, resolution, noise, reflectivity, battery life, and exception handling
      • Reported Performance: Each device (Apple iPad2, Apple iPhone 4S, Galaxy S3, Galaxy Note 10.1) was determined to be acceptable in bench performance testing.
    • Clinical assessment: display of diagnostic-quality images (on approved devices)
      • Reported Performance: The Clinical Assessments indicated that Vue Motion images of diagnostic quality can be displayed on each of the devices across all target modalities.
    • Functional QA: acceptable operation of key system features (on all platforms)
      • Reported Performance: Functional QA testing demonstrated that key features of the system operate acceptably on PCs, mobile, and tablet devices.
    • Equivalence to predicate device: performance and safety equivalence
      • Reported Performance: Bench performance testing results and the Clinical Assessments support equivalence to the Carestream PACS predicate (K110919). No substantial differences that affect safety and efficacy were noted; the new product brings features to additional display devices and performs the same.
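    Luminance-response bench testing of the kind listed above generally verifies that measured display luminance rises monotonically with the gray level of a test pattern and that the contrast ratio meets a target. A hypothetical sketch (the threshold is illustrative, not a value from this submission):

```python
def luminance_response_ok(luminances, min_contrast_ratio=250.0):
    """Pass if luminance (cd/m^2), ordered from darkest to brightest
    test pattern, rises strictly and the max/min contrast ratio meets
    the (illustrative) target."""
    monotonic = all(b > a for a, b in zip(luminances, luminances[1:]))
    return monotonic and (luminances[-1] / luminances[0]) >= min_contrast_ratio
```

    A full conformance check would instead compare the measured response against the DICOM Grayscale Standard Display Function rather than a simple contrast ratio.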

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: Not explicitly stated as a numerical count of cases or images. The document refers to "each of the devices" (Apple iPad2, Apple iPhone 4S, Galaxy S3, and Galaxy Note 10.1) being tested.
    • Data Provenance: Not specified (e.g., country of origin, retrospective/prospective). The document mentions "Vue Motion images of diagnostic quality" were used but doesn't detail their source or type.

    3. Number of Experts and Qualifications for Ground Truth Establishment

    • Number of Experts: Not specified.
    • Qualifications of Experts: Not specified. It only states that "Clinical Assessments" were performed, implying involvement of medical professionals, but their number and specific qualifications (e.g., radiologists, years of experience) are not detailed.

    4. Adjudication Method for the Test Set

    • Adjudication Method: Not specified. The document does not describe any multi-reader review or consensus process for clinical assessment.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • MRMC Study Done: No. The document does not mention an MRMC study comparing human readers with and without AI assistance. The focus is on demonstrating that the device itself can display images of diagnostic quality and is equivalent to the predicate PACS.
    • Effect Size of Human Readers Improvement with AI vs. without AI: Not applicable, as no MRMC study was performed.

    6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study

    • Standalone Performance Done: This device is an image display and management system, not primarily an AI algorithm for diagnostic interpretation. Its performance is tied to its ability to render medical images accurately on specified mobile devices. The "Bench Testing" and "Functional QA testing" could be considered components of standalone performance for the software/device combination, but not in the context of an AI diagnostic algorithm's standalone accuracy. The "Clinical Assessments" evaluate the display capabilities for human diagnostic use, rather than an algorithm's diagnostic output.

    7. Type of Ground Truth Used

    • Type of Ground Truth: For the "Clinical Assessments," the implicit ground truth seems to be the widely accepted understanding of "diagnostic quality" for medical images when viewed on the tested devices, presumably benchmarked against a full diagnostic workstation (the predicate PACS). This is not an explicit ground truth like pathology reports or patient outcomes data, but rather a qualitative assessment of display fidelity suitable for diagnosis by human users.

    8. Sample Size for the Training Set

    • Sample Size for Training Set: Not applicable. This document describes a PACS and mobile viewer, not a machine learning or AI model that requires a "training set" in the conventional sense. The "training" here would refer to the software development and quality assurance processes.

    9. How the Ground Truth for the Training Set Was Established

    • Ground Truth for Training Set: Not applicable, as it's not an AI/ML system with a conventional training set. The "ground truth" for the software development and testing would be the functional requirements and specifications of the PACS and mobile viewer, ensuring it correctly processes, transmits, and displays DICOM images in accordance with industry standards and clinical needs.

    K Number
    K130464
    Date Cleared
    2013-06-07

    (105 days)

    Product Code
    Regulation Number
    892.1680
    Reference & Predicate Devices
    Predicate For
    Intended Use

    The device is intended to capture for display radiographic images of human anatomy including both pediatric and adult patients. The device is intended for use in general projection radiographic applications wherever conventional screen-film systems or CR systems may be used. Excluded from the indications for use are mammography, fluoroscopy, and angiography applications.

    Device Description

    The Carestream DRX-1 System is a diagnostic imaging system utilizing digital radiography (DR) technology that is used with diagnostic x-ray systems. The system consists of the Carestream DRX-1 System Console (operator console), flat panel digital imager (detector), and optional tether interface box. The system can operate with either the Carestream DRX-1 System Detector (GOS) or the DRX-2530C Detector (Csl) and can be configured to register and use both detectors. Images captured with the flat panel digital detector can be communicated to the operator console via tethered or wireless connection.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study information based on the provided text for K130464:

    Please note that the provided 510(k) summary is for a device upgrade (new detector for an existing system) and focuses on demonstrating substantial equivalence to the predicate device. Therefore, a full-blown AI performance study with detailed metrics like sensitivity, specificity, or AUC for a diagnostic AI algorithm is not present, as the device itself is an imaging acquisition system, not an AI diagnostic tool. The "Reader Study" mentioned is a comparative effectiveness study to show that the new detector produces diagnostically equivalent images to the predicate.


    Acceptance Criteria and Reported Device Performance

    • Non-clinical (bench) testing
      • Reported Performance: "The performance characteristics and operation / usability of the Carestream DRX-1 System with DRX 2530C Detector were evaluated in non-clinical (bench) testing. These studies have demonstrated the intended workflow, related performance, overall function, shipping performance, verification and validation of requirements for intended use, and reliability of the system including both software and hardware requirements. Non-clinical test results have demonstrated that the device conforms to its specifications. Predefined acceptance criteria were met and demonstrated that the device is as safe, as effective, and performs as well as or better than the predicate device."
    • Clinical concurrence (diagnostic capability)
      • Reported Performance: "A concurrence study of clinical image pairs was performed... to demonstrate the diagnostic capability of the Carestream DRX-1 System with DRX 2530C Detector. Results of the Reader Study indicated that the diagnostic capability of the Carestream DRX-1 System with DRX 2530C Detector is statistically equivalent to or better than that of the predicate device. These results support a substantial equivalence determination."

    Study Details

    Detailed information regarding sample sizes, expert qualifications, and adjudication methods for the clinical study is very limited in this 510(k) summary. The document primarily states that a "concurrence study of clinical image pairs" and a "Reader Study" were conducted to demonstrate diagnostic equivalence.

    1. Sample Size Used for the Test Set and Data Provenance:

      • Sample Size: Not explicitly stated in the provided document. The text mentions "clinical image pairs."
      • Data Provenance: Not explicitly stated (e.g., country of origin). The study implicitly uses "clinical image pairs," suggesting prospective or retrospective collection of patient images. The nature of a "concurrence study" implies that images were acquired using both the new device and the predicate device (or comparable standard) for direct comparison.
    2. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:

      • Number of Experts: Not explicitly stated. The term "Reader Study" implies multiple readers.
      • Qualifications of Experts: Not explicitly stated (e.g., specific experience or subspecialty).
    3. Adjudication Method for the Test Set:

      • Not explicitly stated. A "concurrence study" and "Reader Study" typically involve readers independently evaluating images, and then their findings are compared to each other, to predefined criteria, or to a ground truth. Common adjudication methods (like 2+1 or 3+1) are not detailed in this summary.
    4. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

      • Was it done? Yes, a "Reader Study" was performed. While not explicitly termed "MRMC," the description "Results of the Reader Study indicated that the diagnostic capability... is statistically equivalent to or better than that of the predicate device" strongly suggests a comparative study involving multiple readers assessing images. The purpose was to show diagnostic capability of the new detector in comparison to the predicate.
      • Effect size of human readers improvement with AI vs without AI assistance: This information is not applicable as this study is not evaluating an AI diagnostic tool or AI assistance. It is evaluating the diagnostic equivalence of an imaging acquisition device (a new X-ray detector). The "improvement" is in the context of the new detector's image quality being diagnostically equivalent or better than the predicate detector, not AI assistance.
    5. Standalone Performance (algorithm only without human-in-the-loop performance):

      • Was it done? This is not applicable in the context of this 510(k). The device is an X-ray detector, which is an image acquisition component, not a standalone algorithm. Its performance is intrinsically linked to human interpretation of the images it produces.
    6. Type of Ground Truth Used:

      • The document implies that the ground truth for the "concurrence study" would have been established by expert interpretation of the images (either from the new device or the predicate) to determine if the new device's images offered equivalent diagnostic information. Pathology or outcomes data are not mentioned. It's likely based on expert consensus or comparison against established diagnostic images from the predicate.
    7. Sample Size for the Training Set:

      • Not applicable. This submission is for an imaging acquisition device (X-ray detector), not a machine learning algorithm. Therefore, there is no "training set" in the AI/ML sense. The device's performance is driven by its physical and electronic characteristics, and its "training" would be its design, engineering, and manufacturing process.
    8. How the Ground Truth for the Training Set Was Established:

      • Not applicable, as there is no training set for an AI/ML algorithm.
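    The "2+1" adjudication scheme mentioned in item 3 above means two primary readers evaluate each case independently and a third reader settles disagreements. A minimal sketch of that rule (illustrative; this submission does not state that such a scheme was used):

```python
def adjudicate_2plus1(reader1, reader2, adjudicator):
    """2+1 adjudication: the per-case reference label is the two primary
    readers' agreement, or the third reader's call when they disagree."""
    return [r1 if r1 == r2 else adj
            for r1, r2, adj in zip(reader1, reader2, adjudicator)]
```

    A "3+1" variant works the same way with three primary readers, escalating only the cases without majority agreement.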

    K Number
    K120246
    Date Cleared
    2012-08-28

    (214 days)

    Product Code
    Regulation Number
    892.1715
    Reference & Predicate Devices
    N/A
    Predicate For
    N/A
    Intended Use

    The KODAK DirectView CR Mammography Feature together with KODAK DirectView CR Mammography Cassette comprise a device which, when used in conjunction with a KODAK DirectView CR System and a mammographic x-ray machine, generates digital mammographic images that can be used for screening and diagnosis of breast cancer. It is intended for use in the same clinical applications as traditional screen-film based mammographic systems. The mammographic images can be interpreted by a qualified physician using either hardcopy film or softcopy display at a workstation.

    Device Description

    The Carestream CR Mammography Cassette with SNP-M1 Screen is a structured needle phosphor detector used in conjunction with the Kodak DirectView CR Mammography Feature on Kodak DirectView CR Systems for generating digital mammographic images. The Carestream CR Mammography Cassette with SNP-M1 Screen is used in the same manner as a traditional screen-film cassette when performing radiographic patient exposures and will be used in the DirectView CR Systems (CR850 and Elite CR readers) using the Kodak DirectView CR Mammography Feature.

    AI/ML Overview

    The provided text describes a Special 510(k) submission for the Kodak DirectView CR Mammography System by Carestream Health, Inc. The submission is for a modification to an existing device, specifically a new storage phosphor screen (SNP-M1) for mammography. The document focuses on showing substantial equivalence to the predicate device (Kodak DirectView CR Mammography System: P080018/S001).

    However, the provided text does not contain specific acceptance criteria or an explicit study proving performance against such criteria. The document states:

    "Performance testing was conducted to verify the design output met the design input requirements and to validate the device conformed to the defined user needs and intended uses. Non-clinical physical laboratory testing was conducted to assess performance. Clinical image evaluation was also conducted. Predefined acceptance criteria was met and demonstrated that the device is substantially equivalent to and as safe and as effective as the predicate device."

    This statement confirms that performance testing and clinical image evaluation were done, and predefined acceptance criteria were met, but it does not provide the details of those criteria or the study results.

    Therefore, I cannot populate the requested information. The document focuses on demonstrating substantial equivalence through technological characteristics comparison and a general statement that acceptance criteria were met, rather than detailing the specific performance metrics and study design/results.


    K Number
    K111423
    Date Cleared
    2012-03-02

    (284 days)

    Product Code
    Regulation Number
    872.1745
    Panel
    Dental
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    The CS 1600 is indicated for use by health professionals as an aid in the detection of dental caries.

    It is also indicated for use in viewing and capturing intraoral color video images for the purpose of:

    • Allowing practitioners to view and magnify all regions of the oral cavity
    • Assisting in communication with the patient by providing a view of treatment areas before and after a procedure
    • Providing images for documentation in patient records.
    Device Description

    The CS 1600 Intraoral Camera (CS 1600) is an intraoral camera system that also includes an optical caries detection system based on fluorescence imaging with reflectance enhancement. The system is intended for use by healthcare professionals in dental and dental sub-specialty clinical settings.

    The system consists of an intraoral camera assembly which is connected via USB connection to a PC, and associated image acquisition software and accessories. Accessories include hygienic barrier sheaths and a collar attachment for the camera assembly that is used for maintaining an optimal working distance in either caries detection mode.

    AI/ML Overview

    The provided text describes the Carestream CS 1600 Intraoral Camera and its intended use as an aid in detecting dental caries. However, it does not contain the detailed clinical study information or acceptance criteria that would allow for a comprehensive answer to your request.

    Specifically, the document states:

    • "Clinical testing of the system in each caries detection mode has been completed. The conclusion of the study verified the efficacy for the intended use of the device under clinical conditions." (Section 9. Clinical Testing)
    • "Results of testing demonstrate that the device is safe and effective in meeting user requirements in accordance with its intended use." (Section 8. Non-Clinical testing)

    These statements confirm that clinical testing was performed and concluded the device was effective, but they do not provide any of the quantitative details you requested.

    Therefore, I cannot populate the table or answer the specific questions regarding acceptance criteria, sample sizes, ground truth establishment, MRMC studies, or standalone performance. The document focuses on regulatory clearance by demonstrating substantial equivalence to predicate devices, rather than presenting detailed clinical performance data.


    K Number
    K112321
    Device Name
    CS 1200
    Date Cleared
    2011-11-10

    (90 days)

    Product Code
    Regulation Number
    872.6640
    Panel
    Dental
    Reference & Predicate Devices
    N/A
    Predicate For
    N/A
    Intended Use
    Device Description
    AI/ML Overview

    K Number
    K111566
    Date Cleared
    2011-10-06

    (122 days)

    Product Code
    Regulation Number
    892.2040
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    The DRYVIEW CHROMA Imaging System is intended to provide hard copy images from digital imaging source output signals. The device is intended for use with DRYVIEW CHROMA film and reflective media. The device will interface with a variety of digital modalities, including, but not limited to, CR (Computed Radiology), DR (Digital Radiology), CT (Computerized Tomography), MRI (Magnetic Resonance Imaging). The images are to be used for medical diagnosis and referral to physicians and their patients. The DRYVIEW CHROMA Imaging System is not intended for use with FFDM or CR Mammography systems.

    Device Description

    The DRYVIEW CHROMA Imaging System is an inkjet printing system. The DRYVIEW CHROMA Imaging System (CHROMA System) receives medical data, including image and clinical report data, from a digital modality. This data is received from medical image source devices (modalities) over a network and communicated to the CHROMA device via the digital communication standard DICOM. User control is performed directly by the modality or through the host control. The CHROMA System prints the information received using piezoelectric inkjet technology: tiny ink droplets are propelled from piezoelectric nozzles onto the media to form the image or report communicated by the digital modality. The CHROMA System prints on transparent polyester-based (film) as well as reflective (paper) media. Media is removed from a cartridge and transported into the CHROMA System device. Print data and media merge within the device. The CHROMA System employs test patterns to verify imaging performance. A test pattern generator is incorporated to assure consistency between input signals and output density. Software is used to control the image management and machine functions. The DICOM interface uses the information it receives to choose the correct set of printing parameters and halftone patterns for optimal image quality.
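    As an illustrative aside on the DICOM interface mentioned above: the description concerns network DICOM communication, but the same standard (DICOM PS3.10) also defines how DICOM files are identified on disk, via a 128-byte preamble followed by the four-byte magic "DICM". The sketch below is a hypothetical helper for that file-level check only; it is not part of the CHROMA implementation and does not verify full DICOM conformance.

```python
# Hypothetical sketch: recognize a DICOM Part 10 file by its header.
# Per DICOM PS3.10, a file begins with a 128-byte preamble followed by
# the four ASCII bytes "DICM". This checks only that signature.

def looks_like_dicom(path):
    """Return True if the file at `path` carries the DICOM Part 10 magic."""
    with open(path, "rb") as f:
        header = f.read(132)  # 128-byte preamble + 4-byte "DICM" prefix
    return len(header) == 132 and header[128:132] == b"DICM"
```

A receiver could use such a check to reject non-DICOM payloads early, before attempting to parse data elements.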

    AI/ML Overview

    This looks like a 510(k) premarket notification for a medical imaging hardcopy device (an inkjet printer). Typically, for such a device, the "acceptance criteria" and "study" would focus on demonstrating that the output (the printed image) meets certain quality and consistency standards to be fit for medical diagnosis. However, the provided document focuses more on substantial equivalence to a predicate device and adherence to general safety and quality standards rather than detailed performance metrics of image quality that would be established through a clinical study.

    Therefore, the information requested, particularly regarding clinical study details (sample size, ground truth, experts, MRMC studies), is largely not present in this document because this type of device (a printer) is evaluated differently than, for example, a diagnostic AI algorithm.

    Here's a breakdown of what can be extracted and what is not available from the provided text:


    1. Table of Acceptance Criteria and Reported Device Performance

    Acceptance Criteria | Reported Device Performance
    Safety and Effectiveness | Assured via meeting voluntary standards: DICOM, IEC 62304, UL 60950, IEC 60601-1-2, and ISO 14971.
    Printer Functionality | Receives medical data (image and clinical report) from digital modalities over a network via DICOM. Uses piezoelectric inkjet technology to print on transparent polyester-based (film) and reflective (paper) media.
    Image Consistency | Test patterns are incorporated to assure consistency between input signals and output density.
    Image Management & Machine Control | Software controls image management and machine functions. The DICOM interface uses received information to choose the correct printing parameters and halftone patterns for optimal image quality.
    Substantial Equivalence | Concluded to be as safe and effective as the predicate device (Codonics Horizon Ci Medical Image Multimedia Imager, K021054), based on similar design to equivalent/comparable safety standards and the absence of new safety/efficacy issues.
    Patient Contact | No patient contact.
    Impact on Other Devices | Does not control, monitor, or otherwise affect any devices directly connected to or affecting the patient.
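    The test-pattern mechanism in the table (consistency between input signals and output density) can be sketched abstractly: print a step wedge of evenly spaced input code values and verify that measured density rises monotonically with the signal. Everything below is an assumption for illustration; the function names and the linear density response are placeholders, not the CHROMA system's actual calibration.

```python
# Illustrative sketch of a step-wedge consistency check for a hardcopy
# imager. The linear density_response is a toy stand-in for a measured
# calibration curve.

def step_wedge(steps=16, max_code=255):
    """Evenly spaced digital code values from 0 to max_code."""
    return [round(i * max_code / (steps - 1)) for i in range(steps)]

def density_response(code, d_min=0.2, d_max=3.0, max_code=255):
    """Toy monotonic mapping from input code value to optical density."""
    return d_min + (d_max - d_min) * (code / max_code)

def is_monotonic(densities):
    """Consistency check: density must not decrease as the signal rises."""
    return all(a <= b for a, b in zip(densities, densities[1:]))

wedge = step_wedge()
densities = [density_response(c) for c in wedge]
assert is_monotonic(densities)
```

In a real imager the response would be measured with a densitometer from printed patches rather than computed, and compared against a target curve, but the monotonicity check captures the basic idea of tying output density back to input signal.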

    2. Sample size used for the test set and the data provenance

    • Not explicitly stated for a "test set" in the context of clinical performance evaluation. The document describes meeting voluntary standards and substantial equivalence, which implies engineering and system-level testing, but not a patient-data-driven "test set" in the diagnostic sense.
    • Data Provenance: Not applicable. The "data" are digital images from various modalities (CR, DR, CT, MRI) that the printer receives. The provenance of these images themselves is not relevant to the printer's performance evaluation as described.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • Not applicable. This device is a printer. Its performance is assessed through technical specifications and comparison to a predicate device, not by expert interpretation of printed images in a diagnostic study where ground truth would be established.

    4. Adjudication method for the test set

    • Not applicable. See point 3.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance

    • No. This is not an AI-assisted diagnostic device. It is a hardcopy printer. MRMC studies are not relevant nor mentioned.

    6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done

    • Not applicable. This is a printer, not a diagnostic algorithm. Its function is to reproduce digital images on physical media. Its "performance" is about faithful reproduction and reliability, not diagnostic accuracy.

    7. The type of ground truth used

    • Not applicable. The concept of "ground truth" for diagnostic accuracy (e.g., pathology, outcomes data) does not apply to a medical image hardcopy device like this. The closest analogous concept would be the fidelity of the printed output to the digital input, which is assessed through technical printing parameters and test patterns.

    8. The sample size for the training set

    • Not applicable. This is a hardware device (printer) with embedded software for control and image processing. It is not an AI/machine learning model that undergoes "training" on a dataset in the conventional sense.

    9. How the ground truth for the training set was established

    • Not applicable. See point 8.

    K Number
    K081836
    Date Cleared
    2008-07-30

    (30 days)

    Product Code
    Regulation Number
    892.1680
    Reference & Predicate Devices
    N/A
    Predicate For
    N/A
    Intended Use

    Not Found

    Device Description

    Not Found

    AI/ML Overview

    This looks like a 510(k) clearance letter from the FDA. While it confirms the device's substantial equivalence to a predicate device, it does not contain the detailed acceptance criteria or a study summary that proves the device meets those criteria.

    A 510(k) clearance primarily focuses on demonstrating that a new device is as safe and effective as a legally marketed predicate device. It typically relies on comparisons of technological characteristics and, sometimes, performance data to establish this equivalence, rather than setting specific acceptance criteria and then presenting a study to prove those criteria are met in the same way a PMA (Pre-Market Approval) or a more rigorous clinical trial submission might.

    Therefore, I cannot provide the requested information from this document. The information in this letter is insufficient to answer the prompt.
