Search Results

Found 3 results

510(k) Data Aggregation

    Intended Use

    This software is intended to generate digital radiographic images of the skull, spinal column, extremities, and other body parts in patients of all ages. Acquisitions can be performed with the patient sitting or lying in the prone or supine position, and the software is intended for use in all routine radiography exams. The product is not intended for mammographic applications.

    This software is not meant for mammography, fluoroscopy, or angiography.

    Device Description

    The I-Q View is a software package to be used with FDA cleared solid-state imaging receptors. It functions as a diagnostic x-ray image acquisition platform and allows these images to be transferred to hard copy, softcopy, and archive devices via DICOM protocol. The flat panel detector is not part of this submission. In the I-Q View software, the Digital Radiography Operator Console (DROC) software allows the following functions:

      1. Add new patients to the system; enter patient and physician information to be associated with the digital radiographic images.
      2. Edit existing patient information.
      3. Perform emergency registration and edit emergency settings.
      4. Pick from a selection of procedures, which defines the series of images to be acquired.
      5. Adjust technique settings before capturing the x-ray image.
      6. Preview the image and accept or reject it, adding comments or rejection reasons to the image. Accepted images are sent to the selected output destinations.
      7. Save an incomplete procedure for which the remaining exposures will be made at a later time.
      8. Close a procedure when all images have been captured.
      9. Review history images; resend and reprint images.
      10. Re-examine a completed patient.
      11. Protect patient records from being deleted by the system.
      12. Delete an examined study together with all captured images.
      13. Edit user accounts.
      14. Check statistical information.
      15. Perform image QC.
      16. Perform image stitching.
      17. Provide electronic transfer of medical image data between medical devices.
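The operator-console functions above amount to a procedure lifecycle: register a patient, acquire and accept/reject exposures, then save incomplete or close. A minimal sketch of that lifecycle follows; every class and method name here is a hypothetical illustration for clarity, not the vendor's actual DROC software.

```python
from enum import Enum, auto

class ProcedureState(Enum):
    OPEN = auto()         # patient registered, procedure selected
    ACQUIRING = auto()    # exposures being captured
    INCOMPLETE = auto()   # saved with remaining exposures pending
    CLOSED = auto()       # all planned images captured and accepted

class Procedure:
    """Hypothetical model of the DROC procedure lifecycle (not vendor code)."""

    def __init__(self, patient_id: str, planned_views: int):
        self.patient_id = patient_id
        self.planned_views = planned_views
        self.accepted = []   # accepted images, queued for the output destinations
        self.rejected = []   # rejected images with operator-entered reasons
        self.state = ProcedureState.OPEN

    def capture(self, image: str, accept: bool, reason: str = "") -> None:
        """Preview an exposure and accept or reject it (functions 5-6 above)."""
        self.state = ProcedureState.ACQUIRING
        if accept:
            self.accepted.append(image)
        else:
            self.rejected.append((image, reason))

    def save_incomplete(self) -> None:
        """Remaining exposures will be made at a later time (function 7 above)."""
        self.state = ProcedureState.INCOMPLETE

    def close(self) -> None:
        """Close the procedure once all planned images are captured (function 8)."""
        if len(self.accepted) < self.planned_views:
            raise ValueError("not all planned views acquired")
        self.state = ProcedureState.CLOSED
```

A rejected exposure is kept with its reason (supporting the reject-analysis/statistics functions) while only accepted images move toward the DICOM output destinations.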
    AI/ML Overview

    The provided document is a 510(k) summary for the I-Q View software. It focuses on demonstrating substantial equivalence to a predicate device through bench testing and comparison of technical characteristics. It explicitly states that clinical testing was not required or performed.

    Therefore, I cannot provide details on clinical acceptance criteria or a study proving the device meets them, as such a study was not conducted for this submission. The document relies on bench testing and comparison to a predicate device to establish substantial equivalence.

    Here's a breakdown of what can be extracted from the provided text regarding acceptance criteria and the "study" (bench testing) that supports the device:

    1. Table of Acceptance Criteria and Reported Device Performance

    Since no clinical acceptance criteria or performance metrics are provided, this table will reflect the general statements made about the device performing to specifications.

    | Acceptance Criteria (Implied) | Reported Device Performance |
    | --- | --- |
    | Device functions as intended for image acquisition. | Demonstrated intended functions. |
    | Device performs to specification. | Performed to specification. |
    | Integration with compatible solid-state detectors performs within specification. | Verified integration performance within specification. |
    | Software is as safe and functionally effective as the predicate. | Bench testing confirmed it is as safe and functionally effective as the predicate. |
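In bench verification, each implied criterion above reduces to a pass/fail check of measured behavior against the design specification. The sketch below illustrates that reduction; the field names and limits are invented for illustration and do not appear in the submission.

```python
# Hypothetical design-specification limits (illustrative only, not from the 510(k)).
SPEC = {
    "acquisition_time_s": 5.0,        # max time to acquire and preview an image
    "dicom_transfer_ok": True,        # image reaches the configured destination
    "detector_integration_ok": True,  # cleared detector operates within spec
}

def run_bench_checks(measured: dict) -> dict:
    """Return a per-criterion pass/fail map against the specification."""
    return {
        "acquisition_time_s": measured["acquisition_time_s"] <= SPEC["acquisition_time_s"],
        "dicom_transfer_ok": measured["dicom_transfer_ok"] == SPEC["dicom_transfer_ok"],
        "detector_integration_ok": measured["detector_integration_ok"] == SPEC["detector_integration_ok"],
    }

def all_pass(results: dict) -> bool:
    """Bench testing succeeds only if every criterion passes."""
    return all(results.values())
```

This is also why the submission's "ground truth" is the engineering specification itself: a check either meets its design requirement or it does not.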

    2. Sample size used for the test set and the data provenance

    • Test Set Sample Size: Not applicable/not reported. The document describes bench testing, not a test set of patient data.
    • Data Provenance: Not applicable. Bench testing generally involves internal testing environments rather than patient data from specific countries or retrospective/prospective studies.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • Not applicable. As no clinical test set was used, no experts were needed to establish ground truth for patient data. Bench testing typically relies on engineering specifications and verification.

    4. Adjudication method for the test set

    • Not applicable. No clinical test set or human interpretation was involved.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of how much human readers improve with AI vs. without AI assistance?

    • No, an MRMC comparative effectiveness study was not done. The document explicitly states: "Clinical Testing: The bench testing is significant enough to demonstrate that the I-Q View software is as good as the predicate software. All features and functionality have been tested and all specifications have been met. Therefore, it is our conclusion that clinical testing is not required to show substantial equivalence." The device is software for image acquisition, not an AI-assisted diagnostic tool.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    • Yes, in a sense. The "study" described is bench testing of the software's functionality and its integration with solid-state detectors. This is an evaluation of the algorithm/software itself.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    • For bench testing, the "ground truth" would be the engineering specifications and expected functional behavior of the software and its interaction with hardware components. It's about verifying that the software performs according to its design requirements.

    8. The sample size for the training set

    • Not applicable. The I-Q View is described as an image acquisition and processing software, not an AI/machine learning model that typically requires a training set of data.

    9. How the ground truth for the training set was established

    • Not applicable, as there is no mention of a training set or AI/machine learning component.

    Summary of the "Study" (Bench Testing) for K203703:

    The "study" conducted for the I-Q View software was bench testing. This involved:

    • Verification and validation of the software.
    • Demonstrating the intended functions and relative performance of the software.
    • Integration testing to verify that compatible solid-state detectors performed within specification as intended when used with the I-Q View software.

    The conclusion drawn from this bench testing was that the software performs to specification and is "as safe and as functionally effective as the predicate software." This was deemed sufficient to demonstrate substantial equivalence, and clinical testing was explicitly stated as not required.


    K Number
    K170607
    Device Name
    !M1
    Date Cleared
    2017-07-17

    (138 days)

    Product Code
    Regulation Number
    892.1720
    Intended Use

    The device is designed to perform general radiography x-ray examinations on all pediatric and all adult patients, in all patient treatment areas.

    Treatment areas are defined as professional health care facility environments where operators with medical training are continually present during patients' examinations.

    Device Description

    The ModelOne mobile X-ray system is a diagnostic mobile x-ray system utilizing digital radiography technology. The device consists of a self-contained x-ray generator, image receptor(s), an imaging display, and software for acquiring medical diagnostic images both inside and outside of a standard stationary x-ray room. The ModelOne system incorporates flat-panel detector(s) that can be used wirelessly for exams such as in-bed projections. The system is intended to be marketed with two flat-panel digital imager options, from Canon and from Konica Minolta.
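The two-vendor imager option described above can be thought of as a constrained system configuration. A small data-model sketch follows; the class, fields, and validation rule are paraphrased assumptions for illustration, not the manufacturer's schema.

```python
from dataclasses import dataclass

# Vendors named in the 510(k) summary as the two marketed imager options.
SUPPORTED_DETECTOR_VENDORS = {"Canon", "Konica Minolta"}

@dataclass
class MobileXraySystem:
    """Hypothetical configuration model of the mobile system's components."""
    generator: str = "self-contained x-ray generator"
    display: str = "imaging display"
    detector_vendor: str = "Canon"   # or "Konica Minolta"
    wireless_detector: bool = True   # supports in-bed projections

    def validate(self) -> None:
        # Only the two previously cleared imager options are marketed.
        if self.detector_vendor not in SUPPORTED_DETECTOR_VENDORS:
            raise ValueError("unsupported detector vendor for this configuration")
```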

    AI/ML Overview

    Based on the provided text, the device is an X-ray system, and the "study" described is a non-clinical performance evaluation rather than a traditional clinical study with human patients. The information provided is for regulatory clearance (510(k) summary) rather than a comprehensive research paper on AI performance.

    Therefore, many of the typical acceptance criteria and study details for an AI/ML device (e.g., ground truth establishment for a test set, MRMC studies, standalone AI performance) are not applicable or not provided in this document. The device is a mobile X-ray system, not an AI-powered diagnostic tool. The focus is on the safety and performance of the hardware and integrated previously-cleared digital imagers, demonstrating substantial equivalence to a predicate device.

    Here's an attempt to answer the questions based only on the provided text, noting where information is absent or not relevant for this type of device:


    Acceptance Criteria and Device Performance (Non-AI X-ray System)

    The document describes performance tests for a mobile X-ray system, NOT an AI/ML device. The acceptance criteria are implicit in the performance tests verifying the functionality and safety of the hardware. The "reported device performance" refers to the successful completion of these non-clinical tests.

    1. Table of Acceptance Criteria and Reported Device Performance

    | Acceptance Criteria Category | Specific Test/Evaluation | Reported Device Performance |
    | --- | --- | --- |
    | Usability | Acceptance test on customer site | "Performance tests confirm that the device is as effective and performs as well as or better than the predicate device." (Implies meeting usability expectations) |
    | Usability | Performance test at hospital by professional personnel | Same confirmation as above (implies meeting usability expectations) |
    | Battery Performance | Beginning-of-life/end-of-life test | "Performance tests confirm that the device is as effective and performs as well as or better than the predicate device." (Implies battery life meets operational needs) |
    | Mobility | Driving distance test (full to empty battery) | "The driving distance test was performed to verify maximum distance of driving from full to empty battery." (Implies meeting the required driving distance for mobile operation) |
    | Generator Performance | Comparison of exposure time with competitors | "The aim of generator performance test was to compare the time of exposure of !M1 and its competitors." (Implies competitive or equivalent exposure times, contributing to "performs as well as or better than the predicate device") |
    | System Integration | Integration test with previously cleared flat-panel imagers | "Integration test was performed on the previously cleared flat-panel digital imagers in order to demonstrate that all components of the device function in a reproductive way according to the design specifications." (Confirms successful integration and functional operation of the complete system) |
    | Software Risk | Software risk classification | "The software risk is classified as moderate level of concern device according to the Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices." (A regulatory requirement met, rather than a performance metric in this context) |
    | Safety | Overall safety assessment | "Technological differences do not raise questions of safety and the device is as safe as legally marketed DRX-Revolution by Carestream." (Overall safety acceptance based on non-clinical tests and predicate comparison) |

    2. Sample Size for the Test Set and Data Provenance

    • Sample Size for Test Set: Not explicitly stated in terms of number of "cases" or "patients" as this is a device performance test, not a clinical study on diagnostic accuracy. The tests involve the device itself and its components.
    • Data Provenance: Not applicable in the clinical sense. The tests are "non-clinical testing" performed on the device hardware. Usability tests involved professional personnel at a hospital, but this evaluated the device's operation in a real-world setting, not its diagnostic output. The results are reported to the FDA as completed testing rather than as a prospective clinical study.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    • Not Applicable / Not Provided. This document describes a mobile X-ray system, not an AI/ML diagnostic algorithm that requires expert-established ground truth for image interpretation. The "ground truth" here is the device's functional performance against its design specifications and compared to a predicate, not clinical diagnostic accuracy.

    4. Adjudication Method for the Test Set

    • Not Applicable / None. No adjudication method is mentioned as this is not a study assessing human or AI diagnostic performance based on image interpretation.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • No. "No clinical testing was performed on the subject device." Therefore, no MRMC study was conducted to evaluate human readers with or without AI assistance.

    6. Standalone (Algorithm Only Without Human-in-the-Loop) Performance

    • Not Applicable / No. The device itself is an X-ray imaging system. It produces images, but the document does not describe a new AI algorithm for interpreting those images. The "software" mentioned is for acquiring and displaying images, and its risk is classified. The post-processing is defined by protocols from previously cleared Canon and Konica Minolta image software.

    7. Type of Ground Truth Used

    • Functional Performance Specifications and Predicate Comparison. The "ground truth" for this regulatory submission is that the device functions according to its design specifications (e.g., battery life, driving distance, exposure time) and performs "as well as or better than the predicate device" in non-clinical settings.

    8. Sample Size for the Training Set

    • Not Applicable. This is not an AI/ML algorithm that requires a training set of data.

    9. How the Ground Truth for the Training Set Was Established

    • Not Applicable. As above, no AI/ML training set is mentioned or implied.

    K Number
    K141271
    Device Name
    AERODR SYSTEM 2
    Date Cleared
    2014-09-26

    (134 days)

    Product Code
    Regulation Number
    892.1680
    Reference & Predicate Devices
    Reference Devices:

    K102349, K113248, K120477, K130936

    Intended Use

    The AeroDR SYSTEM 2 is indicated for use in generating radiographic images of human anatomy. It is intended to replace radiographic film/screen systems in general-purpose diagnostic procedures.

    The AeroDR SYSTEM 2 is not intended for use in mammography, fluoroscopy, tomography, or angiography applications.

    Device Description

    The AeroDR SYSTEM 2 is a digital imaging system to be used with diagnostic x-ray systems. A new AeroDR Detector (flat panel digital detector, hereafter P-51) and an AeroDR Generator Interface Unit2 have been added to the AeroDR SYSTEMS (the predicate devices: K102349, K113248, K120477, K130936). They function together with the Console CS-7 (operator console), AeroDR Interface Unit, AeroDR Interface Unit2, AeroDR Generator Interface Unit, AeroDR Access Point, AeroDR Battery Charger, and AeroDR Battery Charger2, and perform fundamentally the same as the AeroDR SYSTEMS in physical and performance characteristics such as device design, material safety, and physical properties. Images captured with the flat panel digital detector in the AeroDR SYSTEM 2 can therefore be communicated to the operator console via a wired or wireless connection, depending on the user's choice. The AeroDR SYSTEM 2 was developed to meet users' compact layout needs without changing the fundamental functions of the predicate devices.

    The AeroDR SYSTEM 2 connects only to X-ray devices that are legally marketed in the United States and are compatible with the XGIF, UEC, or XIF boards, along with certain electrical requirements, specific signal controls for hardware and software, and the accessories described in the operation and installation manuals; those manuals also describe how to perform the compatibility test at the time of installation. In addition, for pediatric use, an X-ray control system suitable for pediatric imaging is required.
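The installation-time compatibility gate described above can be sketched as a simple predicate. The board names come from the text; the check logic, field names, and dictionary representation are assumptions made for illustration.

```python
# Interface boards named in the device description as compatible options.
COMPATIBLE_BOARDS = {"XGIF", "UEC", "XIF"}

def is_compatible(xray_device: dict, pediatric_use: bool = False) -> bool:
    """Hypothetical installation-time check: may this X-ray device be paired?"""
    # Only legally marketed US devices are eligible.
    if not xray_device.get("legally_marketed_us"):
        return False
    # The device must carry one of the compatible interface boards.
    if xray_device.get("interface_board") not in COMPATIBLE_BOARDS:
        return False
    # Pediatric use additionally requires a pediatric X-ray control system.
    if pediatric_use and not xray_device.get("pediatric_control_system"):
        return False
    return True
```

In practice such a gate would sit alongside the manual's compatibility test procedure rather than replace it; the sketch only captures the eligibility conditions the summary states.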

    AI/ML Overview

    The provided document, a 510(k) summary for the AeroDR SYSTEM 2, does not contain detailed information about acceptance criteria and a study proving the device meets those criteria in the format requested. The document focuses on demonstrating substantial equivalence to a predicate device, AeroDR SYSTEMS.

    However, based on the information provided, here's what can be extracted and inferred:

    1. Table of Acceptance Criteria and Reported Device Performance:

    The document doesn't explicitly list specific quantitative acceptance criteria or a performance table. Instead, it states that the AeroDR SYSTEM 2 was evaluated for "equivalent evaluation outcome" to the predicate device. The performance characteristics mentioned are qualitative comparisons to the predicate device.

    | Acceptance Criteria Category | Reported Device Performance (AeroDR SYSTEM 2) |
    | --- | --- |
    | Indications for Use | Identical to predicate device. |
    | Biocompatibility | Evaluated with EN ISO 10993-1; safety assured as for the predicate. |
    | Electrical Safety | Conducted and assured as for the predicate devices (AAMI/ANSI ES60601-1:2005/(R)2012, C1:2009/(R)2012, and A2:2010/(R)2012). |
    | Electromagnetic Compatibility (EMC) | Conducted and assured as for the predicate devices (IEC 60601-1-2). |
    | Technological Characteristics (Hardware/Software) | Verification and validation completed without problem. |
    | Wireless Function | Evaluated referencing FDA guidance. |
    | Risk Management | Based on ISO 14971; completed without problem. |
    | Performance Testing (Bench Testing) | Concluded; showed equivalent evaluation outcome to the predicate. |
    | Non-clinical Testing | Concluded; showed equivalent evaluation outcome to the predicate. |
    | Clinical Testing | Concluded; showed equivalent evaluation outcome to the predicate. |
    | Safety and Effectiveness | No safety, effectiveness, or performance issues or differences were found relative to the predicate devices. |

    2. Sample Size Used for the Test Set and Data Provenance:

    The document mentions "Non clinical and clinical testing" but does not specify the sample size for these tests or the data provenance (e.g., country of origin, retrospective/prospective).

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications:

    This information is not provided in the document.

    4. Adjudication Method for the Test Set:

    This information is not provided in the document.

    5. Multi Reader Multi Case (MRMC) Comparative Effectiveness Study:

    The document does not mention an MRMC comparative effectiveness study or any effect size of human readers improving with AI vs. without AI assistance. The AeroDR SYSTEM 2 is a digital radiography system, not an AI-assisted diagnostic tool.

    6. Standalone Performance:

    The document implies standalone performance testing ("Bench Testing," "Non clinical and clinical testing") was conducted to demonstrate equivalence to the predicate device. However, it does not explicitly state "algorithm only without human-in-the-loop performance" as would be relevant for an AI device. As it's a hardware/software system for image generation, its standalone performance refers to its ability to capture and process images equivalently to the predicate.

    7. Type of Ground Truth Used:

    The document does not explicitly state the type of ground truth used for the "clinical testing." Given the context of a diagnostic imaging system, it would typically involve images reviewed against a clinical standard, but the specific nature (e.g., expert consensus, pathology, outcomes data) is not detailed.

    8. Sample Size for the Training Set:

    The document does not mention a training set or its size. This is consistent with a device seeking substantial equivalence to a predicate, where the focus is on verification and validation against established standards and predicate performance rather than training a novel algorithm from scratch.

    9. How the Ground Truth for the Training Set Was Established:

    Not applicable, as no training set is mentioned for an AI model.

    Summary of what is present and absent regarding acceptance criteria and study details:

    The document primarily acts as a 510(k) summary, aiming to prove substantial equivalence to existing predicate devices based on various safety, performance, and technical characteristics. It asserts that "equivalent evaluation outcome" was achieved in performance, non-clinical, and clinical testing, and that there were "no safety and effectiveness and performance issue or no differences were found" compared to the predicate. However, it lacks the detailed quantitative acceptance criteria, specific study designs, sample sizes, and expert qualification information that would be typically found for studies evaluating novel AI algorithms or clinical efficacy with precise endpoints.
