510(k) Data Aggregation

    K Number: K201119
    Device Name: ChartCheck
    Manufacturer: Radformation, Inc.
    Date Cleared: 2020-06-26 (60 days)
    Product Code:
    Regulation Number: 892.5050
    Reference & Predicate Devices: ARIA Radiation Therapy Management (K173838)
    Intended Use

    The ChartCheck device is intended for the quality assessment of radiotherapy treatment plans and on-treatment chart review.

    Device Description

    The ChartCheck device is software that enables trained radiation oncology personnel to perform quality assessments of treatment plans and treatment chart review utilizing plan, treatment, imaging, and documentation data obtained from the ARIA Radiation Therapy Management database.

    ChartCheck contains three main components:

      1. An agent service that is configured by the user to monitor their ARIA Radiation Therapy Management database. The agent watches for new treatment plans, treatment records, documentation, and imaging data and uploads new data to the cloud-based checking service (see the sketch after this list).
      2. A cloud-based checking service that calculates check states as new records are uploaded from the agent.
      3. A web application, accessed via a web browser, which contains several components:
         a. It allows trained radiation oncology personnel to review treatment records, view the check state calculation results, record comments, and mark the chart checks as approved.
         b. It allows an administrator to set check state colors, configure default settings, and define check state logic.
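
    A minimal sketch of the agent's monitor-and-upload loop, assuming a simple polling design; the function names, record format, and polling behavior are all assumptions, since the submission does not describe how the agent detects or transmits new records:

```python
import time

# Hypothetical sketch of the agent service described above: it watches the
# ARIA database for new plan/treatment/imaging/documentation records and
# uploads them to the cloud-based checking service. All names are assumed.

NEW_RECORDS = [
    {"id": 1, "kind": "treatment_plan"},
    {"id": 2, "kind": "treatment_record"},
]  # stand-in for rows appearing in the ARIA database

def fetch_new_records(last_seen_id):
    """Return records newer than last_seen_id (simulated database query)."""
    return [r for r in NEW_RECORDS if r["id"] > last_seen_id]

def upload_to_checking_service(record):
    """Stand-in for the upload to the cloud-based checking service."""
    print(f"uploaded {record['kind']} #{record['id']}")

def run_agent(poll_interval=1.0, cycles=2):
    last_seen_id = 0
    for _ in range(cycles):  # a real agent would run indefinitely
        for record in fetch_new_records(last_seen_id):
            upload_to_checking_service(record)
            last_seen_id = max(last_seen_id, record["id"])
        time.sleep(poll_interval)

if __name__ == "__main__":
    run_agent()
```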
    AI/ML Overview

    Here's an analysis of the provided text regarding the acceptance criteria and supporting study for the ChartCheck device.


    Acceptance Criteria and Device Performance for Radformation, Inc.'s ChartCheck (K201119)

    The provided documentation, a 510(k) premarket notification, indicates that the ChartCheck device did not undergo a traditional clinical study with established acceptance criteria and performance metrics in the way a diagnostic or therapeutic device might. Instead, the submission relies on demonstrating substantial equivalence to a predicate device, ARIA Radiation Therapy Management (K173838), primarily through software verification and validation.

    The "acceptance criteria" in this context refer to the successful completion of verification tests to ensure the software functions as intended and meets its requirements. The "reported device performance" is essentially that the software successfully passed these internal tests and demonstrated functionality comparable to the predicate device.

    1. Table of Acceptance Criteria and Reported Device Performance

    Acceptance Criteria (Proxy): Software functionality
    Reported Device Performance:
      • Software correctly monitors the ARIA database for new treatment plans, records, documentation, and imaging data.
      • The agent service successfully uploads new data to the cloud-based checking service.
      • The cloud-based checking service accurately calculates check states.
      • The web application correctly displays treatment records and check state calculation results, and allows for comments and approval marking.
      • Administrator functions (setting check state colors, configuring settings, defining check state logic) work as designed.
      • ChartCheck displays planned and treatment values along with check state indicators.
      • ChartCheck presents control charts.

    Acceptance Criteria (Proxy): Substantial Equivalence
    Reported Device Performance:
      • Indications for Use: substantially equivalent to the predicate.
      • Pure software: equivalent to the predicate.
      • Intended users: equivalent to the predicate.
      • OTC/Rx: equivalent to the predicate.
      • Input: equivalent to the predicate.
      • Functionality: substantially equivalent to the predicate (utilizes data to calculate pass/fail/override/condition check states, comparable to the predicate's pass/fail/override check states).
      • Output: substantially equivalent to the predicate.

    Acceptance Criteria (Proxy): Safety and Effectiveness
    Reported Device Performance:
      • Verification and validation testing demonstrated the device is safe and effective.
      • A hazard analysis was performed.
      • No new questions regarding safety and effectiveness are raised by its indications for use (which are a subset of the predicate's).
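
    The submission names pass/fail/override/condition check states but does not disclose how they are computed. The sketch below shows one plausible rule-evaluation shape; the tolerance comparison and the meanings assigned to "override" and "condition" are assumptions made for illustration:

```python
from enum import Enum

class CheckState(Enum):
    PASS = "pass"
    FAIL = "fail"
    OVERRIDE = "override"    # assumed: a failed check a clinician explicitly accepted
    CONDITION = "condition"  # assumed: a check that applies only under certain conditions

def evaluate_check(planned, treated, tolerance, overridden=False, applicable=True):
    """Compare a treated value against its planned value within a tolerance.
    The actual ChartCheck check logic is not described in the submission."""
    if not applicable:
        return CheckState.CONDITION
    if abs(treated - planned) <= tolerance:
        return CheckState.PASS
    return CheckState.OVERRIDE if overridden else CheckState.FAIL

# Example: a monitor-unit check with a hypothetical tolerance of 1.0 MU
print(evaluate_check(planned=100.0, treated=100.4, tolerance=1.0))  # CheckState.PASS
```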

    2. Sample Size Used for the Test Set and Data Provenance

    The document explicitly states: "As with the Predicate Device, no clinical trials were performed for ChartCheck. Verification tests were performed to ensure that the software works as intended and pass/fail criteria were used to verify requirements."

    Therefore:

    • Sample Size for Test Set: Not specified in terms of patient data or clinical cases. The testing was focused on software verification/validation, likely involving simulated data, test cases, and functional scenarios rather than a clinical dataset.
    • Data Provenance: Not applicable in the context of a clinical test set. The data used for verification would be internally generated or synthetic data used to exercise the software's functions.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    Given that clinical trials were not performed and the focus was on software verification, the concept of "ground truth" derived from expert consensus on medical images or diagnoses isn't directly applicable here. The "ground truth" for the software's functional tests would be the expected output or behavior for a given input, as defined by software requirements and design specifications, and assessed by qualified software and quality assurance personnel.

    4. Adjudication Method for the Test Set

    Not applicable. There was no clinical test set requiring adjudication of findings by medical experts. The verification process would likely involve pass/fail criteria for individual software tests.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

    No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done. The submission clearly states no clinical trials were performed.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    The performance described is largely standalone in terms of the algorithm's calculation of "check states." However, the device's purpose is to assist trained radiation oncology personnel in performing quality assessments. The "web application" component explicitly describes human interaction for review, comments, and approval. Therefore, while the core "checking service" operates algorithmically, the overall device is designed for a human-in-the-loop workflow. A standalone study demonstrating the algorithm's performance without any human interaction was not detailed as a separate component of the submitted performance data.

    7. The Type of Ground Truth Used

    The ground truth used for verifying the software's functionality would be defined software requirements and expected outputs for specific test cases. For instance, if the software is designed to flag a plan where a certain dose constraint is violated, the ground truth would be that specific violation. This is a functional truth rather than a clinical truth established by medical experts or pathology.
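
    A verification test of the kind implied here could look like the pytest-style sketch below, where the expected state defined by a requirement serves as the ground truth. The constraint values are illustrative, and evaluate_check/CheckState refer to the hypothetical sketch above (saved, say, as chartcheck_sketch.py):

```python
# Illustrative software verification tests: the "ground truth" is the expected
# check state defined by a requirement, not a clinical label established by
# experts. Assumes the hypothetical sketch above is saved as chartcheck_sketch.py.
from chartcheck_sketch import CheckState, evaluate_check

def test_out_of_tolerance_value_is_flagged():
    # Assumed requirement: a treated value outside tolerance yields FAIL.
    assert evaluate_check(planned=200.0, treated=210.0, tolerance=2.0) == CheckState.FAIL

def test_value_within_tolerance_passes():
    # Assumed requirement: a treated value within tolerance yields PASS.
    assert evaluate_check(planned=200.0, treated=200.5, tolerance=2.0) == CheckState.PASS
```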

    8. The Sample Size for the Training Set

    The document does not mention a "training set" in the context of machine learning or AI models. The ChartCheck device appears to be a rule-based or logic-based software rather than an AI/ML model that would require extensive data for training. If there are configurable rules, these are likely defined by administrators rather than learned from a dataset.
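
    If the rules are indeed administrator-defined, the configuration might resemble a small declarative structure like the one below; the format, field names, and values are entirely hypothetical, since the submission does not describe the configuration interface:

```python
# Hypothetical administrator configuration: check-state display colors and
# per-check logic defined declaratively rather than learned from data.
CHECK_CONFIG = {
    "state_colors": {          # colors shown for check state indicators
        "pass": "#2e7d32",
        "fail": "#c62828",
        "override": "#f9a825",
        "condition": "#546e7a",
    },
    "checks": [                # per-check tolerances an administrator might set
        {"name": "monitor_units", "tolerance": 1.0, "enabled": True},
        {"name": "couch_position_cm", "tolerance": 0.5, "enabled": True},
    ],
}

print(CHECK_CONFIG["state_colors"]["fail"])  # color used for failed checks
```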

    9. How the Ground Truth for the Training Set Was Established

    Not applicable, as no training set for an AI/ML model is mentioned.
