
510(k) Data Aggregation

    K Number: K173791
    Date Cleared: 2018-02-09 (57 days)
    Product Code:
    Regulation Number: 892.5750
    Reference Devices: K160440, K151159
    Predicate Device: K151666

    Intended Use

    Leksell GammaPlan® is a computer-based system designed for Leksell Gamma Knife® treatment planning.

    Device Description

    Leksell GammaPlan® is designed for use with the Leksell Gamma Knife and is intended to be used for planning the dosimetry of treatments in stereotactic radiosurgery and stereotactic radiation therapy. It processes the inputs of health care professionals (e.g. neurosurgeons, radiation therapists, radiation physicists) such that the desired radiation dose is delivered by the Leksell Gamma Knife to a precisely defined volume.
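
    Purely as an illustration of what "planning the dosimetry" can mean computationally, the sketch below superposes toy dose contributions from planned shots on a voxel grid and checks coverage of a delineated target. The dose model, shot parameters, and prescription level are invented for this sketch and are not Leksell GammaPlan's algorithm.

```python
# Purely illustrative sketch, not Elekta's algorithm: treatment planning viewed
# as superposing dose contributions from planned "shots" on a voxel grid and
# checking coverage of a clinician-delineated target volume.
import numpy as np

def shot_dose(grid_shape, center_mm, sigma_mm, weight_gy, voxel_mm=1.0):
    """Toy isotropic Gaussian dose kernel for a single shot (hypothetical model)."""
    z, y, x = np.indices(grid_shape) * voxel_mm
    cz, cy, cx = center_mm
    r2 = (z - cz) ** 2 + (y - cy) ** 2 + (x - cx) ** 2
    return weight_gy * np.exp(-r2 / (2.0 * sigma_mm ** 2))

grid = (64, 64, 64)  # 1 mm voxels

# Two hypothetical shots aimed at a 10 mm-radius spherical target at (32, 32, 32)
dose = (shot_dose(grid, (30, 32, 32), sigma_mm=4.0, weight_gy=10.0)
        + shot_dose(grid, (34, 32, 32), sigma_mm=4.0, weight_gy=10.0))

z, y, x = np.indices(grid)
target = (z - 32) ** 2 + (y - 32) ** 2 + (x - 32) ** 2 <= 10 ** 2

prescription_gy = 8.0  # hypothetical prescription isodose level
coverage = np.mean(dose[target] >= prescription_gy)
print(f"Fraction of target receiving at least {prescription_gy} Gy: {coverage:.1%}")
```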

    AI/ML Overview

    Here's an analysis of the provided text regarding acceptance criteria and supporting studies:

    The FDA 510(k) summary for Leksell GammaPlan® (K173791) explicitly states that clinical testing on patients was NOT required to demonstrate substantial equivalence to the predicate device (K151666). This means the submission does not contain information about studies designed to prove clinical performance against specific acceptance criteria in a patient population.

    Instead, the submission focuses on non-clinical testing to demonstrate that the new version of the device maintains the same fundamental functionality and technical characteristics as the predicate.

    Therefore, many of the requested elements for describing a study that proves the device meets acceptance criteria (especially those related to clinical performance, sample sizes, expert involvement, and ground truth in a clinical context) are not applicable to this particular 510(k) submission.

    Here's a breakdown of the requested information based on the provided text:


    1. A table of acceptance criteria and the reported device performance

    The document does not explicitly present specific, measurable acceptance criteria in a table format that would typically be used for clinical performance. Instead, it refers to internal verification and validation against "requirement specification" and "user needs."

    Acceptance Criteria (inferred from non-clinical testing) | Reported Device Performance (summary of non-clinical and performance testing)
    Conformance to applicable technical requirement specifications | "Results from verification and validation testing demonstrate that conformance to applicable technical requirement specification...have been met."
    Conformance to user needs | "...and user needs have been met."
    Functionality and performance of new and existing features | "Testing in the form of module, integration and system level verification was performed to evaluate the performance and functionality of the new and existing features against requirement specification."
    Effective risk control measures for safety-related functions | "The design and usability validation was also made to ensure that the risk control measures associated with functions related to safety (FRS) for the new functionality was effective."
    Device is "confident and stable" | "...and that the system is confident and stable."
    Maintains fundamental functionality and technical characteristics of the predicate | "The fundamental functionality and technical characteristics of the device are the same as the predicate device, K151666."
    Supports Leksell Gamma Knife® Icon and Perfexion | New feature: added support for both Leksell Gamma Knife® Icon and Leksell Gamma Knife Perfexion.
    Supports non-square images for pre-planning and follow-up | New feature: possible to import and co-register non-square images.
    Volumetric margin expansion tool functions as intended | New feature: a new margin tool allows for performing volumetric expansion of delineated volumes (see the sketch after this table).
    DICOM import improvements (layout, identification, USB, attribute viewing) | New features: improved DICOM import dialog with image preview, easier navigation, series description display and assignment, USB import, and viewing of DICOM attributes for imported series.
    Upgrade to CentOS 7.1 from CentOS 5.8 | New feature: operating system upgraded from CentOS 5.8 to CentOS 7.1.
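
    The margin-tool row above refers to volumetric expansion of a delineated volume. A minimal sketch of that concept, assuming a simple isotropic binary dilation (not Leksell GammaPlan's implementation), is shown below; the function name, voxel size, and margin value are hypothetical.

```python
# Minimal sketch of volumetric margin expansion of a delineated volume,
# implemented as isotropic binary dilation. Illustrative only; this is not
# Leksell GammaPlan's margin tool.
import numpy as np
from scipy import ndimage

def expand_margin(volume_mask, margin_mm, voxel_mm=1.0):
    """Grow a binary delineation outward by roughly margin_mm in every direction."""
    iterations = max(1, int(round(margin_mm / voxel_mm)))
    structure = ndimage.generate_binary_structure(rank=3, connectivity=1)
    return ndimage.binary_dilation(volume_mask, structure=structure,
                                   iterations=iterations)

# Hypothetical delineated target: a 5 mm-radius sphere on a 1 mm voxel grid
z, y, x = np.indices((40, 40, 40))
target = (z - 20) ** 2 + (y - 20) ** 2 + (x - 20) ** 2 <= 5 ** 2

expanded = expand_margin(target, margin_mm=3.0)
print(f"Voxels before: {target.sum()}, after a 3 mm margin: {expanded.sum()}")
```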

    2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)

    • Sample size for test set: Not applicable (no patient-based clinical test set or data described).
    • Data provenance: Not applicable (no patient-based clinical data described).

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)

    • Number of experts: Not applicable (no clinical test set requiring expert ground truth).
    • Qualifications of experts: The document mentions that "competent and professionally qualified personnel" performed the design and usability validation, but it provides no further detail on their specific qualifications or on any role in establishing ground truth for a test set. This most likely refers to internal development and quality assurance personnel rather than external clinical experts.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set

    • Adjudication method: Not applicable (no clinical test set described).

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI versus without AI assistance

    • MRMC study: No, an MRMC comparative effectiveness study was not done. The device is a treatment planning system, not an AI-assisted diagnostic tool for human readers.
    • Effect size: Not applicable.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    • Standalone performance: The non-clinical testing included "module, integration and system level verification" against requirements. This can be interpreted as evaluating the algorithm's (software's) performance in a standalone context, ensuring its internal logic and functions work as specified. However, this is not a diagnostic algorithm's standalone performance, but rather a planning system's functional performance.
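
    To make "module level verification against requirement specification" concrete, a hedged sketch follows: a hypothetical requirement (invented ID "REQ-MARGIN-001") checked with a unit test against the toy margin-expansion function from the earlier sketch. None of these names come from the submission.

```python
# Hypothetical illustration of module-level verification against a requirement
# specification. The requirement ID, function, and checks are invented for this
# sketch and do not come from the submission.
import unittest
import numpy as np
from scipy import ndimage

def expand_margin(volume_mask, margin_mm, voxel_mm=1.0):
    """Toy margin-expansion module (same sketch as in the table section above)."""
    iterations = max(1, int(round(margin_mm / voxel_mm)))
    return ndimage.binary_dilation(volume_mask, iterations=iterations)

class TestMarginToolRequirement(unittest.TestCase):
    """'REQ-MARGIN-001' (hypothetical): an expanded volume must contain the
    original delineation and must be strictly larger than it."""

    def test_expansion_contains_and_grows_original(self):
        z, y, x = np.indices((32, 32, 32))
        target = (z - 16) ** 2 + (y - 16) ** 2 + (x - 16) ** 2 <= 4 ** 2
        expanded = expand_margin(target, margin_mm=2.0)
        self.assertTrue(np.all(expanded[target]))          # original volume is covered
        self.assertGreater(expanded.sum(), target.sum())   # volume strictly increases

if __name__ == "__main__":
    unittest.main()
```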

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    • Type of ground truth: For the non-clinical testing, the "ground truth" would be the software requirements specifications and design documents. The validation sought to prove the system performed according to these pre-defined specifications. There is no mention of clinical ground truth (expert consensus, pathology, outcomes data) as part of this submission.

    8. The sample size for the training set

    • Sample size for training set: Not applicable. This device is a treatment planning system and not an AI/ML device that requires a training set in the conventional sense for a diagnostic algorithm. Its development would involve software engineering and testing principles, not machine learning model training.

    9. How the ground truth for the training set was established

    • Ground truth for training set: Not applicable. As with the previous point, there's no mention of a training set or associated ground truth establishment methods for an AI/ML model.