Search Results

Found 3 results

510(k) Data Aggregation

    K Number
    K231825
    Device Name
    Panther TPS
    Manufacturer
    Date Cleared
    2023-12-15

    (177 days)

    Product Code
    Regulation Number
    892.5050
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    The purpose of Panther Treatment Planning System is to provide a complete program for radiation therapy treatment planning. This includes computation, display, evaluation, and output documentation of radiation dose estimations to be submitted for independent clinical review and judgment prior to use. The device provides data in the form of display, hardcopy prints and/or plots to guide a physician in selecting the optimum patient treatment plan. Prowess products are radiotherapy treatment planning programs that allow radiation physicists and dosimetrists to optimize the delivery of radiation in treating cancer and related diseases.

    The radiation therapy treatment planning system provides two-dimensional and three-dimensional planning software for external (photon and electron) treatments using linear accelerators and cobalt-60 beams. The software includes several optional modules which are licensed to users.

    IMRT – the IMRT module provides treatment planning for intensity-modulated radiation therapy (IMRT) using external photon beams.

    RealART – the RealART module is intended to provide online correction of the position and shape of the beam portals based on the images acquired on the treatment day when the patient is in the treatment position.

    ProArc – the ProArc module is intended to support treatment planning by creating treatment plans for intensity-modulated arc radiation therapy.

    Stereotactic – the Stereotactic module is intended to support highly advanced precision-targeted radiation planning.

    Device Description

    Panther TPS is a three-dimensional treatment planning system for external beam radiation. The software is intended to assist in the relative positioning of radiation therapy treatment devices by predicting the three-dimensional isodose distributions that would be delivered for a particular device setting. It includes computation, display, evaluation and output documentation of radiation dose estimates to be submitted for independent clinical review and judgment prior to use.
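    The isodose-prediction step described above can be illustrated with a deliberately toy dose model. This is not Prowess's algorithm (real planning systems use convolution/superposition or Monte Carlo methods); it is only a sketch combining inverse-square falloff with exponential attenuation along the beam axis:

```python
import math

def toy_point_dose(depth_cm, off_axis_cm=0.0, sad_cm=100.0, mu_per_cm=0.05):
    """Toy photon dose estimate at a point: inverse-square falloff from the
    source plus exponential attenuation with depth. Illustrative only; real
    TPS dose engines are far more sophisticated."""
    if abs(off_axis_cm) > 5.0:          # crude 10 cm-wide field edge
        return 0.0
    distance = sad_cm + depth_cm        # source-to-point distance
    inverse_square = (sad_cm / distance) ** 2
    attenuation = math.exp(-mu_per_cm * depth_cm)
    return inverse_square * attenuation

# Relative depth-dose along the central axis
for d in (0, 5, 10, 20):
    print(f"depth {d:2d} cm -> relative dose {toy_point_dose(d):.3f}")
```

    A real system evaluates a model of this general character (with measured beam data and patient CT densities) over a full 3-D grid to produce the isodose distributions the physician reviews.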

    Panther TPS is a Microsoft Windows-based treatment planning system that includes several modules, each of which has a specific function or series of functions.

    The significant change associated with this release is a platform change, adding support for Windows Server.

    AI/ML Overview

    The provided document, a 510(k) summary for the "Panther TPS" device, does not contain specific acceptance criteria or an explicit "study" proving the device meets performance criteria in the typical sense of a clinical or comparative study. Instead, it relies on demonstrating substantial equivalence to a predicate device (Panther Stereotactic, K193459) through non-clinical testing, hazard analysis, and adherence to quality standards.

    The key information regarding performance assessment is detailed in sections 8 and 9 of the 510(k) summary: "Summary of Non-clinical Tests" and "Performance Testing." These sections focus on verification and validation of the software and comparison to the predicate device using identical test cases.

    Therefore, many of the requested items (sample size, expert qualifications, adjudication methods, MRMC studies, effect sizes, specific ground truth types) are not applicable or not explicitly detailed in the provided K231825 document, as it focuses on software validation and substantial equivalence rather than a clinical performance study.

    Here's a breakdown of the information that is available or can be inferred, and where the document lacks the requested detail:


    Acceptance Criteria and Device Performance (Based on available information):

    Acceptance Criteria (Implied) | Reported Device Performance
    Functional Equivalence: Performs all functions as intended. | "Verification and validation testing has demonstrated substantially equivalent performance to the predicate device's functions as intended."
    Safety and Effectiveness: No new safety/effectiveness concerns. | "found to perform as intended and the benefits to patient and user outweigh any inherent risks," and "Its use does not raise any new or different safety and effectiveness concerns when compared to the predicate."
    Hazard Mitigation: All identified hazards are prevented/mitigated. | "A hazard analysis was conducted, and associated documentation is included in this submission. Methods for preventing and/or mitigating defined hazards have been included as well."
    Software Quality: Compliance with relevant software standards. | "Panther TPS was designed and implemented according to established Prowess Inc. design and development, as well as quality management, procedures... complies with internationally recognized standards including ISO 14971, IEC 62304, and IEC 62083."
    Regression Testing: Changes do not negatively impact other areas. | "relevant regression testing was conducted by Prowess Quality Assurance to ensure that changes to the software did not result in any unanticipated, negative impact on other areas of the software."
    Predicate Equivalence: Identical performance on validation test cases. | "Identical validation test cases were performed on both the device and the predicate, which demonstrates substantial equivalence and proves that no new issues of safety and effectiveness have been introduced."

    Detailed Breakdown of Study Information:

    1. A table of acceptance criteria and the reported device performance: See table above. The acceptance criteria are largely inferred from the general requirements for substantial equivalence and software validation.

    2. Sample size used for the test set and the data provenance:

      • Sample Size: Not specified in terms of number of patient cases or specific test data sets. The document states "established test plans and protocol" and "identical validation test cases." This implies a set of predetermined test cases, but the quantity is not provided.
      • Data Provenance: Not explicitly stated. Given it's a software validation for a treatment planning system, the "data" would likely be simulated patient data, phantoms, or anonymized clinical data used for testing dose calculations and other functions. No country of origin is mentioned. The testing described is "in-house." The nature is implicitly retrospective as it involves testing a developed software against pre-defined scenarios and a predicate.
    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • Number of Experts: Not specified.
      • Qualifications: "clinical physicists contracted by Prowess." No further detail regarding years of experience or board certification is provided. Their role was in "verif[ying]" the adequacy of methods for mitigating potential risks.
    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

      • Adjudication Method: Not applicable or not specified. This type of adjudication is typically for subjective assessments in clinical studies (e.g., image interpretation by multiple readers). The testing described here is primarily a technical verification and validation of software functionality and dose calculation accuracy against established norms or the predicate.
    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:

      • MRMC Study: No. The Panther TPS is a treatment planning system, not an AI-assisted diagnostic device where human reader improvement would be measured via MRMC. The study described is a non-clinical software validation demonstrating substantial equivalence.
      • Effect Size: Not applicable.
    6. If a standalone study (i.e., algorithm-only performance without a human in the loop) was done:

      • Standalone Performance: Yes, in essence. The "Performance Testing" refers to the software's inherent ability to perform calculations and generate plans. The document states: "The software has been verified and validated based on established testing plans. The functionalities have been tested by in-house test engineers." This is a confirmation of the algorithm's performance independent of a human operator's influence on the algorithm's internal execution (though human users input data and make judgments based on the output).
    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

      • Type of Ground Truth: For a radiation therapy treatment planning system, the "ground truth" would generally involve:
        • Physical Phantoms/Measurements: Comparing calculated dose distributions to actual measurements in phantoms.
        • Analytical/Theoretical Calculations: Comparing software calculations to established physics principles or gold-standard analytical solutions.
        • Predicate Device Output: Demonstrating that the new device produces results identical to a legally marketed predicate device for the same inputs (as stated: "Identical validation test cases were performed on both the device and the predicate").
        • Clinical Physicist Review: "verified by clinical physicists" implies expert review of the outputs and methods.
          The document emphasizes the "identical validation test cases" proving substantial equivalence to the predicate, implying the predicate's performance serves as a key reference for correctness.
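    The predicate-comparison approach (identical test cases run on both device and predicate) can be sketched as a dose-grid comparison. The function below is hypothetical: real commissioning and V&V would typically use gamma analysis, but even a simple maximum-relative-difference check conveys the idea:

```python
def max_relative_difference(device_dose, predicate_dose, floor=1e-6):
    """Compare two dose grids (flat lists of dose values at matched points)
    and return the largest difference relative to the predicate's maximum
    dose. A V&V harness would assert this stays under a tolerance."""
    if len(device_dose) != len(predicate_dose):
        raise ValueError("dose grids must share the same sampling points")
    ref_max = max(predicate_dose) or floor
    return max(abs(d - p) / ref_max
               for d, p in zip(device_dose, predicate_dose))

# Hypothetical regression check against the predicate's stored output
device    = [0.0, 1.98, 3.01, 2.00]
predicate = [0.0, 2.00, 3.00, 2.00]
assert max_relative_difference(device, predicate) <= 0.01  # within 1% of max dose
```

    Under this scheme, the predicate's stored outputs serve as the reference for correctness, which matches the document's emphasis on "identical validation test cases."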
    8. The sample size for the training set:

      • Training Set Sample Size: Not applicable. The Panther TPS is not described as an AI/ML device that requires a training set in the conventional sense (e.g., for pattern recognition or image segmentation models). It's a deterministic software for dose calculation and treatment planning. The "training" in this context would be related to algorithm development and parameterization, not machine learning model training with data.
    9. How the ground truth for the training set was established:

      • Training Set Ground Truth: Not applicable, as it's not an AI/ML device with a training set.

    K Number
    K211760
    Device Name
    Panther OIS
    Manufacturer
    Date Cleared
    2021-09-28

    (112 days)

    Product Code
    Regulation Number
    892.5050
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    Panther OIS is an information management system used to manage medical data and clinical workflow in a hospital or clinic. To support radiation oncology users, it allows the user to:

    • Enter or import, modify, store and archive treatment plans and images from treatment planning systems.
    • Import, view, manipulate, enhance, annotate, store, and archive radiological images.
    • Select and provide radiation treatment plans to a radiation treatment delivery system for treatment.
    • Store and view treatment records provided by the radiation treatment delivery system.
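    The import/store/deliver/record workflow listed above can be illustrated with a minimal in-memory sketch. All class and field names here are hypothetical, not Prowess's actual schema, and a real OIS exchanges DICOM RT objects rather than plain Python records:

```python
from dataclasses import dataclass, field

@dataclass
class TreatmentPlan:
    plan_id: str
    patient_id: str
    beams: list          # beam/arc (or shot) definitions from the TPS

@dataclass
class RecordStore:
    """Minimal sketch of an OIS plan/record store: import, select, record."""
    plans: dict = field(default_factory=dict)
    treatment_records: list = field(default_factory=list)

    def import_plan(self, plan: TreatmentPlan):
        self.plans[plan.plan_id] = plan            # store and archive

    def select_for_delivery(self, plan_id: str) -> TreatmentPlan:
        return self.plans[plan_id]                 # hand off to delivery system

    def record_delivery(self, plan_id: str, delivered_beams: list):
        self.treatment_records.append({"plan": plan_id, "beams": delivered_beams})

store = RecordStore()
store.import_plan(TreatmentPlan("P1", "PT-001", beams=["AP", "PA"]))
plan = store.select_for_delivery("P1")
store.record_delivery(plan.plan_id, plan.beams)
```

    The point of the sketch is the separation of concerns: the OIS stores and routes plans and records, while dose computation stays in the TPS and delivery stays in the treatment machine.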
    Device Description

    Panther OIS is an information management system used to organize medical data and facilitate the clinic workflow in a hospital or clinic. It is built on Windows as a standard thin client-server system with a centralized database.

    Panther OIS has incorporated and enhanced the previously cleared Prowess Panther OIS/R&V (K122616) into its framework, so the user is able to:

    • Record patient-related information, especially radiation treatment planning and records.
    • Schedule patients, medical resources and any type of activity.
    • Capture procedure codes that will be used for billing.
    • Generate reports for statistics purposes.
    • Complete their tasks easily using the new ribbon UI that supports their workflow.
    • Import treatment plans.
    • Record patient treatment histories.
    • Deliver treatment plans on radiation treatment delivery systems that have a DICOM interface for both external and stereotactic plans.
    • Import, access, store and archive radiological images and review images to verify delivered treatment.
    • The client is updated to a thin client so that it can be installed on more supported devices.
    • Support for stereotactic treatment workflow, which is similar to the external beam workflow but uses a shot concept instead of a beam/arc concept.

    Changes within the scope of this 510(k):

    • Addition of support for stereotactic planning
    • Addition of thin-client support, running resources stored on a central server instead of its localized hard drive
    • R&V-related functionalities have been removed
    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study information for the Panther OIS device, based on the provided FDA 510(k) summary:

    The Panther OIS is an information management system for radiation oncology. It is not an AI-powered diagnostic device, and therefore, many of the typical AI-related study components (like expert ground truth, MRMC studies, or specific performance metrics like sensitivity/specificity) are not directly applicable or reported in this type of submission.

    The acceptance criteria primarily revolve around the system's functionality, safety, and adherence to established software development and quality standards, demonstrating substantial equivalence to predicate devices.


    1. Table of Acceptance Criteria and Reported Device Performance

    Given that this is an Oncology Information System (OIS) focusing on data management and workflow, the "acceptance criteria" are primarily related to its ability to perform its defined functions reliably and safely, matching or exceeding the predicate devices. Performance is demonstrated through successful verification and validation (V&V) testing.

    Acceptance Criteria (Implied from Summary) | Reported Device Performance (Summary of V&V Results)
    Functional Equivalence to Predicate Devices: |
    - Patient Chart/Record Management | Yes (performs as intended)
    - DICOM Import/Export (including plans with beams/arcs/shots) | Yes (performs as intended)
    - Treatment Plans (Import, Store, Access, Modify, Archive) | Yes (performs as intended)
    - Images (Import, Store, Access, Modify, Archive) | Yes (performs as intended; an enhancement over the primary predicate, which did not have this feature for images)
    - Image Review | Yes (performs as intended; an enhancement over the primary predicate)
    - Treatment Machine Characterization | Yes (performs as intended)
    - Scheduling | Yes (performs as intended)
    - Activity Capture | Yes (performs as intended; an enhancement over the primary predicate)
    - Support for Stereotactic Planning | Yes (performs as intended; new addition in the subject device)
    Software Quality and Safety: |
    - Adherence to Predetermined Specifications | Demonstrated by verification and validation (V&V) testing.
    - Substantially Equivalent Performance to Predicate Devices | Demonstrated by V&V testing and comparison of features.
    - Operation as Intended | Demonstrated by V&V testing.
    - Safety and Effectiveness for Specified Use | Demonstrated by hazard analysis, V&V testing (in-house and external), and user site testing, confirming that the benefits outweigh risks.
    - Compliance with Software Development Standards (ISO 14971, IEC 62304) | Stated compliance.
    - No Unanticipated Negative Impact from Changes (Thin-Client, Stereotactic) | Relevant regression testing was conducted.
    User Experience and Clinical Integration: |
    - Performance in Clinical Environment (safety & feedback) | Verified through external testing by "OUR United" and user site (beta-site) testing using clinical cases. This testing confirmed the software is safe and effective in a clinical environment.

    2. Sample Size for the Test Set and Data Provenance

    • Test Set Sample Size: The document does not specify a numerical "sample size" in terms of cases or patients for the software testing. Instead, it refers to:
      • "Established test plans and protocol" for in-house V&V.
      • External testing by "OUR United" to verify safety and performance under conditions equivalent to an actual clinical environment.
      • "Our beta-site using clinical cases" for additional in-field testing.
    • Data Provenance:
      • External testing by "OUR United": The specific country of origin is not mentioned.
      • Beta-site testing: Not specified.
      • Retrospective or Prospective: Not explicitly stated, but the use of "clinical cases" during beta-site testing implies retrospective data or data collected prospectively during the test period. The functionality of an OIS would typically be tested with existing clinical data or simulated patient data.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications

    • Ground Truth for OIS Functionality: The concept of "ground truth" for an OIS is different from that for an AI diagnostic algorithm. For an OIS, ground truth relates to the accuracy of data handling, workflow execution, and proper interfacing. This is typically established through:
      • Defined specifications: The software correctly implements its intended functions.
      • Comparison to predicate devices: The functionality matches or improves upon what is already cleared.
      • Clinical expert review: In this case, "clinical physicists contracted by Prowess" were involved in verifying the adequacy of risk mitigation. This implies their expertise was used to validate the safety and functional integrity of the system in a clinical context.
    • Number of Experts & Qualifications:
      • The document mentions "clinical physicists contracted by Prowess" were involved in verifying the adequacy of risk mitigation. The specific number of these physicists is not provided.
      • Their qualification is stated as "clinical physicists," implying expertise in the practical application and safety aspects of radiation oncology systems.

    4. Adjudication Method for the Test Set

    The document does not describe a traditional adjudication method (like 2+1 or 3+1) because the study is not focused on evaluating human-level diagnostic performance or AI algorithm output agreement. Instead, verification and validation involved:

    • In-house testing: By "in-house test engineers."
    • External testing: By "OUR United" for performance in equivalent clinical conditions.
    • Beta-site testing: By "our beta-site using clinical cases" to confirm safety and effectiveness in a clinical environment.

    The "adjudication" is implicitly done by confirming that the software meets its predetermined specifications and functions as intended without errors, as evaluated by these different testing groups.


    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done

    • No, an MRMC comparative effectiveness study was not done.
      • This type of study is typically conducted for AI-powered diagnostic devices to assess how the AI assists human readers (e.g., radiologists) in their diagnostic tasks.
      • The Panther OIS is an information management system, not an AI diagnostic algorithm, so an MRMC study is not relevant to its clearance.

    6. If a Standalone Study (i.e., Algorithm-Only Performance Without a Human in the Loop) Was Done

    • Yes, in essence, standalone performance testing was done for the software's functional adherence to specifications.
      • The "verification and validation of the software was performed in-house according to established test plans and protocol." This represents the core "standalone" performance testing of the software's functionalities (e.g., can it import a DICOM plan correctly, can it store a patient record, can it generate a report?).
      • The "functional testing was conducted both in-house and by OUR United."
      • It's important to distinguish that "standalone performance" for an OIS means proving the software itself works as intended according to its functional and safety requirements, not demonstrating a diagnostic accuracy metric.

    7. The Type of Ground Truth Used

    For an OIS, the "ground truth" is primarily based on:

    • Predetermined Specifications: The software is designed to perform specific functions according to a set of requirements. The "ground truth" is that the software correctly executes these requirements.
    • Predicate Device Functionality: The Panther OIS demonstrated "substantial equivalence" to predicate devices, meaning its functional behavior aligns with or improves upon established, legally marketed systems.
    • Clinical Expert Verification: "Clinical physicists contracted by Prowess" validated risk mitigation, indirectly confirming the system's safe and effective operation within clinical workflows.

    There isn't a "pathology" or "outcomes data" type of ground truth in the diagnostic sense, as this device doesn't make diagnostic calls.


    8. The Sample Size for the Training Set

    • Not applicable / not provided: The Panther OIS is an information management system, not an AI algorithm that undergoes a "training" phase with a dataset. Therefore, there is no "training set sample size" as would be seen for a machine learning model. Its development follows traditional software engineering principles.

    9. How the Ground Truth for the Training Set Was Established

    • Not applicable: As explained above, this device does not utilize a "training set" in the context of machine learning. The "ground truth" for its development is its functional specifications, compliance with regulatory standards, and established best practices in oncology information systems and software engineering.

    K Number
    K193459
    Manufacturer
    Date Cleared
    2020-04-27

    (133 days)

    Product Code
    Regulation Number
    892.5050
    Reference & Predicate Devices
    Predicate For
    Intended Use

    Panther Stereotactic is intended to support highly advanced precision-targeted radiation planning.

    Device Description

    Panther Stereotactic is an optional software module that has been added to the existing Prowess Panther Treatment Planning System to support planning with multiple shots instead of beams/arcs. Each shot is defined as a full or partial arc of one or multiple radiation sources with different collimator sizes, depending on the machine configuration and delivery type. Stereotactic forward planning extends Prowess Panther's existing arc definition features to define shot parameters such as location, size and arc angles. Stereotactic inverse planning extends Prowess Panther's existing simulated annealing algorithm to find optimal shot parameters such as location, size and arc angles.
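    The summary names simulated annealing as the inverse-planning engine. A generic simulated-annealing loop over a shot position might look like the sketch below; the objective, cooling schedule, and move sizes are invented for illustration, whereas the real system scores dose coverage and conformity and also optimizes shot size and arc angles:

```python
import math
import random

def anneal_shot_position(cost, start, steps=2000, t0=1.0, seed=0):
    """Generic simulated annealing over a 3-D shot position (x, y, z).
    `cost` scores a candidate position (lower is better). Linear cooling
    and Gaussian moves are arbitrary illustrative choices."""
    rng = random.Random(seed)
    current, current_cost = list(start), cost(start)
    best, best_cost = list(current), current_cost
    for step in range(steps):
        temp = t0 * (1.0 - step / steps) + 1e-9      # linear cooling
        candidate = [c + rng.gauss(0, 0.5) for c in current]
        cand_cost = cost(candidate)
        # Accept better moves always; worse moves with Boltzmann probability
        if cand_cost < current_cost or rng.random() < math.exp((current_cost - cand_cost) / temp):
            current, current_cost = candidate, cand_cost
            if current_cost < best_cost:
                best, best_cost = list(current), current_cost
    return best, best_cost

# Toy objective: squared distance from a hypothetical target centre at (1, 2, 3)
target = (1.0, 2.0, 3.0)
pos, c = anneal_shot_position(lambda p: sum((a - b) ** 2 for a, b in zip(p, target)),
                              start=[0.0, 0.0, 0.0])
```

    Annealing is attractive here because the shot-placement objective is non-convex (many local optima from overlapping dose spheres), and the occasional acceptance of worse moves lets the search escape them.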

    AI/ML Overview

    This document is a 510(k) summary for the Panther Stereotactic device by Prowess, Inc. It describes a software module intended to support highly advanced precision-targeted radiation planning.

    Based on the provided text, a comprehensive study proving the device meets specific acceptance criteria, with the details requested, is not explicitly detailed. The document focuses on demonstrating substantial equivalence to predicate devices rather than proving performance against specific quantitative acceptance criteria through a dedicated clinical study with detailed metrics.

    However, we can infer information about the non-clinical testing and general verification/validation activities that serve to establish the device's safety and effectiveness.

    Here’s a breakdown based on the available information, noting where specific details are not provided:


    Acceptance Criteria and Device Performance (Inferred from Non-Clinical Testing):

    Since specific quantitative acceptance criteria for performance are not explicitly stated in a table format in the provided text, we must infer them based on the description of the verification and validation (V&V) activities. The primary acceptance criterion appears to be functional correctness, safety, and effectiveness equivalent to predicate devices within a radiation planning context.

    Acceptance Criteria (Inferred) | Reported Device Performance (Summary from Non-Clinical Tests)
    Functional Correctness: The software performs as intended for stereotactic planning, including shot parameter definition and optimization. | "Verification and validation of the software was performed in-house according to established test plans and protocol... Functional testing was conducted both in-house and by OUR New Medical Technologies Ltd. ... Verification and validation testing has demonstrated that Panther Stereotactic has met its predetermined specifications, demonstrated substantially equivalent performance to the predicate devices, functions as intended..."
    Safety: Risk mitigation successfully addresses identified hazards, and the software does not introduce new safety concerns. | "A hazard analysis was conducted, and associated documentation has been included. Methods for preventing and/or mitigating defined hazards are detailed... A comprehensive risk analysis has been conducted. Detailed methods of mitigating these potential risks have been identified by the development team, and verified by clinical physicists contracted by Prowess and determined to be adequate." "Its use does not raise any new or different safety and effectiveness concerns when compared to the predicates."
    Effectiveness: The device is effective for precision-targeted radiation planning, comparable to predicate devices. | "...demonstrated substantially equivalent performance to the predicate devices... functions as intended, and is safe and effective for its specified use." "This testing has confirmed that the software is safe and effective in a clinical environment." "The software has been found to perform as intended and the benefits to patient and user outweigh any inherent risks..."
    No Unanticipated Negative Impact (Regression Testing): Changes to the software do not negatively affect other areas. | "In addition, relevant regression testing was conducted by Prowess Quality Assurance to ensure that changes to the software did not result in any unanticipated, negative impact on other areas of the software."
    User Environment Performance: Software performs well in a clinical use setting. | "Although clinical testing is not required to demonstrate substantial equivalence... we elected to conduct beta testing by OUR New Medical Technologies Ltd. to perform stereotactic planning under conditions equivalent to that of an actual clinical environment, in order to obtain feedback and to verify the results of in-house testing in a user environment... the system was also tested by our beta-site using clinical cases."
    Compliance with Standards: Adherence to relevant medical device and software standards. | "design and development of the medical device software complies with internationally recognized standards including ISO 14971:2007 Medical devices – Application of risk management to medical devices, IEC 62304 Medical device software life cycle processes, and IEC 62083 Medical electrical equipment – Requirements for the safety of radiotherapy treatment planning systems."

    1. Sample sizes used for the test set and the data provenance:

    • Test Set Sample Size: The document mentions "clinical cases" for beta testing and "established test plans" for in-house verification. However, specific numerical sample sizes for the test set (number of cases/patients) are not provided.
    • Data Provenance: The beta testing was conducted by "OUR New Medical Technologies Ltd.", which implies it was an external site, likely a clinical environment. No specific country of origin is mentioned for the data, nor is it explicitly stated whether the cases were retrospective or prospective, though "clinical environment" and "clinical cases" suggest they were real-world patient data. In-house testing uses unspecified "test plans."

    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • The document states that "clinical physicists contracted by Prowess" verified the adequacy of risk mitigation methods. For the beta site testing, "OUR New Medical Technologies Ltd." performed the testing, implying that their clinical staff (likely medical physicists, radiation oncologists, or dosimetrists) were involved in generating and evaluating results.
    • Specific numbers or qualifications (e.g., years of experience) of these experts are not provided.

    3. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

    • The document does not describe any formal adjudication method for establishing ground truth or evaluating test results, such as a consensus process among multiple readers. The emphasis is on testing by the beta site and verification by Prowess's internal teams and contracted physicists.

    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:

    • No MRMC study was conducted or reported. The device is a radiation therapy treatment planning system, not explicitly described as an AI-assisted diagnostic or decision support tool where human reader improvement would be typically measured. The focus is on the software's ability to plan treatment, not on improving human diagnostic accuracy.

    5. If a standalone study (i.e., algorithm-only performance without a human in the loop) was done:

    • The software's core function is planning. The "functional testing" and "verification and validation" would inherently involve evaluating the algorithm's output (e.g., dose calculations, shot parameter optimization) in a "standalone" sense, though it's always within the context of a treatment planning workflow.
    • The document states, "The software has been verified and validated based on established testing plans. The functionalities have been tested by in-house test engineers." This suggests standalone performance evaluation of the algorithms.

    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • For a treatment planning system, "ground truth" would primarily relate to the accuracy of dose calculations against established physics models or benchmarks, and the clinical feasibility/quality of the generated treatment plans.
    • The ground truth seems to have been established through a combination of:
      • "Predetermined specifications" for functional testing.
      • Comparison to "substantially equivalent performance to the predicate devices."
      • Verification by "clinical physicists contracted by Prowess."
      • Feedback and verification based on "clinical cases" from the beta site, implying a reference to clinical standards of care and expected plan quality.
    • No pathology or patient outcomes data were used as ground truth for this clearance.

    7. The sample size for the training set:

    • The document describes the device as a "software module that has been added to the existing Prowess Panther Treatment Planning System." It refers to extending existing algorithms ("Panther's existing arc definition features" and "existing simulated annealing algorithm").
    • There's no mention of a separate "training set" in the context of machine learning, nor any indication that this module is an AI/ML product developed using a training set. The descriptions point to deterministic algorithms for planning.

    8. How the ground truth for the training set was established:

    • As the device is described as an extension of existing deterministic algorithms (not an AI/ML model with a 'training set'), this question is not applicable based on the provided information.
