510(k) Data Aggregation
(99 days)
The Prowess Panther ProArc module is intended to support radiation treatment planning by creating treatment plans for intensity-modulated arc radiation therapy.
Panther ProArc is an optional software module for the Prowess Panther radiation therapy treatment planning system. It is an extension of the inverse-planning IMRT capability provided by Prowess Panther (previously cleared under K032456). Panther ProArc includes tools for visualizing and creating arc therapy plans, defining arc therapy beam properties and constraints, and allowing the user to export these plans via the DICOM protocol to the linear accelerator for treatment delivery.
The provided text does not contain specific information about acceptance criteria, detailed device performance metrics, or a formal study with statistical data for the Prowess Panther ProArc module. The document is a 510(k) summary for regulatory clearance, primarily focusing on demonstrating substantial equivalence to predicate devices rather than proving specific performance against predefined acceptance criteria.
However, based on the information provided, here's a breakdown of what can be inferred and what is not available:
1. A table of acceptance criteria and the reported device performance
The document does not provide a table of acceptance criteria with corresponding performance metrics. It generally states that the device "met its predetermined specifications" and "demonstrated substantially equivalent performance to the predicate devices, functions as intended, and is safe and effective for its specified use."
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
- Sample Size for Test Set: Not explicitly stated. The document mentions "real patient cases" were used during beta testing, but the number of cases is not specified.
- Data Provenance: The beta testing was conducted at "Medical College of Wisconsin and Huntsman Cancer Hospital," suggesting data from the USA.
- Retrospective or Prospective: Not explicitly stated. The phrase "using real patient cases" could imply either.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)
- Number of experts: The "medical physicists at Medical College of Wisconsin and Huntsman Cancer Hospital" were involved in functional testing and beta testing. Additionally, "clinical physicists contracted by Prowess" were involved in verifying risk mitigation. The exact number and their specific roles in establishing ground truth for a test set are not detailed.
- Qualifications of experts: They are referred to as "medical physicists" and "clinical physicists." Specific years of experience or board certifications are not provided.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
The document does not describe any specific adjudication method for establishing ground truth from multiple experts.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI assistance versus without
A formal MRMC comparative effectiveness study, as typically understood for AI-assisted diagnostic tools, was not conducted or reported. This device is a treatment planning system, not a diagnostic AI. The evaluation focused on substantial equivalence to existing treatment planning systems, not on improving human reader performance.
6. If a standalone study (i.e., algorithm-only performance without a human in the loop) was done
The "verification and validation of the software was performed in-house according to established test plans and protocol," which included "functional testing." This internal testing would represent the standalone performance of the algorithm. Additionally, "beta testing at Medical College of Wisconsin and Huntsman Cancer Institute using real patient cases" also evaluated the software's performance, likely in a user environment.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The document implies that the ground truth for evaluating the treatment plans generated by Panther ProArc was established by comparing its output with "predetermined specifications" and by showing "substantially equivalent performance" to predicate devices. This would likely involve:
- Physics-based calculations: Verification of dose distribution, dose-volume histograms (DVH), and other dosimetric parameters against expected values or those generated by the predicate devices.
- Clinical expert review: Medical physicists and clinical physicists would review the generated plans for clinical acceptability and adherence to treatment goals.
- Comparison to predicate devices: The "ground truth" for substantial equivalence was largely defined by the performance of the legally marketed predicate devices (Varian's Eclipse and CMS's Monaco RTP System).
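The physics-based verification mentioned above commonly rests on dose-volume histograms (DVHs). As a generic illustration of how such a check is computed (the function name, bin width, and toy geometry below are assumptions, not details from the submission):

```python
import numpy as np

def cumulative_dvh(dose_grid, structure_mask, bin_width=0.5):
    """Cumulative dose-volume histogram (DVH) for one structure.

    dose_grid: array of dose values (Gy) on the planning grid.
    structure_mask: boolean array of the same shape selecting the structure.
    Returns (dose_levels, percent_volume): percent of the structure's volume
    receiving at least each dose level.
    """
    doses = dose_grid[structure_mask]
    levels = np.arange(0.0, doses.max() + bin_width, bin_width)
    volume = np.array([(doses >= d).mean() * 100.0 for d in levels])
    return levels, volume

# Toy grid: 2 Gy inside a cubic target, 0.5 Gy elsewhere
dose = np.full((10, 10, 10), 0.5)
target = np.zeros_like(dose, dtype=bool)
target[3:7, 3:7, 3:7] = True
dose[target] = 2.0
levels, volume = cumulative_dvh(dose, target)
```

Curves of this kind, computed for a plan and for a predicate system's plan on the same case, give the dosimetric comparison a quantitative footing even when the summary itself reports none.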
8. The sample size for the training set
This information is not provided. The document does not discuss a "training set" in the context of machine learning, as this is a radiation therapy treatment planning system, implying a more deterministic or rule-based software, rather than a machine learning model that requires explicit training data.
9. How the ground truth for the training set was established
Not applicable, as a training set for machine learning is not mentioned. For a treatment planning system, the "knowledge base" would be derived from physics principles, clinical guidelines, and potentially pre-defined planning templates or parameters, rather than a "ground truth" derived from a specific dataset.
(161 days)
Panther™ RealART is used to correct geometric mismatches between radiation beams and the treatment target under on-line image-guidance. The optional software module is part of the family of treatment planning products under the trade name Panther™.
PANTHER™ RealART is an optional software module for the Prowess radiation therapy planning software, supporting online image-guided radiation therapy. The Panther™ RealART on-line plan adaptation system consists of a workstation with two Intel Quad-Core Xeon processors (or later) running the Windows XP operating system (or later) and proprietary software that allows trained users to adjust radiation therapy treatment plans based on images acquired on the day of treatment with the patient in the treatment position.
The provided text describes the Panther™ RealART device and its 510(k) submission but does not include a detailed table of acceptance criteria or specific reported device performance metrics against such criteria. The study described is primarily a non-clinical verification and validation study, supplemented by a limited "clinical testing" activity.
Here's a breakdown of the requested information based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly present a table of acceptance criteria with corresponding device performance metrics. Instead, it broadly states that the device "met its specifications" and "demonstrated substantially equivalent performance to the predicate device, and is safe and effective for its intended use."
The closest statements to performance criteria are:
- "results in equivalent or better quality of treatment plans as compared with the use of predicate device as demonstrated in our field tests."
- "the use of Panther™ RealART will improve the accuracy of IMRT treatment delivery."
Without specific numerical targets for "quality of treatment plans" or "accuracy of IMRT treatment delivery," a precise table cannot be constructed.
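Had numerical targets been stated, an "equivalent or better" claim would typically be checked against a handful of standard dosimetric metrics, such as target D95, prescription-dose coverage, and organ-at-risk (OAR) mean dose. The sketch below is purely illustrative; the metric set and function names are assumptions, not criteria from the 510(k):

```python
import numpy as np

def plan_metrics(dose, target_mask, oar_mask, rx_dose):
    """Summarize a plan with a few common dosimetric quality metrics.

    D95: minimum dose received by the hottest 95% of the target volume.
    Vrx: percent of the target receiving at least the prescription dose.
    oar_mean: mean dose in the organ at risk.
    """
    target = dose[target_mask]
    oar = dose[oar_mask]
    return {
        "target_D95": np.percentile(target, 5),  # dose covering 95% of volume
        "target_Vrx": (target >= rx_dose).mean() * 100.0,
        "oar_mean": oar.mean(),
    }

def equivalent_or_better(new, ref):
    """Flag, metric by metric, whether the new plan matches or beats the reference."""
    return {
        "D95_ok": new["target_D95"] >= ref["target_D95"],
        "Vrx_ok": new["target_Vrx"] >= ref["target_Vrx"],
        "oar_ok": new["oar_mean"] <= ref["oar_mean"],
    }
```

With explicit thresholds of this kind, "equivalent or better quality" becomes a testable statement rather than a qualitative judgment.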
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for "Clinical Tests": Four CT datasets, acquired over a total of four fractions from two prostate cases and one pancreas case.
- Data Provenance: The "clinical testing" was conducted at the Medical College of Wisconsin using "real patient cases." This indicates prospective data collection for the purpose of testing the device, though the term "clinical testing" in this context refers to a limited field test rather than a full-scale clinical trial.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications
The document does not specify the number of experts or their qualifications involved in establishing ground truth for the test set. It mentions "medical physicists/dosimetrists at the Medical College of Wisconsin" conducted functional testing and "clinical physicists contracted by Prowess" verified risk mitigation methods. However, it doesn't state if these individuals established the specific ground truth for the "clinical tests" cases, nor does it detail their qualifications (e.g., years of experience).
4. Adjudication Method for the Test Set
The document does not describe any specific adjudication method (e.g., 2+1, 3+1) for establishing ground truth or assessing the device's performance on the test set.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No multi-reader multi-case (MRMC) comparative effectiveness study is mentioned. The "clinical testing" involved a very small sample of cases and did not appear to focus on comparing human reader performance with and without AI assistance.
6. Standalone Performance Study (Algorithm Only Without Human-in-the-Loop Performance)
The primary testing described is focused on the software's functionality and its ability to assist trained users in adjusting treatment plans.
The device's description highlights its components ("Rapid delineation of targets and organs at risk (OAR)," "Segment aperture morphing," "Segment weight optimization"). These are algorithms, and their performance was likely evaluated in a standalone manner during the "Verification and validation of the software... in house according to the Verification and Validation (V&V) Protocol" and functional testing. However, no specific standalone performance metrics (e.g., sensitivity, specificity for contour delineation, or accuracy of segment morphing) are reported in this summary. The "clinical tests" serve to "verify the dose coverage of the target and the correctness of the on-line correction" (which is an output of the algorithm with user input).
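Segment weight optimization of the kind named in the device description can be posed as a non-negative least-squares problem: given each segment's dose contribution per unit weight, find the weights whose combined dose best matches the prescription. The projected-gradient sketch below is a generic illustration under that assumption, not Prowess's actual algorithm:

```python
import numpy as np

def optimize_segment_weights(segment_doses, prescription, n_iters=500):
    """Non-negative least-squares fit of segment weights to a prescription.

    segment_doses: (n_voxels, n_segments) dose per unit weight of each segment.
    prescription: (n_voxels,) desired dose at each voxel.
    Minimizes ||A w - p||^2 subject to w >= 0 via projected gradient descent.
    """
    A, p = segment_doses, prescription
    w = np.zeros(A.shape[1])
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 + 1e-12)  # 1 / Lipschitz constant
    for _ in range(n_iters):
        grad = A.T @ (A @ w - p)
        w = np.maximum(w - step * grad, 0.0)  # project onto w >= 0
    return w
```

The non-negativity constraint reflects the physical fact that a segment cannot deliver negative dose; production systems would add clinical objectives and delivery constraints on top of this basic formulation.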
7. Type of Ground Truth Used
The type of ground truth used for the "clinical tests" is not explicitly defined, but it can be inferred that it involves expert assessment of the correctness of the on-line correction and dose coverage based on actual patient CT images and treatment plans. It is likely a form of expert consensus or expert-defined optimal plan rather than pathology or long-term outcomes data, given the context of radiation therapy planning.
8. Sample Size for the Training Set
The document does not provide any information regarding the sample size used for the training set of the Panther™ RealART algorithms.
9. How Ground Truth for the Training Set Was Established
The document does not provide any information on how the ground truth for the training set was established, as it does not discuss the training phase of the algorithm.