510(k) Data Aggregation (99 days)
The Prowess Panther ProArc module is intended to support radiation treatment planning by creating treatment plans for intensity-modulated arc radiation therapy.
Panther ProArc is an optional software module for the Prowess Panther radiation therapy treatment planning system. It extends the inverse-planning IMRT capability provided by Prowess Panther (previously cleared under K032456). Panther ProArc includes tools for visualizing and creating arc therapy plans, defining arc therapy beam properties and constraints, and exporting these plans via the DICOM protocol to the linear accelerator for treatment delivery.
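The summary does not describe how this export is implemented. Purely as a hedged illustration of the general shape of a DICOM RT Plan object (SOP Class 1.2.840.10008.5.1.4.1.1.481.5), here is a minimal sketch using the pydicom library, which is not part of the device; every label, ID, and beam parameter below is a made-up placeholder, not data from the submission.

```python
# Hypothetical sketch: building and saving a minimal DICOM RT Plan with
# pydicom. The actual Panther ProArc export is proprietary; all values
# here are illustrative placeholders.
from pydicom.dataset import Dataset, FileMetaDataset
from pydicom.uid import ExplicitVRLittleEndian, generate_uid

RT_PLAN_SOP_CLASS = "1.2.840.10008.5.1.4.1.1.481.5"  # RT Plan Storage

meta = FileMetaDataset()
meta.MediaStorageSOPClassUID = RT_PLAN_SOP_CLASS
meta.MediaStorageSOPInstanceUID = generate_uid()
meta.TransferSyntaxUID = ExplicitVRLittleEndian

ds = Dataset()
ds.file_meta = meta
ds.SOPClassUID = RT_PLAN_SOP_CLASS
ds.SOPInstanceUID = meta.MediaStorageSOPInstanceUID
ds.Modality = "RTPLAN"
ds.RTPlanLabel = "ARC_DEMO"   # hypothetical plan label
ds.PatientID = "TEST001"      # hypothetical patient

# One dynamic (arc) photon beam; real plans carry control-point data.
beam = Dataset()
beam.BeamNumber = 1
beam.BeamType = "DYNAMIC"
beam.RadiationType = "PHOTON"
ds.BeamSequence = [beam]

ds.is_little_endian = True   # older pydicom needs these set explicitly;
ds.is_implicit_VR = False    # newer versions derive them from file_meta
ds.save_as("rtplan_demo.dcm", write_like_original=False)
```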
The provided text does not contain specific information about acceptance criteria, detailed device performance metrics, or a formal study with statistical data for the Prowess Panther ProArc module. The document is a 510(k) summary for regulatory clearance, primarily focusing on demonstrating substantial equivalence to predicate devices rather than proving specific performance against predefined acceptance criteria.
However, based on the information provided, here's a breakdown of what can be inferred and what is not available:
1. A table of acceptance criteria and the reported device performance
The document does not provide a table of acceptance criteria with corresponding performance metrics. It generally states that the device "met its predetermined specifications" and "demonstrated substantially equivalent performance to the predicate devices, functions as intended, and is safe and effective for its specified use."
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
- Sample Size for Test Set: Not explicitly stated. The document mentions "real patient cases" were used during beta testing, but the number of cases is not specified.
- Data Provenance: The beta testing was conducted at "Medical College of Wisconsin and Huntsman Cancer Hospital," suggesting data from the USA.
- Retrospective or Prospective: Not explicitly stated. The phrase "using real patient cases" could imply either.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)
- Number of experts: The "medical physicists at Medical College of Wisconsin and Huntsman Cancer Hospital" were involved in functional testing and beta testing. Additionally, "clinical physicists contracted by Prowess" were involved in verifying risk mitigation. The exact number and their specific roles in establishing ground truth for a test set are not detailed.
- Qualifications of experts: They are referred to as "medical physicists" and "clinical physicists." Specific years of experience or board certifications are not provided.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
The document does not describe any specific adjudication method for establishing ground truth from multiple experts.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI assistance versus without it
A formal MRMC comparative effectiveness study, as typically understood for AI-assisted diagnostic tools, was not conducted or reported. This device is a treatment planning system, not a diagnostic AI. The evaluation focused on substantial equivalence to existing treatment planning systems, not on improving human reader performance.
6. If a standalone study (i.e., algorithm-only performance, without a human in the loop) was done
The "verification and validation of the software was performed in-house according to established test plans and protocol," which included "functional testing." This internal testing would represent the standalone performance of the algorithm. Additionally, "beta testing at Medical College of Wisconsin and Huntsman Cancer Institute using real patient cases" also evaluated the software's performance, likely in a user environment.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The document implies that the ground truth for evaluating the treatment plans generated by Panther ProArc was established by comparing its output with "predetermined specifications" and by showing "substantially equivalent performance" to predicate devices. This would likely involve:
- Physics-based calculations: Verification of dose distribution, dose-volume histograms (DVH), and other dosimetric parameters against expected values or those generated by the predicate devices (a minimal DVH sketch follows this list).
- Clinical expert review: Medical physicists and clinical physicists would review the generated plans for clinical acceptability and adherence to treatment goals.
- Comparison to predicate devices: The "ground truth" for substantial equivalence was largely defined by the performance of the legally marketed predicate devices (Varian's Eclipse and CMS's Monaco RTP System).
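To make the DVH comparison referenced above concrete, here is a minimal sketch of a cumulative DVH computed over a synthetic dose grid. It assumes NumPy arrays for the dose grid and structure mask; it is a generic illustration, not code from, or a description of, the Panther ProArc system.

```python
# Minimal cumulative DVH sketch over a synthetic dose grid. This only
# illustrates the kind of dosimetric quantity compared during plan
# verification; all data below are toy values.
import numpy as np

def cumulative_dvh(dose: np.ndarray, mask: np.ndarray, n_bins: int = 100):
    """Return (dose_bins_gy, volume_fraction) for voxels inside `mask`."""
    voxel_doses = dose[mask]
    bins = np.linspace(0.0, voxel_doses.max(), n_bins)
    # Fraction of structure volume receiving at least each bin's dose.
    volume_fraction = np.array([(voxel_doses >= d).mean() for d in bins])
    return bins, volume_fraction

# Toy example: a 20x20x20 dose grid with a spherical "target" structure.
rng = np.random.default_rng(0)
dose = rng.normal(loc=2.0, scale=0.1, size=(20, 20, 20))  # Gy, synthetic
zz, yy, xx = np.indices(dose.shape)
target = (xx - 10) ** 2 + (yy - 10) ** 2 + (zz - 10) ** 2 <= 5 ** 2

bins, vol = cumulative_dvh(dose, target)
# D95: highest dose level still covering at least 95% of the target.
d95 = bins[vol >= 0.95][-1]
print(f"target D95 = {d95:.2f} Gy")
```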
8. The sample size for the training set
This information is not provided. The document does not discuss a "training set" in the machine learning sense: this is a radiation therapy treatment planning system, which implies deterministic, rule-based software rather than a machine learning model that requires explicit training data.
9. How the ground truth for the training set was established
Not applicable, as a training set for machine learning is not mentioned. For a treatment planning system, the "knowledge base" would be derived from physics principles, clinical guidelines, and potentially pre-defined planning templates or parameters, rather than a "ground truth" derived from a specific dataset.