Eclipse Treatment Planning System v16.1
The Eclipse Treatment Planning System (Eclipse TPS) is used to plan radiotherapy treatments for malignant or benign diseases. Eclipse TPS is used to plan external beam irradiation with photon, electron, and proton beams, as well as internal irradiation (brachytherapy) treatments.
The Varian Eclipse™ Treatment Planning System (Eclipse TPS) provides software tools for planning the treatment of malignant or benign diseases with radiation. Eclipse TPS is a computer-based software device used by trained medical professionals to design and simulate radiation therapy treatments. Eclipse TPS consists of different applications, each used for specific purposes at a different phase of treatment planning.
Eclipse TPS is capable of planning treatments for external beam irradiation with photon, electron, and proton beams, as well as for internal irradiation (brachytherapy) treatments.
The provided document is a 510(k) summary for the Varian Eclipse Treatment Planning System, v16.1. It describes the device and claims substantial equivalence to a predicate device (Eclipse Treatment Planning System v16.0). However, it does not contain the detailed information necessary to fully answer all aspects of your request regarding acceptance criteria and a study proving the device meets those criteria.
Specifically, the document primarily focuses on software verification and validation, standards conformance, and a comparison to its predicate device. It explicitly states: "No animal studies or clinical tests have been included in this pre-market submission." This indicates that the type of performance data typically associated with studies proving a device meets specific clinical or diagnostic acceptance criteria (e.g., sensitivity, specificity, accuracy against a ground truth) is not present here.
Therefore, many of the requested fields cannot be directly extracted from this document.
Here's a breakdown of what can and cannot be answered based on the provided text:
1. Table of acceptance criteria and reported device performance:
- Acceptance Criteria: Not explicitly stated in terms of quantitative performance metrics (e.g., accuracy thresholds, sensitivity/specificity targets). The acceptance criteria mentioned are general conformance to requirements, specifications, and standards (e.g., IEC 62304, IEC 62366-1, IEC 61217, IEC 62083, IEC 82304-1).
- Reported Device Performance: The document states, "Test results demonstrate conformance to applicable requirements and specifications." and "There were no remaining discrepancy reports (DRs) which could be classified as Safety or Customer Intolerable." This is a general statement about meeting system-level requirements, not specific quantitative performance metrics against a defined ground truth for a clinical indication.
| Acceptance Criteria (General) | Reported Device Performance (General) |
|---|---|
| Conformance to applicable requirements and specifications | Test results demonstrate conformance. |
| Conformance with specified IEC standards | Conforms in whole or in part with IEC 62304, IEC 62366-1, IEC 61217, IEC 62083, and IEC 82304-1. |
| No remaining critical discrepancy reports | No remaining DRs classified as Safety or Customer Intolerable. |
| Performs as intended in specified use conditions | Verification and validation demonstrate that the subject device should perform as intended. |
| As safe and effective as the predicate device | Deemed as safe and effective as, and performing at least as well as, the predicate device. |
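To make these general criteria more concrete, here is a minimal, purely illustrative sketch of how a system-level conformance check of this kind might be summarized: every verified requirement must pass, and no unresolved discrepancy report may carry a critical severity. The requirement IDs, severity labels, and data structures are hypothetical and are not taken from the submission.

```python
from dataclasses import dataclass

# Hypothetical severity labels modeled on the summary's wording; the actual
# classification scheme used by the manufacturer is not described in the 510(k).
CRITICAL_SEVERITIES = {"Safety", "Customer Intolerable"}

@dataclass
class TestResult:
    requirement_id: str   # e.g. "REQ-DOSE-001" (hypothetical ID scheme)
    passed: bool

@dataclass
class DiscrepancyReport:
    dr_id: str
    severity: str
    resolved: bool

def conformance_summary(results, discrepancy_reports):
    """Return True only if every requirement has a passing test and no
    unresolved discrepancy report carries a critical severity."""
    all_requirements_pass = all(r.passed for r in results)
    open_critical_drs = [
        dr for dr in discrepancy_reports
        if not dr.resolved and dr.severity in CRITICAL_SEVERITIES
    ]
    return all_requirements_pass and not open_critical_drs

# Example usage with made-up data:
results = [TestResult("REQ-DOSE-001", True), TestResult("REQ-UI-014", True)]
drs = [DiscrepancyReport("DR-102", "Minor", resolved=False)]
print(conformance_summary(results, drs))  # True: no open Safety/Customer Intolerable DRs
```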
2. Sample size used for the test set and the data provenance:
- Not specified. The document mentions "Software Verification and Validation Testing" but does not detail the size or nature of any specific test sets used for evaluating clinical performance or dose calculation accuracy with patient data. It also does not mention data provenance (e.g., country of origin, retrospective/prospective). Given that no clinical studies were performed, any "test set" would likely refer to internal engineering tests, rather than a clinical dataset.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Not applicable/Not specified. Since no clinical studies or "patient-like" test sets with established ground truths for diagnostic/clinical accuracy were mentioned, this information is not provided.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- Not applicable/Not specified. This is relevant for studies involving human readers or expert consensus on ground truth, which were not part of this submission's provided performance data.
5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI assistance versus without:
- No, an MRMC study was not done. The document explicitly states "No animal studies or clinical tests have been included in this pre-market submission." This type of study would fall under clinical testing.
6. If a standalone study (i.e., algorithm-only performance, without a human in the loop) was done:
- Not explicitly stated in terms of clinical performance. The "Software Verification and Validation Testing" implies testing of the algorithm, but the performance endpoints are not reported in clinical terms (e.g., sensitivity, specificity, or accuracy of dose calculation compared to a gold standard in patients). The changes in v16.1 (GPU calculation of Acuros PT dose, DECT-based proton stopping power calculation, and prevention of dose calculation on DECT Rho and Z images) are technical enhancements that would have been validated through internal engineering tests for accuracy and consistency, but the specific results of those standalone evaluations against a clinical ground truth are not provided here (see the illustrative sketch below).
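For illustration only, a standalone consistency check for a change like the GPU-based Acuros PT dose calculation might compare the new dose grid against the established CPU-computed reference and require a high per-voxel agreement rate. The sketch below assumes a simple dose-difference criterion (the 2% tolerance, 10% dose floor, and 90% pass threshold are made-up values) and stands in for the more rigorous gamma analysis commonly used in radiotherapy QA; it is not Varian's actual test procedure.

```python
import numpy as np

def dose_agreement_rate(dose_ref, dose_test, rel_tol=0.02, dose_floor_frac=0.10):
    """Fraction of clinically relevant voxels where the test dose agrees with
    the reference dose within rel_tol (relative to the reference maximum).

    Voxels below dose_floor_frac * max(dose_ref) are ignored, a common way to
    exclude low-dose regions from the comparison. Tolerances here are assumed
    values, not criteria from the submission.
    """
    dose_ref = np.asarray(dose_ref, dtype=float)
    dose_test = np.asarray(dose_test, dtype=float)
    d_max = dose_ref.max()
    mask = dose_ref > dose_floor_frac * d_max        # clinically relevant voxels
    diff = np.abs(dose_test[mask] - dose_ref[mask])  # absolute dose difference
    return float(np.mean(diff <= rel_tol * d_max))

# Example with synthetic dose grids (Gy), e.g. CPU reference vs. GPU test:
rng = np.random.default_rng(0)
cpu_dose = rng.uniform(0.0, 2.0, size=(50, 50, 50))
gpu_dose = cpu_dose + rng.normal(0.0, 0.005, size=cpu_dose.shape)  # small numerical noise
rate = dose_agreement_rate(cpu_dose, gpu_dose)
assert rate >= 0.90, f"agreement rate {rate:.3f} below assumed 90% acceptance threshold"
```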
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Not specified for clinical endpoints. For software verification and validation, the "ground truth" would be the expected output or behavior according to system requirements and specifications (e.g., a known correct dose calculation for a phantom, correct image processing results). However, this is not a ground truth related to clinical outcomes or expert consensus on patient data.
8. The sample size for the training set:
- Not applicable/Not specified. Treatment planning systems typically use physics models and algorithms, not machine learning models that require "training sets" in the conventional sense. While there might be internal data used for calibration or model development, it's not referred to as a "training set" in this context, nor is its size provided.
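To illustrate the distinction between a physics-model-based calculation and a trained machine-learning model, the sketch below estimates a proton stopping-power ratio relative to water from DECT-derived quantities using the Bethe formula. This is a generic textbook-style calculation, not the algorithm implemented in Eclipse; the proton energy, mean excitation energies, and the (omitted) mapping from DECT effective atomic number to mean excitation energy are assumptions for illustration.

```python
import math

M_E_C2_EV = 0.511e6      # electron rest energy in eV
M_P_C2_MEV = 938.272     # proton rest energy in MeV
I_WATER_EV = 75.0        # mean excitation energy of water (commonly quoted value)

def beta_squared(kinetic_energy_mev):
    """Relativistic beta^2 for a proton of the given kinetic energy."""
    gamma = 1.0 + kinetic_energy_mev / M_P_C2_MEV
    return 1.0 - 1.0 / (gamma * gamma)

def stopping_power_ratio(rel_electron_density, i_medium_ev, kinetic_energy_mev=150.0):
    """Bethe-formula estimate of the proton stopping-power ratio relative to water.

    rel_electron_density: electron density relative to water (a DECT output).
    i_medium_ev: mean excitation energy of the medium in eV; in practice this
    would be derived from the DECT effective atomic number via an empirical
    relation, which is omitted here.
    """
    b2 = beta_squared(kinetic_energy_mev)
    arg = 2.0 * M_E_C2_EV * b2 / (1.0 - b2)          # 2*m_e*c^2 * beta^2 * gamma^2
    medium_term = math.log(arg / i_medium_ev) - b2
    water_term = math.log(arg / I_WATER_EV) - b2
    return rel_electron_density * medium_term / water_term

# Example: soft-tissue-like voxel (values are illustrative, not calibration data).
print(round(stopping_power_ratio(1.04, i_medium_ev=72.0), 4))
```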
9. How the ground truth for the training set was established:
- Not applicable/Not specified. As there's no mention of a traditional machine learning "training set," this information is not relevant to the provided document.
In summary, the provided FDA 510(k) summary focuses primarily on software development practices, adherence to standards, and a comparison demonstrating substantial equivalence to a predicate device based on non-clinical testing. It explicitly states the absence of animal or clinical studies, meaning the type of performance evaluation you're asking about (related to clinical accuracy against ground truth using patient data) was not part of this specific submission.