Search Results
Found 18 results
510(k) Data Aggregation
(124 days)
Eclipse Treatment Planning System (18.1)
The Eclipse Treatment Planning System (Eclipse TPS) is used to plan radiotherapy treatments for patients with malignant or benign diseases. Eclipse TPS is used to plan external beam irradiation with photon, electron and proton beams, as well as for internal radiation (brachytherapy) treatments.
Eclipse provides software tools for planning the treatment of malignant or benign diseases with radiation. Eclipse is a computer-based software device used by trained medical professionals to design and simulate radiation therapy treatments. It is capable of planning treatments for external beam irradiation with photon, electron, and proton beams, as well as for internal irradiation (brachytherapy) treatments.
Eclipse is used for planning external beam radiation therapy treatments employing photon energies between 1 and 50 MV, electron energies between 1 and 50 MeV, and proton energies between 50 and 300 MeV, and for planning internal radiation (brachytherapy) treatments with any clinically approved radioisotope. The treatment planning system utilizes a patient model, or virtual patient, derived from medical imaging techniques to simulate, calculate and optimize the radiation dose distribution inside the body during a treatment procedure, in order to ensure effective treatment of the tumor while minimizing damage to surrounding tissue.
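As background on what "simulate, calculate and optimize the radiation dose distribution" involves, inverse planning is often framed as minimizing a penalty on target underdose and normal-tissue dose over non-negative beamlet weights. The sketch below is a deliberately simplified, hypothetical illustration of that idea; the dose-influence matrix, penalty weights, and prescription value are invented, and it does not represent Eclipse's actual optimization algorithms.

```python
import numpy as np

# Hypothetical illustration of inverse planning: dose[i] = sum_j D[i, j] * w[j],
# where D is a dose-influence matrix (voxels x beamlets) and w are beamlet weights.
rng = np.random.default_rng(0)
n_voxels, n_beamlets = 200, 30
D = rng.random((n_voxels, n_beamlets))           # made-up dose-influence matrix
is_target = np.zeros(n_voxels, dtype=bool)
is_target[:50] = True                            # first 50 voxels play the "tumor"

prescription = 60.0                              # desired target dose in Gy (illustrative)
w_target, w_oar = 1.0, 0.3                       # penalty weights (assumptions)

def objective_grad(w):
    dose = D @ w
    # Penalize target deviation from the prescription and any dose to non-target voxels.
    target_err = np.where(is_target, dose - prescription, 0.0)
    oar_err = np.where(~is_target, dose, 0.0)
    value = w_target * np.sum(target_err**2) + w_oar * np.sum(oar_err**2)
    grad = 2 * D.T @ (w_target * target_err + w_oar * oar_err)
    return value, grad

w = np.full(n_beamlets, 1.0)
step = 1e-5
for _ in range(2000):                            # plain projected gradient descent
    value, grad = objective_grad(w)
    w = np.maximum(w - step * grad, 0.0)         # beamlet weights stay non-negative

print(f"final objective: {value:.1f}, mean target dose: {(D @ w)[is_target].mean():.1f} Gy")
```

In a real system the dose-influence values come from a physics dose engine and the objectives are dose-volume constraints rather than simple quadratic penalties; the sketch only conveys the general shape of the problem.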
The provided document is a 510(k) premarket notification letter from the FDA regarding the Eclipse Treatment Planning System (18.1). It primarily addresses the substantial equivalence of the new version to a legally marketed predicate device (Eclipse Treatment Planning System 18.0).
Unfortunately, this document does not contain the detailed information required to describe acceptance criteria and a study proving the device meets those criteria, as requested in your prompt.
Here's why and what information is missing:
- No specific acceptance criteria table or performance metrics: The document states that "Test results demonstrate conformance to applicable requirements and specifications," but it does not list what those requirements or specifications are (the acceptance criteria), nor does it provide a table of reported device performance against those criteria.
- No information on clinical studies or human-in-the-loop performance: The document explicitly states, "No animal studies or clinical tests have been included in this pre-market submission." This immediately tells us that there was no MRMC study, no standalone performance study in a clinical context, and no ground truth established from patient outcomes or expert consensus for such a study.
- Focus on software V&V and equivalence to predicate: The "Summary of Performance Testing (Non-Clinical Testing)" section primarily references software verification and validation (V&V) activities (unit, integration, and system testing), measurement comparison tests using Gamma evaluation criteria, plan comparisons using clinical objectives, and workflow testing to show comparability to the predicate (a simplified sketch of a gamma comparison appears after this list). These are engineering and software quality assurance tests, not clinical performance studies with defined acceptance metrics for AI/algorithm performance.
- No mention of AI/algorithm specific performance: The document describes "RapidArc Dynamic" as an "improved optimization algorithm," but it does not treat it as a distinct AI algorithm requiring specific clinical performance validation against a ground truth as one might expect for a diagnostic or prognostic AI tool. The testing mentioned appears to be related to the accuracy and efficiency of the planning output compared to the predicate, rather than the performance of an AI model in a diagnostic or assistive capacity.
- No details on sample size, data provenance, expert ground truth, or adjudication: Because no clinical performance study was conducted or reported, all these details are consequently missing.
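Regarding the Gamma evaluation criteria referenced in the performance-testing bullet above: the gamma index (Low et al., 1998) combines a dose-difference tolerance with a distance-to-agreement tolerance, and a point passes when the combined metric is ≤ 1 for at least one nearby evaluated point. The following is a minimal 1-D sketch with invented profiles and the commonly used 3%/3 mm tolerances; it is not Varian's implementation, and the function name and profiles are ours.

```python
import numpy as np

def gamma_1d(ref_dose, eval_dose, positions, dose_tol=0.03, dist_tol_mm=3.0):
    """Minimal 1-D gamma index with global normalization (after Low et al., 1998)."""
    ref_max = ref_dose.max()
    gamma = np.empty_like(ref_dose)
    for i, (x_ref, d_ref) in enumerate(zip(positions, ref_dose)):
        # Combine dose difference and spatial distance into one dimensionless metric,
        # then take the minimum over all evaluated points.
        dose_term = (eval_dose - d_ref) / (dose_tol * ref_max)
        dist_term = (positions - x_ref) / dist_tol_mm
        gamma[i] = np.sqrt(dose_term**2 + dist_term**2).min()
    return gamma

# Invented example profiles: a "measured" reference and a slightly shifted, scaled "calculated" profile.
x = np.arange(0.0, 100.0, 1.0)                        # positions in mm
reference = np.exp(-((x - 50.0) / 15.0) ** 2)
evaluated = np.exp(-((x - 51.0) / 15.0) ** 2) * 1.02

g = gamma_1d(reference, evaluated, x)
pass_rate = 100.0 * np.mean(g <= 1.0)
print(f"gamma pass rate (3%/3 mm): {pass_rate:.1f}%")
```

Measurement comparison tests of this kind report a pass rate (the fraction of points with γ ≤ 1) rather than clinical performance metrics, which is consistent with the engineering character of the testing described above.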
In summary, the provided document focuses on regulatory compliance, substantial equivalence to a predicate device, and general software V&V, rather than providing the detailed clinical performance study information you are asking for, which is typical for AI/ML-driven diagnostic or prognostic devices seeking regulatory clearance.
The "Eclipse Treatment Planning System" is a software tool used by trained medical professionals to design and simulate radiation therapy treatments. While it includes "optimization algorithms" (like RapidArc Dynamic), the FDA submission treats these changes as enhancements to an existing system, validated through engineering and software testing for comparability, rather than a novel AI/ML device requiring an independent clinical performance study as outlined in your prompt questions.
(87 days)
Eclipse Treatment Planning System (v18.0)
The Eclipse Treatment Planning System (Eclipse TPS) is used to plan radiotherapy treatments for patients with malignant or benign diseases. Eclipse TPS is used to plan external beam irradiation with photon, electron and proton beams, as well as for internal radiation (brachytherapy) treatments.
The Varian Eclipse™ Treatment Planning System (Eclipse TPS) provides software tools for planning the treatment of malignant or benign diseases with radiation. Eclipse TPS is a computer-based software device used by trained medical professionals to design and simulate radiation therapy treatments. Eclipse TPS consists of different applications, each used for specific purposes at a different phase of treatment planning. Eclipse TPS is capable of planning treatments for external beam irradiation with photon, electron, and proton beams, as well as for internal irradiation (brachytherapy) treatments.
The provided FDA 510(k) summary for the Eclipse Treatment Planning System (v18.0) does not include acceptance criteria or the specific details of a study that proves the device meets acceptance criteria in the format requested.
The document states that "Software verification and validation was conducted and documentation was provided," and that "Test results demonstrate conformance to applicable requirements and specifications." However, it does not provide a table of acceptance criteria or reported device performance metrics. It also explicitly states: "No animal studies or clinical tests have been included in this pre-market submission."
Therefore, for the specific questions requested, the direct answer from the provided text is that the information is not available.
Here's a breakdown of the requested information that cannot be sourced from the provided text:
- A table of acceptance criteria and the reported device performance: Not provided. The document generally states conformance to requirements and specifications but doesn't list specific performance metrics or their acceptance thresholds.
- Sample size used for the test set and the data provenance: Not provided. The nature of the "test set" for software verification and validation is not detailed, nor is the origin of any data used. Given the statement "No animal studies or clinical tests," it's highly likely this refers to internal software testing data.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts: Not provided. There is no mention of expert involvement in establishing ground truth for any test set.
- Adjudication method (e.g. 2+1, 3+1, none) for the test set: Not provided.
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs. without AI assistance: Not provided. The device is a "Treatment Planning System" and the focus is on its software functionality, not on AI assistance for human readers in diagnostic interpretation.
- If a standalone (i.e. algorithm only without human-in-the-loop performance) study was done: The document describes "Software Verification and Validation Testing" but does not distinguish between standalone and human-in-the-loop performance, nor does it provide performance metrics. The nature of a "treatment planning system" inherently involves a human user in the loop.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc): Not provided.
- The sample size for the training set: Not applicable/Not provided. As a traditional software update for a treatment planning system, it's unlikely to involve a "training set" in the machine learning sense. The changes described are primarily related to dose calculation algorithms, UI support, and brachytherapy controls.
- How the ground truth for the training set was established: Not applicable/Not provided.
Summary of what is present:
- Device Name: Eclipse Treatment Planning System (v18.0)
- Predicate Device: Eclipse Treatment Planning System v16.1 (K201607)
- Indications for Use: To plan radiotherapy treatments for patients with malignant or benign diseases, using photon, electron, and proton beams, as well as internal radiation (brachytherapy) treatments.
- Performance Data: "Software verification and validation was conducted and documentation was provided... Test results demonstrate conformance to applicable requirements and specifications."
- No Clinical/Animal Studies: Explicitly stated that "No animal studies or clinical tests have been included in this pre-market submission."
- Software Level of Concern: "Major."
- Standards Conformance: A list of IEC, ISO, and EN ISO standards is provided (e.g., IEC 62304, IEC 62366-1, IEC 82304-1, IEC 62083, IEC 61217, ISO 15223-1, ISO 20417, ISO 14971, EN ISO 13485).
- Conclusion: The device is considered "safe and effective and perform at least as well as the predicate device" based on non-clinical data, verification, and validation.
(28 days)
ARIA Radiation Therapy Management (v15.8), Eclipse Treatment Planning System (v15.8)
The ARIA Radiation Therapy Management product is a treatment plan and image management application. It enables the authorized user to enter, access, modify, store and archive treatment plan and image data from diagnostic studies, treatment planning, simulation, plan verification and treatment. ARIA Radiation Therapy Management also stores the treatment histories including dose delivered to defined sites and provides tools to verify performed treatments.
The Eclipse Treatment Planning System (Eclipse TPS) is used to plan radiotherapy treatments for patients with malignant or benign diseases. Eclipse TPS is used to plan external beam irradiation with photon, electron and proton beams, as well as for internal irradiation (brachytherapy) treatments. In addition, the Eclipse Proton Eye algorithm is specifically indicated for planning proton treatment of neoplasms of the eye.
The ARIA Radiation Therapy Management product is a treatment plan and image management application. It enables the authorized user to enter, access, modify, store and archive treatment plan and image data from diagnostic studies, treatment planning, simulation, plan verification and treatment. ARIA Radiation Therapy Management also stores the treatment histories including dose delivered to defined sites, and provides tools to verify performed treatments. ARIA Radiation Therapy Management supports the integration of all data and images in one central database including archiving and restoration. The different ARIA Radiation Therapy Management features support the visualization, processing, manipulation and management of all data and images stored in the system. Images can also be imported through the network using DICOM, the available image import filters or by means of film digitizers.
The Varian Eclipse™ Treatment Planning System (Eclipse TPS) provides software tools for planning the treatment of malignant or benign diseases with radiation. Eclipse TPS is a computer-based software device used by trained medical professionals to design and simulate radiation therapy treatments. Eclipse TPS is capable of planning treatments for external beam irradiation with photon, electron, and proton beams, as well as for internal irradiation (brachytherapy) treatments.
The provided text describes a 510(k) premarket notification for two medical devices: ARIA Radiation Therapy Management (v15.8) and Eclipse Treatment Planning System (v15.8).
Based on the provided text, the following information can be extracted:
- Acceptance Criteria and Device Performance: The document states that the devices were "verified and validated according to the FDA Quality System Regulation (21 CFR §820) and other FDA recognized consensus standards." It also mentions, "Test results demonstrate that the device conforms to design specifications and meets the needs of the intended users, including assuring risk mitigations were implemented and functioned properly." However, the document does not provide a specific table of acceptance criteria with reported numerical device performance metrics. The performance assessment is general, confirming adherence to regulatory standards and design specifications rather than specific quantitative thresholds for accuracy, sensitivity, specificity, or other performance indicators typical for AI/ML-driven devices.
- Study That Proves the Device Meets Acceptance Criteria: The study offered as evidence that the device meets the acceptance criteria is software verification and validation testing.
Here's a breakdown of the specific points requested, based on the provided text:
1. A table of acceptance criteria and the reported device performance:
* Acceptance Criteria: The acceptance criteria are broadly defined as conformance to design specifications, meeting intended user requirements, and assuring risk mitigations were implemented and functioned properly. This is according to FDA Quality System Regulation (21 CFR §820) and other FDA recognized consensus standards (listed below).
* Reported Device Performance: "Test results demonstrate that the device conforms to design specifications and meets the needs of the intended users, including assuring risk mitigations were implemented and functioned properly." No specific numerical performance metrics (e.g., accuracy, sensitivity, specificity values) are provided for a direct comparison in a table format.
2. Sample size used for the test set and the data provenance:
* Sample Size: The document does not specify the sample size (e.g., number of cases or patients) used for the software verification and validation testing.
* Data Provenance: The document does not mention the country of origin of the data, nor does it specify if the data was retrospective or prospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
* This information is not provided. The document primarily focuses on software engineering and regulatory compliance rather than clinical performance validation involving experts establishing ground truth for a test set.
4. Adjudication method for the test set:
* This information is not provided. There's no mention of a clinical test set requiring adjudication in the context of this submission.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs. without AI assistance:
* No MRMC comparative effectiveness study was done. The document explicitly states: "No data from animal studies or clinical tests have been included in this pre-market submission." This indicates that the regulatory submission primarily relies on software verification and validation and comparison to predicate devices, not studies demonstrating human reader improvement with AI assistance.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
* The document implies that standalone software verification and validation testing was done, as it states "Software Verification and Validation Testing" was performed to ensure conformance to design specifications and risk mitigation. However, it does not explicitly detail "algorithm only" performance metrics in a clinical sense. The devices are described as tools for trained medical professionals, suggesting a human-in-the-loop operation, but the provided V&V is on the software itself.
7. The type of ground truth used:
* The term "ground truth" in a clinical performance context is not explicitly mentioned. For software verification and validation, ground truth would relate to the correctness of the software's output against its design specifications and expected behavior, rather than clinical outcomes or expert consensus on medical images.
8. The sample size for the training set:
* This information is not applicable and therefore not provided. These devices are not described as AI/ML systems that undergo a machine learning training phase on a dataset. They are software tools for treatment planning and management.
9. How the ground truth for the training set was established:
* This information is not applicable and therefore not provided, as these are not AI/ML training data sets.
In summary, the provided document focuses on regulatory compliance through software verification and validation and substantial equivalence to predicate devices, rather than detailed clinical performance studies often associated with novel AI/ML device submissions. The "acceptance criteria" are compliance with quality system regulations and standards, and "performance" is demonstrated by successful verification and validation tests against design specifications.
(25 days)
Eclipse Treatment Planning System v16.1
The Eclipse Treatment Planning System (Eclipse TPS) is used to plan radiotherapy treatments for patients with malignant or benign diseases. Eclipse TPS is used to plan external beam irradiation with photon, electron and proton beams, as well as for internal irradiation (brachytherapy) treatments.
The Varian Eclipse™ Treatment Planning System (Eclipse TPS) provides software tools for planning the treatment of malignant or benign diseases with radiation. Eclipse TPS is a computer-based software device used by trained medical professionals to design and simulate radiation therapy treatments. Eclipse TPS consists of different applications, each used for specific purposes at a different phase of treatment planning.
Eclipse TPS is capable of planning treatments for external beam irradiation with photon, electron, and proton beams, as well as for internal irradiation (brachytherapy) treatments.
The provided document is a 510(k) summary for the Varian Eclipse Treatment Planning System, v16.1. It describes the device and claims substantial equivalence to a predicate device (Eclipse Treatment Planning System v16.0). However, it does not contain the detailed information necessary to fully answer all aspects of your request regarding acceptance criteria and a study proving the device meets those criteria.
Specifically, the document primarily focuses on software verification and validation, standards conformance, and a comparison to its predicate device. It explicitly states: "No animal studies or clinical tests have been included in this pre-market submission." This indicates that the type of performance data typically associated with studies proving a device meets specific clinical or diagnostic acceptance criteria (e.g., sensitivity, specificity, accuracy against a ground truth) is not present here.
Therefore, many of the requested fields cannot be directly extracted from this document.
Here's a breakdown of what can and cannot be answered based on the provided text:
1. Table of acceptance criteria and reported device performance:
- Acceptance Criteria: Not explicitly stated in terms of quantitative performance metrics (e.g., accuracy thresholds, sensitivity/specificity targets). The acceptance criteria mentioned are general conformance to requirements, specifications, and standards (e.g., IEC 62304, IEC 62366-1, IEC 61217, IEC 62083, IEC 82304-1).
- Reported Device Performance: The document states, "Test results demonstrate conformance to applicable requirements and specifications." and "There were no remaining discrepancy reports (DRs) which could be classified as Safety or Customer Intolerable." This is a general statement about meeting system-level requirements, not specific quantitative performance metrics against a defined ground truth for a clinical indication.
| Acceptance Criteria (General) | Reported Device Performance (General) |
|---|---|
| Conformance to applicable requirements and specifications | Test results demonstrate conformance. |
| Conformance with specified IEC standards | Conforms in whole or in part with IEC 62304, 62366-1, 61217, 62083, and 82304-1. |
| No remaining critical discrepancy reports | No remaining DRs classified as Safety or Customer Intolerable. |
| Performs as intended in specified use conditions | Verification and validation demonstrate the subject device should perform as intended. |
| As safe and effective as the predicate device | Deemed as safe and effective and performs at least as well as the predicate device. |
2. Sample size used for the test set and the data provenance:
- Not specified. The document mentions "Software Verification and Validation Testing" but does not detail the size or nature of any specific test sets used for evaluating clinical performance or dose calculation accuracy with patient data. It also does not mention data provenance (e.g., country of origin, retrospective/prospective). Given that no clinical studies were performed, any "test set" would likely refer to internal engineering tests, rather than a clinical dataset.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Not applicable/Not specified. Since no clinical studies or "patient-like" test sets with established ground truths for diagnostic/clinical accuracy were mentioned, this information is not provided.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- Not applicable/Not specified. This is relevant for studies involving human readers or expert consensus on ground truth, which were not part of this submission's provided performance data.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs. without AI assistance:
- No, an MRMC study was not done. The document explicitly states "No animal studies or clinical tests have been included in this pre-market submission." This type of study would fall under clinical testing.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) study was done:
- Not explicitly stated in terms of clinical performance. The "Software Verification and Validation Testing" implies testing of the algorithm, but the performance endpoints are not reported in clinical terms (e.g., sensitivity, specificity, accuracy of dose calculation compared to a gold standard in patients). The changes (GPU calculation for Acuros PT dose, DECT for proton stopping power, preventing dose calculation for DECT Rho and Z images) are technical enhancements that would have been validated through internal engineering tests for accuracy and consistency, but the specific results of these standalone performance evaluations against clinical ground truth are not provided here.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Not specified for clinical endpoints. For software verification and validation, the "ground truth" would be the expected output or behavior according to system requirements and specifications (e.g., a known correct dose calculation for a phantom, correct image processing results). However, this is not a ground truth related to clinical outcomes or expert consensus on patient data.
8. The sample size for the training set:
- Not applicable/Not specified. Treatment planning systems typically use physics models and algorithms, not machine learning models that require "training sets" in the conventional sense. While there might be internal data used for calibration or model development, it's not referred to as a "training set" in this context, nor is its size provided.
9. How the ground truth for the training set was established:
- Not applicable/Not specified. As there's no mention of a traditional machine learning "training set," this information is not relevant to the provided document.
In summary, the provided FDA 510(k) summary focuses primarily on software development practices, adherence to standards, and a comparison demonstrating substantial equivalence to a predicate device based on non-clinical testing. It explicitly states the absence of animal or clinical studies, meaning the type of performance evaluation you're asking about (related to clinical accuracy against ground truth using patient data) was not part of this specific submission.
(25 days)
Eclipse Treatment Planning System v16.0
The Eclipse Treatment Planning System (Eclipse TPS) is used to plan radiotherapy treatments for patients with malignant or benign diseases. Eclipse TPS is used to plan external beam irradiation with photon, electron and proton beams, as well as for internal irradiation (brachytherapy) treatments.
The Varian Eclipse™ Treatment Planning System (Eclipse TPS) provides software tools for planning the treatment of malignant or benign diseases with radiation. Eclipse TPS is a computer-based software device used by trained medical professionals to design and simulate radiation therapy treatments. Eclipse TPS consists of different applications, each used for specific purposes at a different phase of treatment planning.
Eclipse TPS is capable of planning treatments for external beam irradiation with photon, electron, and proton beams, as well as for internal irradiation (brachytherapy) treatments.
The provided text describes the 510(k) summary for the Eclipse Treatment Planning System v16.0. It primarily focuses on demonstrating substantial equivalence to a predicate device (Eclipse Treatment Planning System v15.6) through software verification and validation, rather than presenting a performance study with acceptance criteria for a novel AI/ML-driven medical device.
Therefore, many of the requested details such as a table of acceptance criteria, sample sizes for test sets, data provenance, number of experts for ground truth, adjudication methods, MRMC studies, standalone AI performance, and training set details are not applicable or not provided in this specific document.
This document outlines a regulatory submission for a software update to an existing medical device, not a new AI-powered diagnostic or treatment planning system that would typically undergo the extensive validation described in your prompt. The "performance data" section primarily refers to software verification and validation testing, not clinical performance or accuracy in a diagnostic or predictive sense.
Here's what can be extracted based on the provided text, and where information is not available:
1. A table of acceptance criteria and the reported device performance
- Not explicitly provided as acceptance criteria for AI model performance. The document states: "Test results demonstrate conformance to applicable requirements and specifications." This implies compliance with software requirements and design specifications, not performance on clinical metrics of accuracy or effectiveness in the way an AI diagnostic tool would be evaluated.
- The "performance data" discussed is related to software verification and validation against requirements, not diagnostic accuracy or clinical impact.
2. Sample sizes used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Not applicable/Not provided. The document states: "No animal studies or clinical tests have been included in this pre-market submission." This indicates that the "test set" was for software testing and validation, not for evaluating performance on patient data. Therefore, details like data provenance or retrospective/prospective nature are not relevant to this submission.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Not applicable. Since no clinical or patient-data-based studies were performed for performance evaluation in the context of diagnostic accuracy, there was no need for expert-established ground truth on a test set.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Not applicable. As no clinical performance study involving human interpretation was conducted to establish ground truth.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
- No, an MRMC comparative effectiveness study was not done. The submission explicitly states "No animal studies or clinical tests have been included in this pre-market submission."
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Not explicitly stated as a separate standalone performance study in the context of an AI algorithm. The device is a "Treatment Planning System" which is a software tool used by trained medical professionals. Its "performance" is evaluated by its conformance to software specifications and safety, not as a standalone AI diagnostic tool.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Not applicable for clinical ground truth. The "ground truth" for the software validation would be the functional requirements and design specifications that the software was tested against.
8. The sample size for the training set
- Not applicable. This submission is for a software update to an existing treatment planning system, not for a new AI/ML model that requires a dedicated training set. The changes described are feature introductions and enhancements, not an AI model that learns from large datasets.
9. How the ground truth for the training set was established
- Not applicable. For the same reason as #8.
(17 days)
Eclipse Treatment Planning System
The Eclipse Treatment Planning System (Eclipse TPS) is used to plan radiotherapy treatments for patients with malignant or benign diseases. Eclipse TPS is used to plan external beam irradiation with photon, electron and proton beams, as well as for internal irradiation (brachytherapy) treatments. In addition, the Eclipse Proton Eye algorithm is specifically indicated for planning proton treatment of neoplasms of the eye.
The Eclipse Treatment Planning System (Eclipse TPS) is a computer-based software device used by trained medical professionals to design and simulate radiation therapy treatments. It is capable of planning treatments for external beam irradiation with photon, electron, and proton beams, as well as for internal irradiation (brachytherapy) treatments.
This document, a 510(k) premarket notification from Varian Medical Systems for their Eclipse Treatment Planning System, primarily focuses on demonstrating substantial equivalence to a predicate device. It does not contain the specific type of performance study data (e.g., clinical study results comparing AI-assisted vs. non-AI assisted performance, or standalone algorithm performance against ground truth) that would typically include acceptance criteria and detailed study results for an AI/ML powered device.
The document states: "No animal studies or clinical tests have been included with this pre-market submission." This explicitly means that the kind of study you are asking about, which involves evaluating the device's performance against detailed acceptance criteria using a test set, was not part of this submission. The "PERFORMANCE DATA" section (page 4) refers solely to "Software verification and validation."
Therefore, based on the provided text, I cannot extract the information requested in your prompt regarding acceptance criteria and performance data from a clinical study or a standalone algorithm evaluation. The acceptance criteria and performance data for this submission are focused on software verification and validation, as well as demonstrating substantial equivalence based on technological characteristics and intended use.
(28 days)
Eclipse Treatment Planning System
The Eclipse Treatment Planning System (Eclipse TPS) is used to plan radiotherapy treatments for patients with malignant or benign diseases. Eclipse TPS is used to plan external beam irradiation with photon, electron and proton beams, as well as for internal irradiation (brachytherapy) treatments. In addition, the Eclipse Proton Eye algorithm is specifically indicated for planning proton treatment of neoplasms of the eye.
The Varian Eclipse™ Treatment Planning System (Eclipse TPS) provides software tools for planning the treatment of malignant or benign diseases with radiation. Eclipse TPS is a computer-based software device used by trained medical professionals to design and simulate radiation therapy treatments. Eclipse TPS is capable of planning treatments for external beam irradiation with photon, electron, and proton beams, as well as for internal irradiation (brachytherapy) treatments.
This document is a 510(k) summary for the Eclipse Treatment Planning System (Eclipse TPS), specifically version 15.5, from Varian Medical Systems. It describes changes from a previous version (v15.1.1, K170969) and argues for substantial equivalence.
Here's an analysis of the provided information concerning acceptance criteria and supporting studies, particularly regarding the newly included "Smart Segmentation Knowledge Based Contouring (SS KBC)" module, which appears to be the most relevant AI/software feature:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not provide a formal table of explicit acceptance criteria with quantitative performance metrics for the entire Eclipse Treatment Planning System, nor for the specific new features. Instead, it states a general conclusion from non-clinical testing:
- Acceptance Criteria/Goal (Inferred): The product conforms to defined user needs and intended uses, and there are no discrepancy reports (DRs) remaining with a priority of Safety Intolerable or Customer Intolerable. The device is considered safe and performs at least as well as the predicate device.
- Reported Device Performance: "The outcome was that the product conformed to the defined user needs and intended uses and that there were no DRs (discrepancy reports) remaining which had a priority of Safety Intolerable or Customer Intolerable."
For the Smart Segmentation Knowledge Based Contouring (SS KBC) module, the document notes that it was previously cleared under K141248 (v2.2). This implies that the specific performance metrics and acceptance criteria for this module would have been detailed in that earlier submission. In this current submission (K172163), the SS KBC module is being integrated into the broader Eclipse TPS, and the claim is that its previously established safety and effectiveness hold.
2. Sample Size Used for the Test Set and Data Provenance
The document provides very limited detail on specific test sets for the current submission (K172163). It broadly refers to "Verification and Validation" being performed for all new features and "regression testing" for existing features.
- Sample Size: Not explicitly stated for the overall V&V or for specific features.
- Data Provenance: Not explicitly stated (e.g., country of origin, retrospective/prospective). The document refers to "test outcomes" but doesn't specify data sources.
For the Smart Segmentation Knowledge Based Contouring (SS KBC) module, since it was previously cleared under K141248, the relevant test set details would be found in that earlier submission. This document only states that the feature "is included as a part of Eclipse Treatment Planning System (v15.5)" and references its prior clearance.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
This information is not provided in the current document. The document primarily focuses on software engineering V&V processes (tracing requirements to test outcomes, DRs). For the SS KBC module, this information would have been part of the K141248 submission where its performance as a standalone tool was assessed and ground truth established. Given it's a "Knowledge Based Contouring" system, human expert input for ground truth would be expected.
4. Adjudication Method for the Test Set
This information is not provided in the current document.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- No MRMC study is mentioned in this 510(k) summary. The summary focuses on comparing technical features with a predicate device and confirming software functionality, not on human-in-the-loop performance improvement with AI assistance.
6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study
- The document implies that standalone performance testing was done as part of the overall "Verification and Validation" process for the software features, as it addresses "product conformance to the defined user needs and intended uses." However, specific details or quantitative results of such standalone performance are not provided in this summary directly.
- For the Smart Segmentation Knowledge Based Contouring (SS KBC) module, as it was previously cleared under K141248, it is highly probable that its standalone algorithmic performance (e.g., accuracy of auto-segmentation compared to ground truth) was evaluated in that prior submission; a sketch of one common way such segmentation accuracy is quantified follows below. This document lists it as a feature of the Eclipse TPS v15.5, meaning its existing, cleared functionality is being integrated.
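For readers unfamiliar with how auto-segmentation accuracy is typically quantified, overlap metrics such as the Dice similarity coefficient are common choices. Whether K141248 used this particular metric is not stated here, so the sketch below is purely a generic illustration with invented binary masks.

```python
import numpy as np

def dice_coefficient(auto_mask, expert_mask):
    """Dice similarity coefficient between two binary segmentation masks."""
    auto_mask = auto_mask.astype(bool)
    expert_mask = expert_mask.astype(bool)
    intersection = np.logical_and(auto_mask, expert_mask).sum()
    total = auto_mask.sum() + expert_mask.sum()
    return 2.0 * intersection / total if total else 1.0

# Invented 2-D masks standing in for an auto-contour and an expert contour.
yy, xx = np.mgrid[0:64, 0:64]
expert = (xx - 32) ** 2 + (yy - 32) ** 2 < 15 ** 2     # expert-drawn circle
auto = (xx - 34) ** 2 + (yy - 32) ** 2 < 15 ** 2       # slightly shifted auto-contour

print(f"Dice = {dice_coefficient(auto, expert):.3f}")
```

A Dice value of 1.0 would indicate perfect overlap with the expert contour; values near 0 indicate little or no overlap.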
7. Type of Ground Truth Used
- This information is not explicitly stated for the general V&V.
- For the Smart Segmentation Knowledge Based Contouring (SS KBC) module, given its function ("Knowledge Based Contouring"), the ground truth for its original clearance (K141248) would typically involve expert delineation consensus (e.g., radiation oncologists, radiologists) as the gold standard for anatomical structures.
8. Sample Size for the Training Set
- This information is not provided in the current document. Training set details would be most relevant for the knowledge-based/AI component (SS KBC).
9. How the Ground Truth for the Training Set Was Established
- This information is not provided in the current document. For the SS KBC module, again, this would have been part of the K141248 submission. Typically, for knowledge-based contouring systems, ground truth for training data involves expert delineation and potentially review processes.
Summary of AI-Specific Information from the Provided Document:
The most AI-related component mentioned is the "Smart Segmentation Knowledge Based Contouring (SS KBC) as an Eclipse module." Crucially, the document states: "Smart Segmentation - Knowledge Based Contouring (SS KBC) v2.2 was cleared under K141248. The corresponding features above were cleared previously as a part of that submission."
This means that the current submission (K172163) does NOT provide the detailed acceptance criteria or study results for the SS KBC module itself. Instead, it incorporates a previously cleared module and relies on its prior clearance for safety and effectiveness. To find the specific details regarding the SS KBC's performance, a review of the K141248 submission would be necessary. This current document validates that the integration of this module into Eclipse TPS v15.5 maintains its intended functionality and does not introduce new safety or effectiveness concerns.
(96 days)
Eclipse Treatment Planning System
The Eclipse Treatment Planning System (Eclipse TPS) is used to plan radiotherapy treatments for patients with malignant or benign diseases. Eclipse TPS is used to plan external beam irradiation with photon, electron and proton beams, as well as for internal irradiation (brachytherapy) treatments. In addition, the Eclipse Proton Eye algorithm is specifically indicated for planning proton treatment of neoplasms of the eye.
The Varian Eclipse™ Treatment Planning System (Eclipse TPS) provides software tools for planning the treatment of malignant or benign diseases with radiation. Eclipse TPS is a computer-based software device used by trained medical professionals to design and simulate radiation therapy treatments. Eclipse TPS is capable of planning treatments for external beam irradiation with photon, electron, and proton beams, as well as for internal irradiation (brachytherapy) treatments.
The provided document is a 510(k) premarket notification for the "Eclipse Treatment Planning System," specifically version 15.1.1. It details the device's intended use and its substantial equivalence to a predicate device (Eclipse Treatment Planning System 13.7).
However, the provided text does not contain specific acceptance criteria, reported device performance metrics, or details about a study to prove these criteria. It outlines the changes from the predicate device and states that "Verification and Validation were performed for all the new features and regression testing was performed against the existing features of Eclipse." It concludes that "the product conformed to the defined user needs and intended uses and that there were no DRs (discrepancy reports) remaining which had a priority of Safety Intolerable or Customer Intolerable."
Therefore, I cannot provide the requested information for the following points as they are not present in the given text:
- A table of acceptance criteria and the reported device performance
- Sample size used for the test set and the data provenance
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Adjudication method for the test set
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and the effect size of how much human readers improve with AI vs without AI assistance
- If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- The type of ground truth used
- The sample size for the training set
- How the ground truth for the training set was established
The document focuses on demonstrating substantial equivalence by outlining technological changes and high-level verification and validation, rather than presenting a detailed study with specific performance metrics against acceptance criteria.
(25 days)
Eclipse Treatment Planning System
The Eclipse Treatment Planning System (Eclipse TPS) is used to plan radiotherapy treatments for patients with malignant or benign diseases. Eclipse TPS is used to plan external beam irradiation with photon, electron and proton beams, as well as for internal irradiation (brachytherapy) treatments. In addition, the Eclipse Proton Eye algorithm is specifically indicated for planning proton treatment of neoplasms of the eye.
The Varian Eclipse™ Treatment Planning System (Eclipse TPS) provides software tools for planning the treatment of malignant or benign diseases with radiation. Eclipse TPS is a computer-based software device used by trained medical professionals to design and simulate radiation therapy treatments. Eclipse TPS is capable of planning treatments for external beam irradiation with photon, electron, and proton beams, as well as for internal irradiation (brachytherapy) treatments.
The provided text describes a 510(k) premarket notification for the "Eclipse Treatment Planning System." It focuses on the changes and verification/validation activities for a new version of the system compared to its predicate device. However, it does not contain the detailed information necessary to complete a table of acceptance criteria and reported device performance in the manner requested (i.e., specific numerical acceptance criteria and a corresponding reported performance metric).
Here's a breakdown of what information is available and what is missing, based on your request:
1. Table of acceptance criteria and the reported device performance:
- Acceptance Criteria: Not explicitly stated in a quantitative manner. The document mentions "System requirements created or affected by the changes can be traced to the test outcomes" and that the product "conformed to the defined user needs and intended uses." This implies that the acceptance criteria are adherence to system requirements and user needs, but specific numerical targets (e.g., accuracy +/- X%) are not provided.
- Reported Device Performance: Similarly, specific performance metrics (e.g., accuracy, precision, sensitivity, specificity) with numerical values are not reported. The document states that the outcome was that "there were no DRs (discrepancy reports) remaining which had a priority of Safety Intolerable or Customer Intolerable" and that the device is "safe and effective and to perform at least as well as the predicate device."
Therefore, a table cannot be constructed with the level of detail requested.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective):
- Sample Size: Not specified. The document mentions "Verification and Validation were performed for all the new features and regression testing was performed against the existing features of Eclipse." This implies testing on, presumably, various patient plans or scenarios, but the number of cases or patients (i.e., the sample size) is not given.
- Data Provenance: Not specified. There is no mention of the origin of the data used for testing (e.g., country) nor if it was retrospective or prospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience):
- Number of Experts: Not specified.
- Qualifications of Experts: Not specified.
The text focuses on the technical verification of the software's functionality, rather than a clinical study involving human expert evaluation of its outputs.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:
- Adjudication Method: Not specified. Given that the testing appears to be primarily technical verification against system requirements rather than clinical agreement studies, an adjudication method for ground truth would likely not be relevant or mentioned.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs. without AI assistance:
- MRMC Study: No, an MRMC comparative effectiveness study is not mentioned. The document describes a software update for a treatment planning system, focusing on new dose calculation algorithms and planning tools, not an AI-assisted diagnostic or decision-support tool where human reader performance would be compared.
- Effect Size: Not applicable, as no MRMC study was reported.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
- Standalone Performance: Yes, in a sense. The non-clinical testing performed is fundamentally a "standalone" evaluation of the algorithm's functionality and accuracy against its defined technical requirements. The document states "Verification and Validation were performed for all the new features and regression testing was performed against the existing features of Eclipse." This includes testing the new dose calculation algorithms (Acuros PT, Acuros BV intermediate dose calculation) and planning tools. However, specific performance metrics (e.g., accuracy of a dose calculation compared to a gold standard physics measurement) are not quantified in the provided text.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Ground Truth: For the dose calculation algorithms, the "ground truth" would typically involve highly accurate physical measurements or very high-fidelity Monte Carlo simulations. For other functionalities, it would be adherence to pre-defined system requirements and expected outputs. Pathology or outcomes data are not mentioned, as this is a treatment planning system, not a diagnostic device. The document implies compliance with "defined user needs and intended uses" and "system requirements."
8. The sample size for the training set:
- Sample Size for Training Set: Not applicable/not specified. The Eclipse Treatment Planning System is a deterministic software based on physical models (e.g., Monte Carlo dose calculation). It is not described as an AI or machine learning model that would typically require a "training set" in the conventional sense. The "training" in this context refers to the development and calibration of the physical models within the software.
9. How the ground truth for the training set was established:
- Ground Truth for Training Set: Not applicable/not specified, for the same reasons as above. The system relies on physical principles and mathematical models, rather than learning from labeled data.
In summary, the provided FDA 510(k) summary focuses on demonstrating substantial equivalence by outlining technological changes and confirming that non-clinical testing verified the new features and ensured the device performs as safely and effectively as its predicate. It does not contain the detailed quantitative performance metrics or clinical study results that your questions are seeking.
(83 days)
ECLIPSE TREATMENT PLANNING SYSTEM
The Eclipse Treatment Planning System (Eclipse TPS) is used to plan radiotherapy treatments for patients with malignant or benign diseases. Eclipse TPS is used to plan external beam irradiation with photon, electron and proton beams, as well as for internal irradiation (brachytherapy) treatments. In addition, the Eclipse Proton Eye algorithm is specifically indicated for planning proton treatment of neoplasms of the eye.
The Varian Eclipse™ Treatment Planning System (Eclipse TPS) provides software tools for planning the treatment of malignant or benign diseases with radiation. Eclipse TPS is a computer-based software device used by trained medical professionals to design and simulate radiation therapy treatments. Eclipse TPS is capable of planning treatments for external beam irradiation with photon, electron, and proton beams, as well as for internal irradiation (brachytherapy) treatments.
The provided text is a 510(k) Premarket Notification for the Eclipse Treatment Planning System, a software device used for radiotherapy treatment planning. It details the device's features, comparison to predicate devices, and a summary of non-clinical testing.
However, the document does not contain information about acceptance criteria or a study proving the device meets these criteria in the context of an AI/ML algorithm's performance (e.g., accuracy, sensitivity, specificity, or human improvement with AI assistance).
Instead, the document focuses on demonstrating substantial equivalence to a predicate device primarily through feature comparison and general verification/validation of software functionality.
Therefore, most of the specific questions regarding acceptance criteria, performance metrics, sample sizes for test/training sets, expert involvement, and ground truth establishment cannot be answered from the provided text.
Here's an attempt to answer the questions based only on the information available:
1. A table of acceptance criteria and the reported device performance
- Acceptance Criteria: The document states that the outcome was that the product conformed to the defined user needs and intended uses and that there were no DRs (discrepancy reports) remaining which had a priority of Safety Intolerable or Customer Intolerable. This acts as a high-level qualitative acceptance criterion for the entire system's functionality and safety.
- Reported Device Performance: The document concludes that "Varian therefore considers Eclipse 13.5 to be safe and to perform at least as well as the predicate device." No specific quantitative performance metrics (e.g., accuracy percentages, sensitivity/specificity, or a table of such) are provided for individual features or the system as a whole in the context of an AI/ML model's output. The performance is implied by the successful conclusion of verification and validation without critical discrepancies.
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
- The document mentions "Verification and Validation were performed for all the new features and regression testing was performed against the existing features of Eclipse." However, it does not specify the sample size of any test set (e.g., number of patient cases, treatment plans) used for this testing.
- Data Provenance: Not mentioned. It's a software system for planning, so "data" might refer to simulated or clinical patient data, but its origin or nature (retrospective/prospective) is not detailed.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Not mentioned. The testing described is software verification and validation, not a clinical performance study involving expert image interpretation or similar. The "ground truth" for software testing would typically be based on expected software behavior, calculations, and adherence to specifications rather than expert consensus on medical images.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Not mentioned. This concept is more relevant to studies establishing ground truth for diagnostic or prognostic AI models, which is not the primary focus of this submission.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
- No, an MRMC study was NOT done (or at least not reported in this summary). The document details changes to a treatment planning system, including "RapidPlan (previously known as Dose Volume Histogram Estimation)" and "support for mARC treatment planning by Siemens treatment machines." While RapidPlan involves a type of estimation, the document does not present it as a diagnostic AI requiring human reading assistance with an associated MRMC study. (A brief sketch of what a dose-volume histogram is follows below.)
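For context on the dose-volume histogram (DVH) terminology above: a cumulative DVH reports, for each dose level, the fraction of a structure's volume receiving at least that dose, and summary points such as D95 are read off it. The sketch below computes one from an invented dose grid and structure mask; it says nothing about how RapidPlan's estimation model works.

```python
import numpy as np

def cumulative_dvh(dose, structure_mask, bin_width_gy=0.5):
    """Cumulative DVH: fraction of the structure receiving at least each dose level."""
    doses_in_structure = dose[structure_mask]
    dose_levels = np.arange(0.0, doses_in_structure.max() + bin_width_gy, bin_width_gy)
    # For each dose level, count the fraction of structure voxels at or above it.
    volume_fraction = np.array([(doses_in_structure >= d).mean() for d in dose_levels])
    return dose_levels, volume_fraction

# Invented 3-D dose grid and a spherical "target" mask.
rng = np.random.default_rng(1)
dose = rng.normal(loc=58.0, scale=3.0, size=(40, 40, 40))   # made-up dose values in Gy
zz, yy, xx = np.mgrid[0:40, 0:40, 0:40]
target = (xx - 20) ** 2 + (yy - 20) ** 2 + (zz - 20) ** 2 < 10 ** 2

dose_levels, volume = cumulative_dvh(dose, target)
d95 = dose_levels[volume >= 0.95].max()   # highest dose level still covering 95% of the volume
print(f"D95 ≈ {d95:.1f} Gy")
```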
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- The document describes "Non-clinical Testing" including "Verification and Validation" and states that "The outcome was that the product conformed to the defined user needs and intended uses and that there were no DRs (discrepancy reports) remaining which had a priority of Safety Intolerable or Customer Intolerable." This implies significant standalone testing of the software's functionality and calculations. However, no specific performance metrics like accuracy, precision for the algorithms (e.g., AcurosXB, RapidPlan) are presented in a quantitative way for independent assessment.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- For a treatment planning system, "ground truth" for testing primarily involves:
- Mathematical/Physical Models: Comparing calculated dose distributions and parameters against established physics principles and validated models.
- Predicate Device Comparison: Ensuring new features produce results consistent with or improved over previously validated predicate devices.
- Software Requirements/Specifications: Ensuring all features function according to their defined specifications.
- The document indicates that "System requirements created or affected by the changes can be traced to the test outcomes," suggesting that meeting the requirements served as a form of ground truth. It also notes "regression testing was performed against the existing features of Eclipse," implying comparison to the established behavior of the predicate; a generic sketch of such a regression comparison follows below. No mention of expert consensus on medical image ground truth, pathology, or outcomes data is made.
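As a generic illustration of the regression-testing idea mentioned in the last bullet, a common pattern is to compare a new release's calculated dose grid against a stored reference from the prior release within combined absolute and relative tolerances. The function, tolerances, and arrays below are assumptions for illustration only and are not a description of Varian's test suite.

```python
import numpy as np

def doses_match(new_dose, reference_dose, rel_tol=0.01, abs_tol_gy=0.05):
    """Flag voxels where the new calculation deviates from the stored reference."""
    diff = np.abs(new_dose - reference_dose)
    allowed = abs_tol_gy + rel_tol * np.abs(reference_dose)
    failing = diff > allowed
    return not failing.any(), int(failing.sum())

# Invented dose grids standing in for a stored predicate result and a new calculation.
rng = np.random.default_rng(2)
reference = rng.uniform(0.0, 70.0, size=(32, 32, 32))
new = reference * (1.0 + rng.normal(0.0, 0.002, size=reference.shape))

ok, n_failing = doses_match(new, reference)
print(f"regression check passed: {ok} ({n_failing} voxels outside tolerance)")
```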
8. The sample size for the training set
- Not mentioned. The document describes a treatment planning system rather than a deep learning model requiring explicit training sets of medical images/data in the modern AI sense. While "RapidPlan" involves "Dose Volume Histogram Estimation" which might learn from previous plans, the specifics of its "training set" (if any) are not detailed here.
9. How the ground truth for the training set was established
- Not mentioned. As the existence and nature of a "training set" are not explicitly discussed, the method for establishing its ground truth is also not provided.