Search Results
Found 2 results
510(k) Data Aggregation
(175 days)
ClearCheck Model RADCC V2
ClearCheck is intended to assist radiation therapy professionals in generating and assessing the quality of radiotherapy treatment plans. ClearCheck is also intended to assist radiation treatment planners in predicting when a treatment plan might result in a collision between the treatment machine and the patient or support structures.
The ClearCheck Model RADCC V2 device is software that uses treatment data, image data, and structure set data obtained from supported Treatment Planning Systems and Application Programming Interfaces to present radiotherapy treatment plans in a user-friendly way for user approval of the treatment plan. The ClearCheck device (Model RADCC V2) is also intended to assist users in identifying where collisions between the treatment machine and the patient or support structures may occur in a treatment plan.
It is designed to run on Windows Operating Systems. ClearCheck Model RADCC V2 performs calculations on the incoming supported treatment data. Supported Treatment Planning Systems are used by trained medical professionals to simulate radiation therapy treatments for malignant or benign diseases.
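The collision-prediction function described above is, in essence, a geometric clearance check between the gantry and the patient or support surfaces. The sketch below is a deliberately simplified illustration of that idea under assumed geometry (a gantry head circling the isocenter at a fixed radius, surface sample points in centimeters); it is not ClearCheck's actual algorithm, and all names and values are hypothetical.

```python
# Very simplified collision screen (illustrative only, not ClearCheck's algorithm):
# flag gantry angles at which any patient/couch surface point comes closer to the
# gantry head than a clearance margin.
import numpy as np

def gantry_head_position(angle_deg: float, isocenter: np.ndarray, radius_cm: float) -> np.ndarray:
    """Center of the gantry head at a given gantry angle, rotating in the x-z plane (simplified)."""
    a = np.radians(angle_deg)
    return isocenter + radius_cm * np.array([np.sin(a), 0.0, np.cos(a)])

def colliding_angles(surface_pts: np.ndarray, isocenter: np.ndarray,
                     radius_cm: float = 40.0, clearance_cm: float = 5.0,
                     angles=range(0, 360, 5)) -> list[int]:
    """Return gantry angles where any surface point lies within clearance of the gantry head."""
    hits = []
    for ang in angles:
        head = gantry_head_position(ang, isocenter, radius_cm)
        if np.min(np.linalg.norm(surface_pts - head, axis=1)) < clearance_cm:
            hits.append(ang)
    return hits

# Hypothetical external-contour sample points (cm) and isocenter position.
iso = np.array([0.0, 0.0, 0.0])
pts = np.array([[0.0, 2.0, 36.5], [15.0, 0.0, 20.0], [-20.0, 5.0, 10.0]])
print("Potential collision at gantry angles:", colliding_angles(pts, iso))
```

A production collision check would use the full 3D machine and couch geometry and every beam/arc in the plan; this sketch only conveys the basic clearance test.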
The provided text describes the acceptance criteria and study for the ClearCheck Model RADCC V2 device, which assists radiation therapy professionals in generating and assessing treatment plans, including predicting potential collisions.
Here's a breakdown of the requested information:
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria | Reported Device Performance |
|---|---|
| **BED / EQD2 Functionality** | |
| Passing criteria for dose type constraints | 0.5% difference when compared to hand calculations using well-known BED/EQD2 formulas. |
| Passing criteria for volume type constraints | 3% difference when compared to hand calculations using well-known BED/EQD2 formulas. |
| **Deformed Dose Functionality** | |
| Qualitative DVH analysis | Good agreement for all cases compared to known dose deformations. |
| Quantitative Dmax and Dmin differences | ±3% difference for deformed dose results compared to known dose deformations. |
| **Overall Verification & Validation Testing** | All test cases for BED/EQD2 and Deformed Dose functionalities passed. Overall software verification tests were performed to ensure intended functionality, and pass/fail criteria were used to verify requirements. |
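These passing criteria are relative-difference tolerances against hand calculations. As an illustration only (not taken from the submission), the sketch below shows the standard BED and EQD2 formulas and the kind of 0.5% tolerance check such a verification test implies; the function names, the α/β value, and the sample inputs are assumptions.

```python
# Illustrative sketch of a BED/EQD2 verification check (not from the 510(k)).
# Standard formulas: BED = n*d*(1 + d/(alpha/beta)); EQD2 = BED / (1 + 2/(alpha/beta)).

def bed(n_fractions: int, dose_per_fraction: float, alpha_beta: float) -> float:
    """Biologically effective dose for n fractions of d Gy each."""
    return n_fractions * dose_per_fraction * (1 + dose_per_fraction / alpha_beta)

def eqd2(n_fractions: int, dose_per_fraction: float, alpha_beta: float) -> float:
    """Equivalent dose in 2 Gy fractions, derived from BED."""
    return bed(n_fractions, dose_per_fraction, alpha_beta) / (1 + 2.0 / alpha_beta)

def within_tolerance(reported: float, hand_calculated: float, tol: float = 0.005) -> bool:
    """Pass/fail check: relative difference no larger than tol (0.5% for dose-type constraints)."""
    return abs(reported - hand_calculated) <= tol * abs(hand_calculated)

if __name__ == "__main__":
    # Hypothetical example: 20 fractions of 2.75 Gy, alpha/beta = 3 Gy.
    hand_value = eqd2(20, 2.75, alpha_beta=3.0)
    software_value = 63.2  # placeholder for a value reported by the software under test
    print(f"hand EQD2 = {hand_value:.2f} Gy, pass = {within_tolerance(software_value, hand_value)}")
```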
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: The document does not explicitly state a specific numerical sample size for the test set used for the BED/EQD2 and Deformed Dose functionality validation. It mentions "all cases" for Deformed Dose and "a plan and plan sum" for BED/EQD2. This implies testing was done on an unspecified number of representative cases, but not a statistically powered cohort.
- Data Provenance: Not specified in the provided text. It does not mention the country of origin or whether the data was retrospective or prospective.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
- The document does not mention the use of experts to establish ground truth for the test set.
- For BED/EQD2, the ground truth was established by "values calculated by hand using the well-known BED / EQD2 formulas."
- For Deformed Dose, the ground truth was established by "known dose deformations."
4. Adjudication Method for the Test Set
- Not applicable as there is no mention of expert review or adjudication for the test set. Ground truth was established by calculation or "known" deformations.
5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study Was Done, If So, What Was the Effect Size of How Much Human Readers Improve with AI vs Without AI Assistance
- No, a Multi Reader Multi Case (MRMC) comparative effectiveness study was not performed. The document explicitly states: "no clinical trials were performed for ClearCheck Model RADCC V2." The device is intended to "assist radiation therapy professionals," but its impact on human reader performance was not evaluated through a clinical study.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
- Yes, performance evaluations for the de novo functionalities (BED/EQD2 and Deformed Dose) appear to be standalone algorithm performance assessments. The device's calculations were compared against established mathematical formulas (BED/EQD2) or known deformations (Deformed Dose) without human intervention in the evaluation process.
7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.)
- BED/EQD2: Ground truth was based on "values calculated by hand using the well-known BED / EQD2 formulas." This is a computational/mathematical ground truth.
- Deformed Dose: Ground truth was based on "known dose deformations." This implies a physically or computationally derived ground truth where the expected deformation results were already established.
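To make the deformed-dose criterion concrete, the sketch below computes Dmax and Dmin from a dose array and applies the ±3% relative-difference check described in the acceptance criteria; the arrays and values are hypothetical and not taken from the submission.

```python
# Illustrative Dmax/Dmin comparison for a deformed dose result (hypothetical values).
import numpy as np

def dmax_dmin(dose: np.ndarray) -> tuple[float, float]:
    """Return (Dmax, Dmin) over the voxels of a dose array, in Gy."""
    return float(dose.max()), float(dose.min())

def relative_diff(value: float, reference: float) -> float:
    """Signed relative difference of value with respect to reference."""
    return (value - reference) / reference

# Hypothetical deformed dose produced by the software vs. a known (expected) deformation.
deformed = np.array([58.1, 60.3, 61.9, 59.4])
known = np.array([58.0, 60.0, 62.0, 59.5])

for name, computed, expected in zip(("Dmax", "Dmin"), dmax_dmin(deformed), dmax_dmin(known)):
    diff = relative_diff(computed, expected)
    print(f"{name}: {diff:+.2%} -> {'pass' if abs(diff) <= 0.03 else 'fail'}")
```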
8. The Sample Size for the Training Set
- The document does not specify a sample size for the training set. It primarily focuses on the validation of new features against calculated or known results, rather than reporting on a machine learning model's training data.
9. How the Ground Truth for the Training Set Was Established
- The document does not provide information on how the ground truth for any training set was established. Given the nature of the device (software for calculations and collision prediction, building on predicate devices), it's possible that analytical methods and established physics/dosimetry principles form the basis, rather than a large labeled training dataset in the typical machine learning sense for image interpretation.
(90 days)
ClearCheck
ClearCheck is intended for quality assessment of radiotherapy treatment plans.
The ClearCheck device (model RADCC) is software intended to present treatment plans obtained from the Eclipse Treatment Planning System (also referred to as Eclipse TPS) of Varian Medical Systems in a user-friendly way (numerical form of data) for user approval of the treatment plan. ClearCheck runs as a dynamic link library (dll) plugin to Varian Eclipse. It is designed to run on the Windows Operating System, and generated reports can be viewed in Internet Explorer. ClearCheck performs calculations on the plan obtained from Eclipse TPS (Version 12 (K131891) and Version 13.5 (K141283)), which is software used by trained medical professionals to design and simulate radiation therapy treatments for malignant or benign diseases. ClearCheck has two components: 1. a standalone Windows Operating System executable application used for administrative operations to set specified default settings and user settings; and 2. a plan evaluation application, a dynamic link library (dll) file that is a plugin to the Varian Medical Systems Eclipse TPS. The plugin is designed to evaluate the quality of an Eclipse treatment plan. Plan quality is based on user-specified Dose Constraints and Plan Check Parameters.
Here's an analysis of the provided text regarding the acceptance criteria and study for the ClearCheck device, organized according to your request.
Please note: The provided document is a 510(k) summary, which focuses on demonstrating substantial equivalence to a predicate device. It explicitly states that "no clinical trials were performed for ClearCheck" (Section 5.7). Therefore, a substantial portion of your requested information (e.g., MRMC studies, specific performance metrics against ground truth, expert qualifications, adjudication methods, sample sizes for test/training sets with ground truth derivation methods) is not present in this type of regulatory submission. The verification tests mentioned are likely internal software validation rather than clinical performance studies.
1. Table of acceptance criteria and the reported device performance
The document does not provide a specific table of quantitative acceptance criteria for device performance based on clinical outcomes or accuracy metrics. Instead, "pass/fail criteria were used to verify requirements" during internal verification tests. These requirements are implicit in the comparison to the predicate device and the claim of substantial equivalence.
| Acceptance Criteria Category | Reported Device Performance / Assessment |
|---|---|
| Intended Use | ClearCheck is intended for quality assessment of radiotherapy treatment plans, equivalent to the predicate device. |
| Pure Software Device | Yes, equivalent to the predicate device. |
| Intended Users | Medical physicists, medical dosimetrists, and radiation oncologists, equivalent to the predicate device. |
| OTC/Rx | Prescription use (Rx), equivalent to the predicate device. |
| Operating System | Runs on Windows 7, 8, 10, Server 2008, 2008 R2, 2012. Supports an additional OS (Windows 10) compared to the predicate, which does not raise new safety/effectiveness questions. |
| CPU | 2.4+ GHz multi-core processors (2+ cores, 4+ threads), equivalent to the predicate device. |
| Hard Drive Space | Requires ~3.5 MB for software (vs. 20 MB for the predicate) and suggests 100 GB for patient data (vs. 900 GB for the predicate). Difference acknowledged and deemed not to raise new safety/effectiveness questions because ClearCheck stores constraint templates, not large DICOM datasets like the predicate. |
| Display Resolution & Color Depth | 1280 x 1024, 24- or 32-bit color depth (vs. 1920 x 1080 for the predicate). Difference acknowledged and deemed not to raise new safety/effectiveness questions, as it supports smaller monitors without affecting image quality. |
| Software Functionality | Performs calculations on plans from Eclipse TPS based on user-specified Dose Constraints and Plan Check Parameters. Verification tests were performed to ensure the software works as intended and passed requirements. |
| Safety and Effectiveness | Deemed as safe and effective as the predicate device through Verification and Validation testing and Hazard Analysis. |
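The "Software Functionality" row above boils down to checking user-specified Dose Constraints and Plan Check Parameters against a plan's dose-volume data. Below is a minimal sketch of that idea, assuming a cumulative DVH given as (dose, volume-fraction) points and a single volume-type constraint; the data structures, values, and function names are illustrative assumptions, not ClearCheck's actual implementation.

```python
# Minimal sketch of a dose-constraint check against a cumulative DVH (illustrative only).
from dataclasses import dataclass

@dataclass
class VolumeConstraint:
    """V_{dose} <= limit: fraction of the structure receiving >= dose_gy must not exceed limit."""
    structure: str
    dose_gy: float
    limit_fraction: float

def volume_at_dose(dvh: list[tuple[float, float]], dose_gy: float) -> float:
    """Volume fraction receiving at least dose_gy, by linear interpolation over a
    cumulative DVH given as (dose in Gy, volume fraction) points sorted by dose."""
    for (d0, v0), (d1, v1) in zip(dvh, dvh[1:]):
        if d0 <= dose_gy <= d1:
            t = (dose_gy - d0) / (d1 - d0)
            return v0 + t * (v1 - v0)
    return dvh[-1][1] if dose_gy > dvh[-1][0] else dvh[0][1]

# Hypothetical lung DVH and a V20Gy <= 30% constraint.
lung_dvh = [(0.0, 1.00), (5.0, 0.80), (10.0, 0.60), (20.0, 0.28), (40.0, 0.05), (60.0, 0.00)]
constraint = VolumeConstraint(structure="Lung", dose_gy=20.0, limit_fraction=0.30)

v = volume_at_dose(lung_dvh, constraint.dose_gy)
print(f"{constraint.structure} V{constraint.dose_gy:g}Gy = {v:.0%} "
      f"-> {'pass' if v <= constraint.limit_fraction else 'fail'}")
```

A full plan check would iterate such constraints over every structure in the plan and report a per-constraint pass/fail, which matches the pass/fail style of verification testing described in the submission.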
2. Sample size used for the test set and the data provenance
The document does not specify a "test set" in the context of clinical or performance data. It mentions that "Verification tests" were performed for the software. These tests would involve internally generated data or existing clinical plans to validate the software's functionality, but no details on sample size, data provenance, or specific test cases are provided for external review.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
This information is not provided. Since no clinical trials or external performance evaluations of this nature were conducted (as stated in Section 5.7), the concept of "ground truth" as derived by experts for a test set is not applicable to the submitted performance data.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
This information is not provided for the same reasons as above.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
No MRMC comparative effectiveness study was done. The document explicitly states "no clinical trials were performed for ClearCheck." The device is a "quality assessment" tool for radiotherapy plans, not an AI for image interpretation or diagnosis that would typically involve human reader performance studies.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
A standalone performance assessment in the sense of the algorithm's internal calculations and functionality was performed as part of "Verification tests." However, specific quantitative metrics common for standalone AI algorithms (e.g., sensitivity, specificity, AUC against a clinical ground truth) are not provided in this regulatory summary. The device's "performance" is primarily assessed by its functional correctness and consistency with the predicate device's overall purpose of quality assessment.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
This is not explicitly stated. For the internal "Verification tests," the "ground truth" would likely be the expected outputs or calculated values based on established physics principles and treatment planning guidelines, which the software is designed to implement and report. This is not the same as clinical "ground truth" derived from patient outcomes or expert consensus on a diagnosis.
8. The sample size for the training set
This information is not applicable and not provided. ClearCheck is described as a software tool that performs calculations and presents data based on user-specified dose constraints and plan check parameters from an existing Eclipse TPS plan. It is not an AI/ML algorithm that learns from a "training set" of data to make predictions or classifications.
9. How the ground truth for the training set was established
This information is not applicable and not provided, as the device does not employ a machine learning model that requires a training set with associated ground truth.