CollisionCheck is intended to assist radiation treatment planners in predicting when a treatment plan might result in a collision between the treatment machine and the patient or support structures.
The CollisionCheck device (model RADCO) is software intended to assist users in identifying where collisions between the treatment machine and the patient or support structures may occur in a treatment plan. Treatment plans are obtained from the Eclipse Treatment Planning System (Eclipse TPS) of Varian Medical Systems. CollisionCheck runs as a dynamic link library (DLL) plugin to Varian Eclipse and is designed to run on the Windows operating system. CollisionCheck performs calculations on the plan obtained from Eclipse TPS (Version 12 (K131891), Version 13.5 (K141283), and Version 13.7 (K152393)), software used by trained medical professionals to plan and simulate radiation therapy treatments for malignant or benign diseases.
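To make the kind of computation concrete, here is a minimal Python sketch of a per-control-point clearance check under a toy cylinder model, loosely in the spirit of the document's description of the device. All names and values (`BeamControlPoint`, `clearance_cm`, the radii, the safety margin) are hypothetical and are not drawn from the 510(k); the actual device is a C#/.NET DLL plugin whose internal model is not disclosed.

```python
import math
from dataclasses import dataclass

@dataclass
class BeamControlPoint:
    gantry_angle_deg: float  # IEC gantry angle at this control point
    couch_vrt_cm: float      # couch vertical offset below isocenter

def clearance_cm(cp: BeamControlPoint,
                 body_radius_cm: float,
                 head_radius_cm: float = 40.0) -> float:
    """Distance from the gantry head face to a patient/couch bounding cylinder.

    Toy model: the gantry head face sweeps a circle of radius head_radius_cm
    around isocenter; the patient and couch are lumped into a bounding
    cylinder of radius body_radius_cm whose axis sits couch_vrt_cm below
    isocenter.
    """
    theta = math.radians(cp.gantry_angle_deg)
    head_x = head_radius_cm * math.sin(theta)  # head position, transverse plane
    head_y = head_radius_cm * math.cos(theta)
    dist_to_axis = math.hypot(head_x, head_y + cp.couch_vrt_cm)
    return dist_to_axis - body_radius_cm

def flag_collisions(control_points, body_radius_cm, margin_cm=2.0):
    """Return the control points whose clearance falls below a safety margin."""
    return [cp for cp in control_points
            if clearance_cm(cp, body_radius_cm) < margin_cm]

# Example: a full arc in 10-degree steps around a large patient cross-section.
arc = [BeamControlPoint(g, couch_vrt_cm=10.0) for g in range(0, 360, 10)]
print(flag_collisions(arc, body_radius_cm=30.0))  # posterior angles get flagged
```

A production check would of course use the actual machine geometry, the full couch position (vertical, longitudinal, lateral), and the patient body contour extracted from the plan, but the per-control-point structure would be similar.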
The provided text describes the regulatory clearance of CollisionCheck (K171350) and compares it to a predicate device, Mobius3D (K153014). However, it does not contain specific details about acceptance criteria, the study design (e.g., sample size, data provenance, ground truth establishment, expert qualifications, or adjudication methods), or MRMC study results. The document states that "no clinical trials were performed for CollisionCheck" and mentions "Verification tests were performed to ensure that the software works as intended and pass/fail criteria were used to verify requirements." This implies that the performance demonstration was likely limited to software verification and validation, rather than a clinical performance study with human-in-the-loop or standalone AI performance metrics.
Therefore, many of the requested details cannot be extracted from the provided text. I will provide what can be inferred or stated as absent based on the document.
Acceptance Criteria and Device Performance
The document does not explicitly list quantitative acceptance criteria with corresponding performance metrics like sensitivity, specificity, or F1-score for the CollisionCheck device. Instead, the performance demonstration focuses on software verification and validation to ensure the device works as intended and is as safe and effective as the predicate device.
Table of Acceptance Criteria and Reported Device Performance (Inferred/Based on Document Context):
| Acceptance Criterion (Inferred from regulatory context and V&V) | Reported Device Performance (Inferred/Based on V&V Statement) |
|---|---|
| **Functionality:** Accurately simulate the treatment plan and predict gantry collisions with the patient or support structures. | Verification tests confirmed the software works as intended, indicating successful simulation and collision prediction. (Pass) |
| **Safety:** Device operation does not introduce new safety concerns compared to the predicate. | Hazard Analysis demonstrated the device is as safe as the predicate device. (Pass) |
| **Effectiveness:** Device effectively assists radiation treatment planners in identifying potential collisions. | Verification tests confirmed the software works as intended, indicating effective assistance in collision identification. (Pass) |
| **Algorithm Accuracy (Collision Prediction):** Implicitly, the algorithm should correctly identify collision events when they occur and not falsely flag them when they do not. | No specific accuracy metrics (e.g., sensitivity, specificity, precision, recall) are reported. Performance is based on successful completion of verification tests. |
| **Comparison to Predicate:** Substantially equivalent to Mobius3D's collision-check feature regarding safety and effectiveness. | Minor technological differences do not raise new questions of safety and effectiveness. Deemed substantially equivalent. (Pass) |
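The submission's performance evidence amounts to the statement that "pass/fail criteria were used to verify requirements." As a purely illustrative sketch of what such a harness might look like — the requirement IDs, fixture names, and expected outcomes below are invented, not taken from the 510(k):

```python
# Hypothetical pass/fail verification harness. Each case pairs a
# requirement ID with a plan fixture and the expected collision outcome;
# a case passes only if the check reproduces the expected flag exactly.
VERIFICATION_CASES = [
    # (requirement id, plan fixture name, expected collision flag)
    ("REQ-001", "plan_gantry_clears_couch",   False),
    ("REQ-002", "plan_gantry_grazes_patient", True),
    ("REQ-003", "plan_large_couch_vertical",  True),
]

def run_verification(check_fn, load_plan):
    """check_fn: plan -> bool (collision flagged); load_plan: fixture name -> plan."""
    results = {}
    for req_id, fixture, expected in VERIFICATION_CASES:
        flagged = check_fn(load_plan(fixture))
        results[req_id] = "PASS" if flagged == expected else "FAIL"
    return results
```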
Study Details:
Given the statement "no clinical trials were performed for CollisionCheck," and the focus on "Verification tests," most of the questions regarding a typical AI performance study (like those involving test sets, ground truth experts, MRMC studies) cannot be answered with specific data from this document. The performance demonstration appears to have been solely based on internal software verification and validation activities.
- **Sample sizes used for the test set and data provenance:**
  - Test Set Sample Size: Not specified. The document only mentions "verification tests" and "pass/fail criteria."
  - Data Provenance: Not specified. It is likely synthetic or internal clinical data used for software testing, rather than a distinct, prospectively collected or retrospectively curated clinical test set for performance evaluation in a regulatory sense.
- **Number of experts used to establish the ground truth for the test set and the qualifications of those experts:**
  - Not applicable/Not specified. Given that "no clinical trials were performed," it is highly improbable that a formal expert-adjudicated ground truth was established for a test set in the context of an AI performance study. Ground truth in this context would likely be defined by the physics-based simulation of collisions within the software's design.
- **Adjudication method (e.g., 2+1, 3+1, none) for the test set:**
  - Not applicable/Not specified. No adjudication method is mentioned, consistent with the absence of a clinical performance study involving human readers.
- **Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance:**
  - No, an MRMC comparative effectiveness study was not done. The document explicitly states that "no clinical trials were performed." Therefore, no effect size for human reader improvement with AI assistance is reported.
- **Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:**
  - While the verification tests would have exercised the algorithm's standalone functionality, the document does not provide the specific performance metrics (e.g., sensitivity, specificity) that would typically be reported in a standalone AI evaluation. The device assists a human user, so its "standalone" performance would not be measured in isolation but rather as its ability to correctly identify collisions as defined by its internal models.
- **The type of ground truth used (expert consensus, pathology, outcomes data, etc.):**
  - The document implies a physics-based or computational ground truth. The device performs calculations and simulations, so the "ground truth" for its verification and validation would be whether its simulation correctly identifies collisions based on defined geometric and physical parameters. It is not based on expert consensus, pathology, or outcomes data, as the device is a planning assistance tool, not a diagnostic one. A minimal sketch of such a computational ground truth follows this list.
- **The sample size for the training set:**
  - Not applicable/Not specified. The document describes CollisionCheck as software that performs calculations and simulations (modeling the linac as a cylinder, supporting applicators, etc.). It is not described as an AI or machine learning model that requires a "training set" in the conventional sense of supervised learning on a large dataset. Its functionality is likely rule-based or physics-informed, rather than learned from data.
- **How the ground truth for the training set was established:**
  - Not applicable/Not specified. Since the device is not described as an ML model with a training set, the concept of establishing ground truth for a training set does not apply. The "ground truth" for its development would be the accurate mathematical and physical modeling of collision scenarios.
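To make "computational ground truth" concrete: for a toy cylinder geometry like the one sketched earlier, the clearance can be derived once in closed form and then used as the reference against which an independent implementation is verified — no expert labels or clinical outcomes are involved. All parameters below are hypothetical.

```python
import math

def clearance_numeric(gantry_angle_deg, couch_vrt_cm, body_radius_cm,
                      head_radius_cm=40.0):
    """Implementation under test: explicit 2-D coordinates in the transverse plane."""
    theta = math.radians(gantry_angle_deg)
    head_x = head_radius_cm * math.sin(theta)
    head_y = head_radius_cm * math.cos(theta)
    return math.hypot(head_x, head_y + couch_vrt_cm) - body_radius_cm

def clearance_analytic(gantry_angle_deg, couch_vrt_cm, body_radius_cm,
                       head_radius_cm=40.0):
    """Reference ("ground truth"): law-of-cosines closed form of the same geometry."""
    theta = math.radians(gantry_angle_deg)
    return math.sqrt(head_radius_cm**2 + couch_vrt_cm**2
                     + 2.0 * head_radius_cm * couch_vrt_cm * math.cos(theta)
                     ) - body_radius_cm

# Verification: the two derivations must agree to floating-point precision
# at every gantry angle; a discrepancy would indicate a modeling bug.
for g in range(0, 360, 5):
    assert abs(clearance_numeric(g, 10.0, 30.0)
               - clearance_analytic(g, 10.0, 30.0)) < 1e-9
```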
§ 892.5050 Medical charged-particle radiation therapy system.
(a) Identification. A medical charged-particle radiation therapy system is a device that produces by acceleration high energy charged particles (e.g., electrons and protons) intended for use in radiation therapy. This generic type of device may include signal analysis and display equipment, patient and equipment supports, treatment planning computer programs, component parts, and accessories.

(b) Classification. Class II. When intended for use as a quality control system, the film dosimetry system (film scanning system) included as an accessory to the device described in paragraph (a) of this section, is exempt from the premarket notification procedures in subpart E of part 807 of this chapter subject to the limitations in § 892.9.