510(k) Data Aggregation (40 days)
The Agility multileaf collimator is indicated for use when additional flexibility is required in conforming the radiation beam to the anatomy to be exposed.
The associated Integrity R3.0 software is the interface and control software for the Elekta medical digital linear accelerator and is intended to assist a licensed practitioner in the delivery of radiation to defined target volumes (e.g. lesions, arterio-venous malformations, malignant and benign tumors), whilst sparing surrounding normal tissue and critical organs from excess radiation. It is intended to be used for single or multiple fractions, delivered as static and/or dynamic beams of radiation, in all areas of the body where such treatment is indicated.
This Traditional 510(k) describes the addition of the new Agility multileaf collimator beam limiting device and its associated control software to the Elekta medical linear accelerator. The new device has 160 leaves of 5mm width at isocenter, a fast leaf speed of up to 65 mm/s, low leakage (<0.5%) and is capable of interdigitation within a maximum field size of 40 x 40 cm. Control is by extension to the existing Elekta linear accelerator control system software. Synchronization of the movement of the dynamic leaf guides with individual leaf movements achieves enhanced leaf speed and removes the need for a split field.
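The quoted geometry can be sanity-checked with simple arithmetic: 160 leaves arranged as two opposing banks of 80, each leaf 5 mm wide at isocenter, spans exactly the stated 40 cm maximum field width. A minimal sketch (the variable names are ours; all figures are taken from the summary above):

```python
# Illustrative consistency check of the Agility specifications quoted above.
# Figures come from the 510(k) summary; the bank arrangement (two opposing
# banks of 80 leaves) is the standard MLC layout and is assumed here.

LEAVES_TOTAL = 160        # total leaves in the MLC
LEAF_WIDTH_MM = 5         # nominal leaf width at isocenter
MAX_FIELD_CM = 40         # maximum field size (40 x 40 cm)
LEAF_SPEED_MM_S = 65      # maximum leaf speed (with dynamic leaf guide)

leaves_per_bank = LEAVES_TOTAL // 2                     # 80 leaves per bank
field_width_cm = leaves_per_bank * LEAF_WIDTH_MM / 10   # 80 * 5 mm = 40 cm

assert field_width_cm == MAX_FIELD_CM  # leaf bank spans the full field width
print(f"{leaves_per_bank} leaves/bank x {LEAF_WIDTH_MM} mm = {field_width_cm:.0f} cm")
print(f"leaf speed: {LEAF_SPEED_MM_S} mm/s = {LEAF_SPEED_MM_S / 10} cm/s")
# → 80 leaves/bank x 5 mm = 40 cm
# → leaf speed: 65 mm/s = 6.5 cm/s
```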
The Agility includes:
- dynamic leaf guides;
- fluorescing ruby leaf markers ('Rubicon') for improved leaf tracking by the optics system;
- elimination of backup diaphragms, made possible by low interleaf leakage;
- sculpted field-defining diaphragms;
- separate LED-based lighting systems for patient-plane illumination and movement control;
- a new control cabinet, on which the Integrity user interface and machine control software is executed, including a hardware firewall to provide a safe network connection.
The provided document is a 510(k) premarket notification for a medical device called "Agility™," a multileaf collimator, and its associated control software, Integrity R3.0. This type of document focuses on demonstrating substantial equivalence to a predicate device rather than a detailed comparative effectiveness study of AI versus human readers or standalone AI performance.
Therefore, many of the requested elements for describing "acceptance criteria and the study that proves the device meets the acceptance criteria" in the context of AI performance are not applicable or cannot be extracted from this specific document.
However, I can provide information based on the engineering and performance specifications detailed in the 510(k) summary.
Acceptance Criteria and Reported Device Performance
The acceptance criteria are implicitly defined by the reported performance of the device and its predicate, primarily focusing on mechanical and physics performance characteristics to demonstrate substantial equivalence. The "study" proving the device meets these criteria is described as "module, integration and system level verification," "regression testing," and "validation... under clinically representative conditions."
Table of Acceptance Criteria and Reported Device Performance:
| Attribute | Acceptance Criterion (Predicate Device Performance - K082122) | Reported Device Performance (Agility™ - This Submission) |
|---|---|---|
| Mechanical | | |
| Interdigitation capable | yes | yes |
| Number of leaves | 80 | 160 |
| Nominal leaf width at isocenter | 10 mm | 5 mm |
| Maximum field size | 40 x 40 cm | 40 x 40 cm |
| Max distance between leaves | 32.5 cm | 20 cm |
| Leaf travel over central axis | 12.5 cm | 15 cm |
| Leaf nominal height | 82 mm | 90 mm |
| Leaf positioning resolution | 0.1 mm | 0.1 mm |
| Leaf positioning verification | Optical and machine vision system | Optical and machine vision system (Rubicon) |
| Diaphragm over-travel | 0 | 12 cm |
| Dimensions / Weight / Speeds | | |
| Head rotation | 365 degrees | 365 degrees |
| Head weight | 380 kg | 420 kg |
| Radiation head diameter | 620 mm | 815 mm at widest, 694 mm at narrowest |
| Head to isocenter clearance | 45 cm | 45 cm |
| Head rotation speed (set-up) | 12°/s | 12°/s |
| Head rotation speed (dynamic) | 6°/s | 6°/s |
| Leaf speed (combined w/ guide) | 2.0 cm/s | up to 6.5 cm/s |
| Leaf speed | 2.0 cm/s | up to 3.5 cm/s |
| Diaphragm speed | 1.5 cm/s | up to 9 cm/s |
| Wedge | | |
| Integrated wedge size | Automatic 0-60° | Automatic 0-60° |
| Wedge field size | 30 x 40 cm | 30 x 40 cm |
| Physics Performance | | |
| Leaf position accuracy | ± 1 mm | 1 mm at isocenter, 0.5 mm RMS* |
| Leaf position repeatability | 0.5 mm | 0.5 mm |
| Avg transmission through leaf bank | 1.5% | <0.375% |
| Peak transmission through leaf bank | 2.1% | <0.5% |
| X-radiation leakage (patient plane) | <0.2% max; <0.1% avg. | <0.2% max, <0.1% avg. |
| X-radiation leakage (outside patient plane) | <0.5% | <0.5% (at 1 m) |
| Delivery techniques | | |
| Dynamic Delivery Capability | yes (sliding window, dynamic arc, VMAT, multiple island shielding, offset field shaping) | yes (sliding window, dynamic arc, VMAT, multiple island shielding, offset field shaping) |
Note: The "Acceptance Criterion" column reflects the performance of the predicate device (MLCi2, K082122), since the submission aims to demonstrate substantial equivalence. Acceptance for the new device typically means meeting or exceeding these established benchmarks.
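The "meets or exceeds the predicate" logic can be sketched programmatically. This is an illustration only: the values below are a few rows from the table above, and the comparison direction (lower-is-better for transmission, higher-is-better for speed) is our own labeling, not something the 510(k) states.

```python
# Simplified "substantial equivalence" benchmark check over selected rows
# of the spec table. Dictionary keys and directionality are illustrative.

predicate = {  # MLCi2 (K082122)
    "leaf_speed_cm_s": 2.0,
    "avg_leaf_bank_transmission_pct": 1.5,
    "peak_leaf_bank_transmission_pct": 2.1,
    "leaf_position_repeatability_mm": 0.5,
}
agility = {  # Agility (this submission)
    "leaf_speed_cm_s": 3.5,
    "avg_leaf_bank_transmission_pct": 0.375,
    "peak_leaf_bank_transmission_pct": 0.5,
    "leaf_position_repeatability_mm": 0.5,
}
higher_is_better = {"leaf_speed_cm_s"}  # all other attributes: lower/equal is better

for attr, bench in predicate.items():
    new = agility[attr]
    ok = new >= bench if attr in higher_is_better else new <= bench
    print(f"{attr}: predicate={bench}, agility={new}, meets/exceeds={ok}")
```

For every attribute shown, the Agility value meets or exceeds the predicate benchmark, which is the pattern the substantial-equivalence argument relies on.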
Study Details (as inferable from the document):
Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective):
- The document does not specify a "test set" in the context of patient data or clinical images for performance testing like an AI algorithm would.
- Performance testing was conducted on "production equivalent systems both at Elekta and at hospital sites." No specific sample size (e.g., number of machines, number of tests) is provided, nor is the country of origin of the data explicitly stated other than Elekta Limited being based in the UK.
- The testing described is engineering verification and validation, not a clinical trial with patient data.
Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- This question is not applicable. The device is a hardware component (multileaf collimator) and its control software. "Ground truth" in the clinical sense (e.g., definitive diagnosis from experts) is not relevant to its performance testing.
- Validation was performed by "competent and professionally qualified personnel," but their specific number or detailed qualifications (e.g., radiologist with X years of experience) are not provided as it's not a diagnostic AI device.
Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- This is not applicable. Adjudication methods are used to establish ground truth in clinical studies, particularly for diagnostic devices or AI algorithms. This device's testing involves engineering and physics measurements.
Whether a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI versus without AI assistance:
- No, a multi-reader multi-case (MRMC) comparative effectiveness study was not performed. This device is a component of a linear accelerator used for radiation therapy, not a diagnostic AI system that assists human readers.
Whether a standalone study (i.e., algorithm-only performance, without a human in the loop) was done:
- This is not applicable in the context of an AI algorithm's standalone performance. The device is hardware with control software. Its "standalone" performance refers to its mechanical and physics capabilities as documented in the table, without direct human intervention in the moment-to-moment leaf movement (which is automated by the software). However, it is explicitly designed to assist a "licensed practitioner" in delivering radiation, meaning it is ultimately human-in-the-loop for treatment planning and oversight.
The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- The concept of "ground truth" as pathology or outcomes data is not applicable here. The "ground truth" for the device's performance is typically established by:
- Technical specifications: Design requirements for leaf width, speed, accuracy, leakage, etc.
- Physics measurements: Using dosimeters, films, or other calibrated instruments to verify radiation beam shaping, dose delivery accuracy, leakage, etc.
- Mechanical measurements: Calibrated tools to verify physical dimensions, movements, and resolutions.
The sample size for the training set:
- This is not applicable. This device is not an AI algorithm trained on a dataset in the conventional sense. The "control software" is developed through traditional software engineering processes, not machine learning model training.
How the ground truth for the training set was established:
- This is not applicable, as there is no "training set" in the context of machine learning. The "ground truth" for the software's functionality would be its design requirements and specifications, validated through formal verification and validation protocols.