Leksell GammaPlan (LGP)
Leksell GammaPlan® is a computer-based system designed for Leksell Gamma Knife® treatment planning.
Leksell GammaPlan® is a powerful, computer-based treatment planning system specifically designed for the simulation and planning of stereotactic Leksell Gamma Knife® radiosurgery based on tomographic and projectional images.
The basis of treatment planning is the acquisition and processing of digital images by a computer workstation running the treatment planning application software. The program is capable of handling a range of different imaging modalities. Images from tomographic sources such as Computed Tomography (CT), Magnetic Resonance (MR) and Positron Emission Tomography (PET) scanners can be used, as well as projectional images from angiograms (AI). This allows direct comparison between vascular structures in projectional images and tissue structures in CT and MR images.
Digital images can be imported into the system via the computer network.
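As an illustration of this import step, here is a minimal sketch assuming DICOM files delivered over the network and the pydicom library; LGP's actual import pipeline is not described in the summary, so the directory layout and grouping logic below are assumptions.

```python
# Hypothetical sketch: group networked DICOM files by Modality before planning.
# Not LGP's actual pipeline; the directory layout is an assumption.
from collections import defaultdict
from pathlib import Path

import pydicom

def group_images_by_modality(import_dir: str) -> dict:
    """Read DICOM files and bucket them by Modality: CT, MR, PT (PET), XA (angio)."""
    series = defaultdict(list)
    for path in sorted(Path(import_dir).glob("*.dcm")):
        ds = pydicom.dcmread(path)
        series[ds.Modality].append(ds)  # Modality tag (0008,0060)
    # Order tomographic slices along the patient axis for volume reconstruction.
    for modality, slices in series.items():
        if modality in ("CT", "MR", "PT"):
            slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    return dict(series)
```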
The treatment planning application has the ability to plan a patient's treatment protocol based on a single target or multiple targets.
The basic elements of treatment planning are (a data-model sketch follows this list):
- defining the cranial target or targets
- devising the configuration of the collimators to be used during treatment
- determining the parameters of the radiation shots to be delivered by Leksell Gamma Knife®
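Here is a minimal sketch of how these three elements might be modeled; the class and field names are illustrative, not LGP's internal data model, and the 4/8/16 mm collimator sizes (typical of current Gamma Knife models) should be read as an assumption here.

```python
# Hypothetical data model for the three planning elements above.
from dataclasses import dataclass, field

@dataclass
class Shot:
    """One radiation shot: an isocenter position plus a collimator setting."""
    x_mm: float
    y_mm: float
    z_mm: float          # stereotactic (Leksell frame) coordinates, assumed
    collimator_mm: int   # e.g. 4, 8, or 16 mm (assumed typical sizes)
    weight: float = 1.0  # relative contribution to the composite dose

@dataclass
class Target:
    """A delineated cranial target and the shots that cover it."""
    name: str
    prescription_dose_gy: float
    shots: list[Shot] = field(default_factory=list)

@dataclass
class TreatmentPlan:
    """A plan may address a single target or multiple targets."""
    patient_id: str
    targets: list[Target] = field(default_factory=list)
```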
The provided FDA 510(k) summary for the Leksell GammaPlan® (LGP) (K232854) does not contain the specific acceptance criteria and comprehensive study results requested for a typical AI/ML device.
This submission covers an updated version of a treatment planning system, i.e., software that operates a medical device, rather than an AI/ML algorithm performing autonomous diagnostic or prognostic tasks. Accordingly, the performance testing focuses on software verification and validation against requirements rather than on clinical performance metrics such as sensitivity or specificity measured against an expert-established ground truth.
However, I can extract the information that is present and indicate where the requested information is not available in the document.
1. A table of acceptance criteria and the reported device performance
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Software functionality and performance against requirement specifications | "Testing in the form of module, integration and system level verification was performed to evaluate the performance and functionality of the new and existing features against requirement specifications." |
| "Formal design and usability validation has been performed on a clinically equivalent device by competent and professionally qualified personnel to ensure that the product fulfils the intended use and user needs." | "Results from verification and validation testing demonstrate that conformance to applicable technical requirement specification and user needs have been met and that the system is confident and stable." |
| "The design and usability validation was also made to ensure that the risk control measures associated with functions related to safety for the new functionality was effective." | "The results of verification and validation as well as conformance to relevant safety standards demonstrate that LGP v11.4 meets the established safety and performance criteria..." |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Sample Size for Test Set: Not specified in the document. The testing described is software verification and validation, not a clinical trial with a "test set" in the sense of patient data for AI model evaluation.
- Data Provenance: Not specified, as this was not a clinical, data-driven study. The document mentions a "clinically equivalent device" for validation, implying internal testing and possibly simulated or anonymized data, but details are absent.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Number of Experts: Not specified.
- Qualifications of Experts: The document mentions "competent and professionally qualified personnel" performed design and usability validation, but no specific professional qualifications (e.g., radiologist, medical physicist) or experience levels are provided. Ground truth in the context of a treatment planning system primarily relates to the accuracy of computational models and adherence to clinical guidelines, rather than expert interpretation of images for diagnosis.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Adjudication Method: Not specified. It's unlikely this type of adjudication was performed, as the evaluation focused on software functionality and adherence to specifications, not on resolving disagreements in expert clinical assessment.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI versus without AI assistance
- MRMC Study: No, an MRMC study was not performed. This device is a treatment planning system, not an AI diagnostic or assistance tool intended to improve human reader performance in interpreting medical images. The document states, "No animal or clinical tests were performed to establish substantial equivalence with the predicate device."
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done
- Standalone Performance: The device itself is software for treatment planning. Its "standalone" performance would be its ability to correctly calculate and display treatment plans based on inputs, which is covered by the mentioned verification and validation testing. However, this is not "standalone AI algorithm performance" in the typical sense of a diagnostic AI. The system is inherently "human-in-the-loop" as it requires a clinician to define targets and parameters.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Type of Ground Truth: For the type of software described (treatment planning), the "ground truth" for verification and validation typically involves:
- Specification Compliance: Verification that the software performs according to its written requirements and design specifications.
- Industry Standards: Conformance to relevant medical device software (e.g., IEC 62304) and medical physics standards.
- Phantom/Physical Measurement Comparisons: For dose calculations, comparison with physical measurements in phantoms or against known analytical solutions (though not explicitly detailed for this specific 510(k)); see the sketch after this answer.
- Clinical Equivalence/Usability: Validation that the software supports intended clinical use, as performed by "competent and professionally qualified personnel."
The document does not specify "expert consensus," "pathology," or "outcomes data" as ground truth for this submission, as these are more relevant for diagnostic or prognostic AI.
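As a sketch of the phantom-comparison item above (assumed, not described in this 510(k)), one simple check is a per-point percent-difference pass rate between calculated and measured dose; clinical commissioning would more typically use gamma analysis (e.g., 3%/1 mm criteria). The values below are illustrative.

```python
# Hypothetical comparison of calculated vs. phantom-measured point doses.
import numpy as np

def percent_diff_pass_rate(calculated, measured, tolerance_pct: float = 3.0) -> float:
    """Fraction of points where |calc - meas| is within tolerance_pct of measured."""
    calculated = np.asarray(calculated, dtype=float)
    measured = np.asarray(measured, dtype=float)
    diff_pct = 100.0 * np.abs(calculated - measured) / np.abs(measured)
    return float(np.mean(diff_pct <= tolerance_pct))

# Example: four detector points in a phantom (illustrative values, in Gy).
calc = np.array([2.01, 1.48, 0.99, 0.52])  # planning-system output
meas = np.array([2.00, 1.50, 1.00, 0.50])  # ion chamber / film readings
print(f"pass rate at 3%: {percent_diff_pass_rate(calc, meas):.0%}")  # -> 75%
```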
8. The sample size for the training set
- Sample Size for Training Set: Not applicable and not mentioned. This device is a treatment planning system, not a machine learning model that requires a training set in the conventional sense.
9. How the ground truth for the training set was established
- Ground Truth for Training Set Establishment: Not applicable and not mentioned, as there is no machine learning "training set" for this device.