Search Results
Found 5 results
510(k) Data Aggregation
(454 days)
Delta4 Insight is software intended to provide quality assurance of radiotherapy treatment dose calculation.
Delta4 Insight is not a treatment planning system or a radiation delivery device. Information provided by Delta4 Insight shall not be used to directly modify or influence radiation treatments. Delta4 Insight is to be used by radiation oncology personnel for quality assurance purposes.
Delta4 Insight is software specifically designed for quality assurance of radiotherapy treatment plans generated by a treatment planning system. The device performs a secondary dose calculation using independent Monte Carlo-based dose calculation software and compares the result to the treatment planning system dose. The device is used as a secondary check of the TPS results, not for comparison with a measurement.
Delta4 Insight is a software module within the General Delta4 software. The software module is independent of all other software modules within the Delta4 software. The device is NOT a treatment planning system or a radiation therapy delivery device. It is only used by trained radiation therapy oncology personnel for the purposes of quality assurance in a hospital setting.
Insight supports treatments with MV photons. No other radiation type is supported.
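The summary above says Insight performs an independent Monte Carlo-based dose calculation. As a rough illustration of the underlying idea only (not Insight's actual engine), a toy Monte Carlo sketch can sample each photon's first-interaction depth from an exponential attenuation law and tally deposited energy per depth bin; the function name, attenuation coefficient, and binning below are all illustrative assumptions:

```python
import random

def toy_depth_dose(n_photons, mu_per_cm, n_bins, bin_cm=1.0, seed=0):
    """Toy Monte Carlo depth-dose curve for a monoenergetic photon beam.

    Each photon's first-interaction depth is sampled from an exponential
    distribution with linear attenuation coefficient mu_per_cm, and all of
    its energy is scored in that depth bin. Real engines also transport
    scattered photons and secondary electrons through heterogeneous media;
    this sketch only conveys the sampling-and-scoring idea.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    dose = [0.0] * n_bins
    for _ in range(n_photons):
        depth_cm = rng.expovariate(mu_per_cm)  # sample free-path length
        b = int(depth_cm / bin_cm)
        if b < n_bins:
            dose[b] += 1.0  # score one unit of energy in this bin
    return dose
```

In this simplification the tallied dose falls off roughly exponentially with depth, ignoring the build-up region that real MV photon beams exhibit near the surface.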
Here's a breakdown of the acceptance criteria and the study details for the Delta4 Insight device, based on the provided FDA 510(k) summary:
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria (Used by Predicate) | Reported Delta4 Insight Performance |
|---|---|
| 97% of dose voxels with >5% of maximum dose pass a gamma criteria of 2%/2mm when comparing patient plan dose calculation results to a reference algorithm (Acuros). [Hoffman et al, MedPhys 2018] | Similar gamma pass rate to that of the predicate when comparing patient plan dose calculation results to a reference algorithm (Monaco, Acuros). |
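The 2%/2mm gamma criterion cited above combines a dose-difference tolerance with a distance-to-agreement tolerance into a single pass/fail metric per point. A minimal 1-D sketch of a global gamma pass-rate calculation is shown below; the function name and simplifications are my own, and clinical tools operate on 3-D dose grids with sub-grid interpolation:

```python
import numpy as np

def gamma_pass_rate(ref, ev, spacing_mm, dose_pct=2.0, dta_mm=2.0, cutoff_pct=5.0):
    """Global 1-D gamma analysis (illustrative simplification).

    ref, ev    : reference and evaluated dose profiles on the same grid
    spacing_mm : grid spacing in millimetres
    dose_pct   : dose-difference criterion, % of the maximum reference dose
    dta_mm     : distance-to-agreement criterion in millimetres
    cutoff_pct : skip reference points below this % of the maximum dose
    Returns the fraction of evaluated reference points with gamma <= 1.
    """
    ref = np.asarray(ref, dtype=float)
    ev = np.asarray(ev, dtype=float)
    d_max = ref.max()
    dd = dose_pct / 100.0 * d_max          # absolute dose tolerance
    x = np.arange(ref.size) * spacing_mm   # point positions in mm

    gammas = []
    for xi, di in zip(x, ref):
        if di < cutoff_pct / 100.0 * d_max:
            continue                       # below the low-dose threshold
        # gamma at this point: minimum combined dose/distance metric
        g2 = ((x - xi) / dta_mm) ** 2 + ((ev - di) / dd) ** 2
        gammas.append(np.sqrt(g2.min()))
    return float(np.mean(np.array(gammas) <= 1.0))
```

Identical distributions yield a pass rate of 1.0, while a uniform 50% dose error fails every point above the cutoff, since no nearby point can compensate within 2%/2mm.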
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: "Test plans were chosen to span the space of test parameters for the types of treatment types/modalities/energies/TPS/machines that are supported. The variety of plans tested are reflective of common clinical treatments and tumor sites and span many field sizes and tissue depths."
- Interpretation/Clarification: The exact number of test plans used is not specified. However, the description implies a diverse set of clinically relevant plans.
- Data Provenance: "The dose from anonymized DICOM patient treatment plans from chosen reference algorithms were re-calculated with Insight and the dose compared."
- Interpretation/Clarification: The data consists of retrospective, anonymized DICOM patient treatment plans. The country of origin is not specified but is likely from European or North American oncology departments given the context of FDA submission and common practices.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- The document does not explicitly state that human experts established a "ground truth" for the test set in the traditional sense of clinical assessment.
- Instead, the comparison is made against "reference algorithms" (Acuros, Monaco) which themselves are established dose calculation engines. The accuracy of these reference algorithms is implicit, and one (Acuros) is cited with a publication (Hoffman et al, MedPhys 2018).
- The study focuses on the agreement between software calculations, not expert visual assessment.
4. Adjudication Method for the Test Set
- The concept of "adjudication" (e.g., 2+1, 3+1 expert review) is not applicable to this type of study.
- The comparison method used is a gamma comparison between the Delta4 Insight dose results and the original reference TPS dose. Gamma analysis is a common method in radiation oncology for quantitatively comparing dose distributions.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
- No, a MRMC comparative effectiveness study was not done.
- This device is a "secondary check QA software" designed for an algorithm-to-algorithm comparison (Delta4 Insight's calculation vs. Treatment Planning System's calculation), not for human reader performance evaluation. Therefore, the effect size of human readers improving with AI vs. without AI assistance is not relevant to this submission.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
- Yes, a standalone performance evaluation was indeed done.
- The study explicitly describes comparing "the dose from anonymized DICOM patient treatment plans from chosen reference algorithms were re-calculated with Insight and the dose compared." This is a direct measure of the algorithm's performance in matching established reference calculations.
7. The Type of Ground Truth Used
- The "ground truth" in this context is the dose calculation results from established "reference algorithms" (Monaco, Acuros) within existing Treatment Planning Systems (TPS). It is not pathology, outcomes data, or human expert consensus in the diagnostic imaging sense. It is a comparison against other validated computational models for dose calculation.
8. The Sample Size for the Training Set
- The document does not specify a separate training set size.
- The description focuses on "validation testing involved testing of clinical treatment plans" and "test plans were chosen to span the space of test parameters." This suggests that the device's algorithms were likely developed and refined using internal data, but the 510(k) summary only describes the validation/test set and its performance.
9. How the Ground Truth for the Training Set Was Established
- Since a training set is not explicitly mentioned with its own ground truth establishment in this document, this information is not provided.
- Typically, for a dose calculation algorithm, "ground truth" during development (training) would involve comparing the algorithm's output against physical measurements (e.g., phantom measurements, ionization chamber data) and/or highly accurate, gold-standard Monte Carlo simulations for a variety of beam configurations and patient geometries. However, this 510(k) summary focuses on the comparative performance against other clinical TPS algorithms for validation.
(103 days)
The intended use of the device is
• quality assurance of patient specific treatment delivery prior to the treatment in IMRT (including VMAT) and 4DRT (e.g. respiratory gating and tumour tracking).
• quality assurance of the radiation delivery system.
The device consists of matrices of semiconductors embedded in a phantom. These matrices are inserted into the radiation field of a medical linear accelerator. If radiation (from a radiotherapy treatment field) hits the semiconductors, a signal is created and transferred to a computer, where it is analysed and, among other things, compared with the intended dose distribution.
The new device Delta4 Phantom+ MR is equivalent in form and function to the cleared device Delta4 Phantom+, but has been verified as an MR-conditional product.
The provided text describes a medical device called the "Delta4 Phantom+ MR", which is a pre-treatment verification system for quality assurance in radiation therapy. It is a new version of an existing device, the "Delta4 Phantom+", with the key difference being that the new device is MR-conditional.
Here's an analysis of the acceptance criteria and study information:
Acceptance Criteria and Reported Device Performance
The document does not explicitly state numerical acceptance criteria in a table format with corresponding reported device performance values. Instead, it describes equivalence to the predicate device. The core acceptance criterion for the new device is its equivalence in safety, design, and performance to the predicate device ("Delta4 Phantom+"), with the added capability of being MR-conditional.
Specifically, the document states:
- "The new device is equivalent, in most cases identical with the predicate devices regarding safety, design and performance."
- "Comparison tests have been performed with the predicate device in clinical and nonclinical situations. Among others pre-treatment verification measurements of numerous treatment plans have been performed with both the new device and the predicate device. The results were compared with each other and it was determined that the results had very good correlation."
Therefore, the acceptance criteria are implicitly that the Delta4 Phantom+ MR performs equivalently to the Delta4 Phantom+ in pre-treatment verification measurements for radiation therapy, specifically in IMRT, VMAT, and 4DRT QA, while additionally being MR-conditional. The reported device performance is that it achieved "very good correlation" with the predicate device.
| Acceptance Criterion | Reported Device Performance |
|---|---|
| Equivalence in safety, design, and performance to Delta4 Phantom+ (predicate device) in clinical and nonclinical situations for QA measurements. | Achieved "very good correlation" with the predicate device in pre-treatment verification measurements of numerous treatment plans. |
| MR-conditional capability. | Successfully designed to be MR-conditional by substituting ferromagnetic parts with non-magnetic materials (e.g., aluminum). |
Study Information
The document provides limited details about a formal study with a defined test set.
2. Sample size used for the test set and the data provenance:
- Sample Size: The document mentions "numerous treatment plans" were used for pre-treatment verification measurements. A specific numerical sample size is not provided.
- Data Provenance: Not explicitly stated (e.g., country of origin, retrospective/prospective). It suggests "clinical and nonclinical situations," implying some real-world or simulated clinical scenarios.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- This information is not provided. The comparison is against the predicate device, not against an expert-established ground truth in the traditional sense of diagnostic AI.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:
- An adjudication method is not applicable/not provided as the comparison is between two devices' measurement outputs, not expert interpretations of data.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of human reader improvement with vs. without AI assistance:
- An MRMC study was not conducted. This device is a measurement and quality assurance tool, not an AI-assisted diagnostic device requiring human reader improvement studies.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
- The performance comparison described is essentially a standalone (device-only) comparison between the new device and the predicate device: the measurements themselves are acquired without human intervention, and the software then analyzes them.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- The "ground truth" for this device's performance comparison is the measurements provided by the predicate device, which is already cleared and considered reliable for QA purposes. The new device's measurements are compared to those of the predicate device.
8. The sample size for the training set:
- This device is not an AI/machine learning model in the typical sense that would require a "training set." It is a physical measurement device. Therefore, a training set is not applicable/not provided.
9. How the ground truth for the training set was established:
- As a training set is not applicable, this information is not provided.
(246 days)
The intended use of the device is
- quality assurance of patient specific treatment delivery during and before external radiotherapy treatment, including IMRT, VMAT and 4DRT (respiratory gating and tumour tracking)
- quality assurance of the radiation delivery system.
The device is an instrument specifically designed to check delivered treatment plans by medical accelerator systems used for radiation therapy applications for quality assurance (QA) purposes.
The device consists of a 2-dimensional matrix of semiconductors and associated acquisition, display and analysis computer programs. It is a quality assurance (QA) device enabling detailed mapping of therapeutic radiation dose distributions. This dose information is used in both quantitative and subjective assessments of the performance of radiation therapy treatment planning systems and therapeutic radiation delivery systems.
The detector matrix is inserted into the radiation field of a medical linear accelerator. If radiation (from a radiotherapy treatment field) hits the detectors a signal is created and transferred to a computer where it is evaluated (e.g. by comparing planned with measured dose distributions).
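Comparing planned with measured dose distributions, as described above, can at its simplest be a per-detector relative-deviation check. The tolerance value and function name below are illustrative assumptions, not the device's actual analysis:

```python
def flag_deviations(planned, measured, tol_pct=3.0):
    """Compare measured detector doses against planned doses and flag any
    detector whose relative deviation exceeds tol_pct percent.

    planned, measured : sequences of dose values in the same detector order;
    planned values must be non-zero for the relative deviation to be defined.
    Returns a list of (index, deviation_pct, out_of_tolerance) tuples.
    """
    report = []
    for i, (p, m) in enumerate(zip(planned, measured)):
        dev = 100.0 * (m - p) / p          # signed relative deviation in %
        report.append((i, round(dev, 2), abs(dev) > tol_pct))
    return report
```

For example, `flag_deviations([2.0, 1.0], [2.02, 1.05])` flags only the second detector, whose 5% deviation exceeds the assumed 3% tolerance.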
The provided text does not contain information about specific acceptance criteria, a study proving device performance against those criteria, or details regarding ground truth establishment, expert involvement, or sample sizes related to algorithms. The document is a 510(k) premarket notification summary for the "Delta4 Discover" and "Delta4 Discover+" devices, which are quality assurance (QA) tools for radiation therapy.
Here's an analysis of what is available in the text, and where specific requested information is missing:
1. A table of acceptance criteria and the reported device performance
- Acceptance Criteria: Not explicitly stated in the document. The document mentions that "The new device is designed in full accordance with the applicable sections of the following standards: IEC 60601-1, IEC 60601-1-2", which are general safety and essential performance standards, not specific clinical performance acceptance criteria for its QA function.
- Reported Device Performance: Not detailed in terms of specific metrics like accuracy, sensitivity, or specificity. The document broadly states, "The new device is superior or at least equivalent with the predicate devices regarding safety, design and performance." It does not provide any quantitative comparison or performance data to support this claim.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- This information is not present in the provided text. No test set or data provenance is mentioned.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- This information is not present in the provided text. Given the device is a QA tool for radiation dose distributions, ground truth would likely refer to highly accurate dose measurements or calculations, not necessarily expert image interpretation. However, the exact methodology is not described.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- This information is not present in the provided text.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of human reader improvement with vs. without AI assistance
- This information is not present in the provided text. The device is described as a QA instrument for measuring radiation dose, not an AI-assisted diagnostic or interpretation tool for human readers. Therefore, an MRMC study related to human reading improvement with AI assistance would not be applicable or expected for this type of device.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- The document implies standalone performance for the device in its role as a QA instrument, since it enables "detailed mapping of therapeutic radiation dose distributions" and evaluates measurements "e.g. by comparing planned with measured dose distributions." However, no specific study methodology for standalone performance is described, and no quantitative results are given.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- The document does not explicitly state the type of ground truth used for any studies. For a radiation dose QA device, the "ground truth" for dose distributions would typically be based on highly accurate measurements from a reference dosimetry system or physics-based calculations.
8. The sample size for the training set
- This information is not present in the provided text. The document does not mention a training set, as it describes a measurement device, not an AI/machine learning algorithm that would typically require a training set.
9. How the ground truth for the training set was established
- This information is not present in the provided text, as no training set is mentioned.
(107 days)
The intended use of the device is:
· quality assurance of patient specific treatment delivery prior to the treatment in IMRT (including VMAT) and 4DRT (e.g. respiratory gating and tumour tracking).
· quality assurance of the radiation delivery system.
The device consists of matrices of semiconductors embedded in a phantom. These matrices are inserted into the radiation field of a medical linear accelerator. If radiation (from a radiotherapy treatment field) hits the semiconductors, a signal is created and transferred to a computer, where it is analysed and, among other things, compared with the intended dose distribution.
This document is a 510(k) premarket notification for the Delta4 Phantom+ device, which is an improved version of the predicate device, Delta4. The submission focuses on demonstrating substantial equivalence to the predicate, rather than an independent clinical study to establish acceptance criteria and performance against those criteria as would be typical for a device with a new clinical function or significant safety/performance changes.
Therefore, the information you're requesting regarding acceptance criteria and performance studies in the context of an AI device is not fully available or directly applicable from this specific document. This document describes a medical device used for Quality Assurance of patient-specific radiation treatment delivery in radiotherapy. It is not an AI/ML powered device in the context of diagnostic or predictive tasks.
However, I can extract the closest analogous information available from the document regarding performance comparison against its predicate device.
Here's a breakdown of what can be extracted based on your request, with caveats where the information isn't directly present or relevant to AI device assessment:
1. Table of Acceptance Criteria and Reported Device Performance
Note: The document does not explicitly define specific numerical acceptance criteria for performance beyond stating that the measurements "had very good correlation" with the predicate device. This is typical for a 510(k) submission where substantial equivalence to a legally marketed predicate is being demonstrated for a physical quality assurance device, rather than a novel diagnostic AI algorithm.
| Acceptance Criteria (Implied) | Reported Device Performance (Summary) |
|---|---|
| Equivalence or superiority to predicate device (Delta4) in safety, effectiveness, design, and performance for Quality Assurance of patient-specific radiation treatment delivery in IMRT (including VMAT) and 4DRT. | "The results [from pre-treatment verification measurements] were compared with each other and it was determined that the results had very good correlation." "The new device is superior or at least equivalent, in many cases identical with the predicate devices regarding safety, effectiveness, design and performance." |
| Wireless operation (battery-powered, Wi-Fi communication) | Achieved, improving convenience, setup speed, and safety (due to elimination of cable entangling risk). |
| Main Analysis Parameters (Dose Difference, Distance to agreement, Gamma index) | Same as predicate device. |
| Compliance with existing medical device standards (IEC 61010-1, IEC 60601-1-2) | Met. |
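Of the three analysis parameters listed in the table, distance to agreement (DTA) is the least self-explanatory: for a measured point, it is the distance to the nearest location where the planned distribution reaches the measured dose. A 1-D sketch with linear interpolation between grid points follows; the function name and interface are my own, not the device's:

```python
def dta_mm(planned, spacing_mm, measured_dose, measured_x_mm):
    """Distance-to-agreement for one measured point (1-D illustration).

    planned       : planned dose profile sampled every spacing_mm millimetres
    measured_dose : dose measured at position measured_x_mm
    Returns the distance (mm) from measured_x_mm to the nearest position
    where the linearly interpolated planned profile equals measured_dose,
    or float('inf') if the profile never reaches that dose level.
    """
    best = float('inf')
    for j in range(len(planned) - 1):
        a, b = planned[j], planned[j + 1]
        if min(a, b) <= measured_dose <= max(a, b):
            if a == b:
                x_cross = j * spacing_mm
            else:
                # linear interpolation to the crossing position
                frac = (measured_dose - a) / (b - a)
                x_cross = (j + frac) * spacing_mm
            best = min(best, abs(x_cross - measured_x_mm))
    return best
```

The gamma index then combines this distance term with the dose-difference term, so a point passes if it is close enough in either dose or position (or a weighted mix of both).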
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: Not explicitly stated in terms of a specific number of cases or measurements. The document mentions "numerous treatment plans."
- Data Provenance: Not specified (e.g., country of origin). The study involved "pre-treatment verification measurements." Given the manufacturer is Swedish, data may originate from European or other regions where the predicate device is used.
- Retrospective or Prospective: Not explicitly stated, but "pre-treatment verification measurements" suggest they were likely conducted as part of internal testing or possibly in a clinical setting but not necessarily as a formal prospective clinical trial for this submission.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- Not Applicable. For this type of physical quality assurance device, the "ground truth" and performance evaluation are typically based on physical measurement comparisons, not expert human interpretation for diagnostic tasks. The device itself is designed to measure physical radiation dose distributions. Performance is assessed against the expected dose distribution from the treatment plan and comparison with a known, validated device (the predicate).
4. Adjudication Method for the Test Set
- Not Applicable. As the "ground truth" is not established by human experts in a diagnostic context, there is no expert adjudication method like 2+1 or 3+1. The comparison is between the new device's measurements and those of the predicate device, against expected dose distributions.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- No. This document describes a physical measurement device for quality assurance in radiotherapy, not an AI diagnostic tool. Therefore, an MRMC study comparing human reader performance with and without AI assistance is not relevant or reported here.
6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study
- Yes, analogous to standalone performance. The "performance" assessment described for the Delta4 Phantom+ is inherently standalone. The device measures dose distributions and compares them to intended distributions via its integrated software. The validation involved comparing these measurements against the predicate device. Human involvement would be in operating the device and interpreting its outputs, but the measurement and comparison itself is an algorithmic function of the device. The "very good correlation" indicates its standalone performance similarity to the predicate.
7. Type of Ground Truth Used
- The "ground truth" for evaluating this device's performance is multi-faceted:
- Intended Dose Distribution: The theoretical dose distribution prescribed by the treatment planning system.
- Predicate Device Measurements: Measurements obtained from the legally marketed predicate device (Delta4), which is assumed to be accurate and provide a valid reference.
8. Sample Size for the Training Set
- Not Applicable. This is a physical hardware and software device for measurements, not an AI/ML model that is "trained" on a dataset in the conventional sense. The device's algorithms are based on physics and signal processing principles.
9. How the Ground Truth for the Training Set Was Established
- Not Applicable. See point 8. No "training set" or "ground truth for training set" in the context of an AI/ML model is relevant to this device. Its operational principles are founded on established physics and engineering.
(87 days)
The intended use of Delta4 is quality assurance of patient specific treatment delivery prior to the treatment in IMRT and 4DRT (respiratory gating and tumor tracking).
The new device consists of:
- software
- phantom
- detector arrays
- multi-channel electrometer
- connection cables
When measurements are to be performed, the device is typically placed on the patient table (or couch). The device is then exposed, typically from different angles.
Here's an analysis of the provided text regarding the acceptance criteria and study for the Delta4 device:
Based on the provided document, the device described is the Delta4, a quality assurance system for patient-specific treatment delivery in IMRT (Intensity Modulated Radiation Therapy) and 4DRT (4-Dimensional Radiation Therapy, including respiratory gating and tumor tracking).
1. Table of Acceptance Criteria and Reported Device Performance
The document is a 510(k) summary, which focuses on demonstrating substantial equivalence to predicate devices rather than establishing novel acceptance criteria for a new clinical claim. Therefore, explicit, quantified acceptance criteria and performance metrics in the typical sense (e.g., sensitivity, specificity, accuracy against a gold standard) are not detailed in this document.
Instead, the document asserts:
- The new device "is partly better, partly equivalent and in some cases identical with the predicate devices regarding safety, design and performance." (Section 9 CONCLUSION)
- It improves workflow: The customer can automatically perform quality assurance in one single session, avoiding two exposures (IMRT vs. 4DRT measurement instruments).
- Technological advantages are listed for its semiconductor detectors over ionization chambers and films/TLD (smaller volume, no external high voltage, high-resolution/online readout).
Summary Table (based on document's claims, not quantitative metrics):
| Acceptance Criteria Category | Reported Device Performance (based on claims) |
|---|---|
| Safety | Equivalent to predicate devices; designed for IEC 601-1 and IEC 601-1-2 conformance; inherently safe due to absence of high voltage in semiconductor technology. |
| Design | Equivalent/identical with predicate devices. |
| Performance | Equivalent/partly better than predicate devices, especially regarding workflow for IMRT and 4DRT in single session. Semiconductor detectors offer advantages like smaller volume, no external HV, faster readout. |
| Intended Use | Achieves quality assurance of patient-specific treatment delivery prior to IMRT and 4DRT. |
2. Sample Size Used for the Test Set and Data Provenance
The document does not specify a discrete "test set" sample size or data provenance in the context of a clinical performance study. The 510(k) summary focuses on design characteristics and technological equivalence rather than presenting results from a clinical trial with a defined patient cohort.
The statement "Clinical tests are not necessary; however, the device has been tested in a clinical environment" (Section 8) implies some form of evaluation but provides no details on:
- The number of cases or studies involved.
- The country of origin for any data generated.
- Whether any data was retrospective or prospective.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
This information is not provided in the document. As clinical tests with explicit ground truth establishment were deemed "not necessary" for this 510(k), no details on experts or their qualifications for establishing ground truth are present. The "typical user" is identified as a "Physicist or other dosimetry expert," suggesting these professionals would evaluate the device's output in a real-world setting.
4. Adjudication Method for the Test Set
Since no specific "test set" and ground truth establishment process are described, an adjudication method is not mentioned.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size of AI Improvement
No MRMC comparative effectiveness study is mentioned. The device is a quality assurance system for radiation therapy, an "algorithm only" type of device for measurement and comparison, not an AI-assisted diagnostic or treatment planning tool that would typically involve human readers.
6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done
The device is inherently a standalone (algorithm only) performance system in terms of its core function. It measures dose distribution using detectors and its software compares this measured dose to a calculated dose plan from an external Treatment Planning System (TPS). The "human-in-the-loop" aspects involve the physicist interpreting the comparison results and making decisions based on them.
The comparison of calculated and measured dose is performed "inside the device's software" (Section 4.2, point 6), indicating its standalone algorithmic function.
7. The Type of Ground Truth Used (Expert Consensus, Pathology, Outcomes Data, etc.)
The document describes the device's function as comparing a measured dose distribution to a calculated dose distribution (from a TPS). In this context, the "ground truth" for the device's comparison is the calculated dose distribution from the Treatment Planning System. The device's purpose is to verify if the actual delivered dose (measured) matches the planned dose (calculated).
8. The Sample Size for the Training Set
The document does not mention a "training set" as would be typical for machine learning-based algorithms. The device's technology appears to be based on established physics principles of semiconductor detectors and dose measurement/comparison, not on a machine learning model that requires training data.
9. How the Ground Truth for the Training Set Was Established
As no training set is mentioned or implied, the question of how its ground truth was established is not applicable to this document.