Search Results
Found 7 results
510(k) Data Aggregation
(267 days)
DOSIsoft SA
PLANET Onco Dose is a standalone software intended to manage, process, display and analyze anatomical and functional images. It provides tools and functionalities to assist in medical diagnosis and therapy response assessment, to assist in the contouring of regions of interest, and to assist in internal dosimetry computation for radionuclide based therapies. The modalities of these medical imaging systems include CT, MRI, SPECT, PET, XA, planar scintigraphy, RT Struct and RT Dose as supported by ACR/NEMA DICOM 3 standard format.
PLANET Onco Dose is intended for retrospective determination of dose only and should not be used to deviate from approved radioactive products, product dosing and administration instructions.
PLANET Onco Dose is dedicated to be used by qualified medical professionals in Molecular Imaging, and/or Medical Oncology.
PLANET Onco Dose provides the User with the means to segment structures in medical image volumes by providing dedicated delineation, contouring and propagation tools for both tumors and normal tissues (i.e. Regions of Interest (ROI)).
PLANET Onco Dose provides tools to display, co-register (rigid and deformable), perform quantification for assessment and response evaluation of patients undergoing a course of oncology treatment.
PLANET Onco Dose allows import / export of results (contours and dosimetries) to / from any DICOM compliant system (e.g. Treatment Planning Systems, PACS).
PLANET Onco Dose allows the radiation doses received by tissues as a result of radionuclide administration to be computed in 3D. Dose can be computed with or without tissue density correction using two models depending on the considered isotopes.
PLANET Onco Dose provides the User with the means to perform modeling of absorption and elimination kinetics for Radiopharmaceutical Therapy (RPT). Time-integrated activity and dose rate can be calculated from three kinds of clinical setups involving full 3D image acquisitions, hybrid 2D/3D image acquisitions and single time point approaches.
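In general terms, the kinetics modeling described above amounts to fitting the measured activity in a region over time and integrating the fitted curve to obtain the time-integrated activity. The summary does not disclose DOSIsoft's implementation; the following is a minimal, hypothetical sketch of a mono-exponential fit for a multi-time-point acquisition, using SciPy, with made-up numbers.

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, a0, lam):
    """Mono-exponential washout model: A(t) = a0 * exp(-lam * t)."""
    return a0 * np.exp(-lam * t)

# Hypothetical data: activity in a region of interest (MBq) measured at
# several time points (hours) after administration.
t_hours = np.array([4.0, 24.0, 72.0, 168.0])
a_mbq = np.array([850.0, 610.0, 240.0, 35.0])

# Fit the effective clearance constant (biological + physical decay).
(a0_fit, lam_fit), _ = curve_fit(mono_exp, t_hours, a_mbq, p0=(a_mbq[0], 0.01))

# Time-integrated activity (MBq*h): analytic integral of the fitted model
# from time zero to infinity.
tia = a0_fit / lam_fit
print(f"A0 = {a0_fit:.0f} MBq, lambda_eff = {lam_fit:.4f} 1/h, TIA = {tia:.0f} MBq*h")
```

A single-time-point approach would instead combine one measurement with an assumed effective half-life, but the specific methods used by PLANET Onco Dose are not described in this summary.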
PLANET Onco Dose supports isotopes with beta and gamma contributions.
PLANET Onco Dose provides tools for dosimetry comparison for sequential treatments and for dosimetry summation.
PLANET Onco Dose is a software platform dedicated to medical diagnosis aid, contouring, internal dosimetry computation and therapy response assessment, using molecular imaging modalities.
PLANET Onco Dose is a modular software suite composed of three elements:
- PLANET - Core System: reviewing, fusion and registration of multi-modal anatomical (computed tomography (CT), magnetic resonance imaging (MRI), X-ray angiography (XA)) and functional (positron emission tomography (PET), single photon emission computed tomography (SPECT), planar scintigraphy) series;
- PLANET Onco - Oncology Module: contouring of regions of interest, tumor segmentation, quantification, therapy response assessment;
- PLANET Dose - Dosimetry Module: pharmacokinetics modeling and internal dosimetry computation for locally regulatory approved pharmaceuticals.
The provided document describes the PLANET Onco Dose (3.2) software, intended for medical image management and processing, dosimetry computation, and therapy response assessment. While it details the device's intended use, technological comparisons, and that performance testing was conducted, it does not provide specific acceptance criteria or the numerical results of performance, functional, or algorithmic testing.
Therefore, I cannot populate the table or answer most of the questions using only the provided text.
Here's what can be extracted and what information is missing:
1. Table of Acceptance Criteria and Reported Device Performance:
Acceptance Criteria (e.g., Specificity, Sensitivity, Accuracy, Dice Score, ROC AUC) | Reported Device Performance |
---|---|
Not specified in the provided document. | Not specified in the provided document. |
2. Sample size used for the test set and the data provenance:
- Test Set Sample Size: Not specified.
- Data Provenance: Not specified. The document states "validation activities under clinically representative conditions" but does not detail the origin (e.g., country, hospital, retrospective/prospective) of the data used for actual testing.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Number of Experts: Not specified.
- Qualifications of Experts: Not specified.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- Not specified.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- Not specified. The document mentions "The critical evaluation of the relevant scientific literature confirms that the choices made for the TRT dosimetry computation methods implemented within PLANET Onco Dose are those recommended by the scientist international community," which might imply comparisons, but it doesn't describe an MRMC study related to human performance improvement with AI assistance for PLANET Onco Dose (3.2) itself.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:
- Yes, implicitly. The document states, "PLANET Onco Dose was submitted to performance, functional and algorithmic testing," and later, "The performance obtained by the demonstration of performances lead to clearly define the area of application of the various internal dosimetry methods." The comparison of its dose computation algorithms with Monte Carlo methods also suggests standalone algorithmic evaluations. However, specific standalone performance metrics are not provided.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Not explicitly stated for all tests. For the dosimetry comparison, it mentions "Dose computation algorithms available in PLANET Onco Dose were compared with Monte Carlo method and the results showed consistency between evaluated methods." This implies Monte Carlo results may have been used as a reference/ground truth for dosimetry algorithm validation. For other functionalities like contouring, it's not specified how ground truth was established, but typically this would involve expert consensus on medical images.
8. The sample size for the training set:
- Not specified. The document does not describe a training set, as it focuses on the performance and validation of the software. This suggests it's likely a rule-based or conventional algorithmic software rather than a deep learning AI model that requires a distinct training phase.
9. How the ground truth for the training set was established:
- Not applicable/Not specified, as no training set is described.
Summary of Device and Study Information from the Document:
PLANET Onco Dose (3.2) is a standalone software for managing, processing, displaying, and analyzing anatomical and functional images. It assists in medical diagnosis, therapy response assessment, contouring of regions of interest, and internal dosimetry computation for radionuclide-based therapies.
Key Study Information:
- Testing Conducted: Performance, functional, and algorithmic testing, risk management assessment (including cybersecurity), and validation activities under clinically representative conditions.
- Workflows Covered: Standard SIRT, Full SPECT/CT pharmacokinetics for MRT, 2D/3D hybrid pharmacokinetics for MRT, Single time point pharmacokinetics for MRT.
- Dosimetry Algorithm Comparison: PLANET Onco Dose's Voxel S Value dose kernel convolution algorithm and local energy deposition algorithm were compared with the Monte Carlo method, showing "consistency."
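For context, the voxel S value approach computes absorbed dose by convolving the voxelized time-integrated activity map with a precomputed dose kernel (the voxel S values), whereas local energy deposition scales the activity in each voxel by a fixed energy-per-decay factor. The snippet below is a schematic illustration of the convolution step only, with a made-up kernel; it is not DOSIsoft's implementation, and real S-value kernels are isotope- and voxel-size-specific (typically derived from Monte Carlo tabulations such as those underlying MIRD Pamphlet 17).

```python
import numpy as np
from scipy.signal import fftconvolve

def vsv_dose(tia_map_bq_s, s_kernel_gy_per_bq_s):
    """Absorbed dose map (Gy) from a voxelized time-integrated activity map
    (Bq*s per voxel) convolved with a voxel S value kernel
    (Gy per Bq*s, source voxel at the kernel centre)."""
    return fftconvolve(tia_map_bq_s, s_kernel_gy_per_bq_s, mode="same")

# Hypothetical inputs: a 64^3 time-integrated activity map and a small
# illustrative kernel (not a real tabulated kernel).
tia = np.zeros((64, 64, 64))
tia[30:34, 30:34, 30:34] = 5.0e9      # Bq*s in a small uptake region
kernel = np.zeros((7, 7, 7))
kernel[2:5, 2:5, 2:5] = 1.0e-12       # illustrative cross-dose terms (Gy per Bq*s)
kernel[3, 3, 3] = 2.0e-11             # illustrative self-dose term

dose_gy = vsv_dose(tia, kernel)
print(f"max voxel dose: {dose_gy.max():.2f} Gy")
```

A local energy deposition calculation would skip the convolution entirely and assign each voxel's dose from its own time-integrated activity, which is why the two methods are expected to diverge mainly where activity gradients are steep.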
The document concludes that the results demonstrate the safety and effectiveness of PLANET Onco Dose (3.2) and that it is substantially equivalent to its predicate devices (PLANET Onco Dose (3.1) and Torch™). However, it lacks the quantitative results of these tests and the specifics of the methods (e.g., sample sizes, expert qualifications) used to establish ground truth or evaluate performance against acceptance criteria.
(232 days)
DOSIsoft SA
ThinkQA Edition 2 software is used to verify that the dose distribution calculated by a treatment planning system for external beam radiation therapy is consistent with treatment plan parameters.
Based on read-in treatment plan data, ThinkQA Edition 2 re-calculates a dose distribution in a three-dimensional representation of a patient or a phantom and provides dose-volume indicators which compare it to the initial dose distribution calculated by the treatment planning system.
ThinkQA Edition 2 is not a treatment planning system. It is a Quality Assurance software only to be used by qualified and trained radiation therapy personnel.
ThinkQA Edition 2 is a standalone software device used within a radiation therapy clinic which is designed to perform secondary dose calculation based on DICOM RT treatment plan data provided by a treatment planning system.
ThinkQA Edition 2 is only meant for quality assurance purpose. It cannot define or transmit any instructions to a delivery device, nor does it control any other medical device.
ThinkQA Edition 2 performs dose calculation verifications for radiation therapy plans by doing an independent calculation of dose distribution in a three-dimensional representation of a phantom. Dose distribution is initially calculated by a treatment planning system which is a software tool that allows to define and transmit treatment plan parameters that will further be used for treatment delivery. Based on treatment plan parameters, ThinkQA Edition 2 re-calculates dose distributions using a proprietary Collapsed Cone Convolution algorithm. It uses CT images (real patient anatomy) to perform dose computation with Collapsed Cone Convolution.
ThinkQA Edition 2 compares the reference TPS dose distribution with its own calculation using specific indicators such as 3D gamma agreement index on significant volumes. ThinkQA Edition 2 computes Gamma Passing Rate for automatic dose areas and anatomical structures: Planning Target Volumes (PTVs) and Organs at Risk (OARs).
Based on these indicators, ThinkQA Edition 2 displays a pass/fail status that informs the user whether or not the acceptance criteria that they have defined are met. The acceptance criteria do not in any way provide information that could be used to determine whether or not the treatment plan is clinically relevant; they only evaluate the consistency between treatment plan parameters and the dose distribution computed by the TPS.
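The gamma index referenced above combines a dose-difference criterion and a distance-to-agreement criterion into a single per-point metric, and the passing rate is the fraction of evaluated points with gamma <= 1. The sketch below is a deliberately simple, brute-force 2D illustration of the concept with global 2%/2 mm criteria (the tolerance quoted in the performance discussion further down this page); it is not ThinkQA Edition 2's optimized 3D implementation.

```python
import numpy as np

def gamma_passing_rate(ref, ev, spacing_mm, dd_pct=2.0, dta_mm=2.0, threshold=0.1):
    """Brute-force global gamma analysis of two 2D dose maps (Gy).
    Only reference points above `threshold` * max(ref) are evaluated."""
    dd_abs = dd_pct / 100.0 * ref.max()            # global dose-difference criterion
    ny, nx = ref.shape
    yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    gammas = []
    for iy in range(ny):
        for ix in range(nx):
            if ref[iy, ix] < threshold * ref.max():
                continue
            dist2 = ((yy - iy) ** 2 + (xx - ix) ** 2) * spacing_mm ** 2
            dose2 = (ev - ref[iy, ix]) ** 2
            gamma2 = dist2 / dta_mm ** 2 + dose2 / dd_abs ** 2
            gammas.append(np.sqrt(gamma2.min()))
    return 100.0 * np.mean(np.array(gammas) <= 1.0)

# Hypothetical example: a smooth reference dose map and an evaluated map
# with a 1% global offset.
ref = 2.0 * np.outer(np.hanning(50), np.hanning(50))   # Gy
ev = 1.01 * ref
print(f"passing rate: {gamma_passing_rate(ref, ev, spacing_mm=1.0):.1f}%")
```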
ThinkQA Edition 2 has been designed to be compatible with adaptive radiotherapy workflows. This includes a number of mandatory features:
- User interface design, grouping verifications for adaptive plans under a single primary plan verification;
- Automatic computation upon reception of DICOM data from the TPS;
- Sufficient speed of computation, compatible with adaptive workflow with patient waiting on couch.
The performance of ThinkQA Edition 2 makes it suitable for the following photon treatment delivery techniques: Static beams, IMRT Step & Shoot, Dynamic IMRT with fixed gantry and Rotational IMRT (VMAT).
In order to guarantee the independence of the secondary dose check, the beam models are not intended to be adjusted to match the user's reference TPS. The user only provides their actual measured dose rate in reference conditions and an HU-density conversion table.
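How ThinkQA Edition 2 stores or interpolates that HU-density table is not described in the summary; a common approach is simple piecewise-linear interpolation between user-measured calibration points, sketched below with made-up values.

```python
import numpy as np

# Hypothetical HU-to-mass-density calibration points (user-measured).
hu_points = np.array([-1000.0, -700.0, 0.0, 300.0, 1200.0, 3000.0])
density_g_cm3 = np.array([0.001, 0.30, 1.00, 1.18, 1.70, 2.80])

def hu_to_density(hu_values):
    """Piecewise-linear HU -> mass density (g/cm^3) lookup for CT data."""
    return np.interp(hu_values, hu_points, density_g_cm3)

ct_slice = np.array([[-1000.0, -50.0], [40.0, 900.0]])
print(hu_to_density(ct_slice))
```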
ThinkQA Edition 2 runs on workstations or virtual machines with the Linux CentOS 7 operating system. Its web interface is accessible from any system that supports the ThinkQA Edition 2 web application. ThinkQA Edition 2 is able to communicate with other equipment installed on the network complying with the DICOM and DICOM RT industry standards.
The FDA 510(k) summary for ThinkQA (Edition 2) describes a software device for quality assurance in radiation therapy. The document outlines comparisons to a predicate device (MU2net) and evidence for substantial equivalence, including performance evaluations.
Here's an analysis of the acceptance criteria and the study that proves the device meets them, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly present a formal "acceptance criteria" table with specific quantitative thresholds that the device had to meet for its performance evaluation, nor does it provide detailed numerical outputs beyond qualitative statements. However, it implicitly defines a performance expectation related to dosimetric evaluation:
Acceptance Criteria (Implicit) | Reported Device Performance |
---|---|
Agreement with measured data for beam models (per AAPM WG 219 recommendations). | "The agreement between ThinkQA Edition 2, the primary TPS and the measured data was found to be excellent in terms of beam shape and absolute dose." |
Dosimetric evaluation on varied plans using tight gamma index tolerance (2%/2mm, global, 95% passing rate). | "The dosimetric evaluation was performed on a large variety of plans with a growing complexity and a tight gamma index tolerance (2%/2mm, global, 95% of passing rate). The overall performance of ThinkQA Edition 2 in terms of beam modelling was found to be satisfactory for the three beam models, with all the tested plans respecting the gamma tolerances. An exception should be noted for a few number of Elekta Unity 7 MV FFF plans sensitive to the electron return effect." |
Consistency with predicate device's decision-making on plan validation/rejection. | "The same set of plans were evaluated with the predicate MU2net with the recommended relative tolerance of 5% dose difference with reference dose. ThinkQA Edition 2 and MU2net supported the same decision on whether to validate or reject the evaluated plans. Additionally for situations where MU2net control was inconclusive (e.g. prescription point located outside of the irradiated volume) the full 3D gamma evaluation provided by ThinkQA Edition 2 allowed a decision making." |
Mitigation of cybersecurity threats and vulnerabilities. | "The system tests demonstrate that product outputs have met the product input requirements with a mitigation of threats and vulnerabilities as far as possible." |
2. Sample Size for the Test Set and Data Provenance
- Test Set Sample Size: The document refers to "a large variety of plans" for the dosimetric evaluation, but a specific number is not provided. For the beam modeling unique to this submission, there were "three beam qualities (6 MV, 6 MV FFF and Elekta Unity 7 MV FFF) and two primary TPS (RayStation and Monaco)."
- Data Provenance: The document does not specify the country of origin for the data or whether the studies were retrospective or prospective. It implies the data was generated internally for testing and evaluation purposes. The "measured depth dose curves and profiles" suggest real-world or phantom measurements were performed.
3. Number of Experts and Qualifications
- Number of Experts: Not explicitly stated. The studies were likely conducted by the manufacturer's internal team, including physicists and engineers specialized in medical physics and radiation therapy.
- Qualifications of Experts: Not explicitly stated, but the context implies expertise in radiation oncology physics, treatment planning systems, and dose calculation algorithms ("qualified and trained radiation therapy personnel"). The mention of "AAPM working group 219" recommendations suggests adherence to professional standards in radiation oncology physics.
4. Adjudication Method for the Test Set
- Adjudication Method: Not applicable in the traditional sense of human interpretation of results. The device's performance was evaluated against physical measurements and established dosimetric metrics (e.g., gamma index passing rate, dose difference). The comparison to the predicate device acted as a form of "adjudication" for decision consistency.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- MRMC Study: No, a multi-reader multi-case (MRMC) comparative effectiveness study was not performed. This device is a quality assurance software that performs an objective, mathematical comparison of dose distributions rather than aiding human readers in diagnosis or interpretation that would necessitate an MRMC study. Its purpose is to verify consistency based on defined parameters, not to improve human diagnostic performance.
6. Standalone (Algorithm Only) Performance
- Standalone Performance: Yes, the described performance evaluation (dosimetric evaluation, beam modeling) is a standalone assessment of the algorithm's ability to calculate dose distributions and compare them to reference data. The device's output (gamma passing rate, pass/fail status) is based solely on its internal calculations and comparisons, without human intervention in the calculation or the determination of the result itself. Human users then interpret this output.
7. Type of Ground Truth Used
- Ground Truth Type:
- Measured Data: For beam modeling, the ground truth was "corresponding measured depth dose curves and profiles," indicating physical measurements.
- Reference Treatment Planning System (TPS) Dose Distribution: For the clinical performance evaluation and comparison, the ground truth was implicitly the dose distribution calculated by the "reference TPS" (RayStation and Monaco), against which ThinkQA's calculations were compared using metrics like the gamma index.
- Predicate Device Output: For consistency checks, the "decision" (validation or rejection) of the predicate device (MU2net) served as a comparative ground truth.
8. Sample Size for the Training Set
- Training Set Sample Size: The document does not explicitly discuss a separate "training set" in the context of a machine learning model, as the dose calculation for ThinkQA Edition 2 is based on a "proprietary Collapsed Cone Convolution algorithm" and beam models, rather than a data-driven machine learning approach that would necessitate a distinct training phase with labeled data in the same way. The beam modeling process involves systematic adjustments and evaluations, which could be considered an iterative tuning or "training" specific to dose calculation, but a specific "training set size" is not applicable in the typical AI/ML sense.
9. How Ground Truth for the Training Set Was Established
- Ground Truth for Training Set Establishment: Since the core dose calculation algorithm (Collapsed Cone Convolution) is a physics-based model, it does not rely on labeled training data in the way a machine learning algorithm would. The "beam modeling process" involved:
- Comparison of computed depth dose curves and profiles against "measured depth dose curves and profiles." These physical measurements serve as the ground truth for calibrating and validating the accuracy of the beam models within the CCC algorithm for different beam qualities and TPS.
- The goal was for the beam models to be independently accurate and not necessarily "adjusted to match the user's reference TPS" to maintain calculation independence.
(94 days)
DOSIsoft
Treatment plan verification system by double Monitor Unit calculation and in vivo verification management.
MU2net is indicated for use as a quality assurance software to verify conventional and IMRT treatment plans provided by Radiotherapy Treatment Planning Systems and transferred as DICOM RT Plan data sets.
It is also indicated for use as a quality assurance software to manage in vivo measurements performed during the patient treatment fractions.
MU2net is dedicated to be used by highly qualified professionals in medical physics such as Radiation-Oncologists, Medical Physicists and Technicians performing treatment planning.
MU2net is a standalone software designed to perform secondary monitor unit (MU) or dose calculation based on DICOM RT Plan data sets provided by Radiotherapy Treatment Planning Systems.
MU2net is also designed to manage in vivo measurements performed by skin dose detectors (diode in vivo dosimetry).
The DOSIsoft SA MU2net device is a quality assurance software designed to verify conventional and IMRT treatment plans and manage in vivo measurements during patient treatment. The provided document, a 510(k) Premarket Notification Summary, outlines the acceptance criteria and the study that proves the device meets these criteria.
Here's a breakdown of the requested information:
1. Table of Acceptance Criteria and Reported Device Performance:
The document describes MU2net as a "Major Level of Concern" software, requiring robust testing. While it does not provide a specific table of numerical acceptance criteria with corresponding performance metrics (e.g., accuracy percentages or error margins), it generally states that the device meets safety and effectiveness requirements. The performance is assessed through various testing types.
Acceptance Criteria (Inferred) | Reported Device Performance |
---|---|
Safety and Effectiveness | Demonstrated through performance, functional, and algorithmic testing, risk management assessment, and validation activities under clinically representative conditions. MU2net meets user needs and intended use requirements. |
Substantial Equivalence | The intended use, clinical, and technical characteristics (principles of operation, functionalities, and critical performances) are the same as for the predicate device IMSure QA Software (K031975). Differences in functional features do not significantly affect safety or effectiveness. |
Compliance with Regulations | Designed and documented in accordance with FDA "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices." |
Accuracy of MU/Dose Calculation | Based on "formalism booklet 3 - Monitor Unit Calculation for High Energy Photon Beams from ESTRO and Report 12 from Netherlands Commission on Radiation Dosimetry (NCS)." Inspired by J. H. Kung et al. "A monitor unit verification calculation in intensity modulated radiotherapy as a dosimetry quality assurance." (Medical Physics 2000 October; 27(10):2226-30). No specific numerical accuracy given. |
In vivo Measurement Management | Software designed to manage in vivo measurements performed by skin dose detectors (diode in vivo dosimetry), with automatic or manual import of detector measurements. No specific performance metrics given. |
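The cited formalisms amount to an independent point-dose or MU calculation built from the plan parameters and commissioning data (machine output calibration, field-size output factors, depth-dose or TMR data, and so on), which is then compared with the TPS value against a tolerance (a 5% relative tolerance is mentioned elsewhere on this page for the MU2net comparison). The sketch below is a generic, highly simplified illustration of such a check; the factor names and numbers are hypothetical and do not reproduce the ESTRO/NCS formalism or MU2net's algorithm.

```python
def secondary_dose_check(mu, output_cgy_per_mu, s_cp, tmr, tps_dose_cgy, tol_pct=5.0):
    """Very simplified independent point-dose estimate and tolerance check.

    mu                -- monitor units for the beam from the DICOM RT Plan
    output_cgy_per_mu -- machine calibration (cGy/MU in reference conditions)
    s_cp              -- combined collimator/phantom scatter output factor
    tmr               -- tissue-maximum ratio at the verification depth
    tps_dose_cgy      -- dose at the same point reported by the TPS
    """
    independent_dose = mu * output_cgy_per_mu * s_cp * tmr
    deviation_pct = 100.0 * (independent_dose - tps_dose_cgy) / tps_dose_cgy
    return independent_dose, deviation_pct, abs(deviation_pct) <= tol_pct

# Hypothetical single-beam example.
dose, dev, ok = secondary_dose_check(
    mu=120.0, output_cgy_per_mu=1.0, s_cp=0.985, tmr=0.82, tps_dose_cgy=98.5)
print(f"independent dose = {dose:.1f} cGy, deviation = {dev:+.1f}% -> {'PASS' if ok else 'FAIL'}")
```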
2. Sample Size Used for the Test Set and Data Provenance:
The document does not specify the exact sample size used for the test set or the data provenance (e.g., country of origin, retrospective or prospective nature of the data). It broadly mentions "clinically representative conditions" for validation activities.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications:
The document does not provide details on the number of experts used to establish ground truth for the test set or their specific qualifications (e.g., radiologists with X years of experience). It states that the device is "dedicated to be used by highly qualified professionals in medical physics such as Radiation-Oncologists, Medical Physicists and Technicians performing treatment planning," implying that such professionals would be involved in, or validate, the ground truth.
4. Adjudication Method for the Test Set:
The document does not describe any specific adjudication method (e.g., 2+1, 3+1) used for the test set.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done:
The document does not mention a Multi-Reader Multi-Case (MRMC) comparative effectiveness study, nor does it report an effect size of how much human readers improve with AI vs. without AI assistance. The focus of the submission is on standalone software performance and its equivalence to a predicate device, rather than human-in-the-loop performance improvement.
6. If a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Evaluation Was Done:
Yes, a standalone study was done. The device, MU2net, is described as a "standalone software quality control system" and its performance testing focuses on the algorithm's capabilities. The comparative assessment with the predicate device (IMSure QA Software) also centers on the technological characteristics and performance of the software itself.
7. The Type of Ground Truth Used:
The type of ground truth used is inferred from the device's function as a secondary monitor unit (MU) or dose calculation system and in vivo measurement manager. The "ground truth" would be established by:
- Established Physics Formalisms and Reports: The calculation algorithm is "Commissioning based on formalism booklet3 - Monitor Unit Calculation for High Energy Photon Beams from ESTRO and Report 12 from Netherlands Commission on Radiation Dosimetry (NCS)." This implies an established and validated theoretical framework serves as a ground truth for calculations.
- Published Research: Inspired by J. H. Kung et al. "A monitor unit verification calculation in intensity modulated radiotherapy as a dosimetry quality assurance" Medical Physics 2000 October; 27(10):2226-30, suggesting that published, peer-reviewed methods contribute to the ground truth.
- Measured Physical Beam Data: The device uses "measured physical beam data" for dose calculation and "Measured, tabular database stored" for physics data, indicating that real-world physical measurements form a part of the ground truth for calibration and verification.
- In vivo Detector Measurements: For the in vivo verification component, the "Import in vivo dose data" and the management of "detector measurements" would rely on the readings from skin dose detectors as the ground truth for actual patient dose.
8. The Sample Size for the Training Set:
The document does not provide information regarding a specific "training set" sample size. As a "standalone software quality control system" for secondary calculations and in vivo measurement management, it's more likely based on established physics models and calibrated parameters rather than a machine learning model that requires a discrete training set in the typical sense. If internal models are used, the architecture and training data are not described.
9. How the Ground Truth for the Training Set Was Established:
Given the nature of the device as a calculative and management tool based on physics principles, the concept of a "training set" and its ground truth is not explicitly addressed in the document in the typical machine learning context. Instead, the algorithm's basis is rooted in:
- Physics Formalisms: ESTRO and NCS reports.
- Scientific Literature: J.H. Kung et al.'s publication.
- Measured Physical Data: Calibration using measured beam data from linear accelerators.
These form the underlying "ground truth" upon which the software's algorithms are built and validated, rather than a specific labeled dataset used for training an AI model.
(131 days)
DOSIsoft
PLANET Onco Dose is a standalone software intended to be used with PET or SPECT hybrid imaging systems in order to manage, process, display and analyze nuclear medicine image series, to assist in medical diagnosis, to assist in treatment analysis and in therapy response assessment, and to assist in the contouring of regions of interest for radiotherapy.
PLANET Onco Dose is dedicated to be used by qualified medical professionals in Molecular Imaging, and/or Medical Oncology.
The modalities of these medical imaging systems include CT, MRI, SPECT, PET, XA, RT Struct and RT Dose as supported by the ACR/NEMA DICOM 3 standard format.
PLANET Onco Dose provides the user with the means to segment structures in medical image volumes by providing dedicated delineation, contouring and propagation tools for both tumors and normal tissues (i.e. Regions of Interest (ROI)).
PLANET Onco Dose provides tools to display, co-register (including deformable registration), compute Standardized Uptake Value (SUV) and import / export results (contours and dosimetries) to / from Treatment Planning Systems (TPS) and PACS devices for assessment and response evaluation of patients undergoing a course of oncology treatment.
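SUV is a standard PET quantity: the activity concentration in a region normalized by the injected activity per unit body weight, with the injected activity decay-corrected to the scan time. The formula itself is standard; the snippet below is a minimal, hypothetical illustration with made-up numbers and says nothing about how PLANET Onco Dose computes it internally.

```python
import numpy as np

def suv_bw(activity_conc_bq_ml, injected_bq, body_weight_g,
           uptake_minutes, half_life_minutes):
    """Body-weight SUV (g/mL): tissue activity concentration divided by the
    decay-corrected injected activity per gram of body weight."""
    decayed_injected = injected_bq * np.exp(-np.log(2) * uptake_minutes / half_life_minutes)
    return activity_conc_bq_ml / (decayed_injected / body_weight_g)

# Hypothetical FDG example: 15 kBq/mL lesion uptake, 350 MBq injected,
# 70 kg patient, scanned 60 min after injection (F-18 half-life ~109.8 min).
print(f"SUV ~ {suv_bw(15_000.0, 350e6, 70_000.0, 60.0, 109.8):.2f}")
```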
PLANET Onco Dose provides the user with the means to assist in the assessment and quantification of radiation doses received by tissues as a result of administering a radionuclide (e.g., permanent Yttrium-90 microsphere implants).
PLANET Onco Dose provides tools for post-treatment absorbed dose calculation and evaluation on PET and SPECT images. The following functions are available to allow dose calculations for patients after they have received a treatment using permanent Yttrium-90 (Y90) microspheres:
- 3D liver-lung shunt assessment;
- Local Deposition Model (a minimal sketch of this and of the shunt assessment follows this list);
- Voxel S Value approach based on the schema in MIRD Pamphlet 17 [1];
- Dosimetry based on 90Y-microspheres-PET (or SPECT Bremsstrahlung) series;
- Compatible with PET images acquired with another radioisotope instead of Y90 when Y90 acquisitions are not supported by the scanner (correction of branching ratio and decay parameters).
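Two of the functions above lend themselves to a compact illustration: the lung shunt fraction (ratio of lung counts to lung-plus-liver counts) and the Local Deposition Model, which for a pure beta emitter such as Y-90 converts fully absorbed activity into mean dose using the commonly cited factor of roughly 49.7 Gy·kg per GBq. The sketch below uses these textbook relations with hypothetical numbers; it is not DOSIsoft's implementation.

```python
def lung_shunt_fraction(lung_counts, liver_counts):
    """Lung shunt fraction from planar or SPECT counts (dimensionless)."""
    return lung_counts / (lung_counts + liver_counts)

def local_deposition_dose_gy(activity_gbq, mass_kg):
    """Mean absorbed dose for Y-90 assuming full local absorption of the
    beta energy (commonly cited factor of ~49.7 Gy*kg/GBq)."""
    return 49.7 * activity_gbq / mass_kg

# Hypothetical case: 1.8 GBq administered, 5% lung shunt, 1.6 kg perfused
# liver volume, 1.0 kg lung mass.
lsf = lung_shunt_fraction(lung_counts=5_000, liver_counts=95_000)
liver_dose = local_deposition_dose_gy(activity_gbq=1.8 * (1 - lsf), mass_kg=1.6)
lung_dose = local_deposition_dose_gy(activity_gbq=1.8 * lsf, mass_kg=1.0)
print(f"LSF = {lsf:.1%}, liver dose ~ {liver_dose:.0f} Gy, lung dose ~ {lung_dose:.0f} Gy")
```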
For Y90 microspheres, PLANET Onco Dose cannot be used to prescribe the radionuclide activity to be administered to the patient for the therapy. The user has to provide the parameters (e.g. activity) in order for PLANET Onco Dose to estimate the radiation doses that the tissues received as a result of the administration.
PLANET Onco Dose should only be used for the retrospective determination of dose and not for cases where there is a need for retreatment using Y90 microspheres.
PLANET Onco Dose is a software platform dedicated to medical diagnosis aid, therapy response assessment aid, contouring for radiotherapy and internal dosimetry computation, using molecular imaging modalities.
PLANET Onco Dose is a modular software suite composed of three elements:
- PLANET - Core System: license controller, reviewing of multi-modal molecular image series (PET/CT, SPECT/CT, PET/MRI, SPECT/MRI), fusion and registration;
- PLANET Onco - Oncology Module: contouring of regions of interest, tumor segmentation, tumoral activity monitoring, therapy response assessment;
- PLANET Dose - Dosimetry Module: internal dosimetry computation for Targeted Radionuclide Therapy (TRT).
The provided text describes a 510(k) premarket notification for a medical imaging software called PLANET Onco Dose (K182966). It aims to demonstrate substantial equivalence to a predicate device (Velocity K173636).
However, the document does not contain the specific details about acceptance criteria, the study design (e.g., sample size, data provenance, number of experts for ground truth, adjudication methods, MRMC studies, standalone performance), or the training set information (sample size, ground truth establishment) typically expected for a performance study proving a device meets acceptance criteria.
The "Performance Testing - Bench" section (Item 8) is very high-level and states: "PLANET Onco Dose was submitted to performance, functional and algorithmic testing, risk management assessment and validation activities under clinically representative conditions. The results of performance, functional and algorithmic testing, risk management assessment and validation activities under clinically representative conditions demonstrate the safety and effectiveness of PLANET Onco Dose." This is a general statement of compliance, not a detailed report of a study.
Therefore, I cannot fulfill most of the requested information based on the provided text.
Here's what can be inferred or stated about the acceptance criteria and the "study" based on the very limited information provided:
1. A table of acceptance criteria and the reported device performance:
The document does not explicitly state quantitative acceptance criteria or detailed reported device performance metrics. Instead, it relies on demonstrating substantial equivalence to a predicate device, focusing on similar technological characteristics and functionalities. The comparison table (pages 6-7) lists features and functionalities of PLANET Onco Dose and its predicate, implying that meeting the functionalities of the predicate is a key "performance" aspect.
Since no specific quantitative acceptance criteria or detailed performance data are provided, a table cannot be constructed. The overall "performance" claim is that the device "meets the requirements of the device, its user needs and intended use, which are demonstrated to be substantially equivalent to those of the predicate device."
2. Sample size used for the test set and the data provenance:
- Sample size: Not specified.
- Data provenance: Not specified. The document mentions "clinically representative conditions" for testing, but no details on the origin (e.g., country) or nature (retrospective/prospective) of the data are provided.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Number of experts: Not specified.
- Qualifications of experts: Not specified.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- Adjudication method: Not specified.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- MRMC study: Not mentioned or implied. The focus is on demonstrating substantial equivalence of the software's capabilities, not on human-in-the-loop performance improvement. The software is described as a "standalone software intended to be used with PET or SPECT hybrid imaging systems in order to manage, process, display and analyze nuclear medicine image series, to assist in medical diagnosis, to assist in treatment analysis and in therapy response assessment, to assist in the contouring of regions of interest for radiotherapy." It assists professionals, but there is no mention of a study on how it improves human reader performance.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:
- The document states "PLANET Onco Dose is a standalone software". While it doesn't provide specific isolated algorithm performance metrics (e.g., sensitivity, specificity for a specific task), the "Performance Testing - Bench" section implies that the software's functional and algorithmic performance was evaluated independently to demonstrate its capabilities. However, specific metric-based results are not provided.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc):
- Type of ground truth: Not specified. The nature of the "performance, functional and algorithmic testing" and "validation activities" would typically involve some form of ground truth for evaluation, but the document does not elaborate on how this ground truth was established or what it comprised.
8. The sample size for the training set:
- Training set sample size: Not specified. The document describes a premarket notification for a software, but does not indicate whether it is an AI/ML model that requires a discrete "training set." If it is, no details are provided.
9. How the ground truth for the training set was established:
- Ground truth establishment: Not specified. (See point 8).
In summary, the provided FDA 510(k) clearance letter and summary primarily focus on establishing substantial equivalence based on intended use and technological characteristics compared to a predicate device. It lacks the detailed reporting of performance studies, including acceptance criteria, specific study design parameters (sample sizes, data provenance, expert involvement for ground truth, adjudication), and results, that would be present in a comprehensive study report or a different type of regulatory submission (e.g., De Novo or PMA for novel devices with specific performance claims).
(56 days)
DOSIsoft
ThinkQA is a radiation therapy dosimetry Quality Assurance (QA) device consisting of a software framework intended to contain a suite of modules to verify that radiation dose actually delivered to the patient is as intended.
The Epibeam module contained in ThinkQA is intended to be used as follows:
Epibeam is a standalone software tool independent of the linear accelerator, the TPS and the Record-and-Verify system. It is intended to assist in reducing the clinical risk in the delivery of radiotherapy treatments and does not alter the treatment delivery. It is to be used by a radiation oncology medical professional as a guide to provide pretreatment plan delivery verification.
The software is to be used for the purposes of detecting errors in the delivery of radiation therapy prior to treatment, like corruption of the transferred plan data to the treatment unit, inappropriate multileaf collimator sequence or beam output malfunctioning. The software acquires data from the Electronic Portal Imaging Device (EPID) during a blank fraction dedicated to the pretreatment verification without the patient and subsequently processes it. The processed data is compared with data calculated by the Epibeam system. The comparison is derived on one hand, from the application of dose conversion to the EPID data and on the other hand, from the computation of a predicted dose image under ideal conditions of functioning. A gamma-index analysis is then performed according to the dose difference and distance-to-agreement criteria provided by the user.
Epibeam is not a treatment planning system and cannot be used to generate radiotherapy treatment plans. It provides an independent means of checking the reliability of the dose delivery for each beam in reference to TPS data.
Epibeam therefore provides an added level of treatment quality assurance, thus giving clinicians confidence especially when complex treatment techniques are employed (gantry-fixed and rotational intensity modulated radiation therapy).
Epibeam is intended to support decision making in relation to the delivery of treatment plan to the patient with every clinical linear accelerators equipped with an EPID, but does not alter the existing Indications for Use of the treatment unit.
ThinkQA is a modular software suite composed of the module Epibeam which is a quality assurance tool dedicated to Patient Specific QA for pretreatment verification of irradiation beams.
The EPIbeam verification module integrated into the ThinkQA software platform is a Quality Assurance tool for external beam radiation therapy, used in combination with the electronic portal imaging device (EPID) and dedicated to pre-treatment verification of irradiation beams, particularly for IMRT and VMAT techniques.
The EPIbeam principle is based on the comparison of two images expressed in terms of absolute dose: on the one hand, an RT Plan defined in the TPS is used for the acquisition of a real portal image (test image) with the EPID directly irradiated (without attenuating medium); on the other hand, the same RT Plan is used to compute a theoretical portal image (reference image). Specific models and algorithms are applied to express both images in the same absolute dose terms.
The dose images obtained from the same RT Plan, one by the conversion model of the acquired raw EPID images and the other by the prediction model, can be quantitatively compared through dose difference mappings or 2D gamma-index. Both models are based on dosimetric data provided from the TPS.
Here's a summary of the acceptance criteria and study details for the ThinkQA Epibeam device, based on the provided document:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly present a table of numerical acceptance criteria. Instead, it describes functional requirements and states that "acceptance criteria were met." The performance is generally framed as demonstrating substantial equivalence to the predicate device.
Acceptance Criteria (Implied / Functional) | Reported Device Performance |
---|---|
Pretreatment check functionality | Yes |
Independent software operation | Yes |
Ability to acquire pretreatment images | Yes |
Algorithm for computing predicted reference dose image | Yes |
Algorithm for converting acquired portal image into dose image | Yes |
Comparison of measured and reference dose images via gamma-index analysis | Yes |
Generation of reviewer reports | Yes |
Inclusion of gamma agreement index per beam | Yes |
Inclusion of significant statistic gamma index values per beam | Yes |
Ability to view test and reference images | Yes |
Ability to view superimposed test/reference dose profiles | Yes |
Ability to view 2D gamma index distribution | Yes |
Patient control database integration | Yes |
User-defined Alert Criteria for out-of-tolerance analysis | Yes |
Import Approved Plan data from Treatment Planning System | Yes |
Import Portal Images from pretreatment fraction | Yes |
Automatic offline analysis | Yes |
Support for multiple treatment techniques (Static, IMRT, VMAT) | Yes |
Requirement for EPID panel Calibration for commissioning | Yes |
Use of TPS results for dose data reference | Yes |
Demonstration of substantial equivalence to predicate device (K133572) | Achieved through performance, functional, and algorithmic testing. |
Conformance to applicable technical design specification | Met |
Achievement of safety and effectiveness | Achieved |
Meeting device requirements under normal conditions of use | Met |
2. Sample Size Used for the Test Set and Data Provenance
The document does not specify a distinct "test set" with a particular sample size from real patient data. The validation seems to be based on "clinically representative conditions" and "test cases" rather than a specific patient cohort for a validation study.
- Sample Size for Test Set: Not explicitly stated as a separate patient-based test set. The testing involved "unit, integration and system tests" and "validation of the system under clinically representative conditions."
- Data Provenance: Not specified regarding country of origin or whether it was retrospective or prospective. Given the nature of a software release, it's likely synthetic or internally generated test cases reflecting various clinical scenarios, and potentially retrospective clinical data for "clinically representative conditions."
3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications
The document does not mention the use of external experts to establish a "ground truth" for a specific test set. The ground truth for the device's function appears to be established through:
- Comparison of acquired EPID data with data calculated by the Epibeam system itself, based on TPS plans and prediction models.
- The assumption that the TPS data and the device's prediction model represent the "ideal conditions of functioning" or "reference."
4. Adjudication Method for the Test Set
Not applicable/not mentioned. There's no indication of an adjudication method involving multiple human readers for establishing a ground truth or resolving discrepancies in a test set.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No, an MRMC comparative effectiveness study was not done. The document focuses on the standalone performance and substantial equivalence of the software tool.
6. Standalone (i.e., algorithm only without human-in-the-loop performance) Study
Yes, a standalone study was done. The entire premise of the "Performance Testing - Bench" section describes the testing of the ThinkQA software's functionalities and algorithms independently. The Epibeam module is described as a "standalone software tool independent of the linear accelerator, the TPS and the Record-and-Verify system." The performance testing demonstrates that the software itself "meets the requirements of the device."
7. Type of Ground Truth Used
The ground truth for the comparison performed by the Epibeam module is based on:
- Predicted dose image: Calculated by the Epibeam system under ideal conditions, derived from the RT Plan defined in the Treatment Planning System (TPS).
- Dose conversion of acquired EPID data: The software converts raw EPID images into dose terms.
- TPS data: The models and algorithms used by Epibeam are based on dosimetric data provided by the TPS, which serves as a reference for the planned dose.
Essentially, the "ground truth" for the device's internal comparison is the expected dose distribution as calculated by the validated Treatment Planning System and through the device's own prediction models.
8. Sample Size for the Training Set
The document does not explicitly mention a "training set" or its size. As a "software framework" and a "Quality Assurance tool," its development likely involved conventional software engineering practices, potentially including internal data for model development and calibration, but a specific "training set" like in deep learning models is not detailed.
9. How the Ground Truth for the Training Set Was Established
Since a "training set" is not explicitly mentioned, the method for establishing its ground truth is also not described. The device's foundational data relies on the principles of radiation dosimetry and verified TPS data.
(142 days)
DOSISOFT SA
EPIgray is a software to be used by radiation oncologists and medical physicists to detect errors in the delivery of high energy X-rays during the course of patient treatment. This product verifies if the reconstructed dose, computed by the system, is in agreement with the planned dose given by the treatment planning system. This product uses the measurements performed by an Electronic Portal Imaging Device (EPID). This product uses the prescription computed by the treatment planning system. This product is not used to give a prescription for the radiation therapy.
EPIgray is comprehensive software that allows the user to perform in-vivo dosimetry by means of an imaging device such as an Electronic Portal Imaging Device (EPID). The product is composed of the EPIgray workstation and the In-Vivo Manager tool. The EPIgray workstation is an extension of an already cleared product, the ISOgray planning system (K103146). It uses only two modules of the previously approved system: the Information module and the Exacor module. The In-Vivo Manager software is a web application intended for in-vivo measurement management. In particular, it allows the user to retrieve, in a web browser, the result of the dose reconstruction performed by the EPIgray workstation based on EPID data.
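The core check EPIgray performs is conceptually simple: reconstruct the delivered dose from the EPID signal and flag it when it deviates from the planned TPS dose by more than a user-defined tolerance. The sketch below illustrates only that comparison step with hypothetical numbers; the EPID-to-dose reconstruction model itself is proprietary and not described in this submission.

```python
def in_vivo_check(reconstructed_dose_gy, planned_dose_gy, tolerance_pct=5.0):
    """Compare the EPID-reconstructed dose with the planned dose and flag
    deviations beyond the user-defined tolerance."""
    deviation_pct = 100.0 * (reconstructed_dose_gy - planned_dose_gy) / planned_dose_gy
    status = "OK" if abs(deviation_pct) <= tolerance_pct else "WARNING"
    return deviation_pct, status

# Hypothetical fraction: 2.00 Gy planned at the reference point,
# 1.91 Gy reconstructed from the portal image.
dev, status = in_vivo_check(reconstructed_dose_gy=1.91, planned_dose_gy=2.00)
print(f"deviation = {dev:+.1f}% -> {status}")
```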
The provided text is a 510(k) premarket notification for DOSIsoft's EPIgray software. It outlines the device's description, intended use, and substantial equivalence to a predicate device. However, it explicitly states that clinical trials were not performed, and therefore, an acceptance criteria table and a comprehensive study demonstrating direct device performance against such criteria are not provided in the document.
The document focuses on non-clinical verification and validation testing to ensure the system works according to requirements, rather than a clinical study with detailed performance metrics.
Here's an analysis of the requested information based on the provided text:
1. A table of acceptance criteria and the reported device performance:
This information is not available in the provided text. The document states: "Clinical trials were not performed as part of the development of this product." and "However, algorithm evaluation was performed by Medical Physicists team using measured data in a clinical facility. Evaluation summary is available in tab 13 of this submission." Without access to "tab 13," specific acceptance criteria and reported device performance metrics cannot be tabulated.
2. Sample size used for the test set and the data provenance:
- Sample Size for Test Set: Not explicitly stated. The document mentions "measured data in a clinical facility" was used for algorithm evaluation, but the size of this dataset is not specified.
- Data Provenance: "measured data in a clinical facility" suggests clinical data, likely retrospective, given no clinical trials were performed. The country of origin is not specified, but since the manufacturer is based in France and the evaluation was done by a "Medical Physicists team," it could be from a French clinical facility or an unspecified international setting.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Number of Experts: Not explicitly stated. The evaluation was performed by a "Medical Physicists team," which implies more than one expert, but an exact number is not given.
- Qualifications of Experts: "Medical Physicists team." Their specific experience (e.g., years of experience) is not mentioned.
4. Adjudication method for the test set:
Not applicable/Not mentioned. Since the evaluation involved a "Medical Physicists team" working with "measured data," it's more likely they were assessing algorithmic accuracy against physical measurements/expected dose rather than adjudicating discrepancies in expert interpretations. No specific adjudication method (like 2+1 or 3+1) is described.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
No. The document explicitly states: "Clinical trials were not performed." Therefore, no MRMC comparative effectiveness study was conducted to assess human reader improvement with AI assistance. The intended use of EPIgray is described as a software for error detection and dose reconstruction, which implies a standalone function that outputs warnings, not directly a tool for human readers to improve their diagnostic accuracy in a MRMC setting.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done:
Yes, implicitly. The algorithm evaluation by the Medical Physicists team using measured data in a clinical facility suggests a standalone performance assessment. The device's function is described as using EPID images to reconstruct dose and compare it to the planned dose, activating warnings. This is an algorithm-only function without explicit human-in-the-loop performance being evaluated in this submission.
7. The type of ground truth used:
The ground truth appears to be based on physical measurements and planned dose data. The device reconstructs the dose (based on EPID images) and compares it to the "planned dose given by the treatment planning system." The "algorithm evaluation was performed by Medical Physicists team using measured data," suggesting that these measurements (likely independent dosimetry measurements or precise EPID measurements used as a reference) served as a form of ground truth for assessing the accuracy of the reconstructed dose.
8. The sample size for the training set:
Not mentioned. The document focuses on the evaluation (test) phase and does not provide details regarding the training set's sample size.
9. How the ground truth for the training set was established:
Not mentioned. As the document doesn't discuss the training set, there's no information on how its ground truth was established.
(137 days)
DOSISOFT SA