Search Results
Found 3 results
510(k) Data Aggregation
(197 days)
MOBIUS MEDICAL SYSTEMS, LP
Mobius3D software is used for quality assurance, treatment plan verification, and patient alignment and anatomy analysis in radiation therapy. It calculates radiation dose three-dimensionally in a representation of a patient or a phantom. The calculation is based on read-in treatment plans that are initially calculated by a treatment planning system, and may additionally be based on external measurements of radiation fields from other sources such as linac delivery log data. Patient alignment and anatomy analysis is based on read-in treatment planning images (such as computed tomography) and read-in daily treatment images (such as registered cone beam computed tomography).
Mobius3D is not a treatment planning system. It is only to be used by trained radiation oncology personnel as a quality assurance tool.
Mobius3D is a software product used within a radiation therapy clinic for quality assurance and treatment plan verification. It is important to note that while Mobius3D operates in the field of radiation therapy, it is neither a radiation delivery device (e.g. a linear accelerator), nor is it a treatment planning system (TPS). Mobius3D cannot design or transmit instructions to a delivery device, nor does it control any other medical device. Mobius3D is an analysis tool meant solely for quality assurance (QA) purposes when used by trained medical professionals. Being a software-only QA tool, Mobius3D never comes into contact with patients.
Mobius3D performs dose calculation verifications for radiation treatment plans by doing an independent calculation of radiation dose. Radiation dose is initially calculated by a treatment planning system (TPS), which is a software tool that develops a detailed set of instructions (i.e. a plan) for another system (e.g. a linear accelerator) to deliver radiation to a patient. The dose calculation performed by Mobius3D uses a proprietary collapsed cone convolution superposition (CCCS) algorithm.
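As an illustration of what an independent recalculation check involves, the sketch below compares a TPS dose grid against an independently computed grid voxel by voxel and reports how many voxels agree within a tolerance. This is a minimal stand-in, not the CCCS algorithm or Mobius3D's actual comparison logic; the array names, the normalization to prescription dose, and the 3% tolerance are illustrative assumptions.

```python
import numpy as np

def dose_verification_summary(tps_dose, independent_dose, prescription_dose, tol_pct=3.0):
    """Illustrative dose-verification check (not the Mobius3D/CCCS algorithm).

    Both dose arrays are assumed to be 3D grids (in Gy) on the same voxel grid.
    Differences are expressed as a percent of the prescription dose, and the
    fraction of voxels within tol_pct is reported.
    """
    diff_pct = 100.0 * (independent_dose - tps_dose) / prescription_dose
    return {
        "max_diff_pct": float(np.max(np.abs(diff_pct))),
        "pass_rate": float(np.mean(np.abs(diff_pct) <= tol_pct)),
    }

# Synthetic grids: a uniform 2% offset passes a 3% tolerance everywhere.
tps = np.full((40, 40, 40), 2.0)     # Gy
independent = tps * 1.02             # Gy, 2% higher
print(dose_verification_summary(tps, independent, prescription_dose=2.0))
```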
Mobius3D also performs dose delivery quality assurance for radiation treatment plans by using the measured data recorded in a linear accelerator's delivery log files to calculate a delivered dose. This is presented to the end user in a software component of Mobius3D called MobiusFX. The MobiusFX component is available to users through licensing as an add-on to the core Mobius3D software features.
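MobiusFX reconstructs a full delivered dose from the logged delivery data; the sketch below shows only a much simpler consistency check of the kind such log data supports, comparing planned monitor units per beam against monitor units assumed to have been parsed from a delivery log beforehand. The beam names, dictionary format, and 1% tolerance are assumptions for illustration, not details of MobiusFX.

```python
def mu_delivery_check(planned_mu_by_beam, delivered_mu_by_beam, tol_pct=1.0):
    """Illustrative delivery-log check (not MobiusFX itself).

    Inputs are dicts mapping beam name -> monitor units (MU), with delivered
    MU assumed to have been parsed from the linac's delivery log beforehand.
    Flags any beam whose delivered MU deviates from the plan by > tol_pct.
    """
    flagged = {}
    for beam, planned in planned_mu_by_beam.items():
        delivered = delivered_mu_by_beam.get(beam, 0.0)
        deviation_pct = 100.0 * abs(delivered - planned) / planned
        if deviation_pct > tol_pct:
            flagged[beam] = deviation_pct
    return flagged

# Hypothetical beams: the second deviates by 2% and is flagged.
print(mu_delivery_check({"AP": 120.0, "PA": 100.0}, {"AP": 120.1, "PA": 98.0}))
```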
Compared to the previously cleared Mobius3D v 1.3.2 (K140660), Mobius3D v 2.0.0 contains the additional intended use of performing quality assurance of a patient's alignment and anatomy analysis. This analysis is based on comparison of Cone Beam Computed Tomography (CBCT) images taken immediately before treatment to the images used for treatment planning, which are typically acquired using standard Computed Tomography (CT). This analysis is presented to the end user in an add-on software module within Mobius3D called CBCT Checks.
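A minimal sketch of the kind of comparison such a module makes is shown below: it estimates a rigid translation between the planning image and the daily image from their intensity centers of mass and flags shifts above a threshold. This is not the CBCT Checks algorithm, which would rely on proper image registration; the array names, voxel size, and 3 mm action threshold are illustrative assumptions.

```python
import numpy as np

def positioning_shift_mm(planning_ct, daily_cbct, voxel_size_mm):
    """Illustrative positioning check (not the CBCT Checks algorithm).

    Estimates a rigid translation (in mm) between two co-oriented 3D images
    from their intensity-weighted centers of mass.
    """
    def center_of_mass(img):
        img = np.clip(img, 0, None).astype(float)      # ignore negative values
        coords = np.indices(img.shape).reshape(3, -1)   # voxel indices, shape (3, N)
        weights = img.ravel()
        return coords @ weights / weights.sum()

    shift_voxels = center_of_mass(daily_cbct) - center_of_mass(planning_ct)
    return shift_voxels * np.asarray(voxel_size_mm, dtype=float)

# Synthetic images: a cube of unit intensity shifted by 2 voxels along one axis.
planning = np.zeros((30, 30, 30))
planning[10:20, 10:20, 10:20] = 1.0
daily = np.roll(planning, 2, axis=0)
shift = positioning_shift_mm(planning, daily, voxel_size_mm=(2.0, 2.0, 2.0))
print(shift, "-> notify user" if np.linalg.norm(shift) > 3.0 else "-> within tolerance")
```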
Here's an analysis of the acceptance criteria and the study that proves the device meets them, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance:
The document does not explicitly state acceptance criteria in a quantitative, measurable format (e.g., "sensitivity must be > X%", "accuracy must be > Y%"). Instead, it describes a functional evaluation.
| Acceptance Criteria (Implied) | Reported Device Performance |
| --- | --- |
| Software Functionality | "Software development, verification, and validation have been carried out in accordance with FDA guidelines. The software was tested against the established Software Design Specifications and passed all required tests." (This indicates the software functions as designed according to internal specifications.) |
| Risk Mitigation | "A Risk Management Report was completed which identified and verified the mitigation of all required hazards." (Suggests potential risks identified during development were addressed.) |
| Patient Positioning/Anatomy Analysis (CBCT Module) | "The report demonstrates the Mobius3D CBCT module’s ability to notify users to a potential patient positioning or anatomical difference." (This is the primary functional performance claim for the new feature, indicating its ability to detect the specified differences.) |
2. Sample Size Used for the Test Set and Data Provenance:
- Sample Size: "anonymized data from 22 different patients over 163 different fractions"
- Data Provenance: The document does not specify the country of origin. It states the data was "anonymized," implying real patient data, but it does not clarify whether the data was retrospective or prospective. Given that the evaluation was described as "bench testing" on anonymized data from 22 patients over 163 fractions, it is most likely retrospective real-world patient data.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts:
The document does not explicitly state how many experts were used, or their qualifications, for establishing ground truth in the "Mobius3D CBCT Module Evaluation" study. The study focuses on the module's ability to notify users of differences, not on the module's accuracy against expert consensus. Ground truth in this context implicitly refers to the actual patient positioning or anatomical differences the module was designed to detect; in a clinical setting these would typically be assessed by a medical physicist or radiation oncologist, but this is not detailed.
4. Adjudication Method for the Test Set:
The document does not describe any adjudication method (e.g., 2+1, 3+1). The study's focus, as described, is on the module's ability to notify users of differences, rather than a comparison against an adjudicated "correct" answer for each case.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size:
No, a multi-reader multi-case (MRMC) comparative effectiveness study was not reported in the provided text. The study described focuses on the standalone performance of the CBCT module ("ability to notify users") rather than comparing human readers with and without AI assistance.
6. If a Standalone Study (Algorithm Only Without Human-in-the-Loop Performance) Was Done:
Yes, a standalone study was done for the Mobius3D CBCT module. The "bench testing" described in "Mobius3D CBCT Module Evaluation - Patient Positioning / Anatomical Changes Bench Testing" evaluated the algorithm's ability to identify potential patient positioning or anatomical differences without a human in the loop. The report demonstrates the module's "ability to notify users," indicating that the algorithm itself performs the detection and flagging.
7. Type of Ground Truth Used:
The document does not explicitly state the type of ground truth used for the CBCT module evaluation. However, based on the description ("ability to notify users to a potential patient positioning or anatomical difference"), the implied ground truth would be the actual presence of patient positioning shifts or anatomical changes between the planning CT and daily CBCTs. How this actual presence was definitively established is not detailed (e.g., an expert review of the images themselves, comparison to log files, etc.). It's likely an existing clinical assessment that the software is designed to emulate or flag.
8. Sample Size for the Training Set:
The document does not provide any information regarding the sample size used for the training set for Mobius3D or its CBCT module. It mentions the "proprietary collapsed cone convolution superposition (CCCS) algorithm" for dose calculation, which is a physics-based model, not typically "trained" on a dataset in the same way a machine learning algorithm is. For the CBCT module features, if any machine learning was used, the training data is not disclosed.
9. How the Ground Truth for the Training Set Was Established:
Since no information about a training set is provided, there is no information on how ground truth for a training set was established.
(106 days)
MOBIUS MEDICAL SYSTEMS, LP
DoseLab Pro is quality assurance software intended to be used as part of a dosimetry verification system for linear accelerators. It can be used to import radiation-exposed images from scanned film, other measurement devices, and treatment planning systems to display differences between measured and calculated dose distributions.
DoseLab Pro is a software-only device that uses image analysis to perform radiation oncology quality assurance (QA) as part of a dosimetry verification system. Images are useful in radiation oncology QA because they can be analyzed qualitatively by viewing them and quantitatively using mathematical routines on the data that composes them. A variety of data sets can be analyzed as images in DoseLab Pro. They include radiation dose distributions calculated by treatment planning systems, measured dose distributions from arrays (diode and ion chamber), and radiation-exposed film images.
DoseLab Pro uses numerous built-in image analysis routines that have been developed to perform the tests and meet the standards of the medical physics QA community. In particular, these tools were designed to specifically support completing dose comparisons. Dose comparisons are made between two, two-dimensional images containing radiation dose and spatial information. The first image is exported from the dose calculation of a patient-specific computed treatment plan from treatment planning software, while the second image is the measured dose from delivery of that plan captured by film or a measurement array. DoseLab Pro assists in aligning the images spatially before performing several different comparisons including Gamma analysis and normalization. DoseLab Pro additionally contains tools for image editing, film image import, and film calibration.
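Gamma analysis, as commonly defined in the medical physics literature, combines a dose-difference criterion and a distance-to-agreement (DTA) criterion into a single index per point, with a point passing when γ ≤ 1. The brute-force sketch below, assuming a global 3%/3 mm criterion, illustrates the calculation; it is not DoseLab Pro's implementation.

```python
import numpy as np

def gamma_2d(reference, measured, pixel_mm, dose_tol_pct=3.0, dta_mm=3.0):
    """Brute-force 2D gamma index (illustrative, not DoseLab Pro's routine).

    reference, measured: 2D dose arrays (Gy) on the same grid.
    For each reference pixel, gamma is the minimum over nearby measured pixels
    of sqrt((dose difference / dose tolerance)^2 + (distance / DTA)^2).
    """
    dose_tol = dose_tol_pct / 100.0 * reference.max()   # global dose criterion
    search = int(np.ceil(dta_mm / pixel_mm)) + 1        # search window, in pixels
    ny, nx = reference.shape
    gamma = np.full_like(reference, np.inf, dtype=float)

    for j in range(ny):
        for i in range(nx):
            jlo, jhi = max(0, j - search), min(ny, j + search + 1)
            ilo, ihi = max(0, i - search), min(nx, i + search + 1)
            jj, ii = np.mgrid[jlo:jhi, ilo:ihi]
            dist2 = ((jj - j) ** 2 + (ii - i) ** 2) * pixel_mm ** 2
            ddiff2 = (measured[jlo:jhi, ilo:ihi] - reference[j, i]) ** 2
            gamma[j, i] = np.sqrt(np.min(ddiff2 / dose_tol**2 + dist2 / dta_mm**2))
    return gamma

# Synthetic example: identical distributions pass everywhere (gamma == 0).
ref = 2.0 * np.ones((20, 20))
g = gamma_2d(ref, ref.copy(), pixel_mm=1.0)
print("pass rate:", np.mean(g <= 1.0))
```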
It is important to note that while DoseLab Pro operates in the field of radiation therapy, it is neither a radiation delivery device (e.g. a linear accelerator), nor is it a treatment planning system (TPS). DoseLab Pro is an analysis tool meant solely for quality assurance (QA) purposes when used by trained medical professionals. Being a software-only QA tool, DoseLab Pro never comes into contact with patients.
The provided text is a 510(k) summary for the DoseLab Pro device, a quality assurance software for linear accelerators. It describes the device, its intended use, and claims substantial equivalence to a predicate device based on non-clinical performance data.
Here's a breakdown of the requested information based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly present a table of acceptance criteria with corresponding performance metrics. Instead, it states:
| Criterion | Reported Performance |
| --- | --- |
| Software functionality as per design | All tests passed without defect. |
| Expected behavior and output | Confirmed with known good data for inputs. |
| Device performs quality assurance comparisons between TPS images and measured dose images | Stated as the principal technological characteristic and demonstrated through testing. |
2. Sample size used for the test set and the data provenance
The document does not specify a numerical sample size for the test set. It mentions "known good data for inputs," but the number of such data points is not quantified.
- Sample size for test set: Not specified numerically.
- Data provenance: Not explicitly stated; the data is referred to only as "known good data for inputs," which implies synthesized or carefully selected internal data rather than real-world retrospective or prospective clinical data from a specific country. However, the context of a "dosimetry verification system for linear accelerators" suggests this data would be representative of radiation oncology QA scenarios.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
The document does not mention the involvement of external experts to establish ground truth for the test set. The validation testing was performed "manually on the fully compiled software," implying an internal validation process based on the software's design specifications.
4. Adjudication method for the test set
No adjudication method (e.g., 2+1, 3+1) is mentioned in the document. The testing appears to be a direct comparison of software output against expected output based on "known good data," rather than a consensus-based ground truth establishment.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and the effect size of how much human readers improve with AI vs without AI assistance
A multi-reader multi-case (MRMC) comparative effectiveness study was not done. DoseLab Pro is described as an image analysis tool for quality assurance, not a diagnostic or decision-support AI intended to improve human readers' performance in medical image interpretation (e.g., radiologists interpreting images for disease detection). It calculates and displays differences between measured and calculated dose distributions to aid medical physics QA, a role that would not typically be evaluated with an MRMC study comparing human reader performance.
6. If a standalone study (i.e. algorithm only, without human-in-the-loop performance) was done
Yes, a standalone performance evaluation was done. The "Non-Clinical Performance Data" section describes "Testing involved the use of known good data for inputs into DoseLab Pro and execution of tests designed to confirm expected behavior and expected output." This is a standalone evaluation of the software's functionality and accuracy in performing its intended QA tasks. The software itself is the "algorithm only" in this context, as it's not designed to be a human-in-the-loop diagnostic AI that modifies human reader performance. Its role is to independently carry out calculations and comparisons.
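Validation against "known good data" typically follows a regression-test pattern: the software's output for a fixed input is compared to a previously verified reference within a tolerance. The sketch below shows that pattern in the abstract; the tolerance and the synthetic arrays are assumptions, not details from the submission.

```python
import numpy as np

def check_against_known_good(computed, reference, abs_tol_gy=0.01):
    """Illustrative regression check against a previously verified output.

    Returns True when every value of the computed distribution agrees with
    the known-good reference within abs_tol_gy.
    """
    return bool(np.allclose(computed, reference, atol=abs_tol_gy, rtol=0.0))

# Synthetic stand-in for a known-good reference and a fresh computation.
reference = np.linspace(0.0, 2.0, 100).reshape(10, 10)   # previously verified output (Gy)
computed = reference + 0.005                              # new run, within 0.01 Gy everywhere
print(check_against_known_good(computed, reference))      # True
```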
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The type of ground truth used was based on "known good data" and "expected behavior and expected output." This implies a set of pre-defined, analytically derived, or previously validated outputs for specific inputs, against which the software's results were compared. It's an internal validation against known correct values, not an external ground truth like pathology or expert consensus on patient outcomes.
8. The sample size for the training set
The document does not mention a "training set." DoseLab Pro is described as software with "numerous built-in image analysis routines that have been developed to perform the tests and meet the standards of the medical physics QA community." This suggests that it is rule-based or algorithm-driven software, rather than a machine learning or AI model that requires a distinct training phase with a labeled dataset. Therefore, the concept of a "training set" as typically understood in machine learning is not applicable here.
9. How the ground truth for the training set was established
As there is no mention of a training set, the establishment of ground truth for a training set is not applicable. The software's design and built-in routines are based on established medical physics QA standards.
(102 days)
MOBIUS MEDICAL SYSTEMS, LP
Mobius3D software is used for quality assurance and treatment plan verification in radiation therapy. It calculates radiation dose three-dimensionally in a representation of a patient or a phantom. The calculation is based on read-in treatment plans that are initially calculated by a treatment planning system, and may additionally be based on external measurements of radiation fields from other sources such as linac delivery log data.
Mobius3D is not a treatment planning system. It is only to be used by trained radiation oncology personnel as a quality assurance tool.
Mobius3D is a software product used within a radiation therapy clinic for quality assurance and treatment plan verification. It is important to note that while Mobius3D operates in the field of radiation therapy, it is neither a radiation delivery device (e.g. a linear accelerator), nor is it a treatment planning system (TPS). Mobius3D cannot design or transmit instructions to a delivery device, nor does it control any other medical device. Mobius3D is an analysis tool meant solely for quality assurance (QA) purposes when used by trained medical professionals. Being a software-only QA tool, Mobius3D never comes into contact with patients.
Mobius3D performs dose calculation verifications for radiation treatment plans by doing an independent calculation of radiation dose. Radiation dose is initially calculated by a treatment planning system (TPS), which is a software tool that develops a detailed set of instructions (i.e. a plan) for another system (e.g. a linear accelerator) to deliver radiation to a patient. The dose calculation performed by Mobius3D uses a proprietary collapsed cone convolution superposition (CCCS) algorithm.
Mobius3D also performs dose delivery quality assurance for radiation treatment plans by using the measured data recorded in a linear accelerator's delivery log files to calculate a delivered dose. This is presented to the end user in a software component of Mobius3D called MobiusFX. The MobiusFX component is available to users through licensing as an add-on to the core Mobius3D software features.
This document largely focuses on the regulatory approval (510(k) clearance) of Mobius3D. While it describes the device's function and intended use, it does not provide detailed information about the specific acceptance criteria, the study design, or the performance outcomes that would typically be found in a clinical or validation study report.
Therefore, I cannot provide a complete answer to your request based on the provided text. The requested information regarding acceptance criteria, study details, sample sizes, expert qualifications, adjudication methods, MRMC studies, standalone performance, and ground truth establishment for training sets is not present in this 510(k) summary and FDA letter.
Here's what I can extract and what is missing:
1. Table of Acceptance Criteria and Reported Device Performance:
- Acceptance Criteria: Not explicitly stated in the document. The document mentions "detailed technological characteristics and indications for use presented within the full set of submitted documentation for this 510(k) application support the claim that Mobius3D is substantially equivalent to the predicate devices." This implies that performance criteria were likely benchmarked against predicate devices, but the specific numerical targets are not here.
- Reported Device Performance: Not explicitly stated in the document with specific metrics or values (e.g., accuracy, precision). The document states that Mobius3D "performs dose calculation verifications for radiation treatment plans by doing an independent calculation of radiation dose" and "performs dose delivery quality assurance... by using the measured data." This describes its function, not its quantified performance against acceptance criteria.
2. Sample size used for the test set and the data provenance:
- Sample Size (Test Set): Not mentioned.
- Data Provenance: Not mentioned (e.g., country of origin, retrospective/prospective).
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Not mentioned, as the document does not describe the establishment of ground truth for a test set in the context of device performance validation.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:
- Not mentioned.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, what was the effect size of how much human readers improve with AI vs. without AI assistance:
- Not mentioned. This device is described as a QA tool for radiation therapy, an "analysis tool meant solely for quality assurance (QA) purposes." It is not described as a device that directly assists human readers in interpreting images or making a diagnosis in a way that an MRMC study would typically evaluate for AI image analysis tools.
6. If a standalone (i.e. algorithm only, without human-in-the-loop performance) study was done:
- The device itself is an "algorithm only" software for independent dose calculation and QA. Its performance is inherently "standalone" in its primary function of calculating dose. However, the results of its calculation are "presented to the end user" and used by "trained medical professionals." The document does not provide a specific "standalone performance study" report with metrics like accuracy or precision of its dose calculations, compared to a gold standard. It only describes what it does.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc):
- Not explicitly mentioned. For a dose calculation system, ground truth would typically refer to highly accurate dosimetry measurements or a gold-standard calculation method. The document only states it performs an "independent calculation of radiation dose" using a "proprietary collapsed cone convolution superposition (CCCS) algorithm."
8. The sample size for the training set:
- Not mentioned.
9. How the ground truth for the training set was established:
- Not mentioned.
In summary, the provided text from the 510(k) summary and FDA clearance letter focuses on the regulatory aspects, device description, and indications for use. It lacks the technical and scientific details about validation studies, acceptance criteria, and performance results that your request pertains to. Such information would typically be found in a separate validation report or technical documentation submitted as part of the 510(k) application, but not usually in the public summary or clearance letter itself.