Search Results — 510(k) Data Aggregation
Found 2 results

VHA Radiotherapy Bolus (64 days)
VHA DEAN
The VHA Radiotherapy Bolus product is a device that will be placed on the skin of a patient as a radiotherapy accessory intended to help control the radiation dose received by the patient. VHA Radiotherapy Boluses are designed by radiation therapy professionals for a unique patient and are intended to modify the shape of a beam from a radiation therapy source. The VHA Radiotherapy Bolus product must be verified and approved by the radiation therapy professional prior to use on a patient. The VHA Radiotherapy Bolus is intended for patients of all ages receiving radiotherapy treatment.
VHA Radiotherapy Bolus was evaluated using 6 MV photons and 9 MeV electrons but has not been assessed for use with protons or with orthovoltage X-rays.
Boluses are used in external beam radiation therapy (EBRT) to change the depth of the radiation dose delivered, thereby overcoming the skin-sparing effect. Using clinical treatment planning software (TPS) and clinical expertise, a radiotherapy clinician designs the bolus to conform with the patient anatomy. The bolus is produced using additive manufacturing in a soft elastomeric material to conform to the patient's skin. The bolus is placed on the patient and verified for fit and acceptance to the clinical treatment plan prior to initiating treatment.
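To make the depth-dose mechanism concrete, here is a minimal back-of-the-envelope sketch of how a bolus shifts the dose peak toward the skin: the slab's water-equivalent thickness (physical thickness times relative electron density) is subtracted from the depth of maximum dose. The elastomer density and the nominal 6 MV d_max below are illustrative assumptions, not figures from the submission.

```python
# Illustrative only: rough depth-dose shift from a bolus slab.
# All numeric inputs are assumptions for this sketch, not 510(k) data.

def water_equivalent_thickness(physical_thickness_cm: float,
                               relative_electron_density: float) -> float:
    """Approximate water-equivalent thickness (WET) of a bolus slab."""
    return physical_thickness_cm * relative_electron_density

bolus_thickness_cm = 1.0      # hypothetical patient-specific slab
rel_electron_density = 1.02   # placeholder; a TPS uses CT-derived values

wet = water_equivalent_thickness(bolus_thickness_cm, rel_electron_density)

# For 6 MV photons, depth of maximum dose in water is roughly 1.5 cm;
# the bolus effectively pulls that peak toward the skin surface.
d_max_cm = 1.5
effective_d_max_cm = max(d_max_cm - wet, 0.0)

print(f"WET: {wet:.2f} cm; dose peak moves from {d_max_cm:.1f} cm "
      f"to ~{effective_d_max_cm:.2f} cm below the skin")
```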
The provided text (P0-P6) is a 510(k) Premarket Notification from the FDA regarding the "VHA Radiotherapy Bolus" device. It focuses primarily on demonstrating substantial equivalence to a predicate device (VSP Bolus, K214093) rather than on a study of algorithm performance against acceptance criteria. The device described is a patient-specific physical bolus created through 3D printing for radiotherapy, not an AI/ML algorithm.
Therefore, many of the requested details, such as sample size for test/training sets, number of experts for ground truth, adjudication methods, MRMC studies, standalone algorithm performance, and how ground truth was established for training data, are not applicable to this document because the submitted device is a physical medical device, not a software algorithm requiring such clinical study designs.
However, I can extract information related to the acceptance criteria and performance testing that was conducted for this physical device.
Here's the analysis based on the provided text:
1. A table of acceptance criteria and the reported device performance
The document does not provide a quantitative table of acceptance criteria with specific performance metrics (e.g., accuracy, sensitivity, specificity) for the device itself. Instead, it states that the device's performance was evaluated through "Simulated use testing" and that "All acceptance criteria for performance testing were met." The nature of these acceptance criteria appears to be qualitative or based on successful functionality within the simulated environment.
Criteria Category | Acceptance Criteria (Stated) | Reported Device Performance |
---|---|---|
Performance Testing | Demonstrated safety based on current industry standards. Functionality in simulated use. | "Simulated use testing was completed for clinically relevant cases using both electron and photon radiation therapy. All acceptance criteria for performance testing were met." "The VHA Radiotherapy Bolus was deemed fit for clinical use by radiation therapy professionals." |
Biocompatibility | Compliance with ISO 10993-1, ISO 10993-5, and ISO 10993-10 standards. Biocompatible for intact skin contact. | "All acceptance criteria for biocompatibility were met and the testing adequately addresses biocompatibility for the output devices and their intended use." Data leveraged from predicate device due to identical materials and manufacturing. |
2. Sample size used for the test set and the data provenance
- Sample Size for Test Set: Not specified. The document mentions "clinically relevant cases" for simulated use testing but does not provide a number or details about these cases.
- Data Provenance: Not applicable in the traditional sense for an AI/ML model's test set. The "testing" refers to physical performance and biocompatibility of the 3D-printed bolus.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
Not explicitly stated or applicable for a physical device. The device is "designed by radiation therapy professionals" and "must be verified and approved by the radiation therapy professional prior to use on a patient." The "ground truth" for its performance would implicitly be its ability to correctly modify dose distribution per a treatment plan, which is verified by radiation therapy professionals.
4. Adjudication method for the test set
Not applicable as it's not a study involving human readers or AI output adjudication. The verification is done by a "radiation therapy professional" for the physical bolus.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
No. This is not an AI/ML device. Therefore, no MRMC comparative effectiveness study was conducted or is relevant.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done
Not applicable. This is a physical device, not an algorithm. Its function is to be placed on a patient.
7. The type of ground truth used
The "ground truth" for this device's performance is its ability to accurately alter the radiation dose distribution as intended by the radiation therapy professional's treatment plan. This is verified indirectly through "simulated use testing" and the requirement for verification by a "radiation therapy professional" via a CT scan prior to first treatment. It's not a 'ground truth' in the context of diagnostic imaging outcomes (e.g., pathology, clinical outcomes).
8. The sample size for the training set
Not applicable. This is a physical device, not an AI/ML algorithm that requires a training set.
9. How the ground truth for the training set was established
Not applicable. This is a physical device, not an AI/ML algorithm that requires a training set and associated ground truth establishment.
OMF ASP System (157 days)
VHA DEAN
The OMF ASP System is intended for use as a software system and image segmentation system for the transfer of imaging information from a medical scanner such as a CT based system. The input data file is processed by the OMF ASP System, and the result is an output data file that may then be provided as digital models or used as input to an additive manufacturing portion of the system that produces physical outputs including anatomical models, templates, and surgical guides for use in maxillofacial surgery. The OMF ASP System is also intended as a pre-operative software tool for simulating/evaluating surgical treatment options.
The Oromaxillofacial Advanced Surgical Planning (OMF ASP) System utilizes Commercial Off-the-Shelf (COTS) software to manipulate 3D medical images (CT-based systems) with surgeon input, and to produce digital and physical patient specific outputs including surgical plans, anatomic models, templates, and surgical guides for planning and performing maxillofacial surgeries.
The system utilizes medical imaging, such as CT-based imaging data of the patient's anatomy, to create surgical plans with input from the physician to inform surgical decisions and guide the surgical procedure. The system produces a variety of patient-specific outputs for the maxillofacial region, including anatomic models (physical and digital), physical surgical templates and/or guides, and patient-specific case reports. The system utilizes additive manufacturing to create the patient-specific guides and anatomical models.
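As background for the workflow the description alludes to (CT volume in, printable patient-specific geometry out), here is a generic, minimal sketch of one common approach: threshold the CT at a bone-like Hounsfield level and extract a surface mesh with marching cubes. This is not the OMF ASP System's implementation, which relies on COTS software; the scikit-image call, threshold, and synthetic volume below are assumptions for illustration.

```python
# Generic CT -> bone surface mesh sketch (NOT the OMF ASP System's code).
import numpy as np
from skimage import measure

def ct_to_bone_mesh(volume_hu: np.ndarray,
                    voxel_spacing_mm: tuple,
                    bone_threshold_hu: float = 300.0):
    """Threshold a CT volume (Hounsfield units) at an assumed bone level
    and extract a triangulated isosurface via marching cubes."""
    verts, faces, normals, _ = measure.marching_cubes(
        volume_hu.astype(np.float32),
        level=bone_threshold_hu,
        spacing=voxel_spacing_mm,  # scales vertex coordinates to mm
    )
    return verts, faces, normals

# Synthetic stand-in for a CT series (a real workflow would load DICOM).
rng = np.random.default_rng(0)
volume = rng.normal(loc=0.0, scale=50.0, size=(64, 64, 64))
volume[20:44, 20:44, 20:44] = 1000.0  # fake "bone" block

verts, faces, _ = ct_to_bone_mesh(volume, voxel_spacing_mm=(1.0, 1.0, 1.0))
print(f"{len(verts)} vertices, {len(faces)} triangles ready for STL export")
```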
The provided text describes the OMF ASP system and its substantial equivalence to a predicate device but does not contain the detailed information necessary to fully answer all aspects of your request regarding acceptance criteria and the study proving it.
Specifically, the document states:
- "All acceptance criteria for design validation were met."
- "All acceptance criteria for performance testing were met."
- "All acceptance criteria for software verification testing were met."
- "All acceptance criteria for the cleaning validation were met."
- "All acceptance criteria for the steam sterilization validation were met."
- "All acceptance criteria for biocompatibility were met and the testing adequality addresses biocompatibility for the output devices and their intended use."
However, it does not explicitly state what those specific acceptance criteria were (e.g., quantifiable metrics like accuracy, sensitivity, specificity, or specific error margins for measurements). It also does not provide the reported device performance in measurable terms against those criteria.
Therefore, I can only provide a partial answer based on the available information.
1. A table of acceptance criteria and the reported device performance
Based on the provided text, specific quantifiable acceptance criteria and reported device performance (e.g., specific accuracy percentages, dimensions, etc.) are not detailed. The document only broadly states that "All acceptance criteria... were met" for various validation tests.
Acceptance Criteria Category | Nature of Acceptance Criteria (as implied) | Reported Device Performance (as implied) |
---|---|---|
Design Validation | Device designs conform to user needs and intended use for maxillofacial surgeries, identical to predicate in indications, design envelope, worst-case configuration, and post-processing conditions. | All acceptance criteria were met. |
Performance Testing | Manufacturing process assessment; operator repeatability within the digital workflow; digital and physical outputs verified against design specifications. | All acceptance criteria were met. |
Software Verification | Compliance with FDA guidance for software in medical devices (Moderate Level of Concern). | All acceptance criteria were met. |
Cleaning Validation | Bioburden, protein, and hemoglobin levels within acceptable limits post-cleaning (in accordance with AAMI TIR 30). | All acceptance criteria were met. |
Sterilization Validation | Sterility assurance level (SAL) of 10^-6 achieved for a dynamic-air-removal cycle (in accordance with ISO 17665-1; see the worked example after this table). | All acceptance criteria were met. |
Biocompatibility Testing | Compliance with ISO 10993-1, -5, -10, -11 requirements for biological safety. | All acceptance criteria were met. |
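On the sterilization row above: an SAL of 10^-6 (at most one surviving microorganism per million devices) is typically demonstrated with "overkill" arithmetic relating bioburden, the target SAL, and the cycle's D-value. A minimal sketch follows; the bioburden and D-value are hypothetical, since the 510(k) summary reports neither.

```python
# Illustrative SAL 10^-6 overkill arithmetic; inputs are hypothetical.
import math

initial_bioburden = 1.0e6   # assumed CFU per device (overkill convention)
target_sal = 1.0e-6         # at most one survivor per million devices
d_value_min = 1.5           # assumed minutes per 1-log kill at cycle temp

log_reductions = math.log10(initial_bioburden) - math.log10(target_sal)
exposure_min = log_reductions * d_value_min

print(f"{log_reductions:.0f}-log reduction needed -> "
      f"~{exposure_min:.1f} min exposure at the assumed D-value")
```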
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
The document mentions "Cases used for testing were representative of the reconstruction procedures within the subject device's intended use" for performance testing, but does not specify the sample size used for any test sets, nor the data provenance (country of origin, retrospective/prospective nature).
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
The document does not provide any information regarding the number or qualifications of experts used to establish ground truth for any test sets.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
The document does not provide any information regarding an adjudication method for test sets.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
A multi-reader multi-case (MRMC) comparative effectiveness study comparing human readers with and without AI assistance was not conducted or reported. The document states, "No clinical data were provided in order to demonstrate substantial equivalence." The device is described as a "software system and image segmentation system for the transfer of imaging information" and a "pre-operative software tool for simulating/evaluating surgical treatment options"; it is primarily a planning and manufacturing aid, not an AI diagnostic tool that assists human readers in interpretation, which is the use case that would typically necessitate an MRMC study.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
The document describes the OMF ASP System as utilizing "Commercial Off-the-Shelf (COTS) software to manipulate 3D medical images (CT-based systems) with surgeon input," and notes that its primary function is to "produce digital and physical patient specific outputs including surgical plans, anatomic models, templates, and surgical guides." It also mentions "operator repeatability within the digital workflow" during performance testing. This suggests a human-in-the-loop process for surgical planning and model generation rather than a standalone algorithm evaluation. The "performance testing" that verified digital and physical outputs against design specifications may imply some level of standalone assessment, but it is not explicitly described as an "algorithm only" performance study.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The document does not explicitly state the type of ground truth used for any of the validation processes. It refers to "design specifications" against which outputs were verified, but the origin of these specifications (e.g., derived from expert consensus, anatomical measurements, etc.) is not detailed.
8. The sample size for the training set
The document does not mention or provide any information about a training set or its sample size. This is consistent with the description of the system using "Commercial Off-the-Shelf (COTS) software" rather than a custom-developed AI algorithm that would typically require a specific training phase.
9. How the ground truth for the training set was established
Since no training set is mentioned (see point 8), this information is not applicable and not provided.