Search Results
Found 2 results
510(k) Data Aggregation
RADVISION (60 days)
The intended use of the Acoustic MedSystems, Inc. RadVision Dose Planning and Treatment System is to provide patient-specific planning, imaging, and implant or applicator device alignment for treating cancer using radioactive seed implants or HDR afterloader devices. In addition to planning and treatment, RadVision also allows volume and dose calculations, 2D and 3D anatomy and dose visualizations, and post-treatment seed localization.
RadVision is a general purpose brachytherapy planning system used for prospective and confirmation dose calculations for patients undergoing a course of brachytherapy using either temporary or permanent implants of various radioisotopes.
The RadVision system is a brachytherapy dose planning and treatment guidance system. The system consists of a computer, a video capture device, and software tools; the required software tools are installed on the computer. The system can be used for pre-treatment real-time dose planning in the Operating Room (OR) and, for permanent seed implants, for post-implant seed localization assessment and post-implant dose distribution analysis.
The provided document is a 510(k) summary for the RadVision Dose Planning and Treatment System. It focuses on demonstrating substantial equivalence to predicate devices for regulatory clearance and details verification and validation testing of its features. It does not contain information about acceptance criteria or a study proving that the device meets specific performance metrics in a clinical or simulated clinical setting. The document primarily confirms that the software functionalities were tested and passed.
Therefore, for aspects related to "acceptance criteria," "reported device performance," "sample sizes for test and training sets," "data provenance," "number and qualifications of experts," "adjudication method," "MRMC study," "stand-alone study," and "ground truth establishment," the information is not available in the provided text.
Here's a breakdown of what is available and what is not:
1. A table of acceptance criteria and the reported device performance
Not Available. The document states that "RadVision passed all the verification and validation tests successfully," but it does not specify the quantitative or qualitative acceptance criteria for these tests, nor does it provide detailed performance metrics against those criteria. The tests listed are functional and system-level checks rather than performance against pre-defined clinical or dose calculation accuracy thresholds.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
Not Available. The document does not mention a "test set" in the context of clinical data, nor does it refer to any human subjects or patient data. The "verification and validation testing" refers to software and system testing, not a study involving patient data.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
Not Available. As there is no mention of a test set involving human data or expert review, this information is not present.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
Not Available. This is not applicable as no test set with expert adjudication is described.
5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance
Not Available. This document describes a brachytherapy dose planning and treatment system, which is a software tool, not an AI-assisted diagnostic tool that would typically undergo MRMC studies comparing human reader performance. There is no mention of AI or human reader improvement.
6. Whether a standalone study (i.e. algorithm only, without human-in-the-loop performance) was done
Not Available. While the document describes the functions of the software, it does not present a "standalone" study in the typical sense of evaluating algorithm performance on a dataset against ground truth. The "Verification and Validation test procedures" are functional tests of the software system.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
Not Available. No "ground truth" in the clinical-data sense is mentioned or implied for the testing described. The "verification and validation" likely used predetermined correct outputs for the software functions (e.g., correct dose calculations for given inputs, proper display of images); see the illustrative sketch after this list.
8. The sample size for the training set
Not Available. There is no mention of a "training set" or machine learning in the document.
9. How the ground truth for the training set was established
Not Available. There is no mention of a "training set" or associated ground truth.
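For illustration only, here is a minimal sketch of how a verification check against a predetermined correct output might be structured. The dose model (point_source_dose, a simple inverse-square falloff) and the reference value are hypothetical assumptions for the example; the 510(k) summary does not describe RadVision's calculation methods or test procedures at this level of detail.

```python
import math

# Hypothetical illustration only: a toy point-source dose model with simple
# inverse-square falloff. Real brachytherapy planning software uses far more
# detailed formalisms; nothing here is taken from the 510(k) summary.
def point_source_dose(strength_cgy_cm2_per_h: float, r_cm: float) -> float:
    """Dose rate (cGy/h) at distance r_cm from an idealized point source."""
    if r_cm <= 0:
        raise ValueError("distance must be positive")
    return strength_cgy_cm2_per_h / (r_cm ** 2)


def test_dose_at_reference_point() -> None:
    # Predetermined "correct output": under this toy model, a 1.27 cGy*cm^2/h
    # source should produce 0.3175 cGy/h at 2 cm.
    expected = 0.3175
    computed = point_source_dose(1.27, 2.0)
    assert math.isclose(computed, expected, rel_tol=1e-6), (computed, expected)


if __name__ == "__main__":
    test_dose_at_reference_point()
    print("verification check passed")
```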
RADVISION ET DIAGNOSTIC X-RAY SYSTEM (32 days)
This radiographic system is intended for use by a qualified/trained doctor or technician on both adult and pediatric subjects for taking diagnostic radiographic exposures of the skull, spinal column, chest, abdomen, extremities, and other body parts. Applications can be performed with the patient sitting, standing, or lying in the prone or supine position.
This diagnostic x-ray system consists of a tubehead/collimator assembly mounted on a ceiling suspension, along with a generator, generator control, and an elevating x-ray table. Power ratings for the available generators are in the range of 32 kW to 80 kW. Exposure voltage ranges from 40 - 125 kV or 40 - 150 kV, with current of 300 - 100 mA. Exposure time is 1 ms - 10 s.
This 510(k) submission describes a diagnostic X-ray system and does not involve Artificial Intelligence (AI) or machine learning. Therefore, many of the requested criteria, such as those related to AI performance, sample sizes for training/test sets in an AI context, expert ground truth establishment for AI, MRMC studies for AI, or standalone AI performance, are not applicable to this document.
The acceptance criteria and "device performance" described in this document relate to the substantial equivalence of the new device (RADVISION ET) to a predicate device (RADVISION E and RADVISION EU) based on safety and effectiveness.
Here's the information that can be extracted based on the provided text:
1. A table of acceptance criteria and the reported device performance
The acceptance criteria here are implicitly met if the new device is deemed "substantially equivalent" to the predicate device. The performance is compared based on functional and safety characteristics.
| Characteristic | Acceptance Criteria (Predicate Device Performance) | Reported Device Performance (RADVISION ET) |
|---|---|---|
| Intended Use | Intended for diagnostic radiographic exposures of skull, spinal column, chest, abdomen, extremities, and other body parts on adult and pediatric subjects, with patient sitting, standing, or lying in prone/supine position. | SAME (substantially equivalent) |
| Configuration | Column mount | Ceiling suspension (technological difference, deemed not to raise new safety/effectiveness questions) |
| Performance Standard | 21 CFR 1020.30 | SAME (substantially equivalent) |
| Generator | High-frequency generator made by Sedecal | Uses the same generator made by Sedecal (substantially equivalent) |
| Electrical Safety | Electrical safety per IEC 60601, UL listed | SAME (substantially equivalent) |
The "study that proves the device meets the acceptance criteria" is described as:
- "The results of bench and test laboratory indicates that the new device is as safe and effective as the predicate devices."
- "After analyzing bench and external laboratory testing to applicable standards, it is the conclusion of Almana Medical Imaging that the RADVISION ET Diagnostic X-Ray Systems are as safe and effective as the predicate device, have few technological differences, and has no new indications for use, thus rendering them substantially equivalent to the predicate devices."
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
This information is not provided in the document. The submission references "bench and test laboratory" studies without specifying sample sizes for physical testing.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
This is not applicable as this is a traditional medical device submission, not an AI/ML submission requiring expert ground truth for image interpretation.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
This is not applicable as this is a traditional medical device submission.
5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance
This is not applicable as this is a traditional medical device submission, not involving AI assistance.
6. Whether a standalone study (i.e. algorithm only, without human-in-the-loop performance) was done
This is not applicable as this is a traditional medical device submission, not involving an algorithm.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The "ground truth" in this context refers to the safety and effectiveness of the predicate device, established through its existing legal marketing and compliance with standards. The new device is compared to this established benchmark through technical specifications and bench/laboratory testing. There's no specific "ground truth" for diagnostic accuracy in the way it's used for AI algorithms.
8. The sample size for the training set
This is not applicable as this is a traditional medical device submission, not an AI/ML submission.
9. How the ground truth for the training set was established
This is not applicable as this is a traditional medical device submission, not an AI/ML submission.