510(k) Data Aggregation (421 days)
APEX 6
Intended for use for lesioning of neural tissue and for pain management. It is indicated for use in the peripheral nervous system. The APEX 6 is to be used with LCCS electrodes and cannulae and Conmed Thermogard Dispersive Electrodes.
The RF Innovations APEX 6 is a desktop RF lesioning generator used for the lesioning of neural tissue. The APEX 6 is a multi-lesioning, 6-channel portable generator that can provide continuous or pulsed RF output at 460 kHz in monopolar or dual-electrode modes. The device includes sensory and motor stimulation functions to fine-tune electrode placement during procedures. Based on performance testing, the device is designed to connect to FDA-cleared LCCS lesioning probes, which are inserted into patients for lesioning of neural tissue during medical procedures. Device features include a touch-screen monitor incorporating a microprocessor and graphics display for the user interface, as well as self-diagnostics, calibration checks, and recordkeeping functions.
This document is a 510(k) summary for the APEX 6 Lesioning Generator. It outlines the device's characteristics, its intended use, and how it compares to a predicate device (NeuroTherm NT 2000 RF Lesioning Generator) to establish substantial equivalence.
Here's an analysis of the acceptance criteria and the supporting study, based on the provided text:
Acceptance Criteria and Reported Device Performance
| Test | Acceptance Criteria (Implied) | Reported Device Performance |
|---|---|---|
| Lesion Size (comparison to predicate device) | Affected-area differences between the APEX 6 and the predicate device should be minimal (implied: insignificant for clinical equivalence). | "All affected area differences were less than 1 mm." The study confirms "the Apex 6 and the predicate device are substantially equivalent in regards to affected tissue size." |
| Comparison of Treatment Times (comparison to predicate device) | Time to ramp up to set temperature and treatment times for the APEX 6 should be substantially equivalent to the predicate device under worst-case conditions. | "Results demonstrated that the APEX 6 was substantially equivalent in terms of treatment times compared to the predicate device." |
| Design Validation Review | All documented device requirements (Requirement Specification, Traceability Matrix, Software Specifications, Validation Plans, Instructions for Use) must be met. | "The finished unit design review verified that all documented device requirements were met." |
| Every Unit Functional Test | Each manufactured unit must pass a full functional test regimen (voltage checks, program and impedance testing, software testing, main GUI, final testing: impedance/temperature measurement, electrical safety testing) prior to shipment. | "Each unit must pass all tests prior to shipment." |
| Safety (Bench Testing) | Compliance with ANSI/AAMI 60601-1, CAN/CSA-C22.2 No. 60601-1, ANSI/AAMI 60601-2-2, and CAN/CSA-C22.2 No. 60601-2-2. | "Tested to ANSI/AAMI 60601-1:2005 + C1:2009 + A2:2010 + A1:2012, CAN/CSA-C22.2 No. 60601-1:2014 and ANSI/AAMI 60601-2-2:2017, CAN/CSA-C22.2 No. 60601-2-2:2019." The document states compliance implicitly by reporting "Tested to...". |
| Electromagnetic Compatibility (EMC) (Bench Testing) | Compliance with IEC 60601-1-2 / EN 60601-1-2. | "EMC was tested in accordance with IEC 60601-1-2:2014 / EN 60601-1-2:2015 / IEC 60601-2-2." The document states compliance implicitly by reporting "tested in accordance with...". |
| Software Performance (Validation to FDA Moderate Level of Concern) | Software performs as expected per validation to the FDA guidance for a Moderate Level of Concern. | "Software testing supports that the APEX 6 performs as expected. Validation was performed to the FDA Moderate Level of Concern per the FDA software guidance for the Content of Premarket Submissions for Software Contained in Medical Devices." |
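As an illustration of how the lesion-size acceptance check in the table might be scripted during bench testing, the minimal Python sketch below compares hypothetical affected-area measurements for the APEX 6 and the predicate device against the reported < 1 mm criterion. The measurement values, condition labels, and the `within_tolerance` helper are assumptions for illustration only; the 510(k) summary reports just that all affected-area differences were less than 1 mm.

```python
# Hypothetical sketch of a bench-test acceptance check.
# The values below are invented placeholders, not data from the 510(k) submission.

LESION_SIZE_TOLERANCE_MM = 1.0  # acceptance criterion: APEX 6 vs. predicate difference < 1 mm

# (condition, APEX 6 affected-area dimension in mm, predicate dimension in mm)
measurements = [
    ("minimal energy delivery", 4.2, 4.5),
    ("typical energy delivery", 6.8, 7.1),
    ("maximum energy delivery", 9.4, 9.0),
]

def within_tolerance(apex_mm: float, predicate_mm: float,
                     tolerance_mm: float = LESION_SIZE_TOLERANCE_MM) -> bool:
    """Return True if the absolute difference between the two readings is below tolerance."""
    return abs(apex_mm - predicate_mm) < tolerance_mm

if __name__ == "__main__":
    for condition, apex_mm, predicate_mm in measurements:
        diff = abs(apex_mm - predicate_mm)
        status = "PASS" if within_tolerance(apex_mm, predicate_mm) else "FAIL"
        print(f"{condition}: |APEX 6 - predicate| = {diff:.1f} mm -> {status}")
```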
Study Details:
The provided document describes bench testing to demonstrate the substantial equivalence of the APEX 6 to its predicate device, primarily focusing on physical performance characteristics and safety standards. There is no clinical study (human-in-the-loop or standalone AI performance) described for this device, as it is a medical device generator and not an AI/Software as a Medical Device (SaMD).
Here's a breakdown of the specific points requested, based on the provided text:
- A table of acceptance criteria and the reported device performance: (See the table above.)
- Sample size used for the test set and the data provenance:
- Sample Size: The document does not specify exact sample sizes for the "Lesion size" or "Comparison of treatment times" tests. It refers to "minimal, typical, and maximum energy delivery" for lesion size and "worst-case conditions" for treatment times, implying a set of controlled experimental conditions rather than a large clinical "test set" in the common AI/SaMD sense. For "Every unit functional test," the sample size is every unit manufactured.
- Data Provenance: The tests are described as "Bench testing," indicating they were performed in a laboratory or controlled environment. There is no information regarding country of origin or whether it's retrospective or prospective, as it's not a clinical data study.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- This information is not applicable to the type of testing described. The "ground truth" for a device like a radiofrequency lesion generator is based on physical measurements (e.g., lesion dimensions, electrical parameters, time) and established engineering/safety standards. It's not about expert interpretation of medical images or clinical outcomes that would require human expert consensus for ground truth.
- Adjudication method (e.g., 2+1, 3+1, none) for the test set:
- This is not applicable. Adjudication methods are typically used in clinical studies where multiple human readers interpret data, and significant disagreements require a tie-breaking mechanism or consensus. The tests described are bench tests with objective physical measurements.
- If a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI assistance versus without:
- No, an MRMC study was not done. This type of study is relevant for AI/SaMD devices where the AI assists human readers in diagnostic or screening tasks. The APEX 6 is a physical medical device generator, not AI software.
- If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- While the APEX 6 itself performs functions autonomously (e.g., generating RF energy), the "standalone" concept usually applies to AI algorithms whose performance is evaluated independently of human input. No such "algorithm-only" performance study is described here, as the device's function is the generation and control of RF energy, which is evaluated through bench tests against physical parameters and predicate device performance. Software functions were validated, but this is different from a standalone AI performance study.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- The ground truth for the bench tests was based on physical measurements (e.g., linear dimensions in millimeters, time in seconds, electrical parameters like impedance, voltage, current, power) and compliance with established international and national standards (e.g., IEC 60601 series, ANSI/AAMI, CAN/CSA).
- For "Lesion size" and "Comparison of treatment times," the "ground truth" was derived from the performance of the legally marketed predicate device, used as a benchmark for substantial equivalence.
- The sample size for the training set:
- This information is not applicable. The device is not an AI/machine learning model that requires a "training set" in the sense of data used to train a statistical or learning algorithm. Its design and validation rely on engineering principles, product specifications, and comparisons to predicate devices.
- How the ground truth for the training set was established:
- This information is not applicable, as there is no training set for an AI/ML model for this device.