Search Results
Found 2 results
ISIS Headboxes, ISIS Neurostimulator, ISIS Xpert Plus, ISIS Xpert, ISIS Xpress
The product is used for intraoperative monitoring and testing during surgical procedures to examine neuronal tissue (central and peripheral nervous system) by recording and stimulation.
The product can be used during surgical procedures that justify non-therapeutic clinical use of the following modalities or their combinations:
- Measurement:
  - Auditory evoked potentials (AEP)
  - Electroencephalography (EEG)
  - Electrocorticography (ECoG)
  - Electromyography (EMG)
  - Somatosensory evoked potentials (SSEP)
  - Motor evoked potentials (MEP)
  - Train of Four (TOF)
- Stimulation:
  - Transcranial electrical stimulation (TES)
  - Direct cortical and subcortical stimulation (DCS)
  - Direct nerve stimulation (DNS)
  - Transcutaneous intraoperative nerve stimulation (TINS)
  - Direct muscle stimulation (DMS)
The ISIS Headboxes and the ISIS Neurostimulator constitute multimodal intraoperative neuromonitoring systems called ISIS IOM Systems. These systems consist of custom stimulation and recording hardware, a standard laptop or desktop personal computer running an off-the-shelf operating system, and operating software called NeuroExplorer. As an option, these systems mount on device carriers or housings tailored for intraoperative use.
The ISIS IOM Systems support the following measurement modalities:
- Auditory Evoked Potentials
- Transcranial and cortical Motor Evoked Potentials
- Somatosensory Evoked Potentials
- Free-running and triggered Electromyography
- Electroencephalography
- Train of Four
The provided document is a 510(k) summary for the ISIS Headboxes and ISIS Neurostimulator, which are intraoperative neuromonitoring systems. It focuses on demonstrating substantial equivalence to a predicate device (K212166) rather than establishing novel acceptance criteria and proving performance against them in a clinical study.
The document states that no additional clinical testing was performed for the subject devices. Instead, the submission relies on bench testing against established standards and internal requirements to demonstrate safety, effectiveness, and performance that is "as well as or better than" the predicate device.
Therefore, many of the requested elements for describing "acceptance criteria and the study that proves the device meets the acceptance criteria" in the context of a de novo AI/ML device (which often involves clinical performance metrics like sensitivity, specificity, etc.) are not directly applicable or available in this document.
Here's an attempt to extract the relevant information based on the provided text, acknowledging the limitations:
1. Table of Acceptance Criteria and Reported Device Performance
Since this submission demonstrates substantial equivalence to a predicate device through bench testing, rather than establishing pre-defined clinical performance metrics for a de novo AI/ML device, the acceptance criteria relate primarily to adherence to international standards and internal requirements for electrical safety, electromagnetic compatibility (EMC), and software validation.
| Acceptance Criteria Category | Specific Criteria (Implicit or Explicit in Document) | Reported Device Performance |
|---|---|---|
| Biocompatibility | No patient contact materials | Not applicable |
| Software | Conformance to: FDA guidance "Content of Premarket Submissions for Device Software Functions" (Jun 14, 2023); FDA guidance "Off-the-shelf software use in medical devices" (Aug 11, 2023); FDA guidance "General principles of software validation: Final guidance for industry and FDA staff" (Jan 02, 2002); FDA guidance "Content of premarket submissions for management of cybersecurity in medical devices" (Oct 02, 2014); IEC 62304:2006/AMD1:2015, Medical device software - Software life cycle processes | Demonstrated compliance with predetermined specifications, applicable guidance documents, and standards. |
| Electrical Safety | Conformance to: IEC 60601-1:2005 + CORR. 1:2006 + CORR. 2:2007 + A1:2012 (or IEC 60601-1:2012 reprint); IEC 80601-2-26:2019 (electroencephalographs); IEC 60601-2-40:2016 (electromyographs and evoked response equipment) | Test results demonstrate compliance with applicable standards. |
| Electromagnetic Compatibility | Conformance to IEC 60601-1-2:2014 | Test results demonstrate compliance with applicable standards. |
| Performance Testing – Bench | Fulfillment of internal requirement specifications for: electrical medical systems; system carrier; amplifier (ISIS Headboxes) and stimulator (ISIS Neurostimulator) modules; operating software (NeuroExplorer) incl. firmware; accessories (adaptor boxes); custom Microsoft® Windows 10 image | Products successfully underwent bench testing, confirming fulfillment of requirements. |
| Human Factors | Demonstrates safety and no need for further user-interface improvement | Testing confirms the products are safe to use. |
| Overall Performance | As safe, as effective, and performs as well as or better than the legally marketed predicate | Demonstrated. |
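
The "Performance Testing – Bench" row above refers to fulfillment of internal requirement specifications. As a minimal sketch of what such a check can look like, the snippet below compares hypothetical measured amplifier characteristics against hypothetical limit values; the parameter names and numbers are illustrative placeholders and are not taken from inomed's specifications or from the 510(k).

```python
# Minimal sketch of a bench-verification check: measured values vs. specification limits.
# All names and numbers are hypothetical placeholders, not inomed requirement specifications.

SPEC_LIMITS = {
    "amplifier_noise_uVrms": ("<=", 1.0),      # hypothetical upper limit, µV RMS
    "sampling_rate_Hz":      (">=", 20000.0),  # hypothetical lower limit, Hz
    "ad_resolution_bits":    (">=", 16.0),     # hypothetical lower limit, bits
}

MEASURED = {
    "amplifier_noise_uVrms": 0.8,
    "sampling_rate_Hz": 20000.0,
    "ad_resolution_bits": 24.0,
}

def check_spec(name: str, value: float) -> bool:
    """Return True if a measured value satisfies its specification limit."""
    op, limit = SPEC_LIMITS[name]
    return value <= limit if op == "<=" else value >= limit

if __name__ == "__main__":
    for name, value in MEASURED.items():
        status = "PASS" if check_spec(name, value) else "FAIL"
        print(f"{name}: measured {value} -> {status}")
```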
2. Sample Size Used for the Test Set and the Data Provenance
- Sample Size: Not applicable in the traditional sense of a clinical test set with patient data. The testing consisted primarily of bench testing and software validation; a "sample size" here would refer to the number of units tested, which is not specified but is implied to be sufficient for type testing.
- Data Provenance: The data are the results of internal bench tests and software validation activities conducted by the manufacturer (inomed Medizintechnik GmbH) in Germany. The notions of retrospective or prospective clinical data collection do not apply, since the testing was performed on the finished product against predefined standards.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts
- This information is not provided because no clinical test set with expert-established ground truth was used. The ground truth for functional verification would be the expected output or behavior according to engineering specifications and regulatory standards.
4. Adjudication Method for the Test Set
- Not applicable as there was no clinical test set requiring expert adjudication.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
- No MRMC comparative effectiveness study was done. The device is a neuromonitoring system, not an AI-assisted diagnostic or interpretive tool that would inherently involve "human readers" in that sense. The submission explicitly states: "No additional clinical testing was performed".
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done
- The document implies that the software validation and bench testing constitute a standalone evaluation of the device's functional integrity as a system. The software (NeuroExplorer) and the hardware components were tested to confirm that they meet their predetermined specifications and comply with relevant standards, independently of any human interpretation of clinical outcomes. However, this is functional performance testing of medical device software and hardware, not "algorithm-only" standalone performance of an AI model.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- For the bench and software testing, the "ground truth" would be established by the engineering specifications, international performance standards (e.g., IEC 60601 series), and documented functional requirements of the device. This is a technical (rather than clinical/biological) ground truth.
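
As a worked illustration of such a technical criterion, the common-mode rejection ratio (CMRR), one of the amplifier characteristics compared elsewhere in these summaries, is derived from measured differential-mode and common-mode gains as CMRR_dB = 20·log10(A_diff / A_cm). The gain values and the minimum requirement in the sketch below are hypothetical, not figures from the submission.

```python
import math

# CMRR in decibels from measured gains: CMRR_dB = 20 * log10(A_diff / A_cm).
# Both gain values and the minimum requirement are hypothetical, for illustration only.
differential_gain = 10000.0  # hypothetical differential-mode gain
common_mode_gain = 0.05      # hypothetical common-mode gain

cmrr_db = 20.0 * math.log10(differential_gain / common_mode_gain)
print(f"CMRR = {cmrr_db:.1f} dB")  # ~106.0 dB for these example values

REQUIRED_CMRR_DB = 100.0     # hypothetical minimum requirement
print("PASS" if cmrr_db >= REQUIRED_CMRR_DB else "FAIL")
```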
8. The sample size for the training set
- This device is not described as an AI/ML device that requires a "training set" in the context of machine learning model development. Therefore, this question is not applicable. The software development follows a traditional software lifecycle process, not a machine learning training paradigm.
9. How the ground truth for the training set was established
- Not applicable, as there is no mention of an AI/ML training set.
ISIS Headboxes, ISIS Neurostimulator, ISIS Xpert Plus, ISIS Xpert, ISIS Xpress
ISIS Headbox 5042xx products:
The products are intended for intraoperative neuromonitoring; for recording of electrophysiological signals and stimulating of nerve and muscle tissues.
The products are intended for use in the operating room to measure and display the electrical signals generated by muscle, peripheral nerves and the central nervous system. The products support the clinical application of Electroencephalography (EEG), Electromyography (EMG), Somatosensory Evoked Potentials (SEP), Motor Evoked Potentials (MEP), and Auditory Evoked Potentials (AEP).
The products are not intended for monitoring life-sustaining functions.
ISIS Neurostimulator 504180:
The ISIS Neurostimulator is intended for provision of neurophysiological stimulation when used in surgical procedures and for diagnostics. It is suitable for continuous operation and can be used in the following fields:
- Transcranial electrical stimulation (TES)
- Direct cortical stimulation (DCS)
- Direct nerve stimulation (DNS)
- Transcutaneous electrical nerve stimulation (TNS)
- Direct muscle stimulation (DMS)
The ISIS Headboxes and the ISIS Neurostimulator constitute multimodality intraoperative neuromonitoring systems called ISIS IOM Systems. These systems consist of custom stimulation and recording hardware, a standard laptop or desktop personal computer running an off-the-shelf operating system, and operating software called NeuroExplorer. As an option, these systems mount on device carriers or housings tailored for intraoperative use.
The ISIS IOM Systems support the following measurement modalities:
- Auditory Evoked Potentials
- Transcranial and cortical Motor Evoked Potentials
- Somatosensory Evoked Potentials
- Free-running and triggered Electromyography
- Electroencephalography
- Train of Four
The provided text describes a 510(k) summary for inomed Medizintechnik GmbH's "ISIS Headboxes and ISIS Neurostimulator" (ISIS IOM Systems: ISIS Xpert®, ISIS Xpert® Plus, ISIS Xpress). The document is a premarket notification to the FDA intended to demonstrate substantial equivalence to a legally marketed predicate device (Cadwell Industries, Inc. Cascade Intraoperative Monitor, K162199).
Crucially, this document does not contain acceptance criteria or a study proving device performance in the way a clinical study for a new diagnostic or AI-driven imaging device would. Instead, it focuses on demonstrating substantial equivalence to a predicate device through engineering performance testing (bench testing, electrical safety, EMC, software validation) and a comparison of technical specifications and intended uses.
Therefore, I cannot populate all sections of your requested table and provide information on aspects like sample size for test sets, ground truth establishment, expert adjudication, or MRMC studies, as these types of studies were explicitly not performed for this 510(k) submission. The document states: "No additional clinical testing was performed for the ISIS Headboxes and ISIS Neurostimulator... Therefore, this section does not apply."
Here's the information that can be extracted relevant to your request:
1. Table of Acceptance Criteria and Reported Device Performance
Since this is a substantial equivalence submission relying on engineering performance and comparison to a predicate, the "acceptance criteria" are primarily adherence to relevant electrical safety and electromagnetic compatibility (EMC) standards, and meeting predetermined specifications during bench testing and software validation. The "reported device performance" is the successful compliance with these standards and specifications.
| Acceptance Criteria Type | Specific Criteria / Standard | Reported Device Performance | Notes |
|---|---|---|---|
| Software Validation | Compliance with predetermined specifications; adherence to FDA guidance "The content of premarket submissions for software contained in medical devices" (May 11, 2005); FDA guidance "Off-the-shelf software use in medical devices" (Sep 27, 2019); FDA guidance "General principles of software validation: Final guidance for industry and FDA staff" (Jan 02, 2002); FDA guidance "Content of premarket submissions for management of cybersecurity in medical devices" (Oct 02, 2014); IEC 62304:2006, Medical device software - Software life cycle processes | "Test results demonstrate that the inomed NeuroExplorer Software complies with its predetermined specifications, the applicable guidance documents, and standards." | The software (NeuroExplorer) is categorized as a "MODERATE level of concern" software. |
| Electrical Safety | Compliance with IEC 60601-1:2005 (Third Edition) + CORR. 1:2006 + CORR. 2:2007 + A1:2012 (or IEC 60601-1:2012 reprint); IEC 80601-2-26:2019 (electroencephalographs); IEC 60601-2-40:2016 (electromyographs and evoked response equipment) | "Test results demonstrate that the products comply with the applicable standards." | The document compares specific technical aspects like A/D resolution, hardware bandpass, sampling frequency, notch filter, CMRR, and amplifier noise to the predicate device, indicating equivalent or compliant performance with relevant standards. |
| Electromagnetic Compatibility (EMC) | Compliance with IEC 60601-1-2:2014 | "Test results demonstrate that the products comply with the applicable standards." | |
| Bench/Performance Testing | Fulfillment of requirements formulated at multiple levels: electrical medical systems, system carrier, amplifier (Headboxes) and stimulator (Neurostimulator) modules, operating software (NeuroExplorer) incl. firmware, and accessories; assessment of the influence of human factors on safety | "The products successfully underwent the bench testing to confirm the fulfillment of the requirements at these levels as part of the verification and validation process." "The testing of the influence of human factors on the devices demonstrates that the products are safe to use and that no further improvement of the user interface design relating to safety is necessary." | This testing is internal to the manufacturer's verification and validation process. The specific quantitative requirements are not detailed in this public summary but are typically part of internal design specifications. |
| Biocompatibility | N/A | N/A | The devices do not have patient contact materials; therefore, testing was not applicable. |
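
The Notes column above lists a notch filter among the amplifier characteristics compared to the predicate. As an illustrative sketch of how a notch filter's behavior can be verified in software, the snippet below designs a 50 Hz notch filter and evaluates its magnitude response at the notch and at nearby frequencies; the sampling rate, quality factor, and frequencies are hypothetical placeholders, not values from the submission.

```python
import numpy as np
from scipy import signal

# All parameters below are hypothetical placeholders, not values from the submission.
fs = 20000.0           # hypothetical sampling frequency, Hz
f_notch = 50.0         # mains interference frequency to suppress, Hz
quality_factor = 30.0  # hypothetical notch quality factor

# Design the 50 Hz notch filter.
b, a = signal.iirnotch(f_notch, quality_factor, fs)

# Evaluate the magnitude response at the notch and at nearby frequencies.
eval_freqs = [10.0, 49.0, 50.0, 51.0, 100.0]
_, h = signal.freqz(b, a, worN=eval_freqs, fs=fs)
for f, resp in zip(eval_freqs, h):
    gain_db = 20.0 * np.log10(max(abs(resp), 1e-12))  # floor avoids log of zero
    print(f"{f:6.1f} Hz: {gain_db:7.1f} dB")
```

For these example parameters the response stays near 0 dB away from 50 Hz and drops sharply at the notch, which is the behavior a bench or software verification of mains-interference suppression would look for.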
2. Sample Size Used for the Test Set and the Data Provenance
This information is not applicable and not provided in the document. As stated, "No additional clinical testing was performed for the ISIS Headboxes and ISIS Neurostimulator... Therefore, this section does not apply." The testing described are engineering verification and validation tests, not clinical studies with patients or data sets in the typical sense for AI/diagnostic devices.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts
This information is not applicable. The performance testing was based on engineering standards and internal requirements, not expert-established ground truth from clinical data.
4. Adjudication Method for the Test Set
This information is not applicable. There was no clinical test set requiring adjudication in the context of this 510(k) submission.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and If So, the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance
No MRMC comparative effectiveness study was done. This device is an intraoperative neuromonitoring system, not an AI-assisted diagnostic tool that would typically involve human readers interpreting AI output.
6. If a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Evaluation Was Done
This information is not applicable. This device is an intraoperative neuromonitoring system, not primarily an algorithm performing a standalone diagnostic task. Its function is to measure and display electrophysiological signals and provide stimulation.
7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.)
The ground truth or reference for device performance was established through engineering standards, technical specifications, and internal functional requirements. For example, electrical outputs meet specified ranges, and input signals are recorded with specified fidelity according to IEC standards. This is not clinical ground truth (e.g., pathology report, expert diagnosis).
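
To make this kind of technical reference concrete, the sketch below checks hypothetical measured stimulator output currents against a hypothetical specified range and accuracy tolerance; the numbers are placeholders and are not drawn from the submission or from IEC 60601-2-40.

```python
# Hypothetical specified output range and tolerance for a constant-current stimulator channel.
# These numbers are illustrative placeholders, not inomed or IEC 60601-2-40 values.
SPEC_MIN_MA = 0.0
SPEC_MAX_MA = 100.0
TOLERANCE_PCT = 5.0  # hypothetical allowed deviation from the set value

# Hypothetical bench measurements: (set current, measured current) in mA.
measurements = [(1.0, 1.02), (10.0, 9.8), (50.0, 50.9), (100.0, 99.1)]

for set_ma, measured_ma in measurements:
    in_range = SPEC_MIN_MA <= measured_ma <= SPEC_MAX_MA
    deviation_pct = abs(measured_ma - set_ma) / set_ma * 100.0
    ok = in_range and deviation_pct <= TOLERANCE_PCT
    print(f"set {set_ma:6.1f} mA, measured {measured_ma:6.2f} mA, "
          f"deviation {deviation_pct:4.1f}% -> {'PASS' if ok else 'FAIL'}")
```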
8. The Sample Size for the Training Set
This information is not applicable. The document does not describe the use of an AI algorithm that requires a training set in the context of a machine learning model for diagnosis or interpretation. The software validation refers to standard software development practices, not AI model training.
9. How the Ground Truth for the Training Set Was Established
This information is not applicable, as there was no mention of a training set for an AI algorithm.