K Number
K233292
Date Cleared
2023-10-27 (28 days)

Product Code
Regulation Number
882.1870
Panel
NE
Reference & Predicate Devices
K212166 (predicate)
Intended Use

The product is used for intraoperative monitoring and testing during surgical procedures to examine neuronal tissue (central and peripheral nervous system) by recording and stimulation.

The product can be used during surgical procedures that justify non-therapeutic clinical use of the following modalities or their combinations:

  • Measurement:
    o Auditory evoked potentials (AEP)
    o Electroencephalography (EEG)
    o Electrocorticography (ECoG)
    o Electromyography (EMG)
    o Somatosensory evoked potentials (SSEP)
    o Motor evoked potentials (MEP)
    o Train of Four (TOF; see the sketch after this list)
  • Stimulation:
    o Transcranial electrical stimulation (TES)
    o Direct cortical and subcortical stimulation (DCS)
    o Direct nerve stimulation (DNS)
    o Transcutaneous intraoperative nerve stimulation (TINS)
    o Direct muscle stimulation (DMS)
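
For context, Train of Four (TOF) quantifies the depth of neuromuscular blockade by delivering four successive stimuli and comparing the evoked muscle responses; the conventional readouts are the number of detectable twitches and the ratio of the fourth to the first twitch amplitude. The sketch below only illustrates that arithmetic; it is not taken from the submission, and the amplitude values and detection threshold are assumed.

```python
def tof_metrics(twitch_amplitudes_mv, detection_threshold_mv=0.05):
    """Compute the Train-of-Four count and ratio from four twitch amplitudes (T1..T4)."""
    if len(twitch_amplitudes_mv) != 4:
        raise ValueError("TOF requires exactly four twitch amplitudes (T1..T4)")
    t1, _, _, t4 = twitch_amplitudes_mv
    # Count how many of the four twitches exceed the detection threshold.
    count = sum(1 for t in twitch_amplitudes_mv if t > detection_threshold_mv)
    # The TOF ratio (T4/T1) is defined only when the first twitch is detectable.
    ratio = t4 / t1 if t1 > detection_threshold_mv else None
    return count, ratio


# Illustrative values only (not from the 510(k)): partial recovery from blockade.
count, ratio = tof_metrics([1.2, 1.0, 0.8, 0.6])
print(count, ratio)  # -> 4 0.5
```
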
Device Description

The ISIS Headboxes and the ISIS Neurostimulator constitute multimodal intraoperative neuromonitoring systems called ISIS IOM Systems. These systems consist of custom stimulation and recording hardware, a standard laptop or desktop personal computer running an off-the-shelf operating system, and operating software called NeuroExplorer. As an option, these systems mount on device carriers or housings tailored for intraoperative use.

The ISIS IOM Systems support the following measurement modalities:

  • Auditory Evoked Potentials (see the averaging sketch after this list)
  • Transcranial and cortical Motor Evoked Potentials
  • Somatosensory Evoked Potentials
  • Free-running and triggered Electromyography
  • Electroencephalography
  • Train of Four
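
The evoked-potential modalities above (AEP, SSEP, MEP) are typically recovered by stimulus-locked signal averaging, which attenuates background activity that is not time-locked to the stimulus. The submission does not describe how NeuroExplorer implements this; the following is only a minimal sketch of the general technique, with the epoch length, trigger handling, and example signal assumed for illustration.

```python
import numpy as np

def average_evoked_response(recording, trigger_samples, epoch_len):
    """Average fixed-length epochs of a single-channel recording, time-locked to stimuli.

    recording       : 1-D array of amplifier samples.
    trigger_samples : sample indices at which stimuli were delivered.
    epoch_len       : number of samples kept after each trigger.
    Averaging across trials cancels activity that is not phase-locked to the
    stimulus, leaving an estimate of the evoked response.
    """
    epochs = [
        recording[t : t + epoch_len]
        for t in trigger_samples
        if t + epoch_len <= len(recording)  # drop epochs truncated at the end
    ]
    return np.mean(epochs, axis=0)


# Illustrative use: a small synthetic response buried in noise over 200 trials.
rng = np.random.default_rng(0)
epoch_len, n_trials = 250, 200
triggers = np.arange(n_trials) * 400
signal = np.zeros(triggers[-1] + epoch_len)
template = 1e-3 * np.sin(2 * np.pi * np.arange(epoch_len) / epoch_len)  # "evoked" waveform
for t in triggers:
    signal[t : t + epoch_len] += template
signal += 5e-3 * rng.standard_normal(signal.shape)  # background noise (illustrative)

avg = average_evoked_response(signal, triggers, epoch_len)  # approximates the template
```
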
AI/ML Overview

The provided document is a 510(k) summary for the ISIS Headboxes and ISIS Neurostimulator, which are intraoperative neuromonitoring systems. It focuses on demonstrating substantial equivalence to a predicate device (K212166) rather than establishing novel acceptance criteria and proving performance against them in a clinical study.

The document states that no additional clinical testing was performed for the subject devices. Instead, the submission relies on bench testing against established standards and internal requirements to demonstrate safety, effectiveness, and performance that is "as well as or better than" the predicate device.

Therefore, many of the requested elements for describing "acceptance criteria and the study that proves the device meets the acceptance criteria" in the context of a de novo AI/ML device (which often involves clinical performance metrics like sensitivity, specificity, etc.) are not directly applicable or available in this document.

Here's an attempt to extract the relevant information based on the provided text, acknowledging the limitations:

1. Table of Acceptance Criteria and Reported Device Performance

Since this submission demonstrates substantial equivalence to a predicate device through bench testing, rather than presenting a de novo AI/ML device with predefined clinical performance metrics, the acceptance criteria relate primarily to adherence to international standards and internal requirements for electrical safety, electromagnetic compatibility (EMC), and software validation.

| Acceptance Criteria Category | Specific Criteria (Implicit or Explicit in Document) | Reported Device Performance |
|---|---|---|
| Biocompatibility | No patient-contact materials | Not applicable |
| Software | Conformance to: FDA guidance "Content of Premarket Submissions for Device Software Functions" (Jun 14, 2023); FDA guidance "Off-the-Shelf Software Use in Medical Devices" (Aug 11, 2023); FDA guidance "General Principles of Software Validation: Final Guidance for Industry and FDA Staff" (Jan 02, 2002); FDA guidance "Content of Premarket Submissions for Management of Cybersecurity in Medical Devices" (Oct 02, 2014); IEC 62304:2006/AMD1:2015 (Medical device software - Software life cycle processes) | Demonstrated compliance with predetermined specifications, applicable guidance documents, and standards. |
| Electrical Safety | Conformance to: IEC 60601-1:2005 + CORR. 1:2006 + CORR. 2:2007 + A1:2012 (or IEC 60601-1:2012 reprint); IEC 80601-2-26:2019 (electroencephalographs); IEC 60601-2-40:2016 (electromyographs and evoked response equipment) | Test results demonstrate compliance with applicable standards. |
| Electromagnetic Compatibility | Conformance to IEC 60601-1-2:2014 | Test results demonstrate compliance with applicable standards. |
| Performance Testing – Bench | Fulfillment of internal requirement specifications for: electrical medical systems; system carrier; amplifier (ISIS Headboxes) and stimulator (ISIS Neurostimulator) modules; operating software (NeuroExplorer) incl. firmware; accessories (adaptor boxes); custom Microsoft® Windows 10 image | Products successfully underwent bench testing, confirming fulfillment of requirements. |
| Human Factors | Demonstrates safe use with no need for further UI improvement | Testing confirms the products are safe to use. |
| Overall Performance | As safe, as effective, and performing as well as or better than the legally marketed predicate device | Demonstrated. |

2. Sample Size Used for the Test Set and the Data Provenance

  • Sample Size: Not applicable in the traditional sense of a clinical test set with patient data. The testing was primarily bench testing and software validation; the "sample size" would be the number of units tested, which is not specified but is implied to be sufficient for type testing.
  • Data Provenance: The data are the results of internal bench tests and software validation activities conducted by the manufacturer (inomed Medizintechnik GmbH) in Germany. The data are retrospective only in the sense that the testing was performed on the developed product against predefined standards; no patient data were involved.

3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts

  • This information is not provided because no clinical test set with expert-established ground truth was used. The ground truth for functional verification would be the expected output or behavior according to engineering specifications and regulatory standards.

4. Adjudication Method for the Test Set

  • Not applicable as there was no clinical test set requiring expert adjudication.

5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

  • No MRMC comparative effectiveness study was done. The device is a neuromonitoring system, not an AI-assisted diagnostic or interpretive tool that would inherently involve "human readers" in that sense. The submission explicitly states: "No additional clinical testing was performed".

6. If standalone (i.e., algorithm-only, without human-in-the-loop) performance testing was done

  • The document implies that the software validation and bench testing constitute a standalone performance evaluation of the device's functional integrity as a system. The software (NeuroExplorer) and hardware components were tested to confirm that they meet their predetermined specifications and comply with relevant standards, independently of human interpretation of clinical outcomes. However, this is not standalone performance of an AI algorithm, but rather functional performance of medical device software and hardware.

7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

  • For the bench and software testing, the "ground truth" would be established by the engineering specifications, international performance standards (e.g., IEC 60601 series), and documented functional requirements of the device. This is a technical (rather than clinical/biological) ground truth.
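
In practice, a technical ground truth of this kind reduces to pass/fail criteria: each measured quantity must fall within a tolerance defined by the specification or the applicable standard. The snippet below is a generic, hypothetical illustration of such a check; the parameter names and tolerance values are assumed and are not drawn from inomed's verification records.

```python
def within_spec(measured, nominal, tolerance_pct):
    """Return True if a measured value lies within a symmetric percentage tolerance."""
    return abs(measured - nominal) <= nominal * tolerance_pct / 100.0


# Hypothetical bench-test style checks (values are illustrative, not from the 510(k)):
assert within_spec(measured=9.9e3, nominal=10e3, tolerance_pct=5)    # amplifier gain
assert within_spec(measured=1.02e-3, nominal=1e-3, tolerance_pct=5)  # stimulator current (A)
```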

8. The sample size for the training set

  • This device is not described as an AI/ML device that requires a "training set" in the context of machine learning model development. Therefore, this question is not applicable. The software development follows a traditional software lifecycle process, not a machine learning training paradigm.

9. How the ground truth for the training set was established

  • Not applicable, as there is no mention of an AI/ML training set.

§ 882.1870 Evoked response electrical stimulator.

(a) Identification. An evoked response electrical stimulator is a device used to apply an electrical stimulus to a patient by means of skin electrodes for the purpose of measuring the evoked response.

(b) Classification. Class II (performance standards).