K Number
K173690
Device Name
Grass TWin
Date Cleared
2018-03-09

(98 days)

Product Code
Regulation Number
882.1400
Panel
NE
Reference & Predicate Devices
Intended Use

This software is intended for use by qualified research and clinical professionals with specialized training in the use of EEG and PSG recording instrumentation for the digital recording, playback, and analysis of physiological signals. It is suitable for digital acquisition, display, comparison, and archiving of EEG potentials and other rapidly changing physiological parameters.

Device Description

The Natus Medical Incorporated (Natus) DBA Excel-Tech Ltd. (XLTEK) Grass® TWin® (Grass TWin) is a comprehensive software program intended for Electroencephalography (EEG), Polysomnography (PSG), and Long-term Epilepsy Monitoring (LTM). TWin is designed to be powerful and flexible while remaining easy and efficient for day-to-day use. Grass TWin is a software product only and does not include any hardware.

AI/ML Overview

This document is a 510(k) summary for the Grass TWin, a software program intended for Electroencephalography (EEG), Polysomnography (PSG), and Long-term Epilepsy Monitoring (LTM). The focus of the provided text is on demonstrating the device's substantial equivalence to a predicate device and its compliance with regulatory standards for software and usability.

Based on the provided text, the Grass TWin software does not appear to have detailed acceptance criteria or a specific study proving device performance against those criteria in the sense of a clinical performance study with metrics such as sensitivity, specificity, or accuracy. Instead, the "performance testing" described focuses on software verification and validation and on bench testing for compliance with predetermined specifications and regulatory standards.
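
To make "verification against predetermined specifications" concrete, here is a minimal sketch of what one such software verification test might look like. It is purely illustrative and not taken from the submission; the specification constants and the `acquire_epoch` function are hypothetical stand-ins.

```python
import unittest

# Hypothetical values standing in for "predetermined specifications";
# neither the names nor the numbers come from the Grass TWin submission.
SPEC_SAMPLE_RATE_HZ = 256
SPEC_AMPLITUDE_RANGE_UV = (-500.0, 500.0)


def acquire_epoch(duration_s: float, sample_rate_hz: int) -> list:
    """Stand-in for a signal-acquisition call; returns a flat zero trace."""
    return [0.0] * int(duration_s * sample_rate_hz)


class TestPredeterminedSpecifications(unittest.TestCase):
    def test_epoch_length_matches_configured_rate(self):
        # Spec check: a 10-second epoch must contain duration * rate samples.
        epoch = acquire_epoch(10.0, SPEC_SAMPLE_RATE_HZ)
        self.assertEqual(len(epoch), 10 * SPEC_SAMPLE_RATE_HZ)

    def test_samples_fall_within_display_range(self):
        # Spec check: every sample must lie inside the display amplitude range.
        lo, hi = SPEC_AMPLITUDE_RANGE_UV
        for sample in acquire_epoch(1.0, SPEC_SAMPLE_RATE_HZ):
            self.assertTrue(lo <= sample <= hi)


if __name__ == "__main__":
    unittest.main()
```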

Here's an analysis of the information, addressing your requests based only on the provided text:

1. A table of acceptance criteria and the reported device performance

The document does not explicitly present a table of quantitative "acceptance criteria" and "reported device performance" in terms of clinical outcomes or diagnostic accuracy. Instead, the acceptance criteria are framed as compliance with internal requirements and regulatory standards for software development, usability, and safety.

Acceptance Criteria Category: Software Development
Reported Device Performance: "The Grass TWin software was designed and developed according to a robust software development process, and was rigorously verified and validated." "Results indicate that the Grass TWin software complies with its predetermined specifications, the applicable guidance documents, and the applicable standards." (referencing FDA guidance documents and IEC 62304:2006)

Acceptance Criteria Category: Usability
Reported Device Performance: "The Grass TWin was verified for performance in accordance with internal requirements and the applicable clauses of the following standards: IEC 60601-1-6:2010, Am1:2013, Medical electrical equipment - Part 1-6: General requirements for basic safety and essential performance - Collateral standard: Usability; IEC 62366:2007, Am1:2014, Medical devices - Application of usability engineering to medical devices." "Results indicate that the Grass TWin complies with its predetermined specifications and the applicable standards."

Acceptance Criteria Category: Safety & Effectiveness
Reported Device Performance: "Verification and validation activities were conducted to establish the performance and safety characteristics of the device modifications made to the Grass TWin. The results of these activities demonstrate that the Grass TWin is as safe, as effective, and performs as well as or better than the predicate devices."

2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

The document does not describe a "test set" in the context of clinical data or patient samples. The performance evaluation focuses on software verification/validation and bench testing. Therefore, information about sample size, data provenance, or whether it was retrospective/prospective is not applicable as described in this document.

3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

This information is not provided because the performance testing described is not based on a clinical test set requiring expert-established ground truth.

4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

This information is not provided because the performance testing described is not based on a clinical test set requiring expert adjudication.

5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI versus without AI assistance

The document does not mention an MRMC comparative effectiveness study, nor does it refer to AI or assistance for human readers. The device is software for recording, playback, and analysis of physiological signals, not an AI-driven interpretive tool.

6. If standalone (i.e., algorithm-only, without human-in-the-loop) performance testing was done

While the Grass TWin is "software only" and can be considered a standalone algorithm in that it performs its functions without direct hardware integration, the performance evaluation documented here describes its compliance with specifications and standards, not a specific standalone clinical performance study with metrics like sensitivity or specificity. The "Indications for Use" explicitly state it is "intended for use by qualified research and clinical professionals with specialized training," implying human-in-the-loop operation.

7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

The concept of "ground truth" as typically used in clinical performance studies (e.g., against pathology or expert consensus) is not directly applicable to the performance testing described. The "truth" against which the software was evaluated was its predetermined specifications and compliance with regulatory standards (e.g., correct operation of software features, adherence to usability principles).
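
To make the distinction concrete: in this kind of testing the "expected" output comes from the arithmetic the specification prescribes, not from expert labels or pathology. Below is a minimal sketch using a generic bipolar-montage derivation as the specified behavior; the channel names and the `derive_bipolar` function are hypothetical and not from the submission.

```python
# A bipolar montage derives each displayed trace as the difference of two
# referential channels (e.g., Fp1-F3). The "ground truth" here is the
# subtraction the specification prescribes, not a clinical label.

def derive_bipolar(referential, pairs):
    """Return per-sample differences for each (active, reference) pair."""
    return {
        f"{a}-{r}": [x - y for x, y in zip(referential[a], referential[r])]
        for a, r in pairs
    }

# Verification against the specification: Fp1-F3 must equal Fp1 minus F3,
# sample by sample (values in microvolts, chosen arbitrarily).
referential = {"Fp1": [10.0, 12.0, 11.0], "F3": [4.0, 5.0, 6.0]}
derived = derive_bipolar(referential, [("Fp1", "F3")])
assert derived["Fp1-F3"] == [6.0, 7.0, 5.0]
```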

8. The sample size for the training set

The document does not refer to a "training set" for an algorithm, as it describes a software application that is verified and validated rather than trained using machine learning.

9. How the ground truth for the training set was established

As there is no mention of a training set, this information is not provided.

In summary, the provided document describes a regulatory submission for software (Grass TWin) that demonstrates substantial equivalence by focusing on:

  • Technology Comparison: Showing direct equivalence in intended use and technological characteristics with a predicate device, noting minor differences that do not raise new questions of safety or effectiveness (e.g., operating system, and additional features such as the PTT Trend Option and the Montage Editor Summation Feature; a generic sketch of a summation montage follows this list).
  • Software Verification and Validation: Adherence to robust software development processes and compliance with general FDA guidance documents and international standards (IEC 62304 for software lifecycle processes, IEC 60601-1-6 and IEC 62366 for usability).
  • Bench Performance Testing: Verification against internal requirements and applicable standards, specifically for usability.
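
On the Montage Editor Summation Feature named above: in general, a summation montage derives a displayed trace as a weighted sum of source channels, for example showing C3 against the average of two neighboring electrodes. The sketch below is a generic illustration under that assumption, not a description of the actual Grass TWin implementation; all channel names and weights are hypothetical.

```python
# Generic montage summation: a derived trace is a weighted per-sample sum of
# source channels. Example: C3 referenced to the average of F3 and P3,
# i.e., C3 - 0.5*F3 - 0.5*P3.

def summation(channels, weights):
    """Weighted per-sample sum over the named source channels."""
    n = len(next(iter(channels.values())))
    return [sum(w * channels[name][i] for name, w in weights.items())
            for i in range(n)]

channels = {"C3": [8.0, 9.0], "F3": [2.0, 4.0], "P3": [6.0, 2.0]}
trace = summation(channels, {"C3": 1.0, "F3": -0.5, "P3": -0.5})
assert trace == [4.0, 6.0]  # 8 - 1 - 3 = 4; 9 - 2 - 1 = 6
```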

The detailed clinical performance metrics you've asked about, typical for diagnostic or AI-assisted devices, are not present in this 510(k) summary. Given the device's nature (recording, playback, and analysis software for existing physiological signals, rather than a novel diagnostic algorithm) and the context of a substantial equivalence submission, such metrics were likely not required.

§ 882.1400 Electroencephalograph.

(a) Identification. An electroencephalograph is a device used to measure and record the electrical activity of the patient's brain obtained by placing two or more electrodes on the head.

(b) Classification. Class II (performance standards).