Search Results
Found 2 results
510(k) Data Aggregation
(66 days)
EVEREST BIOMEDICAL INSTRUMENTS CO.
The SNAP II is intended to monitor the state of the brain by data acquisition of EEG signals. A derived measure provided by the SNAP II, the SNAP Index, indicates the patient's brain activity level.
The SNAP II is intended for use under the direct supervision of a licensed healthcare practitioner or by personnel trained in its proper use, within a hospital or medical facility providing patient care.
The Everest Biomedical SNAP II EEG monitor is an Electroencephalograph per 21 CFR 882.1400, and has an intended use consistent with that classification.
The SNAP II is substantially equivalent in design, construction, materials, intended use and performance characteristics to the predicate devices.
The provided FDA submission for the Everest SNAP II EEG Monitor (K660997) does not contain detailed information regarding acceptance criteria or the specific study that proves the device meets such criteria in terms of clinical performance or accuracy of the "SNAP Index."
Instead, the submission primarily focuses on establishing substantial equivalence to a predicate device (Nicolet SNAP EEG Monitor, K020218) by demonstrating similar design, construction, materials, intended use, and performance characteristics. The "performance" mentioned in this context refers to in-vitro testing confirming the device meets "similar performance specifications" to the predicate, likely related to electrical safety, EMC, and basic signal acquisition, rather than a clinical performance study evaluating the accuracy of the SNAP Index.
Therefore, many of the requested details cannot be extracted from the provided text.
Here is a breakdown of what can and cannot be answered based on the input:
1. A table of acceptance criteria and the reported device performance
The document does not specify quantitative acceptance criteria for the "SNAP Index" or overall clinical performance of the device. It only states: "In vitro testing shows that the device meets similar performance specifications as those for the predicate devices."
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Not specified for clinical performance or SNAP Index accuracy. The document focuses on regulatory compliance and equivalence to predicate device specifications without detailing specific performance metrics for the SNAP Index. | "In vitro testing shows that the device meets similar performance specifications as those for the predicate devices." |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
This information is not provided in the document. The submission mentions "in vitro testing" but gives no details about participant sample size, characteristics, or data provenance for any performance evaluation specific to the SNAP Index.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
This information is not provided. The document makes no mention of expert-established ground truth or clinical studies with expert reviewers for the SNAP Index.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
This information is not provided. No adjudication method is described, as there are no details of a clinical test set with expert review.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
An MRMC comparative effectiveness study is not mentioned or implied. The device is an EEG monitor with a derived index, not an AI-assisted diagnostic tool for human readers in the context of the typical MRMC study design.
6. If a standalone study (i.e., algorithm only, without human-in-the-loop performance) was done
A standalone performance study specifically for the "SNAP Index" (algorithm only) is not detailed. The submission primarily focuses on the device's electrical safety and functional equivalence, not the clinical accuracy or performance of the SNAP Index as a standalone measure.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
The type of ground truth used for evaluating the "SNAP Index" is not specified. The document does not describe how the accuracy or clinical utility of the SNAP Index was validated with a ground truth.
8. The sample size for the training set
This information is not provided. The document does not discuss any training sets, implying that the device's development and validation for regulatory purposes did not involve machine learning models with specific training data as might be expected in modern AI submissions.
9. How the ground truth for the training set was established
This information is not provided. As no training set is mentioned, the method for establishing its ground truth is also absent.
(77 days)
EVEREST BIOMEDICAL INSTRUMENTS CO.
The Audioscreener OAE+ABR may be used for patients of all ages, from newborn infants through adults. The Distortion Product Otoacoustic Emissions, Transient Evoked Otoacoustic Emissions, and Auditory Brainstem Response tests are indicated for use in screening individuals for hearing loss for whom behavioral audiometric responses are deemed to be unreliable, such as in infants, young children, and uncooperative or cognitively impaired adults.
The Audioscreener OAE+ABR is an Otoacoustic Emissions and Auditory Brainstem Response testing device to be used in the evaluation of hearing function. This device is essentially the Audioscreener OAE+ABR unit with additional software required to perform a Transient Evoked OAE test in addition to the Distortion Product OAE test.
The provided document is a 510(k) Premarket Notification summary for the AUDIOscreener OAE+ABR device. It primarily focuses on demonstrating substantial equivalence to a predicate device and adherence to various safety and performance standards.
Based on the provided text, there is no specific acceptance criteria table or a study described that proves the device meets specific performance acceptance criteria in the manner an AI/ML clinical study would.
This 510(k) submission does not detail a clinical performance study with acceptance criteria, sample sizes, ground truth establishment, or expert adjudication that would be typical for evaluating the diagnostic accuracy of a new AI/ML device.
Here's an analysis based on the information provided, addressing the requested points:
1. Table of Acceptance Criteria and Reported Device Performance
Not applicable. The document does not describe specific clinical performance acceptance criteria and reported device performance in terms of diagnostic accuracy metrics (e.g., sensitivity, specificity, AUC) for the AUDIOscreener OAE+ABR. The submission focuses on demonstrating substantial equivalence to a predicate device (K001058) and compliance with safety and electrical performance standards.
2. Sample Size Used for the Test Set and Data Provenance
Not applicable. No distinct "test set" for evaluating clinical performance is described. The submission focuses on substantial equivalence to a predicate device and compliance with physical and electrical standards.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
Not applicable. No clinical "ground truth" establishment by experts is described for a performance study.
4. Adjudication Method for the Test Set
Not applicable. No adjudication method is mentioned as there is no described clinical performance test set.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
No. An MRMC comparative effectiveness study is not mentioned. This type of study is more relevant for diagnostic imaging or interpretation devices where human readers' performance with and without AI assistance is evaluated.
6. If a Standalone Study (i.e., Algorithm Only, Without Human-in-the-Loop Performance) Was Done
This concept is not directly applicable to the device described. The AUDIOscreener OAE+ABR is an instrument that performs objective physiological tests (OAE and ABR) to screen for hearing loss. It is not an AI algorithm making diagnostic interpretations independently. Its "performance" would relate to its ability to accurately measure OAEs and ABRs and to appropriately classify results (pass/refer) based on predefined physiological thresholds, which are typically established through scientific literature and clinical practice, not through a standalone algorithm study in this context.
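For context only, a minimal sketch of what a generic SNR-based DPOAE pass/refer rule might look like is shown below. The submission does not disclose the AUDIOscreener's actual decision logic, so the thresholds, band counts, and function names here are illustrative assumptions, not the device's criteria.

```python
# Hypothetical illustration of a generic DPOAE pass/refer rule based on a
# signal-to-noise-ratio (SNR) criterion. NOT the AUDIOscreener's actual
# algorithm; all threshold values are placeholders chosen for illustration.

from dataclasses import dataclass
from typing import List


@dataclass
class DpoaeBand:
    frequency_hz: float        # test frequency (f2)
    emission_db_spl: float     # measured emission level
    noise_floor_db_spl: float  # estimated noise floor at the same frequency


def band_passes(band: DpoaeBand, min_snr_db: float = 6.0) -> bool:
    """A band 'passes' if the emission exceeds the noise floor by the SNR criterion."""
    return (band.emission_db_spl - band.noise_floor_db_spl) >= min_snr_db


def screen_result(bands: List[DpoaeBand], min_passing_bands: int = 3) -> str:
    """Return 'PASS' if enough bands meet the SNR criterion, otherwise 'REFER'."""
    passing = sum(band_passes(b) for b in bands)
    return "PASS" if passing >= min_passing_bands else "REFER"


if __name__ == "__main__":
    example = [
        DpoaeBand(2000, 5.0, -4.0),   # 9 dB SNR -> pass
        DpoaeBand(3000, 2.0, -6.0),   # 8 dB SNR -> pass
        DpoaeBand(4000, -1.0, -8.0),  # 7 dB SNR -> pass
        DpoaeBand(5000, -8.0, -7.0),  # -1 dB SNR -> fail
    ]
    print(screen_result(example))  # -> PASS (3 of 4 bands meet the 6 dB criterion)
```

In practice, such thresholds are drawn from scientific literature and clinical screening protocols rather than from a standalone algorithm validation study, which is consistent with the absence of one in this submission.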
7. The Type of Ground Truth Used
Not applicable in the context of a diagnostic accuracy study. The "ground truth" for OAE and ABR tests is typically defined by the physiological presence or absence of a response within certain parameters, often correlated with behavioral audiometry. However, the 510(k) does not describe a study to establish this.
8. The Sample Size for the Training Set
Not applicable. This device is not an AI/ML algorithm that requires a training set in the conventional sense.
9. How the Ground Truth for the Training Set was Established
Not applicable.
Summary of the Document's Approach to Acceptance Criteria and Study:
The 510(k) summary for the AUDIOscreener OAE+ABR primarily relies on:
- Predicate Device Equivalence: Stating that the device is "similar in its intended use to predicate devices and existing methodologies" (specifically mentioning K001058).
- Compliance with Standards: Listing adherence to various international (UL, CSA, IEC) and national (ANSI, FDA guidance) safety and performance standards for medical electrical equipment, audiometers, and evoked response equipment. These standards themselves contain specifications that could be considered "acceptance criteria" for the physical and electrical performance of the device, rather than its clinical diagnostic accuracy in a study setting.
The absence of a detailed clinical performance study with acceptance criteria, sample sizes, and ground truth establishment indicates that the review focused on the device's technical specifications, safety, and functional equivalence to already marketed devices, rather than a de novo clinical validation of its diagnostic accuracy as one might see for novel AI/ML diagnostics.