Found 85 results

510(k) Data Aggregation

    K Number: K242954
    Manufacturer:
    Date Cleared: 2024-12-19 (85 days)
    Product Code:
    Regulation Number: 882.1900
    Reference & Predicate Devices:

    Why did this record match?
    510(k) Summary Text (full-text search):

    K242954
    Trade/Device Name: Integrity V500 ("Integrity", "Integrity with VEMP")
    Regulation Number: 21 CFR 882.1900 (Stimulator, Auditory, Evoked Response)
    Device Class: Class II (according to 21 CFR 882.1900)

    Intended Use

    The Integrity with VEMP provides testing that is intended to assist in the assessment of vestibular function. The target patient population ranges from school-age children to geriatric adults who can complete the testing tasks. The Integrity with VEMP is intended to be used by a variety of professionals, such as clinical practitioners specialized in balance disorders, audiologists, and otolaryngologists (ENT doctors), with prior medical and scientific knowledge of the VEMP procedure.

    Device Description

    VEMP (Vestibular Evoked Myogenic Potential) is a short-latency response generated either from the sternocleidomastoid muscle (cVEMP) or the inferior oblique muscle (oVEMP), typically evoked by high-level acoustic or vibratory stimulation. VEMP is a non-invasive test used in the diagnosis of vestibular disorders such as superior canal dehiscence, Ménière's disease, and vestibular neuritis, among others. The VEMP test can be performed in clinics (ENT/audiology) and hospitals, provided that the users have adequate knowledge and background about the underlying process of VEMP and the recording of auditory evoked responses.

    The Integrity V500 is indicated for auditory evoked potential testing. "Integrity with VEMP" (this submission) adds the VEMP test modality to the Integrity V500. The Integrity V500 records auditory evoked responses of the auditory system, while Integrity with VEMP records auditory evoked responses of the vestibular system. Since the vestibular system is connected to the auditory system, a loud stimulus to the auditory system simultaneously stimulates the vestibular system; however, the evoked signals are measured at different electrode sites. Notably, the methods of stimulation and data acquisition are the same for the VEMP modality as for the ABR modality (one of the Integrity V500's auditory evoked testing modalities), with the differences being the electrode montage and the patient's physical state. As such, it is also possible to obtain a VEMP response while testing in the Integrity V500's ABR modality.

    Integrity with VEMP is a PC-based device which uses the same hardware and similar software as the Integrity V500 for evoking the stimulus, collecting the response, processing the data, and displaying the outcome on the screen. The hardware consists of a VivoLink (patient interface device), air- and bone-conduction transducers (to stimulate the vestibular system), a CV-Amp (bio-amplifier), and a computer. The response from the muscle is picked up by a bio-amplifier attached with electrodes to the neck (for cVEMP) or under the eyes (for oVEMP) and to the forehead. The data is pre-processed in the VivoLink and then transferred to the PC via a Bluetooth connection for full processing using the algorithm designed for VEMP processing. Throughout the test, the processed sweeps are filtered using the filter setting defined by the user, and the averaged response is shown on the screen.

    The primary diagnostic component of a VEMP measurement is the comparison of the amplitude of the primary peak of the VEMP response between the right and left sides, expressed as an asymmetry ratio, where a significant amount of asymmetry is indicative of a vestibular disorder. The amplitude of the vestibular response peaks also depends on the contraction level of the muscles involved in the recording. To keep the muscular response from biasing the result, it is important to have equal contraction levels on both sides; generating the equivalent neck contractions needed for cVEMP is especially physically challenging. To overcome this issue, two key features were added to the Integrity with VEMP compared to the Integrity V500: a biofeedback EMG monitor (which displays real-time muscular activity) and VEMP response normalization (scaling based on muscular contraction levels). The EMG monitor can be used as guidance for clinicians and patients on the level of muscle contraction. Normalization automatically scales the recorded sweeps based on the energy of the corresponding EMG, which helps to compensate for imbalanced contraction between the two sides.
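The asymmetry-ratio comparison and EMG-based normalization described above can be sketched as follows. The ratio is the formula commonly used in the VEMP literature; `emg_normalize` is a hypothetical illustration of RMS-based sweep scaling, not the device's disclosed algorithm.

```python
import numpy as np

def asymmetry_ratio(amp_a, amp_b):
    """VEMP asymmetry ratio in percent (standard literature formula).

    A large ratio suggests a unilateral vestibular deficit; clinical
    cut-offs vary and are not specified in this 510(k) summary.
    """
    return 100.0 * abs(amp_a - amp_b) / (amp_a + amp_b)

def emg_normalize(sweeps, emg):
    """Hypothetical EMG normalization: scale each sweep by the RMS of its
    concurrent EMG so unequal contraction levels between sides are
    compensated (illustrative only; the device's actual scaling is not
    disclosed in the summary)."""
    sweeps = np.asarray(sweeps, dtype=float)
    emg = np.asarray(emg, dtype=float)
    rms = np.sqrt(np.mean(emg ** 2, axis=1))   # per-sweep contraction level
    return sweeps / rms[:, None]

# Example: right/left P1-N1 amplitudes of 200 uV and 150 uV
ar = asymmetry_ratio(200.0, 150.0)   # about 14.3 %
```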

    AI/ML Overview

    Here's an analysis of the acceptance criteria and the study proving the device meets them, based on the provided text:

    Acceptance Criteria and Reported Device Performance

    The acceptance criteria are not explicitly listed as a table of thresholds to be met, but rather are inferred from the clinical studies' conclusions regarding substantial equivalence and test reliability. The reported device performance is presented in tables comparing the device's measurements to a predicate device and to literature norms, as well as its reliability metrics against industry standards.

    Acceptance Criterion 1 (inferred): Substantial equivalence to the predicate device (Interacoustics Eclipse, K162037).

    Reported device performance against the predicate (Study 1), cVEMP vs. Eclipse (K162037):
    • P1 [ms]: Integrity VEMP (15.6 ± 1.3, 15.8 ± 1.4) vs. Eclipse (14.7 ± 1.2, 15.1 ± 1.5). Both within literature norms (11-17 ms). ICC: 0.72 (Right), 0.91 (Left).
    • N1 [ms]: Integrity VEMP (23.5 ± 1.6, 23.9 ± 1.5) vs. Eclipse (23.6 ± 1.4, 23.7 ± 1.8). Both within literature norms (18-27 ms). ICC: 0.95 (Right), 0.80 (Left).
    • P1-N1 [µV]: Integrity VEMP (209.6 ± 105.3, 198.7 ± 106) vs. Eclipse (195.5 ± 54.4, 168.8 ± 48.8). Both within literature norms (28-300 µV). ICC: 0.67 (Right), 0.71 (Left).

    oVEMP vs. Eclipse (K162037):

    • N1 [ms]: Integrity VEMP (11.2 ± 0.8, 11.4 ± 0.9) vs. Eclipse (10.2 ± 1.1, 10.4 ± 1.1). Both within literature norms (9-19 ms). ICC: 0.63 (Right), 0.78 (Left).
    • P1 [ms]: Integrity VEMP (15.9 ± 1.7, 16.3 ± 1.6) vs. Eclipse (15.2 ± 1.7, 15.2 ± 1.6). Both within literature norms (12-22 ms). ICC: 0.89 (Right), 0.76 (Left).
    • N1-P1 [µV]: Integrity VEMP (17.4 ± 8.5, 17.3 ± 8.3) vs. Eclipse (16.7 ± 10.6, 14.1 ± 6.6). Both within literature norms (2-45 µV). ICC: 0.90 (Right), 0.91 (Left).

    Conclusion (Study 1): Both systems' results are consistent with the literature, and statistical analysis shows good to excellent reliability; the substantial-equivalence criteria were met.

    Reported device performance for reliability and reproducibility (Study 2). Test-retest reliability (one session):

    • cVEMP (Air Conduction): Mean (μ) = 0.92, Median = 0.94, Std Dev = 0.05, IQR = 0.07. (Meets industry std: Mean ≥ 0.879, IQR ≤ 0.111)
    • oVEMP (Air Conduction): Mean (μ) = 0.89, Median = 0.89, Std Dev = 0.05, IQR = 0.08. (Meets industry std)
    • cVEMP (Bone Conduction): Mean (μ) = 0.92, Median = 0.92, Std Dev = 0.05, IQR = 0.12. (Mean meets, IQR slightly high but deemed acceptable due to bone conduction variability)

    Test-Retest Reproducibility (Two Sessions 24-72 hrs apart):

    • cVEMP (Air Conduction): Mean (μ) = 0.92, Median = 0.93, Std Dev = 0.05, IQR = 0.08. (Meets industry std: Mean ≥ 0.879, IQR ≤ 0.111)
    • oVEMP (Air Conduction): Mean (μ) = 0.89, Median = 0.87, Std Dev = 0.06, IQR = 0.11. (Meets industry std)

    Conclusion (Study 2): Reliability and reproducibility values are similar to those of other industry products (per the Eclipse K162037 510(k) document); good test-retest reliability and reproducibility are concluded.

    Acceptance Criterion 2 (inferred): Test reliability and reproducibility comparable to industry standards (supported by the Study 1 and Study 2 results above).

    1. Sample sizes used for the test set and the data provenance:

    • Study 1 (Substantial Equivalence):
      • cVEMP: 13 normal adult participants.
      • oVEMP: 9 adult participants.
      • Data Provenance: Not explicitly stated (e.g., country of origin, prospective/retrospective). Implied to be prospective for the purpose of this comparative study.
    • Study 2 (Reliability & Reproducibility):
      • US site: 33 normal hearing adults for cVEMP (AC); 17 normal hearing adults for oVEMP (AC).
      • Canada Site: 9 school-age normal hearing children for cVEMP (BC); 8 pre-school age normal hearing children for cVEMP (AC).
      • Data Provenance: Prospective, collected from two sites (US and Canada).

    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • The ground truth in this context is established by the physiological measurements themselves, not by expert interpretation of images or other subjective data. The "normal adult/hearing" participant selection implies a pre-qualification based on typical physiological parameters, but no specific "expert" panel is described as establishing or adjudicating individual VEMP responses for ground truth. The comparison is against established literature norms and a predicate device's measured performance.

    3. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

    • Not applicable as this is a device performance study based on objective physiological measurements (latencies, amplitudes) rather than subjective assessments requiring expert consensus or adjudication. The data analysis involved statistical comparisons (mean, standard deviation, ICC, IQR).
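The statistics named above (mean, standard deviation, ICC, IQR) can be reproduced with a short script. This is a generic sketch of the Shrout & Fleiss ICC(2,1) form applied to made-up latency numbers; the submission does not state which ICC variant was actually computed.

```python
import numpy as np

def icc_2_1(data):
    """Two-way random-effects, absolute-agreement ICC(2,1).

    `data` is an (n_subjects, k_raters) array, e.g. P1 latencies measured
    on each subject by the subject device and by the predicate.
    Sketch of the standard Shrout & Fleiss formula.
    """
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)        # per-subject means
    col_means = data.mean(axis=0)        # per-device means
    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_total = np.sum((data - grand) ** 2)
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical P1 latencies [ms]: column 0 = subject device, column 1 = predicate
latencies = np.array([[15.6, 14.7], [15.8, 15.1], [14.9, 14.5], [16.2, 15.9]])
icc = icc_2_1(latencies)
iqr = np.percentile(latencies[:, 0], 75) - np.percentile(latencies[:, 0], 25)
```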

    4. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:

    • No, this type of study was not performed. The device is not an AI-powered diagnostic tool that assists human readers in interpreting complex data like medical images. It's a medical device that measures physiological responses (VEMP). The study focuses on the device's ability to accurately capture and report these responses, which are then interpreted by clinicians.

    5. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:

    • The device inherently involves a human-in-the-loop for proper patient setup, transducer placement, monitoring for muscle contraction via biofeedback, and ultimately interpreting the results. The "performance" being evaluated is the system's ability to consistently provide objective VEMP measurements. There isn't an "algorithm only" mode separate from its intended use with a human operator. The software's processing of data from the hardware is integral to its function.

    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    • The ground truth is based on physiological measurement consistency and comparison to established literature norms and a legally marketed predicate device.
      • For Study 1, the ground truth for "normal" VEMP responses is referenced against "Norms from Literature" for P1/N1 latencies and P1-N1 amplitudes. The predicate device's performance also serves as a comparative "ground truth" for substantial equivalence.
      • For Study 2, "industry-accepted standards" for test-retest repeatability and reproducibility (derived from the predicate device's 510(k) document) served as the benchmark for reliability.

    7. The sample size for the training set:

    • This information is not provided. The document describes clinical studies that served as part of the regulatory submission (verification and validation), not studies for training a machine learning model.

    8. How the ground truth for the training set was established:

    • This information is not provided, as the studies described are related to clinical validation for regulatory clearance, not the development or training of a machine learning model. If any internal models (e.g. for artifact rejection or signal processing) were trained, their training data and ground truth establishment are not detailed in this document.

    K Number: K243495
    Manufacturer:
    Date Cleared: 2024-12-12 (30 days)
    Product Code:
    Regulation Number: 882.1870
    Reference & Predicate Devices:

    Why did this record match?
    510(k) Summary Text (full-text search):

    | OLT | 21 CFR §882.1400 |
    | GWJ | 21 CFR §882.1900 |

    Intended Use

    The UltraPro is intended for the acquisition, display, analysis, storage, reporting, and management of electrophysiological information from the human nervous and muscular systems including Nerve Conduction (NCS), Electromyography (EMG), Evoked Potentials (EP), Autonomic Responses and Intra-Operative Monitoring including Electroencephalography (EEG).

    Evoked Potentials (EP) include Visual Evoked Potentials (VEP), Auditory Evoked Potentials (AEP), Somatosensory Evoked Potentials (SEP), Electroretinography (ERG), Electrooculography (EOG), P300, Motor Evoked Potentials (MEP), and Contingent Negative Variation (CNV). The UltraPro with Natus Elite Software may be used to determine autonomic responses to physiologic stimuli by measuring the change in electrical resistance between two electrodes (Galvanic Skin Response and Sympathetic Skin Response). Autonomic testing also includes assessment of R-R interval variability. The UltraPro with Natus Elite Software is used to detect the physiologic function of the nervous system, for the location of neural structures during surgery, and to support the diagnosis of a neuromuscular disease or condition.

    The listed modalities do include overlap in functionality. In general, Nerve Conduction Studies measure the electrical responses of the nerve; Electromyography measures the electrical activity of the muscle and Evoked Potentials measure electrical activity from the Central Nervous System.

    The UltraPro with Natus Elite Software is intended to be used by a qualified healthcare provider.

    Device Description

    The UltraPro S100 system is designed for the acquisition, display, analysis, reporting, and management of electrophysiological information from the human nervous and muscular systems. The system is designed to perform Nerve Conduction (NCS), Electromyography (EMG), Evoked Potentials (EP), and Autonomic Response studies. The UltraPro S100 system provides a variety of tests spanning the various modalities.

    The UltraPro S100 system consists of the following major components:

    • Main unit (also known as base unit or main base unit) with integrated control panel
    • Amplifier (3- or 4-channel)
    • Computer: laptop or desktop (with keyboard and mouse)
    • Display monitor (for desktop systems)
    • Application software (Natus Elite)

    The UltraPro S100 has the following optional accessories/components:

    • Audio stimulators (headphones or other auditory transducers)
    • Visual stimulators (LED goggles or stimulus monitor)
    • Electrical stimulators (RS10 probes, stimulus probe with controls)
    • Cart and associated accessories (such as an isolation transformer) when using a cart
    • Miscellaneous accessories such as a patient response button, triple footswitch, reflex hammer, temperature probe and adapter, ultrasound device, printer, etc.

    The electrodiagnostics system is powered by a connection to mains.

    The entire user interface of UltraPro S100 system consists of two major elements:

    • The primary means of interacting with the system is via a personal computer (PC) running Natus Elite.
    • The secondary means of interaction is the user interface elements on the hardware.

    The UltraPro S100 is intended to be used by a qualified healthcare provider. This device does not provide any diagnostic conclusion about the patient's condition to the user. The intended use environment is in a professional healthcare facility environment.

    AI/ML Overview

    The provided text is a 510(k) Summary for the Natus Ultrapro S100 device. While it describes the device's indications for use and compares its technological characteristics to predicate devices, it does not contain information about the acceptance criteria or the specific study that proves the device meets those criteria, such as a clinical performance study with defined metrics like sensitivity, specificity, or accuracy. This document focuses on demonstrating substantial equivalence to a predicate device primarily through technical specifications and intended use.

    Therefore, I cannot provide a table of acceptance criteria, reported device performance, sample sizes used for test/training sets, data provenance, number or qualifications of experts, adjudication methods, or details about MRMC or standalone studies based on the provided text. The document is primarily a comparison of features and intended use.

    The "Conclusion" section on page 14 states: "Verification and validation activities were conducted to establish the performance and safety characteristics of the UltraPro S100. The results of these activities demonstrate that the UltraPro S100 is safe, effective, and performance is substantially equivalent to the predicate devices." However, it does not elaborate on what these activities entailed or the specific criteria and results.


    K Number: K240420
    Manufacturer:
    Date Cleared: 2024-09-20 (220 days)
    Product Code:
    Regulation Number: 882.1400
    Reference & Predicate Devices:

    Why did this record match?
    510(k) Summary Text (full-text search):

    | Classification name and product code | 882.1400 Electroencephalograph (OLU); 882.1900 | 882.1400 Electroencephalograph (GWQ); 882.1900 |

    Intended Use

    The NeuroField Analysis Suite is to be used by qualified clinical professionals for the display and storage of electrical activity of a patient's brain, including the post-hoc statistical evaluation of the human electroencephalogram (EEG) and event-related potentials (ERP).

    Device Description

    The NeuroField Analysis Suite is a Normalizing Quantitative Electroencephalograph (QEEG) Software that can (1) execute EEG analysis and (2) conduct ERP tests and ERP analysis. The NeuroField Analysis Suite is Software as a Medical Device (SaMD) consisting of two modules, the NF EEG Analysis Module and the NF ERP Module. The NF EEG Analysis Module integrates with the Q21 EEG system by adding "Analysis", "Report", and "Tools" menu items and toolbars. It performs real-time and offline analysis functions and displays analysis results in separate windows in the UI, which can be accessed via the "Analysis" and "Reports" menu items. The NF ERP Module is a separate, stand-alone evoked response potential (ERP) application that can control and acquire data from the Q21 EEG system and performs typical ERP functions such as stimulus presentation, EEG epoching, epoch averaging, reaction-time measurement, and ERP display.
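The "EEG epoching" and "epoch averaging" functions attributed to the NF ERP Module are standard signal-processing steps. A minimal generic sketch (not the module's actual code, and with all parameter names assumed for illustration) looks like this:

```python
import numpy as np

def epoch_and_average(eeg, stim_samples, pre, post):
    """Cut fixed-length epochs around each stimulus marker and average.

    eeg          : 1-D single-channel EEG signal
    stim_samples : sample indices of stimulus onsets
    pre, post    : samples to keep before / after each onset

    Random background EEG cancels across epochs while the time-locked
    ERP survives in the average.
    """
    epochs = []
    for s in stim_samples:
        if s - pre >= 0 and s + post <= len(eeg):
            seg = eeg[s - pre:s + post]
            # Baseline-correct each epoch on its pre-stimulus interval
            epochs.append(seg - seg[:pre].mean())
    return np.mean(epochs, axis=0)
```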

    AI/ML Overview

    The NeuroField Analysis Suite, comprising the NF EEG Analysis Module and the NF ERP Module, was evaluated for performance. Here's a breakdown of the acceptance criteria and the study details:

    1. Table of Acceptance Criteria and Reported Device Performance

    The FDA document doesn't explicitly state acceptance criteria in terms of numerical performance metrics (e.g., specific accuracy, sensitivity, specificity values). Instead, substantial equivalence is claimed based on comparable functionality and performance to predicate devices. The "performance" reported reflects this comparative approach.

    Each feature below pairs the acceptance criterion (implied from the predicate comparison) with the reported device performance.

    NF EEG Analysis Module (vs. NeuroGuide Analysis System):

    • EEG Analysis Functionality. Criterion: comparable re-montaging, filtering, event markers, and data analysis modes (marker-based or whole data). Reported: performs re-montaging, filtering, adding event markers, and analysis based upon markers or whole data.
    • Z-Score Generation. Criterion: comparable generation of Z-Scores for absolute and relative power of EEG bands/frequencies. Reported: generates Z-Scores for absolute power and relative power of EEG bands or individual frequencies.
    • Visualization. Criterion: comparable generation of head maps and FFT spectra with equivalent frequency resolution. Reported: generates head maps and FFT spectra with equivalent frequency resolution (0.5 Hz).
    • Reporting. Criterion: comparable tabular export and automated report generation. Reported: provides tabular export and automated report generation.
    • Inverse Solution (LORETA). Criterion: comparable calculation of the inverse solution (Key Institute 2394 LORETA model) and generation of current densities/powers of ROIs using the Brodmann Atlas. Reported: calculates the inverse solution (using the standard Key Institute 2394 LORETA model) and generates the current densities and powers of the ROIs (Regions of Interest) over the cortex using the Brodmann Atlas.
    • Supported File Formats. Criterion: comparable ability to read standard EDF and EDF+ files. Reported: reads standard EDF and EDF+ files; additionally supports XDF.
    • Signal/Spectrum Comparison. Criterion: comparable signals on the EEG plot and comparable spectrum/sidelobes for real and simulated data. Reported: shows comparable signals on the EEG plot for the same EDF file as NeuroGuide; spectrum and sidelobes for real and simulated data are comparable.
    • Z-Score Comparison. Criterion: comparable Z-Scores for real and simulated data. Reported: shows comparable Z-Scores.
    • Headmap Topography. Criterion: comparable topographies over standard EEG bands. Reported: visualization of headmaps shows comparable topographies over the standard EEG bands.

    NF ERP Module (vs. eVox System):

    • Averaged ERP Graphs. Criterion: comparable generation of averaged ERP graphs, allowing the user to measure latency and magnitude. Reported: generates averaged ERP graphs for EEG channels, allowing the user to measure latency and magnitude.
    • Session Length. Criterion: comparable customizable session length. Reported: allows for customizable session length.
    • Oddball Paradigm. Criterion: comparable application of auditory and visual oddball paradigms. Reported: applies auditory and visual oddball paradigm variations.
    • Parameter Settings. Criterion: comparable ability to set parameters for the visual and auditory oddball paradigms. Reported: allows the user to set parameters for the visual and auditory oddball paradigms, with more user-definable parameters than the predicate (e.g., visual filtering, oddball period, Go probability, customizable audio/visual stimuli).
    • Mathematical Correctness. Criterion: mathematically correct generated averaged ERPs. Reported: demonstrated mathematically correct averaged ERPs through mathematical validation.
    • ERP Component Elicitation. Criterion: elicitation of correct ERP components. Reported: analysis of real data shows it can elicit correct ERP components.
    • Stimulus Reliability. Criterion: reliable generated stimulus. Reported: confirmed via analysis of real data.
    • Test Reliability. Criterion: confirmed test reliability (split-half reliability). Reported: the split-half reliability measure confirms test reliability.
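Split-half reliability, the last metric cited for the NF ERP Module, is conventionally computed by correlating the averages of two halves of the epochs and applying the Spearman-Brown correction. A generic sketch follows (the module's exact procedure is not detailed in the summary):

```python
import numpy as np

def split_half_reliability(epochs):
    """Split-half reliability of an averaged ERP, Spearman-Brown corrected.

    Correlate the average of odd-numbered epochs with the average of
    even-numbered epochs, then correct for the halved test length.
    `epochs` is an (n_epochs, n_samples) array.
    """
    epochs = np.asarray(epochs, dtype=float)
    odd = epochs[1::2].mean(axis=0)
    even = epochs[0::2].mean(axis=0)
    r = np.corrcoef(odd, even)[0, 1]
    return 2.0 * r / (1.0 + r)  # Spearman-Brown prophecy formula
```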

    2. Sample Size Used for the Test Set and Data Provenance

    • NF EEG Analysis Module: The document states that "Comparative performance evaluations showed that the same EDF file, loaded to NF-EEG and Neuroguide shows comparable signals on the EEG Plot." It also mentions "real and simulated data" for spectrum and sidelobe calculations. No specific sample size (number of EDF files or simulated data instances) is provided.
      • Data Provenance: Not explicitly stated, but the use of "real and simulated data" suggests a mix. "Same EDF file" implies existing data, which could be retrospective.
    • NF ERP Module:
      • "Mathematical validation of NF-ERP was performed." - No specific sample size mentioned, as this is a theoretical assessment.
      • "Analysis of real data shows that the NF-ERP can elicit correct ERP components, the generated stimulus is reliable, and the split-half reliability measure confirms test reliability." - No specific sample size (number of cases/patients) is provided for this "real data" analysis.
      • Data Provenance: "Real data" is used, but country of origin or whether it's retrospective/prospective is not specified.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    The document does not mention the use of experts to establish a "ground truth" for the comparative performance evaluations of either the EEG or ERP modules. The comparison is primarily against the output of the predicate devices.

    4. Adjudication Method for the Test Set

    Not applicable, as no external "ground truth" established by experts or adjudication process is described for the test set. The comparison is directly between the outputs of the NeuroField Analysis Suite and the predicate devices (Neuroguide or eVox).

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done

    No, a multi-reader multi-case (MRMC) comparative effectiveness study evaluating human readers' improvement with AI vs. without AI assistance was not conducted or described in this document. The study focuses on the standalone performance and comparison to predicate devices, not human-in-the-loop performance.

    6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) was Done

    Yes, a standalone performance evaluation was done for both modules.

    • NF EEG Analysis Module: Directly compared its outputs (EEG plots, spectrums, sidelobes, Z-scores, headmaps) against those of the Neuroguide Analysis System.
    • NF ERP Module: Underwent mathematical validation to confirm the correctness of averaged ERPs and analysis of real data to demonstrate correct ERP component elicitation, stimulus reliability, and test reliability using split-half reliability.

    7. The Type of Ground Truth Used

    The "ground truth" for the performance evaluations is implicitly the output or established functionality of the predicate devices (Neuroguide for EEG and eVox for ERP). For the NF ERP module, mathematical correctness and consistency with known physiological responses (correct ERP components, reliable stimulus) served as the "ground truth" for its internal validation. There's no mention of pathology, outcomes data, or external expert consensus as a ground truth.

    8. The Sample Size for the Training Set

    The document does not provide any information regarding a training set size. This suggests that the NeuroField Analysis Suite, as a QEEG and ERP analysis software, likely relies on established algorithms and mathematical models rather than machine learning models that require extensive training data.

    9. How the Ground Truth for the Training Set Was Established

    Not applicable, as no training set or machine learning model is described. The device appears to be based on deterministic algorithms for signal processing and statistical evaluation of EEG/ERP data.


    K Number: K234095
    Manufacturer:
    Date Cleared: 2024-06-21 (178 days)
    Product Code:
    Regulation Number: 874.1050
    Reference & Predicate Devices:

    Why did this record match?
    510(k) Summary Text (full-text search):

    Classification Name: Audiometer / Evoked response auditory stimulator (21 CFR 874.1050 & 21 CFR 882.1900)

    Intended Use

    The OtoNova Pro device is indicated for use when there is a requirement to screen for hearing disorders by objective and non-invasive means. ABR, TEOAE, and DPOAE screening test results are automatically interpreted, and a clear 'Pass' or 'Refer' result is presented to the user. Use of the device is indicated when the patient is unable to give reliable voluntary responses to sound, especially with infants.

    Use of the device facilitates the early detection of hearing loss and its characterization. Where the individual to be screened is healthy with no medical conditions related to the ear, as in the case of well-baby hearing screening, the user can be a trained screener. In all other cases the user should be an audiologist or medical professional.

    The TEOAE and DPOAE analytical functions of the device are indicated when objective non-invasive clinical investigations require the characterization and monitoring of the peripheral auditory function. For this purpose, the device is intended to be used by audiologists or other professionals skilled in audiology.

    These TEOAE and DPOAE tests are applicable to populations of any age to obtain objective evidence of peripheral auditory function.

    Device Description

    OtoNova is a compact, portable, battery-powered electronic device which records physiological responses to sound for the purpose of hearing testing. It is controlled wirelessly from a local controlling device.

    OtoNova has two hardware variants: OtoNova and OtoNova Pro.

    Both the OtoNova and OtoNova Pro devices have been directly engineered from Otodynamics' currently marketed Otoport OAE+ABR device, retaining all the testing algorithms of the Otoport OAE+ABR device. The primary aim of the development was to physically separate the control console from the testing device while maintaining the same performance and effectiveness.

    Like the predicate Otoport OAE+ABR device, both OtoNova devices can record two different physiological indicators of a functioning auditory system's peripheral response to sound: (a) otoacoustic emissions (OAEs), which are small sounds made by the inner ear in response to acoustic stimulation, and (b) auditory brainstem responses (ABRs), which are tiny electrical signals emanating from the auditory brainstem in response to sound. Automatic recognition of an ABR response is referred to as AABR.

    During ABR or OAE testing, low-level sounds are delivered to the ear. The responses to multiple presentations of these sounds (either acoustic or electrical responses) are recorded digitally and added together to enhance the repeated responses relative to the random noise signals that are always present. The averaged signal is automatically analysed by the device to identify and quantify the true physiological response component and to assess the degree of noise contamination. This allows the quality/accuracy of the recording to be determined for evidence of response validity. The processed data is reported to and displayed on the controlling device.
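The averaging step described above improves the signal-to-noise ratio in proportion to the square root of the number of presentations. A small simulation with assumed (not device-specific) signal and noise levels illustrates this:

```python
import numpy as np

rng = np.random.default_rng(0)

# A fixed "physiological" response buried in much larger random noise,
# as in ABR recording (illustrative amplitudes, not device specifications).
n_sweeps, n_samples = 2000, 256
t = np.arange(n_samples)
response = 0.5 * np.sin(2 * np.pi * t / 64.0)                    # ~0.5 uV signal
sweeps = response + rng.normal(0.0, 5.0, (n_sweeps, n_samples))  # 5 uV noise

avg = sweeps.mean(axis=0)

# Residual noise in the average shrinks like sigma / sqrt(N):
resid = np.std(avg - response)
expected = 5.0 / np.sqrt(n_sweeps)   # ~0.11 uV predicted residual noise
```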

    AI/ML Overview

    The provided text describes the regulatory clearance (K234095) for the Otodynamics OtoNova/OtoNova Pro device, comparing it to its predicate device, the Otoport/Otocheck OAE+ABR (K143395). The document focuses on demonstrating substantial equivalence, rather than detailing a specific clinical study with predefined acceptance criteria for AI model performance.

    However, based on the information provided, we can infer the acceptance criteria and the study that "proves" the device meets them, primarily through the lens of functional equivalence and clinical agreement with a well-established predicate device. The core argument is that the OtoNova/OtoNova Pro performs audiological tests substantially the same as the predicate device, despite changes in physical design and user interface.

    Here's an analysis based on the provided text, structured to answer your questions:


    Inferred Acceptance Criteria and Reported Device Performance

    The acceptance criteria are not explicitly stated with quantitative metrics for the new device. Instead, the performance is deemed acceptable if it is "substantially the same" or "similar" to the predicate device, which is already legally marketed and presumed efficacious. The "study" is a combination of bench testing and a small clinical validation aimed at demonstrating this equivalence.

    Here's a table based on the comparisons made in the document:

    Acceptance criteria (inferred from predicate equivalence) and reported device performance (OtoNova/OtoNova Pro):

    • Electrical Driving Signals Equivalence: Stimulator probe transducer electrical driving signals must be substantially the same as the predicate (within 1dB) across the functional frequency range for TEOAE, DPOAE, and ABR. Reported: "Found to be substantially the same (to within 1dB) across the functional frequency range."
    • Acoustic Stimulation Equivalence: Acoustic stimulation delivered by the probe into a calibrated ear simulator must be substantially the same as the predicate (within 1dB) across the functional frequency range for TEOAE, DPOAE, and ABR (including ABR with ear-cup). Reported: "Found to be substantially the same (to within 1dB) across the functional frequency range."
    • Sensitivity to Simulated Responses Equivalence: OAE and ABR responses recorded from a factory-reference 'response simulator' must be substantially the same levels/waveforms as the predicate (within 1dB). Reported: "Responses recorded by the OtoNova Pro were substantially the same levels (within 1dB) across the functional frequency for OAEs, and the ABR recorded had substantially the same size and waveform for ABR (within 1dB)."
    • Clinical Screening Test Result Agreement: OtoNova's Nova-Link should yield the same "Pass," "Refer," or "Invalid test result" as the predicate device under the same screening criteria. Reported: "OtoNova's Nova-Link gives same screening test result under the same screening criteria (i.e. clear response, no clear response, invalid result) as the predicate device."
    • Clinical Physical Characteristics of Recorded Responses Agreement: Physical characteristics of recorded responses (OAE, ABR waveforms) should be similar to the predicate, with marginal variability no wider than the predicate. Reported: "The physical characteristics of the recorded responses were similar on each device. In the case of marginal response levels, where variability is to be expected, the range of marginality was no wider than for the Otoport OAE+ABR."
    • Clinical Reported Response Levels (OAE) Agreement: Reported OAE response levels should be the same across frequency as the predicate, within expected tolerance due to subject movement. Reported: "In the recording of OAE response for clinical purposes the OtoNova and OtoNova Pro the reported response levels were the same across frequency as with the Otoport OAE +ABR device within the tolerance expected due to subject movement."
    • Clinical Reported Noise Levels Agreement: Reported noise levels should be similar to those reported by the predicate, within the expected intrinsic variability of noise. Reported: "The reported noise levels reported by Novalink were similar to those reported by the Otoport C. within the expected intrinsic variability of noise."
    • Usability (Human Factors) Acceptance: Users should be able to sufficiently understand the product/IFU to successfully record tests and use the device per its intended use with no substantial issues. Reported: "All the 16 users were able to sufficiently understand the OtoNova product/ IFU, to successfully record tests and use the medical device per its intended use... There were no substantial issues found during this OtoNova summative evaluation."
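    Several of the bench criteria above reduce to checking that two measured frequency responses agree to within 1 dB at every measured point. A minimal sketch of such a check — the helper name and data points are hypothetical, since the submission does not describe the exact comparison procedure:

```python
import numpy as np

def within_tolerance_db(levels_a_db, levels_b_db, tol_db=1.0):
    """Return (pass/fail, worst-case difference) for two sets of levels in dB,
    compared point by point at the same frequencies. Hypothetical helper."""
    diff = np.abs(np.asarray(levels_a_db) - np.asarray(levels_b_db))
    return bool(np.all(diff <= tol_db)), float(diff.max())

# Toy probe-output levels at a few frequencies (dB SPL), predicate vs. candidate.
predicate = [65.0, 64.2, 63.8, 62.5]
candidate = [65.4, 63.9, 64.1, 62.2]

ok, worst = within_tolerance_db(predicate, candidate)
print(f"equivalent within 1 dB: {ok}, worst difference: {worst:.2f} dB")
```

A real bench comparison would sweep the full functional frequency range and repeat the check for each stimulus type (TEOAE, DPOAE, ABR); this sketch only captures the pass/fail logic.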

    Study Details:

    1. A table of acceptance criteria and the reported device performance: See table above.

    2. Sample sized used for the test set and the data provenance:

      • Clinical Testing: "data collected from 20 volunteer adult subjects" for functional equivalence comparison between OtoNova Pro and the predicate Otoport. The text does not specify the country of origin, but the company is based in the UK and the prior validation of the predicate included trials in "USA, Brazil, Israel and UK," implying international data.
      • Clinical Testing (Predicate Validation): The predicate's ABR infant screening algorithm was validated on "70 infants performed at Otodynamics Ltd" and independently trialed in "collaborating hospitals in USA, Brazil, Israel and UK." The algorithm was validated from "1078 tests files." This data is retrospective for the predicate's prior clearance, but serves as the basis for asserting the current device's equivalence.
      • Human Factors/Usability Testing: "16 participants external to Otodynamics." Provenance not specified but likely conducted in the UK given the company's location.
    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • For the current device's clinical testing, the "ground truth" is defined comparatively, against the predicate device's output. The subjective assessments of "similar," "no wider than," and "within expected tolerance" imply expert judgment, but the number and qualifications of experts involved in data interpretation for this specific equivalence trial are not provided.
      • For the predicate's ABR template, it was derived from a database of "1000 infant's ABR screening response waveforms independently collected using the Otodynamics ILO88 instrument... as part of a multicenter investigation into the Identification of Neonatal Hearing Impairment." This suggests a consensus-based ground truth from a large research study, likely involving multiple clinical experts (audiologists, researchers specialized in neonatal hearing). Their specific qualifications aren't listed, but the citation to a scientific publication (Norton et al., 2000) implies peer-reviewed clinical expertise.
    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set: Not explicitly stated. Given the nature of objective audiometric measurements (Pass/Refer based on algorithms comparing to criteria, and quantitative signal levels), adjudication in the typical sense of human reader consensus for subjective interpretations (like radiology reads) is less applicable. The "ground truth" is intrinsically linked to the device algorithms and their comparison to the predicate's algorithms, which are well-established.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance: Not applicable. This device is not an AI interpretation model for human readers. It's a diagnostic/screening device that produces objective measurements and automatically interpreted Pass/Refer results. The "AI" (automated interpretation) is core to the device's function, not an assistance tool for human interpretation.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done: Yes, in essence, the "bench tests" and the comparison of the algorithms' outputs (Pass/Refer, quantitative levels) can be considered a standalone assessment of the device's core functionality as an automated system. The device automatically analyzes the recorded physiological responses and presents a clear "Pass" or "Refer" result, which is the algorithm's standalone output.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

      • Primary Ground Truth: For the current device, the ground truth is established by functional and clinical equivalence to a legally marketed predicate device. The presumption is that the predicate's performance is already validated against a clinical ground truth.
      • Underlying Ground Truth (for predicate's algorithms):
        • OAE Screening: Based on the "Rhode Island Hearing Screening Assessment Project" and its reported algorithm, which was verified against clinical outcomes or established audiometric standards for hearing screening.
        • ABR Screening: Derived from a database of "1000 infant's ABR screening response waveforms Independently collected" as part of a "multicenter investigation into the Identification of Neonatal Hearing Impairment" (Norton et al., 2000). This implies a large-scale clinical dataset with established diagnoses/outcomes as the ultimate ground truth for the ABR algorithm's development.
    8. The sample size for the training set:

      • The document explicitly states that the OtoNova/OtoNova Pro uses the "same DSP firmware algorithms" as the predicate device. Therefore, there was no new training set specifically for this device's algorithms.
      • The training data implied for the predicate's ABR algorithm development was a database of "1000 infant's ABR screening response waveforms." This served as the basis for the "newborn ABR template" used by both devices.
      • The OAE algorithm's "training" or validation was performed as part of the "Rhode Island Hearing Screening Assessment Project," though no specific sample size for a "training set" is provided for that.
    9. How the ground truth for the training set was established:

      • For the ABR "training" (template creation): The "newborn ABR template" was derived from "1000 infant's ABR screening response waveforms independently collected." The ground truth for these 1000 waveforms would have been established through a comprehensive clinical protocol, likely involving repeated measures, follow-up diagnostics, and potentially consensus interpretation by expert audiologists from the multicenter investigation. The goal was to characterize "normal" ABR responses in neonates.
      • For the OAE algorithm: While not explicitly detailed as a "training set," the underlying principles and validation came from the "Rhode Island Hearing Screening Assessment Project," suggesting that the "Pass/Refer" criteria were correlated with actual hearing status as determined by more definitive diagnostic tests, forming the ground truth.

    K Number
    K234092
    Date Cleared
    2024-04-19

    (115 days)

    Product Code
    Regulation Number
    882.1870
    Reference & Predicate Devices
    Why did this record match?
    510k Summary Text (Full-text Search) :

    Regulation Number (21 CFR): §882.1870, §870.2700, §874.1820, §882.1890, §882.1900

    Intended Use

    The SafeOp 3: Neural Informatix System is intended for use in monitoring neurological status by recording transcranial motor evoked potentials (MEP), somatosensory evoked potentials (SSEP), electromyography (EMG), or assessing the neuromuscular junction (NMJ). Neuromonitoring procedures include intracranial, intratemporal, extratemporal, neck dissections, upper and lower extremities, spinal degenerative treatments, pedicle screw fixation, intervertebral fusion cages, rhizotomy, orthopedic surgery, open/percutaneous, lumbar, thoracic, and cervical surgical procedures.

    SafeOp 3 Accessories: The SafeOp Accessories are utilized in spine surgical procedures to assist in location of the nerves during or after preparation and placement of implants (intervertebral fusion cages and pedicle screw fixation devices) in open and percutaneous minimally invasive approaches.

    Device Description

    The SafeOp™ 3: Neural Informatix System (SafeOp 3 System), consists of the SafeOp patient interface with power supply and IV pole mount, the Alpha Informatix Tablet with docking station and power supply and a data transfer USB cable. Associated disposable accessories consists of an electrode harness, surface and/or subdermal needle electrodes, MEP Activator, Cranial Hub, PMAP Dilators and stimulating probe or clip contained in various kits.

    The subject device is intended for use by trained healthcare professionals, clinical neurophysiologists/technologists and appropriately trained non-clinical personnel. The subject device is intended for use in operating room environments of hospitals and surgical centers. System setup may be performed by both clinical and trained non-clinical personnel.

    The subject device records the following modalities:

    • Somatosensory evoked potentials (SSEP)
    • Motor evoked potentials (MEP)
    • Train-of-four neuromuscular junction (TO4)
    • Triggered electromyography (tEMG)
    • Free-run electromyography (sEMG)
    AI/ML Overview

    The provided text does not contain detailed information about specific acceptance criteria for the device's performance, nor does it describe a study that rigorously proves the device meets such criteria through a clinical validation or similar performance evaluation.

    The document is a 510(k) premarket notification summary for the "SafeOp 3: Neural Informatix System." Its primary purpose is to demonstrate substantial equivalence to a previously cleared predicate device (SafeOp2: Neural Informatix System, K213849, and reference device Cascade IOMAX Intraoperative Monitor, K162199), rather than to present a full clinical performance study with defined acceptance criteria and detailed results.

    Here's a breakdown of what the document does say, and what it lacks in relation to your request:

    What the document provides:

    • Device Name: SafeOp 3: Neural Informatix System
    • Intended Use/Indications for Use: Monitoring neurological status by recording transcranial motor evoked potentials (MEP), somatosensory evoked potentials (SSEP), electromyography (EMG), or assessing the neuromuscular junction (NMJ) during various surgical procedures.
    • Technological Comparison: A table comparing the SafeOp 3 System to predicate and reference devices, focusing on technical specifications like monitoring modalities, amplifier channels, stimulation parameters (voltage, current, pulse duration, repetition rate), and filter ranges. This comparison primarily aims to establish that the differences in technology do not raise new questions of safety or effectiveness.
    • Performance Data (Non-clinical): Mentions that "Nonclinical performance testing demonstrates that the subject SafeOp 3 System meets the functional, system, and software requirements." It also states "EMC and Electrical Safety Testing... was performed to ensure all functions... are electrically safe, and comply with recognized electrical safety standards." Usability testing was also performed.
    • Clinical Information Disclaimer: Explicitly states, "Determination of substantial equivalence is not based on an assessment of clinical performance data."

    What the document lacks significantly for your request:

    • A table of acceptance criteria and reported device performance: This is the most significant omission for your request. The document details technical specifications and comparisons but does not provide quantitative performance metrics (e.g., accuracy, sensitivity, specificity, or specific error rates) against pre-defined acceptance thresholds for any of its functionalities (MEP, SSEP, EMG, NMJ). The performance data mentioned are non-clinical (functional, system, software, EMC, electrical safety, usability), not clinical performance metrics.
    • Sample size used for the test set and data provenance: Since specific clinical performance studies are not detailed, this information is not provided.
    • Number of experts used to establish ground truth and qualifications: Not applicable as a clinical ground truth establishment process for performance evaluation is not described.
    • Adjudication method for the test set: Not applicable.
    • MRMC comparative effectiveness study: No such study is mentioned or detailed.
    • Standalone (algorithm only) performance: While the device is an "algorithm only" in a sense (it processes physiological signals), its performance isn't quantified in a standalone clinical evaluation or comparative study.
    • Type of ground truth used: No clinical ground truth is described for performance evaluation.
    • Sample size for the training set: Not applicable, as this is related to AI/ML development and training, which is not described. The device is a neuromonitoring system, not explicitly stated to be an AI/ML device in the context of this submission.
    • How the ground truth for the training set was established: Not applicable.

    Why this information is missing:

    The FDA 510(k) pathway for "substantial equivalence" often relies on demonstrating that a new device is as safe and effective as a legally marketed predicate, without necessarily requiring new clinical trials or detailed performance studies if the technological differences are minor and well-understood. The focus is on showing that any differences do not introduce new safety or effectiveness concerns.

    In summary, based solely on the provided text, I cannot complete the table of acceptance criteria or describe a study that proves the device meets these criteria in a clinical performance context. The document focuses on demonstrating substantial equivalence through technical comparison and non-clinical testing, rather than presenting clinical performance metrics.


    K Number
    K240430
    Device Name
    Otoport Pro
    Manufacturer
    Date Cleared
    2024-03-15

    (30 days)

    Product Code
    Regulation Number
    874.1050
    Reference & Predicate Devices
    Why did this record match?
    510k Summary Text (Full-text Search) :

    Classification Name: Audiometer / Evoked Response Auditory Stimulator (874.1050 & 882.1900)
    Classification Regulation: 874.1050, 882.1900

    Intended Use

    The Otoport Pro device is indicated for use when there is a requirement to screen for hearing disorders by objective and non-invasive means. ABR, TEOAE and DPOAE screening test results are automatically interpreted and a clear 'Pass' or 'Refer' result is presented to the user. The device is indicated when the patient is unable to give reliable voluntary responses to sound, especially with infants. Use of the device facilitates the early detection of hearing loss and its characterization.

    Where the individual to be screened is healthy with no medical conditions related to the ear, as in the case of well-baby hearing screening, the user can be a trained screener. In all other cases the user should be an audiologist or medical professional.

    The TEOAE and DPOAE analytical functions of the device are indicated when objective non-invasive clinical investigations require the characterization and monitoring of the peripheral auditory function. For this purpose, the device is intended to be used by audiologists or other professionals skilled in audiology. These TEOAE and DPOAE tests are applicable to populations of any age to obtain objective evidence of peripheral auditory function.

    Device Description

    The Otodynamics Ltd ("Otodynamics") Otoport Pro device is a compact handheld device capable of high quality OAE measurements for clinical purposes and also automated ABR and OAE testing for fast infant screening.

    Responses to sound are recorded via an applied earphone and/or adhesive surface electrode pad. Specifically, the device can record otoacoustic emissions (DPOAEs or TEOAEs) and auditory brainstem responses (ABRs) to sound. These responses are especially useful in screening the hearing of infants for deafness. The more detailed analysis of DPOAE and TEOAE responses is additionally useful as a component of the audiological diagnostic test battery.

    The Otoport Pro is simple to use with customizable automation to make testing easy and the results clear. It has user access controls, graphical display panel, and extensive test database features. The Otoport Pro when configured for clinical use has advanced test features including extensive raw data capture for offline review and analysis if required.

    The Otoport Pro device is a hardware/software revision of the currently marketed Otoport OAE+ABR, having the same performance and intended uses as the Otoport OAE+ABR device.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study information for the Otoport Pro device, based on the provided document:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document describes the performance of the Otoport Pro in comparison to its predicate device (Otoport OAE+ABR, K143395) rather than setting distinct acceptance criteria with specific threshold values. The primary acceptance criterion appears to be demonstrating substantial equivalence in performance to the predicate device.

    Test modality and metric, implied acceptance criterion (substantial equivalence to predicate), and reported device performance (Otoport Pro vs. predicate):

    • Acoustic Stimulus Differences: Differences in amplitude and waveform of the acoustic stimulus (TEOAE, DPOAE, AABR) when using the same probe. Reported: TEOAE:

    K Number
    K233649
    Date Cleared
    2024-03-08

    (115 days)

    Product Code
    Regulation Number
    882.1900
    Reference & Predicate Devices
    Why did this record match?
    510k Summary Text (Full-text Search) :

    K233649

    Trade/Device Name: ALGO Pro Newborn Hearing Screener (ALGO Pro)
    Common Name: Stimulator, Auditory, Evoked Response
    Regulation Number: 21 CFR 882.1900

    Intended Use

    The ALGO Pro Newborn Hearing Screener is a mobile, noninvasive instrument used to screen infants for hearing loss. The screener uses Automated Auditory Brainstem Response (AABR®) for automated analysis of Auditory Brainstem Response (ABR) signals recorded from the patient. The screener is intended for babies between the ages of 34 weeks (gestational age) and 6 months. Babies should be well enough to be ready for discharge from the hospital, and should be asleep or in a quiet state at the time of screening. The screener is simple to operate. It does not require special technical skills or interpretation of results. Basic training with the equipment is sufficient to learn how to screen infants who are in good health and about to be discharged from the hospital. A typical screening process can be completed in 15 minutes or less. Sites appropriate for screening include the well-baby nursery, NICU, mother's bedside, audiology suite, outpatient clinic, or doctor's office.

    Device Description

    The ALGO® Pro is a fully automated hearing screening device used to screen infants for hearing loss. It provides consistent, objective pass/refer results. The ALGO Pro device utilizes Auditory Brainstem Response (ABR) as the hearing screening technology, which allows the screening of the entire hearing pathway from outer ear to the brainstem. The ABR signal is evoked by a series of acoustic broadband transient stimuli (clicks) presented to a subject's ears using acoustic transducers and recorded by sensors placed on the skin of the patient. The ALGO Pro generates each click stimulus and presents it to the patient's ear using acoustic transducers attached to disposable acoustic earphones. The click stimulus elicits a sequence of distinguishable electrophysiological signals produced as a result of signal transmission and neural responses within the auditory nerve and brainstem of the infant. Disposable sensors applied to the infant's skin pick up this evoked response, and the signal is transmitted to the screener via the patient electrode leads. The device employs advanced signal processing technology such as amplification, digital filtering, artifact rejection, noise monitoring and noise-weighted averaging to separate the ABR from background noise and from other brain activity. The ALGO Pro uses a statistical algorithm based on binomial statistics to determine if there is a response to the stimulus that matches the ABR template of a normal hearing newborn. If a response is detected that is consistent with the ABR template derived from normal hearing infants (automated auditory brainstem response technology, AABR), the device provides an automated 'Pass' result. A 'Refer' result is automatically generated if the device cannot detect an ABR response with sufficient statistical confidence or one that is consistent with the template.
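    The binomial-statistics detection described above can be sketched as a one-sided binomial test on how many single-sweep features agree with the template; the manufacturer does not publish the exact ALGO statistic, so the feature counts and significance threshold below are illustrative assumptions:

```python
from math import comb

def binomial_detection(agreements, n, p0=0.5, alpha=1e-3):
    """One-sided binomial test: probability of observing >= `agreements`
    template matches out of n sweeps if matches occurred by chance (p0).
    A 'Pass' requires that probability to fall below alpha.
    Sketch only - not the proprietary ALGO algorithm."""
    p_value = sum(comb(n, k) * p0**k * (1 - p0)**(n - k)
                  for k in range(agreements, n + 1))
    return ("Pass" if p_value < alpha else "Refer"), p_value

# Example: 140 of 200 single-sweep features match the template polarity.
result, p = binomial_detection(140, 200)
print(result, p)
```

The appeal of this style of detector, as the description notes, is that it yields an objective result with a known statistical confidence rather than relying on a screener's visual interpretation of the waveform.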

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study information for the ALGO Pro Newborn Hearing Screener, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    The FDA submission primarily focuses on demonstrating substantial equivalence to a predicate device (ALGO 5) rather than setting specific, numerical acceptance criteria for a new clinical performance study. The "acceptance criteria" here are implied through the comparison with the predicate device's established performance and the demonstration that the ALGO Pro performs comparably.

    Implied acceptance criteria and reported device performance (ALGO Pro / comparative):

    • Safety: Complies with IEC 60601-1 Ed. 3.2, IEC 60601-2-40 Ed. 2.0, IEC 60601-1-6, IEC 62366-1, IEC 62304, IEC 62133-2, IEC 60601-1-2 Ed. 4.1, IEC 60601-4-2, FCC Part 15.
    • Biocompatibility: Passed Cytotoxicity, Sensitization, and Irritation tests (ISO 10993-1:2018 for limited contact).
    • Mechanical Integrity: Passed drop and tumble, cable bend cycle, electrode clip cycle, power button cycle, connector mating cycle, bassinet hook cycle, and docking station latch/pogo pin cycle testing.
    • Effectiveness (AABR Algorithm Performance): Utilizes the exact same AABR algorithm as the predicate ALGO 5.
    • Algorithmic Sensitivity: 99.9% for each ear (using binomial statistics, inherited from the ALGO AABR algorithm).
    • Overall Clinical Sensitivity: 98.4% (combined results from independent, peer-reviewed clinical studies using the ALGO AABR algorithm, e.g., Peters (1986), Herrmann et al. (1995)).
    • Specificity: 96% to 98% (from independent, peer-reviewed clinical studies using the ALGO AABR algorithm).
    • Performance Equivalence to Predicate: Bench testing confirmed equivalence of acoustic stimuli, recording of evoked potentials, and proper implementation of the ABR template and algorithm, supporting device effectiveness.
    • Software Performance: Software Verification and Validation testing conducted; Basic Documentation Level provided.
    • Usability: Formative and summative human factors/usability testing conducted; no concerns regarding safety and effectiveness raised.

    2. Sample Size Used for the Test Set and Data Provenance

    No new clinical "test set" was used for the ALGO Pro in the context of a prospective clinical trial. The performance data for the AABR algorithm (sensitivity and specificity) are derived from previously published, peer-reviewed clinical studies that validated the underlying ALGO AABR technology.

    • Sample Size for AABR Algorithm Development: The ABR template, which forms the basis of the ALGO Pro's algorithm, was determined by superimposing responses from 35 neonates to 35 dB nHL click stimuli.
    • Data Provenance for ABR Template: The data for the ABR template was collected at Massachusetts Eye and Ear Infirmary during the design and development of the original automated infant hearing screener.
    • Data Provenance for Clinical Performance (Sensitivity/Specificity): The studies cited (Peters, J. G. (1986) and Herrmann, Barbara S., Aaron R. Thornton, and Janet M. Joseph (1995)) are generally long-standing research from various institutions. The document doesn't specify the exact country of origin for the studies cited beyond the development of the template in the US. These studies would be retrospective relative to the ALGO Pro submission, as they describe the development and validation of the original ALGO technology.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and the Qualifications of Those Experts

    This information is not provided in the document for the studies that established the ground truth for the ABR template or the clinical performance of the ALGO AABR algorithm. The template was derived from "normal hearing" neonates, implying a clinical assessment of their hearing status, but the specifics of how that ground truth was established (e.g., specific experts, their qualifications, or methods other than the ABR itself) are not detailed within this submission summary.

    4. Adjudication Method for the Test Set

    Not applicable, as no new clinical "test set" requiring adjudication for the ALGO Pro itself was conducted or reported in this submission. The historical studies developing the AABR algorithm would have defined their own ground truth and validation methods, but these are not detailed here.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    Not applicable. The ALGO Pro is an automated hearing screener that provides a "Pass" or "Refer" result without requiring human interpretation of the ABR signals themselves. It is not an AI-assisted human reading device, but rather a standalone diagnostic aid.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done

    Yes, a standalone performance assessment of the AABR algorithm (which is essentially the "algorithm only" component) was done indirectly through historical studies and directly through bench testing.

    • The core AABR algorithm has a 99.9% algorithmic sensitivity (based on binomial statistics).
    • Historically, independent clinical studies (cited) showed an overall clinical sensitivity of 98.4% and specificity of 96% to 98% for the ALGO AABR technology when used in clinical settings.
    • For the ALGO Pro specifically, bench testing was performed to confirm the equivalence of the acoustical stimuli, recording of evoked potentials, and proper implementation of the ABR template and algorithm between the ALGO Pro and its predicate device (ALGO 5). This bench testing effectively confirmed the standalone performance of the ALGO Pro's algorithm against the established performance of the predicate.
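    The cited sensitivity and specificity figures are simple ratios over screening outcome counts. A worked sketch with hypothetical counts chosen only to reproduce numbers in the same range as the cited studies (the actual study counts are not given in this summary):

```python
def screening_metrics(tp, fn, tn, fp):
    """Sensitivity and specificity from screening outcome counts:
    tp/fn = hearing-impaired ears correctly referred / missed,
    tn/fp = normal-hearing ears correctly passed / falsely referred."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical counts consistent with ~98.4% sensitivity and ~97% specificity:
sens, spec = screening_metrics(tp=123, fn=2, tn=970, fp=30)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")
```

For a screening device, high sensitivity is the priority (missing an impaired ear is costly), while specificity governs how many healthy babies are unnecessarily referred for follow-up diagnostics.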

    7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.)

    • For ABR Template Development: The ABR template was based on the morphology of ABR waveforms from normal hearing neonates. This implies a ground truth established by clinical assessment of "normal hearing" status.
    • For Clinical Performance (Sensitivity/Specificity): The clinical studies cited (Peters, Herrmann et al.) would have established ground truth for hearing status through follow-up diagnostic audiologic evaluations, which could include behavioral audiometry, auditory steady-state response (ASSR) testing, or other objective measures (likely expert consensus based on these diagnostic tests). The document does not specify the exact ground truth methodology of these historical studies.

    8. The Sample Size for the Training Set

    The document states that the ABR template, which underpins the algorithm, was derived by superimposing responses from 35 neonates. This set of 35 neonates effectively served as the "training set" or foundational data for the ABR template.

    9. How the Ground Truth for the Training Set Was Established

    The ground truth for the "training set" (the 35 neonates used to derive the ABR template) was established based on their status as "normal hearing" infants. This implies a determination of their hearing status through established clinical methods for neonates at the time (e.g., standard audiologic evaluation to confirm normal hearing), though the specific details of these diagnostic methods are not provided in this summary.


    Why did this record match?
    510k Summary Text (Full-text Search) :

    CFR 882.1400 | burst suppression detection software for electroencephalograph |
    | GWJ - 21 CFR 882.1900

    Intended Use

    Indications for Use for CARESCAPE Canvas 1000:

    CARESCAPE Canvas 1000 is a multi-parameter patient monitor intended for use in multiple areas within a professional healthcare facility.

    CARESCAPE Canvas 1000 is intended for use on adult, pediatric, and neonatal patients one patient at a time.

    CARESCAPE Canvas 1000 is indicated for monitoring of:

    · hemodynamic (including ECG, ST segment, arrhythmia detection, ECG diagnostic analysis and measurement, invasive pressure, non-invasive blood pressure, pulse oximetry, regional oxygen saturation, total hemoglobin concentration, cardiac output (thermodilution and pulse contour), temperature, mixed venous oxygen saturation, and central venous oxygen saturation),

    · respiratory (impedance respiration, airway gases (CO2, O2, N2O, and anesthetic agents), spirometry, gas exchange), and

    · neurophysiological status (including electroencephalography, Entropy, Bispectral Index (BIS), and neuromuscular transmission).

    CARESCAPE Canvas 1000 is able to detect and generate alarms for ECG arrhythmias: atrial fibrillation, accelerated ventricular rhythm, asystole, bigeminy, bradycardia, ventricular couplet, irregular, missing beat, multifocal premature ventricular contractions (PVCs), pause, R on T, supra ventricular tachycardia, trigeminy, ventricular bradycardia, ventricular fibrillation/ ventricular tachycardia, ventricular tachycardia, and VT>2. CARESCAPE Canvas 1000 also shows alarms from other ECG sources.

    CARESCAPE Canvas 1000 also provides other alarms, trends, snapshots and events, and calculations and can be connected to displays, printers and recording devices.

    CARESCAPE Canvas 1000 can interface to other devices. It can also be connected to other monitors for remote viewing and to data management software devices via a network.

    CARESCAPE Canvas 1000 is intended for use under the direct supervision of a licensed healthcare practitioner, or by personnel trained in proper use of the equipment in a professional healthcare facility.

    CARESCAPE Canvas 1000 is not intended for use in an MRI environment.

    Indications for Use for CARESCAPE Canvas Smart Display:

    CARESCAPE Canvas Smart Display is a multi-parameter patient monitor intended for use in multiple areas within a professional healthcare facility.

    CARESCAPE Canvas Smart Display is intended for use on adult, pediatric, and neonatal patients one patient at a time.

    CARESCAPE Canvas Smart Display is indicated for monitoring of:

    · hemodynamic (including ECG, ST segment, arrhythmia detection, ECG diagnostic analysis and measurement, invasive pressure, non-invasive blood pressure, pulse oximetry, regional oxygen saturation, total hemoglobin concentration, cardiac output (thermodilution), and temperature), and

    · respiratory (impedance respiration, airway gases (CO2)).

    CARESCAPE Canvas Smart Display is able to detect and generate alarms for ECG arrhythmias: atrial fibrillation, accelerated ventricular rhythm, asystole, bigeminy, bradycardia, ventricular couplet, irregular, missing beat, multifocal premature ventricular contractions (PVCs), pause, R on T, supra ventricular tachycardia, trigeminy, ventricular bradycardia, ventricular fibrillation/ ventricular tachycardia, ventricular tachycardia, and VT>2. CARESCAPE Canvas Smart Display also shows alarms from other ECG sources.

    CARESCAPE Canvas Smart Display also provides other alarms, trends, snapshots and events. CARESCAPE Canvas Smart Display can use CARESCAPE ONE or CARESCAPE Patient Data Module (PDM) as patient data acquisition devices. It can also be connected to other monitors for remote viewing and to data management software devices via a network.

    CARESCAPE Canvas Smart Display is intended for use under the direct supervision of a licensed healthcare practitioner, or by personnel trained in proper use of the equipment in a professional healthcare facility.

    CARESCAPE Canvas Smart Display is not intended for use in an MRI environment.

    Indications for Use for CARESCAPE Canvas D19:

    CARESCAPE Canvas D19 is intended for use as a secondary display with a compatible host device. It is intended for displaying measurement and parametric data from the host device and providing visual and audible alarms generated by the host device.

    CARESCAPE Canvas D19 enables controlling the host device, including starting and discharging a patient case, changing parametric measurement settings, changing alarm limits and disabling alarms.

    Using CARESCAPE Canvas D19 with a compatible host device enables real-time multi-parameter patient monitoring and continuous evaluation of the patient's ventilation, oxygenation, hemodynamic, circulation, temperature, and neurophysiological status.

    Indications for Use for F2 Frame; F2-01:

    The F2 Frame, module frame with two slots, is intended to be used with compatible GE multiparameter patient monitors to interface with two single width parameter modules, CARESCAPE ONE with a slide mount, and recorder.

    The F2 Frame is intended for use in multiple areas within a professional healthcare facility. The F2 Frame is intended for use under the direct supervision of a licensed healthcare practitioner, or by personnel trained in proper use of the equipment in a professional healthcare facility.

    The F2 Frame is intended for use on adult, pediatric, and neonatal patients and on one patient at a time.

    Device Description

    Hardware and software modifications carried out on the legally marketed predicate device CARESCAPE B850 V3.2 resulted in the new products CARESCAPE Canvas 1000 and CARESCAPE Canvas Smart Display, along with the CARESCAPE Canvas D19 and F2 Frame (F2-01), all of which are the subject of this submission.

    CARESCAPE Canvas 1000 and CARESCAPE Canvas Smart Display are new modular multi-parameter patient monitoring systems. In addition, the CARESCAPE Canvas D19 and F2 Frame (F2-01) are a new secondary display and a new module frame, respectively.

    The CARESCAPE Canvas 1000 and CARESCAPE Canvas Smart Display patient monitors incorporate a 19-inch display with a capacitive touch screen, and the screen content is user-configurable. They have an integrated alarm light and USB connectivity for other user input devices. The user interface is touchscreen-based and can also be operated with a mouse, a keyboard, or a remote controller. The system also includes the medical application software (CARESCAPE Software version 3.3). The CARESCAPE Canvas 1000 and CARESCAPE Canvas Smart Display include features and subsystems that are optional or configurable.

    The CARESCAPE Canvas 1000 and CARESCAPE Canvas Smart Display are compatible with the CARESCAPE Patient Data Module and CARESCAPE ONE acquisition device via F0 docking station (cleared separately).

    For the CARESCAPE Canvas 1000 patient monitor, the other type of acquisition modules, E-modules (cleared separately) can be chosen based on care requirements and patient needs. Interfacing subsystems that can be used to connect the E-modules to the CARESCAPE Canvas 1000 include a new two-slot parameter module F2 frame (F2-01), a five-slot parameter module F5 frame (F5-01), and a seven-slot parameter module F7 frame (F7-01).

    The CARESCAPE Canvas 1000 can also be used together with the new secondary CARESCAPE Canvas D19 display. The CARESCAPE Canvas D19 display provides a capacitive touch screen, and the screen content is user configurable. The CARESCAPE Canvas D19 display integrates audible and visual alarms and provides USB connectivity for other user input devices.

    AI/ML Overview

    Please note that the provided text is a 510(k) summary for a medical device and primarily focuses on demonstrating substantial equivalence to a predicate device through non-clinical bench testing and adherence to various standards. It explicitly states that clinical studies were not required to support substantial equivalence. Therefore, some of the requested information regarding clinical studies, human expert involvement, and ground truth establishment from patient data will likely not be present.

    Based on the provided text, here's the information regarding acceptance criteria and device performance:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not present a formal table of specific, quantifiable acceptance criteria alongside reported performance data. Instead, it states that various tests were conducted to demonstrate that the design meets specifications and complies with consensus standards. The performance is generally reported as "meets the specifications," "meets the EMC requirements," "meets the electrical safety requirements," and "fulfilled through compliance."

    However, we can infer some "acceptance criteria" based on the standards and tests mentioned:

    • General Performance
      Inferred acceptance criteria: Device design meets specifications relevant to its intended use (multi-parameter patient monitoring, ECG, ST segment, arrhythmia detection, various physiological measurements).
      Reported performance: "demonstrating the design meets the specifications"

    • Hardware
      Inferred acceptance criteria: Hardware functions as intended and meets safety/performance standards.
      Reported performance: "Hardware Bench Testing conducted"

    • Alarms
      Inferred acceptance criteria: Alarm system (classification, notification, adjustment, critical limits, On/Off, audio silencing) functions correctly and meets relevant standards (IEC 60601-1-8).
      Reported performance: "Alarms Bench Testing conducted." "Alarm management core functionalities: Classification and notification of alarms, Adjustment of alarm settings, Possibility to set critical alarm limits, Alarm On/Off functionality and audio silencing - Identical (to predicate)." "meets the specifications listed in the requirements." "Additional data is provided for compliance to: IEC 60601-1-8: 2020..."

    • EMC
      Inferred acceptance criteria: Meets Electromagnetic Compatibility (EMC) requirements as per IEC 60601-1-2 Edition 4.1 2020 and FDA guidance.
      Reported performance: "meet the EMC requirements described in IEC 60601-1-2 Edition 4.1 2020." "evaluated for electromagnetic compatibility and potential risks from common emitters."

    • Electrical Safety
      Inferred acceptance criteria: Meets electrical safety requirements as per IEC 60601-1:2020 "Edition 3.2" and 21 CFR Part 898, § 898.12 (electrode lead wires and cables).
      Reported performance: "meet the electrical safety requirements of IEC 60601-1:2020 'Edition 3.2'." "performed by a recognized independent and Certified Body Testing Laboratory (CBTL)." "fulfilled through compliance with IEC 60601-1:2020... clause 8.5.2.3."

    • Specific Parameters
      Inferred acceptance criteria: Meets performance standards for various physiological measurements (ECG, ST segment, NIBP, SpO2, temp, etc.) as detailed by specific IEC/ISO standards (e.g., IEC 60601-2-25, IEC 60601-2-27, IEC 80601-2-30, ISO 80601-2-55, etc.). Includes the EK-Pro arrhythmia detection algorithm performing equivalently to the predicate.
      Reported performance: "Additional data is provided for compliance to: IEC 60601-2-25:2011, IEC 60601-2-27:2011, IEC 80601-2-30: 2018, IEC 60601-2-34: 2011, IEC 80601-2-49: 2018, ISO 80601-2-55: 2018, ISO 80601-2-56: 2017+AMD1:2018, ISO 80601-2-61: 2017, IEC 80601-2-26:2019, IEC 60601-2-40: 2016, ANSI/AAMI EC57:2012." "EK-Pro arrhythmia detection algorithm: EK-Pro V14 - Identical (to predicate)."

    • Environmental
      Inferred acceptance criteria: Operates and stores safely within specified temperature, humidity, and pressure ranges. Withstands mechanical stress, fluid ingress, and packaging requirements.
      Reported performance: "confirmed to meet the specifications listed in the requirements." "Environmental (Mechanical, and Thermal Safety) testing" conducted. "Fluid ingress." "Packaging Bench Testing."

    • Reprocessing
      Inferred acceptance criteria: Reprocessing efficacy validation meets acceptance criteria based on documented instructions and worst-case devices/components, following FDA guidance "Reprocessing Medical Devices in Health Care Settings: Validation Methods and Labeling."
      Reported performance: "Reprocessing efficacy validation has been conducted." "The reprocessing efficacy validation met the acceptance criteria for the reprocessing efficacy validation tests."

    • Human Factors/Usability
      Inferred acceptance criteria: Meets usability requirements as per IEC 60601-1-6: 2020 and IEC 62366-1: 2020, and complies with FDA guidance "Applying Human Factors and Usability Engineering to Medical Devices."
      Reported performance: "Summative Usability testing has been concluded with 16 US Clinical, 16 US Technical and 15 US Cleaning users." "follows the FDA Guidance for Industry and Food and Drug Administration Staff 'Applying Human Factors and Usability Engineering to Medical Devices'."

    • Software
      Inferred acceptance criteria: Complies with FDA software guidance documents (e.g., Content of Premarket Submissions for Software, General Principles of Software Validation, Off-The-Shelf Software Use) and software standards IEC 62304: 2015 and ISO 14971:2019, addressing patient safety, security, and privacy risks.
      Reported performance: "follows the FDA software guidance documents as outlined in this submission." "Software testing was conducted." "Software for this device is considered as a 'Major' level of concern." "Software standards IEC 62304: 2015 ... and risk management standard ISO 14971:2019 ... were also applied." "patient safety, security, and privacy risks have been addressed."

    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size: The document implies that the "test set" for performance evaluation was the device itself and its components as described ("CARESCAPE Canvas 1000, CARESCAPE Canvas Smart Display, CARESCAPE Canvas D19 and F2 Frame (F2-01)").
      • For usability testing, "16 US Clinical, 16 US Technical and 15 US Cleaning users" were involved.
    • Data Provenance: The testing described is non-clinical bench testing.
      • For usability testing, the users were located in the US.
      • No direct patient data or retrospective/prospective study data is mentioned beyond the device's inherent functional characteristics being tested according to standards.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications of Those Experts

    • Number of Experts: Not applicable in the context of establishing "ground truth" for patient data, as no clinical studies with patient data requiring expert adjudication were conducted or reported to establish substantial equivalence.
    • For usability testing, "16 US Clinical, 16 US Technical and 15 US Cleaning users" participated. Their specific qualifications (e.g., years of experience, types of healthcare professionals) are not detailed in this summary.

    4. Adjudication Method for the Test Set

    • Not applicable, as no clinical studies with patient data requiring adjudication were conducted or reported.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance

    • No MRMC study was done, as the document explicitly states: "The subjects of this premarket submission... did not require clinical studies to support substantial equivalence." The device is a patient monitor, not an AI-assisted diagnostic tool for image interpretation or similar.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    • The performance evaluations mentioned (e.g., for general device functionality, electrical safety, EMC, specific parameter measurements like ECG/arrhythmia detection) represent the device's standalone performance in a bench setting, demonstrating its adherence to established standards and specifications. There is no separate "algorithm only" performance study reported distinctly from integrated device testing. The EK-Pro V14 algorithm, which is part of the device, is noted as "identical" to the predicate, implying its performance characteristics are maintained.

    7. The Type of Ground Truth Used

    • For the non-clinical bench testing, the "ground truth" was established by conformance to internationally recognized performance and safety standards (e.g., IEC, ISO, AAMI/ANSI) and the engineering specifications of the device/predicate. These standards define the acceptable range of performance for various parameters.
    • For usability testing, the "ground truth" was the successful completion of tasks and overall user feedback/satisfaction as assessed by human factors evaluation methods.
    • No ground truth from expert consensus on patient data, pathology, or outcomes data was used, as clinical studies were not required.

    8. The Sample Size for the Training Set

    • Not applicable. This document describes a 510(k) submission for a patient monitor, not a machine learning or AI model trained on a dataset. The device contains "Platform Software that has been updated from version 3.2 to version 3.3," but this refers to traditional software development and not a machine learning model requiring a "training set" in the AI sense.

    9. How the Ground Truth for the Training Set Was Established

    • Not applicable, as there is no mention of a "training set" in the context of machine learning. The software development likely followed conventional software engineering practices, with ground truth established through design specifications, requirements, and verification/validation testing.

    K Number
    K220139
    Device Name
    QScreen
    Manufacturer
    Date Cleared
    2022-08-03

    (197 days)

    Product Code
    Regulation Number
    882.1900
    Reference & Predicate Devices
    Why did this record match?
    510k Summary Text (Full-text Search) :

    Germering, Bavaria 82110 Germany

    Re: K220139

    Trade/Device Name: QScreen Regulation Number: 21 CFR 882.1900

    Intended Use

    The QSCREEN device is a hand-held, portable hearing screener intended for recording and automated evaluation of Otoacoustic Emissions (OAE) and Auditory Brainstem Responses (ABR). Distortion Product Otoacoustic Emission (DPOAE) and Transient Evoked Otoacoustic Emission (TEOAE) tests are applicable to obtain objective evidence of peripheral auditory function. ABR tests are applicable to obtain objective evidence of peripheral and retro-cochlear auditory function including the auditory nerve and the brainstem. QSCREEN is intended to be used in subjects of all ages. It is especially indicated for use in testing individuals for whom behavioral results are deemed unreliable.

    Device Description

    The QScreen is a hand-held and portable audiometric examination device offering test methods for the measurement of Otoacoustic Emissions (OAE), including transient evoked otoacoustic emissions (TEOAE) and distortion product otoacoustic emissions (DPOAE), as well as Auditory Evoked Responses such as Auditory Brainstem Responses (ABR), in patients of all ages. It has a touch screen display and can be used with different accessories, such as its Docking Station, Ear Coupler Cable, Ear probe, Insert earphone, and Electrode cable.

    QScreen is a battery powered device which is charged by the Docking Station wirelessly and communicates with the Docking Station via Bluetooth. The Docking Station can be connected to a personal computer (PC) via USB cable on which patient and test data can be reviewed and managed with the optional software. Additionally, device and user profile configurations can be conducted with the software. Printing the data is also possible and carried out by a label printer that can be connected to the Docking Station. The QScreen device also contains a camera on the back side to read linear bar codes and QR codes. All materials that come into contact with human skin are selected to be biocompatible.

    The operating system on the QScreen is self-contained firmware. The user is guided through the measurement by the menu on the touch screen. The results are evaluated on the basis of signal statistics. The device automatically produces a result, which can have the values "PASS" (Clear Response), "REFER" (No Clear Response) or "INCOMPLETE" (Test aborted).
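The summary does not disclose which signal statistics QScreen actually uses. As one common approach in automated evoked-response screening (not necessarily PATH MEDICAL's), a PASS/REFER/INCOMPLETE decision can be sketched from a signal-to-noise estimate over averaged sweeps; the function name, threshold, and sweep count below are all hypothetical:

```python
import numpy as np

def classify_response(sweeps, snr_pass_db=6.0, min_sweeps=500):
    """Toy PASS/REFER/INCOMPLETE decision over averaged sweeps.

    The plain average of all sweeps estimates signal + residual noise;
    averaging with alternating signs cancels the repeatable response
    and estimates noise alone (the classic +/- buffer technique).
    """
    sweeps = np.asarray(sweeps, dtype=float)
    if sweeps.shape[0] < min_sweeps:
        return "INCOMPLETE"              # test aborted / too little data
    signs = np.where(np.arange(sweeps.shape[0]) % 2 == 0, 1.0, -1.0)
    signal_est = sweeps.mean(axis=0)
    noise_est = (sweeps * signs[:, None]).mean(axis=0)
    snr_db = 10.0 * np.log10(np.mean(signal_est ** 2) / np.mean(noise_est ** 2))
    return "PASS" if snr_db >= snr_pass_db else "REFER"

# Synthetic sweeps with an embedded 500 Hz response component:
rng = np.random.default_rng(1)
t = np.linspace(0.0, 0.01, 200)
response = 0.5 * np.sin(2 * np.pi * 500.0 * t)
sweeps = response + rng.standard_normal((1000, 200))
print(classify_response(sweeps))         # clear response -> "PASS"
```

A statistical decision of this kind is what allows the device to report an objective screening outcome without a clinician interpreting the raw waveform.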

    AI/ML Overview

    The provided text is a 510(k) summary for the PATH MEDICAL GmbH's QScreen device. It states that "No clinical performance data was collected for the subject device QScreen. Substantial equivalence was shown through bench testing and compliance to international standards..." As such, the document does not contain the information required to answer the prompt regarding acceptance criteria and the study that proves the device meets the acceptance criteria (specifically clinical performance data).

    Therefore, I cannot provide:

    1. A table of acceptance criteria and the reported device performance
    2. Sample size used for the test set and the data provenance
    3. Number of experts used to establish the ground truth
    4. Adjudication method
    5. If a multi reader multi case (MRMC) comparative effectiveness study was done
    6. If a standalone performance study was done
    7. The type of ground truth used
    8. The sample size for the training set
    9. How the ground truth for the training set was established

    The document explicitly states that no clinical performance data was collected to demonstrate the device meets acceptance criteria via a clinical study. Instead, substantial equivalence to a predicate device (Sentiero) was shown through:

    • Bench testing: This included tests for "frequency, timing and sound level of the stimulus as well as noise resistance and the lowest response signal detectable by the device."
    • Compliance to international standards: IEC 60645-6:2009 (OAE) and IEC 60645-7:2009 (ABR).
    • Biocompatibility testing: According to ISO 10993-1:2018 (Cytotoxicity, Sensitization, Irritation).
    • Electrical safety and electromagnetic compatibility (EMC) testing: According to IEC 60601-1:2005/AMD1:2012, IEC 60601-1-2:2014, IEC 60601-2-40: 2016.
    • Software Verification and Validation Testing: According to FDA's guidance and IEC 62304:2015.
    • Usability Testing: According to EN 62366:2015.
    • Mechanical and Acoustic Testing: Including maximum sound level, push, drop, and mould stress relief tests, and frequency content, timing, sound level, and repetition rate of stimuli.
    • Literature Review: Citing publications on ABR algorithm and automated infant screening.
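Bench checks of stimulus frequency content, like those listed above, typically compare the dominant spectral component of the recorded stimulus against its nominal frequency. A minimal sketch — the sampling rate, burst parameters, and 2% tolerance are hypothetical, not taken from the submission:

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Dominant frequency (Hz) of a recorded stimulus, taken from
    the peak of its windowed magnitude spectrum."""
    windowed = signal * np.hanning(len(signal))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[int(np.argmax(spectrum))]

# Verify a nominal 1 kHz tone burst sampled at 48 kHz:
fs = 48_000
t = np.arange(0, 0.02, 1.0 / fs)        # 20 ms burst -> 50 Hz bin spacing
burst = np.sin(2 * np.pi * 1000.0 * t)
measured = dominant_frequency(burst, fs)
print(measured)                          # ~1000 Hz
```

Timing, level, and repetition-rate checks follow the same pattern: measure the physical output with calibrated instrumentation and compare against the specification.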

    The comparison to the predicate device focuses on technical characteristics, intended use (where QScreen is a subset of Sentiero's functionality), and accessories, stating that these differences "do not raise different questions of safety or effectiveness."


    K Number
    K213181
    Date Cleared
    2022-04-13

    (196 days)

    Regulation Number
    870.1025
    Reference & Predicate Devices
    Why did this record match?
    510k Summary Text (Full-text Search) :

    , measurement, blood-pressure, noninvasive 21 CFR 870.2910 thermometer, electronic, clinical 21 CFR 882.1900

    Intended Use

    The CARESCAPE B650 is a multi-parameter patient monitor intended for use in multiple areas and intrahospital transport within a professional healthcare facility.

    The CARESCAPE B650 is intended for use on adult, pediatric, and neonatal patients and on one patient at a time. The CARESCAPE B650 is indicated for monitoring of:

    · hemodynamic (including ECG, ST segment, arrhythmia detection, ECG diagnostic and measurement, invasive pressure, non-invasive blood pressure, pulse oximetry, regional oxygen saturation, total hemoglobin concentration, cardiac output (thermodilution and pulse contour), temperature, mixed venous oxygen saturation, and central venous oxygen saturation),

    · respiratory (impedance respiration, airway gases (CO2, O2, N2O, and anesthetic agents), spirometry, gas exchange), and

    · neurophysiological status (including electroencephalography, Entropy, Bispectral Index (BIS), and neuromuscular transmission).

    The CARESCAPE B650 can be a stand-alone monitor or interfaced to other devices. It can also be connected to other monitors for remote viewing and to data management software devices via a network.

    The CARESCAPE B650 is able to detect and generate alarms for ECG arrhythmias: atrial fibrillation, accelerated ventricular rhythm, asystole, bigeminy, bradycardia, ventricular couplet, missing beat, multifocal premature ventricular contractions (PVCs), pause, R on T, supra ventricular tachycardia, trigeminy, ventricular bradycardia, ventricular fibrillation/ventricular tachycardia, ventricular tachycardia, and VT>2. The CARESCAPE B650 also shows alarms from other ECG sources.

    The CARESCAPE B650 also provides other alarms, trends, snapshots and calculations, and can be connected to displays, printers and recording devices.

    The CARESCAPE B650 is intended for use under the direct supervision of a licensed healthcare practitioner, or by personnel trained in proper use of the equipment in a professional healthcare facility.

    Contraindications for using CARESCAPE B650:

    The CARESCAPE B650 is not intended for use in a controlled MR environment.

    Device Description

    CARESCAPE B650 is a new version of a portable multi-parameter patient monitoring system. The CARESCAPE B650 includes the monitor with built-in CPU, power unit, a 15-inch touch display, the CARESCAPE Software, and the battery. CARESCAPE B650 is equipped with two module slots where patient data acquisition modules (E-Module type) can be connected to perform patient monitoring. CARESCAPE B650 is equipped with the ePort interface, which supports use of PDM or CARESCAPE ONE patient data acquisition devices. In addition to the ePort interface, the PDM module can also be connected directly to the CARESCAPE B650 via a special slide mount connector at the back of the monitor. The CARESCAPE B650 includes features and subsystems that are optional or configurable.

    AI/ML Overview

    The provided text is a 510(k) Summary for the GE Healthcare CARESCAPE B650 patient monitor. It focuses on demonstrating substantial equivalence to a predicate device, rather than presenting a detailed study of acceptance criteria and device performance. Therefore, the information requested in your prompt is largely not available within this document.

    Here's a breakdown of what can and cannot be extracted based on the provided text:

    1. A table of acceptance criteria and the reported device performance

    The document does not provide a specific table of acceptance criteria with corresponding reported device performance values in the format you requested. It states: "Bench testing related to software, hardware and performance including applicable consensus standards was conducted on the CARESCAPE B650, demonstrating the design meets the specifications." This is a general statement about testing without specific criteria or performance metrics.

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)

    This information is not provided in the document. The document mentions "Bench testing related to software, hardware and performance," but does not detail the nature of the test sets, their size, or their origin.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)

    This information is not provided. As this is a 510(k) submission for a patient monitor, the primary evidence relies on engineering and performance testing against established standards, not typically on expert consensus for "ground truth" in the way it might be for an AI diagnostic device.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set

    This information is not provided. Adjudication methods are typically relevant for studies involving human interpretation or subjective assessments, which are not detailed here.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    A multi-reader multi-case (MRMC) comparative effectiveness study was not done, and it is not applicable to this submission. The device is a patient monitor, not an AI-assisted diagnostic tool that would involve human readers. The document explicitly states: "The subject of this premarket submission, CARESCAPE B650 did not require clinical studies to support substantial equivalence."

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    The document describes "Bench testing related to software, hardware and performance" and "Software testing included software design, development, verification, validation and traceability." This implies standalone testing of the device's algorithms and functionality. However, specific details about the results of such standalone performance are not provided in a quantifiable manner against acceptance criteria.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    Given the nature of the device (a multi-parameter patient monitor), "ground truth" would likely be established through:

    • Reference measurement devices/standards: For parameters like ECG, blood pressure, oxygen saturation, temperature, etc., the device's measurements would be compared against validated reference devices or established physical standards.
    • Simulated physiological signals: For arrhythmia detection, the device would be tested with simulated ECG waveforms containing known arrhythmias.

    However, the specific types of "ground truth" used are not explicitly elaborated beyond "bench testing" and "applicable consensus standards."
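For illustration, EC57-style scoring of an arrhythmia detector against annotated reference events reduces to matched-event counting. This simplified sketch (the event times, tolerance, and function name are hypothetical) computes sensitivity and positive predictivity:

```python
def score_detector(true_events, detected_events, tolerance=0.15):
    """Simplified EC57-style event matching: a detection within
    `tolerance` seconds of an unmatched reference event counts as a
    true positive. Returns (sensitivity, positive_predictivity)."""
    matched, tp = set(), 0
    for d in detected_events:
        for i, ref in enumerate(true_events):
            if i not in matched and abs(d - ref) <= tolerance:
                matched.add(i)
                tp += 1
                break
    fn = len(true_events) - tp       # reference events never detected
    fp = len(detected_events) - tp   # detections with no reference match
    sensitivity = tp / (tp + fn) if true_events else 1.0
    ppv = tp / (tp + fp) if detected_events else 1.0
    return sensitivity, ppv

# Reference annotations (seconds) from a simulated record vs. detector output:
truth = [1.0, 2.5, 4.0, 6.2]
detections = [1.05, 2.45, 5.0, 6.15]
se, ppv = score_detector(truth, detections)
print(se, ppv)  # 0.75 0.75
```

The actual ANSI/AAMI EC57 protocol is more elaborate (beat-by-beat and episode-level statistics over standard annotated databases), but the matching logic is the same in spirit.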

    8. The sample size for the training set

    This information is not provided and is generally not applicable in the context of a patient monitor's 510(k) submission unless specific machine learning algorithms requiring training data were a novel aspect of the submission, which is not indicated here. The document describes modifications to software and hardware, implying updates to existing functionalities rather than the introduction of new, data-trained AI models.

    9. How the ground truth for the training set was established

    This information is not provided and is not applicable for the reasons stated in point 8.

