
Search Results

Found 2 results

510(k) Data Aggregation

    K Number
    K161027
    Date Cleared
    2016-11-08

    (210 days)

    Product Code
    Regulation Number
    882.1400
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices:

    K031149

    Intended Use
    1. Cadwell AmpliScan is a software-only device indicated for use with electroencephalographic (EEG) data from Cadwell Arc application software. Cadwell AmpliScan is distributed solely for use with Cadwell Arc software.
    2. The Cadwell AmpliScan device is for prescription use only by qualified medical practitioners, trained in Electroencephalography, who will exercise professional judgement when using the information.
    3. This device does not provide any diagnostic conclusion about the patient's condition to the user.
    4. Cadwell AmpliScan uses electroencephalographic (EEG) data to calculate and display a quantitative aEEG measure. This quantitative measure should always be interpreted by the user in conjunction with review of the original EEG waveforms. The aEEG quantitative measure of Cadwell AmpliScan is intended to monitor the state of the brain.
    Device Description

    Cadwell AmpliScan is a software-only device distributed solely for use with the application software commonly known and marketed as Cadwell Arc software. Cadwell AmpliScan software is installed as part of the Arc software installation and does not require installation or removal separate from the Arc application. The Cadwell AmpliScan software-only device applies the Amplitude-Integrated EEG (aEEG) algorithm referred to as the Cerebral Function Monitor (CFM) to stored data within the Cadwell Arc software, and stores and displays the results.
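
    The CFM/aEEG method referenced here is well described in the literature: the EEG is band-pass filtered, rectified, its amplitude envelope is smoothed, and per-window lower and upper envelope amplitudes are displayed on a semi-logarithmic scale. The sketch below illustrates that general pipeline only; the filter design, window length, and percentile margins are assumptions for illustration and are not Cadwell AmpliScan's actual parameters.

```python
# Illustrative aEEG-style amplitude trace, showing the general CFM approach.
# Filter design, window length, and percentile margins are assumptions here,
# not Cadwell AmpliScan's actual parameters.
import numpy as np
from scipy.signal import butter, filtfilt

def aeeg_trace(eeg_uv, fs, win_sec=15.0):
    """Return per-window (lower, upper) aEEG margin amplitudes in microvolts."""
    # Band-pass filter (~2-15 Hz). The classic CFM filter also applies an
    # asymmetric gain across this band, which this sketch omits.
    b, a = butter(4, [2.0, 15.0], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, eeg_uv)

    # Rectify to obtain an amplitude envelope.
    envelope = np.abs(filtered)

    # Per-window lower/upper margins of the envelope (the aEEG band edges).
    n = int(win_sec * fs)
    lowers, uppers = [], []
    for start in range(0, len(envelope) - n + 1, n):
        seg = envelope[start:start + n]
        lowers.append(np.percentile(seg, 10))
        uppers.append(np.percentile(seg, 90))
    return np.array(lowers), np.array(uppers)

# Display conventionally uses a linear axis from 0-10 µV and a logarithmic
# axis above 10 µV (the standard semi-log aEEG scale).
```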

    AI/ML Overview

    The provided text appears to be a 510(k) summary for the Cadwell AmpliScan, a software-only device. The document focuses on demonstrating substantial equivalence to a predicate device rather than presenting a detailed study with specific acceptance criteria and performance metrics, as one might find in a clinical trial report for a novel device.

    Therefore, the information requested by the user, particularly regarding acceptance criteria, sample sizes, expert qualifications, and detailed performance metrics, is not explicitly provided in the document in the format anticipated for a standalone clinical study. The document focuses on comparing the Cadwell AmpliScan to a predicate device based on technological characteristics and software verification/validation.

    Here's a breakdown of what can be extracted based on the provided text, and what cannot:

    1. Table of Acceptance Criteria and Reported Device Performance:

    The document does not explicitly state acceptance criteria or quantify device performance in terms of metrics like sensitivity, specificity, accuracy, or similar measures commonly found in a study proving a device meets specific criteria. Instead, it relies on demonstrating substantial equivalence to a predicate device.

    The "Performance Data" section states: "In the Substantial Equivalence Discussion, a comparison of outputs from Cadwell AmpliScan and the predicate with like input data demonstrate the resulting equivalence of analysis and display." This implies that the 'acceptance criterion' was that the output of Cadwell AmpliScan should be equivalent to that of the predicate device when given the same input data.

    Acceptance Criterion (Implied) | Reported Device Performance
    Outputs are equivalent to the predicate device for like input data. | "Comparison of outputs from Cadwell AmpliScan and the predicate with like input data demonstrate the resulting equivalence of analysis and display."
    Software Verification and Validation conducted as per FDA guidance. | "Software Verification and Validation Testing were conducted and documentation was provided as recommended by FDA's Guidance for Industry and FDA staff."
    No new issues of safety or effectiveness introduced by differences. | "No new issues of safety or effectiveness are introduced by the differences."
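
    As a rough illustration of what such an output-equivalence check can look like (the comparison tolerance and data handling here are assumptions; the submission does not describe the actual test harness):

```python
# Hedged sketch of an output-equivalence check between a new implementation
# and a predicate device, given identical input data. Tolerances are
# illustrative assumptions only.
import numpy as np

def outputs_equivalent(new_out, predicate_out, rtol=1e-6, atol=0.01):
    """Compare two aEEG output arrays produced from the same input data."""
    new_out = np.asarray(new_out, dtype=float)
    predicate_out = np.asarray(predicate_out, dtype=float)
    if new_out.shape != predicate_out.shape:
        return False
    return bool(np.allclose(new_out, predicate_out, rtol=rtol, atol=atol))

# Example: identical traces pass, a shifted trace fails.
trace = np.linspace(5.0, 25.0, 100)
assert outputs_equivalent(trace, trace.copy())
assert not outputs_equivalent(trace, trace + 1.0)
```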

    2. Sample Size Used for the Test Set and Data Provenance:

    The document does not specify a "test set" in terms of patient data. The evaluation was likely performed on various EEG data files (the "like input data") to compare outputs, but the number of such files and their origin (country, retrospective or prospective collection) are not mentioned. Given that the device is software processing previously recorded data, the evaluation data would inherently be retrospective.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications:

    The concept of "ground truth" established by experts for a test set is not discussed in this document. The evaluation was focused on the software's ability to produce equivalent outputs to an existing, legally marketed device (the predicate). The assessment of equivalence typically involves technical comparisons of algorithm implementation and output, not expert clinical interpretation of novel results.

    4. Adjudication Method for the Test Set:

    Not applicable, as no expert-adjudicated test set is described.

    5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study was done:

    No, an MRMC study was not done. The document does not mention human readers or AI assistance in a comparative effectiveness study. The device is software that calculates and displays a quantitative aEEG measure, not an AI to assist human readers directly in diagnosis.

    6. If a Standalone Evaluation (i.e., algorithm only, without human-in-the-loop performance) was done:

    Yes, this was a standalone (algorithm-only) evaluation. The device itself is "software-only" and its performance was assessed by comparing its outputs to those of a predicate device. The document explicitly states, "Cadwell AmpliScan uses electroencephalographic (EEG) data to calculate and display a quantitative aEEG measure." It also clarifies that "This device does not provide any diagnostic conclusion about the patient's condition to the user" and that "This quantitative measure should always be interpreted by the user in conjunction with review of the original EEG waveforms." This indicates that the device operates as an algorithm generating a measure, with interpretation left to a qualified medical practitioner.

    7. The Type of Ground Truth Used:

    The "ground truth" in this context is the output generated by the predicate device for the same input data, as the study aims to show equivalence. The document states a "comparison of outputs from Cadwell AmpliScan and the predicate with like input data demonstrate the resulting equivalence of analysis and display." This implies the predicate's output served as the reference for equivalence.

    8. The Sample Size for the Training Set:

    The document does not mention any "training set." This type of 510(k) submission, particularly for a device implementing a known algorithm (Cerebral Function Monitor/CFM), typically doesn't involve machine learning training on a large dataset. The substantial equivalence is based on the algorithm's implementation matching that of the predicate.

    9. How the Ground Truth for the Training Set was Established:

    Not applicable, as no training set or ground truth for a training set is mentioned.


    K Number
    K152301
    Date Cleared
    2016-06-03

    (294 days)

    Product Code
    Regulation Number
    882.1400
    Reference & Predicate Devices
    Why did this record match?
    Reference Devices:

    K071449, K031149, K791580, K983229, K031149

    Intended Use

    The Background Pattern Classification algorithm is intended for:
    • Neonatal patients, defined as from birth to 28 days post-delivery, and corresponding to a post-conceptual age of 37 to 46 weeks, in clinical environments such as the intensive care unit, operating room, and for clinical research.
    • To analyze and identify background patterns in aEEG, including continuous and discontinuous activity, burst suppression, low voltage, and inactive patterns. The aEEG must be obtained from a pair of parietal electrodes located at positions corresponding with P3 and P4 of the International 10/20 System. The background pattern classification algorithm must be reviewed and interpreted by qualified clinical practitioners.
    The device does not provide any diagnostic conclusion about the patient's condition.

    Device Description

    BPc™ is a software-only product that identifies background patterns seen on the aEEG signal recorded from a pair of parietal electrodes (P3-P4) in neonates, defined as from birth to 28 days post-delivery and corresponding to a post-conceptual age of 37 to 46 weeks. The classification of the aEEG background pattern into one of five classes is done in accordance with the scoring scheme described below:

    1. Continuous (C): Continuous activity with lower (minimum) amplitude around (5 to) 7 to 10 µV and maximum amplitude of 10 to 25 (to 50) µV.
    2. Discontinuous (DC): Discontinuous background with minimum amplitude variable, but below 5 µV, and maximum amplitude above 10 µV.
    3. Burst-suppression (BS): Discontinuous background with minimum amplitude without variability at 0 to 1 (2) µV and bursts with amplitude >25 µV. BS+ denotes burst density >100 bursts/h, and BS- means burst density <100 bursts/h.
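
    The scoring scheme above is essentially a set of amplitude-threshold rules. The sketch below encodes only the Continuous and Discontinuous thresholds quoted above and leaves the remaining classes as a placeholder; it is an illustration of the rule structure, not the BPc™ algorithm itself.

```python
# Illustrative amplitude-threshold rules for aEEG background classes.
# Encodes only the Continuous and Discontinuous thresholds quoted above;
# the remaining classes are placeholders, and this is not BPc's algorithm.
def classify_background(min_uv: float, max_uv: float) -> str:
    """Classify an aEEG epoch from its lower/upper margin amplitudes (µV)."""
    if min_uv >= 5.0 and 10.0 <= max_uv <= 50.0:
        return "Continuous (C)"
    if min_uv < 5.0 and max_uv > 10.0:
        return "Discontinuous (DC)"
    # Burst-suppression, low-voltage, and inactive/flat rules would also
    # consider burst density and near-isoelectric minima, which this
    # placeholder does not model.
    return "Other (BS / LV / FT)"

print(classify_background(min_uv=8.0, max_uv=20.0))  # Continuous (C)
print(classify_background(min_uv=3.0, max_uv=15.0))  # Discontinuous (DC)
```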
    AI/ML Overview

    1. Acceptance Criteria and Reported Device Performance

    Acceptance Criteria Category | Acceptance Criteria (Implicit) | Reported Device Performance (BPc™ Algorithm)
    Positive Percent Agreement (PPA) | Not explicitly stated but inferred from comparison to inter-rater performance | Overall PPA: 77% (95% CI: 72 – 82)
    False Detection Rate (FDR) | Not explicitly stated but inferred from comparison to inter-rater performance | Overall FDR: 2.5 FD/hr (95% CI: 1.6 – 3.5)

    Detailed PPA and FDR by Pattern:

    Pattern | Reported PPA (%) (95% CI) | Reported FDR (FD/hr) (95% CI)
    Continuous (C) | 86 (77 - 94) | 0.3 (0.1 - 0.7)
    Discontinuous (D) | 64 (51 - 77) | 0.1 (0.1 - 0.3)
    Burst-suppression (BS) | 89 (78 - 99) | 4.4 (1.5 - 5.0)
    Low Voltage (LV) | 66 (50 - 83) | 4.2 (2.3 - 4.8)
    Inactive, flat (FT) | 80 (63 - 96) | 4.2 (1.2 - 4.8)
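
    For context, PPA and FDR figures of this kind are typically computed from event- or epoch-level agreement counts against the gold standard. The sketch below shows one conventional way to compute them, with a Wilson score interval for the 95% CI; the counts, the event-matching logic, and the interval method are assumptions for illustration, not the study's actual statistical procedure.

```python
# Hedged sketch of PPA and FDR computation from event-level counts.
# The Wilson interval is one common CI choice; the submission does not
# state which method was actually used.
from math import sqrt

def ppa(true_positives: int, false_negatives: int) -> float:
    """Positive percent agreement: TP / (TP + FN), in percent."""
    return 100.0 * true_positives / (true_positives + false_negatives)

def fdr_per_hour(false_detections: int, record_hours: float) -> float:
    """False detection rate expressed as false detections per hour."""
    return false_detections / record_hours

def wilson_ci(successes: int, total: int, z: float = 1.96):
    """95% Wilson score interval for a proportion (returned in percent)."""
    p = successes / total
    denom = 1.0 + z**2 / total
    center = (p + z**2 / (2 * total)) / denom
    half = z * sqrt(p * (1 - p) / total + z**2 / (4 * total**2)) / denom
    return 100.0 * (center - half), 100.0 * (center + half)

# Example with made-up counts (not the study's data):
print(ppa(true_positives=77, false_negatives=23))          # 77.0
print(wilson_ci(successes=77, total=100))                  # roughly (67.8, 84.2)
print(fdr_per_hour(false_detections=25, record_hours=10))  # 2.5
```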

    2. Sample Size for the Test Set and Data Provenance

    • Sample Size: Not explicitly stated, but it can be derived from the information on the "EEG studies" used for clinical validation. Given the reported gender distribution (36 female / 28 male), the test set comprised 64 patients/EEG studies.
    • Data Provenance: Not explicitly stated, but the study was conducted by Natus Medical Incorporated in Canada, suggesting the data may be from Canada or a similar clinical environment. The study is retrospective, as it uses de-identified, randomized EEG studies that were provided to experts.

    3. Number of Experts and Qualifications

    • Number of Experts: 3
    • Qualifications of Experts: "board certified neurophysiologists"

    4. Adjudication Method for the Test Set

    The adjudication method was not a formal "2+1" or "3+1" scheme; a consensus-based ground-truth methodology was used instead. A panel of 3 EEG board certified medical professionals independently, blindly, and manually marked background pattern states, and the "gold standard" for comparison was defined as the "background pattern as classified by a panel of 3 EEG board certified medical professionals." The exact mechanism by which the three independent markings were combined into the gold standard (e.g., majority vote or discussion to consensus) is not detailed. The results also report "Inter Rater Performance" for each reviewer against the collective gold standard (likely the consensus or majority of the other two reviewers, though this is not explicitly stated). One plausible consensus rule is sketched below.
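
    As an illustration only (the submission does not state the actual consensus mechanism), a per-epoch majority vote across the three raters is one common way such a panel ground truth is formed:

```python
# Illustrative per-epoch majority vote across three independent raters.
# The submission does not state whether BPc's gold standard was formed this
# way; this is only one plausible consensus rule.
from collections import Counter
from typing import List, Optional

def majority_vote(labels_per_rater: List[List[str]]) -> List[Optional[str]]:
    """Return the majority label per epoch, or None when all raters disagree."""
    consensus = []
    for epoch_labels in zip(*labels_per_rater):
        label, count = Counter(epoch_labels).most_common(1)[0]
        consensus.append(label if count >= 2 else None)  # 2-of-3 agreement
    return consensus

rater_a = ["C", "DC", "BS", "LV"]
rater_b = ["C", "DC", "LV", "LV"]
rater_c = ["C", "BS", "FT", "FT"]
print(majority_vote([rater_a, rater_b, rater_c]))
# ['C', 'DC', None, 'LV']
```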

    5. Multi Reader Multi Case (MRMC) Comparative Effectiveness Study

    No MRMC comparative effectiveness study was done involving human readers with and without AI assistance. The study focuses purely on the standalone performance of the AI algorithm against a "gold standard" established by human experts, and it additionally reports inter-rater variability among those experts.

    6. Standalone (Algorithm Only) Performance

    Yes, a standalone performance study was done. The "Algorithm Performance Comparison" table directly reports the diagnostic performance (PPA and FDR) of the BPc™ algorithm when compared to the "gold standard" established by the panel of experts.

    7. Type of Ground Truth Used

    The type of ground truth used was expert consensus. It was established by "a panel of 3 EEG board certified medical professionals" who independently, blindly, and manually marked background pattern states.

    8. Sample Size for the Training Set

    The sample size for the training set is not provided in the document. The document describes the clinical validation dataset (test set) but provides no information about the dataset used to train the algorithm.

    9. How the Ground Truth for the Training Set Was Established

    As the sample size and nature of the training set are not provided, how the ground truth for the training set was established is also not detailed in the document.

