510(k) Data Aggregation

    K Number
    DEN150013
    Date Cleared
    2015-10-08

    (182 days)

    Product Code
    Regulation Number
    866.3970
    Type
    Direct
    Reference & Predicate Devices
    N/A
    Predicate For
    N/A
    Intended Use

    The FilmArray Meningitis/Encephalitis (ME) Panel is a qualitative multiplexed nucleic acid-based in vitro diagnostic test intended for use with FilmArray and FilmArray 2.0 systems. The FilmArray ME Panel is capable of simultaneous detection and identification of multiple bacterial, viral, and yeast nucleic acids directly from cerebrospinal fluid (CSF) specimens obtained via lumbar puncture from individuals with signs and/or symptoms of meningitis and/or encephalitis. The following organisms are identified using the FilmArray ME Panel:

    Bacteria: Escherichia coli K1, Haemophilus influenzae, Listeria monocytogenes, Neisseria meningitidis (encapsulated), Streptococcus agalactiae, Streptococcus pneumoniae

    Viruses: Cytomegalovirus, Enterovirus, Herpes simplex virus 1, Herpes simplex virus 2, Human herpesvirus 6, Human parechovirus, Varicella zoster virus

    Yeast: Cryptococcus neoformans/gattii

    The FilmArray ME Panel is indicated as an aid in the diagnosis of specific agents of meningitis and/or encephalitis and results are meant to be used in conjunction with other clinical, epidemiological, and laboratory data.

    Results from the FilmArray ME Panel are not intended to be used as the sole basis for diagnosis, treatment, or other patient management decisions. Positive results do not rule out co-infection with organisms not included in the FilmArray ME Panel. The agent detected may not be the definite cause of the disease. Negative results do not preclude central nervous system (CNS) infection. Not all agents of CNS infection are detected by this test and sensitivity in clinical use may differ from that described in the package insert.

    The FilmArray ME Panel is not intended for testing of specimens collected from indwelling CNS medical devices.

    The FilmArray ME Panel is intended to be used in conjunction with standard of care culture for organism recovery, serotyping, and antimicrobial susceptibility testing.

    Device Description

    The FilmArray ME Panel is a multiplex nucleic acid-based test designed to be used with the FilmArray or FilmArray 2.0 system ("FilmArray systems" or "FilmArray instruments"). The FilmArray ME Panel includes a FilmArray ME Panel pouch, which contains freeze-dried reagents to perform nucleic acid purification and nested, multiplex polymerase chain reaction (PCR) with DNA melt analysis. The FilmArray ME Panel simultaneously conducts 14 tests for the identification of potential CNS pathogens from CSF specimens obtained via lumbar puncture. Results from the FilmArray ME Panel are available within about one hour.

    A test is initiated by loading Hydration Solution into one port of the pouch, loading a CSF sample mixed with the provided Sample Buffer into the other port, and placing the pouch in the FilmArray instrument. The pouch contains all of the reagents required for specimen testing and analysis in a freeze-dried format; the addition of Hydration Solution and Sample Buffer rehydrates the reagents. After the pouch is prepared, the FilmArray Software guides the user through the steps of placing the pouch in the instrument, scanning the pouch barcode, entering the sample identification, and initiating the run on the FilmArray systems.

    The FilmArray instruments contain a coordinated system of inflatable bladders and seal points that act on the pouch to control the movement of liquid between the pouch blisters. When a bladder is inflated over a reagent blister, it forces liquid from the blister into connecting channels; when a seal is placed over a connecting channel, it acts as a valve that opens or closes that channel.

    Nucleic acid extraction occurs within the pouch using mechanical and chemical lysis followed by purification using standard magnetic bead technology. After extracting and purifying nucleic acids from the unprocessed sample, a nested multiplex PCR is executed in two stages. The solution is then distributed to each well of the array. Array wells contain sets of primers designed specifically to amplify sequences internal to the PCR products generated during the first stage PCR reaction. The 2nd stage PCR, or nested PCR, is performed in each well of the array. At the conclusion of the 2nd stage PCR, the array is interrogated by melt curve analysis for the detection of signature amplicons denoting the presence of specific targets. A digital camera placed in front of the array captures fluorescent images of the PCR2 reactions and software interprets the data.

    The FilmArray software automatically interprets the results of each DNA melt curve analysis and combines the data with the results of the internal pouch controls to provide a test result for each organism on the panel.
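
    To make the automated interpretation step concrete, the sketch below shows one way such melt-curve calling logic can be structured. It is a minimal illustration in Python, not BioFire's actual algorithm: the organism names, Tm windows, and control-handling rule shown here are all hypothetical.

```python
from dataclasses import dataclass

# Hypothetical expected melt-temperature (Tm) windows per assay, in degrees C.
# The real FilmArray melt ranges are proprietary and were established from
# sequence modeling plus empirical testing of isolates and clinical specimens.
EXPECTED_TM_RANGES = {
    "HSV-1": (78.0, 81.0),
    "Enterovirus": (82.5, 85.5),
}

@dataclass
class MeltResult:
    target: str           # assay / organism name
    peak_detected: bool   # was a melt peak found in this array well?
    tm_observed: float    # melt temperature of the amplicon, degrees C

def call_target(result: MeltResult, controls_passed: bool) -> str:
    """Combine one well's melt-curve result with internal pouch control status."""
    if not controls_passed:
        return "Invalid"   # a control failure invalidates the organism result
    low, high = EXPECTED_TM_RANGES[result.target]
    if result.peak_detected and low <= result.tm_observed <= high:
        return "Detected"  # peak present and Tm falls inside the expected window
    return "Not Detected"

# Example: an HSV-1 well with a peak at 79.6 C and passing internal controls
print(call_target(MeltResult("HSV-1", True, 79.6), controls_passed=True))
```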

    AI/ML Overview

    This document describes the evaluation of the FilmArray Meningitis/Encephalitis (ME) Panel for a De Novo classification. It includes information on analytical and clinical studies to demonstrate the device's performance.

    Here's the breakdown of the acceptance criteria and study information:

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided text does not state strict numerical acceptance thresholds for every performance metric. Instead, criteria are implied through the detailed reporting of study results and statements such as "The overall success rate for initial specimen tests in the prospective study was 98.9%" and "Overall PPA for clinical and contrived specimens combined was 97.5% with the lower bound of the two-sided 95% confidence interval (95% CI) at 92.9%, and overall NPA was 99.7% with the lower bound of the two-sided 95% CI at 99.3%." For the analytical studies, explicit acceptance criteria are cited, such as "a minimum of 9/10 replicates detected" for specimen stability and "Delta Tm values between the two systems were less than or equal to 0.4℃... passed the study acceptance criteria of less than 0.5℃."
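
    As an illustration of how such agreement statistics and their 95% CI lower bounds are typically computed (the document does not specify the interval method used in the submission; a two-sided Wilson score interval is assumed here, and the counts below are hypothetical, not the study's data), a minimal Python sketch:

```python
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Two-sided 95% Wilson score confidence interval for a proportion."""
    p = successes / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    margin = (z / denom) * sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return center - margin, center + margin

# Hypothetical 2x2 counts (not the study's actual data):
# PPA = TP / (TP + FN), NPA = TN / (TN + FP)
tp, fn = 195, 5
tn, fp = 1540, 5
ppa, ppa_lo = tp / (tp + fn), wilson_ci(tp, tp + fn)[0]
npa, npa_lo = tn / (tn + fp), wilson_ci(tn, tn + fp)[0]
print(f"PPA = {ppa:.1%} (95% CI lower bound {ppa_lo:.1%})")
print(f"NPA = {npa:.1%} (95% CI lower bound {npa_lo:.1%})")
```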

    For the purpose of this table, I will use the clinical performance reported as the "device performance" and extract the implied acceptance criteria where possible.

    | Metric (Implied Acceptance Criteria) | Device Performance (FilmArray ME Panel) - Overall | Comments |
    | --- | --- | --- |
    | Clinical Performance (Prospective Study) | | |
    | Overall Positive Percent Agreement (PPA), all analytes, clinical and contrived specimens combined | 97.5% (95% CI: 92.9-99.2%) | Based on comparison to comparator methods (culture for bacteria; PCR with bi-directional sequencing for viruses/yeast). Combines clinical and contrived specimens from the comparison studies. |
    | Overall Negative Percent Agreement (NPA), all analytes, clinical and contrived specimens combined | 99.7% (95% CI: 99.3-99.9%) | Based on comparison to comparator methods. |
    | Analytical Reproducibility (FilmArray System) | | |
    | Initially valid runs | 98.4% (360/366) | |
    | Agreement with expected results (at 1x LoD for various organisms) | 86.7% to 100% | Specific criteria often stated as a minimum of 9/10 replicates detected for analytical studies. |
    | Tm standard deviations (positive results) | ≤ 0.5℃ for all analytes | Meets the internal acceptance criterion of ≤ 0.5℃. |
    | Analytical Reproducibility (FilmArray 2.0 System) | | |
    | Initially valid runs | 98.6% (360/365) | |
    | Agreement with expected results (at 1x LoD for various organisms) | 93.3% to 100% | Specific criteria often stated as a minimum of 9/10 replicates detected for analytical studies. |
    | Tm standard deviations (positive results) | ≤ 0.5℃ for all analytes | Meets the internal acceptance criterion of ≤ 0.5℃. |
    | Analytical Limit of Detection (LoD) | | |
    | Detection at the LoD concentration | Mostly 20/20 (100%), some 19/20 (95%) | The LoD was confirmed for all analytes on both systems, indicating successful detection at the established LoD. |
    | Analytical Inclusivity | | |
    | Detection of tested isolates at 1x to 3x LoD | Majority of results positive at the specified concentrations | One HPeV strain was detected at 10x LoD; in silico analysis was used for less common strains. |
    | Specimen Stability | | |
    | Detection for stored samples (e.g., 1 day ambient; 1, 3, or 7 days refrigerated) | 9/10 or 10/10 replicates detected | Study acceptance criterion: a minimum of 9/10 replicates detected. |
    | Mean Cp values (control vs. stored samples) | Consistent (within expected system variability) | |
    | Clinical Comparison (FilmArray vs. FilmArray 2.0) | | |
    | Delta Tm values between the two systems | ≤ 0.4℃ for all analytes | Passed the study acceptance criterion of < 0.5℃. |
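
    The replicate-level acceptance checks cited in the table (a minimum of 9/10 replicates detected, and Tm standard deviations of 0.5℃ or less) reduce to simple summaries over replicate data. A small Python sketch with made-up replicate values, purely to illustrate the arithmetic:

```python
from statistics import stdev

def detection_rate_passes(detected: list[bool], minimum: int = 9) -> bool:
    """Acceptance check: at least 9 of the replicates were detected."""
    return sum(detected) >= minimum

def tm_sd_passes(tm_values: list[float], max_sd: float = 0.5) -> bool:
    """Acceptance check: standard deviation of observed Tm values is <= 0.5 C."""
    return stdev(tm_values) <= max_sd

# Hypothetical data for one analyte tested in 10 replicates at 1x LoD
detected = [True] * 9 + [False]                                   # 9/10 detected -> passes
tms = [79.4, 79.6, 79.5, 79.7, 79.5, 79.6, 79.4, 79.5, 79.6]      # Tm of detected replicates
print(detection_rate_passes(detected), tm_sd_passes(tms))
```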

    2. Sample Size Used for the Test Set and Data Provenance

    Test Set (Clinical Studies):

    • Prospective Clinical Study: 1560 specimens.
      • Data Provenance: From 11 geographically distinct U.S. study sites. Primarily tested fresh (1015 specimens, 65%) with a portion collected and immediately frozen for later testing (545 specimens, 35%).
    • Preselected Archived Specimens: 235 specimens (210 positive, 25 negative).
      • Data Provenance: Archived clinical specimens.
    • Contrived Specimens (for clinical performance evaluation): Not a single test set, but for analytes with insufficient prevalence in prospective/archived studies, surrogate CSF specimens were created. At least 25 specimens per analyte, spiked at 2x LoD or other concentrations.
      • Data Provenance: Residual CSF specimens that previously tested negative for ME panel analytes. These were prepared, frozen, and distributed to prospective clinical study sites for testing.
    • Clinical Comparison between FilmArray and FilmArray 2.0 Systems: 149 specimens (21 positive clinical specimens + 128 contrived specimens).
      • Data Provenance: Residual, de-identified CSF specimens and contrived CSF specimens. Clinical specimens identified as positive at source laboratories or by culture/PCR comparator methods. Contrived specimens prepared in leftover negative CSF.

    Training Set (Analytical Studies):

    The document does not explicitly delineate a "training set" in the context of machine learning. However, many of the analytical studies (e.g., LoD, inclusivity, cross-reactivity, interfering substances) involve testing a wide range of strains and conditions to characterize the device's analytical performance, which serves as foundational data for the device's algorithms and performance specifications. The "initial melt ranges" for the assay's algorithm were determined based on a "combination of mathematical modeling using known sequence variations... as well as data from testing of clinical specimens and known isolates." This implies an initial dataset used for establishing the algorithm's parameters, which could be considered an internal "training" or development set.
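
    As a purely illustrative sketch of how an expected melt range could be derived from such reference data (the document does not describe BioFire's actual procedure, and the Tm values below are invented), one simple approach is a mean-plus-tolerance window over modeled and observed Tm values:

```python
from statistics import mean, stdev

def derive_melt_range(reference_tms: list[float], k: float = 3.0) -> tuple[float, float]:
    """Expected Tm window as mean +/- k standard deviations of reference Tm data."""
    mu, sd = mean(reference_tms), stdev(reference_tms)
    return round(mu - k * sd, 1), round(mu + k * sd, 1)

# Hypothetical Tm values from known isolates and modeled sequence variants of one target
reference_tms = [79.2, 79.4, 79.5, 79.6, 79.7, 79.8, 80.1]
print(derive_melt_range(reference_tms))  # prints a (low, high) Tm window, e.g. (78.7, 80.5)
```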


    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    The document focuses on comparator methods rather than expert consensus for establishing ground truth, especially for the clinical studies.

    • Prospective Clinical Study:
      • Comparator Method for Bacteria (E. coli K1, H. influenzae, L. monocytogenes, N. meningitidis, S. agalactiae, S. pneumoniae): CSF bacterial culture (performed at the source laboratory).
      • Comparator Method for Viruses/Yeast (CMV, EV, HSV-1, HSV-2, HHV-6, HPeV, VZV, C. neoformans/gattii): Two PCR assays with bi-directional sequencing (performed at BioFire Laboratory).
      • Qualifications of Experts for Comparator Methods: Not explicitly stated, but "source laboratories" imply trained laboratory personnel, and "BioFire Laboratory" implies internal molecular diagnostics experts. The comparator methods themselves are "well-accepted comparator methods."
    • Preselected Archived Specimens: The presence (or absence) of expected analytes was verified in each specimen using a "confirmatory molecular test (e.g., PCR with bi-directional sequencing)." This suggests the ground truth was established by laboratory-based molecular methods.
    • Contrived Specimens: Ground truth is established by the known composition of the spiked organisms and their concentrations.
    • For algorithm-specific validation of melt ranges: "expert annotation" was mentioned, but the number and qualifications of these experts are not specified.

    4. Adjudication Method for the Test Set

    The primary method for establishing ground truth for individual results in the clinical studies was objective laboratory comparator methods (culture or PCR with bi-directional sequencing).

    • There is no explicit mention of an "adjudication method" involving multiple human readers to resolve discrepancies between the device and comparator results.
    • However, for certain discrepancies (e.g., false positives for S. pneumoniae, H. influenzae, CMV, EV, HSV-1, HSV-2, HHV-6, HPeV, VZV, and C. neoformans/gattii), further investigations were conducted. These included:
      • Re-testing with independent PCR assays.
      • Review of de-identified subject medical data (e.g., clinical diagnoses, treatment history, CSF pleocytosis).
      • Comparison with other relevant tests (e.g., Cryptococcal Antigen testing).
    These further investigations serve as a form of retrospective adjudication to understand the true clinical status or the reason for the discrepancy, rather than a prospective consensus-based change to the initial ground truth.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done

    No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done.

    This document describes the evaluation of an in vitro diagnostic device (IVD), which is an automated molecular diagnostic test. Such devices are typically evaluated for their standalone performance against established laboratory comparator methods, or for equivalence between different systems/platforms of the same device. MRMC studies are generally relevant for diagnostic imaging interpretation or other tasks where human readers interpret data, often with and without AI assistance, to measure the impact of AI on human performance. The FilmArray ME Panel is an automated assay, not an AI interpretation tool for human readers.


    6. If a Standalone (i.e., algorithm-only, without human-in-the-loop) Performance Evaluation was done

    Yes, a standalone performance evaluation of the algorithm and device was done.

    The entire analytical and clinical study described in the document evaluates the FilmArray ME Panel as an automated, standalone diagnostic device. The device's software automatically interprets the results of the DNA melt curve analysis and generates a test report without physician intervention or interpretation of raw data. The physician then uses this report in conjunction with other clinical data.

    • The reproducibility studies, limit of detection, inclusivity, cross-reactivity, interfering substances, and specimen stability studies all assess the device's intrinsic analytical performance.
    • The clinical studies compare the device's output directly to established comparator methods (culture and PCR with bi-directional sequencing).
    • The comparison study between FilmArray and FilmArray 2.0 systems also evaluates the standalone agreement between the two versions of the automated device.

    7. The Type of Ground Truth Used

    The ground truth for the device's performance evaluation was established using a combination of methods:

    • Comparator Laboratory Methods:
      • CSF bacterial culture: For bacterial analytes (considered the "gold standard" for these).
      • Two PCR assays with bi-directional sequencing: For viral and yeast analytes. These assays targeted different nucleic acid sequences than the FilmArray ME Panel to ensure independent confirmation.
    • Known Spiked Concentrations: For all contrived specimens used in analytical studies (e.g., LoD, inclusivity, competitive inhibition, interfering substances, specimen stability, matrix equivalence, and a portion of the clinical performance evaluation for rare analytes). Dilutions of known organisms were added to artificial CSF or negative clinical CSF.
    • Historical/Prior Laboratory Testing: For preselected archived specimens, ground truth was based on their previously confirmed positive status for specific analytes, often verified by confirmatory molecular tests prior to FilmArray testing.

    8. The Sample Size for the Training Set

    As mentioned in point 2, the document does not refer to explicit "training sets" in the context of typical machine learning. However, the system's "melt ranges" (a core component of its interpretive algorithm) were established using:

    • "mathematical modeling using known sequence variations of different strains/isolates/variants of targeted organisms"
    • "data from testing of clinical specimens and known isolates."

    The sample size for this initial establishment (which functions like a training phase for the algorithm) is not quantified as a single number in the document; it reflects an iterative process of data collection and refinement based on analytical and early clinical performance data.


    9. How the Ground Truth for the Training Set Was Established

    Similarly, for the purposes of algorithm development and initial melt range establishment:

    • Known Sequence Variations: For mathematical modeling, genetic sequences of relevant organisms and their variants are used, which are inherently "ground truth" derived from genetic sequencing databases.
    • Known Isolates: Lab-characterized organisms with confirmed identity and concentration are used.
    • Early Clinical Specimens: For these, the ground truth would have been established using comparator laboratory methods as described in item 7.

    The "final validation of the melt ranges was performed and included review of data from the Inclusivity study and clinical studies," indicating that these studies further refined and confirmed the algorithm's parameters against ground truth established by comparator methods and known spiked samples.
