510(k) Data Aggregation
The Simplexa C. auris Direct is a real-time polymerase chain reaction (RT-PCR) assay intended for use on the LIAISON MDX instrument for the direct in vitro qualitative detection of Candida auris DNA from a composite swab of bilateral axilla/groin from patients suspected of C. auris colonization.
The test is intended to aid in the prevention and control of C. auris infection in healthcare settings by detecting C. auris from colonized patients.
Positive results indicate that the patient is colonized with C. auris. A positive result cannot rule out co-colonization with other pathogens. A negative result does not preclude C. auris colonization or infection and should not be used as the sole basis for treatment or other patient management decisions. Results are meant to be used in conjunction with other clinical, epidemiologic, and laboratory information available to the clinician evaluating the patient. The test is not intended to diagnose or monitor treatment for C. auris infection. Concomitant cultures are necessary to recover organisms for epidemiological typing or for antimicrobial susceptibility testing.
The Simplexa C. auris RT-PCR system is intended for the amplification and qualitative detection of nucleic acid from Candida auris in composite bilateral axilla/groin swab specimens and consists of the following:
- The Simplexa C. auris Direct is the RT-PCR assay kit that contains all the reagents for the amplification reaction, including the primers and fluorescent probes for the detection of nucleic acid from Candida auris. The primers and fluorescent probes amplify and detect the C. auris DNA and the Internal Control DNA. In addition, the kit includes a barcode card that contains assay-specific parameters and lot information.
- The Simplexa C. auris Positive Control Pack is the separately packaged external positive quality control kit for use with the Simplexa C. auris Direct assay.
- The Simplexa C. auris Sample Prep Kit is an enzymatic buffer solution that receives the patient sample (a bilateral axilla/groin swab in Amies transport medium).
The Simplexa C. auris RT-PCR system is for use with the LIAISON MDX instrument (with LIAISON MDX Studio Software), the RT-PCR thermocycler that amplifies nucleic acid from biological specimens and uses real-time fluorescence detection to identify targets, and with the Direct Amplification Disc (DAD), the accessory containing the input sample wells for use on the LIAISON MDX. The instrument and accessory were previously cleared under K102314 and K120413. The instrument is controlled by an external laptop running the software. The DAD consumable is compartmentalized into eight (8) separate wedges and can process up to eight (8) separate specimens or controls per disc. Each wedge contains sample and reagent input wells, microfluidic channels, and laser-activated valves that control fluid flow, as well as a reaction/detection chamber.
The provided document describes the analytical and clinical performance of the Simplexa C. auris Direct RT-PCR system. However, it does not explicitly state pre-defined acceptance criteria in a table format that the device needed to meet. Instead, it presents the results of various studies (precision, analytical specificity, limit of detection, inclusivity, clinical performance) and then often concludes whether these results are "acceptable."
For example, for precision, it states: "For the multisite study, the test device showed ≥ 98.9% agreement of the qualitative result and ≤ 8.2% CV for each of the variance components, which is acceptable." This implies that ≥ 98.9% agreement and ≤ 8.2% CV were the internal acceptance criteria for precision.
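To make these two precision metrics concrete, here is a minimal sketch (not the sponsor's analysis code) of how qualitative percent agreement and the %CV of Ct values can be computed for a single panel member; the replicate data are invented for illustration.

```python
import statistics

def percent_agreement(calls, expected):
    """Percentage of qualitative calls that match the expected result."""
    matches = sum(1 for c in calls if c == expected)
    return 100.0 * matches / len(calls)

def percent_cv(ct_values):
    """Coefficient of variation of Ct values, as a percentage."""
    return 100.0 * statistics.stdev(ct_values) / statistics.mean(ct_values)

# Invented replicates for one low-positive panel member.
calls = ["Detected"] * 89 + ["Not Detected"]              # 89/90 match the expected call
cts = [33.1, 33.4, 32.8, 33.9, 34.2, 33.0, 33.6, 33.3]    # a subset of Ct values

print(f"Agreement: {percent_agreement(calls, 'Detected'):.1f}%")  # 98.9%
print(f"%CV (Ct):  {percent_cv(cts):.1f}%")
```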
Similarly, for the clinical performance, the reported Positive Percent Agreement (PPA) and Negative Percent Agreement (NPA) values are presented as the device's performance, but explicit pre-defined minimum thresholds for PPA and NPA as acceptance criteria are not given. They are implied by the fact that the De Novo request was granted.
Given this, I will infer the acceptance criteria from the reported "acceptable" results where possible and present the device performance based on the clinical study results.
Here's the information requested based on the provided text:
1. Table of Acceptance Criteria (Inferred) and Reported Device Performance
| Performance Characteristic | Inferred Acceptance Criteria (Based on "Acceptable" Results) | Reported Device Performance |
|---|---|---|
| Analytical Performance | | |
| Multisite Precision (% Agreement) | ≥ 98.9% (qualitative result) | 98.9% (Clade I South Asian (LP)) and 100.0% (other variants and controls) |
| Multisite Precision (% CV) | ≤ 8.2% (for each variance component) | ≤ 8.2% (observed max) |
| Lot-to-Lot Precision (% Agreement) | 100% (expected results) | 100% |
| Lot-to-Lot Precision (% CV) | ≤ 7.2% (for all panel members) | ≤ 7.2% (observed max combined) |
| Cross-reactivity & Microbial Interference | No observed cross-reactivity or interference | Not observed with any of the 34 organisms tested (wet testing) and none predicted by in silico analysis for 13 organisms |
| Interfering Substances | Expected detection rates, with documented interferences where applicable | Some interferences noted (antiperspirant/deodorant cream @ 10% v/v and benzalkonium chloride @ 0.07% v/v resulted in invalid results; detection restored at lower concentrations). The other 34 substances showed 100% detection. |
| Specimen Stability (5x LoD) | 100% positivity | 100% |
| Specimen Stability (2x LoD) | ≥ 95% positivity | 93-100% (Clade I 2xLoD fresh result was 93%, others were ≥97%) |
| Specimen Stability (0.5x LoD) | 10-90% positivity (expected variability) | 20-100% (varies by condition and clade) |
| Specimen Stability (Negative Samples) | 0% positivity | 0% |
| Limit of Detection (LoD) | ≥ 95% detection rate for the lowest concentration (confirmatory LoD) | Clade I: 127 CFU/mL (98% detection); Clade IV: 260 CFU/mL (98% detection) |
| Inclusivity (% Detection) | 100% detection of tested strains (wet testing) | 100% (all 9 strains from 6 clades) |
| Inclusivity (% Homology predicted) | ≥ 90% (oligo identity) with full coverage and predicted inclusivity | 98% (721/736 sequences) with two new Clade VI sequences showing 100% homology. |
| Carry-Over/Cross Contamination (% Detection of Negatives) | 0% | 0% (56/56 negative samples) |
| Clinical Performance | | |
| Positive Percent Agreement (PPA) - Prospective Cohort | Implied "acceptable" given clearance | 94.1% (32/34) (95% CI: 80.9% - 98.4%) |
| Negative Percent Agreement (NPA) - Prospective Cohort | Implied "acceptable" given clearance | 98.8% (1874/1896) (95% CI: 98.2% - 99.2%) |
| PPA - Combined Cohort (Prospective, Enriched, Retrospective) | Implied "acceptable" given clearance | 94.8% (55/58) (95% CI: 85.9% - 98.2%) |
| NPA - Combined Cohort (Prospective, Enriched, Retrospective) | Implied "acceptable" given clearance | 98.7% (1937/1962) (95% CI: 98.1% - 99.1%) |
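For reference, the agreement statistics in the table can be reproduced directly from the reported counts. The summary does not state which confidence-interval method was used, but the reported intervals are consistent with a two-sided 95% Wilson score interval, which the sketch below assumes.

```python
from math import sqrt

def agreement_with_ci(k, n, z=1.96):
    """Percent agreement k/n with a two-sided 95% Wilson score interval."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return 100 * p, 100 * (center - half), 100 * (center + half)

# Prospective-cohort counts from the table above.
ppa = agreement_with_ci(32, 34)      # ~94.1% (80.9%, 98.4%)
npa = agreement_with_ci(1874, 1896)  # ~98.8% (98.2%, 99.2%)
print(f"PPA {ppa[0]:.1f}% (95% CI {ppa[1]:.1f}%-{ppa[2]:.1f}%)")
print(f"NPA {npa[0]:.1f}% (95% CI {npa[1]:.1f}%-{npa[2]:.1f}%)")
```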
2. Sample Size Used for the Test Set and Data Provenance
Test Set (Clinical Study):
- Total Evaluable Clinical Specimens (Combined Cohort): 2,020 specimens.
- Prospective Cohort: 1,930 evaluable specimens.
- Retrospective/Enriched Cohort: 90 evaluable specimens (drawn from 11 pre-selected retrospective C. auris-positive specimens plus 202 enriched specimens, of which a combined 90 were evaluable).
- Data Provenance:
- Geographic Locations: Six study sites across four geographically diverse locations within the United States and one in Italy.
- Type of Data:
- Prospective: Prospectively collected specimens (axilla/groin swabs) from patients suspected of C. auris colonization. Tested either fresh or frozen. Collected from April to July 2023.
- Retrospective/Enriched: Leftover, de-identified composite bilateral axilla/groin swabs. Included 11 pre-selected C. auris positive retrospective specimens and 202 enriched specimens (identified as positive by a laboratory-verified RT-PCR test, then blinded and tested).
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
The document states that the ground truth for the clinical study was established by a reference method consisting of "standard of care (SOC) culture followed by matrix-assisted laser desorption/ionization-time of flight mass spectrometry (MALDI-TOF MS) for identification." These are laboratory methods, not human expert interpretation of images or other subjective data.
For discordant analysis, "Bi-directional sequencing (BDS) assays were performed when the candidate assay results differed from the comparator method." Again, this is a laboratory method.
The document does not specify human experts or their qualifications for establishing the ground truth for the test set.
4. Adjudication Method for the Test Set
The primary ground truth was established by the reference method (SOC culture + MALDI-TOF MS). For discordant results between the candidate assay and the reference method, Bi-directional Sequencing (BDS) was performed. However, "The results from discordant analysis were not used to alter the original performance but are provided in footnotes to the performance tables."
Therefore, there was no human-based adjudication method in the traditional sense (e.g., 2+1 or 3+1 radiologists reaching a consensus decision); the reference method and supplemental BDS relied purely on objective laboratory techniques.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done
No, this document describes a diagnostic device (RT-PCR assay) for detecting nucleic acids of Candida auris. It is not an AI-assisted diagnostic imaging device for which an MRMC study comparing human readers with and without AI assistance would be relevant. The study focuses solely on the direct performance of the molecular diagnostic test against a laboratory reference method.
6. If a Standalone (i.e. Algorithm only without human-in-the-loop performance) was Done
Yes, the entire clinical evaluation (Section VI.C) and the analytical performance studies (Section VI.A) describe the standalone performance of the Simplexa C. auris Direct RT-PCR system. The device is an automated molecular diagnostic test and does not involve human interpretation in the loop, apart from the user performing the test steps according to the instructions for use. The results (Ct values, positive/negative calls) are generated directly by the instrument platform (LIAISON MDX).
7. The Type of Ground Truth Used
The primary ground truth used for the clinical study was Standard of Care (SOC) culture followed by Matrix-Assisted Laser Desorption/Ionization-Time of Flight Mass Spectrometry (MALDI-TOF MS) for identification. For discordant results, Bi-directional Sequencing (BDS) was used as a supplemental ground truth, though these results did not alter the original performance metrics. This is a form of laboratory reference standard.
8. The Sample Size for the Training Set
The document does not explicitly mention a "training set" in the context of an AI/ML algorithm that would undergo a distinct training phase. This document describes a molecular diagnostic assay, not a machine learning model. The development of such assays involves establishing parameters (like fluorescence and Ct thresholds) using initial data, which could be considered an internal "calibration" or "development" process rather than a "training set" in the AI sense.
It states: "The fluorescence and Ct thresholds for C. auris and Internal Control were established using 717 sample runs of No Template Control (NTC), Limit of Detection (LoD), Microbial Inhibition, Interference and Limiting Dilution samples. The established thresholds were then confirmed using an independent data set comprising 2,924 sample runs..." These 717 runs could be considered the data used for establishing (or "training") the assay's interpretive parameters.
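The summary does not describe the actual threshold-setting algorithm, so the following is only an illustrative sketch of the general pattern it implies: derive a fluorescence cutoff from development runs (e.g., NTC background plus known positives) and then confirm the resulting qualitative calls on an independent run set. All values, function names, and the margin rule are hypothetical.

```python
import statistics

def derive_fluorescence_threshold(ntc_endpoints, margin=10.0):
    """Hypothetical rule: place the cutoff well above the NTC background signal."""
    return statistics.mean(ntc_endpoints) + margin * statistics.stdev(ntc_endpoints)

def call_sample(endpoint_fluorescence, threshold):
    return "Detected" if endpoint_fluorescence > threshold else "Not Detected"

# Development runs (hypothetical endpoint fluorescence units for NTC samples).
ntc_runs = [102.0, 98.5, 101.2, 99.8, 100.4]
threshold = derive_fluorescence_threshold(ntc_runs)

# Confirmation on an independent set whose expected results are known by design.
independent = [(350.0, "Detected"), (99.0, "Not Detected"), (410.0, "Detected")]
agree = sum(call_sample(f, expected_threshold := threshold) == expected
            for f, expected in independent) if False else \
        sum(call_sample(f, threshold) == expected for f, expected in independent)
print(f"threshold = {threshold:.1f}, confirmation agreement = {agree}/{len(independent)}")
```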
9. How the Ground Truth for the Training Set Was Established
Given that this is a molecular diagnostic assay and not an AI/ML system, the concept of "ground truth for a training set" as it pertains to labeled examples for model learning is not directly applicable.
Instead, the "establishment" of the assay's operating parameters (like Ct thresholds) was based on experimental data where the expected outcome (presence/absence of C. auris, inhibition, etc.) was known by design:
- No Template Control (NTC): Expected negative.
- Limit of Detection (LoD): Contrived samples with known concentrations of C. auris.
- Microbial Inhibition/Interference: Samples with C. auris at known concentrations (e.g., 3x LoD) spiked with other substances/organisms.
- Limiting Dilution: Samples serially diluted to determine the lowest detectable concentration.
These experiments provide the "ground truth" for setting the assay's interpretive parameters and analytical performance characteristics. The known concentrations, presence/absence of target nucleic acids, and presence/absence of interfering substances served as the factual basis for defining how the device should interpret its signals (e.g., Ct value thresholds for positivity).
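As an illustration of where such parameters end up, the sketch below shows the kind of qualitative-call logic a real-time PCR assay with an internal control typically uses. The actual LIAISON MDX Studio decision rules and cutoff values are not given in the document, so the Ct cutoffs and the rule structure here are assumptions.

```python
from typing import Optional

CT_CUTOFF_TARGET = 40.0   # hypothetical maximum valid Ct for the C. auris target
CT_CUTOFF_IC = 40.0       # hypothetical maximum valid Ct for the Internal Control

def qualitative_call(ct_target: Optional[float], ct_ic: Optional[float]) -> str:
    """Combine target and Internal Control Ct values into a reportable result."""
    target_detected = ct_target is not None and ct_target <= CT_CUTOFF_TARGET
    ic_detected = ct_ic is not None and ct_ic <= CT_CUTOFF_IC
    if target_detected:
        return "C. auris DNA Detected"      # IC amplification not required when the target amplifies
    if ic_detected:
        return "C. auris DNA Not Detected"  # valid negative: IC confirms the reaction worked
    return "Invalid"                        # neither target nor IC amplified

print(qualitative_call(31.2, 28.5))   # Detected
print(qualitative_call(None, 29.0))   # Not Detected
print(qualitative_call(None, None))   # Invalid
```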
The Focus Diagnostics Simplexa™ Flu A/B & RSV Direct assay is intended for use on the 3M Integrated Cycler instrument for the in vitro qualitative detection and differentiation of influenza A virus, influenza B virus, and respiratory syncytial virus (RSV) RNA in nasopharyngeal swabs (NPS) from human patients with signs and symptoms of respiratory tract infection in conjunction with clinical and epidemiological risk factors. This test is intended for use as an aid in the differential diagnosis of influenza A, influenza B, and RSV viral infections in humans and is not intended to detect influenza C.
Negative results do not preclude influenza virus or RSV infection and should not be used as the sole basis for treatment or other patient management decisions.
Performance characteristics for influenza A were established with clinical specimens collected during the 2010/2011 influenza season when 2009 H1N1 influenza and H3N2 were the predominant influenza A viruses in circulation. When other influenza A viruses are emerging, performance characteristics may vary.
If infection with a novel Influenza A virus is suspected based on current clinical and epidemiological screening criteria recommended by public health authorities, specimens should be collected with appropriate infection control precautions for novel virulent Influenza viruses and sent to the state or local health department for testing. Viral culture should not be attempted in these cases unless a BSL 3+ facility is available to receive and culture specimens.
Simplexa™ Flu A/B & RSV Positive Control Pack REF MOL2660
Focus Diagnostics' Simplexa™ Flu A/B & RSV Positive Control Pack is intended to be used as a control with the Simplexa™ Flu A/B & RSV Direct kit. This control is not intended for use with other assays or systems.
The Simplexa™ Flu A/B & RSV Direct assay system is a real-time RT-PCR system that enables the direct amplification, detection and differentiation of human influenza A (Flu A) virus RNA, human influenza B (Flu B) virus RNA and RSV RNA from unprocessed nasopharyngeal swabs that have not undergone nucleic acid extraction. The system consists of the Simplexa™ Flu A/B & RSV Direct assay, the 3M Integrated Cycler (with Integrated Cycler Studio Software), the Direct Amplification Disc and associated accessories.
In the Simplexa™ Flu A/B & RSV Direct assay, bi-functional fluorescent probe-primers are used together with corresponding reverse primers to amplify Flu A, Flu B, RSV and internal control RNA. The assay provides three results; conserved regions of influenza A viruses (matrix gene), influenza B viruses (matrix gene) and RSV (M gene) are targeted to identify these viruses in the specimen. An RNA internal control is used to detect RT-PCR failure and/or inhibition.
The provided text describes a 510(k) summary for the Simplexa™ Flu A/B & RSV Direct and Simplexa™ Flu A/B & RSV Positive Control Pack. This submission is intended to add eight additional influenza strains to the analytical reactivity of the device and addresses modifications made to the device from an earlier version (K120413).
Here's a breakdown of the requested information based on the document:
1. Table of acceptance criteria and the reported device performance
The document does not explicitly state "acceptance criteria" in the format of pass/fail thresholds for clinical performance. Instead, it presents "Positive Percent Agreement (PPA)" and "Negative Percent Agreement (NPA)" with a predicate device (Gen 1.0) and between device versions (Gen 2.0 and Gen 2.1) as part of method comparison studies, along with analytical reactivity and specificity data.
Method Comparison Results (Performance in relation to K120413 and between versions)
| Target | Metric | Gen 2.0 vs Gen 1.0 | 95% CI | Gen 2.1 vs Gen 2.0 | 95% CI |
|---|---|---|---|---|---|
| Flu A | PPA | 100.0% (58/58) | 93.0% to 100.0% | 100.0% (58/58) | 93.8% to 100.0% |
| Flu A | NPA | 95.7% (198/207) | 91.9% to 97.7% | 99.0% (205/207) | 96.5% to 99.7% |
| Flu B | PPA | 98.2% (54/55) | 90.4% to 99.7% | 100.0% (56/56) | 93.6% to 100.0% |
| Flu B | NPA | 95.7% (201/210) | 92.1% to 97.7% | 100.0% (209/209) | 98.2% to 100.0% |
| RSV | PPA | 97.8% (45/46) | 88.7% to 99.6% | 100.0% (55/55) | 93.5% to 100.0% |
| RSV | NPA | 95.9% (210/219) | 92.4% to 97.8% | 100.0% (210/210) | 98.2% to 100.0% |
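As a quick consistency check on the table above (a sketch only; the 2x2 cell counts are back-calculated from the reported fractions rather than taken from the submission), the Flu A Gen 2.0 vs Gen 1.0 row implies the following contingency table against the 265-sample panel:

```python
# 2x2 cells implied by the Flu A row (Gen 2.0 vs Gen 1.0); back-calculated, not from the submission.
both_pos = 58          # Gen 2.0 positive / Gen 1.0 positive  (PPA numerator)
gen1_pos_only = 0      # Gen 2.0 negative / Gen 1.0 positive
gen2_pos_only = 9      # Gen 2.0 positive / Gen 1.0 negative  (207 - 198)
both_neg = 198         # Gen 2.0 negative / Gen 1.0 negative  (NPA numerator)

ppa = both_pos / (both_pos + gen1_pos_only)
npa = both_neg / (both_neg + gen2_pos_only)
total = both_pos + gen1_pos_only + gen2_pos_only + both_neg

print(f"PPA = {ppa:.1%} ({both_pos}/{both_pos + gen1_pos_only})")   # 100.0% (58/58)
print(f"NPA = {npa:.1%} ({both_neg}/{both_neg + gen2_pos_only})")   # 95.7% (198/207)
print(f"Total samples = {total}")                                   # 265
```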
Analytical Reactivity (Gen 2.1): All tested influenza A, influenza B, and RSV strains at specified concentrations were detected (100% detection for all, assayed in triplicate). These include:
- 18 Influenza A strains (H1, H3, H7N9)
- 10 Influenza B strains
- 4 RSV strains (A and B)
Cross Reactivity (Analytical Specificity) (Gen 2.1): No cross-reactivity was observed with 32 tested organisms (bacteria and other viruses) at clinically relevant concentrations. All results showed 0% detection for Flu A, Flu B, and RSV, and 100% detection for the Internal Control.
Interference (Gen 2.1): No evidence of interference was observed from potentially interfering substances (e.g., nasal sprays, antiviral drugs, blood, mucin protein) tested in contrived samples. All showed 100% detection for Flu A, Flu B, RSV, and RNA IC.
Limit of Detection (LoD) (Gen 2.1): LoDs for various strains across Gen 1.0, Gen 2.0, and Gen 2.1 are provided (e.g., Influenza A/Hong Kong/8/68 (H3N2) Gen 2.1 LoD: 0.1 TCID50/mL). The criterion for LoD determination was ≥ 95.0% detection (at least 31/32 replicates).
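The LoD confirmation rule quoted above (≥ 95.0% detection, i.e., at least 31 of 32 replicates) can be checked with a few lines; the replicate counts below are illustrative only.

```python
def meets_lod_criterion(detected, total=32, min_rate=0.95):
    """True when the observed detection rate at a candidate concentration is >= 95.0%."""
    rate = detected / total
    return rate >= min_rate, rate

for hits in (32, 31, 30):
    ok, rate = meets_lod_criterion(hits)
    print(f"{hits}/32 detected -> {rate:.1%} -> {'meets' if ok else 'fails'} criterion")
# 32/32 -> 100.0% meets; 31/32 -> 96.9% meets; 30/32 -> 93.8% fails
```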
Precision (Gen 2.1): High reproducibility (low %CV) was observed for Ct values across inter-day, inter-run, inter-lot, and intra-run/lot variations for low and moderate positive samples of Flu A, Flu B, and RSV, as well as positive and negative controls. Qualitatively, all expected positive samples were detected at 100%, and negative samples were not detected for the target analytes.
2. Sample size used for the test set and the data provenance
Sample Size (Method Comparison): For each comparison (Gen 1.0 vs Gen 2.0, and Gen 2.0 vs Gen 2.1), 265 archived clinical samples were used.
- Composition: 55 positive for influenza A, 55 positive for influenza B, 55 positive for RSV, and 100 negative for all tested viruses.
- Data Provenance: The samples were "archived clinical samples" in Universal Transport Medium (UTM) or Viral Transport Medium (VTM).
- 131 of these samples for the Gen 1.0 vs Gen 2.0 comparison were originally tested in support of K120413. The remaining 134 included 33 from the 2010-2011 flu season and 101 from the 2013-2014 flu season.
- For the Gen 2.0 vs Gen 2.1 comparison, 125 samples were from K120413 study. The remaining 140 included 48 from the 2010-2011 flu season, 9 from 2012-2013, and 83 from 2013-2014.
- The country of origin is not specified, but the context (an FDA submission) suggests the data likely originated in the USA. The data are retrospective, as archived samples were used.
Sample Sizes (Analytical Reactivity, Cross Reactivity, Interference, LoD, Precision):
- Analytical Reactivity: Each viral strain was assayed in triplicate.
- Cross Reactivity: Each organism was tested in triplicate (3 replicates). Baseline negative matrix was tested in five (5) replicates.
- Interference: Each interfering substance was tested in triplicate (3 replicates). Baseline was tested in 15 replicates.
- Limit of Detection: Initially, 4 concentrations per virus tested in triplicate. Confirmatory testing involved 32 replicates for the lowest concentration.
- Precision: Each sample panel member was tested in duplicate with each Reaction Mix lot in each run, with two runs per day over three days, yielding at least 36 replicates per panel member (41 for one Flu B sample).
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
The document does not explicitly state the use of "experts" to establish ground truth for the test set in the method comparison studies. The ground truth for these studies appears to be based on results from the predicate device (Simplexa™ Flu A/B & RSV Direct Gen 1.0) and/or other FDA cleared Nucleic Acid Tests (NATs) for discrepant samples. For analytical studies (reactivity, cross-reactivity, interference, LoD, precision), the ground truth is established by the known concentration/presence of the spiked organisms or substances.
4. Adjudication method for the test set
For the method comparison studies, discrepancies between the modified device (Gen 2.0 or Gen 2.1) and the predicate device (Gen 1.0 or Gen 2.0 respectively) were sometimes resolved using another FDA cleared NAT. For example, for Flu A discrepancies in the Gen 1.0 vs Gen 2.0 comparison, "7/9 discrepant (K120413 – Negative and K142365 – Positive) samples were positive for Flu A on another FDA cleared NAT." This suggests a form of adjudication using a third, independent, cleared method.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
This section is not applicable. The device described is an in vitro diagnostic (IVD) assay for the detection of viruses using real-time RT-PCR, not an AI-powered diagnostic imaging device involving human readers or interpretation of medical images. Therefore, MRMC studies and the concept of human reader improvement with AI assistance do not apply.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done
This concept is not directly applicable to an IVD assay like the Simplexa™ Flu A/B & RSV Direct. The assay is effectively "standalone": there is no human in the loop for interpreting the raw assay output, as the instrument's software interprets the Ct values and reports a qualitative result (detected/not detected). The performance metrics (PPA, NPA, analytical reactivity, LoD, etc.) therefore represent the standalone performance of the assay system.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Clinical Studies (Method Comparison): The ground truth for the clinical sample comparisons was based on:
- The performance of the predicate device (Simplexa™ Flu A/B & RSV Direct Gen 1.0) for direct comparison between versions.
- Other FDA cleared Nucleic Acid Tests (NATs) for resolving discrepant results between the device versions.
- Analytical Studies (Reactivity, Cross-Reactivity, Interference, LoD, Precision): The ground truth was established by known spiked concentrations of characterized viral strains, bacterial organisms, or potentially interfering substances into negative matrix.
8. The sample size for the training set
The document does not explicitly mention a "training set" in the context of machine learning or AI models, as this is an IVD assay. The development and optimization ("changes to the reaction mix formulation and cycling conditions," "changes to the manufacturing process and materials") that led to Gen 2.0 and Gen 2.1 would have involved internal validation and optimization data, which could be considered analogous to training data in a broad sense for assay development. However, specific "training set sizes" are not provided.
9. How the ground truth for the training set was established
As described in point 8, a formal "training set" for an AI model is not applicable here. For the assay development and optimization, ground truth would have been established through controlled laboratory experiments using well-characterized viral isolates and defined concentrations, analogous to how ground truth for the analytical studies (reactivity, LoD) was established.