510(k) Data Aggregation

    K Number: DEN240067
    Date Cleared: 2025-09-19 (301 days)
    Regulation Number: 864.1885
    Type: Direct
    Reference & Predicate Devices: N/A
    Predicate For: N/A

    K Number: DEN210011
    Date Cleared: 2023-09-29 (914 days)
    Regulation Number: 866.6095
    Type: Direct
    Reference & Predicate Devices: N/A
    Predicate For: N/A

    Intended Use

    The Invitae Common Hereditary Cancers Panel is a qualitative high-throughput sequencing based in vitro diagnostic test system intended for analysis of germline human genomic DNA extracted from whole blood for detection of substitutions, small insertion and deletion alterations and copy number variants (CNV) in a panel of targeted genes.

    This test system is intended to provide information for use by qualified health care professionals, in accordance with professional guidelines, for hereditary cancer predisposition assessment and to aid in identifying hereditary genetic variants potentially associated with a diagnosed cancer.

    The test is not intended for cancer screening or prenatal testing. Results are intended to be interpreted within the context of additional laboratory results, family history, and clinical findings.

    The test is a single-site assay performed at Invitae Corporation.

    Device Description

    The Invitae Common Hereditary Cancers Panel uses hybridization-based capture, next-generation sequencing (NGS), and a custom-built bioinformatics pipeline to compare all positions in targeted regions of 47 genes to a reference sequence and identify variants, including single nucleotide variants (SNVs), insertions and deletions (Indels), and copy number variants (CNVs). Sequence analysis covers clinically important regions of each gene, including coding exons and 10 to 20 base pairs of adjacent intronic sequence on either side of the coding exons in the transcript listed in Table 1. Genes of "high clinical significance" are defined as those for which the test result(s) may lead to prophylactic screening, confirmatory procedures, or treatment that may incur morbidity or mortality to the patient and are shown in bold text. In addition, the analysis covers the select non-coding variants specifically defined in the table. Any variants that fall outside these regions are not analyzed. Identified variants are assessed by clinical professionals using currently available literature and data from public genetic variant databases. Variants are assigned a score, calculated according to an algorithm that weights the available clinical evidence. Possible outcomes, based on joint ACMG/AMP Committee guidelines, include: Benign (not reported), Likely benign (not reported), Variant of Uncertain Significance, Likely pathogenic, and Pathogenic. Variants are reported using HGVS nomenclature and the human reference genome GRCh37.
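The summary does not disclose the weights Invitae's scoring algorithm assigns to each line of evidence. As a purely illustrative sketch of evidence-weighted, five-tier classification in the spirit of ACMG/AMP-style frameworks (not Invitae's actual algorithm — all weights and thresholds below are invented), the general idea is to sum signed evidence points and threshold the total:

```python
def classify_variant(evidence_points: dict) -> str:
    """Toy five-tier classifier: sum signed evidence weights and apply thresholds.
    Weights and thresholds are invented for illustration only."""
    score = sum(evidence_points.values())
    if score >= 10:
        return "Pathogenic"
    if score >= 6:
        return "Likely pathogenic"
    if score <= -10:
        return "Benign"         # not reported, per the device description
    if score <= -6:
        return "Likely benign"  # not reported, per the device description
    return "Variant of Uncertain Significance"

# Hypothetical evidence for a single variant (categories and weights are made up):
print(classify_variant({"functional_assay": 4, "case_observations": 3, "population_frequency": -1}))
```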

    AI/ML Overview

    The Invitae Common Hereditary Cancers Panel is a high-throughput sequencing-based in vitro diagnostic test system designed for detecting germline substitutions, small insertions and deletions, and copy number variants (CNVs) in 47 targeted genes. It is intended for use by qualified healthcare professionals for hereditary cancer predisposition assessment and to aid in identifying hereditary genetic variants potentially associated with diagnosed cancer.

    Here's a breakdown of the acceptance criteria and the study proving the device meets them:

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria for the Invitae Common Hereditary Cancers Panel are primarily based on the analytical performance metrics of Positive Percent Agreement (PPA), Negative Percent Agreement (NPA), Technical Positive Predictive Value (TPPV), and Technical Negative Predictive Value (TNPV).

    Performance Metric Category | Specific Metric | Acceptance Criteria (Implicit/Explicit) | Reported Device Performance | Comments
    Precision/Reproducibility | Overall PPA (SNVs) | No explicit threshold given; high PPA expected. | 99.95% (95% CI 99.92-99.97%) | Meets high precision expectation.
    Precision/Reproducibility | Overall PPA (Indels) | No explicit threshold given; high PPA expected. | 99.57% (95% CI 99.07-99.80%) | Meets high precision expectation. Slightly lower confidence interval than SNVs.
    Precision/Reproducibility | Overall PPA (CNVs) | No explicit threshold given; high PPA expected. | 99.67% (95% CI 98.80-99.91%) | Meets high precision expectation.
    Precision/Reproducibility | Overall NPA (all variant types) | No explicit threshold given; high NPA expected. | >99.99% (95% CI >99.99-100%) | Excellent negative agreement.
    Precision/Reproducibility | PPA (Deletions 1-5 bp) | No explicit threshold given; high PPA expected. | 97.53% (95% CI 94.72-98.86%) | Noted as an exception due to a low-mappability/low-complexity region, but still high.
    Precision/Reproducibility | PPA (SDHA gene, SNVs) | No explicit threshold given; high PPA expected. | 99.15% (95% CI 98.59-99.50%) | Slightly lower, but still high.
    Precision/Reproducibility | PPA (SDHA gene, Indels) | No explicit threshold given; high PPA expected. | 68.42% (95% CI 46.01-84.64%) | Significantly lower PPA for this specific gene/variant type, indicating a known limitation.
    Precision/Reproducibility | PPA (NF1 gene, CNVs) | No explicit threshold given; high PPA expected. | 97.30% (95% CI 90.67-99.26%) | Slightly lower, but still high.
    DNA Input | Overall Concordance | >99% compared to standard input | >99.99% for all tested concentrations (5 ng/uL, 10 ng/uL, 46 ng/uL) | Meets threshold. 1 ng/uL excluded as minimum.
    DNA Input | PPA / NPA | >99% compared to standard input | SNVs: 99.9-100%; Indels: 100%; CNVs: 95.1-100% | CNV deletions at 5 ng/uL were 95.1% (95% CI 83.9-98.7%), slightly below the general >99% expectation.
    DNA Input | Failed Samples | Low number expected. | 0-6 failed samples depending on DNA input level. | 1 ng/uL and 46 ng/uL had failures, supporting the determined optimal range.
    Analytical Specificity/Interference | PPA / NPA / Concordance for various interferents | No significant impact on performance expected. | Mostly 100% for PPA, NPA, and Concordance. | Most interferents did not affect performance. Exceptions for K2EDTA & Wash Buffer on CNVs, and Post-PCR Amplicon on CNVs/Indels, were identified as requiring control.
    Accuracy (Orthogonal Comparison) | Overall TPPV (SNVs) | No explicit threshold given; high TPPV expected. | 99.9% (95% CI 99.7->99.9%) | Excellent.
    Accuracy (Orthogonal Comparison) | Overall TPPV (Indels) | No explicit threshold given; high TPPV expected. | 100% (95% CI 99.9-100%) | Excellent.
    Accuracy (Orthogonal Comparison) | Overall TPPV (CNVs) | No explicit threshold given; high TPPV expected. | 99.5% (95% CI 99.2-99.7%) | Excellent.
    Accuracy (Orthogonal Comparison) | Overall TNPV (SNVs) | No explicit threshold given; high TNPV expected. | 100% (95% CI >99.9-100%) | Excellent.
    Accuracy (Orthogonal Comparison) | Overall TNPV (Indels) | No explicit threshold given; high TNPV expected. | 100% (95% CI >99.9-100%) | Excellent.
    Accuracy (Orthogonal Comparison) | Overall TNPV (CNVs) | No explicit threshold given; high TNPV expected. | 99.7% (95% CI 99.6-99.7%) | Excellent.
    Accuracy (Orthogonal Comparison) | TPPV (SDHA gene, SNVs) | No explicit threshold given; high TPPV expected. | 99.0% (95% CI 94.4-99.8%) | Slightly lower, but still good.
    Accuracy (Orthogonal Comparison) | TPPV (CNVs - SMAD4, TSC2) | No explicit threshold given; high TPPV expected. | SMAD4: 84.6% (95% CI 57.8-95.7%); TSC2: 88.9% (95% CI 56.5-98.0%) | Specifically highlighted for not meeting the 99% performance expectation, with false positives for single-exon calls.
    Accuracy (Orthogonal Comparison) | TPPV (CNV duplications <= single exon) | No explicit threshold given; high TPPV expected. | 95.5% (95% CI 92.4-97.4%) | A specific stratification that showed lower accuracy.
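PPA, NPA, TPPV, and TNPV are all agreement proportions of the form concordant calls / total calls, each reported with a 95% confidence interval. The summary does not state which interval method was used; the sketch below uses the Wilson score interval purely as an illustration, with hypothetical counts (not taken from the submission):

```python
from math import sqrt

def agreement(concordant: int, total: int, z: float = 1.96):
    """Point estimate and Wilson 95% CI for an agreement proportion.

    PPA = TP / (TP + FN), NPA = TN / (TN + FP), TPPV = TP / (TP + FP):
    all are proportions of the form concordant / total, so one helper covers them.
    The CI method used in the FDA summary is not stated; Wilson score is shown
    here only as an illustration.
    """
    p = concordant / total
    denom = 1 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
    return p, max(0.0, centre - half), min(1.0, centre + half)

# Hypothetical counts for illustration only:
ppa, lo, hi = agreement(concordant=1994, total=1995)
print(f"PPA = {ppa:.2%} (95% CI {lo:.2%}-{hi:.2%})")
```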

    2. Sample Sizes Used for the Test Set and Data Provenance

    • Precision/Reproducibility Test Sets:

      • Set 1: 25 clinical samples.
      • Set 2: 18 samples enriched for Indels and CNVs.
      • Each sample was tested with 14 replicates.
      • Data Provenance: Clinical samples (exact country of origin not specified, but the applicant is Invitae Corporation, a US-based company, suggesting the data likely come from the US or a similar regulatory environment). The studies appear to have been retrospective, using previously characterized clinical samples.
    • DNA Input Study Test Set:

      • 8 whole blood clinical specimens.
      • Each specimen tested at 5 input levels (1, 5, 10, 23, and 46 ng/uL), each in triplicate, for a total of 120 samples.
      • Data Provenance: Clinical specimens.
    • Analytical Specificity/Interference Study Test Set:

      • Study 1: 7 interfering substances, each spiked into 5 specimens, tested in 3 replicates. (Plus 10 donor blood samples, 2 replicates each).
      • Study 2: 7 interfering substances, each spiked into 5-6 samples, tested in 2 replicates.
      • Data Provenance: Specimens sourced from a blood bank.
    • Accuracy (Orthogonal Comparison) Test Sets:

      • Non-Clinical Samples:
        • 5 Genome in a Bottle (GIAB) samples.
        • 92 supplemental cell line samples.
      • Clinical Specimens:
        • SNVs and Indels: 6014 clinical samples.
        • CNVs: 3542 clinical samples.
        • Additional 106 clinical specimens with prior negative CNV results for TNPV evaluation.
      • Data Provenance: GIAB samples are publicly available reference materials. Cell line samples are commercial. Clinical specimens "tested at Invitae" or "from patients diagnosed with cancer and individuals tested for predisposition assessment," indicating retrospective clinical data.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    The document does not explicitly state the number of experts used to establish the ground truth for the test sets in terms of individual review. Instead:

    • For the Precision/Reproducibility and DNA Input studies, "variants" are stated to have been identified in the samples. The ground truth for these samples (clinical and reference materials) is based on their characterized genetic profiles.
    • For the Accuracy (Orthogonal Comparison) study with non-clinical samples, ground truth was established by "well characterized genome sequence data" for GIAB samples and previous identification for the "at least one variant" in supplemental cell lines. These reference standards typically have their ground truth established by consensus of multiple methods and experts, but the exact number of experts involved in the initial characterization of these reference materials is not detailed in this document.
    • For the Accuracy (Orthogonal Comparison) study with clinical specimens, the ground truth was established by "a validated high-throughput sequencing platform" or "a validated multiplexed PCR based test or a validated microarray." This implies that the ground truth in these cases relied on the established accuracy and validation of these orthogonal methods.
    • For Interpretation Agreement, Invitae's internal classifications were compared to "independently generated prior clinical laboratory testing results" and "ClinVar classifications." ClinVar classifications are curated by various groups, including "Expert Panel submissions" (which imply expert consensus).

    The document states that identified variants "are assessed by clinical professionals using currently available literature and data from public genetic variant databases" and that "Variant interpretation and curation is performed according to controlled SOPs by trained individuals who have passed a competency assessment." This implies that highly qualified professionals are involved in the overall process of variant interpretation, which indirectly contributes to the establishment and verification of ground truth in their internal processes. These professionals are described as "PhD level scientists, genetic counselors, as well as licensed, board-certified clinical molecular geneticists or licensed, board-certified molecular pathologists." No specific number of experts used for each ground truth assessment within the test sets is provided.

    4. Adjudication Method for the Test Set

    The document does not describe an explicit "adjudication method" (like 2+1 or 3+1 for human readers) for establishing the ground truth for the test sets. Instead, it relies on:

    • Reference Standards: For non-clinical samples (GIAB), the ground truth is pre-established "well-characterized genome sequence data," implying a high level of confidence through consensus or advanced methods.
    • Orthogonal Validated Methods: For clinical samples, the comparison study uses "validated" orthogonal methods as the de-facto ground truth. This means the ground truth relies on the established accuracy of those methods, which would have undergone their own validation.
    • Internal Review/SOPs: The internal process for variant interpretation and curation involves review by "PhD level scientists, genetic counselors, as well as licensed, board-certified clinical molecular geneticists or licensed, board-certified molecular pathologists." This implies an implicit adjudication or consensus within their established clinical laboratory practices, but not a formalized adjudication by independent experts for each test set sample specifically for this study.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done. This study is for a genetic diagnostic panel, not for an AI-assisted diagnostic imaging system that typically involves human readers interpreting images. The evaluation focuses on the analytical performance of the automated sequencing pipeline.

    6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study

    Yes, the studies described are essentially standalone evaluations of the assay's analytical performance, which includes the bioinformatics pipeline (algorithm) that calls variants. The "Invitae Common Hereditary Cancers Panel uses hybridization-based capture, next-generation sequencing (NGS), and a custom-built bioinformatics pipeline to compare all positions in targeted regions... and identify variants." The precision, DNA input, analytical specificity, and accuracy studies directly assess the output of this automated system, comparing it against known ground truths or orthogonal methods. While "Variant Interpretation and Review" does involve human professionals, the core performance metrics (PPA, NPA, TPPV, TNPV) are evaluating the technical ability of the system (including the bioinformatics pipeline) to accurately detect and call variants.

    7. Type of Ground Truth Used

    The ground truth used in the studies includes:

    • Reference Genome Sequence Data: "Genome in a Bottle (GIAB) samples with well characterized genome sequence data."
    • Cell Line Samples: Supplemental cell line samples with "at least one variant that has been identified and reported."
    • Orthogonal Validated Methods: Comparison to results from "a validated high-throughput sequencing platform," "a validated multiplexed PCR based test," or "a validated microarray."
    • Prior Clinical Laboratory Testing Results: For variant interpretation agreement, comparison was made to "independently generated prior clinical laboratory testing results" and "ClinVar classifications" (which are curated by expert panels).

    8. Sample Size for the Training Set

    The document does not provide a specific sample size for a "training set." The listed studies are for the validation of the device. The development of the "custom-built bioinformatics pipeline" and the "algorithm that weights the available clinical evidence" would have implicitly involved training data, but that data and its size are not detailed in this document, which focuses on validation data.

    9. How the Ground Truth for the Training Set Was Established

    Since no specific "training set" is described with sample sizes, the method for establishing its ground truth is also not explicitly stated. However, the general approaches described for variant assessment and interpretation ("assessed by clinical professionals using currently available literature and data from public genetic variant databases," "curated by qualified Invitae staff" according to SOPs, consultation of "external databases") would logically form the basis for establishing ground truth for any data used in the development or "training" of the bioinformatics pipeline and variant interpretation rules.


    K Number: K221869
    Date Cleared: 2023-09-05 (434 days)
    Regulation Number: 866.6060
    Predicate For: N/A

    Intended Use

    The BCR-ABL1 (p210) %IS Kit (Digital PCR Method) is an in vitro nucleic acid amplification test for the quantitation of BCR-ABL1 and ABL1 transcripts in total RNA from whole blood of diagnosed t(9;22) positive Chronic Myeloid Leukemia (CML) adult patients expressing BCR-ABL1 fusion transcripts type e13a2 and/or e14a2. The BCR-ABL1 (p210) %IS Kit (Digital PCR Method) is a reverse transcription-quantitative PCR performed on the Sniper Digital PCR All-in-One System and is intended to measure BCR-ABL1 to ABL1, expressed as a log molecular reduction (MR value) from a baseline of 100% on the International Scale, in t(9;22) positive CML patients during monitoring of treatment with Tyrosine Kinase Inhibitors (TKIs).

    The BCR-ABL1 (p210) %IS Kit (Digital PCR Method) is intended for use only on the Sniper Digital PCR All-in-One System.

    The test does not differentiate between e13a2 and e14a2 fusion transcripts and does not monitor other rare fusion transcripts resulting from t(9;22). This test is not intended for the diagnosis of CML.

    Device Description

    The BCR-ABL1 (p210) %IS Kit (Digital PCR Method) is designed for detection of the BCR-ABL1 fusion gene (p210) and ABL1 gene, with specific primers and specific fluorescence probes. The test process includes three parts. The first part is to extract ribonucleic acid (RNA) from peripheral blood of CML patients. The second part is to detect BCR-ABL1 fusion gene (p210) and ABL1 internal reference gene in RNA samples by RT-dPCR (Reverse Transcription-Droplet PCR) reaction solution using the Sniper Digital PCR All-in-One System (DQ24-Dx). The third part is to analyze the results.

    The Sniper Digital PCR All-in-One System consists of one instrument, which can be used together with its supporting consumables and the BCR-ABL1 (p210) %IS Kit (Digital PCR Method) to complete the detection of samples.

    The Sniper Digital PCR All-in-One System divides the sample into about 20,000 droplets, carries out PCR amplification, reads the number of positive and negative droplets from their fluorescent signals, and then calculates the nucleic acid concentration quantitatively from the droplet volume and the principle of the Poisson distribution.
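The summary does not give the kit's exact calculation, but the Poisson step it describes is standard for digital PCR: if a fraction p of droplets is positive, the mean number of copies per droplet is -ln(1 - p), and dividing by the droplet volume yields a concentration. A minimal sketch, with an assumed (not kit-specified) droplet volume and the conventional %IS/MR relationship MR = log10(100 / %IS):

```python
from math import log, log10

def dpcr_copies_per_ul(positive: int, total: int, droplet_volume_nl: float = 0.85) -> float:
    """Estimate target concentration from droplet counts via Poisson statistics.

    With a fraction p of positive droplets, the mean copies per droplet is
    lambda = -ln(1 - p); dividing by droplet volume gives copies/uL.
    The 0.85 nL droplet volume is a placeholder, not a value from the kit.
    """
    p = positive / total
    lam = -log(1 - p)                        # mean copies per droplet
    return lam / (droplet_volume_nl * 1e-3)  # copies per microliter

# Illustrative MR calculation using the standard BCR-ABL1 monitoring convention
# (not the kit's documented algorithm): %IS = ratio * conversion factor * 100,
# MR = log10(100 / %IS), so 0.1 %IS corresponds to MR3.0.
bcr_abl1 = dpcr_copies_per_ul(positive=150, total=20000)
abl1 = dpcr_copies_per_ul(positive=9000, total=20000)
percent_is = (bcr_abl1 / abl1) * 100   # assuming a conversion factor of 1
mr_value = log10(100 / percent_is)
print(f"%IS = {percent_is:.3f}, MR = {mr_value:.2f}")
```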

    DQ24-Dx-Sight Software (v1.0.2) is used to control the system and analyze test results. This software is embedded in the Sniper Digital PCR All-in-One System.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study detailed in the provided document:

    Acceptance Criteria and Device Performance

    Acceptance Criteria | Reported Device Performance
    Precision (CV, %) requirements for MR values in multi-site study: | All acceptance criteria for precision were satisfied.
    - MR0.3-MR2.0: ≤ 10% | - MR1.0 (e13a2, e14a2, mix): Total CV% range 1.85% - 2.38%
    - MR2.1-MR3.49: ≤ 15% | - MR2.0 (e13a2, e14a2, mix): Total CV% range 1.54% - 1.82%
    - MR3.5-MR4.0: ≤ 20% | - MR3.0 (e13a2, e14a2, mix): Total CV% range 2.37% - 3.11%
    - LOQ: ≤ 20% | - MR4.0 (e13a2, e14a2, mix): Total CV% range 2.31% - 3.43%
     | - MR4.5 (e13a2, e14a2, mix): Total CV% range 4.95% - 5.68%
    Controls & Calibrators precision (CV, %) in multi-site study: | Calibrators 10%IS: MR CV 2.10%, %IS CV 5.00%; Calibrators 0.1%IS: MR CV 1.79%, %IS CV 12.20%; Positive control 1: MR CV 2.14%, %IS CV 5.18%; Positive control 2: MR CV 2.68%, %IS CV 23.86%
    Precision (CV, %) requirements for MR values in batch-to-batch study: | All acceptance criteria for batch-to-batch precision were satisfied.
    - MR0.3-MR2.0: <10% | - MR1.0 (e13a2, e14a2): Total CV% range 2.86% - 3.06%
    - MR2.1-MR3.49: <15% | - MR3.0 (e13a2, e14a2): Total CV% range 2.82% - 3.07%
    - MR3.5-MR4.0: <20% | - MR4.0 (e13a2, e14a2): Total CV% range 4.52% - 4.55%
    - LOQ: <20% | - MR4.5 (e13a2, e14a2): Total CV% range 5.01% - 5.41%
    Controls & Calibrators precision (CV, %) in batch-to-batch study: | Calibrators 10%IS: MR CV 2.59%, %IS CV 6.30%; Calibrators 0.1%IS: MR CV 1.99%, %IS CV 13.90%; Positive control 1: MR CV 2.74%, %IS CV 6.66%; Positive control 2: MR CV 3.18%, %IS CV 28.07%
    RNA Extraction Method CV%: <10% | All samples showed CV% less than 10% (range 1.33% - 6.91%).
    Linearity/Assay Reportable Range: | All acceptance criteria for linearity and assay reportable range were satisfied.
    - Precision: ≤ 10% | All samples met the precision requirement (range 0.49% - 7.01%).
    - % Deviation: ≤ ±15% | All samples met the % deviation requirement (range -10.91% - 5.24%).
    - R2: ≥ 0.98 | e13a2: 0.996; e14a2: 0.994; e13a2 & e14a2 together: 0.995 (all ≥ 0.98)
    - 95% confidence interval for slope: 0.83-1.20 | e13a2: 0.98-1.02; e14a2: 0.98-1.03; e13a2 & e14a2 together: 0.99-1.02 (all within range)
    Traceability: Correlation with R2 values of 0.989-0.997 | R2 values of 0.989-0.997 reported.
    Detection Limit: | All acceptance criteria for detection limit were satisfied.
    - Limit of Blank (LoB): No detectable BCR-ABL values in negative samples. | 138 out of 144 negative test results had no detectable BCR-ABL values. LoB is 0 copies.
    - Limit of Detection (LoD): Hit rate ≥95% | MR4.7 samples had hit rates between 97% and 98%. LoD of 4.7 supported.
    - Limit of Quantitation (LoQ): Hit rate 100% and CV% ≤10% | MR4.5 samples had 100% hit rates and precision between 3.47% and 4.03%. LoQ of 4.5 supported.
    Analytical Specificity (Interference): | All samples passed the acceptance criteria.
    - MR values: Mean test MR value and 95% CI within 95% CI ±0.5 log of control. | All interfering substances met this criterion.
    - %IS values: 95% CI of mean test %IS intersects detected range of control. | All interfering substances met this criterion.
    Primer Specificity: | All acceptance criteria for primer specificity were met.
    - p190 and p230 samples: Negative specificity ≥95% | 100% negative specificity for p190 and p230 samples.
    - p210 samples: Positive specificity 100% and CV% ≤10% | 100% positive specificity and CV% <10% for all p210 samples tested.
    Carryover Contamination: No significant signal in negative wells. | No signal measured in the 32 negative wells out of 64 replicates.
    RNA Input: Optimal RNA input identified; sensitivity, deviation, and precision for optimal input. | Optimal RNA input determined to be 500 ng, with 100% positive detection rate, deviation within ±0.5, and precision ≤10%.
    Stability Studies: | All acceptance criteria for stability were met.
    - Real-Time Stability (kit, calibrators): Controls, calibrators, and sample values within pre-established ranges (deviation within ±0.5 log), CV% ≤10%; mean MR value and 95% CI within ±0.5 log of T0. | Performance met criteria for 12 months at -20°C ± 5°C. Precision between 0.56% and 5.95%.
    - Freeze-thaw Stability (kit, calibrators): Controls, calibrators, and sample values within pre-established ranges (deviation within ±0.5 log), CV% ≤10%; mean MR value and 95% CI within ±0.5 log of time 0. | Stable performance for at least 5 freeze-thaw cycles. Precision between 0.83% and 5.95%.
    - Specimen Stability (peripheral blood): CV% ≤10%; mean MR value and 95% CI within ±0.5 log of day 0. | Peripheral blood samples stored for 1 day at 2-8°C are stable. Precision between 0.92% and 5.75%.
    Method Comparison with Predicate Device: | The device demonstrated substantial equivalence to the predicate.
    - Passing-Bablok regression: Intercept A (95% CI) and slope B (95% CI) close to 0 and 1, respectively; Spearman correlation coefficient > 0.95. | Intercept A (95% CI): 0.17 (0.13-0.22); Slope B (95% CI): 0.99 (0.97-1.01); Spearman correlation coefficient: 0.988 (P<0.0001).
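Passing-Bablok regression, used in the method comparison above, estimates the slope as a shifted median of all pairwise slopes (the shift makes the fit symmetric in x and y) and the intercept as the median residual. A minimal sketch without confidence intervals or tie handling, run on made-up MR values rather than the study data:

```python
from statistics import median

def passing_bablok(x, y):
    """Minimal Passing-Bablok regression sketch (no confidence intervals, no tie
    handling). The slope is the shifted median of all pairwise slopes; the shift
    K (count of slopes below -1) makes the estimate symmetric in x and y."""
    slopes = []
    for i in range(len(x) - 1):
        for j in range(i + 1, len(x)):
            if x[i] != x[j]:
                s = (y[j] - y[i]) / (x[j] - x[i])
                if s != -1.0:          # slopes of exactly -1 are excluded
                    slopes.append(s)
    slopes.sort()
    n, k = len(slopes), sum(s < -1.0 for s in slopes)
    if n % 2:
        slope = slopes[(n - 1) // 2 + k]
    else:
        slope = 0.5 * (slopes[n // 2 + k - 1] + slopes[n // 2 + k])
    intercept = median(yi - slope * xi for xi, yi in zip(x, y))
    return slope, intercept

# Made-up MR values from a reference method and a new method (not study data):
ref = [0.5, 1.2, 2.0, 2.9, 3.5, 4.1]
new = [0.6, 1.3, 2.1, 3.0, 3.4, 4.2]
b, a = passing_bablok(ref, new)
print(f"slope = {b:.2f}, intercept = {a:.2f}")  # ideally near 1 and 0
```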

    Study Details

    1. A table of acceptance criteria and the reported device performance: See table above.

    2. Sample size used for the test set and the data provenance:

      • Precision/Reproducibility (Multi-site): 5 positive pools (3 variants, 5 different MR concentrations). 36 replicates per sample (2 replicates/run, 2 runs/day, 3 days, 3 sites). Total 540 observations.
      • Precision between batches: 2 positive pools (2 variants, 4 different MR concentrations). 108 replicates per sample (3 replicates/run, 2 runs/day, 3 days, 1 site with 2 instruments, 3 reagent lots). Total 864 observations.
      • RNA Extraction Method: 3 positive pools (3 variants, 5 different MR concentrations) each derived from multiple positive peripheral blood samples (6 e13a2, 7 e14a2) or K562 cells. Total of 180 results (samples extracted 2 times by 2 operators/day for 3 days).
      • Linearity/Assay reportable range: 2 positive pools (2 variants, 10 different MR concentrations). 4 replicates per sample.
      • Limit of Blank: 144 negative samples.
      • Limit of Detection/Limit of Quantitation: 2 positive pools (2 variants, 3 different MR concentrations). 20 replicates per day for 3 days with 2 reagent lots. Total 120 replicates.
      • Analytical Specificity (Interference): 1 sample pool (MR ~3.0). 2 replicate extractions, each tested in 3 replicates (6 tests per sample type). Potential interfering substances evaluated.
      • Primer Specificity: 4 positive samples (p190, p230, p210 e13a2, p210 e14a2) at 4 different concentrations. 4 replicates per sample.
      • Carryover Contamination: 1 high positive pool and 1 negative pool. 64 total test samples (32 high positive wells and 32 negative wells).
      • RNA Input: 2 positive pools (2 variants, 4 different MR concentrations). 3-5 replicates per run across 6 different RNA input amounts.
      • Real-Time Stability: 1 positive pool (1 variant, 3 different MR concentrations). 3 lots tested across 7 time points (T0, T3, T6, T9, T11, T12, T13).
      • Freeze-thaw Stability: Same samples as real-time stability. 1 lot tested with 3, 5, and 6 freeze-thaw cycles.
      • Specimen Stability: 3 fresh peripheral blood samples (1 variant, 3 different MR values). RNA extracted on Day 0, 1, 2. Each RNA sample tested 6-8 replicates.
      • Method Comparison with Predicate Device: 112 clinical samples (retrospective) collected from 2 hospitals. All samples were from patients diagnosed with t(9;22) positive CML with MR values distributed between 0.32 and 4.47.
      • Data Provenance: The document explicitly mentions that for the Method Comparison Study, clinical samples were collected from 2 hospitals and were retrospective. Other analytical studies used various types of prepared RNA samples or pools. The specific country of origin is not explicitly stated for all samples, but the submitting company is Suzhou Sniper Medical Technologies Co., Ltd. from China, suggesting the data collection likely occurred there.
    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • Ground Truth Type: For the analytical studies, the ground truth was established by known concentrations/MR values of prepared RNA samples or specific variants (e.g., in linearity, LOD/LOQ, primer specificity). For the method comparison study, the ground truth was the result from the predicate device, the QXDx BCR-ABL %IS Kit.
      • Experts: The document does not explicitly state the use of external "experts" (e.g., radiologists, pathologists) to establish ground truth for this in vitro diagnostic (IVD) kit. The ground truth for analytical performance was based on the physical properties of the prepared samples (e.g., known dilutions, concentrations). For the clinical comparison, the predicate device served as the reference.
    4. Adjudication method for the test set:

      • Not applicable. As this is an IVD kit for quantitative measurement and not an image-based AI device requiring human interpretation, there is no mention of an adjudication method like 2+1 or 3+1 for establishing ground truth. Raw data from instrument readings are compared against predefined analytical or clinical reference methods/values.
    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:

      • Not applicable. This is an in vitro diagnostic device, not an imaging AI device involving human interpretation/readers. Therefore, an MRMC study with human readers and AI assistance is not relevant or described.
    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:

      • The study primarily details the standalone performance of the device (BCR-ABL1 (p210) %IS Kit on the Sniper Digital PCR All-in-One System) itself, without direct human-in-the-loop interaction for result interpretation beyond running the assay and reviewing the automatically generated results. The device "is intended to measure BCR-ABL1 to ABL1, expressed as a log molecular reduction (MR value) from a baseline of 100% on the International Scale." The "results are interpreted automatically by the embedded Software DQ24-Dx-Sight from measured droplet counts, fluorescent signals, and embedded calculation algorithms." This indicates a standalone algorithmic performance.
    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

      • For the analytical performance section, the ground truth was based on known concentrations/values of prepared RNA samples and comparisons to the 1st WHO International Genetic Reference Panel for BCR-ABL translocation quantitation.
      • For the method comparison study, the ground truth was established by the results from the legally marketed predicate device (QXDx BCR-ABL %IS Kit).
    8. The sample size for the training set:

      • The document describes the analytical and method comparison studies for validation of the device. It does not mention a separate "training set" in the context of machine learning, as this is a digital PCR assay kit and not an AI/ML-based diagnostic system that typically requires distinct training, validation, and test sets.
    9. How the ground truth for the training set was established:

      • Not applicable, as no "training set" is described for an AI/ML context.

    K Number: K203245
    Date Cleared: 2023-05-04 (912 days)
    Regulation Number: 866.6010
    Predicate For: N/A

    Intended Use

    Bladder EpiCheck Kit is intended for the qualitative detection of DNA methylation patterns of 15 loci in human DNA that are associated with transitional cell carcinoma of the bladder. The test is performed on voided urine samples and run on the ABI® 7500 Fast Dx Real-Time PCR system.

    Bladder EpiCheck Kit is indicated for use as a non-invasive method to monitor for tumor recurrence in conjunction with cystoscopy in patients previously diagnosed with Non-Muscle Invasive Bladder Cancer.

    Device Description

    The Bladder EpiCheck Test is a real-time PCR-based in vitro diagnostic assay intended for the qualitative detection of DNA methylation patterns associated with transitional cell carcinoma of the bladder to monitor for tumor recurrence (in conjunction with cystoscopy) in patients previously diagnosed with non-muscle invasive bladder cancer (NMIBC).

    The assay consists of a panel of 15 novel DNA methylation (covalent addition of methyl (CH3) groups to the C5 position of the pyrimidine ring of cytosines, typically in a CpG dinucleotide) biomarkers that were found to distinguish between patients with bladder cancer and patients without bladder cancer. The Bladder EpiCheck Test differentiates between methylated and non-methylated DNA, creating a unique platform for methylation profiling of urine specimens towards the detection of bladder cancer recurrence in patients previously diagnosed with the disease. The test is comprised of reagents for end-to-end (sample-to-answer) processing of urine samples (reagents for DNA extraction, DNA digestion, PCR amplification, and analysis software), and is performed using the Applied Biosystems® 7500 Fast Dx Real-Time PCR system.

    A voided urine specimen is centrifuged, and the cells (both normal and cancerous, if present) are separated from the urine supernatant. DNA is then extracted from the cell pellet using the Bladder EpiCheck Extraction kit (P/N NX899090-01C). The extracted DNA is digested using a methylation-sensitive restriction enzyme mix, which cleaves DNA at specific recognition sequences if they are unmethylated. Methylated DNA is protected from enzymatic digestion and therefore remains intact.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the Bladder EpiCheck Kit, based on the provided FDA 510(k) summary:

    Device: Bladder EpiCheck Kit
    Intended Use: Qualitative detection of DNA methylation patterns of 15 loci in human DNA associated with transitional cell carcinoma of the bladder, used as a non-invasive method to monitor for tumor recurrence in conjunction with cystoscopy in patients previously diagnosed with Non-Muscle Invasive Bladder Cancer.

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state pre-defined acceptance criteria in a dedicated table format. However, performance metrics are reported. Based on the "Method Comparison" section (9.2 Clinical), the de facto acceptance criteria appear to be tied to non-inferiority against the predicate device (UroVysion Bladder Cancer Kit) and sufficient performance against a Gold Standard.

    Performance Metric | Implicit Acceptance Criteria (Inferred from comparison to Predicate / Gold Standard) | Reported Device Performance (Bladder EpiCheck)
    Against Gold Standard (Clinical Performance):
    Accuracy | Must be clinically acceptable | 78.8% ([74.8%; 82.4%])
    Sensitivity | Must be clinically acceptable | 66.7% ([58.4%; 74.0%])
    Specificity | Must be clinically acceptable | 84.2% ([79.8%; 87.9%])
    Positive Predictive Value (PPV) | Must be clinically acceptable | 65.3% ([57.1%; 72.6%])
    Negative Predictive Value (NPV) | Must be clinically acceptable | 85.1% ([80.7%; 88.6%])
    Against Predicate Device (Comparative Effectiveness):
    Sensitivity Difference | Non-inferior (e.g., within a predefined margin) | +4.82% (Bladder EpiCheck higher than UroVysion) ([-5.7%; 15.3%])
    Specificity Difference | Non-inferior (e.g., within a predefined margin) | -2.97% (Bladder EpiCheck lower than UroVysion) ([-7.8%; 1.9%])
    Analytical Performance (Examples):
    Interlaboratory Reproducibility (Overall Agreement, lab to lab, contrived samples) | High agreement (e.g., >95%) | 99.3% ([98.28%; 99.72%])
    Interlaboratory Reproducibility (Overall Agreement, lab to lab, clinical samples) | High agreement (e.g., >95%) | 96.5% ([94.0%; 98.0%])
    Operator-to-Operator/Day-to-Day Reproducibility (Overall Agreement) | High agreement (e.g., >95%) | 99% ([96.4%; 99.7%]) for Operator 1 and 99% ([94.6%; 99.8%]) for Operator 2
    Lot-to-Lot/Instrument-to-Instrument Reproducibility (Overall Agreement) | High agreement (e.g., >95%) | 100.0% ([99.09%; 100.0%])
    Functional Limit of Detection (fLoD) | Clinically relevant lower limit | 0.186 ng/well (2.23 ng/sample)
    Tumor Limit of Detection (tLoD) | Clinically relevant lower limit | 7.5% tumor DNA fraction (~0.17 ng tumor DNA)
    Methylation Limit of Detection (mLoD) | Clinically relevant lower limit | 0.348% for BE-1, 0.06681% for BE-2
    Digestion Restriction Efficiency | >99% | >99.9% for all 15 biomarkers
    Robustness (contrived samples) | High agreement (e.g., >95%) | 98.5% ([96.77%; 99.31%])
    Robustness (clinical samples) | High agreement (e.g., >95%) | 99.3% ([96.9%; 99.8%])
    Lack of Interference | No significant interference at clinical levels | No evidence of interference caused by substances tested at clinically relevant physiological ranges.
    In-use & Real-time Stability (Overall Agreement) | No significant performance change | 100% agreement (kit performance up to 486 days, based on descriptions)
    Freeze-Thaw Stability (Overall Agreement) | No significant performance change | No significant performance changes and low variability in EpiScore value between the 3 timepoints
    Shipping Stability (Overall, Positive, Negative Agreement) | 100% | 100%
    Sample Stability (fresh urine) | Clinically acceptable duration | 99.01% ([95.68%; 99.78%]) for 5 days
    Sample Stability (pelleted urine) | Clinically acceptable duration | 100.0% ([97.08%; 100.0%]) for 19 days at -20°C
    Sample Stability (extracted DNA) | Clinically acceptable duration | 98.25% ([94.84%; 99.42%]) for 30 days at -20°C
    DNA Extraction Efficiency (Overall, Positive, Negative Agreement) | 100% | 100%
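The clinical metrics in the table are all derived from a single 2x2 table of device calls versus the Gold Standard. The summary does not report the underlying counts; the sketch below uses made-up counts chosen only to roughly reproduce the reported percentages, not the study's actual confusion matrix:

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Sensitivity, specificity, PPV, NPV, and accuracy from a 2x2 table.
    The example counts below are illustrative, not the study's data."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

print(diagnostic_metrics(tp=92, fp=49, fn=46, tn=262))
```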

    2. Sample Size and Data Provenance for Test Set (Clinical Performance Study)

    • Sample Size:
      • Against Gold Standard: 583 subjects (total voided urine specimens collected from 583 subjects). Valid Bladder EpiCheck and GS results were obtained from 449 subjects.
      • Against Predicate Device (Matched Cases): Valid Bladder EpiCheck, UroVysion, and GS results were obtained from 352 samples.
      • Specificity in Urology Patients without Bladder Cancer: 147 subjects.
      • Clinical Specificity - Cross Reactivity with Other Cancers: 147 urine samples.
    • Data Provenance:
      • Country of Origin: U.S. and Canada (from 11 academic and urology specialty medical centers).
      • Retrospective or Prospective: The main clinical study (Method Comparison) was a multi-center, prospective, IRB-approved longitudinal study. The specificity study in urology patients without bladder cancer was also multi-center, prospective. The cross-reactivity study utilized banked remnant de-identified urine samples, which would generally be considered retrospective.

    3. Number of Experts and their Qualifications for Establishing Ground Truth for the Test Set

    The document does not specify the number of experts or their qualifications for establishing the ground truth. It states that positive cases were confirmed by "cystoscopy and pathology." This implies that the ground truth was established by clinical diagnoses and pathological examination of tissue, presumably performed by trained urologists and pathologists, which are standard practices. No "experts" are explicitly described as reviewing cases for the purpose of establishing a "ground truth" consensus for the study, beyond the routine clinical workflow.

    4. Adjudication Method for the Test Set

    The adjudication method is implicitly described for the Gold Standard (GS) definition:

    • "a subject was considered 'positive' if the interpretation for either cytology or the combined cystoscopy/pathology results were positive"
    • "and a subject was considered 'negative' if both cytology and the combined cystoscopy/pathology results were negative."

    This indicates a hierarchical or "any positive result makes it positive" adjudication for the ground truth definition. There is no explicit mention of an adjudication panel (e.g., 2+1, 3+1) for cases of disagreement between cytology and pathology results, or for disagreements among multiple readers of the ground truth modalities.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No, a multi-reader multi-case (MRMC) comparative effectiveness study was not conducted. This device is a molecular diagnostic test (in-vitro diagnostic) and not an imaging AI device that assists human readers. Therefore, the concept of human readers improving with AI vs. without AI assistance does not apply in this context. The comparison was between the Bladder EpiCheck test result and clinical ground truth (cytology/pathology), and between Bladder EpiCheck test results and the predicate device's test results.

    6. Standalone (Algorithm Only Without Human-in-the-Loop) Performance Study

    Yes, the device's performance, as reported in the "Method Comparison" section, is a standalone (algorithm only without human-in-the-loop) performance. The Bladder EpiCheck Kit provides a qualitative result (positive/negative) based on its algorithm (EpiScore), and this result is compared directly to the established Gold Standard.

    7. Type of Ground Truth Used

    The primary ground truth used for the clinical performance study consisted of:

    • Combined Cystoscopy/Pathology data: This is the gold standard for definitive diagnosis of bladder cancer recurrence.
    • Clinical Cytology: Urine cytology was also part of the Gold Standard definition.

    Therefore, the ground truth is a combination of pathology (histopathological examination of biopsy/resection specimens) and outcomes data (clinical diagnosis via cystoscopy, supplemented by cytology).

    8. Sample Size for the Training Set

    The document refers to "Clinical Cutoff (Training and Feasibility Data)" in section 9.1.

    • Total for software algorithm development: 178 samples.
    • First set (for cut-off definition): 109 samples (40 control, 69 UCC positive).
    • Second set (for cut-off validation): 67 samples (51 control, 16 UCC positive).

    It's important to note that this "training" refers to the development and validation of the EpiScore algorithm's cutoff, not necessarily a machine learning training set in the AI sense.

    9. How the Ground Truth for the Training Set Was Established

    For the "training" set (used for algorithm development and cutoff definition, section 9.1), the ground truth was established by:

    • "urine samples collected from control patients with a history of bladder cancer and bladder cancer positive patients confirmed by cystoscopy and pathology."
    • "Urothelial Cell Carcinoma (UCC) positive patients confirmed by pathology."

    Similar to the test set, the ground truth for algorithm development was based on definitive clinical diagnosis and pathological confirmation.


    K Number: DEN200044
    Date Cleared: 2022-11-09 (854 days)
    Regulation Number: 866.5980
    Type: Direct
    Predicate For: N/A

    Intended Use

    The Eonis™ SCID-SMA kit is intended for the qualitative detection of the SMN1 gene exon 7 as an aid in screening newborns for Spinal Muscular Atrophy (SMA). The test is intended for DNA from blood specimens dried on a filter paper and for use on the QuantStudio™ Dx Real-Time PCR instrument.

    This test is only intended for use for screening of SMA that bear the homozygous deletion of SMN1 exon 7.

    This test is not intended for use as a diagnostic test and a positive screening result should be followed by confirmatory testing.

    Device Description

    The Eonis SCID-SMA kit contains reagents to detect three biomarkers: TREC, KREC and exon 7 in the SMN1 gene. Detection of TREC and KREC was cleared in K203035.

    The newborn screening workflow for the Eonis SCID-SMA kit includes:

    • Two liquid handling platforms (one for DNA extraction and one for PCR master mix setup)
    • QuantStudio Dx Real-Time PCR instrument
    • Eonis Analysis Software

    Each Eonis SCID-SMA kit contains reagents for up to 384 reactions or 1152 reactions including kit controls. The kit contents are listed in Table 1. Materials required but not provided include the Eonis DNA Extraction Kit, Eonis Analysis Software and consumables (Table 2).

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:

    Acceptance Criteria and Device Performance

    Acceptance Criteria | Reported Device Performance
    Qualitative Detection of SMN1 gene exon 7 (output: "Presumptive Positive" or "Presumptive Normal") | 100% qualitative agreement in precision/reproducibility studies. Precision (SMN1 presence call): Sample 11 (SMA positive) showed 100% (107/107) "Above Cut-off" (Presumptive Positive); the other normal/carrier samples (1-8, 10, 12-13) showed 100% "Below Cut-off" (Presumptive Normal), with one exception in Sample 9 (99.1% Below Cut-off, 1/106 incorrect call). Reproducibility (SMN1 presence call): all 13 samples showed 100% agreement (150/150 replicates) for qualitative calls across study sites, operators, and runs, including Sample 11 (SMA positive) consistently yielding "Above Cut-off" (Presumptive Positive) results and the other samples consistently yielding "Below Cut-off" (Presumptive Normal). Filter paper reproducibility: 100% qualitative agreement for all tested samples across filter paper brands and lots. qPCR method equivalency: 100% qualitative agreement between 384-well and 96-well qPCR methods. DNA extraction equivalency: 100% concordance for qualitative calls among the JANUS handler, a second commercial liquid handler, and manual extraction.
    False Positive Rate for SMN1 Detection (desirable: low) | Clinical study: 0.0% false positive rate (no presumptive positive results among the 3018 routine newborn specimens from unaffected individuals). Limit of blank study: 0.0% false positive rate (analyte-negative samples consistently yielded no Ct value).
    False Negative Rate for SMN1 Detection (desirable: low) | Clinical study: 0.0% false negative rate (0 of 51 confirmed SMA cases misclassified as presumptive normal).
    Concordance with Genetic Testing (Accuracy) | Accuracy study: 100% positive percent agreement (51/51 confirmed SMA cases correctly identified) and 100% negative percent agreement (55/55 confirmed negative samples correctly identified), resulting in 100% overall agreement.
    Specimen Stability for DBS samples | No differences in qualitative calls or SMN1 Ct values at day 28 compared to day 0 under varying temperature and humidity conditions.
    Eonis DNA Extraction Kit In-Use and On-Board Stability | Stable for 14 days at +19 to +25 °C after first opening.
    Eonis DNA Extraction Kit Real-Time and Transport Simulation Interim Stability | No difference in SMN1 Ct values up to 7 months; can be shipped at room temperature; supports a shelf life of 6 months.
    Eonis SCID-SMA Kit Interim In-Use and On-Board Stability | PCR Reagents 1 and 2 stable for 14 days at +2 °C to +8 °C after thawing; SCID-SMA Kit Controls stable for 14 days at -30 °C to -16 °C after first use.
    Eonis SCID-SMA Kit Real-Time and Transport Simulation Interim Stability | No change in SMN1 Ct values for assay controls or PCR Reagents 1 or 2 up to 10 months; supports a shelf life of 180 days (6 months).
    Control of Contamination (Carry-Over) | Analytical study: 4% false-negative rate observed in artificially high analyte-positive samples in a checkerboard configuration. Clinical validation: 0% false negative rate; no clinically significant carry-over observed.

    Study Details

    2. Sample Sizes Used for the Test Set and Data Provenance

    • Precision Study (Test Set 1):
      • Sample Size: 13 representative DBS samples (SMA positive, carrier, and normal), tested in 108 replicates (106 for some) per sample over 54 runs. Total measurements: 13 samples * 108 measurements = 1404 measurements.
      • Data Provenance: Analytical performance studies conducted using contrived samples (cord blood or adult whole blood with hematocrit adjusted to neonate levels). SMA positive sample created by spiking SMN1 negative Coriell cells into leukocyte-depleted blood.
    • Reproducibility Study (Test Set 2):
      • Sample Size: 13 samples (same as precision study), tested in 150 replicates per sample across 3 study sites over 5 operating days. Total measurements: 13 samples * 150 measurements = 1950 measurements.
      • Data Provenance: Contrived samples (cord blood or adult whole blood with hematocrit adjusted to neonate levels).
    • Filter Paper Reproducibility Study (Test Set 3):
      • Sample Size: 6 samples (from precision study set) prepared on 3 lots of 2 brands of filter paper each (total 36 conditions). 5 replicates per condition. Total 900 results.
      • Data Provenance: Contrived samples.
    • Limit of Blank Study (Test Set 4):
      • Sample Size: 150 replicates of contrived analyte-negative samples per kit lot (total 300 replicates across 2 kit lots).
      • Data Provenance: Contrived samples (SMN1-negative cells from Coriell Institute into leukocyte-depleted human blood).
    • Interference Study (Test Set 5):
      • Sample Size: 7 interfering substances, 2 interferent levels, 3 target DNA levels, 13 replicates per level. Total 544 sample results.
      • Data Provenance: Contrived samples (SMN1 presumptive normal).
    • qPCR Method Equivalency Study (Test Set 6):
      • Sample Size: 13 samples (from precision study set), 5 replicates per sample, test for 2 PCR methods. Total 1560 results.
      • Data Provenance: Contrived samples.
    • DNA Extraction Equivalency Study (Test Set 7):
      • Sample Size: 7 samples (from precision study set), 5 replicates per sample, test for 3 extraction/PCR methods. Total 1050 results.
      • Data Provenance: Contrived samples.
    • Clinical Screening Study (Test Set 8):
      • Sample Size: 3069 DBS specimens. This included 51 retrospective archived DBS specimens from subjects confirmed positive for SMA and 3018 routine newborn screening specimens.
      • Data Provenance: Retrospective archived DBS specimens from the US and Denmark. Routine newborn screening specimens obtained from the Danish Newborn Screening Biobank (NBS-Biobank).
    • Accuracy Study (Test Set 9):
      • Sample Size: 51 confirmed positive SMA samples and 55 presumed negative DBS samples. Total 106 samples.
      • Data Provenance: Confirmed positive SMA samples (molecular genetic testing result showing homozygous deletion of SMN1 exon 7) and presumed negative DBS samples matched by storage time.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    The document mentions that the clinical status of the routine subjects in the Clinical Screening Study was determined through a "retrospective review by clinical experts." However, it does not specify the number of experts or their specific qualifications (e.g., "radiologist with 10 years of experience"). For the confirmed SMA cases, "confirmatory test results" (molecular genetic testing) were used as the comparator, which is a definitive method rather than expert consensus on imaging.

    4. Adjudication Method for the Test Set

    The document does not describe any specific adjudication method (like 2+1 or 3+1) for the interpretation of results in the test sets. For the Eonis SCID-SMA kit, the interpretation of results appears to be largely automated by the Eonis Analysis Software based on pre-set Ct cut-off values. For the clinical screening study, samples with values above the cut-off were re-tested in duplicate to obtain the final result.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

    No, an MRMC comparative effectiveness study was not performed. This device is a quantitative PCR-based assay with automated interpretation software, not an imaging-based AI system that assists human readers. Therefore, the concept of human readers improving with AI assistance is not applicable here.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    Yes, the performance of the Eonis SCID-SMA Kit is essentially standalone. The Eonis Analysis Software "automatically flags quality control (QC) violations and interprets results according to the cut-offs," presenting results as "Presumptive Positive" or "Presumptive Normal." While human operators perform the lab procedures (DNA extraction, PCR setup), the final interpretation of the test result itself is automated by the algorithm based on the measured Ct values against a pre-set cut-off.
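The exact boundary handling is not described, but the automated call logic amounts to comparing the SMN1 Ct value against the pre-set cut-off (31.24 per this summary), with late or absent amplification reported as "Presumptive Positive." A minimal sketch of that logic, assuming an inclusive boundary and omitting QC flags and the duplicate re-testing step:

```python
from typing import Optional

def interpret_smn1(ct_value: Optional[float], cutoff: float = 31.24) -> str:
    """Illustrative sketch of the automated SMN1 call: a Ct at or above the
    pre-set cut-off (late or absent amplification, consistent with homozygous
    deletion of SMN1 exon 7) is reported as "Presumptive Positive"; earlier
    amplification is "Presumptive Normal". Boundary handling is an assumption;
    QC flagging and duplicate re-testing are omitted."""
    if ct_value is None or ct_value >= cutoff:
        return "Presumptive Positive"
    return "Presumptive Normal"

print(interpret_smn1(27.8))   # typical amplification -> Presumptive Normal
print(interpret_smn1(None))   # no amplification      -> Presumptive Positive
```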

    7. The Type of Ground Truth Used

    • Analytical Studies (Precision, Reproducibility, Limit of Blank, Interference, Method/Extraction Equivalency, Carry-over): Ground truth was based on the contrived nature of the samples. For example, SMA positive samples were created by spiking specific cells, and analyte-negative samples were prepared to contain no target analyte.
    • Clinical Screening Study:
      • For SMA positive cases (51 samples): Ground truth was established by "confirmatory test results" (molecular genetic testing showing homozygous deletion of SMN1 exon 7).
      • For routine newborn screening specimens (3018 samples): Ground truth was established by "retrospective review by clinical experts to confirm the routine subject cohort samples were from unaffected individuals." This suggests a form of clinical outcome/diagnosis as ground truth, likely based on further clinical evaluations, not just genetic testing for SMN1 deletion.
    • Accuracy Study:
      • For confirmed SMA samples (51 samples): Ground truth was "molecular genetic testing result showing homozygous deletion of SMN1 gene exon 7."
      • For presumed negative samples (55 samples): Ground truth was confirmed by "molecular genetic testing for SMN1" using a CE-IVD labeled assay. This is molecular genetic testing/pathology ground truth.

    8. The Sample Size for the Training Set

    The document does not explicitly state a separate "training set" for the Eonis SCID-SMA Kit. As a PCR-based assay, its "learning" primarily involves setting the appropriate Ct cut-off value (31.24). This cut-off is pre-set in the Eonis Analysis Software. The document does not describe how this specific cut-off was initially determined (e.g., through a separate study for calibration or training). The studies described here are verification and validation studies to demonstrate the performance with that pre-set cut-off.

    9. How the Ground Truth for the Training Set Was Established

    Since a distinct "training set" is not explicitly mentioned for algorithmic development in a machine learning sense, the establishment of ground truth for training is not detailed. The inherent "training" of such a system would involve optimizing the Ct cut-offs based on a set of known positive and negative samples to achieve desired diagnostic sensitivity and specificity. However, the provided text focuses on the validation of the device's performance given its pre-determined operational parameters (like the 31.24 Ct cut-off).


    K Number: K203035
    Date Cleared: 2022-11-09 (765 days)
    Regulation Number: 866.5930
    Predicate For: N/A

    Intended Use

    The Eonis™ SCID-SMA kit is intended for the semi-quantitative determination of TREC (T-cell receptor excision circle) as an aid in screening newborns for Severe Combined Immunodeficiency (SCID) and for the semi-quantitative determination of KREC (Kappa-deleting recombination excision circle) as an aid in screening newborns for X-linked agammaglobulinemia (XLA). The test is intended for DNA from blood specimens dried on a filter paper and for use on the QuantStudio™ Dx Real-Time PCR instrument.

    This test is not intended for screening of SCID-like Syndromes, such as DiGeorge Syndrome, or Omenn Syndrome. It is also not intended to screen for less acute SCID syndromes such as leaky-SCID or variant SCID. The test is not indicated for screening B-cell deficiency disorders other than XLA, such as atypical XLA, or for screening of XLA carriers.

    This test is not intended for use as a diagnostic test and a positive screening result should be followed by confirmatory testing.

    Device Description

    The Eonis SCID-SMA kit is a multiplex real-time PCR-based assay. It uses target sequence-specific primers and TaqMan™ probes to amplify and detect three targets — TREC, KREC, and RPP30 — in the DNA extracted from newborn dried blood spots (DBS) using the Eonis DNA Extraction kit, in a single PCR reaction.

    Each Eonis SCID-SMA kit contains reagents for up to 384 reactions (for 3241-001U) or 1152 reactions (for 3242-001U) including kit controls.

    AI/ML Overview

    The document describes the Eonis SCID-SMA kit, a real-time PCR-based assay for newborn screening of Severe Combined Immunodeficiency (SCID) and X-linked agammaglobulinemia (XLA). The study provided demonstrates the device's analytical and screening performance to support its substantial equivalency to a predicate device.

    Here's a breakdown of the requested information:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state "acceptance criteria" as a separate table. However, it presents Sensitivity and Specificity for both TREC and KREC analytes, which serve as key performance metrics. These values are compared to the predicate device.

    Reported Device Performance of Eonis SCID-SMA Kit:

    | Analyte | Metric | Percent | Confidence Limits |
    | --- | --- | --- | --- |
    | TREC | Sensitivity | 100 % | 80.5 % - NA |
    | TREC | False-negative rate | 0 % | NA - 19.5 % |
    | TREC | Specificity | 99.7 % | 99.4 % - 99.9 % |
    | TREC | False-positive rate | 0.3 % | 0.1 % - 0.6 % |
    | KREC | Sensitivity | 100 % | 54.1 % - NA |
    | KREC | False-negative rate | 0 % | NA - 45.9 % |
    | KREC | Specificity | 99.7 % | 99.4 % - 99.9 % |
    | KREC | False-positive rate | 0.3 % | 0.1 % - 0.6 % |

    Comparison to Predicate Device (PerkinElmer EnLite Neonatal TREC Kit) for TREC:

    | Analyte | Metric | Percent | Confidence Limits |
    | --- | --- | --- | --- |
    | TREC | Sensitivity | 100 % | 79.4 % - NA |
    | TREC | False-negative rate | 0 % | NA - 20.6 % |
    | TREC | Specificity | 99.7 % | 99.4 % - 99.8 % |
    | TREC | False-positive rate | 0.3 % | 0.2 % - 0.6 % |

    The reported performance meets or exceeds that of the predicate device, with 100% sensitivity and high specificity for both analytes.
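
    The wide lower confidence limits on sensitivity follow from the small number of confirmed positives (17 SCID and 6 XLA cases, listed in the next section). Assuming the reported limits are exact (Clopper-Pearson) binomial intervals, which the document does not state explicitly, the 80.5% and 54.1% lower bounds can be reproduced as follows.

```python
# Reproducing the reported lower confidence limits for 100% sensitivity,
# assuming two-sided 95% exact (Clopper-Pearson) intervals; the submission
# does not say which interval method was actually used.

def exact_lower_bound_all_positive(n, alpha=0.05):
    """Clopper-Pearson lower bound when all n true positives are detected."""
    return (alpha / 2) ** (1.0 / n)

print(round(exact_lower_bound_all_positive(17), 3))  # ~0.805 (TREC: 17/17 SCID cases)
print(round(exact_lower_bound_all_positive(6), 3))   # ~0.541 (KREC: 6/6 XLA cases)
```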

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Screening Performance Study (Test Set):
      • Total DBS specimens: 3090
      • Confirmed SCID positive: 17
      • Confirmed XLA positive: 6
      • Normal newborn screening specimens: 3018 (retrospective archived)
    • Data Provenance: Retrospective archived dried blood spot specimens.
      • Country of Origin: US and Denmark.
      • Study conducted in Denmark.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • Number of Experts: Not explicitly stated as a specific number. The document mentions "clinical experts" were used.
    • Qualifications of Experts: The document states "clinical experts" retrospectively reviewed the clinical status of routine subjects to confirm they were from unaffected individuals. Further specific qualifications (e.g., specific medical specialty, years of experience) are not provided in this document.

    4. Adjudication Method for the Test Set

    • Adjudication Method: The document describes a retesting protocol for initial "screen positive" results.
      • "The specimens having TREC and KREC levels below the cut-off values in the initial round of testing were re-tested in duplicate."
      • "The final results (presumptive positive, invalid result) were classified after the second round of testing."
      • This amounts to internal re-adjudication by duplicate retesting of samples that fall below the cut-off; there is no mention of external expert adjudication of discordant results or of a formal scheme such as 2+1 or 3+1. A sketch of the two-round flow is shown below.
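
    The exact rule for classifying results after the duplicate retest is not spelled out in the summary. The sketch below assumes one plausible reading (a presumptive positive requires both retest replicates to remain below the cut-off) purely to illustrate the two-round flow; that rule is an assumption, not taken from the submission.

```python
# Illustrative two-round screening flow. The rule for combining the duplicate
# retest results is an assumption made for this sketch only.

def screen(initial_below_cutoff, retest_below_cutoff=None):
    """Return 'negative', 'retest required', or 'presumptive positive'."""
    if not initial_below_cutoff:
        return "negative"                # initial TREC/KREC level at or above cut-off
    if retest_below_cutoff is None:
        return "retest required"         # below cut-off: re-test in duplicate
    # Assumed rule: both duplicates must remain below the cut-off.
    return "presumptive positive" if all(retest_below_cutoff) else "negative"

print(screen(False))                # 'negative'
print(screen(True))                 # 'retest required'
print(screen(True, [True, True]))   # 'presumptive positive'
```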

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done

    • MRMC Study: No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done. This device is an in-vitro diagnostic (IVD) kit for semi-quantitative determination of biomarkers, not an AI assisting human readers of medical images. Therefore, the concept of human readers improving with AI assistance is not applicable to this type of device.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done

    • Standalone Performance: Yes. All of the reported performance data (sensitivity, specificity, reproducibility, precision, limit of detection, and linearity) reflect the standalone performance of the Eonis SCID-SMA kit (assay chemistry, instrument, and analysis software) on dried blood spot samples. Because the device operates as an automated assay, its analytical and clinical performance is inherently "standalone".

    7. The Type of Ground Truth Used

    • Ground Truth Type:
      • Confirmatory testing: For SCID and XLA positive cases, "Confirmatory test results were used as the comparator." This implies clinical diagnosis or gold standard laboratory tests.
      • Clinical expert retrospective review: For normal newborn screening specimens, "The clinical status of the routine subjects was determined through a retrospective review by clinical experts to confirm the routine subject cohort samples were from unaffected individuals." This indicates clinical outcomes or medical records adjudicated by experts.

    8. The Sample Size for the Training Set

    • Training Set Sample Size: The document does not explicitly state the sample size of a separate "training set" for the assay. The study described is a clinical validation (test set). For assay development (which would include "training" for establishing parameters like cut-offs), the document mentions:
      • Cut-off values were established using "an independent dataset." The size of this independent dataset is not specified.
      • Reproducibility and precision studies used panels of dried blood spots at different TREC/KREC levels, but these are for analytical validation rather than establishing classification criteria.

    9. How the Ground Truth for the Training Set Was Established

    • Training Set Ground Truth Establishment: As no specific "training set" is detailed, the method for establishing ground truth for any data used during the assay's development or cut-off determination (the "independent dataset" mentioned for cut-off study) is not explicitly described. However, it's reasonable to infer that similar methods to the test set ground truth (confirmatory testing for affected individuals and clinical review for unaffected individuals) would have been applied during the development phase.

    K Number
    DEN200062
    Manufacturer
    Date Cleared
    2022-05-24

    (603 days)

    Product Code
    Regulation Number
    866.6110
    Type
    Direct
    Reference & Predicate Devices
    N/A
    Predicate For
    N/A
    Intended Use

    The Parsortix® PC1 system is an in vitro diagnostic device intended to enrich circulating tumor cells (CTCs) from peripheral blood collected in K2EDTA tubes from patients diagnosed with metastatic breast cancer. The system employs a microfluidic chamber (a Parsortix cell separation cassette) to capture cells of a certain size and deformability from the population of cells present in blood. The cells retained in the cassette are harvested by the Parsortix PC1 system for use in subsequent downstream assays. The end user is responsible for the validation of any downstream assay. The standalone device, as indicated, does not identify, enumerate or characterize CTCs and cannot be used to make any diagnostic/prognostic claims for CTCs, including monitoring indications or as an aid in any disease management and/or treatment decisions.

    Device Description

    The Parsortix® PC1 system is a bench top laboratory instrument consisting of five main subsystem components:

    • Parsortix PC1 instrument incorporating a computer, keypad and display, pneumatic and hydraulic components including reservoir bottles and tubes, a separation cassette mounting clamp and other electronics to control the instrument hardware and behavior.
    • Parsortix PC1 Software consisting of a Windows 7 Embedded operating system together with dedicated Parsortix PC1 proprietary Windows application software (Software).
    • A set of embedded and encrypted Protocol Files (Protocols) that are sequences of simple instructions, interpreted by the Software and used to control the instrument fluidic and hydraulic components and circuits. The Protocols supplied embedded within the Software enable the four core instrument processes: Clean, Prime, Separate, and Harvest.
    • Parsortix PC1 MBC-01 Metastatic Breast Cancer Kit which contains Separation Cassettes (n = 10, 50 or 100), Cleaning Cassettes (n = 1, 5, or 10; one Cleaning Cassette for every multiple of 10 separation cassettes), an encrypted instrument protocol file distributed on a USB memory stick as required to perform the proposed intended use, cassette labels, and one package insert (per kit) containing instructions for use and expected performance data for the Parsortix PC1 instrument when used in conjunction with the MBC-001 Metastatic Breast Cancer Kit.
    • Parsortix PC1 ICT-01 Instrument Control Test Kit which contains Control tubes containing a known, aliquoted cell suspension used to periodically confirm acceptable performance of the system, Separation Cassettes, polystyrene 12 mL 16 x 100 mm tubes (n = 10 or 25), and one package insert (per kit) containing instructions for use for the ICT-001 Instrument Control Test Kit.
    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided text:

    Acceptance Criteria and Device Performance

    The provided document details various analytical performance studies demonstrating the device's capabilities. While explicit "acceptance criteria" are not presented in a single, clear list with pass/fail thresholds, the studies' objectives and reported results implicitly define what was considered acceptable performance for the device's de novo classification. The performance data is summarized below based on these implicit criteria.

    Table of Implicit Acceptance Criteria and Reported Device Performance

    | Acceptance Criteria (Implied from Study Objectives) | Reported Device Performance |
    | --- | --- |
    | Cell Recovery (Linearity & Rate) | |
    | Ability to linearly recover live SKBR3 cells (125-1000 range) | Linear model (slope 0.6544) with average recovery of ~65% (CI: 62%-69%). |
    | Ability to linearly recover live SKBR3, MCF7, Hs578T cells (2-100 range) | Linear model: SKBR3 ~69% (CI: 65%-73%); MCF7 ~76% (CI: 73%-79%); Hs578T ~76% (CI: 74%-79%). |
    | Comparison of live vs. fixed cell recovery | Fixed SKBR3 recovery: 88% (more efficient than live). Live SKBR3 recovery: 69%. |
    | Detection Limit | |
    | Minimum number of spiked tumor cells to recover at least one cell >95% of the time | SKBR3: 3 cells; Hs 578T: 4 cells; MCF7: 5 cells. |
    | Limit of Blank | 0 cells (no tumor cells detected in unspiked healthy donor blood). |
    | Blood Volume Impact | |
    | No significant impact on efficiency across 5 mL, 7.5 mL, 10 mL volumes (direct harvest) | Mean % SKBR3 harvest: 7.5 mL: 71.1%; 5 mL: 62.3%; 10 mL: 66.3% (average difference from 7.5 mL: -8.8% for 5 mL, -4.8% for 10 mL; CIs indicate no significant differences across volumes). |
    | Impact of Cytospin™ slide deposition on recovery | Significant cell loss observed. Mean % SKBR3 deposited: 7.5 mL: 23.5%; 5 mL: 24.2%; 10 mL: 29.1%. |
    | Blood Stability | |
    | No significant impact on recovery for samples stored at RT or 4°C for up to 72 hours | Mean % harvest (control 71.2%): 24 h RT: 81.6%; 48 h RT: 74.9%; 72 h RT: 71.1%; 24 h 4°C: 72.9%; 48 h 4°C: 72.5%; 72 h 4°C: 74.1% (CIs indicate no significant impact). |
    | Impact on processing time / residual nucleated cells | Storage at RT >4 h or 4°C >48 h increases residual nucleated cells. RT >24 h may increase processing time. |
    | Cell Carryover | |
    | Absence of any cell carryover between samples | 0 of 220 PBS harvests showed fluorescently labeled cells. |
    | Cleaning Reagent Carryover | |
    | Residual cleaning detergent not interfering with cell recovery/morphology/molecular evaluation | No more than 0.01% residual cleaning detergent, which was demonstrated not to impact recovery, morphology, or RNA evaluation. |
    | Cassette Lot Performance | |
    | Consistent performance across multiple cassette lots | Overall mean % harvest: 81.4% (SD 14.4%, %CV 17.7%), range 52.6% to 100%. Overall mean % capture: 84.0% (SD 13.2%, %CV 15.6%), range 57.6% to 100%. |
    | Interfering Substances | |
    | No significant interference from tested cancer drugs | No significant differences in captured/harvested SKBR3 cells. (Paclitaxel at 80 µg/mL, however, showed potential for sample loss/quality reduction.) |
    | No significant interference from high albumin or triglycerides | No impact on harvested cells or processing time. |
    | No significant interference from different hematocrit levels on cell capture/harvest | No interference for capture/harvest; high hematocrit increased processing time/residual WBCs, low hematocrit significantly increased residual WBCs. |
    | High WBC count not interfering with SKBR3 cell capture/harvest | No interference with capture/harvest (up to 16x10^9 cells/L). Elevated WBCs lead to increased residual nucleated cells (addressed by downstream assay compatibility). |
    | Compatibility of WBC background with downstream qPCR assay | No negative impact on qPCR performance for most genes (except ERBB2). |
    | Compatibility of WBC background with downstream cytology, FISH, and IF evaluation | No significant impact on quality of WBCs or SKBR3 cells observed in these evaluations. |
    | Reproducibility and Repeatability (Precision) | |
    | Acceptable %CVs for various precision studies (fixed/live cells, PBS/blood, single/multi-site) | 10-day single site (fixed SKBR3, PBS): overall avg harvest 81.3%, repeatability %CV 14.4%, within-laboratory %CV 14.5%. 20-day 3-site (fixed SKBR3, PBS): overall avg harvest 75.3%, repeatability %CV 17.0%, reproducibility %CV 20.6%. 20-day single site (fixed SKBR3, blood): overall avg harvest 89.4%, repeatability %CV 10.2%, within-laboratory %CV 10.3%. 20-day single site (live SKBR3, blood): overall avg harvest 70.4%, repeatability %CV 21.1%, within-laboratory %CV 22.0%. Combined 20-day precision (fixed/live SKBR3, blood): repeatability %CV 15.4%, reproducibility %CV 23.2%. 5-day single site (live SKBR3, MCF7, Hs578T, blood, various spike levels): within-run repeatability %CVs 12.3% to 32.4%, within-laboratory %CVs 13.3% to 34.1%; overall (5-50 cells) repeatability and reproducibility %CV 26.3%. |
    | Clinical Performance (Enrichment of CTCs) | |
    | Comparison of CTC detection in MBC patients vs. healthy volunteers (IF staining) | HV: 6.9% (5/72) had ≥1 CTC (DAPI+, CD45-, EpCAM+/CK+). MBC: 45.3% (34/75) had ≥1 CTC. Significantly larger proportion in MBC patients (Fisher's exact p < 0.0001 implied by data). |
    | Comparison of CTC detection in MBC patients vs. healthy volunteers (cytological evaluation by pathologist) | HV: 1.6% (3/192) had ≥1 CTC. MBC: 15.8% (32/202) had ≥1 CTC. Significantly higher proportion in MBC patients. |
    | Utility of harvested cells for downstream molecular analysis (qPCR) | Demonstrated that harvested cells could be used for representative molecular techniques (qPCR). |
    | Utility of harvested cells for downstream histopathological/cytological techniques (cytology, FISH, IF) | Demonstrated that harvested cells could be used for these techniques. |
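
    One way to relate the detection-limit rows above to the measured recovery rates is a simple independence model: if each spiked cell is recovered with probability p, the probability of harvesting at least one of n cells is 1 - (1 - p)^n. The sketch below (the model and the plugged-in mean recovery rates are illustrative simplifications, not the study's stated analysis) gives the smallest n under that idealized assumption; the empirically determined limits of 3-5 cells sit at or above these values, as expected once run-to-run variability is taken into account.

```python
# Minimal sketch: smallest spike level giving >= 95% probability of recovering
# at least one cell, assuming each cell is recovered independently with
# probability p. The independence model is an illustrative assumption only.

def min_cells_for_detection(p, target=0.95):
    n = 1
    while 1 - (1 - p) ** n < target:
        n += 1
    return n

# Approximate mean recovery rates from the low-level recovery study above.
for cell_line, p in [("SKBR3", 0.69), ("MCF7", 0.76), ("Hs578T", 0.76)]:
    print(cell_line, min_cells_for_detection(p))   # idealized lower bounds
```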

    Study Information

    2. Sample Sizes Used for the Test Set and Data Provenance

    The document details numerous analytical validation studies and two clinical studies. Given the nature of a CTC enrichment device, "test set" and "training set" aren't explicitly delineated for algorithm development as they would be for an AI model. Instead, performance is validated through various analytical and clinical studies.

    Analytical Test Sets (Spiked Samples):

    • Cell Recovery Studies:
      • High-level SKBR3 (125-1000 cells): 12 healthy donors (blood collected from 2 donors on each of 6 testing days).
      • Low-level SKBR3, MCF7, Hs578T (2-100 cells): 10 healthy female donors for each cell line tested (8x 10mL tubes from each donor).
    • Detection Limit: Minimum of 60 7.5mL healthy donor blood samples for each cell line (SKBR3, Hs 578T, MCF7) and each spike level tested. Additionally, 63 different healthy donors for limit of blank assessment.
    • Blood Volume Study: Not explicitly stated, but implies multiple healthy donor blood samples across 5mL, 7.5mL, and 10mL volumes for assessment.
    • Blood Stability: Healthy donors whose blood was spiked with SKBR3 cells (samples were stored at RT or 4°C for various durations).
    • Cell Carryover: Healthy donors (blood samples spiked with SKBR3, Hs578T, MCF7 cells). Subsequent PBS samples processed to check carryover.
    • Cleaning Reagent Carryover: Not applicable (tested with deionized water).
    • Cassette Lot Study: 328 runs in total, using healthy donor blood spiked with fixed SKBR3 cells in PBS using the Parsortix Control Tube (PCT-001).
    • Interfering Substances: Healthy donors for spiked blood samples.
    • Reproducibility & Repeatability:
      • 10-day precision: 600 measurements (fixed SKBR3 in PBS).
      • 20-day reproducibility (multi-site): 800 data points (fixed SKBR3 in PBS).
      • 20-day single site precision (live SKBR3 in blood): 400 measurements (from 2 healthy donors each day).
      • 20-day single site precision (fixed SKBR3 in blood): 400 measurements.
      • 5-day single site precision (live SKBR3, MCF7, Hs578T in blood): 900 measurements (from healthy women, 100 measurements per cell line/spike level).

    Clinical Test Sets:

    • Study #1 (ANG-008):
      • Spiked SKBR3 (primary eval): 76 healthy volunteer (HV) subjects and 74 metastatic breast cancer (MBC) patients.
      • Patient-derived CTCs (secondary eval): 72 HV subjects and 75 MBC patients.
    • Study #2 (ANG-002):
      • Approximately 200 MBC patients and 200 HV subjects (actual evaluable: 202 MBC patients and 192 HVs for cytological evaluation).

    Data Provenance:

    • Country of Origin: Not explicitly stated for all studies, but ANGLE Europe Ltd. is the applicant, suggesting likely European origin (or at least studies conducted under their oversight). The 20-day 3-site reproducibility study was conducted across "three different sites," implying multiple locations.
    • Retrospective/Prospective: The analytical and clinical studies described are prospective in nature, as they involve blood collection from healthy donors and patients specifically for the purpose of testing the device's performance.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    The concept of "experts establishing ground truth" here applies primarily to the manual counting of cells and the cytological evaluation of clinical samples.

    • Analytical Studies (Cell Recovery, Detection Limit, Reproducibility):
      • Cell counting (fluorescently labeled cells) was performed by one operator (e.g., in the initial cell recovery study) or by two independent readers (e.g., for cassette lot study, which was used to establish acceptance ranges).
      • No specific qualifications for these operators/readers are provided (e.g., "radiologist with 10 years of experience"). They are implied to be laboratory personnel trained in fluorescence microscopy and cell counting.
    • Clinical Study #1 (ANG-008):
      • Fluorescent microscopy was used to determine the number of SKBR3 cells on slides.
      • IF staining was used to identify CTCs based on specific marker profiles (DAPI+, CD45-, EpCAM+/CK+).
      • A follow-up study involved re-staining IF slides with Wright-Giemsa and evaluation by "a pathologist" (singular). No specific qualifications are given for this pathologist.
    • Clinical Study #2 (ANG-002):
      • For cytological evaluation, cells were assessed by "a qualified pathologist" (singular). No specific qualifications for this pathologist are given.
      • For molecular evaluations, standard techniques were used, implying trained laboratory personnel performed these, but not "experts" in the sense of independent adjudication.

    4. Adjudication Method for the Test Set

    • Analytical Studies: For some analytical studies, such as the Cassette Lot Study, where two independent readers determined the number of cells harvested, there is an implicit "adjudication" by comparison of their counts. However, the exact method for resolving discrepancies (e.g., average, third reader, consensus) is not explicitly stated. For other studies, it mentions "one operator counted," indicating no formal adjudication.
    • Clinical Studies: For the primary clinical endpoints (detection of CTCs by IF or cytological evaluation), the text refers to assessment by "a pathologist" or "a qualified pathologist." This suggests that the final determination for cases was made by a single expert rather than through a multi-reader, adjudicated process (e.g., 2+1, 3+1). If multiple pathologists reviewed, it's not described as an adjudication process to reach a consensus.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

    No. A formal MRMC comparative effectiveness study comparing human readers with AI assistance vs. without AI assistance was not conducted or described. The Parsortix PC1 device is a physical enrichment system, not an AI diagnostic tool, and its evaluation focuses on its ability to isolate cells for subsequent human or machine analysis. The evaluation of its "effect" is on the quality and presence of isolated cells for downstream applications, not on improving human reader performance directly.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done

    The Parsortix PC1 is a standalone device in the sense that it performs the cell enrichment process independently of direct real-time human intervention during the "Separate" phase. The studies described are essentially "algorithm only" (device only) in terms of its core function: enriching cells. Human intervention occurs before (loading samples) and after (harvesting cells, performing downstream analyses, and interpreting those analyses). The performance metrics (recovery rates, detection limits, precision) are entirely dependent on the device's physical and fluidic processes, not on a human-in-the-loop during the core enrichment.

    7. The Type of Ground Truth Used

    The ground truth for device performance was established in several ways:

    • Analytical Studies:
      • Spiked Samples: The "ground truth" for cell recovery and detection limit studies was the known number of cultured tumor cells deliberately spiked into healthy donor blood. These cells were pre-labeled (fluorescently) for easier identification and counting post-processing.
      • Absence of Cells: For "limit of blank" and "cell carryover" studies, the ground truth was the known absence of spiked cells or tumor cells in donor blood/PBS samples.
    • Clinical Studies:
      • Expert Consensus/Pathology: For patient samples, the "ground truth" for the presence of CTCs was established by expert (pathologist) evaluation of the harvested cells using morphological (Wright-Giemsa staining, cytology) and immunofluorescent (IF) criteria. For IF, CTCs were defined by a specific immunophenotype (DAPI+, CD45-, EpCAM+/CK+).
      • Molecular Data: The ability to perform subsequent molecular analyses (qPCR) on harvested cells also implicitly served as a "ground truth" for the utility of the enriched sample.

    8. The Sample Size for the Training Set

    As this device is a physical cell enrichment system and not an AI/ML algorithm, the concept of a "training set" for model development (as in deep learning) does not apply. All the studies described are essentially validation or performance characterization studies.

    9. How the Ground Truth for the Training Set Was Established

    Since there is no "training set" in the context of an AI/ML algorithm for this device, this question is not directly applicable. If "training set" is taken to mean the data used for initial device development and internal optimization before formal validation, then the text does not provide details on how ground truth was established during those earlier stages. However, the ground truth for validation (as described in point 7) was established through known spiked cell counts and subsequent expert evaluation of harvested cells.


    K Number
    K211499
    Manufacturer
    Date Cleared
    2022-01-06

    (237 days)

    Product Code
    Regulation Number
    866.6090
    Predicate For
    N/A
    Intended Use

    The 23andMe Personal Genome Service (PGS) uses qualitative genotyping to detect select clinically relevant variants in genomic DNA isolated from human saliva collected from individuals ≥18 years for the purpose of reporting and interpreting genetic health risks, including the 23andMe PGS Genetic Health Risk Report for Hereditary Prostate Cancer (HOXB13-Related). The 23andMe PGS Genetic Health Risk Report for Hereditary Prostate Cancer (HOXB13-Related) is indicated for reporting of the G84E variant in the HOXB13 gene. The report describes if a person has the G84E variant and if a male is at increased risk for prostate cancer. The variant included in this report is most common in people of European descent. The test report does not describe a person's overall risk of developing any type of cancer, and the absence of a variant tested does not rule out the presence of other variants that may be cancer-related. This test is not a substitute for visits to a healthcare provider for recommended screenings or appropriate follow-up and should not be used for diagnosis, to determine any treatments or medical interventions.

    Device Description

    The 23andMe Personal Genome Service (PGS) is an over-the-counter (direct-to-consumer), DNA testing service that provides information and tools for consumers to learn about and explore their DNA.

    The 23andMe Personal Genome Service (PGS) is a currently marketed, non-invasive genetic information service that combines qualitative genotyping data covering genetic ancestry, traits, and certain heritable health conditions from a single multiplex assay with descriptive information derived from peer-reviewed, published genetic research studies. It is a home use, over-the-counter (direct-to-consumer) DNA testing service intended to provide information and tools for consumers to learn about and explore their DNA.

    Customer saliva is self-collected using the Oragene-Dx® Device manufactured by DNA Genotek, Inc. (previously cleared for carrier screening indications under K141410, and the same collection kit used to generate performance data for DEN140044, DEN160026, DEN170046, K182784, DEN180028, and K193492), which consists of a sealable collection tube containing a stabilizing buffer solution. Once the sample is collected, it is shipped to one of our Clinical Laboratory Improvement Amendments (CLIA) certified laboratories for testing.

    DNA is isolated from the saliva and tested in a multiplex assay using a customized genotyping beadchip, and off the shelf reagents and instrumentation manufactured by Illumina. The multiplex assay simultaneously tests for more than 500,000 variants, including those for the previously authorized indications, as well as for the indications proposed herein.

    Raw data is generated using Illumina GenomeStudio software, and then sent to 23andMe. The data is then analyzed using 23andMe's proprietary Coregen software, where a genotype is determined for each tested SNP. The results for certain of these SNPs are used to generate personalized reports for the customer that provide information about the detected genotype.

    Personalized reports are generated for each user that provide results of the testing performed. These reports tell the user which genetic health risk variant(s) have been detected in their sample and provide information about the disease associated with the variant(s). If no variant was detected, that information is also provided. The personalized reports are designed to present scientific concepts to users in an easy-to-understand format. The reports provide scientifically valid information about the risks associated with the presence of a particular variant. The reports are designed to help users understand the meaning of their results and any appropriate actions that may be taken based on their results.

    The modified components of the Personal Genome Service included in this 510(k) submission are new labeling to include (a) one new variant to be reported, and (b) the qualitative reporting of one's Genetic Health Risk for Hereditary Prostate Cancer (HOXB13-Related).

    Engineering drawings, schematics, etc. of Genetic Health Risk Report for Hereditary Prostate Cancer (HOXB13-Related) are not applicable to this device.

    AI/ML Overview

    The provided document describes the acceptance criteria and study proving the device meets these criteria for the 23andMe PGS Genetic Risk Report for Hereditary Prostate Cancer (HOXB13-Related).

    Here's the breakdown of the information requested:


    1. Table of Acceptance Criteria and Reported Device Performance

    | Performance Metric | Acceptance Criteria | Reported Device Performance |
    | --- | --- | --- |
    | Method Comparison (Accuracy) | ≥99% PPA and NPA for each SNP | >99% PPA and NPA for all genotypes. Study passed the criteria. |
    | Precision / Reproducibility | ≥99% correct calls | 100% correct genotype calls. 100% reproducibility and repeatability. |
    | DNA Input (Lowest Concentration) | ≥95% correct calls at 5 ng/µL | 100% correct genotype calls at 5, 15, and 50 ng/µL. Study passed. |
    | Interfering Substance (Specificity) | 100% accuracy when following IFU | 100% accuracy when following instructions for use. |
    | Labeling Comprehension | ≥90% overall comprehension | Average comprehension rate ranged from 90.7% to 96.1%. Study met criteria. |
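
    PPA and NPA in the accuracy row are per-variant agreement rates against the Sanger-sequencing comparator described in section 3 below, not clinical sensitivity or specificity. A minimal sketch of how such agreement is typically tallied (the data layout is an assumption for illustration, not 23andMe's analysis code):

```python
# Minimal sketch of positive/negative percent agreement against a comparator
# method (here, bi-directional Sanger sequencing). Data layout is illustrative.

def percent_agreement(device_calls, comparator_calls):
    """Both inputs map sample ID -> 'variant' or 'no variant' for one SNP."""
    tp = fn = tn = fp = 0
    for sample, truth in comparator_calls.items():
        call = device_calls.get(sample)
        if truth == "variant":
            tp, fn = tp + (call == "variant"), fn + (call != "variant")
        else:
            tn, fp = tn + (call == "no variant"), fp + (call != "no variant")
    ppa = tp / (tp + fn) if tp + fn else float("nan")
    npa = tn / (tn + fp) if tn + fp else float("nan")
    return ppa, npa

sanger = {"s1": "variant", "s2": "no variant", "s3": "variant"}
device = {"s1": "variant", "s2": "no variant", "s3": "variant"}
print(percent_agreement(device, sanger))   # (1.0, 1.0) for this toy example
```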

    2. Sample Sizes Used for the Test Set and Data Provenance

    • Accuracy/Method Comparison Study:
      • Sample Size: Not explicitly stated as a number, but "Saliva samples were selected from the 23andMe customer biobank, based on their predetermined genotype and minimum volume required for testing." This implies a varied sample size based on the availability of specific genotypes.
      • Data Provenance: From the "23andMe customer biobank" and "approved contract laboratory sites." The origin of the customers is not specified beyond "23andMe customer" which is a US-based company, suggesting primarily US data. The study was retrospective, using pre-existing samples from the biobank.
    • Precision Study:
      • Sample Size: "DNA samples were selected based on their confirmed genotypes, and were obtained from the 23andMe biobank." Not an explicit number.
      • Data Provenance: From the "23andMe biobank." Implies primarily US data, retrospective.
    • DNA Input Study:
      • Sample Size: "DNA samples were obtained from the 23andMe biobank based on their listed genotypes." Not an explicit number.
      • Data Provenance: From the "23andMe biobank." Implies primarily US data, retrospective.
    • Interfering Substance Study (referenced from DEN140044):
      • Sample Size: Over 35,000 sample replicates.
      • Data Provenance: Not explicitly stated for this particular study, but given it's for a US regulatory submission by a US company, it's highly likely to be US data, retrospective.
    • Labeling Comprehension Study (referenced from DEN160026):
      • Sample Size: Not explicitly stated.
      • Data Provenance: Not explicitly stated, but also likely US data.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • Ground Truth Method: For the analytical studies (Method Comparison, Precision, DNA Input), the ground truth for genotyping was established by bi-directional Sanger sequencing.
    • Number/Qualifications of Experts: The document does not specify the number or qualifications of experts involved in performing or interpreting the Sanger sequencing results to establish the "truth." It only states that sequencing was performed "by an approved supplier" and that the sequencing results were "considered to be 'truth.'"

    4. Adjudication Method for the Test Set (e.g., 2+1, 3+1, none)

    • The document does not describe any human adjudication method for establishing the ground truth from Sanger sequencing. It implies that the sequencing results themselves were directly taken as ground truth without further expert consensus or adjudication.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    • No MRMC or comparative effectiveness study involving human readers (e.g., radiologists) with or without AI assistance was performed or described. This device is a direct-to-consumer genetic test, not an imaging-based AI diagnostic tool.
    • The closest concept is the "Labeling Comprehension" study, which assesses how well consumers understand the report. It indicates that the report and educational materials were effective in communicating relevant concepts for safe use. This is a measure of user comprehension, not human reader improvement with AI assistance.

    6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) was done

    • Yes, the performance studies (Accuracy, Precision, DNA Input, Interfering Substance) represent a standalone evaluation of the genotyping assay, which is essentially the "algorithm" or technical process of the device. The accuracy and precision figures are "algorithm only" performance metrics, as they compare the device's genotype calls directly against Sanger sequencing as the ground truth.

    7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.)

    • For the analytical performance studies (Accuracy, Precision, DNA Input), the ground truth for specific genetic variants (genotype) was established by bi-directional Sanger sequencing.
    • For the clinical performance, the document refers to "published studies of variant frequencies in various populations and the results of analytical studies" and "allele frequencies in the 23andMe customer database." This relies on established scientific literature and aggregated anonymized real-world data rather than individual outcomes or pathology reports.

    8. The Sample Size for the Training Set

    • The document primarily describes validation studies (test sets) for the analytical performance of the device. It does not provide information about a separate "training set" sample size for developing the genotyping assay or the underlying "Coregen software." The genotyping method described relies on physical beadchip arrays and established principles of DNA analysis, not on a machine learning model that would typically have a distinct training phase with a dedicated dataset.
    • The "Customer biobank" is used for selecting samples for the performance studies, which may implicitly reflect data used in the development or refinement of their overall genotyping process, but it's not explicitly defined as a separate 'training set' for an AI model.

    9. How the Ground Truth for the Training Set Was Established

    • As mentioned above, the document does not elaborate on a distinct "training set" with established ground truth in the context of an AI/ML model for this genetic test. The "Coregen software" analyzes raw data from the beadchip, and its accuracy is validated against Sanger sequencing. The development process of this proprietary software, and any data used to "train" it (if it involves statistical modeling beyond simple rule-based interpretation of genotyping signals), is not detailed in terms of ground truth establishment.

    K Number
    DEN190035
    Manufacturer
    Date Cleared
    2020-12-23

    (509 days)

    Product Code
    Regulation Number
    866.6000
    Type
    Direct
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    The Helix Laboratory Platform is a qualitative in vitro diagnostic device intended for exome sequencing and detection of single nucleotide variants (SNVs) and small insertions and deletions (indels) in human genomic DNA extracted from saliva samples collected with Oragene® Dx OGD-610. The Helix Laboratory Platform is only intended for use with other devices that are germline assays authorized by FDA for use with this device. The device is performed at the Helix laboratory in San Diego, CA.

    Device Description

    The Helix Laboratory Platform (HLP) is a high-throughput DNA sequencing platform for targeted sequencing of an individual's whole exome. It is intended for use with a genetic test application. Genetic test applications may be third party partner ("Partner") genetic test applications or a Helix genetic test application such as the Helix Genetic Health Risk App (HRA; K192073). The DNA sequence generated by this device is intended as input to clinical germline DNA assays intended for use with this device that have FDA marketing authorization. A brief overview of the commercialized workflow is shown in Figure 1 (refer to the section titled "Test Principle" for more specific information regarding the commercialized workflow). HLP consists of a HiSeq sequencing instrument, cBot system, library preparation reagents, sequencing reagents, and data analysis software. The Helix Laboratory Platform also interacts with the Helix Laboratory Automation Systems and Content Mapping Systems, which serve as repositories for the data and do not perform data analysis. The test detects single nucleotide variants (SNVs) and insertions and deletions (indels) up to 20 base pairs (bp) and is limited to making high-confidence variant calls that meet prespecified quality metrics (i.e., the analytical range) within the reportable range. Sequencing is performed at the Helix clinical laboratory in San Diego, CA.

    AI/ML Overview

    Acceptance Criteria and Device Performance Study

    This document describes the acceptance criteria for the Helix Laboratory Platform (HLP) and the studies conducted to demonstrate that the device meets these criteria. The HLP is a qualitative in vitro diagnostic device intended for exome sequencing and detection of single nucleotide variants (SNVs) and small insertions and deletions (indels) in human genomic DNA extracted from saliva samples, for use with other FDA-authorized germline assays.

    1. Table of Acceptance Criteria and Reported Device Performance

    | Variant Type | Acceptance Criteria (PPA, TPPV, NPA) | Reported Device Performance (Summary Across Studies) | Notes on Performance & Exclusions |
    | --- | --- | --- | --- |
    | SNV | PPA ≥ 99.5%, TPPV ≥ 99.5%, NPA ≥ 99.99% | PPA: 99.91% - 99.98% (across various studies and stratifications); TPPV: 99.93% - 99.99% (across various studies and stratifications); NPA: ≥ 99.99% (consistently reported as 1.0000 in various contexts) | All overall SNV performance metrics (PPA, TPPV, NPA) consistently met or exceeded acceptance criteria across precision, between-lot reproducibility, and accuracy studies, and across different regions (Coding, Mendeliome, Priority) and GC content ranges. |
    | Indel (all sizes) | PPA ≥ 99.0%, TPPV ≥ 99.0% | PPA: 98.63% - 99.98% (varies by size and study); TPPV: 91.92% - 99.92% (varies by size and study) | Overall indel performance met criteria. However, for indels ≥ 6 bp, particularly insertions, PPA and TPPV were sometimes below the 99.0% threshold (e.g., as low as 92.12% PPA and 91.92% TPPV for NA12878 in the precision study). Indels ≥ 6 bp are noted to require independent validation per the Instructions for Use, and indels in regions with GC content >65% are excluded from reporting due to observed suboptimal performance. |
    | Exogenous Interference (Food) | NPA ≥ 99.99%, PPA ≥ 99.5%, TPPV ≥ 99.5% (all with 95% CI lower bound of 99.0%) | Immediately after food: mean PPA 0.9988 (lower bound 0.9986), mean NPA 0.9999 (lower bound 0.9999), mean TPPV 0.9362 (lower bound 0.9355) | Performance immediately after food failed to meet acceptance criteria for mean PPA and TPPV, attributed to one poorly performing sample; saliva samples should therefore be collected at least 30 minutes after consuming food. The "30 minutes after food" condition met all criteria. |
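
    The GC-content exclusion in the indel row refers to the fraction of G and C bases in the reference context around a call. A trivial helper of the kind such a filter implies (illustrative only; not taken from the Helix bioinformatics pipeline):

```python
# Illustrative GC-content check of the kind implied by the ">65% GC" exclusion
# for indel reporting; not taken from the Helix bioinformatics pipeline.

def gc_fraction(sequence: str) -> float:
    sequence = sequence.upper()
    return (sequence.count("G") + sequence.count("C")) / len(sequence)

region = "GCGCGCATGCGGCCGCATAT"   # hypothetical reference context around an indel
print(gc_fraction(region))            # 0.7
print(gc_fraction(region) > 0.65)     # True -> indel call would be excluded
```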

    2. Sample Sizes Used for the Test Set and Data Provenance

    The major testing was performed across several studies:

    • Precision (Cell lines): 6 unique reference cell line samples (NA12877, NA12878, NA24385, NA24149, NA24143, NA24631). Each was tested with 72 replicates, for a total of 432 replicates.
    • Precision (Clinical Specimens): 18 unique saliva (clinical) samples. Originally intended for 72 replicates each, resulting in ~1296 replicates. However, due to QC failures, 118 replicates were not evaluable, leaving 1178 evaluable replicates across 17 samples (one sample had all replicates fail).
    • Between-Lot Reproducibility: 24 samples (6 cell lines and 18 saliva-derived DNAs). Each sample produced 54 replicate sequences with combinatorial sets of reagents, totaling 1296 intended replicates. 1287 evaluable replicates passed QC.
    • DNA Input: 20 unique samples with known variants. Tested at 35ng, 50ng, 70ng, and 100ng DNA input, each in triplicate (totaling 240 intended samples). 219 samples were evaluable.
    • Index Swapping - Barcoding: 48 saliva samples with known variants. Run in triplicate, totaling 160 libraries. 157 were used in analysis.
    • Interfering Substances (Endogenous): 60 donors, each providing 3 aliquots (no treatment, plus 2 different endogenous substances). 180 no-treatment libraries and 120 treatment libraries were generated. 299 out of 300 samples were evaluable.
    • Interfering Substances (Exogenous): 22 donors (originally 20, 2 added for food group), each providing samples for various conditions (before, immediately after, 30 min after consumption of food, drink, gum, mouthwash). 198 intended samples. Number of evaluable samples varied by condition.
    • Interfering Substances (Microbial): 6 cell line DNA samples across 5 bacterial content conditions (0%, 10%, 20%, 30%, 50%), each in triplicate (totaling 90 samples). 81 samples evaluable. Also, fresh saliva from 3 donors tested across 3 conditions (baseline, bacteria spiked-in, yeast spiked-in), each in triplicate (totaling 27 samples).
    • Interfering Substances (Smoking): 5 donors, each providing samples for 3 conditions (before, immediately after, 30 min after smoking), each in triplicate (totaling 45 samples).
    • Accuracy Study 1 (Reference Cell Lines): 6 well-characterized cell lines (same as Precision study).
    • Accuracy Study 2 (Clinical Specimens): 1002 clinical samples and 96 unique cell line samples.

    Data Provenance:
    The reference cell line samples (NA12877, NA12878, NA24385, NA24149, NA24143, NA24631) are publicly available from the Genome in a Bottle (GIAB) consortium and Platinum Genomes project, primarily representing Northern European (Utah) and Ashkenazim Jewish ethnicities. One GIAB sample was Asian Chinese.
    Clinical samples were saliva samples collected from donors within the Helix lab's specimen collection. These are therefore retrospective samples. The country of origin is not explicitly stated for the clinical samples but is assumed to be the USA, where the Helix lab is located.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    The ground truth for the reference cell lines (NA12877, NA12878, NA24385, NA24149, NA24143, NA24631) was established by publicly available reference datasets from the Genome in a Bottle (GIAB) consortium and the Platinum Genomes project. These consortia involve multiple expert groups and diverse sequencing technologies to establish highly confident variant calls. No specific number of, or qualifications for, individual experts are listed, as the ground truth relies on these highly vetted, community-accepted reference standards.

    For the clinical samples in Accuracy Study 2, a validated Sanger sequencing method was used as the comparator method to confirm the accuracy of specific variants. This implies expert interpretation of Sanger sequencing results, but the number and qualifications of these experts are not explicitly stated.

    4. Adjudication Method for the Test Set

    The ground truth for reference cell lines was based on publicly available, highly vetted datasets (GIAB, Platinum Genomes), which typically involve a consensus-based approach from multiple sequencing technologies and analyses rather than active, real-time adjudication by a small group of experts for this specific study.

    For samples where a reference sequence was generated within a study (e.g., Precision, DNA Input, Endogenous/Exogenous Substances, Microbial Interference, Smoking studies), it was established by majority call comparison over multiple replicates of a sample within the study. This implies an internal consensus mechanism rather than external expert adjudication.
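
    A majority-call reference of this kind can be formed by taking, at each assessed position, the genotype called in more than half of a sample's replicates. The sketch below is one minimal reading of that description; the call representation and the strict-majority rule are assumptions, since the document does not describe the implementation.

```python
# Minimal sketch of deriving an internal reference sequence by majority call
# across replicates of one sample; the data layout is an assumption.
from collections import Counter

def majority_reference(replicate_calls):
    """replicate_calls: list of dicts mapping position -> genotype call."""
    reference = {}
    for pos in set().union(*replicate_calls):
        counts = Counter(rep.get(pos, "no call") for rep in replicate_calls)
        call, votes = counts.most_common(1)[0]
        if votes > len(replicate_calls) / 2:      # strict majority required
            reference[pos] = call
    return reference

reps = [{"chr1:100": "A/G", "chr1:200": "T/T"},
        {"chr1:100": "A/G", "chr1:200": "T/T"},
        {"chr1:100": "A/A", "chr1:200": "T/T"}]
print(majority_reference(reps))   # e.g. {'chr1:100': 'A/G', 'chr1:200': 'T/T'}
```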

    For Accuracy Study 2 (clinical samples), Sanger sequencing was used as the comparator. Discrepancies between HLP and Sanger sequencing would be reviewed, but a formal adjudication method (e.g., 2+1, 3+1) involving external experts for these specific discrepancies is not described.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No MRMC comparative effectiveness study was performed or described in this document. The evaluation focuses on the standalone analytical performance of the Helix Laboratory Platform and its concordance with established ground truth or comparator methods, not on how human readers' performance improves with or without AI assistance from this platform.

    6. Standalone Performance Study

    Yes, extensive standalone performance studies were conducted. The entire "Performance Characteristics" section (Section L) details the intrinsic analytical performance of the Helix Laboratory Platform (HLP) without human intervention for variant calling and quality assessment. The reported PPA, NPA, and TPPV values across various studies (Precision, Between-Lot Reproducibility, DNA Input, Index Swapping, Interfering Substances, Accuracy) demonstrate the algorithm's performance in detecting SNVs and indels when operating independently.

    7. Type of Ground Truth Used

    The primary types of ground truth used were:

    • Expert Consensus / Community Standards: For reference cell lines, publicly available datasets from the Genome in a Bottle (GIAB) consortium and the Platinum Genomes project were used. These are highly confident, multi-platform consensus truth sets.
    • Validated Comparator Method: For clinical samples where specific variants were assessed (e.g., Accuracy Study 2), a validated Sanger sequencing method served as the comparator ground truth.
    • Internal Majority Call: For various precision and interference studies where multiple replicates of a sample were generated and analyzed, a "majority call" across these replicates was used to establish an internal reference sequence for performance comparison.

    8. Sample Size for the Training Set

    The document does not explicitly state the sample size used for training the Helix bioinformatics pipeline's algorithms. It mentions optimization processes in Section L.1, such as "Optimization of variant read depth, allele fraction and callability thresholds" and "Establishment of filter and QC threshold for variant calling." These optimizations were "based on historical reference sample runs" and "analyzed with [...redacted...] of the bioinformatics pipeline representing different conditions relative to the quality metric criteria." While this indicates that data was used for optimizing and establishing parameters, specific training set sizes are not provided.

    9. How the Ground Truth for the Training Set Was Established

    Similar to point 8, the document does not explicitly detail how the ground truth for any training set was established. However, the optimization efforts heavily relied on "historical reference sample runs" and "reference samples." It is highly likely that these reference samples would have included publicly available, well-characterized control genomes like those from the GIAB and Platinum Genomes projects, for which the "truth" variants are established through extensive, multi-platform sequencing and expert consensus by those consortia. The process described for establishing QC thresholds (e.g., in Section L.1.b) implies a comparison against these known reference datasets to fine-tune filtering parameters and improve accuracy.


    K Number
    K200009
    Date Cleared
    2020-08-05

    (216 days)

    Product Code
    Regulation Number
    866.6100
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    The clonoSEQ Assay is an in vitro diagnostic that uses multiplex polymerase chain reaction (PCR) and next-generation sequencing (NGS) to identify and quantify rearranged IgH (VDJ), IgH (DJ), IgK and IgL receptor gene sequences, as well as translocated BCL1/IgH (J) and BCL2/IgH (J) sequences in DNA extracted from bone marrow from patients with B-cell acute lymphoblastic leukemia (ALL) or multiple myeloma (MM), and blood or bone marrow from patients with chronic lymphocytic leukemia (CLL).

    The clonoSEQ Assay measures minimal residual disease (MRD) to monitor changes in burden of disease during and after treatment. The test is indicated for use by qualified healthcare professionals in accordance with professional guidelines for clinical decision-making and in conjunction with other clinicopathological features.

    The clonoSEQ Assay is a single-site assay performed at Adaptive Biotechnologies Corporation.

    Device Description

    The clonoSEQ Assay is a next-generation sequencing (NGS) based assay that identifies rearranged IgH (VDJ), IgH (DJ), IgK, and IgL receptor gene sequences, as well as translocated BCL1/IgH (J) and BCL2/IgH (J) sequences. The assay also includes primers that amplify specific genomic regions present as diploid copies in normal genomic DNA (gDNA) to allow determination of total nucleated cell content.

    Testing begins with gDNA extracted from the specimen supplied (Figure 1). Extracted gDNA quality is assessed and rearranged immune receptors are amplified using a multiplex PCR. Reaction-specific index barcode sequences for sample identification are added to the amplified receptor sequences by PCR. Sequencing libraries are prepared from barcoded amplified DNA, which are then sequenced by synthesis using NGS. Raw sequence data are uploaded from the sequencing instrument to the Adaptive analysis pipeline. These sequence data are analyzed in a multi-step process: first, a sample's sequence data are identified using the sample index sequences. Next, data are processed using a proprietary algorithm with in-line controls to remove amplification bias. When the clonoSEQ Clonality (ID) assessment is conducted, the immune repertoire of the sample is checked for the presence of DNA sequences specific to "dominant" clone(s) consistent with the presence of a lymphoid malignancy. Each sequence that is being considered for MRD tracking is compared against a B cell repertoire database and assigned a uniqueness value that, together with its abundance relative to other sequences, is used to assign the sequence to a sensitivity bin which will be used in the estimation of the reported LoD and LoQ on the patient report. During clonoSEQ Tracking (MRD) assessment, the complete immunoglobulin receptor repertoire is again assessed, and the previously identified dominant clonotype sequence(s) are detected and quantified to determine the sample MRD level. The clonoSEQ Assay MRD assessment measures residual disease in a biologic sample.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the Adaptive Biotechnologies clonoSEQ Assay, based on the provided FDA 510(k) summary:

    This device, the Adaptive Biotechnologies clonoSEQ Assay, is an in vitro diagnostic (IVD) that identifies and quantifies rearranged immune receptor gene sequences (IgH, IgK, IgL) and translocated BCL1/IgH and BCL2/IgH sequences using multiplex PCR and Next-Generation Sequencing (NGS). It measures Minimal Residual Disease (MRD) in patients with B-cell acute lymphoblastic leukemia (ALL), multiple myeloma (MM), and chronic lymphocytic leukemia (CLL) to monitor disease burden. The current submission is an expansion of indications to include blood samples from CLL patients.


    1. A table of acceptance criteria and the reported device performance

    The provided document doesn't explicitly list "acceptance criteria" in a single table, but rather describes the performance characteristics that were measured and the outcomes for each. I will compile these for the CLL in Blood indication, as this is the focus of the 510(k) expansion.

    Acceptance Criteria & Reported Device Performance for clonoSEQ Assay (CLL in Blood)

    | Performance Characteristic | Acceptance Criteria (Implicit) | Reported Device Performance (CLL in Blood) |
    |---|---|---|
    | Precision (MRD frequency) | %CV within acceptable clinical/analytical limits (not explicitly stated; inferred from successful results) | %CV range 18.7%-54.9%: 21.9%-54.9% at 500 ng DNA input, 20.8%-51.6% at 2 µg, 18.7%-49.2% at 20 µg (comparable to BMA precision) |
    | Precision (malignant cells detected) | %CV within acceptable clinical/analytical limits, primarily influenced by cell number | 19% CV (at 765.70 cells) to 53% CV (at 3.10 cells); primarily residual variability, with other factors (operator, instrument, reagent, day, run) each contributing 0%-10% CV |
    | Linearity | Maximum deviation from linearity (based on quadratic or cubic fit) less than 5% | Met for all DNA inputs (20 µg, 2 µg, 500 ng) across the entire tested MRD frequency range (0 to 1x10^-3 / 4x10^-3); slopes 0.989-0.997 (strong linearity), intercepts -0.009 to -0.075 |
    | Accuracy (concordance with mpFC) | High positive percent agreement (PPA); negative percent agreement (NPA) interpreted in light of the assay's greater sensitivity | PPA 98.9% (95% CI: 94.3%-100%); NPA 47.5% (95% CI: 40.5%-54.6%); NPA is lower because clonoSEQ detects MRD in samples mpFC calls negative |
    | Limit of Blank (LoB) | LoB of zero or negligible | LoB confirmed as zero (95th percentile of trackable sequences in healthy blood was zero) |
    | Limit of Detection (LoD) / Limit of Quantitation (LoQ) | LoD/LoQ for blood comparable to or lower than values previously determined for bone marrow | LoD and LoQ for CLL in blood were lower than, or within the 95% CI of, bootstrapped prior BMA values, confirming comparability |
    | Analytical specificity (interfering substances) | Mean MRD frequency difference within ±30% with vs. without interferent | All tested substances met the criterion: endogenous (conjugated and unconjugated bilirubin, hemoglobin, cholesterol, triglycerides) and exogenous (K2EDTA, K3EDTA, heparin, chloroform); MRD results not substantially influenced |
    | Cross-contamination / sample carryover | No significant contamination events leading to false-positive ID or MRD results | No run-to-run false calibrations (0/44 BMA, 0/44 BMMC); one well-to-well false calibration (1/44 BMA) at a very low template count (83 templates), not associated with cell lines. Blood: no contamination or disease-clone-sharing events leading to false-positive ID/MRD results. PCR/library/sequencing: no run-to-run contamination (0/36 tests); 8/712 well-to-well events (likely a vendor primer-barcode issue) at < 4x10^-6, deemed non-impactful because tracked clonotypes are patient-specific |
    | Reagent stability (in-use) | Sequencing results meeting all QC metrics | All tested conditions met acceptance criteria (pre-amp/PCR primer mix, master mix, complete reaction, process-pause stability) |
    | Reagent stability (real-time) | Adequate, consistent performance; pairwise equivalence of clinical specimens within ±30% MRD frequency | 15-month shelf life established for pre-amp and PCR primer mixes at -20 ± 5 ℃; clinical-sample equivalence met the ±30% MRD frequency criterion |
    | Sample stability | Samples remain stable under specified storage and shipping conditions for the stated durations | Blood samples: up to 6 months at -15 ℃ to -25 ℃, up to 14 days at 2 ℃ to 8 ℃, up to 5 days at 15 ℃ to 25 ℃, up to 3 freeze/thaw cycles; shipper stability (ambient, summer, winter): up to 5 days |
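
    For context on the precision rows above, %CV is the coefficient of variation of repeated MRD measurements of the same contrived sample across runs and conditions. The following is a minimal sketch of that calculation, assuming a set of replicate MRD frequency values; the numbers are hypothetical, and the submission's full variance-component analysis (operator, instrument, reagent, day, and run effects) is not reproduced here.

```python
import numpy as np

def percent_cv(mrd_frequencies):
    """Coefficient of variation (%) across replicate MRD frequency measurements."""
    values = np.asarray(mrd_frequencies, dtype=float)
    return 100.0 * values.std(ddof=1) / values.mean()

# Hypothetical replicates for one contrived sample at a single DNA input level
replicates = [9.1e-5, 1.2e-4, 7.8e-5, 1.0e-4, 1.3e-4, 8.5e-5]
print(f"%CV = {percent_cv(replicates):.1f}%")
```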

    2. Sample sizes used for the test set and the data provenance

    • Precision Studies (Test Set):
      • CLL in BMA: 22 patients. Contrived samples (blending patient gDNA with healthy donor BMA gDNA). 360 contrived samples tested, yielding ~7,480 MRD measurements.
      • CLL in Blood: 15 patients. Contrived samples (blending patient gDNA with healthy donor blood gDNA). 320 contrived samples tested, yielding ~4,785 MRD measurements.
    • Linearity Studies (Test Set):
      • Cell Lines in BMA: 3 CLL cell lines (HG-3, MEC-1, PGA-1). Blended cell line gDNA with healthy subject gDNA. Data shown for multiple DNA inputs and 11 MRD frequencies.
      • Clinical BMA Specimens: Re-analysis of data from the 22 CLL patients in the precision study.
      • Clinical Blood Specimens: Re-analysis of data from the 15 CLL patients in the precision study.
    • Accuracy (Concordance with mpFC in Blood Clinical Samples): 299 matched clinical samples (a PPA/NPA calculation sketch follows this list).
    • Limit of Blank (LoB): 22 CLL patient samples (for trackable sequences) and healthy bone marrow samples. For blood, 15 CLL patient samples and healthy blood samples.
    • Limit of Detection/Quantitation (LoD/LoQ): 22 CLL patient specimens (BMA) and 15 CLL patient specimens (Blood). Contrived dilution series.
    • Interfering Substances: 4 different donors for both BMA and blood samples. Each condition replicated 8 times. Additional assessment on 4 CLL clinical blood specimens.
    • Cross-Contamination/Carryover:
      • Automated DNA Extraction (BMA/BMMC): Panel of 6 lymphoid malignancy cell lines (3 ALL, 3 MM), 10% spiked into healthy BMA/BMMC pool + PBS blanks.
      • Automated DNA Extraction (Blood): Panel of 6 lymphoid malignancy cell lines, 10% spiked into normal healthy blood + PBS blanks.
      • PCR, Library Pooling, Sequencing: gDNA from blood from healthy subjects (MRD-negative) and 5% spiked cell line gDNA blends.
    • Clinical Studies (to support prognostic utility):
      • NCT02242942 (Primary CLL study): Samples and outcomes data from 445 patients initially. For primary analysis, 337 patients had usable clonoSEQ Assay MRD data and clinical outcomes (after QC and excluding early progression).
      • NCT00759798 (Secondary CLL study): 111 front-line CLL patients with clonoSEQ ID samples, 137 clonoSEQ MRD samples (also evaluated by 4-color flow cytometry). Bone marrow available for 75 patients, blood for 62 patients (26 had both). 3 patients excluded due to missing clinical covariates.
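
    The accuracy figures reported above (PPA and NPA against mpFC on 299 matched blood samples) are agreement proportions computed from paired positive/negative calls. Below is a minimal sketch of that computation; the paired calls are hypothetical, and the confidence intervals reported in the submission are not reproduced here.

```python
import numpy as np

def ppa_npa(clonoseq_positive, mpfc_positive):
    """Percent agreement of clonoSEQ MRD calls with mpFC (the comparator method)."""
    clono = np.asarray(clonoseq_positive, dtype=bool)
    mpfc = np.asarray(mpfc_positive, dtype=bool)
    ppa = (clono & mpfc).sum() / mpfc.sum()        # agreement among mpFC-positive samples
    npa = (~clono & ~mpfc).sum() / (~mpfc).sum()   # agreement among mpFC-negative samples
    return 100 * ppa, 100 * npa

# Hypothetical paired calls for matched blood samples
clono = [True, True, True, False, True, True]
mpfc  = [True, True, False, False, True, False]
print("PPA = %.1f%%, NPA = %.1f%%" % ppa_npa(clono, mpfc))
```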

    Data Provenance:
    Data provenance is largely implied: the clonoSEQ Assay is a single-site test performed by Adaptive Biotechnologies Corporation in Seattle, Washington, and the studies used clinical specimens from patients with CLL.

    • The clinical validation studies reference specific registered clinical trials (NCT02242942, NCT00759798), indicating prospectively collected data from multi-center trials, likely including US sites.
    • The analytical studies (precision, linearity, etc.) used both contrived samples (blending patient/cell line gDNA with healthy donor gDNA) and clinical specimens, processed and analyzed at the Adaptive Biotechnologies lab. These are described as analytic validation studies.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    The document describes an in vitro diagnostic (IVD) test that quantifies specific DNA sequences. The "ground truth" for the analytical performance studies (precision, linearity, LoD/LoQ) is established by the known concentrations of the contrived samples, where specific amounts of malignant cells or gDNA from patient samples (with known clonal sequences) are blended into a background of healthy donor gDNA. This is a common and appropriate method for analytical validation of quantitative IVD assays.
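
    To make the contrived-sample ground truth concrete, the expected MRD frequency of each blend follows directly from the blend design: the fraction of patient-derived gDNA in the healthy-donor background multiplied by the clonal frequency of the undiluted patient material. The values below are hypothetical, not the blend points Adaptive used.

```python
# Illustrative only: expected (ground-truth) MRD frequencies for a contrived dilution series.
patient_clonal_frequency = 0.25   # hypothetical fraction of malignant cells in the patient sample
blend_fractions = [1e-1, 1e-2, 1e-3, 1e-4, 1e-5]   # patient gDNA fraction in healthy-donor gDNA

for fraction in blend_fractions:
    expected_mrd = patient_clonal_frequency * fraction
    print(f"blend fraction {fraction:.0e} -> expected MRD frequency {expected_mrd:.1e}")
```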

    For the Clinical Studies, the "ground truth" for patient outcomes is clinical progression-free survival (PFS) data and disease assessment from the clinical trials (NCT02242942 and NCT00759798). These outcomes are established by the clinical trial investigators, who would be qualified healthcare professionals (e.g., oncologists, hematologists) following standard clinical practice and trial protocols. The document does not specify a number of "experts" to establish ground truth in the sense of independent expert review of images or clinical cases, as would be common for an imaging AI device. The ground truth for this device is based on quantifiable molecular levels and patient outcomes.


    4. Adjudication method for the test set

    Not applicable in the typical sense for this type of IVD device. "Adjudication" usually refers to a process for resolving discrepancies in expert interpretations (e.g., radiologist reads).

    • For the analytical studies, the ground truth is established by the known input concentrations of contrived samples or by re-analysis of patient data to demonstrate linearity (a linearity-check sketch follows this list). Reproducibility and accuracy are assessed by repeatedly measuring these known inputs across different conditions (operators, instruments, reagents, etc.).
    • For the clinical studies, the ground truth is patient outcome data (PFS) collected as part of the clinical trials, which would be managed and reviewed according to standard clinical trial protocols, not a specific "adjudication method" as seen in image reading studies.
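
    The linearity criterion cited earlier (maximum deviation from linearity below 5%, based on a quadratic or cubic fit) is commonly evaluated by comparing a higher-order polynomial fit against the first-order fit at each tested level, in the spirit of CLSI EP6. The sketch below illustrates that style of check on hypothetical log-scale dilution data; the exact model, transformation, and levels used in the submission are not described in the document, and zero-frequency levels would need separate handling.

```python
import numpy as np

def max_deviation_from_linearity(expected, measured, degree=2):
    """Largest percent difference between a degree-2 (or 3) polynomial fit and the
    first-order fit of log10(measured) vs. log10(expected), back-transformed to the
    frequency scale. An EP6-style illustration, not the submission's exact model."""
    x = np.log10(np.asarray(expected, dtype=float))
    y = np.log10(np.asarray(measured, dtype=float))
    linear_pred = np.polyval(np.polyfit(x, y, 1), x)
    poly_pred = np.polyval(np.polyfit(x, y, degree), x)
    return 100.0 * np.max(np.abs(10.0 ** poly_pred / 10.0 ** linear_pred - 1.0))

# Hypothetical dilution series (expected vs. measured MRD frequencies)
expected = [1e-3, 1e-4, 1e-5, 1e-6, 1e-7]
measured = [9.5e-4, 1.1e-4, 9.0e-6, 1.2e-6, 9.0e-8]
print(f"Max deviation from linearity: {max_deviation_from_linearity(expected, measured):.2f}%")
```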

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI assistance versus without

    No, an MRMC comparative effectiveness study was not done. This is an in vitro diagnostic (IVD) assay, not an AI imaging device with which human readers interact. The clonoSEQ Assay is a laboratory test that produces quantitative MRD measurements; there is no "human reader" whose case-level interpretation is directly influenced by an AI output (as in reading a radiology image with or without AI assistance). The output is a numerical MRD value reported to qualified healthcare professionals for use in clinical decision-making.


    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    Yes, the "standalone" performance is the core of the studies presented. The clonoSEQ Assay is an algorithm-driven test that provides a quantitative MRD value. The precision, linearity, LoD/LoQ, and analytical specificity studies directly evaluate the "algorithm only" performance (i.e., the performance of the assay system independent of clinical interpretation for a specific patient). The output (MRD value) is generated by the assay system and its bioinformatics pipeline, which includes proprietary algorithms. The clinical studies then demonstrate the prognostic utility of this standalone quantitative output.


    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)

    The ground truth for the different aspects of the study include:

    • Analytical Studies (Precision, Linearity, LoD/LoQ, Interfering Substances, Cross-Contamination): Known input concentrations of contrived samples (blending gDNA from patient/cell lines with healthy donor gDNA). This is a quantitative ground truth.
    • Accuracy (Concordance with mpFC): Flow Cytometry (mpFC) results served as a comparator method. While not a "gold standard" pathology ground truth, it's a widely accepted method for MRD detection, and the study highlights clonoSEQ's greater sensitivity.
    • Clinical Studies: Patient Outcomes Data, specifically Progression-Free Survival (PFS), as recorded during controlled clinical trials.
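
    Because the clinical ground truth is PFS, the prognostic analyses are survival analyses, e.g., comparing PFS between MRD-negative and MRD-positive patients. The snippet below is a generic Kaplan-Meier and log-rank sketch using the lifelines package on hypothetical data; it is not the statistical analysis plan of the cited trials.

```python
# Requires: pip install lifelines pandas
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical per-patient data: PFS in months, progression event flag, MRD status
df = pd.DataFrame({
    "pfs_months":   [6, 14, 30, 9, 36, 24, 40, 12, 28, 33],
    "progressed":   [1, 1, 0, 1, 0, 1, 0, 1, 1, 0],
    "mrd_positive": [1, 1, 0, 1, 0, 1, 0, 1, 0, 0],
})

kmf = KaplanMeierFitter()
for status, group in df.groupby("mrd_positive"):
    kmf.fit(group["pfs_months"], group["progressed"])
    print(f"MRD{'+' if status else '-'}: median PFS = {kmf.median_survival_time_} months")

pos, neg = df[df.mrd_positive == 1], df[df.mrd_positive == 0]
result = logrank_test(pos["pfs_months"], neg["pfs_months"], pos["progressed"], neg["progressed"])
print(f"log-rank p-value: {result.p_value:.3f}")
```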

    8. The sample size for the training set

    The document does not explicitly state a "training set" size for the clonoSEQ assay's algorithmic development. For IVD assays based on NGS and bioinformatics pipelines, the "training" (or development/optimization) often involves extensive analytical validation, optimization using synthetic controls, and iterations on algorithms rather than a distinct "training set" like in machine learning for image classification.

    The document states:

    • "Raw sequence data are uploaded from the sequencing instrument to the Adaptive analysis pipeline. These sequence data are analyzed in a multi-step process: first, a sample's sequence data are identified using the sample index sequences. Next, data are processed using a proprietary algorithm with in-line controls to remove amplification bias."
    • "When the clonoSEQ Clonality (ID) assessment is conducted, the immune repertoire of the sample is checked for the presence of DNA sequences specific to "dominant" clone(s) consistent with the presence of a lymphoid malignancy. Each sequence that is being considered for MRD tracking is compared against a B cell repertoire database and assigned a uniqueness value..."

    This suggests the algorithms were likely developed and refined using a combination of synthetic data, known biological samples, and potentially retrospective patient data to build the "B cell repertoire database" and optimize the bias correction and clonality assessment. However, the exact sample sizes for this development phase are not provided in this 510(k) summary, as it primarily focuses on the validation of the finalized assay for regulatory approval.


    9. How the ground truth for the training set was established

    As inferred above, if there was a "training set" in the context of algorithm development, the ground truth would have been established through:

    • Known molecular constructs/synthetic controls: For optimizing sequencing and amplification bias correction.
    • Well-characterized cell lines and patient samples: Where the presence and frequency of specific clonal rearrangements are either known or confirmed by orthogonal methods (e.g., flow cytometry, Sanger sequencing, or other molecular techniques) to build the B cell repertoire database or tune the clonality determination.
    • Expert knowledge of immunology and genetics: To design the algorithms that identify and quantify rearranged gene sequences and interpret their significance in the context of hematological malignancies.

    The document implicitly refers to this through descriptions of the "proprietary algorithm with in-line controls" and comparison against a "B cell repertoire database," indicating an internally developed and optimized system for which ground truth would have been internally established during its development phase.
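
    As a conceptual illustration of the "uniqueness" idea only: a candidate clonotype can be scored by how rarely its exact rearranged sequence appears across a set of reference B-cell repertoires, since widely shared sequences make poor patient-specific MRD markers. The function, sequences, and scoring below are hypothetical; Adaptive's actual uniqueness metric and repertoire database are proprietary and not described in the document.

```python
def uniqueness_score(candidate_sequence, reference_repertoires):
    """Hypothetical scoring: fraction of reference repertoires that do NOT contain the
    exact candidate sequence (1.0 = never observed, i.e., maximally trackable)."""
    hits = sum(1 for repertoire in reference_repertoires if candidate_sequence in repertoire)
    return 1.0 - hits / len(reference_repertoires)

# Toy reference repertoires represented as sets of rearranged-sequence identifiers
reference = [
    {"seq_0001", "seq_0042"},
    {"seq_0042", "seq_0317"},
    {"seq_0999"},
]
print(uniqueness_score("seq_1234", reference))  # 1.0   -> never seen; good tracking candidate
print(uniqueness_score("seq_0042", reference))  # ~0.33 -> shared across repertoires; less unique
```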
