510(k) Data Aggregation (490-day review)
The POMC/PCSK1/LEPR CDx Panel is a next-generation sequencing (NGS)-based in vitro diagnostic test that analyzes genomic DNA isolated from blood or saliva. Specimens used with the test are K-EDTA blood collected using certain indicated K-EDTA blood collection devices and saliva collected using ORAcollect-Dx™ OCD-100 devices. The test detects germline nucleotide substitutions, short insertions and deletions, and copy number variants (CNVs) within the following three genes:
- Pro-opiomelanocortin (POMC)
- Proprotein Convertase Subtilisin/Kexin type 1 (PCSK1)
- Leptin Receptor (LEPR)
The test is a companion diagnostic device intended to select adult and pediatric patients 6 years of age and older who have obesity and certain variants in the POMC, PCSK1, or LEPR genes for treatment with IMCIVREE® (setmelanotide) in accordance with the approved therapeutic product labeling. The POMC/PCSK1/LEPR CDx Panel is a single-site assay performed at PreventionGenetics, LLC (Marshfield, WI).
The POMC/PCSK1/LEPR CDx Panel is a next-generation sequencing (NGS) assay for the detection of germline variants in three genes: pro-opiomelanocortin (POMC), leptin receptor (LEPR), and proprotein convertase subtilisin/kexin type 1 (PCSK1). The POMC/PCSK1/LEPR CDx Panel is performed in a single laboratory (PreventionGenetics, LLC in Marshfield, WI).
Acceptance Criteria and Device Performance for POMC/PCSK1/LEPR CDx Panel
The POMC/PCSK1/LEPR CDx Panel is a next-generation sequencing (NGS)-based in vitro diagnostic test for detecting germline variants in POMC, PCSK1, and LEPR genes, intended to select patients for treatment with IMCIVREE (setmelanotide). The acceptance criteria primarily revolve around the analytical performance of the device, focusing on accuracy, precision, and specificity.
1. Table of Acceptance Criteria and Reported Device Performance
Given the nature of the device (a genetic variant detection system), the acceptance criteria are generally established through analytical performance metrics like Positive Percent Agreement (PPA), Negative Percent Agreement (NPA), and Overall Percent Agreement (OPA) when compared to validated orthogonal methods.
| Acceptance Criteria (Metric, Threshold) | Reported Device Performance (Value, 95% CI) | Study Name |
|---|---|---|
| Analytical Accuracy (Method Comparison) | ||
| PPA (Variant/Non-variant base level) | 100% (99.85%, 100.00%) | Method Comparison Study |
| NPA (Variant/Non-variant base level) | 100% (99.99%, 100.00%) | Method Comparison Study |
| OPA (Variant/Non-variant base level) | 100% (99.99%, 100.00%) | Method Comparison Study |
| PPA (Clinical Bridging - Local Test vs. Device, Pivotal Subjects) | 100% (84.5%, 100.0%) | Clinical Bridging Study |
| PPA (Clinical Bridging - Local Test vs. Device, Pivotal + Supplemental Subjects) | 96.7% (83.3%, 99.4%) | Clinical Bridging Study |
| Analytical Precision (Reproducibility) | ||
| OPA (Whole Blood, Variant/Non-variant base level) | 100% (100.00%, 100.00%) | Precision Study (additional runs) |
| PPA (Whole Blood, Variant/Non-variant base level) | 100% (99.72%, 100.00%) | Precision Study (additional runs) |
| NPA (Whole Blood, Variant/Non-variant base level) | 100% (100.00%, 100.00%) | Precision Study (additional runs) |
| OPA (Saliva, Variant/Non-variant base level) | 100% (100.00%, 100.00%) | Precision Study (additional runs) |
| PPA (Saliva, Variant/Non-variant base level) | 100% (99.72%, 100.00%) | Precision Study (additional runs) |
| NPA (Saliva, Variant/Non-variant base level) | 100% (100.00%, 100.00%) | Precision Study (additional runs) |
| Analytical Specificity (Interference) | ||
| Sequence Agreement (Blood substances) | b(4) % (exact value redacted) | Interference Study |
| Sequence Agreement (Saliva substances) | b(4) % (exact value redacted) | Interference Study |
| Sequence Agreement (DNA extraction components) | b(4) % (exact value redacted) | Interference Study |
| Analytical Specificity (Cross-contamination) | ||
| Percentage contamination (ART < 0.4, corresponding to <10% contamination) | Overall: 0.119 (Intra-Run), 0.158 (Inter-Run) | Cross-contamination Study |
| Specimen Stability | 100% Sequence Agreement (each condition vs. baseline) | Whole Blood, Saliva, Extracted DNA Stability Studies |
Note: Specific numerical thresholds for some acceptance criteria are not explicitly stated in the provided text as "acceptance criteria" but are implicitly met by the reported 100% agreement or low contamination percentages.
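The agreement metrics in the table (PPA, NPA, OPA) can be computed from a 2×2 concordance table of device calls versus the orthogonal reference. A minimal sketch (the function name and counts are illustrative, not from the submission):

```python
def agreement_metrics(tp, fp, fn, tn):
    """Percent agreement between a test and an orthogonal reference method.

    tp: positions called variant by both methods
    tn: positions called non-variant by both methods
    fp: variant calls made only by the test under evaluation
    fn: variant calls made only by the reference method
    """
    ppa = tp / (tp + fn)                    # positive percent agreement
    npa = tn / (tn + fp)                    # negative percent agreement
    opa = (tp + tn) / (tp + fp + fn + tn)   # overall percent agreement
    return ppa, npa, opa

# Illustrative perfect concordance, as reported in the method comparison study
# (1900 variants detected by both methods, no discordant calls)
ppa, npa, opa = agreement_metrics(tp=1900, fp=0, fn=0, tn=50000)
print(ppa, npa, opa)  # 1.0 1.0 1.0
```

The tight confidence intervals in the table follow directly from the large number of variant and non-variant base positions evaluated: with thousands of concordant positions and zero discordances, the lower bound of a 95% interval sits very close to 100%.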
2. Sample Size Used for the Test Set and Data Provenance
- Method Comparison Study:
- Sample Size: b(4) saliva samples and b(4) K2EDTA whole blood samples were used. This test set contained 1900 variants in total (SNVs, insertions/deletions < 50 bp, and one specific CNV).
- Data Provenance: The samples were from "patients within the intended use population." The document does not explicitly state the country of origin, but the applicant (PreventionGenetics, LLC) is based in the US (Marshfield, WI). The study appears to be retrospective, using existing patient samples.
- Precision Studies:
- Initial Precision Study: b(4) saliva samples and b(4) K2EDTA whole blood samples.
- Additional Precision Study: A subset of b(4) K2EDTA whole blood samples and b(4) saliva samples.
- Data Provenance: From "patients within the intended use population." Assumed to be retrospective and from the US, similar to the method comparison study.
- Clinical Bridging Study:
- Sample Size: 21 pivotal subjects (from IMCIVREE efficacy analysis) and 8 supplemental subjects. Total of 29 subjects for PPA calculation. One additional discordant patient was also mentioned, making a total of 30 for the broader PPA analysis.
- Data Provenance: Patients were from "across the world" as they were selected for IMCIVREE clinical studies based on local genetic testing. This suggests a prospective and international data provenance, specifically from real-world clinical trial participants.
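The PPA confidence intervals reported for the bridging study are consistent with two-sided 95% Wilson score intervals: 21/21 concordant pivotal subjects yields a lower bound of about 84.5%, and 29/30 subjects yields roughly (83.3%, 99.4%). The document does not name the interval method, so this is an assumption; a minimal sketch under it:

```python
import math

def wilson_ci(successes, n, z=1.959964):
    """Two-sided Wilson score confidence interval for a binomial proportion.

    z defaults to the standard normal quantile for 95% coverage.
    """
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

# 21/21 pivotal subjects concordant: lower bound ~0.845, upper bound 1.0
print(wilson_ci(21, 21))
# 29/30 subjects concordant (pivotal + supplemental): ~(0.833, 0.994)
print(wilson_ci(29, 30))
```

Note that a 100% point estimate still carries a wide lower bound at n = 21, which is why the bridging study's small sample sizes produce intervals so much broader than the base-level analytical accuracy results.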
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
The document does not explicitly detail the number of experts or their qualifications for establishing the ground truth for the analytical test sets (method comparison, precision studies).
However, for the Method Comparison Study, the ground truth was established by validated orthogonal methods (Sanger sequencing for SNVs and insertions/deletions < 50 bp, or high-density gene-centric (HDGC) array comparative genomic hybridization (CGH) for larger CNVs). This implies that the interpretation of these orthogonal results would be conducted by qualified laboratory personnel, but not necessarily "experts" in the sense of clinical decision-makers.
For the Clinical Bridging Study, the "ground truth" used for comparison was the local genetic testing results which formed part of the patient's "standard of care" during the IMCIVREE clinical trials. These local findings were then confirmed by the POMC/PCSK1/LEPR CDx Panel and the orthogonal methods mentioned above. While the decisions based on local tests would have been made by medical professionals, the document does not specify the number or qualifications of these individuals, nor is there a formal "expert panel" to establish ground truth for this specific comparison.
4. Adjudication Method for the Test Set
The document does not specify an explicit adjudication method (e.g., 2+1, 3+1) for resolving discrepancies in the analytical test sets.
For the Method Comparison Study, the comparison was made between the POMC/PCSK1/LEPR CDx Panel and "validated orthogonal methods." Any discrepancy would likely trigger an investigation into its cause (e.g., failed sequencing or a problematic region) rather than expert adjudication of two conflicting test results. The reported 100% agreement implies no discrepancies requiring resolution, aside from a false positive CNV in an earlier precision run that "did not meet the threshold for a high confidence call and should not have been reported," suggesting internal criteria exist for handling ambiguous results.
For the Clinical Bridging Study, the comparison was between "local tests" and the "POMC/PCSK1/LEPR CDx Panel" and "orthogonal methods." The single discordant result mentioned was a patient who was initially included based on local testing but then found to have "no variants in POMC, PCSK1, or LEPR" by both the CDx panel and Sanger sequencing, leading to their removal from the study. This implies that if a discrepancy arises between a local test and the more rigorous methods validated for the CDx, the latter would take precedence, effectively acting as an adjudicator.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done
No, an MRMC comparative effectiveness study was not done. This device is an automated genetic sequencing panel, not an imaging device where human readers would interpret results. The performance is assessed against gold standard genetic methods.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done
Yes, the device's performance, as described in the analytical performance section (Precision, Method Comparison, Interference, Cross-contamination studies), represents standalone algorithm-only performance. The output of the device is genetic variant calls and interpretations, which are then compared to ground truth established by orthogonal methods. Human intervention is involved in sample preparation, sequencing operations, and potentially final review of reports, but the core "performance" measured here is the automated detection and interpretation of variants by the NGS pipeline.
7. The Type of Ground Truth Used
- Analytical Studies (Precision, Method Comparison, Interference, Cross-contamination, Stability): The ground truth was established by validated orthogonal methods. Specifically:
- Sanger sequencing for Single Nucleotide Variants (SNVs) and insertions/deletions < 50 base pairs.
- High-density gene-centric (HDGC) array comparative genomic hybridization (CGH) for larger Copy Number Variants (CNVs).
- Additionally, in stability studies, the "baseline sample" (freshly collected and processed) served as a reference for comparison.
- Clinical Bridging Study: The ground truth for comparison to the investigational device was initially "local genetic testing results" (standard of care), which were then cross-referenced and confirmed by the POMC/PCSK1/LEPR CDx Panel and the same orthogonal methods (Sanger sequencing or HDGC array CGH) used in the analytical validation.
8. The Sample Size for the Training Set
The document does not provide information on the sample size for a training set. This is common for genetic sequencing panels where the "algorithm" is based on established bioinformatics pipelines for sequence alignment, variant calling, and interpretation using public databases (like GRCh37/hg19 as a reference genome and ACMG guidelines for variant classification). These pipelines are typically developed and refined over time using large, publicly available datasets and internal validation samples rather than a single, defined "training set" in the context of machine learning model development. The focus is on the analytical validation of the test method itself against known variants and accepted standards.
9. How the Ground Truth for the Training Set was Established
As noted above, a distinct "training set" with its own ground truth establishment process is not described for this type of device. The bioinformatics pipeline's underlying principles (e.g., sequence alignment algorithms, variant calling parameters) are based on the fundamental science of genomics and likely validated through extensive use and comparison to existing reference standards and databases. Variant interpretations are specifically based on the 2015 American College of Medical Genetics and Genomics (ACMG) guidelines, which are established consensus guidelines for interpreting genetic variants.