K Number
K955136
Manufacturer
Date Cleared
1996-07-29 (259 days)
Product Code
Regulation Number
866.3820
Panel
MI
Reference & Predicate Devices
N/A
Intended Use

Not Found

Device Description

Not Found

AI/ML Overview

This K955136 submission is for the RPR Card Test Kit, a device for detecting Treponema pallidum infection. The provided documents describe the performance characteristics of two RPR antigens (Ag#1 and Ag#2) compared to existing RPR antigens.

While the documents indicate that the RPR Card Test Kit is "effective when tested under the conditions for its intended use" and that the package insert includes "relative sensitivity and specificity data," they do not explicitly state quantitative acceptance criteria or provide a table of performance against such criteria. The focus is on comparing the REMEL-manufactured antigens to existing and CDC-approved antigens.

Below is an attempt to extract and infer the requested information from the provided text:

1. Table of Acceptance Criteria and Reported Device Performance

Because explicit, quantitative acceptance criteria are not provided, the general acceptance criterion can be inferred to be "satisfactory performance" as judged by the CDC, together with consistency with the comparator tests and clinical history.

| Acceptance Criterion (Inferred) | Reported Device Performance (Ag#1 - REMEL manufactured) | Reported Device Performance (Ag#2 - LEE Labs manufactured) |
|---|---|---|
| Overall Performance / Satisfactory Rating (by CDC) | "Satisfactory" by the CDC. | "Satisfactory" by the CDC. |
| Consistency with Comparator RPR Antigen (BD RPR for Ag#1 and Ag#2; Difco USR for Ag#2) | No significant discrepancies compared to the BD RPR antigen at the University of Texas. | Generally consistent with the BD RPR antigen at the University of Texas and the Difco USR antigen at the Michigan Dept. of Health. Some samples reactive with the BD antigen (undiluted endpoint titer) at the U. of Texas were nonreactive with Ag#2; this was not seen as a significant discrepancy at the Michigan site and was consistent with the USR result. |
| Consistency in Titer (for reactive samples) | No statistically significant difference in performance (titer) compared to BD RPR for reactive samples. | Not explicitly stated for titer, but overall performance was considered consistent. |
| Consistency with MHA-TP (for reactive samples) | Results consistent with MHA-TP for all reactives. | Results consistent with MHA-TP for all reactives. |
| Consistency with Clinical History (stage of infection, treatment history) | Reactive results consistent with history of treatment and staging of illness. | Reactive results consistent with history of treatment and staging of illness. |
| Impact of Patient Gender on Outcome | Patient gender did not affect the outcome of the tests. | Patient gender did not affect the outcome of the tests. |
| Repeatability/Reproducibility | Not explicitly stated for Ag#1. | "Repeat testing of the same samples with the same material at the U. of Texas yielded the same results" for discrepant samples, implying good internal consistency; the Michigan site also showed consistency with known results despite the initial U. of Texas discrepancies. |
| Relative Sensitivity and Specificity | "Each insert contains information on the limitations and the performance characteristics of the test (including relative sensitivity and specificity data) when used according to the directions." | Same labeling statement as for Ag#1. |
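
Because the submission reports "relative" sensitivity and specificity (agreement with a comparator RPR antigen rather than with true disease status), a minimal sketch of how such figures are conventionally derived from a 2x2 concordance table is shown below. The function and the example counts are illustrative assumptions, not data or methods from K955136.

```python
def relative_performance(a, b, c, d):
    """Relative sensitivity/specificity of a candidate antigen versus a comparator.

    2x2 concordance counts:
        a = candidate reactive,    comparator reactive
        b = candidate reactive,    comparator nonreactive
        c = candidate nonreactive, comparator reactive
        d = candidate nonreactive, comparator nonreactive
    """
    relative_sensitivity = a / (a + c)            # positive agreement with the comparator
    relative_specificity = d / (b + d)            # negative agreement with the comparator
    overall_agreement = (a + d) / (a + b + c + d)
    return relative_sensitivity, relative_specificity, overall_agreement


# Illustrative counts only -- not taken from the submission.
sens, spec, agree = relative_performance(a=48, b=1, c=2, d=149)
print(f"relative sensitivity {sens:.1%}, relative specificity {spec:.1%}, agreement {agree:.1%}")
```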

2. Sample Size Used for the Test Set and Data Provenance

The exact sample sizes for the test sets are not stated as a total number of specimens. The text notes that "many of the reactive samples" were titered or had documented clinical histories. For Ag#2, a subset consisting of the "U. of Texas discrepant samples along with 12 known reactive samples and 12 known nonreactive samples" was sent blinded to Michigan.

  • Data Provenance: The studies were conducted at the University of Texas and the Michigan Department of Public Health, both in the United States. The data appear to be retrospective clinical samples, given the references to "patient genders" and "stage of infection with Treponema pallidum and history of treatment."

3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

The ground truth appears to be established through a combination of:

  • Comparator tests: BD RPR antigen, MHA-TP, Difco USR antigen. These are established laboratory tests.
  • Clinical history: "Stage of infection with Treponema pallidum and history of treatment."

The individuals involved in the evaluation were:

  • Dr. Beth Hartwell at the University of Texas.
  • Mr. Harlan Stiefel at the Michigan Department of Public Health.
  • The CDC (for initial evaluation and "satisfactory" determination).

Their specific qualifications (e.g., number of years of experience in serology or infectious disease diagnostics) are not detailed, beyond their institutional affiliations. It is implied that they are experts in their respective fields capable of conducting and evaluating these tests.

4. Adjudication Method

The adjudication method is not explicitly stated as a formal process like "2+1" or "3+1." The evaluation involves comparative testing against established methods (BD RPR, Difco USR, MHA-TP) and consistency with clinical records. Discrepancies were noted and, in the case of Ag#2, a subset of discrepant samples along with known samples were sent "blinded" to the Michigan site for re-evaluation and verification, indicating a form of external review. However, a structured adjudication protocol for the entire test set is not described.
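
To illustrate the blinded re-check described above (University of Texas discrepant samples shipped to Michigan together with 12 known reactive and 12 known nonreactive samples), a minimal concordance tabulation is sketched below. The sample identifiers and results are hypothetical; the submission does not describe any scripted analysis.

```python
from collections import Counter

def tabulate_blinded_panel(expected, observed):
    """Count (expected, observed) result pairs for a blinded verification panel.

    expected, observed: dicts mapping sample ID -> "reactive" or "nonreactive".
    """
    return Counter((expected[s], observed[s]) for s in expected)

# Hypothetical five-sample panel: 2 known reactive, 2 known nonreactive, 1 previously discrepant.
expected = {"R1": "reactive", "R2": "reactive",
            "N1": "nonreactive", "N2": "nonreactive",
            "D1": "reactive"}
observed = {"R1": "reactive", "R2": "reactive",
            "N1": "nonreactive", "N2": "nonreactive",
            "D1": "reactive"}
print(tabulate_blinded_panel(expected, observed))
# e.g. Counter({('reactive', 'reactive'): 3, ('nonreactive', 'nonreactive'): 2})
```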

5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

There is no indication of a Multi-Reader Multi-Case (MRMC) comparative effectiveness study being done to assess human reader improvement with or without AI assistance. This device is a diagnostic kit, not an AI-assisted interpretation tool for human readers.

6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study

This request is not applicable to the RPR Card Test Kit. This is a manual diagnostic test kit, not an algorithm or AI system. Its performance inherently involves human interpretation of results (e.g., flocculation). The study assesses the performance of the antigen/reagent, not a standalone algorithm.

7. Type of Ground Truth Used

The ground truth used is a combination of:

  • Expert Consensus/Established Comparator Tests: the CDC's "Satisfactory" rating and agreement with the BD RPR antigen, the Difco USR antigen, and MHA-TP (Microhemagglutination Assay for Treponema pallidum), a treponemal-specific test often used to confirm RPR results.
  • Outcomes/Clinical Data: "Stage of infection with Treponema pallidum and history of treatment." This clinical information serves as a crucial part of establishing the true status of the patient (e.g., reactive RPR should align with active infection or treated infection history).
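
As a rough sketch of how such a composite reference could be expressed, the function below assigns a reference classification from a treponemal confirmatory result (e.g., MHA-TP) and documented clinical history. The field names and the decision rule are assumptions for illustration only; no such rule is stated in the submission.

```python
def composite_reference(mha_tp_reactive: bool, documented_syphilis_history: bool) -> str:
    """Illustrative composite reference classification for one specimen (assumed rule)."""
    if mha_tp_reactive and documented_syphilis_history:
        return "syphilis (current or previously treated) per composite reference"
    if mha_tp_reactive:
        return "treponemal antibody detected; correlate with clinical stage"
    return "no serological evidence of syphilis"

print(composite_reference(mha_tp_reactive=True, documented_syphilis_history=True))
```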

8. Sample Size for the Training Set

No explicit training set is mentioned. For diagnostic kits like this, the "training" equivalent would typically involve formulation and initial in-house testing to optimize the antigen and kit components. The provided text describes the evaluation of already formulated antigens (Ag#1 and Ag#2) that were deemed "Satisfactory" by the CDC prior to the reported comparison studies.

9. How the Ground Truth for the Training Set Was Established

Since no explicit training set is mentioned in the context of algorithm development, this question is not directly applicable. However, the "ground truth" for the initial development and "satisfactory" determination by the CDC would likely involve:

  • Known positive and negative samples (well-characterized clinical specimens).
  • Comparison to standard reference preparations or established, validated RPR tests.
  • Correlation with other serological tests for syphilis (e.g., FTA-ABS, MHA-TP) and clinical diagnosis.

The "satisfactory" rating from the CDC for both antigens before their use in the reported studies implies that a ground truth was established for the antigens themselves, but the details of that establishment are not provided within these documents.

§ 866.3820

Treponema pallidum nontreponemal test reagents.

(a) Identification. Treponema pallidum nontreponemal test reagents are devices that consist of antigens derived from nontreponemal sources (sources not directly associated with treponemal organisms) and control sera (standardized sera with which test results are compared) used in serological tests to identify reagin, an antibody-like agent, which is produced from the reaction of treponema microorganisms with body tissues. The identification aids in the diagnosis of syphilis caused by microorganisms belonging to the genus Treponema and provides epidemiological information on syphilis.

(b) Classification. Class II (performance standards).