The HAVAb IgG II assay is a chemiluminescent microparticle immunoassay (CMIA) used for the qualitative detection of IgG antibody to hepatitis A virus (IgG anti-HAV) in human adult and pediatric (4 through 21 years) serum (collected in serum and serum separator tubes) and plasma (collected in sodium heparin, lithium heparin separator, dipotassium EDTA, and tripotassium EDTA tubes) from patients with signs and symptoms of, or at risk for, hepatitis A on the Alinity i system.
The HAVAb IgG II assay is used to determine the immune status of individuals to hepatitis A virus (HAV) infection. Warning: This assay has not been cleared for use in screening blood, plasma, or tissue donors. This assay cannot be used for the diagnosis of acute HAV infection.
Assay performance characteristics have not been established when the HAVAb IgG II assay is used in conjunction with other hepatitis assays.
The kit includes reagents (Microparticles, Conjugate, Assay Diluent), a Calibrator, and Controls. The assay is an automated, two-step chemiluminescent microparticle immunoassay (CMIA) run on the Alinity i system.
Considering the provided document, the device described is an in vitro diagnostic (IVD) assay (HAVAb IgG II) for the qualitative detection of IgG antibody to hepatitis A virus (IgG anti-HAV). The FDA 510(k) summary focuses on establishing substantial equivalence to a predicate device, not on meeting specific acceptance criteria in the context of an AI/ML medical device's performance evaluation against clinical ground truth.
Therefore, many of the requested criteria (ground truth establishment by experts, adjudication methods, multi-reader multi-case studies, and separate training/test sets with their own ground truth) are generally not applicable to the performance claims made for this in vitro diagnostic device in this FDA submission. The document describes analytical and clinical performance studies typical for an IVD, focusing on agreement with a predicate device and on reproducibility/precision, rather than on the predictive performance of an AI model against a clinical endpoint established by human experts.
Based on the provided text, here's a breakdown of the information that is applicable and a clear indication where the requested information is not present or relevant to this type of device submission:
1. A table of acceptance criteria and the reported device performance
The document doesn't explicitly state "acceptance criteria" in a table format that would typically be found for an AI/ML model for diagnostic imaging (e.g., target sensitivity/specificity values). Instead, it presents performance data for comparison to a predicate device and for reproducibility. The implicit "acceptance criterion" for a 510(k) is demonstrating "substantial equivalence" to a legally marketed predicate device.
However, we can infer some performance metrics presented as evidence of equivalence:
| Performance Metric | Reported Device Performance (HAVAb IgG II) | Predicate Device Performance (ARCHITECT HAVAB-G), shown for comparison, not as acceptance criteria |
|---|---|---|
| PPA (Positive Percent Agreement) with Predicate: | | |
| - Increased Risk of HAV Infection Population (n=250) | 96.75% (95% CI: 91.94%, 98.73%) | N/A (this is agreement with the predicate) |
| - Signs and Symptoms of Hepatitis Infection (n=499) | 95.39% (95% CI: 92.42%, 97.24%) | N/A |
| - Pediatric Population (n=105) | 100.00% (95% CI: 95.91%, 100.00%) | N/A |
| NPA (Negative Percent Agreement) with Predicate: | | |
| - Increased Risk of HAV Infection Population (n=250) | 98.43% (95% CI: 94.44%, 99.57%) | N/A |
| - Signs and Symptoms of Hepatitis Infection (n=499) | 98.97% (95% CI: 96.34%, 99.72%) | N/A |
| - Pediatric Population (n=105) | 93.33% (95% CI: 70.18%, 98.81%) | N/A |
| Within-Laboratory Precision (20-Day): | | |
| - High Negative Panel 1 (0.71 S/CO) | SD: 0.028 (range 0.026-0.045) | Predicate's within-lab precision: 0.029-0.050 SD for samples ≤ 1.00 S/CO |
| - Negative Control (0.09 S/CO) | SD: 0.015 (range 0.011-0.035) | N/A |
| - Positive Control (2.19 S/CO) | %CV: 2.9 (range 2.5-4.0) | N/A |
| System Reproducibility (Multi-site): | | |
| - High Negative Panel A (0.66 S/CO) | SD: 0.053 | Predicate's reproducibility: 0.023-0.116 SD |
| - Low Positive Panel B (1.32 S/CO) | %CV: 5.2 | Predicate's reproducibility: 4.6-10.8 %CV |
| - Negative Control (0.11 S/CO) | SD: 0.046 | N/A |
| - Positive Control (2.26 S/CO) | %CV: 4.7 | N/A |
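The agreement statistics above can be reproduced from raw 2×2 concordance counts. A minimal sketch, computing PPA (or NPA) with a Wilson score 95% confidence interval, the interval type commonly used for these agreement measures; the counts `119` of `123` are illustrative back-calculations consistent with the reported 96.75% PPA, not figures taken from the submission:

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

def percent_agreement(concordant: int, total: int) -> tuple[float, float, float]:
    """Return (point estimate, CI lower, CI upper) as percentages."""
    lo, hi = wilson_ci(concordant, total)
    return 100 * concordant / total, 100 * lo, 100 * hi

# Illustrative: 119 of 123 predicate-positive specimens also reactive
# on the candidate assay (hypothetical counts, ~96.75% PPA).
ppa, lo, hi = percent_agreement(119, 123)
print(f"PPA {ppa:.2f}% (95% CI: {lo:.2f}%, {hi:.2f}%)")
```

NPA is computed the same way over the predicate-negative specimens; the denominator for each row of the table is therefore the predicate-positive (or predicate-negative) subset, not the full cohort n.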
2. Sample sizes used for the test set and the data provenance
- Clinical Performance Test Set (Agreement with Predicate):
- Individuals at Increased Risk of HAV Infection: n=250
- Individuals with Signs and Symptoms of Hepatitis Infection: n=499
- Pediatric Population: n=105
- Total for Agreement Study: 250 + 499 + 105 = 854 specimens.
- Data Provenance: Prospective multi-center study.
- Increased Risk of HAV: collected in California, Colorado, Florida, Illinois, Massachusetts, North Carolina, and Texas.
- Signs and Symptoms of Hepatitis: collected in California, Colorado, Florida, Illinois, Massachusetts, and Texas.
- Pediatric Population: collected in the US (California and Massachusetts) and Belgium.
- Precision/Reproducibility Test Sets:
- Within-Laboratory Precision: n=80 per sample/control for a representative combination (tested over 20 days, 2 replicates/day); 4 reagent/calibrator/instrument combinations were tested in total.
- System Reproducibility: n=360 per sample/control (tested at 3 sites, with 4 replicates/run, 2 runs/day, 5 days).
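The SD and %CV figures in the precision tables are related by %CV = 100 × SD / mean. A brief sketch with made-up replicate S/CO values; note that real precision studies decompose variance further (within-run, between-day, between-site) via nested ANOVA per CLSI EP05-style designs, which this sketch omits:

```python
import statistics

def precision_summary(values: list[float]) -> tuple[float, float, float]:
    """Return (mean, sample SD, %CV) for a set of replicate S/CO results."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)  # sample standard deviation (n-1)
    return mean, sd, 100 * sd / mean

# Hypothetical replicate S/CO values for a positive-control level
mean, sd, cv = precision_summary([2.20, 2.15, 2.25, 2.18, 2.22])
print(f"mean {mean:.2f} S/CO, SD {sd:.3f}, %CV {cv:.1f}")
```

This also explains why the tables report SD for low-signal samples and %CV for higher ones: near zero, the mean in the denominator makes %CV unstable, so SD is the more informative statistic.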
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Not applicable / Not stated. For this IVD device, the "ground truth" for the clinical performance study was the result produced by the legally marketed predicate device (ARCHITECT HAVAB-G). This is a common practice for 510(k) submissions for IVDs. There were no human experts adjudicating results for the purpose of establishing a "ground truth" beyond what the predicate device determined.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Not applicable. As the ground truth was the predicate device's result, no human adjudication method (like 2+1, 3+1) was used or described.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance
- Not applicable. This is an in vitro diagnostic assay, not an imaging AI device intended to assist human readers. Therefore, an MRMC study and evaluation of human reader improvement with AI assistance were not performed.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
- Yes, in spirit. The device (HAVAb IgG II assay) functions as a standalone test; it's an automated immunoassay that generates a qualitative result (Reactive or Nonreactive) without human interpretation in the loop influencing the output beyond sample collection and instrument operation. Its performance was assessed directly against the predicate device's results.
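The qualitative call is derived from the instrument's signal-to-cutoff (S/CO) ratio. A minimal sketch, assuming a cutoff of 1.00 S/CO with no grayzone; both the cutoff value and the absence of retest rules are assumptions for illustration, not details stated in this summary (the package insert governs the actual interpretation rules):

```python
def interpret_sco(sco: float, cutoff: float = 1.00) -> str:
    """Map an S/CO ratio to a qualitative result.

    The 1.00 cutoff is an assumption for illustration only.
    """
    return "Reactive" if sco >= cutoff else "Nonreactive"

print(interpret_sco(2.19))  # positive-control level from the precision table
print(interpret_sco(0.71))  # high-negative panel level
```

Under this assumed rule, the "high negative" panels in the precision tables (0.66-0.71 S/CO) sit deliberately close to, but below, the cutoff to stress the assay's discrimination near the decision point.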
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- The "ground truth" for the clinical performance comparison was the results from a legally marketed predicate device (ARCHITECT HAVAB-G assay). In essence, the performance of the new device was compared to the established performance of the predicate. This is a common form of "truth" in demonstrating substantial equivalence for IVDs.
8. The sample size for the training set
- Not explicitly stated in terms of a "training set" for model development. This is an immunoassay, not an AI/ML model where a distinct training dataset size is typically reported. The document describes analytical verification and clinical performance studies, not model training.
9. How the ground truth for the training set was established
- Not applicable. As this is not an AI/ML device in the sense of a machine learning model requiring a training set with a ground truth established by experts for supervised learning, this information is not provided. The development process for an immunoassay does not involve "training data" in the same way an AI/ML model does.
§ 866.3310 Hepatitis A virus (HAV) serological assays.
(a) Identification. HAV serological assays are devices that consist of antigens and antisera for the detection of hepatitis A virus-specific IgM, IgG, or total antibodies (IgM and IgG), in human serum or plasma. These devices are used for testing specimens from individuals who have signs and symptoms consistent with acute hepatitis to determine if an individual has been previously infected with HAV, or as an aid to identify HAV-susceptible individuals. The detection of these antibodies aids in the clinical laboratory diagnosis of an acute or past infection by HAV in conjunction with other clinical laboratory findings. These devices are not intended for screening blood or solid or soft tissue donors.
(b) Classification. Class II (special controls). The special control is “Guidance for Industry and FDA Staff: Class II Special Controls Guidance Document: Hepatitis A Virus Serological Assays.” See § 866.1(e) for the availability of this guidance document.