510(k) Data Aggregation (75 days)
NOVA Lite DAPI ANA Kit
NOVA Lite® DAPI ANA Kit is an indirect immunofluorescence assay for the qualitative detection and semi-quantitative determination of anti-nuclear antibodies of the IgG isotype in human serum by manual fluorescence microscopy or with the NOVA View Automated Fluorescence Microscope. The presence of anti-nuclear antibodies can be used in conjunction with other serological tests and clinical findings to aid in the diagnosis of systemic lupus erythematosus and other systemic rheumatic diseases. A trained operator must confirm results when generated with the NOVA View device.
The NOVA Lite DAPI ANA Kit is an indirect immunofluorescence assay for the detection and semiquantitative determination of anti-nuclear antibodies in human serum.
Kit components:
- HEp-2 (human epithelial cell) substrate slides; 12 wells/slide, with desiccant.
- FITC IgG Conjugate with DAPI, containing 0.09% sodium azide; ready to use.
- Positive Control: ANA Titratable Pattern, human serum with antibodies to HEp-2 nuclei in buffer, containing 0.09% sodium azide; pre-diluted, ready to use.
- Negative Control: IFA System Negative Control, diluted human serum with no ANA present, containing 0.09% sodium azide; pre-diluted, ready to use.
- PBS II (40x) Concentrate, sufficient for making 2000 mL of 1x PBS II.
- Mounting Medium, containing 0.09% sodium azide.
- Coverslips
The provided document describes the analytical and clinical performance of the NOVA Lite® DAPI ANA Kit, an indirect immunofluorescence assay for detecting anti-nuclear antibodies. The study focuses on demonstrating substantial equivalence to a predicate device and the agreement between manual microscopy, digital image interpretation, and the automated NOVA View system.
Here's a breakdown of the requested information:
1. Table of Acceptance Criteria and Reported Device Performance
| Performance Metric | Acceptance Criteria | Reported Device Performance |
| --- | --- | --- |
| Precision Performance | - Reactivity grades within one run (between replicates) are within ± one reactivity grade.<br>- Average reactivity grade difference between any two runs is within ± one reactivity grade.<br>- Pattern is consistent for 100% of replicates (positive results only). | - First set, digital reading: reactivity grade ranges met the criterion (e.g., 0-1, 1-2, 2-3, 4).<br>- Second set, digital reading: all grades within ± one reactivity grade within a run; average grade differed between runs by no more than one grade.<br>- Second set, manual reading: same result as digital reading.<br>- All sets: 100% pattern consistency for positive results, by both digital and manual reading, across all precision studies. |
| Conjugate Comparison (DAPI vs. non-DAPI) | - Agreement between the two conjugate sets > 85%.<br>- Pattern agreement (positive samples only) > 85%.<br>- Grades within ± one grade of each other for 90% of samples. | - Total agreement: 96.6% (94.3-98.1%).<br>- Positive agreement: 98.6% (95.9-99.7%).<br>- Negative agreement: 94.3% (90.1-97.1%).<br>- 100% of grades were within ± one grade of each other.<br>- Pattern discrepancy in only 3 of 210 positive samples, indicating high pattern agreement. |
| Lot-to-Lot Comparison | - Average negative, average positive, and total agreement above threshold (implied by meeting grade agreement).<br>- 100% of grades within ± 1 grade of each other for all samples in any pairwise comparison.<br>- 100% pattern agreement between lots for definitive patterns (positive samples only). | - Digital reading: average negative agreement 91.9-97.4%, average positive agreement 93.0-97.6%, total agreement 92.5-97.5%.<br>- Manual reading: average negative agreement 93.8-100%, average positive agreement 95.8-100%, total agreement 95.0-100%.<br>- Grade agreement: 100% of grades within ± 1 grade for all pairwise comparisons (digital and manual).<br>- Pattern agreement: 100% (digital and manual). |
| Accelerated Stability | Reactivity grades on slides stored at 37 °C for 2 weeks are within ± one grade of those on control slides (supporting a preliminary 1-year shelf life). | - All reactivity grades from the accelerated stability studies were within ± one grade of the control samples, for both digital and manual reading, across all three lots.<br>- Pattern consistency was also maintained. |
| Accuracy of Endpoint Titration (Manual vs. Digital) | Endpoints by digital reading are the same as, or within ± 1 dilution step of, manual reading for a high percentage of cases (implicitly demonstrating good agreement). | - Within ± 1 dilution step: 100% of cases at Site #1, 60% at Site #2, 90% at Site #3.<br>- All remaining cases were within ± 2 dilution steps. |
| Clinical Sample Agreement (Manual vs. Digital vs. NOVA View) | Agreement between digital image reading and manual reading > 90% at all three testing sites. | - Reproducibility cohort (120 samples): total agreement, manual vs. digital, 99.2% (Site 1), 95.8% (Site 2), 96.7% (Site 3).<br>- Clinical cohort (463 samples): total agreement, manual vs. digital, 91.4% (Site 1), 92.2% (Site 2), 92.2% (Site 3).<br>- Grade agreement (clinical cohort): fluorescence intensity grades by digital image reading were within ± one dilution step of manual reading in 96.3% (Site 1), 99.1% (Site 2), and 99.6% (Site 3) of samples.<br>- Pattern agreement (clinical cohort): above 90% at all three sites (94.7% Site 1, 91.6% Site 2, 95.7% Site 3). |
| Clinical Sensitivity & Specificity | Sensitivity and specificity at each site should have overlapping confidence intervals between NOVA View classification, digital image reading, and manual reading, indicating no significant differences. | - Site 1: SLE sensitivity NV 80.0%, manual 72.0%, digital 80.0%; SARD+AIL sensitivity NV 69.4%, manual 62.9%, digital 69.9%; specificity NV 75.3%, manual 74.1%, digital 72.4%.<br>- Site 2: SLE sensitivity NV 72.0%, manual 70.7%, digital 73.3%; SARD+AIL sensitivity NV 62.9%, manual 65.6%, digital 62.98%; specificity NV 77.0%, manual 67.2%, digital 75.3%.<br>- Site 3: SLE sensitivity NV 82.7%, manual 82.7%, digital 81.3%; SARD+AIL sensitivity NV 72.0%, manual 71.0%, digital 69.4%; specificity NV 69.0%, manual 67.2%, digital 71.3%.<br>- No statistically significant differences were found between the reading methods. |
| CDC ANA Reference Sera | All reference sera should produce the expected pattern; NOVA View digital image interpretation within ± one reactivity grade of manual interpretation; no discrepancies in pattern interpretation. | - All reference sera produced the expected pattern.<br>- Digital image interpretation results were within ± one reactivity grade of manual interpretation.<br>- No discrepancies in pattern interpretation were seen between manual and digital results. |
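The agreement figures in the table (e.g., total agreement 96.6% with a 94.3-98.1% confidence interval) follow from a standard 2x2 concordance table plus a confidence interval on the proportion. The sketch below is illustrative only: the 2x2 counts are hypothetical, not taken from the submission, and the Wilson score interval is one common choice (the document does not say which CI method was used).

```python
def agreement_stats(tp, fn, fp, tn):
    """Percent-agreement figures of the kind reported above.

    tp/tn are concordant cells (both methods positive / both negative);
    fn/fp are discordant cells relative to the reference method.
    """
    return {
        "positive_agreement": tp / (tp + fn),   # agreement on reference-positive samples
        "negative_agreement": tn / (tn + fp),   # agreement on reference-negative samples
        "total_agreement": (tp + tn) / (tp + fn + fp + tn),
    }

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% confidence interval for a proportion k/n."""
    p = k / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * ((p * (1 - p) / n + z * z / (4 * n * n)) ** 0.5) / denom
    return centre - half, centre + half

# Hypothetical counts for a 407-sample comparison (210 positive, 197 negative):
stats = agreement_stats(tp=207, fn=3, fp=11, tn=186)
lo, hi = wilson_ci(207 + 186, 407)
```

With these made-up counts, total agreement is 393/407 ≈ 96.6%, and the interval brackets the point estimate, mirroring the structure of the reported values.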
2. Sample Sizes Used for the Test Set and Data Provenance
- Precision/Reproducibility Studies:
- First Set: 13 samples (3 negative, 10 positive), processed in 3 replicates across 10 runs (30 data points per sample).
- Second Set: 22 samples (20 negative/around cut-off, 2 strong positive), processed in 3 replicates across 10 runs (30 data points per sample).
- Third Set: Samples tested in triplicates or duplicates across 5 runs (15 or 10 data points per sample).
- Conjugate Comparison: 407 individual human serum samples.
- Method Comparison: 410 samples (400 clinically characterized sera, 10 samples with known ANA patterns).
- Lot-to-Lot Comparison: 40 sera.
- Endpoint Titration Accuracy: 10 ANA positive samples.
- Agreement on Clinical Sample Cohort (Reproducibility): 120 samples at each of 3 sites.
- Clinical Performance (Clinical Sensitivity and Specificity): 463 clinically characterized samples at each of 3 sites.
- CDC ANA Reference Sera: 12 reference sera.
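The endpoint-titration criterion ("within ± 1 dilution step") can be made concrete by counting two-fold dilution steps between the manual and digital endpoint titers. The following sketch assumes a two-fold dilution series (1:40, 1:80, 1:160, ...); the titer values are hypothetical, not taken from the study.

```python
import math

def dilution_steps_apart(titer_a, titer_b):
    """Number of two-fold dilution steps between two endpoint titers."""
    return abs(round(math.log2(titer_a / titer_b)))

def within_one_step(manual_titers, digital_titers):
    """Fraction of samples whose digital endpoint is within +/- 1 step of manual."""
    hits = sum(1 for m, d in zip(manual_titers, digital_titers)
               if dilution_steps_apart(m, d) <= 1)
    return hits / len(manual_titers)

# 10 samples, as in the endpoint titration study (values are made up):
manual  = [40, 80, 160, 320, 640, 80, 160, 40, 320, 1280]
digital = [40, 160, 160, 320, 320, 80, 640, 40, 320, 1280]
within_one_step(manual, digital)  # -> 0.9 for these hypothetical values
```

A site reporting "90% within ± 1 dilution step" corresponds to a fraction of 0.9 computed this way.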
Data Provenance:
- The studies were conducted by Inova Diagnostics (Site #1) and two external sites (Site #2 and Site #3). The country of origin of the patient samples is not explicitly stated for all cohorts, though the use of external sites suggests the samples were drawn from more than one patient population.
- The studies were retrospective, using "clinically characterized samples" and "individual serum samples" that were already available.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
The document states that "A trained operator must confirm results when generated with the NOVA View device" and that, for several studies, "all slides were read by the same operator with manual microscopy." The number of experts used to establish the ground truth is not broken down per sample; the studies imply internal reading by trained operators.
- Precision studies, conjugate comparison, accelerated stability, CDC reference sera: "The slides were read with NOVA View, and digital images were interpreted by the operator," and "all slides were read by the same operator with manual microscopy." This suggests at least one trained operator served as the reference reader for resolving reading discrepancies.
- Endpoint titration and reproducibility/clinical performance studies: "all slides were read by the same operator with manual microscopy" to generate the reference results. These studies were carried out at three sites (Inova Diagnostics and two external locations), implying that a trained operator at each site was responsible for the manual readings. Their qualifications are not specified beyond "trained operator."
4. Adjudication Method for the Test Set
The primary method for establishing agreement and performance appears to be comparison against manual microscopy readings by a "trained operator." No explicit adjudication protocol (e.g., 2+1 or 3+1 consensus) is described for discrepant results, whether between the automated system and manual reading or between multiple manual readers. The trained operator performing the manual reading effectively serves as the reference standard against which the digital and NOVA View results are compared. For the NOVA View results, the statement that "Digital images were interpreted and confirmed" implies a human review step for each result.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance
No explicit MRMC comparative effectiveness study involving AI-assistance performance improvement for human readers is described. The studies primarily focus on the agreement and equivalence between:
- Manual reading (human only, traditional method)
- Digital image reading (human interpreting images from the automated system)
- NOVA View output (raw automated classification)
The design compares the performance of the automated system and its digital interpretation against the manual method, rather than quantifying how much human readers improve when assisted by the AI in making their initial assessments. The phrasing "A trained operator must confirm results when generated with the NOVA View device" suggests that the human remains in the loop for final confirmation, but the study doesn't isolate the "effect size" of this assistance.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
Yes, partial standalone performance data is presented as "NOVA View output".
"NOVA View" refers to "raw results obtained with the NOVA View Automated Fluorescence Microscope, such as Light Intensity Units (LIU), positive/negative classification and pattern information." These raw results are then compared against "Digital reading" (human interpretation of NOVA View images) and "Manual reading" (human interpretation of actual slides).
For example, in the "Precision performance" section, NOVA View output (standalone) is compared to digital image reading. In the "Clinical performance" section, NOVA View classification (standalone) is compared to digital image reading and manual reading for sensitivity, specificity, and agreement.
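Conceptually, the standalone NOVA View output reduces to a positive/negative call derived from the measured Light Intensity Units. The sketch below is a minimal illustration of that idea; the cutoff value and function name are hypothetical, since this summary does not state the actual NOVA View cutoff or whether a borderline zone exists.

```python
# Hypothetical cutoff for illustration only; the real NOVA View cutoff
# is not disclosed in this 510(k) summary.
HYPOTHETICAL_CUTOFF_LIU = 48.0

def classify_liu(liu: float) -> str:
    """Positive/negative call from a light-intensity-unit (LIU) value."""
    return "positive" if liu >= HYPOTHETICAL_CUTOFF_LIU else "negative"

# Standalone calls like these are what gets compared against digital and
# manual reads when computing the agreement and sensitivity/specificity figures.
calls = [classify_liu(v) for v in (12.0, 47.9, 48.0, 150.0)]
# -> ['negative', 'negative', 'positive', 'positive']
```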
7. The Type of Ground Truth Used
The primary ground truth used for performance validation is expert consensus/manual interpretation by experienced "trained operators" using traditional fluorescence microscopy. This is explicitly stated in multiple sections, for instance: "all slides were read by the same operator with manual microscopy" serving as a reference.
For the clinical performance section, "clinically characterized samples" are used, implying that patient diagnoses (e.g., SLE, SSc, SS, etc.) served as the clinical classification for sensitivity/specificity calculations, but the ANA ground truth itself (positive/negative, pattern, grade) within those clinical cohorts was established by manual interpretation.
The CDC ANA reference sera also represent a form of "known ground truth" based on established reference standards and known antibody specificities.
8. The Sample Size for the Training Set
The document does not explicitly state the sample size of the training set used to develop or train the NOVA View automated system's algorithms. The focus of this 510(k) submission is on the validation of the NOVA Lite® DAPI ANA Kit, which includes its use with the previously cleared NOVA View device (DEN140039). Training data details for NOVA View itself would likely be in its original submission.
9. How the Ground Truth for the Training Set Was Established
Since the document does not specify the training set used for the NOVA View algorithm, it also does not describe how its ground truth was established. This information would typically be found in the original submission for the NOVA View device itself (DEN140039).