510(k) Data Aggregation

    K Number: DEN140039
    Date Cleared: 2015-04-09 (115 days)
    Regulation Number: 866.4750
    Device Name: NOVA View® Automated Fluorescence Microscope

    Intended Use

    NOVA View® Automated Fluorescence Microscope is an automated system consisting of a fluorescence microscope and software that acquires, stores and displays digital images of stained indirect immunofluorescent slides. It is intended as an aid in the detection and classification of certain antibodies by indirect immunofluorescence technology. The device can only be used with cleared or approved in vitro diagnostic assays that are indicated for use with the device. A trained operator must confirm results generated with the device.

    Device Description

    NOVA View® is an automated fluorescence microscope. The instrument does not process samples. The instrument acquires digital images of representative areas of indirect immunofluorescent slides.

    Hardware components:

    • PC and monitor
    • Keyboard and mouse
    • Microscope
    • Microscope control unit
    • Slide stage
    • LED illumination units
    • Handheld LED display unit
    • Camera
    • Two fans
    • Printer (optional)
    • UPS (optional) or surge protector
    • Handheld barcode scanner (optional)

    AI/ML Overview

    1. Acceptance Criteria and Reported Device Performance

    The device under evaluation is the NOVA View® Automated Fluorescence Microscope; its performance with the NOVA Lite® DAPI ANA Kit was assessed in accuracy and reproducibility studies. The acceptance criteria are implicitly derived from comparisons to manual reading (the reference standard) and to digital reading by a human operator, with targets typically expressed as agreement percentages or as consistent classification and pattern recognition. The accuracy study assesses sensitivity and specificity, while the reproducibility study examines agreement within and between sites and operators.

    Here's a summary of the reported device performance against these implicit acceptance criteria, focusing on key metrics from the provided text:

    • Accuracy (Detection) — Agreement between NOVA View® and Manual reading for positive/negative classification
      Criterion (implicit): High agreement (e.g., >80-90%) for positive and negative classifications, indicating that the NOVA View® system's automated calls align well with human expert interpretation.
      Reported: Site #1: positive agreement 88.3% (82.5-92.7), negative agreement 90.4% (86.4-93.5), total agreement 89.6% (86.5-92.3). Site #2: positive agreement 80.5% (74.2-85.9), negative agreement 96.3% (93.4-98.2), total agreement 89.8% (86.7-92.4). Site #3: positive agreement 86.1% (80.7-90.5), negative agreement 87.8% (83.1-91.6), total agreement 87.0% (83.6-90.0).

    • Accuracy (Detection) — Sensitivity for specific disease conditions (e.g., SLE)
      Criterion (implicit): NOVA View® and Digital reads should be comparable to, or ideally better than, the Manual read.
      Reported: Site #1: Manual 72.0% (SLE), 62.9% (CTD+AIL); Digital 80.0% (SLE), 69.9% (CTD+AIL); NOVA View® 80.0% (SLE), 69.4% (CTD+AIL). Site #2: Manual 70.7% (SLE), 65.6% (CTD+AIL); Digital 73.3% (SLE), 62.9% (CTD+AIL); NOVA View® 72.0% (SLE), 62.9% (CTD+AIL). Site #3: Manual 82.7% (SLE), 71.0% (CTD+AIL); Digital 81.3% (SLE), 69.4% (CTD+AIL); NOVA View® 82.7% (SLE), 72.0% (CTD+AIL).

    • Accuracy (Detection) — Specificity (excluding healthy subjects)
      Criterion (implicit): NOVA View® and Digital reads should be comparable to, or ideally better than, the Manual read.
      Reported: Site #1: Manual 74.1%; Digital 72.4%; NOVA View® 75.3%. Site #2: Manual 67.2%; Digital 75.3%; NOVA View® 77.0%. Site #3: Manual 67.2%; Digital 71.3%; NOVA View® 69.0%.

    • Accuracy (Pattern Recognition) — Agreement between NOVA View® and Manual for pattern identification
      Criterion (implicit): High agreement (e.g., >70% for definitive patterns), indicating that the automated system classifies patterns as human experts do.
      Reported: Accuracy study: Site #1: 76.0%; Site #2: 86.3%; Site #3: 72.7%. Reproducibility study: Site #1: 78.9%; Site #2: 83.3%; Site #3: 80.4%.

    • Precision/Reproducibility — Repeatability (internal consistency) for positive/negative classification
      Criterion (implicit): High consistency (e.g., >95%) for samples not near the cut-off.
      Reported: For samples away from the cut-off, NOVA View® output showed 100% positive or negative classification. For samples near the cut-off, variability was observed.

    • Precision/Reproducibility — Repeatability (internal consistency) for pattern consistency
      Criterion (implicit): 100% consistency for pattern determination in positive samples.
      Reported: Pattern determination was consistent for 100% of replicates for positive samples (digital image reading and manual reading). NOVA View® pattern classification was correct for >80% of cases (excluding unrecognized).

    • Precision/Reproducibility — Within-site reproducibility (operator and method agreement)
      Criterion (implicit): High total agreement (e.g., >90-95%) between operators and reading methods within a site.
      Reported: Site #1: NOVA View® vs Manual 99.2%; Digital vs Manual 99.2%; Digital vs NOVA View® 100.0%. Site #2: NOVA View® vs Manual 96.7%; Digital vs Manual 95.8%; Digital vs NOVA View® 95.8%. Site #3: NOVA View® vs Manual 96.7%; Digital vs Manual 96.7%; Digital vs NOVA View® 98.3%.

    • Precision/Reproducibility — Between-site reproducibility (method agreement across sites)
      Criterion (implicit): High overall agreement (e.g., >90-95%) across sites for all reading methods.
      Reported: Manual: Site #1 vs #2: 90.7%; Site #1 vs #3: 85.7%; Site #2 vs #3: 87.3%. Digital: Site #1 vs #2: 92.0%; Site #1 vs #3: 93.1%; Site #2 vs #3: 92.0%. NOVA View®: Site #1 vs #2: 92.7%; Site #1 vs #3: 89.6%; Site #2 vs #3: 87.9%.

    • Single Well Titer (SWT) — Accuracy compared to Manual and Digital endpoints
      Criterion (implicit): High agreement, with the estimated titer within ±1 or ±2 dilution steps.
      Reported: SWT results were within ±2 dilution steps of the manual endpoint for 96% (48/50) of samples and of the digital endpoint for 98% (49/50) of samples in the initial validation. In the clinical study, SWT was within ±2 dilution steps for all 20 samples at all three locations.
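
    For illustration, here is a minimal Python sketch of how the positive/negative/total agreement figures and their accompanying confidence intervals could be computed from paired reads. The submission does not state which interval method was used; the Wilson score interval below is an assumption, and the sample data is invented.

```python
# Sketch: percent agreement of device calls vs. manual reads, with 95% CIs.
# The Wilson score interval is an assumption; the data is illustrative only.
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

def agreement(device: list[str], manual: list[str]) -> dict[str, str]:
    """Positive, negative, and total percent agreement against manual reads."""
    pos_n = sum(1 for m in manual if m == "positive")
    pos_a = sum(1 for d, m in zip(device, manual) if m == "positive" and d == m)
    neg_n = sum(1 for m in manual if m == "negative")
    neg_a = sum(1 for d, m in zip(device, manual) if m == "negative" and d == m)
    out = {}
    for label, a, n in [("positive", pos_a, pos_n),
                        ("negative", neg_a, neg_n),
                        ("total", pos_a + neg_a, len(manual))]:
        lo, hi = wilson_ci(a, n)
        out[label] = f"{100 * a / n:.1f}% ({100 * lo:.1f}-{100 * hi:.1f})"
    return out

# Toy data, not the study's samples:
device = ["positive", "positive", "negative", "negative", "positive"]
manual = ["positive", "negative", "negative", "negative", "positive"]
print(agreement(device, manual))
```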

    2. Sample Sizes and Data Provenance

    • Test Set (Accuracy Study): 463 clinically characterized samples.
      • Data Provenance: Whether the samples were collected retrospectively or prospectively is not stated. Testing was performed at three locations: one internal site (Site #1) and two external sites (Sites #2 and #3). The countries of origin are not explicitly stated, but the mention of "U.S. sites" in special controls (2)(ii)(B) suggests primary relevance to the US context.
    • Test Set (Reproducibility Study): 120 samples, with the same cohort processed at each of the three locations (360 determinations in total).
      • Data Provenance: Conducted at Inova Diagnostics (internal; Site #1) and two external sites (Sites #2 and #3).
    • Test Set (Repeatability Study 1): 13 samples (3 negative, 10 positive), tested in triplicate across 10 runs (30 data points per sample).
    • Test Set (Repeatability Study 2): 22 samples (20 borderline/cut-off, 2 high intensity), tested in triplicate across 10 runs (30 data points per sample).
    • Test Set (Repeatability Study 3): 8 samples, tested in triplicate or duplicate across 5 runs (10-15 data points per sample).
    • SWT Validation Study 1: 50 ANA positive samples.
    • SWT Validation Study 2: 20 ANA positive samples tested at each of the three locations (60 determinations in total).
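
    As a sketch of how the repeatability results above could be summarized, the snippet below computes per-sample classification consistency across replicate calls (e.g., the 30 data points per sample in Repeatability Studies 1 and 2). The data is illustrative, not the study's.

```python
# Sketch: summarizing repeatability as consistency of classification per
# sample across replicates (triplicate x 10 runs = 30 calls per sample).
from collections import Counter

def classification_consistency(replicates: list[str]) -> float:
    """Fraction of replicate calls matching the modal classification."""
    modal_count = Counter(replicates).most_common(1)[0][1]
    return modal_count / len(replicates)

# A sample far from the cut-off: all 30 replicates agree -> 100% consistency.
away_from_cutoff = ["positive"] * 30
# A borderline sample near the cut-off: calls flip across runs.
near_cutoff = ["positive"] * 17 + ["negative"] * 13

print(classification_consistency(away_from_cutoff))  # 1.0
print(classification_consistency(near_cutoff))       # ~0.57
```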

    3. Number of Experts and Qualifications for Ground Truth

    • Accuracy Study, Reproducibility Study, and SWT Validation: For "Manual" reading (the reference standard), "trained human operators" performed the interpretations. For "Digital" reading, "trained human operators" interpreted the software-generated images, blinded to automated results.
    • Qualifications: The document consistently refers to "trained operators" and "trained human operators." Specific professional qualifications (e.g., "radiologist with 10 years of experience") are not explicitly provided, but the context implies experienced clinical laboratory personnel proficient in indirect immunofluorescence microscopy.

    4. Adjudication Method for the Test Set

    • Accuracy Study: No formal adjudication is explicitly described. The comparison was a "three-way method comparison of NOVA View® automated software-driven result (NOVA View®) compared to the Digital image reading... by a trained operator who was blinded to the automated result (Digital) and compared to the reference standard of conventional IIF manual microscopy (Manual)." The Manual reading served as the key reference. Clinical truth for sensitivity/specificity was determined independently of the three reading methods.
    • Reproducibility Study: No formal adjudication process is detailed between the different reading methods or operators. Agreement simply refers to concordance between the specified interpretations. For between-operator agreement, multiple operators at each site interpreted the same digital images.
    • SWT Validation Studies: The "manual endpoint" determined by an operator using a traditional microscope served as a primary reference for comparison with the SWT application's endpoint.
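
    The ±2 dilution-step criterion used in the SWT comparisons is easy to make concrete: with doubling dilutions, the step difference between two endpoint titers is the base-2 log of their ratio. A minimal sketch:

```python
# Sketch: checking whether a single-well titer (SWT) estimate falls within
# ±2 doubling-dilution steps of the manual endpoint. Titers are expressed as
# reciprocal dilutions (1:40 -> 40); each step doubles the dilution.
from math import log2

def within_steps(swt_titer: int, manual_titer: int, max_steps: int = 2) -> bool:
    """True if the two endpoint titers differ by at most max_steps doublings."""
    return abs(log2(swt_titer / manual_titer)) <= max_steps

print(within_steps(160, 40))  # True: 2 steps apart (40 -> 80 -> 160)
print(within_steps(640, 40))  # False: 4 steps apart
```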

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Yes, a form of MRMC study was conducted for reproducibility: multiple operators ("Operator #1" and "Operator #2" at each site) interpreted the same digital images, and their results were compared.
    • Effect Size of Human Readers with AI vs. without AI: The document does not provide a direct "effect size" in terms of how much human readers improve with AI assistance versus without. Instead, it measures the agreement between manual reading (without AI assistance, as the traditional method) and digital image reading (human-in-the-loop with AI-provided images).
      • For example, in the Accuracy Study, comparing "Digital vs. Manual" total agreement was: Site #1 (91.4%), Site #2 (92.2%), Site #3 (92.2%). This indicates a high level of agreement between human interpretation of digital images (AI-assisted display) and manual microscopy, suggesting that the digital images are comparable to traditional microscopy.
      • Furthermore, "Between Operator Agreement" for digital image reading showed very high agreement (e.g., 99.2% for Site #1 Op #1 vs. Site #1 Op #2), indicating consistency among human readers using the digital system.
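
    The submission reports between-operator agreement only as raw percent agreement. A chance-corrected statistic such as Cohen's kappa is a common complement (not reported in the document); a sketch with hypothetical data:

```python
# Sketch: chance-corrected between-operator agreement (Cohen's kappa).
# Shown only as a complementary statistic; the data below is hypothetical.
from collections import Counter

def cohens_kappa(op1: list[str], op2: list[str]) -> float:
    """Cohen's kappa for two raters over the same cases."""
    n = len(op1)
    observed = sum(1 for a, b in zip(op1, op2) if a == b) / n
    c1, c2 = Counter(op1), Counter(op2)
    expected = sum(c1[k] * c2[k] for k in c1) / n**2  # agreement by chance
    return (observed - expected) / (1 - expected)

# Hypothetical: 100 samples, operators disagree on 2 borderline cases.
op1 = ["positive"] * 55 + ["negative"] * 45
op2 = ["positive"] * 53 + ["negative"] * 47
print(round(cohens_kappa(op1, op2), 3))  # 98% raw agreement, kappa ~0.96
```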

    6. Standalone (Algorithm Only) Performance

    • Yes, standalone (algorithm-only) performance was evaluated in several respects; a sketch of the implied decision rule follows this list.
      • The "NOVA View®" results explicitly refer to "results obtained with the NOVA View® Automated Fluorescence Microscope, such as Light Intensity Units (LIU), positive/negative classification and pattern information without operator interpretation." This represents the algorithm's standalone output before human review.
      • The accuracy and reproducibility tables compare "NOVA View®" (standalone algorithm) directly against "Manual" reading (reference standard) and "Digital" reading (human-in-the-loop).
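
    The standalone output described above (LIU, positive/negative classification, and pattern, with no operator input) implies a decision rule of roughly the following shape. The cut-off value and pattern handling below are placeholders, not the device's actual proprietary logic:

```python
# Sketch: the general shape of a standalone LIU-based decision rule. The
# cut-off and pattern labels are hypothetical placeholders.
CUTOFF_LIU = 48  # hypothetical positivity threshold in Light Intensity Units

def standalone_call(liu: float, pattern: str | None) -> dict:
    """Automated output before any operator review: classification + pattern."""
    positive = liu >= CUTOFF_LIU
    return {
        "liu": liu,
        "classification": "positive" if positive else "negative",
        "pattern": pattern if positive else None,  # pattern only if positive
    }

print(standalone_call(112.0, "homogeneous"))
print(standalone_call(12.5, None))
```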

    7. Type of Ground Truth Used

    • Expert Consensus / Clinical Diagnosis / Reference Standard:
      • For the Accuracy Study, the "Manual" reading by trained operators using a traditional fluorescence microscope served as the primary reference standard for comparing the digital and automated methods. Additionally, clinical sensitivity and specificity were determined by comparing the results from all three methods (Manual, Digital, NOVA View®) to a "clinical truth" derived from a "cohort of clinically characterized samples." This clinical truth would likely be established through a combination of clinical criteria and other diagnostic tests, representing a form of expert consensus or outcomes data.
      • For the Reproducibility/Repeatability Studies, the "Manual" reading served as the reference standard for evaluating consistency.
      • For the SWT Validation, the "manual endpoint" titer determined by trained operators using traditional microscopy was the reference.

    8. Sample Size for the Training Set

    • The document does not explicitly state the sample size used for training the NOVA View® algorithm. The studies described are performance evaluations (test sets) rather than detailing the algorithm's development or training data.
    • For the Single Well Titer (SWT) function, it states that "The NOVA View® SWT function was established [using] 38 ANA positive samples," which could be considered a form of "calibration" or establishment data for that specific algorithm feature, rather than a general training set for the primary classification.

    9. How the Ground Truth for the Training Set Was Established

    • Since the training set size is not provided, the method for establishing its ground truth is also not detailed in this document.
    • For the SWT function's establishment data (38 ANA positive samples), the text implies that the "software application automatically performs the calculations based on the predetermined dilution curve, the LIU produced by the sample, and the pattern of the ANA." This suggests these 38 samples were used to define or fine-tune this "predetermined dilution curve" and pattern-based calculations, likely referencing expert-determined ANA patterns and traditional titration results for those samples.
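
    One plausible reading of that description, purely as an illustration: if LIU falls roughly log-linearly with each doubling dilution, a per-pattern calibrated slope lets the endpoint titer be projected from the single screening well. All constants below are invented, not the device's values:

```python
# Sketch: a hypothetical single-well titer calculation. Assumes LIU decays
# log-linearly with dilution, so a per-pattern "predetermined dilution curve"
# (slope) projects the endpoint from one screening well. Constants invented.
from math import log2

CUTOFF_LIU = 48                  # hypothetical positivity threshold
SLOPE = {"homogeneous": 0.9,     # hypothetical log2(LIU) lost per
         "speckled": 1.1}        # doubling-dilution step, by pattern

def single_well_titer(liu: float, pattern: str, screen_titer: int = 40) -> int:
    """Project the endpoint titer from the screening-dilution LIU."""
    # Number of doubling steps until LIU falls to the cut-off:
    steps = log2(liu / CUTOFF_LIU) / SLOPE[pattern]
    return screen_titer * 2 ** max(0, round(steps))

print(single_well_titer(768.0, "homogeneous"))  # 640, i.e., a 1:640 endpoint
```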