Result 1 of 2: NOVA View® Automated Fluorescence Microscope with AUTOLoader (22 days; product code PIV)
NOVA View® Automated Fluorescence Microscope is an automated system consisting of a fluorescence microscope and software that acquires, analyzes, stores and displays digital images of stained indirect immunofluorescent slides. It is intended as an aid in the detection and classification of certain antibodies by indirect immunofluorescence technology. The device can only be used with cleared or approved in vitro diagnostic assays that are indicated for use with the device. A trained operator must confirm results generated with the device.
NOVA View is an automated fluorescence microscope that acquires, analyzes, stores and displays digital images of stained indirect immunofluorescent slides.
The NOVA View AUTOLoader is an optional hardware accessory that performs the automated transfer of slide carriers to and from NOVA View, thereby providing a continuous load capability without human interaction.
AUTOLoader hardware components consist of a NOVA View alignment base, a 3-position stack base, 3 slide carrier stacks (labeled "Pending", "Completed", and "Error"), a telescoping arm with rotary gripper, and a 2D barcode scanner station. The AUTOLoader can be connected to up to two NOVA View devices.
Below is a summary of the acceptance criteria and study information for the NOVA View® Automated Fluorescence Microscope with AUTOLoader, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The provided document describes the NOVA View® Automated Fluorescence Microscope with AUTOLoader as an aid in the detection and classification of certain antibodies. It explicitly states that "A trained operator must confirm results generated with the device." This implies that the device is not intended as a standalone diagnostic tool, but rather as one that assists a human expert.
Given this context and the fact that this is a Special 510(k) submission for the addition of an AUTOLoader, the primary focus of the performance evaluation appears to be on demonstrating that the addition of the AUTOLoader does not negatively impact the existing functionality of the NOVA View, and that the AUTOLoader itself performs its mechanical tasks reliably and accurately.
The document does not provide specific numerical acceptance criteria or performance metrics (e.g., sensitivity, specificity, accuracy) for the diagnostic performance of the device (i.e., its ability to correctly detect and classify antibodies). Instead, it focuses on demonstrating that the system functionality is maintained with the addition of the AUTOLoader.
Acceptance Criteria for AUTOLoader Functionality (Inferred):
| Acceptance Criteria Category | Reported Device Performance |
|---|---|
| Software Functionality | New AUTOLoader module interfaces correctly with NOVA View software (version 2.1.4). Regression testing performed to confirm no change in NOVA View functionality. |
| Automated Slide Handling (Loading) | AUTOLoader successfully picks up slide carriers from the "Pending" stack and places them on the NOVA View stage. |
| Automated Barcode Scanning | AUTOLoader captures images of barcodes on slides. |
| Automated Slide Handling (Unloading) | AUTOLoader successfully picks up slide carriers after scanning and transports them to the "Completed" or "Error" stack. |
| Continuous Load Capability | AUTOLoader provides continuous load capability without human interaction for multiple carriers. |
| Integration with NOVA View | AUTOLoader connects to NOVA View and operates as an integrated system. |
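The handling sequence in the table above amounts to a simple state machine. Purely as an illustration (not vendor code), the following Python sketch models the Pending → barcode scan → stage → Completed/Error flow; the carrier names, the `scan_barcodes` stub, and its failure rule are hypothetical.

```python
# Hypothetical model of the AUTOLoader carrier flow described above; the
# names and the scan-failure rule are invented for illustration only.
from collections import deque

def scan_barcodes(carrier: str) -> bool:
    """Stub for the 2D barcode scanner station: True if all slides were read."""
    return not carrier.endswith("?")  # placeholder failure rule for the demo

def run_autoloader(pending):
    """Drain the Pending stack, routing carriers to Completed or Error."""
    completed, errors = [], []
    while pending:                      # continuous load: repeat per carrier
        carrier = pending.popleft()     # pick up from the "Pending" stack
        if not scan_barcodes(carrier):  # barcode capture before imaging
            errors.append(carrier)      # unreadable carriers go to "Error"
            continue
        # carrier is placed on the NOVA View stage and imaged (not modeled)
        completed.append(carrier)       # then transported to "Completed"
    return completed, errors

done, failed = run_autoloader(deque(["carrier-01", "carrier-02?", "carrier-03"]))
print(done, failed)  # ['carrier-01', 'carrier-03'] ['carrier-02?']
```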
Note on Diagnostic Performance: The document explicitly states "Analytical performance characteristics n/A" and "Clinical performance n/a." This indicates that this specific submission for the AUTOLoader addition did not involve new analytical or clinical performance studies related to the diagnostic accuracy of antibody detection. The regulatory submission leverages the established performance of the predicate device (NOVA View without AUTOLoader) for its core diagnostic function.
2. Sample Size Used for the Test Set and Data Provenance
The document does not specify a distinct "test set" in the traditional sense for evaluating diagnostic performance. The studies described are focused on the functionality of the AUTOLoader and the integration of new software.
- Software Testing: "All new functions were tested during software verification, and regression testing has been performed to demonstrate that NOVA View functionality has not changed." The sample size (number of test cases, scenarios, etc.) for this software testing is not provided.
- AUTOLoader Mechanical Testing: "This procedure [automated handling of slide carriers] is automatically repeated with the rest of the carriers that are in the Pending stack." While the number of carriers per stack (up to 12) is mentioned, the total number of carriers or repeated cycles used for testing the AUTOLoader's mechanical reliability is not specified.
- Data Provenance: Not explicitly stated as the primary focus is on system functionality and software verification rather than a clinical dataset.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
Not applicable in the context of the studies described. The "ground truth" here pertains to the correct functioning of the AUTOLoader and software, which is evaluated through verification and validation processes rather than expert clinical consensus on diagnostic outcomes. The device's diagnostic "results" still require confirmation by a "trained operator."
4. Adjudication Method for the Test Set
Not applicable for the described functionality and software verification testing.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study and Human Reader Improvement with vs. Without AI Assistance
No MRMC comparative effectiveness study is mentioned. The device is intended as an "aid" for a "trained operator" who "must confirm results." This inherently suggests a human-in-the-loop system, but a formal study comparing human performance with and without the device's assistance is not part of this specific submission.
6. Standalone (Algorithm Only, Without Human-in-the-Loop) Performance
No standalone performance study is explicitly described. The "Indications for Use" clearly state that "A trained operator must confirm results generated with the device," indicating that it is not intended as a standalone diagnostic device.
7. The Type of Ground Truth Used (Expert Consensus, Pathology, Outcomes Data, etc.)
For the specific performance studies described in this submission (AUTOLoader addition):
- Functional 'Ground Truth': The expected operational behavior of the AUTOLoader (e.g., correctly picking up, barcode scanning, placing carriers) and the established, unchanged functionality of the NOVA View software. This is assessed through engineering and software verification standards rather than clinical ground truth types.
8. The Sample Size for the Training Set
Not applicable. The document does not describe any machine learning or AI components that would require a "training set" in the conventional sense for diagnostic algorithm development. The software update is for controlling the AUTOLoader and preserving existing NOVA View functionality.
9. How the Ground Truth for the Training Set Was Established
Not applicable, as no training set for AI/ML was mentioned.
Result 2 of 2: NOVA View® Automated Fluorescence Microscope (115 days; product code PIV)
NOVA View® Automated Fluorescence Microscope is an automated system consisting of a fluorescence microscope and software that acquires, stores and displays digital images of stained indirect immunofluorescent slides. It is intended as an aid in the detection and classification of certain antibodies by indirect immunofluorescence technology. The device can only be used with cleared or approved in vitro diagnostic assays that are indicated for use with the device. A trained operator must confirm results generated with the device.
NOVA View® is an automated fluorescence microscope. The instrument does not process samples. The instrument acquires digital images of representative areas of indirect immunofluorescent slides.
Hardware components:
- PC and monitor
- Keyboard and mouse
- Microscope
- Microscope control unit
- Slide stage
- LED illumination units
- Handheld LED display unit
- Camera
- Two fans
- Printer (optional)
- UPS (optional) or surge protector
- Handheld barcode scanner (optional)
1. Acceptance Criteria and Reported Device Performance
The device under evaluation is the NOVA View® Automated Fluorescence Microscope and its performance for the NOVA Lite® DAPI ANA Kit, as presented in the accuracy and reproducibility studies. The acceptance criteria are implicitly derived from the comparisons to manual reading (the reference standard) and digital reading by a human operator, with targets often expressed as agreement percentages or consistent classification and pattern recognition. The accuracy study assesses sensitivity and specificity, while the reproducibility study examines agreement within and between sites and operators.
Here's a summary of the reported device performance against these implicit acceptance criteria, focusing on key metrics from the provided text:
| Acceptance Criteria Category | Specific Metric | Acceptance Criteria (Implicit from Context) | Reported Device Performance |
|---|---|---|---|
| Accuracy (Detection) | Agreement between NOVA View® and Manual reading for Positive/Negative classification | High agreement (e.g., >80-90%) for positive and negative classifications, indicating that the NOVA View® system's automated calls align well with human expert interpretation. | Site #1: Positive Agreement 88.3% (82.5-92.7), Negative Agreement 90.4% (86.4-93.5), Total Agreement 89.6% (86.5-92.3). Site #2: Positive Agreement 80.5% (74.2-85.9), Negative Agreement 96.3% (93.4-98.2), Total Agreement 89.8% (86.7-92.4). Site #3: Positive Agreement 86.1% (80.7-90.5), Negative Agreement 87.8% (83.1-91.6), Total Agreement 87.0% (83.6-90.0). |
| Accuracy (Detection) | Sensitivity for specific disease conditions (e.g., SLE) | Device performance (NOVA View® and Digital read) should be comparable to or better than Manual read. | Site #1: Manual 72.0% (SLE), 62.9% (CTD+AIL); Digital 80.0% (SLE), 69.9% (CTD+AIL); NOVA View® 80.0% (SLE), 69.4% (CTD+AIL). Site #2: Manual 70.7% (SLE), 65.6% (CTD+AIL); Digital 73.3% (SLE), 62.9% (CTD+AIL); NOVA View® 72.0% (SLE), 62.9% (CTD+AIL). Site #3: Manual 82.7% (SLE), 71.0% (CTD+AIL); Digital 81.3% (SLE), 69.4% (CTD+AIL); NOVA View® 82.7% (SLE), 72.0% (CTD+AIL). |
| Accuracy (Detection) | Specificity (excluding healthy subjects) | Device performance (NOVA View® and Digital read) should be comparable to or better than Manual read. | Site #1: Manual 74.1%; Digital 72.4%; NOVA View® 75.3%. Site #2: Manual 67.2%; Digital 75.3%; NOVA View® 77.0%. Site #3: Manual 67.2%; Digital 71.3%; NOVA View® 69.0%. |
| Accuracy (Pattern Recognition) | Agreement between NOVA View® and Manual for pattern identification | High agreement (e.g., >70% for definitive patterns), indicating that the automated system can accurately classify patterns as interpreted by human experts. | Accuracy Study: Site #1 76.0%; Site #2 86.3%; Site #3 72.7%. Reproducibility Study: Site #1 78.9%; Site #2 83.3%; Site #3 80.4%. |
| Precision/Reproducibility | Repeatability (internal consistency), Positive/Negative classification | High consistency (e.g., >95%) for samples not near the cut-off. | For samples away from the cut-off, NOVA View® output showed 100% positive or negative classification. For samples near the cut-off, variability was observed. |
| Precision/Reproducibility | Repeatability (internal consistency), pattern consistency | 100% consistency for pattern determination in positive samples. | Pattern determination was consistent for 100% of replicates for positive samples (digital image reading and manual reading). NOVA View® pattern classification was correct for >80% of cases (excluding unrecognized). |
| Precision/Reproducibility | Within-site reproducibility (operator and method agreement) | High total agreement (e.g., >90-95%) between operators and different reading methods within a site. | Site #1: NOVA View® vs Manual 99.2%; Digital vs Manual 99.2%; Digital vs NOVA View® 100.0%. Site #2: NOVA View® vs Manual 96.7%; Digital vs Manual 95.8%; Digital vs NOVA View® 95.8%. Site #3: NOVA View® vs Manual 96.7%; Digital vs Manual 96.7%; Digital vs NOVA View® 98.3%. |
| Precision/Reproducibility | Between-site reproducibility (method agreement across sites) | High overall agreement (e.g., >90-95%) across different sites for all reading methods. | Manual: Site #1 vs #2 90.7%; Site #1 vs #3 85.7%; Site #2 vs #3 87.3%. Digital: Site #1 vs #2 92.0%; Site #1 vs #3 93.1%; Site #2 vs #3 92.0%. NOVA View®: Site #1 vs #2 92.7%; Site #1 vs #3 89.6%; Site #2 vs #3 87.9%. |
| Single Well Titer (SWT) | SWT accuracy compared to Manual and Digital endpoints | High agreement, with estimated titer within ±1 or ±2 dilution steps. | SWT results were within ±2 dilution steps of the manual endpoint for 96% (48/50) of samples and of the digital endpoint for 98% (49/50) of samples in the initial validation. In the clinical study, SWT was within ±2 dilution steps for all 20 samples at all three locations. |
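For orientation, the agreement figures in the table above (e.g., "Positive Agreement 88.3% (82.5-92.7)") are standard 2x2 concordance metrics, with what appear to be 95% confidence intervals in parentheses. The sketch below shows how such values are typically computed from paired reads. The counts are invented examples, not the study data, and the document does not state which interval method was used; a Wilson score interval is assumed here.

```python
# Positive/negative/total percent agreement with a Wilson 95% CI.
# The 2x2 counts below are invented for illustration, NOT the study data.
from math import sqrt

def wilson_ci(successes: int, n: int, z: float = 1.96):
    """Wilson score interval for a binomial proportion (assumed method)."""
    p = successes / n
    center = (p + z * z / (2 * n)) / (1 + z * z / n)
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / (1 + z * z / n)
    return center - half, center + half

def agreement(pos_pos: int, pos_neg: int, neg_pos: int, neg_neg: int):
    """PPA, NPA, and total agreement of a test method against a reference.

    Cell naming: first index = test call, second index = reference call.
    """
    ppa = pos_pos / (pos_pos + neg_pos)    # agreement on reference positives
    npa = neg_neg / (neg_neg + pos_neg)    # agreement on reference negatives
    total = (pos_pos + neg_neg) / (pos_pos + pos_neg + neg_pos + neg_neg)
    return ppa, npa, total

ppa, npa, total = agreement(pos_pos=160, pos_neg=25, neg_pos=21, neg_neg=235)
lo, hi = wilson_ci(successes=160, n=160 + 21)  # CI for the PPA
print(f"PPA {ppa:.1%} ({lo:.1%}-{hi:.1%}), NPA {npa:.1%}, total {total:.1%}")
```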
2. Sample Sizes and Data Provenance
- Test Set (Accuracy Study): 463 clinically characterized samples.
- Data Provenance: The document does not state whether the samples were collected retrospectively or prospectively. Testing was performed at three locations: one internal (Site #1) and two external (Sites #2 and #3). The countries of origin of the data are not explicitly stated, but the mention of "U.S. sites" in special controls (2)(ii)(B) suggests primary relevance to the US context.
- Test Set (Reproducibility Study): 120 samples per location. The same cohort of samples was processed at each location, so this represents 120 unique samples tested at three sites rather than 360 distinct samples.
- Data Provenance: Conducted at Inova Diagnostics (internal; Site #1) and two external sites (Sites #2 and #3).
- Test Set (Repeatability Study 1): 13 samples (3 negative, 10 positive), tested in triplicate across 10 runs (30 data points per sample).
- Test Set (Repeatability Study 2): 22 samples (20 borderline/cut-off, 2 high intensity), tested in triplicate across 10 runs (30 data points per sample).
- Test Set (Repeatability Study 3): 8 samples, tested in triplicate or duplicate across 5 runs (10-15 data points per sample).
- SWT Validation Study 1: 50 ANA positive samples.
- SWT Validation Study 2: 20 ANA positive samples at each of the three locations (60 measurements in total; whether the same samples were used at all sites is not stated).
3. Number of Experts and Qualifications for Ground Truth
- Accuracy Study, Reproducibility Study, and SWT Validation: For "Manual" reading (the reference standard), "trained human operators" performed the interpretations. For "Digital" reading, "trained human operators" interpreted the software-generated images, blinded to automated results.
- Qualifications: The document consistently refers to "trained operators" and "trained human operators." Specific professional qualifications are not provided, but the context implies experienced clinical laboratory personnel proficient in indirect immunofluorescence microscopy.
4. Adjudication Method for the Test Set
- Accuracy Study: Not explicitly described as a formal adjudication. The comparison was a "three-way method comparison of NOVA View® automated software-driven result (NOVA View®) compared to the Digital image reading... by a trained operator who was blinded to the automated result (Digital) and compared to the reference standard of conventional IIF manual microscopy (Manual)." The "Manual" reading served as the key reference. Clinical truth for sensitivity/specificity was determined independently from the three reading methods.
- Reproducibility Study: No formal adjudication process is detailed between the different reading methods or operators. Agreement simply refers to concordance between the specified interpretations. For between-operator agreement, multiple operators at each site interpreted the same digital images.
- SWT Validation Studies: The "manual endpoint" determined by an operator using a traditional microscope served as a primary reference for comparison with the SWT application's endpoint.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Yes, a form of MRMC was conducted for reproducibility. The reproducibility study involved multiple operators (referred to as "Operator #1" and "Operator #2" at each site) interpreting the same digital images and comparing their results.
- Effect Size of Human Readers with AI vs. without AI: The document does not provide a direct "effect size" in terms of how much human readers improve with AI assistance versus without. Instead, it measures the agreement between manual reading (the traditional method, without device assistance) and digital image reading (human interpretation of device-acquired digital images).
- For example, in the Accuracy Study, "Digital vs. Manual" total agreement was: Site #1 (91.4%), Site #2 (92.2%), Site #3 (92.2%). This indicates a high level of agreement between human interpretation of device-acquired digital images and manual microscopy, suggesting that the digital images are comparable to traditional microscopy.
- Furthermore, "Between Operator Agreement" for digital image reading showed very high agreement (e.g., 99.2% for Site #1 Op #1 vs. Site #1 Op #2), indicating consistency among human readers using the digital system.
6. Standalone (Algorithm Only) Performance
- Yes, a standalone performance was done for various aspects.
- The "NOVA View®" results explicitly refer to "results obtained with the NOVA View® Automated Fluorescence Microscope, such as Light Intensity Units (LIU), positive/negative classification and pattern information without operator interpretation." This represents the algorithm's standalone output before human review.
- The accuracy and reproducibility tables compare "NOVA View®" (standalone algorithm) directly against "Manual" reading (reference standard) and "Digital" reading (human-in-the-loop).
7. Type of Ground Truth Used
- Expert Consensus / Clinical Diagnosis / Reference Standard:
- For the Accuracy Study, the "Manual" reading by trained operators using a traditional fluorescence microscope served as the primary reference standard for comparing the digital and automated methods. Additionally, clinical sensitivity and specificity were determined by comparing the results from all three methods (Manual, Digital, NOVA View®) to a "clinical truth" derived from a "cohort of clinically characterized samples." This clinical truth would likely be established through a combination of clinical criteria and other diagnostic tests, representing a form of expert consensus or outcomes data.
- For the Reproducibility/Repeatability Studies, the "Manual" reading served as the reference standard for evaluating consistency.
- For the SWT Validation, the "manual endpoint" titer determined by trained operators using traditional microscopy was the reference; a sketch of the ±2 dilution-step comparison follows below.
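The "±2 dilution steps" criterion used throughout the SWT comparisons can be made concrete: ANA endpoint titers advance in doubling dilutions (e.g., 1:40, 1:80, 1:160, ...), so the distance between two titers is the number of doubling steps separating them. A minimal sketch, assuming a standard doubling series (the document does not list the series used):

```python
# Sketch of the "+/- 2 dilution steps" agreement check for SWT endpoints.
# Assumes doubling dilutions; the actual series is not given in the document.
from math import log2

def step_difference(swt_titer: int, reference_titer: int) -> float:
    """Doubling-dilution steps between two endpoint titers (80 vs 320 -> 2)."""
    return log2(swt_titer / reference_titer)

def within_two_steps(swt_titer: int, reference_titer: int) -> bool:
    """True if the SWT estimate is within +/-2 steps of the reference."""
    return abs(step_difference(swt_titer, reference_titer)) <= 2

print(step_difference(320, 80))    # 2.0 -> two steps apart, still acceptable
print(within_two_steps(1280, 80))  # False -> four steps apart
```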
8. Sample Size for the Training Set
- The document does not explicitly state the sample size used for training the NOVA View® algorithm. The studies described are performance evaluations (test sets) rather than detailing the algorithm's development or training data.
- For the Single Well Titer (SWT) function, it states that "The NOVA View® SWT function was established [using] 38 ANA positive samples," which could be considered a form of "calibration" or establishment data for that specific algorithm feature, rather than a general training set for the primary classification.
9. How the Ground Truth for the Training Set Was Established
- Since the training set size is not provided, the method for establishing its ground truth is also not detailed in this document.
- For the SWT function's establishment data (38 ANA positive samples), the text implies that the "software application automatically performs the calculations based on the predetermined dilution curve, the LIU produced by the sample, and the pattern of the ANA." This suggests these 38 samples were used to define or fine-tune the "predetermined dilution curve" and pattern-based calculations, likely referencing expert-determined ANA patterns and traditional titration results for those samples (an invented sketch of such a curve lookup follows below).
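Reading that description literally suggests a curve-lookup scheme: calibrate the expected LIU at the screening dilution for each endpoint titer, then map a new sample's LIU to the nearest calibrated point. The sketch below is an invented illustration of that idea only; the curve values, the log-scale nearest-neighbor rule, and the function name are hypothetical and are not Inova's algorithm.

```python
# Invented illustration of a "predetermined dilution curve" lookup: given the
# LIU measured at the single screening dilution, estimate the endpoint titer.
# The calibration values and log-scale matching rule are hypothetical.
from math import log2

# Hypothetical calibration: mean screening-dilution LIU for samples whose
# true endpoint titer (by serial dilution) was 1:40 through 1:1280.
CURVE = {40: 55.0, 80: 110.0, 160: 220.0, 320: 440.0, 640: 880.0, 1280: 1760.0}

def estimate_titer(liu: float) -> int:
    """Return the curve titer whose calibrated LIU is nearest in log2 space."""
    return min(CURVE, key=lambda titer: abs(log2(liu) - log2(CURVE[titer])))

print(estimate_titer(250.0))  # -> 160, the closest calibrated point
```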