VIRTUOSO SYSTEM FOR IHC p53 (DO-7)
The Virtuoso system provides automated digital slide creation, management, analysis, and viewing. It is intended for in vitro diagnostic use as an aid to the pathologist in the display, detection, counting, review and classification of tissues and cells of clinical interest based on particular morphology, color, intensity, size, pattern and shape.
The Virtuoso™ System for p53 (DO-7) is for digital read and image analysis applications. This particular Virtuoso system is intended for use as an aid to the pathologist in the detection and semi-quantitative measurement of p53 (DO-7) protein in formalin-fixed, paraffin-embedded normal and neoplastic tissue. This device is an accessory to the Ventana Medical Systems, Inc. CONFIRM™ anti-p53 (DO-7) Mouse Monoclonal Primary Antibody assay. The Ventana Medical Systems, Inc. CONFIRM™ anti-p53 assay is indicated for the assessment of p53 protein where mutations have been linked to tumor proliferation. When used with this assay, the Virtuoso™ System for p53 (DO-7) is indicated for use as an aid in the assessment of p53 status in breast cancer patients (but is not the sole basis for treatment).
The Virtuoso™ System is an instrument-plus-software system designed to assist the qualified pathologist in the consistent assessment of protein expression in immunohistochemically stained histologic sections from formalin-fixed, paraffin-embedded normal and neoplastic tissues. The system consists of a slide scanner (iScan), computer, monitor, keyboard, mouse, image analysis algorithms for specific immunohistochemical markers, and software with a Windows web browser-based user interface. Virtuoso is a web-based, end-to-end, digital pathology software solution that allows pathology laboratories to acquire, manage, view, analyze, share, and report digital images of pathology specimens. Using the Virtuoso software, the pathologist can view digital images, add annotations, make measurements, perform image analysis, and generate reports.
Hardware: The iScan slide scanning device captures digital images of formalin-fixed, paraffin-embedded tissues that are suitable for storage and viewing. The device includes a digital slide scanner, racks for loading glass slides, computer, scanner software, keyboard, mouse and monitor.
Software: The Virtuoso software is designed to complement the routine workflow of a qualified pathologist in the review of immunohistochemically stained histologic slides. It allows the user to select fields of view (FOVs) in the digital image for analysis and provides quantitative data on these FOVs to assist with interpretation. The software makes no independent interpretations of the data and requires competent human intervention for all steps in the analysis process.
Acceptance Criteria and Device Performance Study for Virtuoso™ System for IHC p53 (DO-7)
The information provided describes clinical validation studies for the Virtuoso™ System for IHC p53 (DO-7): first with the Benchmark XT stainer (essentially the predicate device's studies) and then with the Benchmark ULTRA stainer (the subject of this specific submission). The acceptance criteria are implicitly derived from the reported performance in these studies, focusing on agreement between the device's digital read (DR) and image analysis (IA) results and manual microscopic assessment.
1. Table of Acceptance Criteria and Reported Device Performance
Note: The provided document does not explicitly state predefined "acceptance criteria" but rather reports the observed performance and concludes that the system is "safe and effective for its intended use" based on these results. Therefore, the "Acceptance Criteria" below are inferred from the reported performance figures that led to regulatory clearance.
| Performance Metric | Acceptance Criteria (Inferred from reported performance) | Reported Device Performance (Benchmark XT Stainer) | Reported Device Performance (Benchmark ULTRA Stainer) |
|---|---|---|---|
| Digital Read (DR) vs. Manual Method - Overall Agreement | > 80% (based on lowest reported site performance) | Site 1: 93% (87-97% CI); Site 2: 95% (89-98% CI); Site 3: 94% (88-97% CI); Site 4: 82% (73-88% CI) | 88.3% (81.4-92.9% CI) |
| Digital Read (DR) vs. Manual Method - Negative Agreement | > 70% (based on lowest reported site performance) | Site 1: 100% (95-100% CI); Site 2: 93% (86-97% CI); Site 3: 95% (87-98% CI); Site 4: 74% (63-82% CI) | 83.7% (74.5-90.0% CI) |
| Digital Read (DR) vs. Manual Method - Positive Agreement | > 65% (based on lowest reported site performance) | Site 1: 83% (69-91% CI); Site 2: 100% (89-100% CI); Site 3: 93% (81-98% CI); Site 4: 97% (87-100% CI) | 100.0% (89.8-100.0% CI) |
| Image Analysis (IA) vs. Manual Method - Overall Agreement | ≥ 90% (based on lowest reported site performance) | Site 1: 92% (85-95% CI); Site 2: 97% (92-99% CI); Site 3: 91% (84-95% CI); Site 4: 90% (83-95% CI) | 95.0% (89.5-97.7% CI) |
| Image Analysis (IA) vs. Manual Method - Negative Agreement | > 90% (based on lowest reported site performance) | Site 1: 99% (93-100% CI); Site 2: 95% (89-98% CI); Site 3: 95% (87-98% CI); Site 4: 91% (82-96% CI) | 95.3% (88.6-98.2% CI) |
| Image Analysis (IA) vs. Manual Method - Positive Agreement | ≥ 80% (based on lowest reported site performance) | Site 1: 80% (67-89% CI); Site 2: 100% (89-100% CI); Site 3: 83% (69-92% CI); Site 4: 89% (76-96% CI) | 94.1% (80.9-98.4% CI) |
| Intra-Pathologist/Inter-Day Digital Read Agreement | ≥ 90% (based on lowest reported performance) | 90% - 95% | Not applicable (study focused on concordance for ULTRA stainer) |
| Intra-Pathologist/Inter-Day Image Analysis Agreement | ≥ 80% (based on lowest reported performance) | 80% - 93% | Not applicable (study focused on concordance for ULTRA stainer) |
| Inter-Pathologist Digital Read Agreement | ≥ 94% (based on lowest reported performance) | 94% - 99% | Not applicable (study focused on concordance for ULTRA stainer) |
| Inter-Pathologist Image Analysis Agreement | ≥ 94% (based on lowest reported performance) | 94% - 97% | Not applicable (study focused on concordance for ULTRA stainer) |
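For reference, the overall, negative, and positive agreement figures above are standard 2x2 contingency computations, and the reported intervals are consistent with 95% Wilson score confidence intervals (the document does not name the CI method). Below is a minimal sketch: the function names are ours, and the 2x2 cell counts are back-calculated from the reported ULTRA digital-read rates rather than stated in the summary.

```python
import math

def wilson_ci(x: int, n: int, z: float = 1.959964):
    """95% Wilson score confidence interval for the proportion x/n."""
    p = x / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - margin, center + margin

def agreement(tn: int, fp: int, fn: int, tp: int):
    """Overall / negative / positive percent agreement from a 2x2 table
    (reference = manual read, test = digital read)."""
    opa = (tn + tp) / (tn + fp + fn + tp)
    npa = tn / (tn + fp)  # agreement among reference-negative cases
    ppa = tp / (tp + fn)  # agreement among reference-positive cases
    return opa, npa, ppa

# Cell counts back-calculated from the reported ULTRA digital-read rates
# (not stated explicitly in the summary): 72 concordant negatives,
# 14 manual-neg/DR-pos, 0 manual-pos/DR-neg, 34 concordant positives.
opa, npa, ppa = agreement(tn=72, fp=14, fn=0, tp=34)
lo, hi = wilson_ci(106, 120)
print(f"OPA {opa:.1%}, NPA {npa:.1%}, PPA {ppa:.1%}")  # 88.3%, 83.7%, 100.0%
print(f"OPA 95% CI: {lo:.1%}-{hi:.1%}")                # 81.4%-92.9%
```

As a cross-check, 106 concordant cases out of 120 gives 88.3% with a Wilson 95% CI of 81.4%-92.9%, matching the reported ULTRA overall agreement exactly.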
2. Sample Sizes Used for the Test Set and Data Provenance
For predicate device (Benchmark XT stainer):
- Sample Size (Agreement/Concordance):
- Site 1: n = 119
- Site 2: n = 119
- Site 3: n = 117
- Site 4: n = 114 (for Digital Read), n = 105 (for Image Analysis)
- Total across sites: 469 cases for Digital Read and 460 cases for Image Analysis (sums of the per-site counts).
- Sample Size (Reproducibility - Intra-Pathologist/Inter-Day and Inter-Pathologist): The document reports agreement percentages but does not explicitly state the number of cases (n) used for these reproducibility comparisons in the summary table. However, the intra-pathologist confusion matrices imply a smaller, fixed case set: for Digital Read, for example, a Session 1 negative count of 40 (26 + 14) and a Session 1 positive count of 26 (1 + 25) would correspond to 66 cases (see the sketch after this list).
- Data Provenance: The document states "across four sites". No country of origin is specified. The studies are clinical validation studies, suggesting prospective data collection for the validation phase.
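A minimal sketch of that back-calculation, under the assumption that the cited counts are the row totals of a Session 1 x Session 2 confusion matrix (the summary's actual matrix layout is not reproduced here):

```python
# Recovering the implied case count from the intra-pathologist Digital Read
# confusion matrix counts cited above. Layout assumed: rows are Session 1
# calls (Neg, Pos); the individual cell values come from the text.
session1_neg = 26 + 14   # cases called Neg in Session 1
session1_pos = 1 + 25    # cases called Pos in Session 1
n_cases = session1_neg + session1_pos
print(f"implied reproducibility set size: n = {n_cases}")  # n = 66
```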
For new device (Benchmark ULTRA stainer):
- Sample Size (Agreement/Concordance): 120 cases
- Data Provenance: "one pathologist at one site". No country of origin is specified. This was a concordance study.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
For predicate device (Benchmark XT stainer):
- Number of Experts: For the agreement/concordance study, "Each pathologist's Virtuoso digital read results were compared to their manual results," implying that the ground truth was established by the same pathologists who then used the digital system. The reproducibility studies involved three pathologists.
- Qualifications: "qualified pathologist." No specific years of experience or sub-specialty are explicitly mentioned beyond being "qualified."
For new device (Benchmark ULTRA stainer):
- Number of Experts: "one pathologist at one site."
- Qualifications: "qualified pathologist." No specific years of experience or sub-specialty are explicitly mentioned beyond being "qualified."
4. Adjudication Method for the Test Set
The document does not explicitly describe a formal adjudication method (e.g., 2+1 or 3+1). For the agreement studies, the manual microscopic reading by the "qualified pathologist" appears to be the reference standard against which the digital read and image analysis results were compared. In other words, each pathologist's own manual score served as the ground truth for their comparative analysis, rather than an independent adjudicated ground truth established by multiple experts per case.
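For context, here is a minimal sketch of the "2+1" protocol mentioned above, which these studies did not use; the function name and Neg/Pos labels are illustrative:

```python
def adjudicate_2_plus_1(primary_a: str, primary_b: str, tiebreaker: str) -> str:
    """'2+1' adjudication: two primary readers score each case; a third
    reader's call is used only when the primary readers disagree."""
    if primary_a == primary_b:
        return primary_a  # primary readers agree; no adjudication needed
    return tiebreaker     # third reader breaks the tie

# Example: the primary readers disagree, so the third reader's call decides.
print(adjudicate_2_plus_1("Pos", "Neg", "Pos"))  # -> Pos
```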
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Yes, a form of MRMC study was done for reproducibility purposes for the predicate device (Benchmark XT stainer).
- "Inter-Pathologist/Site" study: This evaluated the reproducibility among three pathologists for digital read and image analysis.
- However, this was a reproducibility study among AI-assisted reads, not a direct comparison of human readers with AI vs. without AI assistance to measure an "effect size" of improvement. The primary concordance studies compared each pathologist's digital/IA read to their own manual read.
- Effect Size of human reader improvement: The document does not report an effect size for how much human readers improve with AI assistance versus without it, as a comparative effectiveness study would. Instead, it measures the agreement between the AI-assisted read and the manual read, the reproducibility of AI-assisted reads among multiple pathologists, and the reproducibility over time for a single pathologist.
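To make the reproducibility measure concrete, a pairwise percent-agreement computation among three readers could look like the sketch below; the reads are hypothetical, since the summary reports only the resulting percentages:

```python
from itertools import combinations

# Hypothetical Neg/Pos calls from three pathologists on the same five
# cases; the 510(k) summary reports only the agreement percentages.
reads = {
    "pathologist_1": ["Neg", "Pos", "Neg", "Pos", "Neg"],
    "pathologist_2": ["Neg", "Pos", "Neg", "Pos", "Pos"],
    "pathologist_3": ["Neg", "Pos", "Pos", "Pos", "Neg"],
}

for a, b in combinations(reads, 2):
    matches = sum(x == y for x, y in zip(reads[a], reads[b]))
    print(f"{a} vs {b}: {matches / len(reads[a]):.0%} agreement")
```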
6. Standalone (Algorithm Only) Performance Study
- Yes, a standalone performance assessment was done for Image Analysis (IA). The "Image Analysis vs. Manual Method" sections provide data on the algorithm's performance (standalone, in that it generates the quantitative score) compared to the manual method. While a human selects the fields of view, the scoring itself is algorithmic.
7. Type of Ground Truth Used
- Expert Consensus / Pathology (Manual Read): The ground truth for the predicate device studies and the new device study was established by a "qualified pathologist" performing a "manual method" with a traditional microscope. This is described as the "reference manual method" or "manual score (reference result)."
8. Sample Size for the Training Set
The document does not provide information on the sample size used for the training set for the Virtuoso™ system's image analysis algorithms. The studies described are clinical validation studies using test sets, not details about the algorithm development or training data.
9. How the Ground Truth for the Training Set Was Established
As the document does not provide details on the training set, it does not specify how the ground truth for the training set was established.