510(k) Data Aggregation (248 days)
The Virtuoso system provides automated digital slide creation, management, analysis, and viewing. It is intended for in vitro diagnostic use as an aid to the pathologist in the display, detection, counting, review and classification of tissues and cells of clinical interest based on particular morphology, color, intensity, size, pattern and shape.
The Virtuoso™ System for IHC PR (1E2) is for digital read and image analysis applications. This particular Virtuoso system is intended for use as an aid to the pathologist in the detection and semi-quantitative measurement of progesterone receptor (PR) protein in formalin-fixed, paraffin-embedded normal and neoplastic tissue. This device is an accessory to Ventana Medical Systems, Inc. CONFIRM™ anti-Progesterone Receptor (PR) (1E2) Rabbit Monoclonal Primary Antibody assay. The CONFIRM™ anti-Progesterone Receptor (PR) (1E2) Rabbit Monoclonal Primary Antibody assay is indicated for use as an aid in the assessment of breast cancer patients for whom endocrine treatment is being considered (but is not the sole basis for treatment).
Note: The IHC PR (1E2) Digital Read and Image Analysis applications are adjunctive computer-assisted methodologies for the qualified pathologist in the acquisition and measurement of images from microscope glass slides of breast cancer specimens stained for the presence of PR protein. The pathologist should verify agreement with the Image Analysis software application score. The accuracy of the test results depends on the immunohistochemical staining. It is the responsibility of a qualified pathologist to employ appropriate morphological studies and controls, as specified in the instructions for the CONFIRM™ anti-Progesterone Receptor (PR) (1E2) Rabbit Monoclonal Primary Antibody, to assure the validity of the Virtuoso System for IHC PR Digital Read and Image Analysis scores. The actual correlation of CONFIRM™ anti-PR antibody to clinical outcome has not been established.
The Virtuoso™ System is an instrument-plus-software system designed to assist the qualified pathologist in the consistent assessment of protein expression in immunohistochemically stained histologic sections from formalin-fixed, paraffin-embedded normal and neoplastic tissues. The system consists of a slide scanner (iScan), computer, monitor, keyboard, mouse, an image analysis algorithm for a specific immunohistochemical marker, and software with a Windows web browser-based user interface. Virtuoso is a web-based, end-to-end, digital pathology software solution that allows pathology laboratories to acquire, manage, view, analyze, share, and report digital images of pathology specimens. Using the Virtuoso software, the pathologist can view digital images, add annotations, make measurements, perform image analysis, and generate reports.
The iScan slide scanning device captures digital images of formalin-fixed, paraffin-embedded tissues that are suitable for storage and viewing. The device includes a digital slide scanner, racks for loading glass slides, computer, scanner software, keyboard, mouse and monitor.
The Virtuoso software is designed to complement the routine workflow of a qualified pathologist in the review of immunohistochemically stained histologic slides. It allows the user to select fields of view (FOVs) in the digital image for analysis and provides quantitative data on these FOVs to assist with interpretation. The software makes no independent interpretations of the data and requires competent human intervention for all steps in the analysis process.
Here's an analysis of the acceptance criteria and study details for the VENTANA® Virtuoso™ System for IHC PR (1E2) based on the provided 510(k) summary (K111869):
1. Table of Acceptance Criteria and Reported Device Performance
The 510(k) summary does not explicitly list pre-defined "acceptance criteria" with specific thresholds for overall agreement, reproducibility, and precision. Instead, it reports the "overall agreements," "negative % agreement," and "positive % agreement" along with 95% confidence intervals (CIs) as the key performance metrics of the device compared to manual methods. For reproducibility and precision, it reports percent agreements and %CV.
Below is a table summarizing the reported device performance. Since explicit quantitative acceptance criteria were not stated in the document, only the reported performance is shown.
| Performance Metric Category | Specific Metric | Reported Device Performance and 95% CI (for n cases/FOVs) |
|---|---|---|
| Agreement/Concordance (vs. Manual Method) | Virtuoso Digital Read vs. Manual | Across 3 sites (10%), agreement ranged from 94.0% to 97.4% for all FOVs combined. Site 1 vs. 2: 96.6% (91.5%-98.7%), n=117 FOVs; Site 1 vs. 3: 97.4% (92.7%-99.1%), n=117 FOVs; Site 2 vs. 3: 94.0% (88.2%-97.1%), n=117 FOVs |
| | Inter-Scanner %CV Analyses | Site (Scanner) %CV: 0.00% (mean % positivity 25.93%, n=351 FOVs) |
| | Intra-Scanner/Inter-Day Agreement Rates | Overall percent agreements for three categories (10%) ranged from 97.3% to 98.2% for all FOVs combined. Session 1 vs. 2: 97.4% (92.7%-99.1%), n=117 FOVs; Session 1 vs. 3: 98.2% (93.7%-99.5%), n=111 FOVs; Session 2 vs. 3: 97.3% (92.4%-99.1%), n=111 FOVs |
| | Intra-Scanner/Inter-Day %CV Analyses | Day %CV: 0.00% (mean % positivity 26.014%, n=345 FOVs) |
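The reported metrics can be sanity-checked numerically. As a sketch (the 510(k) summary does not state which interval method was used), a Wilson score interval on 113/117 agreeing FOVs reproduces the reported 96.6% (91.5%-98.7%) Site 1 vs. 2 figure, and the %CV values follow the usual definition of 100 × SD / mean:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

def pct_cv(values):
    """Percent coefficient of variation: 100 * sample SD / mean."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / (len(values) - 1)
    return 100 * math.sqrt(var) / mean

# Site 1 vs. 2 digital-read agreement: 113 of 117 FOVs agree (96.6%).
lo, hi = wilson_ci(113, 117)
print(f"{113/117:.1%} agreement, 95% CI {lo:.1%}-{hi:.1%}")
# A %CV of 0.00% corresponds to identical % positivity across repeats.
print(pct_cv([25.93, 25.93, 25.93]))
```

The match between the Wilson bounds and the reported CI is consistent with, but does not prove, that interval choice; Clopper-Pearson intervals would give slightly wider bounds.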
2. Sample Sizes Used for the Test Set and Data Provenance
- Test Set Sample Size:
  - Agreement/Concordance & Reproducibility Studies: For the primary comparisons against manual methods, "n" values are reported as the number of cases per site: Site 1 (n=112), Site 2 (n=114 or 115), and Site 3 (n=114 or 116). These figures refer to the number of patient cases evaluated.
  - Scanner Precision Study: 40 clinical cases were used. For the agreement rates, the tables denote n=117 and n=111, which likely represent the total number of fields of view (FOVs) analyzed (40 cases × 3 FOVs per case, with slight variations potentially due to unevaluable FOVs). For the %CV analyses, n=351 (inter-scanner) and n=345 (intra-scanner) represent the number of evaluable FOVs.
- Data Provenance: The document does not explicitly state the country of origin for the data. However, given the submitting company (Ventana Digital Pathology, Sunnyvale, CA) and the FDA submission for the US market, the data was most likely collected in the US. The study is described as a "clinical validation," implying it was derived from patient samples. The document does not state whether the study was retrospective or prospective; clinical validation studies of this kind often use retrospectively collected samples, and the phrase "evaluated overall system performance" suggests a designed study rather than real-time prospective use.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Number of Experts: For the agreement/concordance and inter-pathologist reproducibility studies, three pathologists were involved in establishing the ground truth (manual read).
- Qualifications of Experts: The document states the device is "designed to assist the qualified pathologist" and requires "competent human intervention." However, it does not explicitly state the specific qualifications or years of experience for the pathologists who participated in the studies.
4. Adjudication Method for the Test Set
- The document states that the "reference manual method (with a traditional microscope)" served as the ground truth. Each pathologist's Virtuoso digital read results were compared to their own manual results, and the image analysis results were also compared to the pathologist's manual results.
- For inter-pathologist comparisons, the three manual readings across the three pathologists were compared to each other.
- The document does not describe a formal adjudication method (e.g., 2+1, 3+1 consensus) to create a single 'expert ground truth' label. Instead, it treats each pathologist's manual read as the "true score" for purposes of comparison with the digital read and image analysis, and then separately assesses inter-pathologist agreement.
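The paired comparisons described above (each reader's digital read against their own manual read, and reader against reader) reduce to overall percent agreement on matched categorical scores. A minimal sketch, using hypothetical category labels since the summary does not list the exact cutoffs:

```python
def percent_agreement(reads_a, reads_b):
    """Overall percent agreement between two paired sets of categorical scores."""
    if len(reads_a) != len(reads_b):
        raise ValueError("reads must be paired one-to-one")
    matches = sum(a == b for a, b in zip(reads_a, reads_b))
    return 100.0 * matches / len(reads_a)

# Hypothetical PR scoring categories for six FOVs read two ways
# (e.g., one pathologist's manual read vs. their own digital read).
manual  = ["<10%", ">=10%", ">=10%", "<10%", ">=10%", "<10%"]
digital = ["<10%", ">=10%", "<10%",  "<10%", ">=10%", "<10%"]
print(percent_agreement(manual, digital))  # 5 of 6 FOVs agree
```

Because each reader serves as their own reference, this statistic measures concordance with the modality change, not diagnostic accuracy against an independent truth.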
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
- Yes, a form of multi-reader, multi-case study was performed, though it assessed agreement and reproducibility rather than comparative effectiveness; it did not measure whether readers perform better with AI assistance than without it.
- The study involved multiple pathologists (three) and multiple cases (over 100 for agreement studies, 40 for scanner precision).
- Effect Size of Human Reader Improvement with AI vs. Without AI Assistance: The document does not report an effect size for reader improvement with AI assistance. Instead, it establishes agreement between the automated system's outputs (digital read and image analysis) and each pathologist's manual interpretation, and it assesses inter- and intra-pathologist reproducibility for both manual and digital reads. The system is described as an "aid to the pathologist," and the pathologist has the "choice of accepting the result or overriding with his/her own score." The clinical study therefore validates agreement of the system's outputs with human pathologists, supporting its role as an aid, but does not quantify any improvement in diagnostic accuracy or efficiency when pathologists use it versus when they do not.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
- Yes, the "Virtuoso Image Analysis" component of the system demonstrates standalone algorithm performance. The agreement of Virtuoso Image Analysis vs. Manual Method was assessed (reported as 92-97% overall agreement across sites).
- Additionally, the "Scanner Precision" study specifically focused on the image analysis application, stating, "Limiting the study to image analysis only ensured that only scanner precision was under evaluation, as all other factors were kept constant." This further supports that standalone algorithm performance was evaluated for the image analysis module.
7. The Type of Ground Truth Used
- The primary ground truth used for the agreement and reproducibility studies was the expert manual read: each qualified pathologist's interpretation of the IHC PR slides using a traditional microscope served as the reference for comparison with that pathologist's own digital read and with the system's image analysis. No pooled consensus ground truth was created.
8. The Sample Size for the Training Set
- The document does not report the sample size for the training set used to develop the Virtuoso Image Analysis software. The 510(k) summary focuses on the clinical validation (test set) of the final device.
9. How the Ground Truth for the Training Set Was Established
- The document does not provide information on how the ground truth for the training set was established. This detail is typically omitted in 510(k) summaries, which focus on the post-development validation rather than the development process itself.