
510(k) Data Aggregation

    K Number: K142965
    Date Cleared: 2015-07-16 (275 days)
    Product Code:
    Regulation Number: 864.1860
    Reference Devices: K122143, K111869

    Intended Use

    The Virtuoso™ system provides automated digital slide creation, management, analysis, and viewing. It is intended for in vitro diagnostic use as an aid to the pathologist in the display, detection, counting, review and classification of tissues and cells of clinical interest based on particular morphology, color, intensity, size, pattern and shape.

    The Virtuoso™ System for IHC PR (1E2) using the VENTANA iScan HT is for the digital read application. This particular Virtuoso system is intended for use as an aid to the pathologist in the qualitative detection of progesterone receptor (PR) protein in formalin-fixed, paraffin-embedded normal and neoplastic tissue. This device is an accessory to Ventana Medical Systems, Inc. CONFIRM anti-Progesterone Receptor (PR) (1E2) Rabbit Monoclonal Primary Antibody assay. The CONFIRM™ anti-Progesterone Receptor (PR) (1E2) Rabbit Monoclonal Primary Antibody assay is indicated for use as an aid in the assessment of breast cancer patients for whom endocrine treatment is being considered (but is not the sole basis for treatment).

    Note: The IHC PR (1E2) Digital Read application is an adjunctive computer-assisted methodology for the qualified pathologist in the acquisition and interpretation of images from microscope glass slides of breast cancer specimens stained for the presence of PR protein. The accuracy of the test results depends on the quality of the immunohistochemical staining. It is the responsibility of a qualified pathologist to employ appropriate morphological studies and controls as specified in the instructions for the CONFIRM™ anti-Progesterone Receptor (PR) (1E2) Rabbit Monoclonal Primary Antibody used to assure the validity of the Virtuoso System for IHC PR Digital Read scores. The actual correlation of CONFIRM™ anti-PR antibody to clinical outcome has not been established. This device is intended for IHC slides stained on the BenchMark ULTRA stainers. For prescription use only.

    Device Description

    The Virtuoso™ System is an instrument-plus-software system designed to assist the qualified pathologist in the consistent assessment of protein expression in immunohistochemically stained histologic sections from formalin-fixed, paraffin-embedded normal and neoplastic tissues. The system consists of a slide scanner, computer, monitor, keyboard, and mouse for specific immunohistochemical markers, and software with a Windows web browser-based user interface. Virtuoso is a web-based, end-to-end, digital pathology software solution that allows pathology laboratories to acquire, manage, view, analyze, share, and report digital images of pathology specimens. Using the Virtuoso software, the pathologist can view digital images, add annotations and generate reports.

    Hardware: The iScan HT scanning device captures digital images of formalin-fixed, paraffin-embedded tissues that are suitable for storage and viewing. The device includes a digital slide scanner, a carousel for loading glass slides, computer, scanner software, keyboard, mouse and monitor.

    Software: The Virtuoso software is designed to complement the routine workflow of a qualified pathologist in the review of immunohistochemically stained histologic slides. The software makes no independent interpretations of the data and requires competent human intervention for all steps in the process.

    AI/ML Overview

    Acceptance Criteria and Device Performance for Virtuoso™ System for IHC PR (1E2) using the VENTANA iScan HT

    This section summarizes the acceptance criteria and study findings for the Virtuoso™ System for IHC PR (1E2) using the VENTANA iScan HT, based on the provided FDA 510(k) summary (K142965).

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria are implicitly derived from the reported agreement rates. For the purposes of this table, the reported agreement rates are presented as the measures used to support a finding of substantial equivalence.

    Digital Read (DR) vs Manual Read (MR) Agreement, Overall Percent Agreement (OPA)
        Acceptance criterion: high agreement with manual microscopic evaluation (established by predicate devices)
        Study type: Method Comparison
        Site 1: 98.7% (95.5-99.7); Site 2: 88.9% (83.1-92.9); Site 3: 96.2% (91.9-98.2)

    DR vs MR Agreement, Positive Percent Agreement (PPA)
        Acceptance criterion: high agreement with manual microscopic evaluation
        Study type: Method Comparison
        Site 1: 99.1% (94.9-99.8); Site 2: 86.7% (77.8-92.4); Site 3: 91.9% (83.4-96.2)

    DR vs MR Agreement, Negative Percent Agreement (NPA)
        Acceptance criterion: high agreement with manual microscopic evaluation
        Study type: Method Comparison
        Site 1: 98.0% (89.7-99.7); Site 2: 91.1% (82.8-95.6); Site 3: 100.0% (95.5-100.0)

    Intra-Pathologist/Inter-Day Reproducibility (DR, OPA)
        Acceptance criterion: high agreement across multiple reads by the same pathologist
        Study type: Reproducibility
        Read 1 vs Read 2: 97.4% (86.8-99.5); Read 1 vs Read 3: 89.7% (76.4-95.9); Read 2 vs Read 3: 92.3% (79.7-97.3)

    Inter-Pathologist Reproducibility (DR, OPA)
        Acceptance criterion: high agreement between different pathologists using DR
        Study type: Reproducibility
        Site 1 vs Site 2: 81.1% (74.3-86.5); Site 1 vs Site 3: 77.4% (70.2-83.3); Site 2 vs Site 3: 89.9% (84.2-93.7)

    Inter-Scanner Precision (OPA)
        Acceptance criterion: high agreement between different scanners
        Study type: Scanner Precision
        Site 1 vs Site 2: 90.0% (86.5-92.7); Site 1 vs Site 3: 93.6% (90.6-95.7); Site 2 vs Site 3: 90.3% (86.8-92.9)

    Intra-Scanner/Inter-Day Precision (OPA)
        Acceptance criterion: high agreement on the same scanner across different days
        Study type: Scanner Precision
        Day 1 vs Day 2: 92.2% (89.0-94.6); Day 1 vs Day 3: 90.8% (87.4-93.4); Day 2 vs Day 3: 90.8% (87.4-93.4)

    Note: The document states that the test system was shown to be "as safe and effective (therefore substantially equivalent) as the predicate devices". The provided agreement rates are the key metrics demonstrating this substantial equivalence. Specific predefined numerical acceptance criteria are not explicitly stated as distinct thresholds in the provided text but are implicitly met by achieving high concordance with manual methods and good reproducibility, consistent with existing legally marketed devices.
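    The agreement statistics reported above follow the standard 2x2 concordance definitions of OPA, PPA, and NPA. As an illustrative sketch only (this code is not from the submission, and the confidence-interval method actually used is not stated in the summary; the Wilson score interval is assumed here), the metrics and their 95% CIs could be computed as:

    ```python
    from math import sqrt

    def wilson_ci(k, n, z=1.96):
        """Approximate 95% Wilson score interval for a proportion k/n."""
        if n == 0:
            return (0.0, 0.0)
        p = k / n
        denom = 1 + z**2 / n
        center = (p + z**2 / (2 * n)) / denom
        half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
        return (center - half, center + half)

    def agreement_metrics(a, b, c, d):
        """Agreement of a test read (e.g., DR) against a reference read (e.g., MR).

        a = both positive, b = test+/ref-, c = test-/ref+, d = both negative.
        Returns (OPA, PPA, NPA), each as (point estimate, (ci_lo, ci_hi)).
        """
        n = a + b + c + d
        opa = ((a + d) / n, wilson_ci(a + d, n))      # overall agreement
        ppa = (a / (a + c), wilson_ci(a, a + c))      # agreement on reference positives
        npa = (d / (b + d), wilson_ci(d, b + d))      # agreement on reference negatives
        return opa, ppa, npa
    ```

    For example, `agreement_metrics(90, 2, 3, 5)` yields an OPA of 95% with a Wilson interval around it; the site-level figures in the table would each come from one such 2x2 table.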


    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Method Comparison Test Set:
      • Site 1: 159 cases
      • Site 2: 162 cases
      • Site 3: 156 cases
      • Overall: 477 cases (sum of evaluable cases from the three sites)
    • Sample Size for Reproducibility Test Set: 39 cases (for intra-pathologist reproducibility)
    • Sample Size for Scanner Precision Test Set: 40 cases
    • Data Provenance: The document does not explicitly state the country of origin. However, the study involved three different "sites", implying a multi-center study possibly within the US, but this is not explicitly confirmed. The study appears to be retrospective in nature, using existing formalin-fixed, paraffin-embedded tissue blocks that were then stained and digitally scanned.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    • Number of Experts: 3 pathologists (referred to as "Readers" or "Investigators"). The method comparison study involved each of these 3 pathologists providing both manual and digital reads. The reproducibility studies also involved these 3 pathologists.
    • Qualifications of Experts: The document refers to them as "qualified pathologists" or "investigators." Specific details regarding their years of experience or sub-specialty training (e.g., breast pathology) are not provided.

    4. Adjudication Method for the Test Set

    • Method Comparison: The ground truth for the device's performance was established using the manual read (MR) by the same pathologist as the reference for the digital read (DR) evaluation. Each pathologist's DR results were compared to their own MR results. There is no explicit mention of an independent adjudication committee or consensus among multiple experts for the ground truth itself.
    • Reproducibility: For intra-pathologist reproducibility, the comparison was between the same pathologist across three different reading sessions. For inter-pathologist reproducibility, the comparison was between pairs of pathologists.
    • Scanner Precision: The comparison was between clinical scoring categories agreed upon at different sites/days for the same FOVs.
    • It appears no formal adjudication scheme (e.g., a 2+1 consensus read) was employed to establish a single, definitive ground truth independent of the readers whose digital reads were being assessed. Instead, agreement between the digital read and the same pathologist's manual read served as the primary performance indicator, and reproducibility between pathologists (both manual and digital) was also evaluated.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and Effect Size

    • Yes, a form of MRMC study was implicitly done. The method comparison study involved 3 pathologists (multi-reader) evaluating multiple cases (multi-case), where their digital reads were compared to their manual reads. The reproducibility studies also involved multiple readers and multiple cases.
    • Effect Size (Human Reader Improvement with AI vs. without AI): The document does not report an effect size for human readers improving with AI assistance. The study design is primarily focused on demonstrating the substantial equivalence of the digital read system to the manual read, rather than measuring the improvement in human performance when assisted by the AI. The Virtuoso™ System is described as an "aid to the pathologist" and an "adjunctive computer-assisted methodology," but the study evaluates its standalone performance against manual reading, and its reproducibility, not its comparative effectiveness in improving reader accuracy or efficiency.

    6. If a Standalone (Algorithm Only Without Human-in-the-Loop Performance) Was Done

    • No, a standalone (algorithm only) performance study was not done or reported. The device is described as an "aid to the pathologist" and the software explicitly "makes no independent interpretations of the data and requires competent human intervention for all steps in the process."
    • The Digital Read (DR) method involves the pathologist reviewing the digital images. The performance metrics (OPA, PPA, NPA) are based on the pathologist's interpretation using the digital system, not an automated algorithm's output directly.

    7. The Type of Ground Truth Used

    • The primary ground truth used for the method comparison study was the expert's own manual microscopic assessment of the formalin-fixed, paraffin-embedded tissue slides, using a traditional microscope. This acts as the "reference manual method."
    • For the reproducibility and precision studies, the ground truth for comparison was the clinical score assigned by pathologists (either their own previous scores for intra-reader, or other pathologists' scores for inter-reader/inter-scanner).
    • The ground truth categories were defined as:
      • Negative: PR score of 0 to 0.99% positive staining
      • Positive: PR score of ≥1% positive staining
      • For scanner precision: 0 – 0.99%, 1–10%, and ≥ 10% positive staining.
    • There is no mention of pathology or outcomes data being used as an independent, external ground truth beyond expert consensus.
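    The clinical cut-offs above can be captured in a small helper. This is a hypothetical sketch, not code from the submission; in particular, because the source's three-bin ranges (1-10% and >=10%) overlap at exactly 10%, assigning a score of exactly 10% to the >=10% bin is an assumption:

    ```python
    def pr_category(percent_positive: float) -> str:
        """Two-bin clinical category used in the method-comparison study."""
        return "Positive" if percent_positive >= 1.0 else "Negative"

    def pr_precision_bin(percent_positive: float) -> str:
        """Three-bin category used in the scanner-precision analyses.

        Assumption: a score of exactly 10% falls in the >=10% bin, since the
        source ranges (1-10% and >=10%) overlap at that boundary.
        """
        if percent_positive < 1.0:
            return "0-0.99%"
        if percent_positive < 10.0:
            return "1-10%"
        return ">=10%"
    ```

    Under this sketch, a PR score of 0.5% is Negative, 1.0% is Positive, and the precision analyses would compare the three-bin labels across sites, scanners, or days.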

    8. The Sample Size for the Training Set

    • The document does not provide information on the sample size used for the training set for the Virtuoso™ System's software. The study focuses on the clinical validation of the device, implying that the algorithm development (training) phase was completed prior to these validation studies.

    9. How the Ground Truth for the Training Set Was Established

    • The document does not provide information on how the ground truth for the training set was established. This information is typically part of the device development process and is not always included in the 510(k) summary for validation studies.