510(k) Data Aggregation (344 days)
OMNYX, LLC
The Omnyx Manual Read of the Digital HER2 Application on the Omnyx IDP System is intended for in vitro diagnostic use as an aid to pathology professionals for creating, storing, annotating, measuring, and viewing digital Whole Slide Images (WSI) from formalin-fixed, paraffin-embedded (FFPE) tissue sections stained with the Dako HercepTest™.
The Omnyx Manual Read of the Digital HER2 Application on the Omnyx IDP System is intended for use as an aid to the pathologist in the detection and semi-quantitative measurement of HER2/neu (c-erbB-2) in digital images of FFPE breast cancer tissue immunohistochemically stained with the Dako HercepTest™ and viewed on a computer monitor.
The Dako HercepTest™ is indicated for use as an aid in the assessment of breast cancer patients for whom HERCEPTIN® (Trastuzumab) treatment is being considered.
The Omnyx Manual Read of the Digital HER2 Application on the Omnyx IDP System is intended to aid pathology professionals in creating, storing, annotating, measuring, and viewing digital Whole Slide Images (WSI) from formalin-fixed, paraffin-embedded (FFPE) tissue sections stained with the Dako HercepTest™.
The system is composed of the following components:
- VL4 Scanner: A hardware device that captures and compresses bright field images of tissue samples.
- Data and Workflow Infrastructure: A set of networked applications that enables case data entry; acquisition, indexing, storage, and acceptance of digital images; workflow management; and retrieval of case and image data.
- Histology Workstation: The application that permits the histologist to review or enter case data and check the quality of scanned images.
- Pathology Workstation: The application that allows the pathologist to retrieve case data and review and annotate slide images.
Hardware:
The Omnyx™ VL4 scanner is an automated imaging system that can be loaded with up to 4 slides at a time. The VL4 Scanner outputs its images and metadata to the Omnyx Digital Archive, which receives and stores the images and data.
Software:
The Omnyx software is composed of two parts:
- The VL4 scanner software, which performs tissue identification, scan planning, focusing, image acquisition, stitching, and compression of digital slide images and sends them to the Digital Archive.
- The DPS software, which manages the Histology and Pathology Workstation functions, the image viewer, workflow service, database, interface engine, APLIS service, digital archive, image store, and the administrator client application.
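The data flow described above is linear: the scanner software produces compressed WSIs plus metadata, the Digital Archive persists them, and the workstation applications retrieve them for review. As an illustration only, here is a minimal Python sketch of that flow; every class and method name is hypothetical and none of them comes from the actual Omnyx software.

```python
from dataclasses import dataclass, field

# Hypothetical stand-ins for the components named in the 510(k) summary;
# none of these names come from the real Omnyx software.

@dataclass
class WholeSlideImage:
    slide_id: str
    pixels: bytes   # stitched, compressed bright-field image
    metadata: dict  # case data, stain (e.g., "Dako HercepTest"), scan settings

@dataclass
class DigitalArchive:
    """Receives and stores images and metadata output by the VL4 scanner."""
    store: dict = field(default_factory=dict)

    def accept(self, wsi: WholeSlideImage) -> None:
        self.store[wsi.slide_id] = wsi

    def retrieve(self, slide_id: str) -> WholeSlideImage:
        return self.store[slide_id]

def scan_slide(slide_id: str) -> WholeSlideImage:
    # The real scanner performs tissue identification, scan planning,
    # focusing, acquisition, stitching, and compression; this stub only
    # models the output handed off to the archive.
    return WholeSlideImage(slide_id, pixels=b"", metadata={"stain": "Dako HercepTest"})

# Scanner -> Digital Archive -> Pathology Workstation retrieval
archive = DigitalArchive()
archive.accept(scan_slide("case-001-slide-01"))
wsi = archive.retrieve("case-001-slide-01")
```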
Here's a breakdown, based on the provided text, of the acceptance criteria and the studies used to demonstrate that the Omnyx Manual Read of the Digital HER2 Application meets them:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are implicitly defined by the clinical comparison and precision/reproducibility studies, which aim to demonstrate non-inferiority to manual microscopy and reliable performance. The studies evaluate agreement between modalities (manual vs. digital) and within the same modality across different readers, days, and scanners; a computational sketch of these agreement metrics follows the table.
| Category | Acceptance Criteria (Implicit) | Reported Device Performance (Summary) |
|---|---|---|
| Inter-Reader/Intra-Modality (MM) | High percent agreement between pathologists reading on conventional manual microscopes. | Trichotomous: agreements ranged from 80.5% (74%-85%) to 88.0% (83%-92%). Binary: overall agreements ranged from 86.2% (81%-90%) to 93.5% (89%-96%); APA and ANA were similarly high. |
| Inter-Reader/Intra-Modality (M-WSI) | High percent agreement between pathologists reading on the Omnyx IDP system (digital WSIs). | Trichotomous: agreements ranged from 64.4% (57%-71%) to 87.2% (82%-91%). Binary: overall agreements ranged from 73.9% (67%-80%) to 89.9% (85%-93%); APA and ANA were similarly high. (One pairing showed slightly lower agreement than MM.) |
| Inter-Modality/Intra-Reader (MM vs. M-WSI) | High percent agreement for individual pathologists when comparing their manual reads to their digital reads, demonstrating that the digital system does not introduce significant reading discrepancies. | Trichotomous: not reported as a single agreement metric; individual pathologist data are provided, with implicitly high agreement claimed. Binary: overall percent agreements ranged from 89.4% (79%-89%) to 93.0% (88%-96%); NPA and PPA were similarly high. |
| Intra-Reader/Inter-Day (M-WSI) | High precision/reproducibility for individual pathologists when reading the same digital slides on different days. | Trichotomous: agreements ranged from 62.5% (47%-76%) to 97.5% (87%-100%) across three pathologists and three reads; Pathologist 1 scored lower (62.5%, 77.5%, 85.0%), while Pathologists 2 and 3 scored above 90%. Binary: overall agreements ranged from 70.0% (55%-82%) to 100.0% (91%-100%); NPA and PPA were similarly high and generally consistent with the trichotomous results. |
| Inter-Scanner/Intra-Reader (ROIs) | High agreement between ROIs sourced from different scanners when read by the same pathologist, indicating scanner consistency. | Trichotomous: agreements ranged from 88.8% (80%-94%) to 95.0% (88%-98%) for pairwise scanner comparisons, indicating "very high reproducibility among scanners." |
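For orientation, the sketch below shows how reads on the two scales in the table can be compared and how an overall percent agreement with a 95% Wilson score confidence interval (the "x% (y%-z%)" figures above) can be computed. The grouping of HercepTest scores into trichotomous categories (0/1+ negative, 2+ equivocal, 3+ positive) and the binary cut at 3+ are assumptions based on common HER2 scoring conventions, not definitions taken from the submission.

```python
from math import sqrt

def trichotomous(score: str) -> str:
    # Assumed HercepTest grouping: 0/1+ negative, 2+ equivocal, 3+ positive.
    return {"0": "negative", "1+": "negative", "2+": "equivocal", "3+": "positive"}[score]

def binary(score: str) -> str:
    # One common dichotomization (an assumption here): 3+ positive vs. all else.
    return "positive" if score == "3+" else "negative"

def overall_agreement(reads_a, reads_b, mapping) -> float:
    # Fraction of cases where two reads fall in the same mapped category.
    pairs = list(zip(reads_a, reads_b))
    return sum(mapping(a) == mapping(b) for a, b in pairs) / len(pairs)

def wilson_ci(p: float, n: int, z: float = 1.96) -> tuple[float, float]:
    # 95% Wilson score interval for a proportion p observed over n cases.
    center = (p + z * z / (2 * n)) / (1 + z * z / n)
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / (1 + z * z / n)
    return center - half, center + half

# Toy example: two readers scoring the same six slides.
reader1 = ["3+", "2+", "0", "1+", "3+", "2+"]
reader2 = ["3+", "1+", "0", "2+", "3+", "2+"]
p = overall_agreement(reader1, reader2, trichotomous)
print(p, wilson_ci(p, len(reader1)))
```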
2. Sample Size Used for the Test Set and Data Provenance
- Clinical Comparison (MM vs. M-WSI):
- Sample Size: 200 breast cancer cases (samples stained with Dako HercepTest™ and controls).
- Data Provenance: Not explicitly stated, but the study compared reads from conventional manual microscopes against reads on the Omnyx IDP system, suggesting it was based on existing clinical samples. Neither the retrospective or prospective nature of the data nor the country of origin is specified.
- Precision & Reproducibility (Intra-Reader/Inter-Day):
- Sample Size: 40 HercepTest™ stained slides.
- Data Provenance: Not explicitly stated (retrospective/prospective, country of origin).
- Precision & Reproducibility (Inter-Scanner/Intra-Reader):
- Sample Size: 80 regions of interest (ROIs) extracted from 40 HercepTest™ slides, with an even distribution of score categories.
- Data Provenance: The three scanners were located in "three different laboratory locations within GE Healthcare and operated by three independent operators." This indicates multi-site internal data.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
- Clinical Comparison (MM vs. M-WSI):
- Number of Experts: Four (4) pathologists.
- Qualifications: Not explicitly stated beyond "pathologists."
- Ground Truth: For the inter-reader/intra-modality analysis, the comparisons are between pathologists, so no single "ground truth" is established by an expert panel; rather, concordance between experts is assessed. For the inter-modality/intra-reader comparison, the glass slide read by each pathologist serves as their imperfect reference for their digital read.
- Precision & Reproducibility (Intra-Reader/Inter-Day):
- Number of Experts: Three (3) pathologists.
- Qualifications: Not explicitly stated beyond "pathologist."
- Ground Truth: Not applicable in this context; the study assesses a single pathologist's reproducibility over time, not against an external "ground truth."
- Precision & Reproducibility (Inter-Scanner/Intra-Reader):
- Number of Experts: A single pathologist.
- Qualifications: Not explicitly stated beyond "a single pathologist."
- Ground Truth: The manual scores by this single pathologist served as the reference for comparing ROIs from different scanners.
4. Adjudication Method for the Test Set
- Clinical Comparison (MM vs. M-WSI): No formal adjudication method is mentioned for establishing a single "ground truth" across readers. The study focuses on pairwise agreement between pathologists and agreement of an individual pathologist across modalities. For the binary agreement calculations, "neither pathologist can be considered a reference in each pairwise reader comparison," so Average Negative Agreement (ANA) and Average Positive Agreement (APA) are used instead of traditional sensitivity/specificity against a single ground truth (see the sketch after this list).
- Precision & Reproducibility (Intra-Reader/Inter-Day): Not applicable; this study assessed intra-reader variability over time.
- Precision & Reproducibility (Inter-Scanner/Intra-Reader): A single pathologist established the scores for the ROIs, effectively acting as the adjudicator/reference for scanner comparisons.
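Because neither pathologist in a pairwise comparison is treated as the reference, agreement is summarized with the proportions of specific agreement: APA counts each concordant positive pair twice (once per reader) against all positive calls, and ANA does the same for negative calls. A minimal sketch, assuming binary reads and the standard Fleiss-style definitions:

```python
from collections import Counter

def specific_agreement(reads_a, reads_b):
    """Average Positive/Negative Agreement for one pairwise reader comparison.

    With a = both positive, d = both negative, b + c = discordant pairs:
        APA = 2a / (2a + b + c)
        ANA = 2d / (2d + b + c)
    Neither reader is treated as the reference (unlike sensitivity/specificity).
    """
    counts = Counter(zip(reads_a, reads_b))
    a = counts[("pos", "pos")]
    d = counts[("neg", "neg")]
    bc = counts[("pos", "neg")] + counts[("neg", "pos")]
    apa = 2 * a / (2 * a + bc)
    ana = 2 * d / (2 * d + bc)
    return apa, ana

# Toy example: one discordant case out of six.
r1 = ["pos", "pos", "neg", "neg", "pos", "neg"]
r2 = ["pos", "neg", "neg", "neg", "pos", "neg"]
print(specific_agreement(r1, r2))  # -> (0.8, 0.857...)
```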
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and If So, the Effect Size of Human Reader Improvement with AI vs. without AI Assistance
- No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not performed in the sense of AI assistance improving human reader performance.
- The study design focused on demonstrating that the digital platform (Omnyx IDP System with manual reading) is substantially equivalent to manual microscopy. It evaluates the concordance between pathologists using traditional manual methods and pathologists using the digital system.
- The device described is the "Omnyx Manual Read of the Digital HER2 Application," implying no AI algorithm for automated scoring or triage that would "assist" the human reader. The pathologist still performs the interpretation manually, but on digital images rather than glass slides.
6. If a Standalone Study (i.e., Algorithm-Only Performance, Without a Human in the Loop) Was Done
- No. The device is explicitly for "Manual Read of the Digital HER2 Application." Its intended use states it is "an aid to pathology professionals" and "an aid to the pathologist in the detection and semi-quantitative measurement of HER2/neu...viewed on a computer monitor." This indicates it's a tool for manual interpretation of digital images, not an automated algorithm for standalone performance.
7. The Type of Ground Truth Used
- Clinical Comparison (MM vs. M-WSI):
- For inter-reader/intra-modality comparisons, concordance between pathologists serves as the metric, not an external "ground truth."
- For inter-modality/intra-reader comparisons ("MM vs. M-WSI"), the glass slide read by each pathologist was used as their respective imperfect reference. This is explicitly stated, indicating that it's considered an imperfect reference rather than a definitive, independently verified ground truth.
- Precision & Reproducibility (Inter-Scanner/Intra-Reader): The 80 ROIs were "manually scored by a single pathologist based on the Dako HercepTest™ scoring guidelines." This single pathologist's score serves as the ground truth for evaluating inter-scanner agreement.
8. The Sample Size for the Training Set
- The document does not explicitly mention a "training set" or "validation set" in the context of an AI algorithm learning to interpret images.
- Since this device is a "Manual Read" application that facilitates human interpretation of digital slides, rather than an AI algorithm for automated diagnosis, the concept of a training set as typically understood for AI models is not applicable and is not discussed. The studies describe performance-evaluation sets for human readers.
9. How the Ground Truth for the Training Set was Established
- As mentioned above, a "training set" for an AI model is not discussed in this document, as the device is for manual interpretation of digital slides. Therefore, the establishment of ground truth for such a set is not applicable here.