510(k) Data Aggregation (259 days)
E1000 Dx Digital Pathology Solution
The Epredia E1000 Dx Digital Pathology Solution is an automated digital slide creation, viewing, and management system. The Epredia E1000 Dx Digital Pathology Solution is intended for in vitro diagnostic use as an aid to the pathologist to review and interpret digital images of surgical pathology slides prepared from formalin-fixed paraffin embedded (FFPE) tissue. The Epredia E1000 Dx Digital Pathology Solution is not intended for use with frozen section, cytology, or non-FFPE hematopathology specimens.
The Epredia E1000 Dx Digital Pathology Solution consists of a Scanner (E1000 Dx Digital Pathology Scanner), which generates images in the MRXS file format, E1000 Dx Scanner Software, an Image Management System (E1000 Dx IMS), E1000 Dx Viewer Software, and a Display (Barco MDPC-8127). The Epredia E1000 Dx Digital Pathology Solution is for the creation and viewing of digital images of scanned glass slides that would otherwise be appropriate for manual visualization by conventional light microscopy. It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the validity of the interpretation of images obtained using the Epredia E1000 Dx Digital Pathology Solution.
The E1000 Dx Digital Pathology Solution is a high-capacity, automated whole slide imaging system for the creation, viewing, and management of digital images of surgical pathology slides. It allows digital whole slide images of glass slides that would otherwise be examined by conventional brightfield microscopy to be viewed on a display monitor.
The E1000 Dx Digital Pathology Solution consists of the following three components:

Scanner component:
- E1000 Dx Digital Pathology Scanner with E1000 firmware version 2.0.3
- E1000 Dx Scanner Software version 2.0.3

Viewer component:
- E1000 Dx Image Management System (IMS) Server version 2.3.2
- E1000 Dx Viewer Software version 2.7.2

Display component:
- Barco MDPC-8127
The E1000 Dx Digital Pathology Solution automatically creates digital whole slide images by scanning formalin-fixed, paraffin-embedded (FFPE) tissue slides, with a capacity to process up to 1,000 slides. The E1000 Dx Scanner Software (EDSS), which runs on the scanner workstation, controls the operation of the E1000 Dx Digital Pathology Scanner. The scanner workstation, provided with the E1000 Dx Digital Pathology Solution, includes a PC, monitor, keyboard, and mouse. The solution uses the proprietary MRXS format to store and transmit images between the E1000 Dx Digital Pathology Scanner and the E1000 Dx Image Management System (IMS).
The E1000 Dx IMS is a software component intended for use with the Barco MDPC-8127 display monitor; it runs on a separate, customer-provided pathologist viewing workstation PC. The E1000 Dx Viewer, an application managed through the E1000 Dx IMS, allows the acquired digital whole slide images to be annotated, stored, accessed, and examined on the Barco MDPC-8127 display. This functionality aids pathologists in interpreting digital images as an alternative to conventional brightfield microscopy.
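For orientation only: the MRXS container referenced above is the MIRAX whole-slide format, which the open-source OpenSlide library can read. The sketch below shows generic programmatic access to such a file; it is not the E1000 Dx software, the file path is hypothetical, and the device's proprietary MRXS variant may not be readable this way.

```python
# Illustrative only: generic whole-slide image access with the open-source
# OpenSlide library, which reads the MIRAX (.mrxs) container format.
# This is NOT the E1000 Dx software; the device's proprietary MRXS
# variant may not open with OpenSlide.
import openslide

slide = openslide.OpenSlide("example.mrxs")  # hypothetical file path

# Pyramid metadata: full-resolution dimensions and available zoom levels.
print("Dimensions:", slide.dimensions)
print("Levels:", slide.level_count)
print("Downsamples:", slide.level_downsamples)

# Read a 1024x1024 RGBA region at level 0 (full resolution), anchored at
# level-0 coordinates (0, 0) -- the kind of field of view a viewer
# application would render on the display.
region = slide.read_region((0, 0), 0, (1024, 1024))
region.convert("RGB").save("fov.png")

slide.close()
```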
Here's a breakdown of the acceptance criteria and the study demonstrating that the device meets them, based on the provided text:
Important Note: The provided text describes a Whole Slide Imaging System for digital pathology, which aids pathologists in reviewing and interpreting digital images of traditional glass slides. It does not describe an AI device for automated diagnosis or detection. Therefore, concepts like "effect size of how much human readers improve with AI vs without AI assistance" or "standalone (algorithm only without human-in-the-loop performance)" are not directly applicable to this device's proven capabilities as per the provided information.
Acceptance Criteria and Reported Device Performance
The core acceptance criterion for this device appears to be non-inferiority of digital review (MD) to optical microscopy review (MO) in terms of major discordance rates, each measured against a main sign-out diagnosis. In addition, precision (intra-system repeatability, inter-system repeatability, and inter-site reproducibility) is a key performance metric.
Table 1: Overall Major Discordance Rate for MD and MO
| Metric | Acceptance Criteria (Implied Non-Inferiority) | Reported Device Performance (Epredia E1000 Dx) |
|---|---|---|
| MD major discordance rate | N/A (compared against MO) | 2.51% (95% CI: 2.26%, 2.79%) |
| MO major discordance rate | N/A (baseline for comparison) | 2.59% (95% CI: 2.29%, 2.82%) |
| Difference (MD - MO) | Within an acceptable non-inferiority margin | -0.15% (95% CI: -0.40%, 0.41%) |
| Study met acceptance criteria | Yes, as defined in the protocol | Met |
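For intuition about the comparison in Table 1, the sketch below approximates the difference in major discordance proportions and a simple Wald-style 95% CI from the reported rates and review counts. The back-calculated discordance counts and the non-inferiority margin are assumptions, not figures from the submission, and the actual protocol analysis (likely accounting for paired reads and clustering) will not be exactly reproduced by this independent-samples calculation.

```python
# Hedged sketch of the non-inferiority comparison behind Table 1.
# Counts are back-calculated from the reported rates (2.51% of 3,897 MD
# reviews, 2.59% of 3,881 MO reviews) -- assumptions, not submission data.
import math

n_md, n_mo = 3897, 3881
x_md = round(0.0251 * n_md)   # ~98 major discordances, digital review
x_mo = round(0.0259 * n_mo)   # ~101 major discordances, optical review

p_md, p_mo = x_md / n_md, x_mo / n_mo
diff = p_md - p_mo

# Wald standard error for a difference of independent proportions.
se = math.sqrt(p_md * (1 - p_md) / n_md + p_mo * (1 - p_mo) / n_mo)
lo, hi = diff - 1.96 * se, diff + 1.96 * se

margin = 0.04  # hypothetical non-inferiority margin; not stated in the text
print(f"diff = {diff:+.4%}, 95% CI ({lo:+.4%}, {hi:+.4%})")
print("non-inferior" if hi < margin else "not shown non-inferior")
```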
Table 2: Precision Study Acceptance Criteria and Reported Performance
| Metric | Acceptance Criteria (Lower Limit of 95% CI) | Reported Device Performance (Epredia E1000 Dx) |
|---|---|---|
| Intra-system repeatability (average positive agreement) | > 85% | 96.9% (lower limit: 96.1%) |
| Inter-system repeatability (average positive agreement) | > 85% | 95.1% (lower limit: 94.1%) |
| Inter-site reproducibility (average positive agreement) | > 85% | 95.4% (lower limit: 93.6%) |
| All precision studies met acceptance criteria | Yes | Met |
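The precision endpoints are average positive agreement (APA) computed over comparison pairs. Below is a minimal sketch of one standard pairwise APA formulation; that the study used exactly this 2a/(2a + b + c) form is an assumption, since the text reports only the resulting rates.

```python
# Minimal sketch of average positive agreement (APA) over comparison pairs.
# Each pair is (reading_1, reading_2): True if the histologic feature was
# called present in that reading. APA = 2a / (2a + b + c), where a = both
# positive and b, c = the two discordant patterns. Whether the study used
# exactly this formulation is an assumption.
from typing import List, Tuple

def average_positive_agreement(pairs: List[Tuple[bool, bool]]) -> float:
    a = sum(1 for r1, r2 in pairs if r1 and r2)      # both positive
    b = sum(1 for r1, r2 in pairs if r1 and not r2)  # first positive only
    c = sum(1 for r1, r2 in pairs if not r1 and r2)  # second positive only
    return 2 * a / (2 * a + b + c)

# Toy example: 8 concordant-positive pairs and 1 discordant pair.
toy_pairs = [(True, True)] * 8 + [(True, False)]
print(f"APA = {average_positive_agreement(toy_pairs):.1%}")  # 94.1%
```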
Study Details
2. Sample Size and Data Provenance:
- Clinical Accuracy Study (Non-Inferiority):
  - Test set sample size: 3,897 digital image reviews (MD) and 3,881 optical microscope reviews (MO). The dataset comprised surgical pathology slides prepared from formalin-fixed, paraffin-embedded (FFPE) tissue.
  - Data provenance: Not explicitly stated. The "multi-centered" description implies multiple sites and diverse data, and the study is described as "blinded, and randomized," characteristics of a prospective design; clinical studies for FDA clearance are typically prospective and conducted at multiple institutions within the US or under equivalent international standards.
- Precision Studies (Intra-System, Inter-System, Inter-Site):
  - Test set sample size: A "comprehensive set of clinical specimens with defined, clinically relevant histologic features from various organ systems" was used. Results are reported as pairwise agreement counts (e.g., 2,511 comparison pairs for the intra-system and inter-system studies; 837 comparison pairs for the inter-site study) rather than raw slide or FOV counts; see the counting sketch after this list.
  - Data provenance: Clinical specimens; the sources are not specified directly, but the reproducibility studies likely drew on multiple sites, suggesting a diverse and possibly prospective collection.
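As a hedged illustration of why results are reported as comparison pairs: with k readings of the same field of view, each FOV contributes C(k, 2) pairwise agreements. The quoted totals are consistent with, for example, 837 FOVs read three times (837 x 3 = 2,511 pairs), but the actual study design is not stated in the text.

```python
# Hedged sketch of how pairwise comparison counts scale. With k readings
# per field of view (FOV), each FOV yields C(k, 2) pairwise agreements.
# The specific FOV/reading splits below are inferences from the quoted
# totals, not designs stated in the text.
from math import comb

def total_pairs(n_fovs: int, readings_per_fov: int) -> int:
    return n_fovs * comb(readings_per_fov, 2)

print(total_pairs(837, 3))  # 2511: matches the intra-/inter-system totals
print(total_pairs(279, 3))  # 837: one way the inter-site total could arise
```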
3. Number of Experts and Qualifications:
- Clinical Accuracy Study: Multiple pathologists performed both the digital and optical reviews. The exact number of pathologists is not stated; their qualifications are implied only by the terms "pathologist" and "qualified pathologist" and by the context of a clinical study supporting FDA clearance.
- Precision Studies:
  - Intra-system repeatability: "three different reading pathologists (RPs)."
  - Inter-system repeatability: "Three reading pathologists."
  - Inter-site reproducibility: "three different reading pathologists, each located at one of three different sites."
  - Qualifications: Referred to as "reading pathologists," implying trained, qualified professionals experienced in interpreting pathology slides.
4. Adjudication Method for the Test Set:
- Clinical Accuracy Study: The ground truth was established by the "main sign-out diagnosis (SD)," i.e., the definitive diagnosis rendered by a primary pathologist, which served as the reference standard. Whether the sign-out diagnosis itself involved an adjudication process is not specified; it is presented as the final reference.
- Precision Studies: Agreement rates were calculated from pathologists' readings of predetermined features on fields of view (FOVs). In the intra-system study, each reader's "original assessment" appears to serve as the baseline for agreement; beyond the initially "defined, clinically relevant histologic features," no process for establishing a single adjudicated ground truth per FOV, before or during the study, is described. The reported agreement rates are pairwise comparisons between observers or system readings, not comparisons against an adjudicated reference for each FOV.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:
- A comparative effectiveness study was performed, comparing human performance with the E1000 Dx Digital Pathology Solution (MD) against human performance with an optical microscope (MO).
- Effect Size: The study demonstrated non-inferiority of digital review to optical microscopy. The "effect size" is captured by the difference in major discordance rates:
  - The estimated difference (MD - MO) was -0.15% (95% CI: -0.40%, 0.41%). This narrow confidence interval, which includes zero and stays close to it, supports the non-inferiority claim and indicates no practically significant difference in major discordance rates between the two modalities when used by human readers.
6. Standalone (Algorithm Only) Performance:
- No, a standalone (algorithm only) performance study was not conducted or described. This device is a Whole Slide Imaging System intended as an aid to the pathologist for human review and interpretation, not an AI for automated diagnosis.
7. Type of Ground Truth Used:
- Clinical Accuracy Study: The ground truth used was the "main sign-out diagnosis (SD)." This is a form of expert consensus or definitive clinical diagnosis, widely accepted as the reference standard in pathology.
- Precision Study: The precision studies used "defined, clinically relevant histologic features," with pathologists recording the presence of each feature. Although not explicitly framed as "ground truth" in the same way as the sign-out diagnosis, the original assessment of feature presence effectively serves as the practical reference for the repeatability and reproducibility calculations.
8. Sample Size for the Training Set:
- The document does not mention a training set as this device is not an AI/ML algorithm that learns from data. It's a hardware and software system designed to digitize and display images for human review. The "development processes" mentioned are for the hardware and software functionality, not for training a model.
9. How the Ground Truth for the Training Set Was Established:
- This question is not applicable as there is no training set for this device as described. Ground truth establishment mentioned in the document relates to clinical validation and precision, not AI model training.