Search Results

Found 216 results

510(k) Data Aggregation

    K Number
    K250003
    Date Cleared
    2025-08-29

    (239 days)

    Product Code
    Regulation Number
    866.6080
    Reference & Predicate Devices
    N/A
    Panel: Pathology (PA)
    Intended Use
    Device Description
    AI/ML Overview

    K Number
    DEN240068
    Manufacturer
    Date Cleared
    2025-07-31

    (248 days)

    Product Code
    Regulation Number
    864.3755
    Type
    Direct
    Reference & Predicate Devices
    N/A
    Panel: Pathology (PA)
    Intended Use
    Device Description
    AI/ML Overview

    K Number
    K243391
    Device Name
    AISight Dx
    Manufacturer
    Date Cleared
    2025-06-26

    (238 days)

    Product Code
    Regulation Number
    864.3700
    Reference & Predicate Devices
    Panel: Pathology (PA)
    Intended Use

    AISight Dx is a software only device intended for viewing and management of digital images of scanned surgical pathology slides prepared from formalin-fixed paraffin embedded (FFPE) tissue. It is an aid to the pathologist to review, interpret, and manage digital images of these slides for primary diagnosis. AISight Dx is not intended for use with frozen sections, cytology, or non-FFPE hematopathology specimens.

    It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the quality of the images obtained and, where necessary, to use conventional light microscopy review when making a diagnostic decision. AISight Dx is intended to be used with interoperable displays, scanners and file formats, and web browsers that have been 510(k) cleared for use with AISight Dx, or with 510(k)-cleared displays, 510(k)-cleared scanners and file formats, and web browsers that have been assessed in accordance with the Predetermined Change Control Plan (PCCP) for qualifying interoperable devices.

    Device Description

    AISight Dx is a web-based, software-only device that is intended to aid pathology professionals in viewing, interpretation, and management of digital whole slide images (WSI) of scanned surgical pathology slides prepared from formalin-fixed, paraffin-embedded (FFPE) tissue obtained from Hamamatsu NanoZoomer S360MD Slide scanner or Leica Aperio GT 450 DX scanner (Table 1). It aids the pathologist in the review, interpretation, and management of pathology slide digital images used to generate a primary diagnosis.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study details for the AISight Dx device, based on the provided FDA 510(k) Clearance Letter:


    Acceptance Criteria and Reported Device Performance

    Acceptance Criteria Category | Specific Acceptance Criteria | Reported Device Performance
    Pixel-wise Comparison | Identical image reproduction (max pixelwise difference

    K Number
    K250968
    Date Cleared
    2025-06-20

    (81 days)

    Product Code
    Regulation Number
    N/A
    Reference & Predicate Devices
    Panel: Pathology (PA)
    Intended Use

    For In Vitro Diagnostic Use

    The PathPresenter Clinical Viewer is software intended for viewing and managing whole slide images of scanned glass slides derived from formalin-fixed paraffin embedded (FFPE) tissue. It is an aid to pathologists to review and render a diagnosis using the digital images for the purposes of primary diagnosis. PathPresenter Clinical is not intended for use with frozen sections, cytology specimens, or non-FFPE specimens. It is the responsibility of the pathologist to employ appropriate procedures and safeguards to assure the validity of the interpretation of images using PathPresenter Clinical software. PathPresenter Clinical Viewer is intended for use with Hamamatsu NanoZoomer S360MD Slide scanner NDPI image formats viewed on the Barco NV MDPC-8127 display device.

    Device Description

    The PathPresenter Clinical Viewer (version V1.0.1) is a web-based software application designed for viewing and managing whole slide images generated from scanned glass slides of formalin-fixed, paraffin-embedded (FFPE) surgical pathology tissue. It serves as a diagnostic aid, enabling pathologists to review digital images and render a primary pathology diagnosis. Functions of the viewer include zooming and panning the image, annotating the image, measuring distances and areas in the image and retrieving multiple images from the slide tray including prior cases and deprecated slides.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study information for the PathPresenter Clinical Viewer based on the provided FDA 510(k) clearance letter:


    Acceptance Criteria and Device Performance for PathPresenter Clinical Viewer

    1. Table of Acceptance Criteria and Reported Device Performance

    Test | Acceptance Criteria | Reported Device Performance
    Pixelwise Comparison | The 95th percentile of the pixel-wise color difference in any image pair is less than 3 CIEDE2000 (
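The pixel-wise acceptance criterion above, that the 95th percentile of per-pixel color differences stay below 3, can be sketched as a simple check. This is an illustrative sketch, not the sponsor's verification code: the submissions specify the full CIEDE2000 formula on CIELAB values, while the stand-in below uses a plain Euclidean (CIE76-style) distance and a nearest-rank percentile.

```python
# Hedged sketch of a pixel-wise image comparison check: the 95th percentile
# of per-pixel color differences between two renderings of the same slide
# must fall below a threshold (3.0 in the clearances above). The real
# criterion uses CIEDE2000; math.dist here is a simplified stand-in.
import math

def percentile(values, q):
    """Nearest-rank percentile of a list of numbers (0 < q <= 100)."""
    ordered = sorted(values)
    rank = max(1, math.ceil(q / 100 * len(ordered)))
    return ordered[rank - 1]

def passes_pixelwise_check(lab_a, lab_b, threshold=3.0, q=95):
    """lab_a, lab_b: equal-length lists of (L, a, b) tuples, one per pixel.
    Returns True when the q-th percentile difference is under threshold."""
    diffs = [
        math.dist(p, r)  # stand-in for CIEDE2000(p, r)
        for p, r in zip(lab_a, lab_b)
    ]
    return percentile(diffs, q) < threshold
```

A lone outlier pixel does not fail the check, since the criterion is stated on the 95th percentile rather than the maximum difference.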

    K Number
    K242545
    Manufacturer
    Date Cleared
    2025-05-23

    (269 days)

    Product Code
    Regulation Number
    864.3700
    Reference & Predicate Devices
    N/A
    Panel: Pathology (PA)
    Intended Use

    RadiForce MX317W-PA is intended for in vitro diagnostic use to display digital images of histopathology slides acquired from IVD-labeled whole-slide imaging scanners and viewed using IVD-labeled digital pathology image viewing software that have been validated for use with this device.

    RadiForce MX317W-PA is an aid to the pathologist and is used for review and interpretation of histopathology slides for the purposes of primary diagnosis. It is the responsibility of the pathologist to employ appropriate procedures and safeguards to assure the validity of the interpretation of images using this product. The display is not intended for use with digital images from frozen section, cytology, or non-formalin-fixed, paraffin-embedded (non-FFPE) hematopathology specimens.

    Device Description

    RadiForce MX317W-PA is a color LCD monitor for viewing digital images of histopathology slides. The color LCD panel employs in-plane switching (IPS) technology allowing wide viewing angles and the matrix size is 4,096 x 2,160 pixels (8MP) with a pixel pitch of 0.1674 mm.
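The matrix size and pixel pitch quoted above fix the display's physical geometry; the quick arithmetic below derives the active area and diagonal from them. The derived numbers are our own arithmetic, not figures taken from the clearance letter.

```python
# Derive a display's active area and diagonal from its pixel matrix and
# pixel pitch. Inputs are the figures quoted for the RadiForce MX317W-PA:
# 4096 x 2160 pixels at a 0.1674 mm pitch.
import math

def active_area_mm(cols, rows, pitch_mm):
    """Active display width and height in millimetres."""
    return cols * pitch_mm, rows * pitch_mm

def diagonal_inches(width_mm, height_mm):
    """Diagonal in inches (1 inch = 25.4 mm)."""
    return math.hypot(width_mm, height_mm) / 25.4

w, h = active_area_mm(4096, 2160, 0.1674)
# w ≈ 685.7 mm, h ≈ 361.6 mm, diagonal ≈ 30.5 in
```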

    Factory-calibrated display modes, each characterized by a specific tone curve, a specific luminance range, and a specific color temperature, are stored in lookup tables within the monitor. This helps ensure consistent tone curves even if a display controller or workstation must be replaced or serviced.

    The "Patho" mode is the mode intended for digital pathology use.

    AI/ML Overview

    The provided FDA 510(k) clearance letter for the RadiForce MX317W-PA describes a display device for digital histopathology. It does not contain information about an AI/ML medical device. Therefore, a study proving the device meets acceptance criteria related to AI/ML performance (such as accuracy, sensitivity, specificity, MRMC studies, and ground truth establishment methods for large datasets) is not present in this document.

    The document primarily focuses on the technical performance and equivalence of a display monitor to a predicate device. The "performance testing" section refers to bench tests validating display characteristics like spatial resolution, luminance, and color, not the clinical performance of an AI algorithm interpreting medical images.

    Given the information provided, here's an analysis based on the actual content:


    Based on the provided document, the RadiForce MX317W-PA is a display monitor, not an AI/ML medical device designed for image interpretation. Therefore, the acceptance criteria and study detailed below pertain to the display's technical performance and its equivalence to a predicate display, not to an AI algorithm's diagnostic accuracy.

    1. Table of Acceptance Criteria and Reported Device Performance

    The document states that "the display characteristics of the RadiForce MX317W-PA meet the pre-defined criteria when criteria are set." However, the exact numerical acceptance criteria for each bench test (e.g., minimum luminance, pixel defect limits) are not explicitly listed in the provided text. The document only lists the types of tests performed and states that the device "has display characteristics equivalent to those of the predicate device" and "meet the pre-defined criteria."

    Acceptance Criteria Category | Reported Device Performance Summary (as per document)
    User controls (modes & settings) | Performed, assumed met
    Spatial resolution | Performed, assumed met, equivalent to predicate
    Pixel defects | Performed, assumed met, equivalent to predicate
    Artifacts | Performed, assumed met, equivalent to predicate
    Temporal response | Performed, assumed met, equivalent to predicate
    Maximum and minimum luminance | Performed, assumed met, equivalent to predicate
    Grayscale | Performed, assumed met, equivalent to predicate
    Luminance uniformity and Mura test | Performed, assumed met, equivalent to predicate
    Stability of luminance and chromaticity response | Performed, assumed met, equivalent to predicate
    Bidirectional reflection distribution function | Performed, assumed met, equivalent to predicate
    Gray tracking | Performed, assumed met, equivalent to predicate
    Color scale | Performed, assumed met, equivalent to predicate
    Color gamut volume | Performed, assumed met, equivalent to predicate

    Note: The document only states that these tests were performed and that the results show equivalence to the predicate device and that the device meets pre-defined criteria. It does not provide the specific numerical results or the exact numerical acceptance criteria for each test.

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: The document describes bench tests performed on a single device, the RadiForce MX317W-PA (it's a physical monitor, not a software algorithm processing a dataset). There is no mention of a "test set" in the context of a dataset of medical images.
    • Data Provenance: Not applicable. The "data" here refers to the measured performance characteristics of the physical display device itself during bench testing, not patient data.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    • Not applicable. The ground truth for a display monitor's technical performance is established by standardized measurement equipment and protocols, not by expert interpretation of images. The device itself is the object under test for its physical characteristics.

    4. Adjudication Method for the Test Set

    • Not applicable. This concept applies to human or AI interpretation of medical images, where discrepancies among readers or algorithms might need resolution. For physical device performance, measurements are generally objective.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Not performed/Applicable. An MRMC study is designed to assess the performance of a diagnostic aid (like AI) on image interpretation by human readers. This device is a display monitor, not an AI algorithm. Its function is to display images, not to interpret them or assist human interpreters in a diagnostic decision-making process that would warrant an MRMC study.

    6. Standalone (i.e., Algorithm Only Without Human-in-the-Loop Performance) Study

    • Not applicable. As stated, this is a display monitor, not an algorithm.

    7. Type of Ground Truth Used:

    • The ground truth for the display's performance tests would be metrology-based standards and calibration references (e.g., standard luminance values, colorimetry standards) against which the display's output is measured. It is not expert consensus, pathology, or outcomes data, as these relate to diagnostic accuracy studies.

    8. The Sample Size for the Training Set

    • Not applicable. This device is hardware; it does not involve training data or machine learning algorithms.

    9. How the Ground Truth for the Training Set Was Established

    • Not applicable. No training set exists for this device.

    K Number
    K250414
    Device Name
    CaloPix
    Manufacturer
    Date Cleared
    2025-05-14

    (90 days)

    Product Code
    Regulation Number
    864.3700
    Reference & Predicate Devices
    Panel: Pathology (PA)
    Intended Use

    For In Vitro Diagnostic Use Only

    CaloPix is a software only device for viewing and management of digital images of scanned surgical pathology slides prepared from Formalin-Fixed Paraffin Embedded (FFPE) tissue.

    CaloPix is intended for in vitro diagnostic use as an aid to the pathologist to review, interpret and manage these digital slide images for the purpose of primary diagnosis.

    CaloPix is not intended for use with frozen sections, cytology, or non-FFPE hematopathology specimens.

    It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the quality of the images obtained and the validity of the interpretation of images using CaloPix.

    CaloPix is intended to be used with the interoperable components specified in the below Table:

    Scanner Hardware | Scanner Output File Format | Interoperable Displays
    Leica Aperio GT 450 DX scanner | SVS | Dell U3223QE
    Hamamatsu NanoZoomer S360MD Slide scanner | NDPI | JVC Kenwood JD-C240BN01A
    Device Description

    CaloPix, version 6.1.0 IVDUS, is a web-based software-only device that is intended to aid pathology professionals in viewing, interpreting and managing digital Whole Slide Images (WSI) of glass slides obtained from the Hamamatsu NanoZoomer S360MD slide scanner (NDPI file format) and viewed on the JVC Kenwood JD-C240BN01A display, as well as those obtained from the Leica Aperio GT 450 DX scanner (SVS file format) and viewed on the Dell U3223QE display.

    CaloPix does not include any automated Image Analysis Applications that would constitute computer aided detection or diagnosis.

    CaloPix is for viewing digital images of scanned glass slides that would otherwise be appropriate for manual visualization by conventional light microscopy.

    As a whole, CaloPix is a pathology Image Management System (IMS) which brings case-centric digital pathology image management, collaboration, and image processing. CaloPix consists of:

    • Integration with Laboratory Information Systems (LIS): allows patient data associated with the cases, scanned whole slide images, and other related medical images to be obtained automatically from the LIS for analysis. The data stored in the database is automatically updated according to the interface protocol with the LIS.

    • Database: after ingestion, scanned WSI can be organized in the CaloPix database, which consists of folders (cases) containing patient identification data and examination results from a LIS.

    Ingestion of the slides is performed through an integrated module that allows their automatic indexation based on patient data retrieved from the LIS. After their ingestion, image files are stored in a CaloPix-specific file storage environment, that can be on premises or in the cloud.

    • The CaloPix viewer component, which processes scanned whole slide images and includes functions for panning, zooming, screen capture, annotations, distance and surface measurement, and image registration. This viewer relies on image servers (IMGSRV) that extract image tiles from the whole slide image file and send these tiles to the CaloPix viewer for smooth and fast viewing.
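Tile-based serving of the kind the IMGSRV component performs comes down to working out which fixed-size tiles of the pyramid overlap the viewer's current viewport. The sketch below shows that tile math under illustrative assumptions (a 256-pixel tile edge, viewport coordinates already expressed at the requested pyramid level); it is not CaloPix internals.

```python
# Sketch of whole-slide-image tile selection: given a viewport over a WSI
# pyramid level, return the (col, row) indices of every fixed-size tile the
# image server must extract and send to the viewer. TILE = 256 is a common
# tile edge length in WSI pyramids, assumed here for illustration.
from typing import List, Tuple

TILE = 256  # tile edge length in pixels (illustrative assumption)

def tiles_for_viewport(x: int, y: int, width: int, height: int) -> List[Tuple[int, int]]:
    """Viewport (x, y, width, height) is in pixel coordinates at the
    requested pyramid level; returns overlapping tile indices row by row."""
    first_col, last_col = x // TILE, (x + width - 1) // TILE
    first_row, last_row = y // TILE, (y + height - 1) // TILE
    return [(c, r)
            for r in range(first_row, last_row + 1)
            for c in range(first_col, last_col + 1)]
```

Serving only the overlapping tiles, rather than the full multi-gigapixel image, is what makes panning and zooming feel smooth in a web-based viewer.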
    AI/ML Overview

    The FDA 510(k) clearance letter for CaloPix indicates that the device's performance was evaluated through a series of tests to demonstrate its safety and effectiveness. The primary study described in the provided document focuses on technical performance testing rather than a clinical multi-reader multi-case (MRMC) study.

    Here's a breakdown of the requested information based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    Test | Acceptance Criteria | Reported Device Performance
    Pixel-wise comparison (Image Reproduction Accuracy) | The 95th percentile of the pixel-wise color differences (CIEDE2000, ΔE00) in any image pair between CaloPix and the predicate device's IRMS must be less than 3 (ΔE00

    K Number
    K242244
    Device Name
    Viewer+
    Manufacturer
    Date Cleared
    2025-03-14

    (226 days)

    Product Code
    Regulation Number
    864.3700
    Reference & Predicate Devices
    Panel: Pathology (PA)
    Intended Use

    For In Vitro Diagnostic Use

    Viewer+ is a software only device intended for viewing and management of digital images of scanned surgical pathology slides prepared from formalin-fixed paraffin embedded (FFPE) tissue. It is an aid to the pathologist to review, interpret and manage digital images of pathology slides for primary diagnosis. Viewer+ is not intended for use with frozen sections, cytology, or non-FFPE hematopathology specimens.

    It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the quality of the images obtained and, where necessary, use conventional light microscopy review when making a diagnostic decision. Viewer+ is intended for use with Hamamatsu NanoZoomer S360MD Slide scanner and BARCO MDPC-8127 display.

    Device Description

    Viewer+, version 1.0.1, is a web-based software device that facilitates the viewing and navigating of digitized pathology images of slides prepared from FFPE-tissue specimens acquired from Hamamatsu NanoZoomer S360MD Slide scanner and viewed on BARCO MDPC-8127 display. Viewer+ renders these digitized pathology images for review, management, and navigation for pathology primary diagnosis.

    Viewer+ is operated as follows:

      1. Image acquisition is performed using the NanoZoomer S360MD Slide scanner according to its Instructions for Use. The operator performs quality control of the digital slides per the instructions of the NanoZoomer and lab specifications to determine if re-scans are necessary.
      2. Once image acquisition is complete and the image becomes available in the scanner's database file system, a separate medical image communications software (not part of the device) automatically uploads the image and its corresponding metadata to persistent cloud storage. Image and data integrity checks are performed during the upload to ensure data accuracy.
      3. The subject device enables the reading pathologist to open a patient case, view the images, and perform actions such as zooming, panning, measuring distances and areas, and annotating images as needed. After reviewing all images for a case, the pathologist renders a diagnosis.
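The integrity checks in the upload step above are commonly implemented by comparing a cryptographic digest of the file before and after transfer. The sketch below shows that generic pattern with `hashlib`; it is our illustration of the technique, not the actual mechanism used by the Viewer+ upload software.

```python
# Generic upload-integrity sketch: hash the local file in chunks and
# compare against the digest reported by the storage service after upload.
# A mismatch signals corruption in transit.
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """SHA-256 hex digest of a file, read in chunks to bound memory use."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_upload(local: Path, remote_digest: str) -> bool:
    """True if the locally computed digest matches the one reported
    for the uploaded copy."""
    return sha256_of(local) == remote_digest
```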
    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study details for the Viewer+ device, based on the provided FDA 510(k) summary:

    1. Table of Acceptance Criteria and Reported Device Performance

    Acceptance Criterion | Reported Device Performance
    Pixel-wise comparison (of images reproduced by Viewer+ and NZViewMD for the same file generated from the NanoZoomer S360MD Slide scanner) | The 95th percentile of pixel-wise differences between Viewer+ and NZViewMD was less than 3 CIEDE2000, indicating their output images are pixel-wise identical and visually adequate.
    Turnaround time (for opening, panning, and zooming an image) | Found to be adequate for the intended use of the device.
    Measurement accuracy (using scanned images of biological slides) | Viewer+ was found to perform accurate measurements with respect to its intended use.
    Usability testing | Demonstrated that the subject device is safe and effective for the intended users, uses, and use environments.

    2. Sample Size Used for the Test Set and Data Provenance

    The document does not explicitly state the specific sample size of images or cases used for the "Test Set" in the performance studies. It mentions "scanned images of the biological slides" for measurement accuracy and "images reproduced by Viewer+ and NZViewMD for the same file" for pixel-wise comparison.

    The data provenance (country of origin, retrospective/prospective) is also not specified in the provided text.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    The document does not specify the number of experts or their qualifications used to establish ground truth for the test set. It mentions that the device is "an aid to the pathologist" and that "It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the quality of the images obtained and, where necessary, use conventional light microscopy review when making a diagnostic decision." However, this relates to the intended use and not a specific part of the performance testing described.

    4. Adjudication Method for the Test Set

    The document does not describe any specific adjudication method (e.g., 2+1, 3+1) used for establishing ground truth or evaluating the test set results. The pixel-wise comparison relies on quantitative color differences, and usability is assessed according to FDA guidance.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No multi-reader multi-case (MRMC) comparative effectiveness study comparing human readers with and without AI assistance is mentioned or implied in the provided text. The device is a "viewer" and not an AI-assisted diagnostic tool that would typically involve such a study.

    6. Standalone Performance (Algorithm Only without Human-in-the-Loop)

    The performance tests described (pixel-wise comparison, turnaround time, measurements) primarily relate to the technical functionality of the Viewer+ software itself, which is a viewing and management tool. These tests can be interpreted as standalone assessments of the software's performance in rendering images and providing basic functions like measurements. However, it's crucial to note that Viewer+ is an "aid to the pathologist" and not intended to provide automated diagnoses without human intervention. The "standalone" performance here refers to its core functionalities as a viewer, not as an autonomous diagnostic algorithm.

    7. Type of Ground Truth Used

    • Pixel-wise comparison: The ground truth for this test was the image reproduced by the predicate device's software (NZViewMD) for the same scanned file. The comparison was quantitative (CIEDE2000).
    • Measurements: The ground truth would likely be established by known physical dimensions on the biological slides, verified by other means, or through precise calibration. The document states "Measurement accuracy has been verified using scanned images of the biological slides."
    • Usability testing: The ground truth here is the fulfillment of usability requirements and user satisfaction/safety criteria, as assessed against FDA guidance.
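The measurement-accuracy discussion above ultimately reduces to converting pixel coordinates into physical units using the scan resolution recorded with the image. A minimal sketch, under the assumption that the scanner metadata supplies a microns-per-pixel (mpp) value; the function names are illustrative, not Viewer+ API.

```python
# Convert viewer measurements from pixel space into physical units using
# the scan resolution (microns per pixel) from the image metadata.
import math

def distance_um(p1, p2, mpp: float) -> float:
    """Physical distance in micrometres between two points given in pixel
    coordinates, for a scan with `mpp` microns per pixel."""
    return math.dist(p1, p2) * mpp

def area_um2(width_px: float, height_px: float, mpp: float) -> float:
    """Physical area of an axis-aligned rectangular annotation; area scales
    with the square of the resolution."""
    return width_px * height_px * mpp * mpp
```

Verifying such conversions against slides with known physical features (e.g. a calibration target) is one way the stated measurement-accuracy check could be grounded.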

    8. Sample Size for the Training Set

    The document does not mention the existence of a "training set" in the context of the Viewer+ device. This is a software-only device for viewing and managing images, not an AI/ML algorithm that typically requires a training set for model development.

    9. How the Ground Truth for the Training Set Was Established

    As no training set is mentioned for this device, information on how its ground truth was established is not applicable.


    K Number
    K243871
    Date Cleared
    2025-03-06

    (79 days)

    Product Code
    Regulation Number
    864.3700
    Reference & Predicate Devices
    Panel: Pathology (PA)
    Intended Use

    The Philips IntelliSite Pathology Solution (PIPS) 5.1 is an automated digital slide creation, viewing, and management system. The PIPS 5.1 is intended for in vitro diagnostic use as an aid to the pathologist to review and interpret digital images of surgical pathology slides prepared from formalin-fixed paraffin embedded (FFPE) tissue. The PIPS 5.1 is not intended for use with frozen section, cytology, or non-FFPE hematopathology specimens.

    The PIPS 5.1 comprises the Image Management System (IMS) 4.2, the Ultra Fast Scanner (UFS), Pathology Scanner SG20, Pathology Scanner SG60, Pathology Scanner SG300, and a Philips PP27QHD display, a Beacon C411W display, or a Barco MDCC-4430 display. The PIPS 5.1 is for creation and viewing of digital images of scanned glass slides that would otherwise be appropriate for manual visualization by conventional light microscopy. It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the validity of the interpretation of images obtained using PIPS 5.1.

    Device Description

    The Philips IntelliSite Pathology Solution (PIPS) 5.1 is an automated digital slide creation, viewing, and management system. PIPS 5.1 consists of two subsystems and a display component:

      1. A scanner, in any combination of the following scanner models:
         • Ultra Fast Scanner (UFS)
         • Pathology Scanner SG, with different versions for varying slide capacity: Pathology Scanner SG20, Pathology Scanner SG60, Pathology Scanner SG300
      2. Image Management System (IMS) 4.2
      3. Clinical display: PP27QHD, C411W, or MDCC-4430

    PIPS 5.1 is for creation and viewing of digital images of scanned glass slides that would otherwise be appropriate for manual visualization by conventional light microscopy. The PIPS does not include any automated image analysis applications that would constitute computer aided detection or diagnosis. The pathologists only view the scanned images and utilize the image review manipulation software in the PIPS 5.1.

    AI/ML Overview

    This document is a 510(k) summary for the Philips IntelliSite Pathology Solution (PIPS) 5.1. It describes the device, its intended use, and compares it to a legally marketed predicate device (also PIPS 5.1, K242848). The key change in the subject device is the introduction of a new clinical display, Barco MDCC-4430.

    Here's the breakdown of the acceptance criteria and study information:

    1. Table of Acceptance Criteria and Reported Device Performance

    The submission focuses on demonstrating substantial equivalence of the new display (Barco MDCC-4430) to the predicate's display (Philips PP27QHD). The acceptance criteria are largely derived from the FDA's "Technical Performance Assessment of Digital Pathology Whole Slide Imaging Devices" (TPA Guidance) and compliance with international consensus standards. The performance is reported as successful verification showing equivalence.

    Acceptance Criteria (TPA Guidance) | Reported Device Performance (Subject Device with Barco MDCC-4430) | Conclusion on Substantial Equivalence
    Display type | Color LCD | Substantially equivalent: minor difference in physical display size is a minor change and does not raise any questions of safety or effectiveness.
    Manufacturer | Barco N.V. | Same as above.
    Technology | IPS technology with a-Si thin-film transistor (unchanged from predicate) | Substantially equivalent: proposed and predicate device are considered substantially equivalent.
    Physical display size | 714 mm x 478 mm x 74 mm | Substantially equivalent: minor change; does not raise safety/effectiveness questions.
    Active display area | 655 mm x 410 mm (30.4 inch diagonal) | Substantially equivalent: slightly higher viewable area is a minor change; verification testing confirms image quality is equivalent to the predicate device.
    Aspect ratio | 16:10 | Substantially equivalent: this change does not raise any new concerns on safety and effectiveness.
    Resolution | 2560 x 1600 pixels | Substantially equivalent: slightly higher resolution and pixel size is a minor change; verification testing confirms image quality is equivalent to the predicate device.
    Pixel pitch | 0.256 mm x 0.256 mm | Same as above.
    Color calibration tools (software) | QAWeb Enterprise version 2.14.0 installed on the workstation | Substantially equivalent: the new display uses different calibration software, but the calibration method (built-in front sensor), calibration targets, and frequency of quality control tests remain unchanged.
    Color calibration tools (hardware) | Built-in front sensor (same as predicate) | Same as above.
    Additional non-clinical performance tests (TPA Guidance) | Verification that technological characteristics of the display were not affected by the new panel, including spatial resolution, pixel defects, artifacts, temporal response, maximum and minimum luminance, grayscale, luminance uniformity, stability of luminance and chromaticity, bidirectional reflection distribution function, gray tracking, color scale response, and color gamut volume | Verification showed that the proposed device has similar technological characteristics compared to the predicate device following the TPA guidance, in compliance with international and FDA-recognized consensus standards (IEC 60601-1, IEC 60601-1-6, IEC 62471, ISO 14971); safe and effective, conforms to intended use.

    2. Sample Size Used for the Test Set and Data Provenance

    The document does not explicitly state a "sample size" in terms of cases or images for the non-clinical performance tests. The tests were performed on "the display of the proposed device" to verify its technological characteristics. This implies testing on representative units of the Barco MDCC-4430 display.

    The data provenance is not specified in terms of country of origin or retrospective/prospective, as the tests were bench testing (laboratory-based performance evaluation of the display hardware) rather than clinical studies with patient data.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and their Qualifications

    This information is not applicable to this submission. The tests performed were technical performance evaluations of hardware (the display), not clinical evaluations requiring expert interpretation of medical images. Ground truth for these technical tests would be established by objective measurements against specified technical standards and parameters.

    4. Adjudication Method for the Test Set

    This information is not applicable to this submission. As the tests were technical performance evaluations of hardware, there would not be an adjudication process involving multiple human observers interpreting results in the same way there would be for a clinical trial.

    5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study was done

    No, a Multi Reader Multi Case (MRMC) comparative effectiveness study was not done.

    The submission explicitly states: "The proposed device with the new display did not require clinical performance data since substantial equivalence to the currently marketed predicate device was demonstrated with the following attributes: Intended Use / Indications for Use, Technological characteristics, Non-clinical performance testing, and Safety and effectiveness."

    Therefore, there is no effect size reported for human readers with and without AI assistance, as AI functionality for diagnostic interpretation is not the subject of this 510(k) (the PIPS 5.1 "does not include any automated image analysis applications that would constitute computer aided detection or diagnosis").

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done

    This information is not applicable. The PIPS 5.1 is a digital slide creation, viewing, and management system, not an AI algorithm for diagnostic interpretation. The focus of this 510(k) is the display component. The device itself is designed for human-in-the-loop use by a pathologist.

    7. The Type of Ground Truth Used

    For the non-clinical performance data, the "ground truth" was based on:

    • International and FDA-recognized consensus standards: This includes IEC 60601-1, IEC 60601-1-6, IEC 62471, and ISO 14971.
    • TPA Guidance: The "Technical Performance Assessment of Digital Pathology Whole Slide Imaging Devices" guidance document, which specifies technical parameters for displays.
    • Predicate device characteristics: Demonstrating that the new display's performance matches or is equivalent to the legally marketed predicate device's display across various technical parameters.

    In essence, the ground truth was established by engineering specifications, technical performance targets, and regulatory standards for display devices.

    8. The Sample Size for the Training Set

    This information is not applicable. The PIPS 5.1, as described, is a system for digital pathology, not an AI algorithm that requires a training set of data. The 510(k) specifically mentions: "The PIPS does not include any automated image analysis applications that would constitute computer aided detection or diagnosis." Therefore, there is no AI training set.

    9. How the Ground Truth for the Training Set Was Established

    This information is not applicable, as there is no AI training set.


    K Number
    K241717
    Date Cleared
    2025-02-28

    (259 days)

    Product Code
    Regulation Number
    864.3700
    Reference & Predicate Devices
    N/A
    Why did this record match?
    Panel :

    Pathology (PA)

    Intended Use

    The Epredia E1000 Dx Digital Pathology Solution is an automated digital slide creation, viewing, and management system. The Epredia E1000 Dx Digital Pathology Solution is intended for in vitro diagnostic use as an aid to the pathologist to review and interpret digital images of surgical pathology slides prepared from formalin-fixed paraffin embedded (FFPE) tissue. The Epredia E1000 Dx Digital Pathology Solution is not intended for use with frozen section, cytology, or non-FFPE hematopathology specimens.

    The Epredia E1000 Dx Digital Pathology Solution consists of a Scanner (E1000 Dx Digital Pathology Scanner), which generates in MRXS image file format, E1000 Dx Scanner Software, Image Management System (E1000 Dx IMS), E1000 Dx Viewer Software, and Display (Barco MDPC-8127). The Epredia E1000 Dx Digital Pathology Solution is for creation and viewing of digital images of scanned glass slides that would otherwise be appropriate for manual visualization by conventional light microscopy. It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the validity of the interpretation of images obtained using Epredia E1000 Dx Digital Pathology Solution.

    Device Description

    The E1000 Dx Digital Pathology Solution is a high-capacity, automated whole slide imaging system for the creation, viewing, and management of digital images of surgical pathology slides. It allows whole slide digital images to be viewed on a display monitor that would otherwise be appropriate for manual visualization by conventional brightfield microscopy.

    The E1000 Dx Digital Pathology Solution consists of the following three components:

    Scanner component:

    • E1000 Dx Digital Pathology Scanner with E1000 firmware version 2.0.3
    • E1000 Dx Scanner Software version 2.0.3

    Viewer component:

    • E1000 Dx Image Management System (IMS) Server version 2.3.2
    • E1000 Dx Viewer Software version 2.7.2

    Display component:

    • Barco MDPC-8127

    The E1000 Dx Digital Pathology Solution automatically creates digital whole slide images by scanning formalin-fixed, paraffin-embedded (FFPE) tissue slides, with a capacity to process up to 1,000 slides. The E1000 Dx Scanner Software (EDSS), which runs on the scanner workstation, controls the operation of the E1000 Dx Digital Pathology Scanner. The scanner workstation, provided with the E1000 Dx Digital Pathology Solution, includes a PC, monitor, keyboard, and mouse. The solution uses a proprietary MRXS format to store and transmit images between the E1000 Dx Digital Pathology Scanner and the E1000 Dx Image Management System (IMS).

    The E1000 Dx IMS is a software component intended for use with the Barco MDPC-8127 display monitor and runs on a separate, customer-provided pathologist viewing workstation PC. The E1000 Dx Viewer, an application managed through the E1000 Dx IMS, allows the obtained digital whole slide images to be annotated, stored, accessed, and examined on Barco MDPC-8127 video display monitor. This functionality aids pathologists in interpreting digital images as an alternative to conventional brightfield microscopy.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study proving the device meets them, based on the provided text:

    Important Note: The provided text describes a Whole Slide Imaging System for digital pathology, which aids pathologists in reviewing and interpreting digital images of traditional glass slides. It does not describe an AI device for automated diagnosis or detection. Therefore, concepts like "effect size of how much human readers improve with AI vs without AI assistance" or "standalone (algorithm only without human-in-the-loop performance)" are not directly applicable to this device's proven capabilities as per the provided information.


    Acceptance Criteria and Reported Device Performance

    The core acceptance criterion for this device appears to be non-inferiority to optical microscopy in terms of major discordance rates when comparing digital review to a main sign-out diagnosis. Additionally, precision (intra-system, inter-system repeatability, and inter-site reproducibility) is a key performance metric.

    Table 1: Overall Major Discordance Rate for MD and MO

    Metric | Acceptance Criteria (Implied Non-inferiority) | Reported Device Performance (Epredia E1000 Dx)
    MD Major Discordance Rate | N/A (compared to MO's performance) | 2.51% (95% CI: 2.26%; 2.79%)
    MO Major Discordance Rate | N/A (baseline for comparison) | 2.59% (95% CI: 2.29%; 2.82%)
    Difference MD - MO | Within an acceptable non-inferiority margin | -0.15% (95% CI: -0.40%, 0.41%)
    Study Met Acceptance Criteria | Yes, as defined in the protocol | Met
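As context for the difference figure above: a difference between two discordance rates and its confidence interval are commonly derived from the per-modality counts. The submission does not state which statistical method was used, so the Wald-style sketch below, with hypothetical counts chosen only to match the study's order of magnitude (3897 MD and 3881 MO reads), is illustrative rather than the sponsor's analysis:

```python
import math

def rate_diff_wald_ci(x1: int, n1: int, x2: int, n2: int, z: float = 1.96):
    """Difference of two proportions (p1 - p2) with a Wald 95% CI.

    x1/n1: discordant reads / total reads for modality 1 (e.g., digital, MD)
    x2/n2: discordant reads / total reads for modality 2 (e.g., optical, MO)
    """
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, (diff - z * se, diff + z * se)

# Hypothetical counts of the same order as the study's read totals
diff, (lo, hi) = rate_diff_wald_ci(98, 3897, 100, 3881)
print(f"MD - MO = {diff:.4%}, 95% CI ({lo:.4%}; {hi:.4%})")
```

A confidence interval for the difference that lies entirely below the pre-specified non-inferiority margin (and here also spans zero) is what supports a non-inferiority conclusion.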

    Precision Study Acceptance Criteria and Reported Performance

    Metric | Acceptance Criteria (Lower limit of 95% CI) | Reported Device Performance (Epredia E1000 Dx)
    Intra-System Repeatability (Average Positive Agreement) | > 85% | 96.9% (lower limit: 96.1%)
    Inter-System Repeatability (Average Positive Agreement) | > 85% | 95.1% (lower limit: 94.1%)
    Inter-Site Reproducibility (Average Positive Agreement) | > 85% | 95.4% (lower limit: 93.6%)
    All Precision Studies Met Acceptance Criteria | Yes | Met
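Average positive agreement, the precision metric in the table above, is conventionally computed from paired reads as 2a / (2a + b + c), where a is the count of pairs positive on both reads and b, c are the discordant pairs. The study's exact pairing scheme is not described in the document, so the following is a minimal illustrative sketch of the metric itself:

```python
def average_positive_agreement(pairs):
    """APA = 2a / (2a + b + c) over paired feature reads.

    pairs: iterable of (read1, read2) booleans for the same FOV/feature.
    a = both reads positive; b, c = exactly one read positive.
    Concordant-negative pairs do not enter the calculation.
    """
    a = sum(1 for r1, r2 in pairs if r1 and r2)
    b = sum(1 for r1, r2 in pairs if r1 and not r2)
    c = sum(1 for r1, r2 in pairs if not r1 and r2)
    denom = 2 * a + b + c
    return 2 * a / denom if denom else float("nan")

# 9 concordant-positive pairs and 1 discordant pair
apa = average_positive_agreement([(True, True)] * 9 + [(True, False)])  # 18/19 ≈ 0.947
```

Because concordant negatives are excluded, APA is stricter than raw percent agreement when positive findings are rare, which is why it is a common choice for repeatability studies of qualitative features.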

    Study Details

    2. Sample Size and Data Provenance:

    • Clinical Accuracy Study (Non-inferiority):

      • Test Set Sample Size: 3897 digital image reviews (MD) and 3881 optical microscope reviews (MO). The dataset comprises surgical pathology slides prepared from formalin-fixed paraffin embedded (FFPE) tissue.
      • Data Provenance: Not explicitly stated, but clinical studies for FDA clearance typically involve multiple institutions, often within the US or compliant with international standards, and are prospective in nature for device validation. The "multi-centered" description suggests multiple sites, implying diverse data. It is a "blinded, and randomized study," which are characteristics of a prospective study.
    • Precision Studies (Intra-system, Inter-system, Inter-site):

      • Test Set Sample Size: A "comprehensive set of clinical specimens with defined, clinically relevant histologic features from various organ systems" was used. Specific slide numbers or FOV counts are mentioned as pairwise agreements (e.g., 2,511 comparison pairs for Intra-system, Inter-system; 837 comparison pairs for Inter-site) rather than raw slide counts.
      • Data Provenance: Clinical specimens. Not specified directly, but likely from multiple sites for the reproducibility studies, suggesting a diverse, possibly prospective, collection.

    3. Number of Experts and Qualifications:

    • Clinical Accuracy Study: The study involved multiple pathologists who performed both digital and optical reviews. The exact number of pathologists is not specified beyond "pathologist" and "qualified pathologist." Their qualifications are generally implied by "qualified pathologist" and the context of a clinical study for an FDA-cleared device.
    • Precision Studies:
      • Intra-System Repeatability: "three different reading pathologists (RPs)."
      • Inter-System Repeatability: "Three reading pathologists."
      • Inter-Site Reproducibility: "three different reading pathologists, each located at one of three different sites."
      • Qualifications: Referred to as "reading pathologists," implying trained and qualified professionals experienced in interpreting pathology slides.

    4. Adjudication Method for the Test Set:

    • Clinical Accuracy Study: The ground truth was established by a "main sign-out diagnosis (SD)." This implies a definitive diagnosis made by a primary pathologist, which served as the reference standard. It's not specified if this "main sign-out diagnosis" itself involved an adjudication process, but it is presented as the final reference.
    • Precision Studies: For the precision studies, agreement rates were calculated based on the pathologists' readings of predetermined features on "fields of view (FOVs)." While individual "original assessment" seems to be the baseline for agreement in the intra-system study, the method to establish a single ground truth for all FOVs prior to the study (if any, beyond the initial "defined, clinically relevant histologic features") or an adjudication process during the study is not explicitly detailed. The agreement rates are pairwise comparisons between observers or system readings, not necessarily against a single adjudicated ground truth for each FOV.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

    • A comparative effectiveness study was indeed done, comparing human performance with the E1000 Dx Digital Pathology Solution (MD) to human performance with an optical microscope (MO).
    • Effect Size: The study demonstrated non-inferiority of digital review to optical microscopy. The "effect size" is captured by the difference in major discordance rates:
      • The estimated difference (MD - MO) was -0.15% (95% CI: -0.40%, 0.41%). This narrow confidence interval, inclusive of zero and generally close to zero, supports the non-inferiority claim, indicating no significant practical difference in major discordance rates between the two modalities when used by human readers.

    6. Standalone (Algorithm Only) Performance:

    • No, a standalone (algorithm only) performance study was not conducted or described. This device is a Whole Slide Imaging System intended as an aid to the pathologist for human review and interpretation, not an AI for automated diagnosis.

    7. Type of Ground Truth Used:

    • Clinical Accuracy Study: The ground truth used was the "main sign-out diagnosis (SD)." This is a form of expert consensus or definitive clinical diagnosis, widely accepted as the reference standard in pathology.
    • Precision Study: For the precision studies, "defined, clinically relevant histologic features" were used, and pathologists recorded the presence of these features. While not explicitly stated as a "ground truth" in the same way as the sign-out diagnosis, the 'original assessment' or 'presumed correct' feature presence often serves as a practical ground truth for repeatability and reproducibility calculations.

    8. Sample Size for the Training Set:

    • The document does not mention a training set as this device is not an AI/ML algorithm that learns from data. It's a hardware and software system designed to digitize and display images for human review. The "development processes" mentioned are for the hardware and software functionality, not for training a model.

    9. How the Ground Truth for the Training Set Was Established:

    • This question is not applicable as there is no training set for this device as described. Ground truth establishment mentioned in the document relates to clinical validation and precision, not AI model training.

    K Number
    K241232
    Date Cleared
    2025-01-24

    (267 days)

    Product Code
    Regulation Number
    864.3750
    Reference & Predicate Devices
    N/A
    Why did this record match?
    Panel :

    Pathology (PA)

    Intended Use

    Galen™ Second Read™ is a software only device intended to analyze scanned histopathology whole slide images (WSIs) from prostate core needle biopsies (PCNB) prepared from hematoxylin & eosin (H&E) stained formalin-fixed paraffin embedded (FFPE) tissue. The device is intended to identify cases initially diagnosed as benign for further review by a pathologist. If Galen™ Second Read™ detects tissue morphology suspicious for prostate adenocarcinoma (AdC), it provides case- and slide-level alerts (flags), which include a heatmap of tissue areas in the WSI that are likely to contain cancer.

    Galen™ Second Read™ is intended to be used with slide images digitized with Philips Ultra Fast Scanner and visualized using the Galen™ Second Read™ user interface.

    Galen™ Second Read™ outputs are not intended to be used on a standalone basis for diagnosis, to rule out prostatic AdC or to preclude pathological assessment of WSIs according to the standard of care.

    Device Description

    The Galen Second Read is an in vitro diagnostic medical device software, derived from a deterministic deep convolutional network that was developed with digitized WSIs of H&E-stained prostate core needle biopsy (PCNB) slides originating from formalin-fixed paraffin-embedded (FFPE) tissue sections that were initially diagnosed as benign by the pathologist.

    The Galen Second Read is cloud-hosted and utilizes external accessories [e.g., scanner and image management systems (IMS)] for automatic ingestion of the input. The device identifies WSIs that are more likely to contain prostatic adenocarcinoma (AdC). For each input WSI, the Galen Second Read automatically analyzes the WSI and outputs the following:

    • Binary classification of the likelihood (high/low) to contain AdC based on a predetermined threshold of the neural network output.
    • For slides classified with high likelihood to contain AdC, slide-level findings are flagged and visualized (AdC score and heatmap) for additional review by a pathologist alongside the WSI.
    • For slides classified as low likelihood to contain AdC, no additional output is available.

    Galen Second Read key functionalities include image upload and analysis, flag slides of high likelihood to contain AdC and display of all the WSIs uploaded to the system alongside their analysis results. Flagged findings constitute a recommendation for additional review by a pathologist.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study information for the Galen™ Second Read™ device, based on the provided text:

    Acceptance Criteria and Device Performance

    The document does not explicitly state pre-defined acceptance criteria with specific numerical targets. Instead, it presents the device's performance metrics from clinical studies. The implied acceptance criteria are that the device should improve the detection of prostatic adenocarcinoma (AdC) in initially benign cases when assisting pathologists.

    Here are the reported device performance metrics from the provided studies:

    Table 1: Device Performance (Clinical Study 1 - Standalone Performance)

    Parameter | Estimate | 95% CI | Context
    Slide-Level | | |
    Sensitivity | 81.0% | (69.2%; 92.9%) | Ability to correctly identify GT positive slides
    Specificity | 91.6% | (90.9%; 92.3%) | Ability to correctly identify GT negative slides
    Case-Level | | |
    Sensitivity | 80.8% | (74.1%; 87.6%) | Ability to correctly identify GT positive cases
    Specificity | 46.9% | (39.5%; 54.3%) | Ability to correctly identify GT negative cases

    Table 2: Device Performance (Clinical Study 2 - Human-in-the-Loop Performance)

    Parameter | With Galen Second Read AI Assistance | With Standard of Care (SoC) | Difference | 95% CI (Difference)
    Combined Pathologists (Overall) | | | |
    Sensitivity | 93.9% | 90.5% | 3.5% | (2.3%; 4.5%)
    Specificity | 87.9% | 91.1% | -3.2% | (-4.3%; -1.9%)
    For Slides Initially Assessed as Benign by Pathologists | | | |
    Sensitivity | 36.3% | 0% (SoC) | 36.3% | (28.0%; 45.5%)
    Specificity | 96.5% | 100% (SoC) | -3.5% (approx.) | (95.2%; 97.5%)
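The sensitivity and specificity estimates above follow from standard confusion-matrix counts. The submission does not state which interval method was used, so the Wilson score interval below is an illustrative choice, applied to hypothetical counts picked only to land near the study's slide-level point estimates:

```python
import math

def sens_spec_wilson(tp: int, fn: int, tn: int, fp: int, z: float = 1.96):
    """Sensitivity = TP/(TP+FN) and specificity = TN/(TN+FP),
    each with a Wilson score 95% CI."""
    def wilson(successes: int, n: int):
        p = successes / n
        denom = 1 + z * z / n
        center = (p + z * z / (2 * n)) / denom
        half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
        return p, (center - half, center + half)

    return wilson(tp, tp + fn), wilson(tn, tn + fp)

# Hypothetical counts: 34/42 positive slides flagged, 5500/6000 negatives cleared
(sens, sens_ci), (spec, spec_ci) = sens_spec_wilson(tp=34, fn=8, tn=5500, fp=500)
```

Note the asymmetry visible in the reported slide-level sensitivity CI: with only ~40-50 ground-truth positive slides, the interval is far wider than the specificity interval, which is computed over thousands of negative slides.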

    Study Information:

    1. Sample Size and Data Provenance

    Analytical Performance Studies (Precision and Localization):

    • Sample Size: Not explicitly stated as a single number for these studies. The tables show "n/N" values for positive and negative slides. For repeatability, there were 39 positive slides and 38 negative slides in each run (total for repeatability: 3 runs * 39 positive + 3 runs * 38 negative = 231 slide-reads). For reproducibility, it was also based on "39" and "38" slides for each scanner/operator combination.
    • Data Provenance: Retrospectively collected, de-identified slides.
    • Country of Origin: Not specified for these analytical studies.

    Clinical Performance Study 1 (Standalone Performance):

    • Sample Size: 347 cases (initially diagnosed as benign) with associated whole slide images (WSIs).
    • Data Provenance: Retrospectively collected samples.
    • Country of Origin: Three sites, including 2 US sites and 1 Outside the US (OUS) site.

    Clinical Performance Study 2 (Human-in-the-Loop Performance):

    • Sample Size: 772 cases/slides (376 negative cases and 396 positive cases).
    • Data Provenance: Retrospectively collected slides.
    • Country of Origin: Four sites, including 3 US sites and 1 OUS site.

    2. Number of Experts and Qualifications for Test Set Ground Truth

    Analytical Performance Studies:

    • Number of Experts: Not explicitly stated, but "GT determined as 'positive', or 'benign' by the GT pathologists" implies multiple pathologists.
    • Qualifications: "GT pathologists" - no specific experience level mentioned.

    Clinical Performance Study 1 (Standalone Performance):

    • Number of Experts: Two independent expert pathologists for initial review, with a third independent expert pathologist for tie-breaking.
    • Qualifications: "Independent expert pathologists" - no specific experience level mentioned.

    Clinical Performance Study 2 (Human-in-the-Loop Performance):

    • Number of Experts: Not explicitly detailed for the GT determination for this specific study, but it is likely consistent with Study 1's method, as it shares similar retrospective data characteristics.
    • Qualifications: Not explicitly detailed for the GT determination for this specific study.

    3. Adjudication Method for the Test Set

    Clinical Performance Study 1 (Standalone Performance):

    • Adjudication Method: 2+1 (Two independent expert pathologists, with a third independent expert pathologist to review disagreements and determine the majority rule for the final ground truth).

    Analytical Performance Studies & Clinical Performance Study 2:

    • Adjudication Method: Not explicitly detailed, but implied to be expert consensus.
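The 2+1 scheme described above reduces to a simple majority rule: two primary reads, with a third reader consulted only on disagreement. A minimal sketch, assuming binary labels (the reader labels and the tie-breaking call here are illustrative; the actual protocol details are not in the document):

```python
def adjudicate_2plus1(reader1: str, reader2: str, tiebreaker) -> str:
    """2+1 adjudication for ground-truth labeling.

    reader1/reader2: labels from the two primary readers, e.g. "positive"
    or "benign". tiebreaker: callable invoked only on disagreement,
    returning the third reader's label. For binary labels, the third
    read necessarily matches one of the first two, so it decides the
    majority and becomes the final ground truth.
    """
    if reader1 == reader2:
        return reader1
    return tiebreaker()

gt = adjudicate_2plus1("positive", "benign", lambda: "positive")  # → "positive"
```

Passing the tie-breaker as a callable mirrors the protocol's economy: the third expert reviews a case only when the first two disagree.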

    4. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Yes, a MRMC comparative effectiveness study was done (Clinical Performance Study 2).
    • Effect Size of Human Readers Improvement with AI vs. without AI Assistance:
      • Sensitivity: The combined sensitivity for pathologists improved by 3.5% (95% CI: 2.3%; 4.5%) with Galen Second Read assistance compared to SoC.
      • Specificity: The combined specificity for pathologists decreased by 3.2% (95% CI: -4.3%; -1.9%) with Galen Second Read assistance compared to SoC.
      • For slides initially assessed as benign by pathologists (the intended use population), sensitivity increased by 36.3% (from 0% in SoC to 36.3% with Galen Second Read). Specificity for these slides decreased by 3.5% (from 100% in SoC to 96.5% with Galen Second Read).

    5. Standalone Performance Study

    • Yes, a standalone (algorithm only without human-in-the-loop performance) was done (Clinical Performance Study 1).
    • The results are shown in "Table 1: Device Performance (Clinical Study 1 - Standalone Performance)" above.

    6. Type of Ground Truth Used

    • Expert Consensus: For both clinical performance studies, the ground truth for slides was established by expert pathologists via a consensus process (two independent experts, with a third for adjudication in cases of disagreement). The ground truth for cases was derived from the slide-level ground truth.

    7. Sample Size for the Training Set

    • Not provided in the document. The document describes the device as a "deterministic deep convolutional network that has been developed with digitized WSIs...". However, it does not state the specific sample size, origin, or characteristics of the training dataset.

    8. How Ground Truth for the Training Set Was Established

    • Not provided in the document. While it mentions the network was "developed with digitized WSIs," details on how the ground truth for these training images was established are not included in the provided text.
