Search Results
Found 10 results
510(k) Data Aggregation
(238 days)
QKQ
AISight Dx is a software only device intended for viewing and management of digital images of scanned surgical pathology slides prepared from formalin-fixed paraffin embedded (FFPE) tissue. It is an aid to the pathologist to review, interpret, and manage digital images of these slides for primary diagnosis. AISight Dx is not intended for use with frozen sections, cytology, or non-FFPE hematopathology specimens.
It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the quality of the images obtained and, where necessary, use conventional light microscopy review when making a diagnostic decision. AISight DX is intended to be used with interoperable displays, scanners and file formats, and web browsers that have been 510(k) cleared for use with the AISight Dx or 510(k)-cleared displays, 510(k)-cleared scanners and file formats, and web browsers that have been assessed in accordance with the Predetermined Change Control Plan (PCCP) for qualifying interoperable devices.
AISight Dx is a web-based, software-only device that is intended to aid pathology professionals in viewing, interpretation, and management of digital whole slide images (WSI) of scanned surgical pathology slides prepared from formalin-fixed, paraffin-embedded (FFPE) tissue obtained from Hamamatsu NanoZoomer S360MD Slide scanner or Leica Aperio GT 450 DX scanner (Table 1). It aids the pathologist in the review, interpretation, and management of pathology slide digital images used to generate a primary diagnosis.
Here's a breakdown of the acceptance criteria and the study details for the AISight Dx device, based on the provided FDA 510(k) Clearance Letter:
Acceptance Criteria and Reported Device Performance
Acceptance Criteria Category | Specific Acceptance Criteria | Reported Device Performance |
---|---|---|
Pixel-wise Comparison | Identical image reproduction (max pixelwise difference |
(90 days)
QKQ
For In Vitro Diagnostic Use Only
CaloPix is a software only device for viewing and management of digital images of scanned surgical pathology slides prepared from Formalin-Fixed Paraffin Embedded (FFPE) tissue.
CaloPix is intended for in vitro diagnostic use as an aid to the pathologist to review, interpret and manage these digital slide images for the purpose of primary diagnosis.
CaloPix is not intended for use with frozen sections, cytology, or non-FFPE hematopathology specimens.
It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the quality of the images obtained and the validity of the interpretation of images using CaloPix.
CaloPix is intended to be used with the interoperable components specified in the below Table:
Scanner Hardware | Scanner Output File Format | Interoperable Displays |
---|---|---|
Leica Aperio GT 450 DX scanner | SVS | Dell U3223QE |
Hamamatsu NanoZoomer S360MD Slide scanner | NDPI | JVC Kenwood JD-C240BN01A |
CaloPix, version 6.1.0 IVDUS, is a web-based software-only device that is intended to aid pathology professionals in viewing, interpreting and managing digital Whole Slide Images (WSI) of glass slides obtained from the Hamamatsu NanoZoomer S360MD slide scanner (NDPI file format) and viewed on the JVC Kenwood JD-C240BN01A display, as well as those obtained from the Leica Aperio GT 450 DX scanner (SVS file format) and viewed on the Dell U3223QE display.
CaloPix does not include any automated Image Analysis Applications that would constitute computer aided detection or diagnosis.
CaloPix is for viewing digital images of scanned glass slides that would otherwise be appropriate for manual visualization by conventional light microscopy.
As a whole, CaloPix is a pathology Image Management System (IMS) which brings case-centric digital pathology image management, collaboration, and image processing. CaloPix consists of:
- Integration with Laboratory Information Systems (LIS): Automatically retrieves from the LIS the patient data associated with cases, the scanned whole slide images, and other related medical images to be analyzed. The data stored in the database is automatically updated according to the interface protocol with the LIS.
- Database: After ingestion, scanned WSI can be organized in the CaloPix database, which consists of folders (cases) containing patient identification data and examination results from a LIS. Ingestion of the slides is performed through an integrated module that allows their automatic indexing based on patient data retrieved from the LIS. After ingestion, image files are stored in a CaloPix-specific file storage environment, which can be on premises or in the cloud.
- The CaloPix viewer component, which processes scanned whole slide images and includes functions for panning, zooming, screen capture, annotation, distance and surface measurement, and image registration. This viewer relies on image servers (IMGSRV) that extract image tiles from the whole slide image file and send these tiles to the CaloPix viewer for smooth and fast viewing (a minimal sketch of such tile extraction follows this list).
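The tile-serving design described above, in which an image server extracts regions from a pyramidal WSI file and streams them to a browser viewer, can be illustrated with a short sketch. This is not CaloPix code; it is a minimal example assuming the openslide-python library and a hypothetical NDPI file path.

```python
# Minimal sketch of tile extraction from a whole slide image, as an image
# server might do before sending tiles to a web viewer. Assumes the
# openslide-python package; the file path and coordinates are hypothetical.
import openslide

def read_tile(wsi_path, x, y, level, tile_size=256):
    """Return an RGB tile of tile_size x tile_size pixels, addressed by
    level-0 coordinates and a pyramid level."""
    slide = openslide.OpenSlide(wsi_path)
    try:
        # read_region takes level-0 coordinates, a pyramid level, and a size
        tile = slide.read_region((x, y), level, (tile_size, tile_size))
        return tile.convert("RGB")  # drop the alpha channel
    finally:
        slide.close()

# Hypothetical usage:
# tile = read_tile("case_001.ndpi", x=10240, y=20480, level=2)
# tile.save("tile.jpg")
```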
The FDA 510(k) clearance letter for CaloPix indicates that the device's performance was evaluated through a series of tests to demonstrate its safety and effectiveness. The primary study described in the provided document focuses on technical performance testing rather than a clinical multi-reader multi-case (MRMC) study.
Here's a breakdown of the requested information based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
Test | Acceptance Criteria | Reported Device Performance |
---|---|---|
Pixel-wise comparison (Image Reproduction Accuracy) | The 95th percentile of the pixel-wise color differences (CIEDE2000, ΔE00) in any image pair between CaloPix and the predicate device's IRMS must be less than 3 (ΔE00 |
(226 days)
QKQ
For In Vitro Diagnostic Use
Viewer+ is a software only device intended for viewing and management of digital images of scanned surgical pathology slides prepared from formalin-fixed paraffin embedded (FFPE) tissue. It is an aid to the pathologist to review, interpret and manage digital images of pathology slides for primary diagnosis. Viewer+ is not intended for use with frozen sections, cytology, or non-FFPE hematopathology specimens.
It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the quality of the images obtained and, where necessary, use conventional light microscopy review when making a diagnostic decision. Viewer+ is intended for use with Hamamatsu NanoZoomer S360MD Slide scanner and BARCO MDPC-8127 display.
Viewer+, version 1.0.1, is a web-based software device that facilitates the viewing and navigating of digitized pathology images of slides prepared from FFPE-tissue specimens acquired from Hamamatsu NanoZoomer S360MD Slide scanner and viewed on BARCO MDPC-8127 display. Viewer+ renders these digitized pathology images for review, management, and navigation for pathology primary diagnosis.
Viewer+ is operated as follows:
- Image acquisition is performed using the NanoZoomer S360MD Slide scanner according to its Instructions for Use. The operator performs quality control of the digital slides per the instructions of the NanoZoomer and lab specifications to determine if re-scans are necessary.
- Once image acquisition is complete and the image becomes available in the scanner's database file system, a separate medical image communications software (not part of the device) automatically uploads the image and its corresponding metadata to persistent cloud storage. Image and data integrity checks are performed during the upload to ensure data accuracy.
- The subject device enables the reading pathologist to open a patient case, view the images, and perform actions such as zooming, panning, measuring distances and areas, and annotating images as needed. After reviewing all images for a case, the pathologist will render a diagnosis.
Here's a breakdown of the acceptance criteria and the study details for the Viewer+ device, based on the provided FDA 510(k) summary:
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criterion | Reported Device Performance |
---|---|
Pixel-wise comparison (of images reproduced by Viewer+ and NZViewMD for the same file generated from the NanoZoomer S360MD Slide Scanner) | The 95th percentile of pixel-wise differences between Viewer+ and NZViewMD was less than 3 CIEDE2000, indicating their output images are pixel-wise identical and visually adequate. |
Turnaround time (for opening, panning, and zooming an image) | Found to be adequate for the intended use of the device. |
Measurement accuracy (using scanned images of biological slides) | Viewer+ was found to perform accurate measurements with respect to its intended use. |
Usability testing | Demonstrated that the subject device is safe and effective for the intended users, uses, and use environments. |
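The pixel-wise comparison criterion in the table above (95th percentile of per-pixel CIEDE2000 differences below 3) can be computed with standard imaging tools. The sketch below is illustrative only, not the manufacturer's test code; it assumes scikit-image and NumPy, and the screenshot file names are hypothetical.

```python
# Sketch of a 95th-percentile CIEDE2000 pixel-wise comparison between two
# screenshots of the same field of view rendered by two viewers.
import numpy as np
from skimage import io, color

def passes_pixelwise_criterion(img_path_a, img_path_b, threshold=3.0):
    a = io.imread(img_path_a)[..., :3]  # drop alpha channel if present
    b = io.imread(img_path_b)[..., :3]
    assert a.shape == b.shape, "images must cover the identical field of view"
    lab_a = color.rgb2lab(a)
    lab_b = color.rgb2lab(b)
    delta_e = color.deltaE_ciede2000(lab_a, lab_b)  # per-pixel ΔE00 map
    return float(np.percentile(delta_e, 95)) < threshold

# Hypothetical usage:
# print(passes_pixelwise_criterion("viewer_plus.png", "nzviewmd.png"))
```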
2. Sample Size Used for the Test Set and Data Provenance
The document does not explicitly state the specific sample size of images or cases used for the "Test Set" in the performance studies. It mentions "scanned images of the biological slides" for measurement accuracy and "images reproduced by Viewer+ and NZViewMD for the same file" for pixel-wise comparison.
The data provenance (country of origin, retrospective/prospective) is also not specified in the provided text.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
The document does not specify the number of experts or their qualifications used to establish ground truth for the test set. It mentions that the device is "an aid to the pathologist" and that "It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the quality of the images obtained and, where necessary, use conventional light microscopy review when making a diagnostic decision." However, this relates to the intended use and not a specific part of the performance testing described.
4. Adjudication Method for the Test Set
The document does not describe any specific adjudication method (e.g., 2+1, 3+1) used for establishing ground truth or evaluating the test set results. The pixel-wise comparison relies on quantitative color differences, and usability is assessed according to FDA guidance.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No multi-reader multi-case (MRMC) comparative effectiveness study comparing human readers with and without AI assistance is mentioned or implied in the provided text. The device is a "viewer" and not an AI-assisted diagnostic tool that would typically involve such a study.
6. Standalone Performance (Algorithm Only without Human-in-the-Loop)
The performance tests described (pixel-wise comparison, turnaround time, measurements) primarily relate to the technical functionality of the Viewer+ software itself, which is a viewing and management tool. These tests can be interpreted as standalone assessments of the software's performance in rendering images and providing basic functions like measurements. However, it's crucial to note that Viewer+ is an "aid to the pathologist" and not intended to provide automated diagnoses without human intervention. The "standalone" performance here refers to its core functionalities as a viewer, not as an autonomous diagnostic algorithm.
7. Type of Ground Truth Used
- Pixel-wise comparison: The ground truth for this test was the image reproduced by the predicate device's software (NZViewMD) for the same scanned file. The comparison was quantitative (CIEDE2000).
- Measurements: The ground truth would likely be established by known physical dimensions on the biological slides, verified by other means, or through precise calibration. The document states "Measurement accuracy has been verified using scanned images of the biological slides."
- Usability testing: The ground truth here is the fulfillment of usability requirements and user satisfaction/safety criteria, as assessed against FDA guidance.
8. Sample Size for the Training Set
The document does not mention the existence of a "training set" in the context of the Viewer+ device. This is a software-only device for viewing and managing images, not an AI/ML algorithm that typically requires a training set for model development.
9. How the Ground Truth for the Training Set Was Established
As no training set is mentioned for this device, information on how its ground truth was established is not applicable.
(248 days)
QKQ
For In Vitro Diagnostic Use
FullFocus is a software intended for viewing and management of digital images of scanned surgical pathology slides prepared from formalin-fixed paraffin embedded (FFPE) tissue. It is an aid to the pathologist to review, interpret and manage digital images of pathology slides for primary diagnosis. FullFocus is not intended for use with frozen sections, cytology, or non-FFPE hematopathology specimens.
It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the quality of the images obtained and, where necessary, use conventional light microscopy review when making a diagnostic decision. FullFocus is intended to be used with the interoperable components specified in the below Table.
Table: Interoperable components of FullFocus
Scanner Hardware | Scanner Output file format | Interoperable Displays |
---|---|---|
Leica Aperio GT 450 DX scanner | DICOM, SVS | Dell UP3017, Dell U3023E |
Hamamatsu NanoZoomer S360MD Slide Scanner | NDPI | Dell U3223QE, JVC-Kenwood JD-C240BN01A |
FullFocus, version 2.29, is a web-based software-only device that facilitates the viewing and navigating of digitized pathology images of slides prepared from FFPE-tissue specimens acquired from FDA cleared digital pathology scanners on FDA cleared displays. FullFocus renders these digitized pathology images for review, management and navigation for pathology primary diagnosis.
Image acquisition is performed using the intended scanner(s), with the operator conducting quality control on the digital WSI images according to the scanner's instructions for use and lab specifications to determine if re-scans are needed. Please see the Intended Use section and the tables below for specifics on scanners and respective displays for clinical use.
Once a whole slide image is acquired using the intended scanner and becomes available in the scanner's database file system, a separate medical image communications software (not part of the device), automatically uploads the image and corresponding metadata to persistent cloud storage. Integrity checks are performed during the upload to ensure data accuracy.
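The summary notes that integrity checks are performed during upload but does not describe the mechanism. One common approach, sketched here purely as an assumption, is to compare a cryptographic digest computed before and after transfer.

```python
# Sketch of an upload integrity check via SHA-256 digest comparison.
# The mechanism is an assumption for illustration; file paths are hypothetical.
import hashlib

def sha256_of_file(path, chunk_size=1024 * 1024):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# local = sha256_of_file("slide_001.svs")                # digest before upload
# remote = sha256_of_file("/mnt/cloud/slide_001.svs")    # digest of uploaded copy
# assert local == remote, "upload corrupted the image file"
```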
The subject device enables the reading pathologist to open a patient case, view the images, and perform actions such as zooming, panning, measuring distances and annotating images as needed. After reviewing all images for a case, the pathologist will render a diagnosis.
FullFocus operates with and is validated for use with the FDA cleared components specified in the tables below:
Scanner Hardware | Scanner Output file format | Interoperable Displays |
---|---|---|
Leica Aperio GT 450 DX scanner | DICOM, SVS | Dell UP3017, Dell U3023E |
Hamamatsu NanoZoomer S360MD Slide Scanner | NDPI | Dell U3223QE, JVC-Kenwood JD-C240BN01A |
Table 1: Interoperable Components Intended for Use with FullFocus
FullFocus version 2.29 was not validated for the use with images generated with Philips Ultra Fast Scanner.
Table 2: Computer Environment/System Requirements for the Use of FullFocus
Environment | Component | Minimum Requirements |
---|---|---|
Hardware | Processor | 1 CPU, 2 cores, 1.6 GHz |
Hardware | Memory | 4 GB RAM |
Hardware | Network | Bandwidth of 10 Mbps |
Software | Operating System | Windows or macOS |
Software | Browser | Google Chrome (129.0.6668.90 or higher) or Microsoft Edge (129.0.2792.79 or higher) |
Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criterion | Reported Device Performance |
---|---|
Pixel-wise comparison: The 95th percentile of pixel-wise color differences in any image pair across all required screenshots must be less than 3.0 ΔE00 when compared to comparator (predicate device's Image Review Manipulation Software - IRMS) for identical image reproduction. This indicates visual adequacy for human readers. | The 95th percentile of pixel-wise differences between FullFocus and the comparators were less than 3 CIEDE2000, indicating that their output images can be considered to be pixel-wise identical. FullFocus has been found to visually adequately reproduce digital pathology images to human readers with respect to its intended use. |
Turnaround time (Case selection): It should not take longer than 10 seconds until the image is fully loaded when selecting a case. | System requirements fulfilled: Not longer than 10 seconds until the image is fully loaded. |
Turnaround time (Panning/Zooming): It shall not take longer than 7 seconds until the image is fully loaded when panning and zooming the image. | System requirements fulfilled: Not longer than 7 seconds until the image is fully loaded. |
Measurement Accuracy (Straight Line): The 1mm measured line should match the reference value exactly 1mm ± 0mm. | All straight-line measurements compared to the reference were exactly 1mm, with no error. |
Measurement Accuracy (Area): The measured area must match the reference area exactly 0.2 x 0.2 mm for a total of 0.04 mm² ± 0 mm². | All area measurements compared to the reference value were exactly 0.04mm², with no error. |
Measurement Accuracy (Scalebar): 2mm scalebar is accurate. | All Tests Passed. |
Human Factors Testing: (Implied from previous clearance) Safe and effective use by representative users for critical user tasks and use scenarios. | A human factors study designed around critical user tasks and use scenarios performed by representative users was conducted for the previously cleared FullFocus, version 1.2.1, in K201005, per FDA guidance "Applying Human Factors and Usability Engineering to Medical Devices (2016)". Human factors validation testing is not necessary because the user interface has not changed. |
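The straight-line and area criteria in the table above reduce to converting pixel coordinates into physical units using the scanner's microns-per-pixel (MPP) value. The sketch below illustrates that conversion; the MPP value is an assumption chosen for illustration, since in practice it comes from the WSI metadata.

```python
# Sketch of mapping on-screen pixel measurements to physical units and checking
# them against the stated reference values (1 mm line, 0.04 mm² area).
MPP = 0.25  # assumed microns per pixel at the scanned magnification (illustrative)

def line_length_mm(px_a, px_b, mpp=MPP):
    dx = (px_b[0] - px_a[0]) * mpp
    dy = (px_b[1] - px_a[1]) * mpp
    return ((dx ** 2 + dy ** 2) ** 0.5) / 1000.0  # microns -> mm

def rect_area_mm2(width_px, height_px, mpp=MPP):
    return (width_px * mpp / 1000.0) * (height_px * mpp / 1000.0)

# With MPP = 0.25 µm/px, a 4000 px line is exactly 1 mm and an 800 x 800 px
# rectangle is exactly 0.04 mm², matching the stated reference values.
assert abs(line_length_mm((0, 0), (4000, 0)) - 1.0) < 1e-9
assert abs(rect_area_mm2(800, 800) - 0.04) < 1e-9
```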
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Pixel-wise Comparison: 30 formalin-fixed paraffin-embedded (FFPE) tissue glass slides, representing a range of human anatomical sites.
- Sample Size for Turnaround Time & Measurements: Not explicitly stated as a number of distinct cases or images beyond the 30 slides used for pixel-wise comparison. For measurements, a "1 Calibration Slide" was used per test.
- Data Provenance: The text does not explicitly state the country of origin. The slides are described as "representing a range of human anatomical sites," implying a diverse set of real-world pathology samples. The study appears to be retrospective, since previously prepared glass slides were used ("30 formalin-fixed paraffin-embedded (FFPE) tissue glass slides... were scanned").
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications
- Pixel-wise Comparison: "For each WSI, three regions of interest (ROIs) were identified to highlight relevant pathological features, as verified by a pathologist."
- Number of Experts: At least one pathologist.
- Qualifications: "A pathologist" (specific qualifications like years of experience are not provided).
- Measurements: No expert was explicitly mentioned for establishing ground truth for measurements; it relies on a "test image containing objects with known sizes" (calibration slide) and "reference value."
4. Adjudication Method for the Test Set
- The text does not mention an explicit adjudication method (like 2+1 or 3+1 consensus) for the pixel-wise comparison or measurement accuracy. For the pixel-wise comparison, ROIs were "verified by a pathologist," suggesting a single-expert verification rather than a consensus process.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
- No, an MRMC comparative effectiveness study was not done in this context. The study focused on demonstrating identical image reproduction (pixel-wise comparison) and technical performance (turnaround time, measurement accuracy) of the FullFocus viewer against predicate devices' viewing components. It did not directly assess the improvement in human reader performance (e.g., diagnostic accuracy or efficiency) with or without AI assistance. The device is a "viewer and management software," not an AI diagnostic aid in the sense of providing specific findings or interpretations.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
- Yes, a standalone "algorithm only" performance was effectively done for the technical aspects. The pixel-wise comparison directly compares the image rendering of FullFocus with the predicate viewer's rendering without human intervention in the comparison process itself (though a pathologist verified ROIs). Similarly, turnaround times and measurement accuracy are intrinsic technical performances of the software.
7. The Type of Ground Truth Used
- Pixel-wise Comparison: The ground truth for this test was the digital image data as rendered by the predicate device's IRMS. The goal was to show that FullFocus reproduces the same image data. The "relevant pathological features" within ROIs were "verified by a pathologist" which served as a reference for what areas to test, not necessarily a diagnostic ground truth for the device's output.
- Measurements: The ground truth was based on known physical dimensions within a calibration slide and corresponding "reference values."
8. The Sample Size for the Training Set
- The provided text does not mention a training set. This is expected because FullFocus is a viewer and management software for digital pathology images, not an AI or machine learning algorithm that is "trained" on data to make predictions or assist in diagnosis directly. Its core function is to display existing image data accurately and efficiently.
9. How the Ground Truth for the Training Set Was Established
- As no training set is mentioned (since it's a viewer software), this question is not applicable based on the provided text.
(269 days)
QKQ
For In Vitro Diagnostic Use
MetaLite DX Digital Pathology Software is a software only device intended for viewing and management of digital images of scanned surgical pathology slides prepared from formalin-fixed paraffin embedded (FFPE) tissue for the purposes of pathology primary diagnosis. It is an aid to the pathologist to review, interpret and manage digital images of pathology slides.
MetaLite DX Digital Pathology Software is not intended for use with frozen section, cytology, or non-FFPE hematopathology specimens.
It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the quality of the images obtained and, where necessary, use conventional light microscopy review when making a diagnostic decision. MetaLite DX Digital Pathology Software is intended for use with Philips Ultra Fast Scanner and the Barco MDPC-8127 display.
MetaLite DX Digital Pathology Software, Model MLDXUS, version 1.2.1 is software designed for viewing digital pathology images of glass slides from the Philips IntelliSite Pathology Solution Ultra-Fast Scanner (PIPS-UFS), version 1.8.4 on Barco MDPC-8127 display.
MetaLite DX Digital Pathology Software is operated as follows:
Before scanning the slide on the PIPS-UFS, the technician performs quality control on the tissue of interest. The images captured by the PIPS-UFS are compressed using Philips' proprietary iSyntax format and are transmitted to the Philips Image Management System (IMS).
(1) After the Whole Slide Images (WSIs) are successfully acquired using the PIPS-UFS, the WSIs are stored in the local file system. A qualified pathologist will upload compatible iSyntax format digital pathology images, and the software will load them to the "Main Viewer" area of the graphical interface for the pathologist to view.
(2) Once properly loaded, the pathologist will use the inherent features of the device (including tools that allow for adjusting the position and viewing angle of the image, measuring lengths between two coordinates, and adding annotations to specific regional areas).
(3) After viewing all images for a patient (case), the pathologist will make a diagnosis. The diagnosis will be documented in another system, e.g., a Laboratory Information System (LIS).
The software has various features such as zoom-in and zoom-out functions, scale display, thumbnail view, measurement function, annotation function, and panning function to help pathologists interpret, diagnose and manage digital whole slide images. The MetaLite DX Digital Pathology Software is validated for use with the components specified in the tables below.
Let's break down the acceptance criteria and the study proving the device meets them based on the provided text.
Based on the provided text, the "MetaLite DX Digital Pathology Software" (MLDXUS) is a software-only device intended for viewing and managing digital images of scanned surgical pathology slides for primary diagnosis. The performance testing section describes the studies conducted to demonstrate its safety and effectiveness.
Here's the information organized as requested:
1. Table of Acceptance Criteria and Reported Device Performance
Test | Acceptance Criteria (Implied) | Reported Device Performance |
---|---|---|
Pixel-wise comparison | The output images of the MetaLite DX Digital Pathology Software should be visually identical to those produced by the predicate device (PIPS IMS) for the same file. | The 95th percentile of pixel-wise differences between MetaLite DX Digital Pathology Software and PIPS IMS was less than 3 CIEDE2000, indicating that their output images are pixel-wise identical and visually adequate. |
Turnaround time | Opening, panning, and zooming an image should be within an adequate timeframe for intended use (implicitly, within 5 seconds based on the outcome). | The turnaround time for opening, panning, and zooming an image is within 5 seconds. This was determined and found to be adequate for the intended use. |
Measurements | The software should perform accurate measurements. | Measurement accuracy was verified using a scanned image of a calibration scale slide. MetaLite DX Digital Pathology Software was found to perform accurate measurements with respect to its intended use. (Note: Predicate device also measures area, but this device only explicitly states distance measurement in the comparison table, although the performance statement is general for "measurements"). |
Usability testing | The device should be safe and effective for its intended users, uses, and use environments. | Conducted per FDA guidance "Applying Human Factors and Usability Engineering to Medical Devices (2016)". The test result demonstrated that the subject device has been found to be safe and effective for the intended users, uses, and use environments. |
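The turnaround-time criterion reported in the table above (opening, panning, and zooming within 5 seconds) lends itself to a simple automated check. The sketch below is illustrative only; the viewer-driving callable is a hypothetical placeholder, and in practice a browser-automation tool would open the case and wait until all tiles have rendered.

```python
# Sketch of a turnaround-time check for a viewer action.
import time

def measure_turnaround(action, limit_seconds):
    """Run `action` (a callable that returns once the image is fully loaded)
    and report whether it completed within `limit_seconds`."""
    start = time.perf_counter()
    action()
    elapsed = time.perf_counter() - start
    return elapsed, elapsed <= limit_seconds

# Hypothetical usage (open_case is a placeholder, not a real API):
# elapsed, ok = measure_turnaround(lambda: open_case("CASE-0001"), limit_seconds=5.0)
# print(f"case opened in {elapsed:.2f}s, within limit: {ok}")
```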
2. Sample Size Used for the Test Set and Data Provenance
The document does not specify the sample size for the test sets used in the pixel-wise comparison, turnaround time, measurement accuracy, or usability testing.
The document does not specify the provenance of the data (e.g., country of origin, retrospective or prospective). It only states that the images used were "iSyntax file generated from UFS 1.8.4," which refers to the Philips Ultra Fast Scanner, a compatible component.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
The document does not explicitly state the number of experts used or their specific qualifications for establishing ground truth for any of the performance tests.
- For the pixel-wise comparison, the ground truth seems to be the output of the predicate device (PIPS IMS) for the same initial image file, rather than expert judgment on clinical images.
- For turnaround time and measurements, the ground truth would be objectively measurable (time, known distances on a calibration slide).
- For usability testing, ground truth typically involves observing user interactions and identifying errors or difficulties, rather than a clinical ground truth established by experts.
4. Adjudication Method for the Test Set
The document does not describe any adjudication method (e.g., 2+1, 3+1, none) for the test set.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
A Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not explicitly conducted for the MetaLite DX Digital Pathology Software as described in this document. The studies performed focus on technical performance (pixel comparison, speed, measurement accuracy) and usability of the software as a viewing and management tool, not on its impact on human reader diagnostic accuracy or efficiency with and without AI assistance. The device is purely a viewer/manager and does not incorporate AI for diagnosis.
6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study
The studies described are primarily standalone in the sense that they evaluate the software's technical performance attributes (pixel reproduction, speed, measurement accuracy) independent of a pathologist's diagnostic performance. The usability test involved human interaction but assessed the usability of the software interface, not the diagnostic accuracy of the human using it. The device itself is described as "software only" and an "aid to the pathologist," rather than an AI diagnostic algorithm.
7. Type of Ground Truth Used
- Pixel-wise comparison: The ground truth appears to be the output of a reference system (PIPS IMS) for the same iSyntax file. This is a technical ground truth based on image fidelity.
- Turnaround time: The ground truth is objective measurement of time.
- Measurements: The ground truth is objective, known distances on a calibration scale slide.
- Usability testing: The ground truth is based on observed user interactions, identification of use errors, and compliance with human factors principles, as guided by FDA guidelines.
There is no mention of clinical ground truth (e.g., expert consensus on pathology diagnoses, or outcomes data) being used for these particular performance tests, as the device is not a diagnostic AI algorithm.
8. Sample Size for the Training Set
The document does not provide any information regarding a training set size. This is consistent with the device being a viewer and manager of digital images, not an AI algorithm that requires a training set for model development.
9. How the Ground Truth for the Training Set Was Established
As no training set is described for this type of device, this information is not applicable and not provided in the document.
(246 days)
QKQ
For In Vitro Diagnostic Use
aetherSlide is a software-only device intended for viewing and managing digital images of scanned surgical pathology slides prepared from formalin-fixed paraffin embedded (FFPE) tissue. It is an aid to the pathologist to review, interpret and manage digital images of pathology slides for primary diagnosis. aetherSlide is not intended for use with frozen section, cytology, or non-FFPE hematopathology specimens.
It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the quality of the images obtained and, where necessary, use conventional light microscopy review when making a diagnostic decision. aetherSlide is intended for use with the Philips Ultra Fast Scanner (UFS) and the Philips PS27QHDCR monitor.
aetherSlide, version 101692 is a web-based, software only device that is intended to aid pathology professionals in viewing, interpretation and management of digital whole slide images (WSI) of scanned surgical pathology slides prepared from formalin-fixed paraffin-embedded (FFPE) tissue obtained from Philips Ultra Fast Scanner (UFS). It aids the pathologist in the review, interpretation, and management of pathology slide digital images used to generate a primary diagnosis.
Here's a breakdown of the acceptance criteria and study information for the aetherSlide device, based on the provided FDA 510(k) summary:
Acceptance Criteria and Reported Device Performance
Acceptance Criteria | Reported Device Performance |
---|---|
Pixel-wise comparison (Image Fidelity): Images reproduced by aetherSlide should be visually adequate. | The 95th percentile of pixel-wise differences between aetherSlide and PIPS IMS was less than 3 CIEDE2000, indicating pixel-wise identical output images, and color images were visually adequate. |
Turnaround time - Case Selection: Not longer than 10 seconds until the image is fully loaded after selecting a case. | Turnaround times for opening an image were determined and found to be adequate for the intended use. |
Turnaround time - Panning: Not longer than 7 seconds until the image is fully loaded when panning one-quarter of the monitor. | Turnaround times for panning were determined and found to be adequate for the intended use. |
Measurements (Accuracy): Perform accurate measurements. | Measurement accuracy was verified using a scanned image of a grid micrometer and found to be accurate for the intended use. |
Usability: Safe and effective for intended users, uses, and use environments. | The usability test demonstrated the subject device is safe and effective for the intended users, uses, and use environments. |
Study Details
-
Sample size used for the test set and the data provenance:
- Test Set Sample Size: Not explicitly stated. The pixel-wise comparison mentions "the same iSyntax file" but does not quantify the number of such files or cases used. Similarly, for turnaround time, measurements, and usability, specific sample sizes are not provided.
- Data Provenance: Not explicitly stated. The document mentions the device is intended for use with "scanned surgical pathology slides prepared from formalin-fixed paraffin embedded (FFPE) tissue" obtained from the Philips Ultra Fast Scanner (UFS). This implies the data would be clinical pathology slides, but their origin (country, retrospective/prospective collection) is not specified.
-
Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- This information is not provided in the document. The studies described are primarily technical performance assessments (pixel comparison, turnaround time, measurement accuracy, usability) rather than diagnostic accuracy studies requiring expert consensus as ground truth. The "ground truth" for these tests would be the reference values or expected outcomes based on the technical specifications of the images or system.
-
Adjudication method (e.g. 2+1, 3+1, none) for the test set:
- This information is not provided as the studies are focused on technical performance rather than diagnostic outcomes.
-
If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance:
- No, an MRMC comparative effectiveness study was not done. The aetherSlide is described as a "software-only device intended for viewing and managing digital images... It is an aid to the pathologist to review, interpret and manage digital images...". It is a WSI viewer and manager, not an AI-assisted diagnostic tool designed to improve human reader performance (e.g., by detecting abnormalities). Therefore, this type of study would not be applicable for this device.
-
If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
- Yes, standalone technical performance assessments were done. The pixel-wise comparison, turnaround time tests, and measurement accuracy tests are evaluations of the algorithm's direct performance in rendering images, speed, and accuracy, independent of a pathologist's diagnostic performance. The usability testing, while involving human users, focuses on the system's interface and interaction, not diagnostic accuracy.
-
The type of ground truth used (expert consensus, pathology, outcomes data, etc):
- Technical Reference Standards:
- Pixel-wise comparison: The ground truth was the image reproduced by the predicate device (Philips IntelliSite Pathology Solution - PIPS IMS) for the same iSyntax file.
- Turnaround time: The ground truth was the pre-defined target times (10 seconds for case selection, 7 seconds for panning).
- Measurements: The ground truth was the known dimensions on a "scanned image of the grid micrometer."
- Usability: The ground truth would be the safety and effectiveness criteria outlined in the usability engineering guidance (e.g., task completion rates, error rates, user feedback conforming to safety and effectiveness).
- Technical Reference Standards:
-
The sample size for the training set:
- The document does not mention a training set size. The aetherSlide is a viewer and management software, not a machine learning model that typically requires a dedicated training set. Its functionality is based on rendering and interacting with existing digital slide images.
-
How the ground truth for the training set was established:
- As there is no mention of a machine learning component or a training set for diagnostic purposes, the concept of establishing ground truth for a training set is not applicable to the information provided.
(377 days)
QKQ
Novo is a software only device intended for viewing and management of digital images of scanned surgical pathology slides prepared from formalin-fixed paraffin embedded (FFPE) tissue. It is an aid to the pathologist to review, interpret, and manage digital images of these slides for primary diagnosis. Novo is not intended for use with frozen sections, cytology, or non- FFPE hematopathology specimens.
It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the quality of the images obtained and, where necessary, use conventional light microscopy review when making a diagnostic decision. Novo is intended for use with the Philips Ultra Fast Scanner and the Barco PP27QHD or Philips PS27QHDCR display.
The PathAI Novo device is a web-based software-only device that is intended to aid pathology professionals in the viewing, interpretation, and management of digital whole slide images (WSIs) of scanned surgical pathology slides prepared from formalin-fixed paraffin embedded (FFPE) tissue using the Philips IntelliSite Pathology Solution (PIPS) Ultra Fast Scanner (UFS).
The proposed device is typically operated as follows:
- A user prepares and scans slides and reviews the slide quality in accordance with the PIPS UFS IFU and standard lab procedures. The Novo device workflow is initiated when a user uploads WSIs from the local file system to the cloud storage using Novo.
- After uploading WSIs to cloud storage using Novo, a user builds a patient accession using the patient's medical record number (MRN), date of birth (DOB) and accession ID to support linkage of one or more slides from a single procedure using patient identifiers in Novo (a minimal data-structure sketch follows below).
- A pathologist uses the slide viewer to perform their primary diagnosis workflow including zooming and panning images.
After viewing all images belonging to a particular accession, the pathologist will make a diagnosis.
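The accession-building step described above amounts to grouping one or more uploaded WSIs under patient identifiers. The sketch below illustrates one way such a record could be modeled; the field names and storage URIs are hypothetical and do not reflect Novo's actual schema.

```python
# Illustrative data structure for linking slides from one procedure to a
# patient accession. Field names are assumptions, not the device's schema.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Accession:
    mrn: str                                              # medical record number
    dob: date                                             # patient date of birth
    accession_id: str                                     # identifier for the procedure
    slide_uris: list[str] = field(default_factory=list)   # uploaded WSI locations

    def add_slide(self, uri: str) -> None:
        self.slide_uris.append(uri)

# Hypothetical usage:
# acc = Accession(mrn="123456", dob=date(1960, 1, 1), accession_id="S24-0042")
# acc.add_slide("s3://bucket/S24-0042_A1_HE.isyntax")
```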
The provided text describes the regulatory clearance for the "Novo" device, a software-only whole slide imaging system, and references a clinical study conducted to establish its substantial equivalence to a predicate device. However, the document primarily focuses on regulatory approval and does not contain the detailed acceptance criteria table or comprehensive study breakdown as requested in the prompt.
Therefore, the following response will extract what is available and highlight where information is missing based on your request.
Acceptance Criteria and Device Performance for Novo (as described by available information)
Based on the provided FDA 510(k) summary, details regarding specific quantifiable acceptance criteria and performance beyond a non-inferiority finding are limited. The document focuses on demonstrating substantial equivalence to a predicate device (Philips IntelliSite Pathology Solution - PIPS).
Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria Category | Specific Metric (Inferred/Stated) | Acceptance Threshold (Inferred/Stated) | Reported Device Performance |
---|---|---|---|
Clinical Equivalence | Major Discordance Rate | Upper limit of 95% CI for difference in major discordance rates |
(349 days)
QKQ
Dynamyx Digital Pathology Software is intended for viewing and management of digital images of scanned surgical pathology slides prepared from formalin-fixed, paraffin-embedded (FFPE) tissue. It is an aid to the pathologist to review and interpret these digital images for the purposes of primary diagnosis.
Dynamyx Digital Pathology Software is not intended for use with frozen section, cytology, or non-FFPE hematopathology specimens.
It is the responsibility of the pathologist to employ appropriate procedures and safeguards to assure the validity of the interpretation of images using Dynamyx Digital Pathology Software.
The Dynamyx Digital Pathology Software consists of the Installed Pathologist Client and the Pathologist Workstation Web Client. The Installed Pathologist Client is intended for use with Leica's Aperio AT2 DX scanner and Dell MR2416 monitor as well as Philips' Ultra Fast Scanner and Philips PP27QHD monitor. The Pathologist Workstation Web Client is intended for use with Philips' Ultra Fast Scanner and Philips PP27QHD monitor.
Dynamyx Digital Pathology Software is a client-server software device used for importing, displaying, navigating, and annotating whole slide images obtained from the Leica Aperio AT2 DX scanner or the Philips Ultra Fast Scanner.
Whole slide images are created by scanning glass microscope slides using a digital slide scanner which are then imported into the Dynamyx Digital Archive server. Dynamyx uses the image decoding libraries licensed by Leica and Philips for the native images. Dynamyx then uses lossless compression to send the images to the Dynamyx viewer.
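The summary states that Dynamyx sends images to its viewer using lossless compression but does not name the codec. Purely as an illustration of what "lossless" means here, the sketch below round-trips a synthetic tile through PNG (a lossless format) with Pillow and verifies that no pixel values change.

```python
# Illustrative lossless round trip of an image tile through PNG compression.
import io
import numpy as np
from PIL import Image

# Build a synthetic 256x256 RGB tile (horizontal and vertical gradients).
gradient = np.tile(np.arange(256, dtype=np.uint8), (256, 1))
tile = np.stack([gradient, gradient.T, np.full_like(gradient, 128)], axis=-1)

buf = io.BytesIO()
Image.fromarray(tile).save(buf, format="PNG")            # PNG encoding is lossless
decoded = np.asarray(Image.open(io.BytesIO(buf.getvalue())))

assert np.array_equal(tile, decoded), "lossless round trip must be bit-exact"
print(f"compressed {tile.nbytes} bytes down to {len(buf.getvalue())} bytes")
```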
Note that Dynamyx has two different applications for two different inputs as specified below.
- The Dynamyx Web Application running in the Chrome browser can only display WSI from the Philips Ultra Fast Scanner.
- The Dynamyx Installed Client Application can display WSI from both the Leica AT2 DX Scanner and the Philips Ultra Fast Scanner.
Whole slide image files are viewed in the Dynamyx image viewer window by histologists and by pathologists who can also navigate (pan and zoom) and annotate the images.
Dynamyx incorporates typical histology/pathology workflow and is operated as follows:
- Dynamyx receives whole slide images from the scanner as specified above and extracts a copy of the images' metadata. The unaltered images are then sent to the external image storage (Digital Archive). A copy of the image metadata (e.g. the pixel size) is stored in the subject device's database to increase the operational performance (e.g. response times) of Dynamyx.
- Depending upon a laboratory's workflow, whole slide images may be reviewed first by histologists to confirm image quality and initiate any slide rescans as necessary prior to being viewed by pathologists. The digital slide review QC status determined by the histologist indicates which slides have been reviewed and approved. The QC status is available to the reading pathologist.
- The reading pathologist selects a patient case from a selected worklist within Dynamyx, whereby the case images are retrieved from the digital archive.
- The reading pathologist uses Dynamyx to view, navigate, annotate, and interpret the digital images. The pathologist can perform the following actions on the displayed image:
  - a. Zoom and pan the image at will;
  - b. Adjust the apparent observed image magnification level;
  - c. Measure distances and areas;
  - d. Annotate images and cases;
- The above steps are repeated as required.
After viewing all images, the pathologist will make a diagnosis which is documented in a laboratory information system.
There is no information regarding acceptance criteria and a study proving the device meets it for an AI/ML clinical decision support function in the provided text. The document refers to the Dynamyx Digital Pathology Software, which is a viewing and management software for digital images of pathology slides, not an AI/ML device.
The document discusses non-clinical performance testing for image reproduction, turnaround time, measurement accuracy, and usability, demonstrating substantial equivalence to predicate devices, but this is for the core functionality of a digital pathology viewer, not an AI feature.
Therefore, I cannot provide the requested information for acceptance criteria and a study proving an AI device meets them.
(90 days)
QKQ
FullFocus™ is a software only device intended for viewing and management of digital images of scanned surgical pathology slides prepared from formalin-fixed paraffin embedded (FFPE) tissue. It is an aid to the pathologist to review, interpret, and manage digital images of pathology slides for primary diagnosis. FullFocus is not intended for use with frozen sections, cytology, or non-FFPE hematopathology specimens.
It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the quality of the images obtained and, where necessary, use conventional light microscopy review when making a diagnostic decision. FullFocus is intended for use with the Philips Ultra Fast Scanner and monitor displays validated with verified test methods to meet required performance characteristics.
FullFocus is a web-based software-only device for viewing and manipulating digital pathology images of glass slides obtained from the Philips IntelliSite Pathology Solution (PIPS) Ultra Fast Scanner (UFS) on monitor displays that are validated with verified test methods to meet required performance characteristics. FullFocus reproduces the whole slide images and is an aid to the pathologist to review, interpret and manage digital images of pathology slides for primary diagnosis.
Here's a breakdown of the acceptance criteria and the study information for the FullFocus device, based on the provided FDA 510(k) summary:
Acceptance Criteria and Reported Device Performance
Acceptance Criteria | Reported Device Performance |
---|---|
Pixel-wise comparison | Visually adequately reproduces digital pathology images to human readers with respect to its intended use (compared to PIPS, including zooming and panning). |
Turnaround time (Case selection) | Not longer than 10 seconds until the image is fully loaded. |
Turnaround time (Panning) | Not longer than 7 seconds until the image is fully loaded (for panning one quarter of the monitor). |
Measurements Accuracy | Performs accurate measurements (verified using a test image containing objects with known sizes). |
Human factors testing | Found to be safe and effective for the intended users, uses, and use environments; user interface is intuitive, safe, and effective for the range of intended users. |
Further Study Information
-
Sample size used for the test set and data provenance:
- Clinical Study: No clinical study involving diagnosis by human readers for diagnostic accuracy comparison is mentioned in this document. The "studies" described are non-clinical technical performance assessments and human factors testing.
- Pixel-wise comparison: The document doesn't specify a sample size for slides or images, only that it "was conducted to compare color images reproduced by FullFocus and PIPS IMS." Data provenance is not mentioned, but it's implied the images were generated by a Philips Ultra Fast Scanner, given the device's compatibility and comparison to the PIPS IMS.
- Measurements: "a test image containing objects with known sizes" was used. Specific sample size is not indicated.
- Human Factors Testing: "Task-based usability tests" were performed. The number of participants (intended users) is not specified.
-
Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- For the pixel-wise comparison, the "ground truth" was essentially the visual fidelity to images produced by the predicate device (PIPS IMS). The "human readers" mentioned in the performance description are not described as experts establishing ground truth, but rather as observers confirming visual adequacy. No specific number or qualifications of these readers are given.
- For measurements, the ground truth was the "known sizes" of objects within a test image. This would not require expert pathologists to establish.
- For human factors testing, the "ground truth" relates to usability and safety, which is assessed directly by intended users during task performance, rather than established by an "expert" in the diagnostic sense.
-
Adjudication method for the test set:
- Not applicable as there was no study described that involved diagnostic interpretations requiring adjudication.
-
If a multi reader multi case (MRMC) comparative effectiveness study was done, and the effect size of how much human readers improve with AI vs without AI assistance:
- No. The document explicitly states that FullFocus is a "software only device intended for viewing and management of digital images... It is an aid to the pathologist to review, interpret, and manage digital images...". It is a viewer, not an AI-assisted diagnostic tool. Therefore, an MRMC study comparing human readers with and without AI assistance was not performed or described.
-
If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:
- No. FullFocus is a viewing and management system for pathologists, not a standalone diagnostic algorithm. Its function is to facilitate human review.
-
The type of ground truth used:
- For Pixel-wise comparison, the ground truth was the visual representation and fidelity of images from the predicate device (PIPS IMS).
- For Measurements, the ground truth was "known sizes" of objects in a test image.
- For Turnaround time and Human factors testing, the ground truth was based on pre-defined system requirements and direct usability observations/feedback.
-
The sample size for the training set:
- Not applicable. FullFocus is described as a viewing and management software, not an AI or machine learning algorithm that requires a training set in the typical sense for diagnostic performance.
-
How the ground truth for the training set was established:
- Not applicable, as no training set for an AI/ML algorithm is described.
(151 days)
QKQ
For In Vitro Diagnostic Use
Sectra Digital Pathology Module device is a software intended for viewing and management of digital images of scanned surgical pathology slides prepared from formalin-fixed paraffin embedded (FFPE) tissue. It is an aid to the pathologist to review and interpret these digital images for the purposes of primary diagnosis.
Sectra Digital Pathology Module is not intended for use with frozen section, cytology, or non-FFPE hematopathology specimens.
It is the responsibility of the pathologist to employ appropriate procedures and safeguards to assure the validity of the interpretation of images using Sectra Digital Pathology Module.
Sectra Digital Pathology Module is intended for use with Leica's Aperio AT2 DX scanner and Dell MR2416 monitor.
Sectra Digital Pathology Module is a software-only device running under the Microsoft Windows operating system for displaying and manipulating digital pathology images (scanned slides) obtained from the Aperio AT2 DX scanner.
Sectra Digital Pathology Module may only be used in combination with Sectra PACS, which consists of Sectra Workstation (K081469) and Sectra Core (identified as Class I exempt by the FDA in 2000).
The Sectra Pathology Import Server (SPIS) is used for importing digital pathology images (scanned slides) from the scanner. These images are viewed and manipulated by end users in the Pathology Image Window which is displayed on the Sectra Workstation IDS7 (using the Dell MR2416 monitor).
Here's an analysis of the acceptance criteria and the study conducted for the Sectra Digital Pathology Module, extracted from the provided text:
1. Table of Acceptance Criteria and Reported Device Performance:
Acceptance Criteria | Reported Device Performance |
---|---|
Color Reproducibility | Pixel-wise comparison towards Aperio ImageScopeDX (predicate) across multiple tiles, including zooming and panning operations. Sectra Digital Pathology Module was found to reproduce colors adequately and was non-inferior to ImageScopeDX. |
Turnaround Time (Case Load) | When selecting a case, it should not take longer than 7 seconds until the image is fully loaded, provided system requirements are fulfilled. |
Turnaround Time (Panning) | When panning the image (one quarter of the monitor), it should not take longer than 0.5 seconds until the image is fully loaded, provided system requirements are fulfilled. |
Measurements | Measurement accuracy has been verified using a test image containing objects with known sizes. The device was found to perform accurate measurements. |
Human Factors | Task-based usability tests showed the Sectra Digital Pathology Module user interface to be intuitive, safe, and effective for the range of intended users. The device was found to be safe and effective for the intended users, uses, and use environments. |
2. Sample Size for Test Set and Data Provenance:
The document does not explicitly state the sample size for any specific test set regarding color reproducibility, turnaround time, or measurements. It mentions a "test image containing objects with known sizes" for measurements, and "multiple tiles" for color reproducibility.
The data provenance is not specified (e.g., country of origin). The testing seems to be internal development and verification testing rather than involving external patient data.
3. Number of Experts for Ground Truth and Qualifications:
The document does not mention the use of experts to establish ground truth for the technical performance tests described (color reproducibility, turnaround time, measurements, human factors). These appear to be objective, quantifiable tests directly comparing the device's output to a known standard or the predicate device's output.
For "Human factors testing," it states "Task-based usability tests showed the Sectra Digital Pathology Module user interface to be intuitive, safe, and effective for the range of intended users." While suggesting involvement of intended users (pathologists likely), the number and specific qualifications of these users are not detailed.
4. Adjudication Method for Test Set:
No adjudication method is mentioned. The tests described are primarily objective comparisons or measurements.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:
No MRMC comparative effectiveness study was done. The document explicitly states: "Substantial equivalence determination is not based upon clinical study results."
6. Standalone (Algorithm Only) Performance Study:
Yes, standalone (algorithm only, meaning the software itself without human interaction for diagnostic purposes) performance was done. The studies listed under "Non-clinical test results" (Color Reproducibility, Turnaround times, Measurements) evaluate the technical performance of the software. The device itself is described as a "software-only device."
7. Type of Ground Truth Used:
- Color Reproducibility: Ground truth was implicitly the visual output and pixel data of the predicate device (Aperio ImageScopeDX).
- Turnaround Time: Ground truth was the pre-defined maximum acceptable time limits (7 seconds for case load, 0.5 seconds for panning).
- Measurements: Ground truth was a "test image containing objects with known sizes."
- Human Factors: Ground truth was established by assessing usability (intuitiveness, safety, effectiveness) through "task-based usability tests" likely with intended users.
8. Sample Size for the Training Set:
The document does not mention a training set. The Sectra Digital Pathology Module is described as a "software intended for viewing and management of digital images," rather than an AI/ML algorithm that requires a training set.
9. How Ground Truth for Training Set Was Established:
Since no training set is mentioned for an AI/ML algorithm, this question is not applicable based on the provided text. The device is a viewer and management tool, not an AI diagnostic algorithm.