For In Vitro Diagnostic Use
HALO AP Dx is a software-only device intended as an aid to the pathologist to review, interpret, and manage digital images of scanned surgical pathology slides prepared from formalin-fixed, paraffin-embedded (FFPE) tissue for the purposes of primary diagnosis. HALO AP Dx is not intended for use with frozen section, cytology, or non-FFPE hematopathology specimens.
It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the quality of the images obtained and, where necessary, use conventional light microscopy review when making a diagnostic decision. HALO AP Dx is intended for use with the Hamamatsu NanoZoomer S360MD Slide scanner and the JVC Kenwood JD-C240BN01A display.
HALO AP Dx, version 2.1, is a browser-based, software-only device intended to aid pathology professionals in viewing, manipulating, managing, and interpreting digital pathology whole slide images (WSI) of glass slides obtained from the Hamamatsu Photonics K.K. NanoZoomer S360MD scanner and viewed on the JVC Kenwood JD-C240BN01A display.
HALO AP Dx is typically operated as follows:
- Image acquisition is performed using the predicate device, the NanoZoomer S360MD Slide scanner, according to its Instructions for Use. The operator performs quality control of the digital slides per the NanoZoomer instructions and lab specifications to determine whether re-scans are necessary.
- Once image acquisition is complete, the unaltered image is saved by the scanner's software to an image storage location. HALO AP Dx ingests the image, and a copy of the image metadata is stored in the subject device's database to improve viewing response times (see the sketch after this list).
- Scanned images are reviewed by scanning personnel, such as histotechnicians, to confirm image quality and to initiate any re-scans before the images are made available to the pathologist.
- The reading pathologist selects a patient case from a worklist within HALO AP Dx, and the subject device fetches the associated images from external image storage.
- The reading pathologist uses the subject device to view the images and can perform the following actions, as needed:
a. Zoom and pan the image.
b. Measure distances and areas in the image.
c. Annotate images.
d. View multiple images side by side in a synchronized fashion.
The above steps are repeated as necessary.
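The ingest-and-cache step above, in which only a copy of the image metadata enters the subject device's database while the pixel data stays in external storage, is a common pattern for keeping worklists and viewers responsive. Below is a minimal Python sketch of that pattern; the SQLite schema and the `read_wsi_metadata` helper are illustrative assumptions, not details from the submission.

```python
import hashlib
import sqlite3
from pathlib import Path

def open_db(db_path: str = "halo_cache.db") -> sqlite3.Connection:
    # Hypothetical metadata cache; the actual HALO AP Dx database is not described.
    db = sqlite3.connect(db_path)
    db.execute(
        "CREATE TABLE IF NOT EXISTS wsi_metadata ("
        "filename TEXT, size_bytes INTEGER, sha256 TEXT, storage_path TEXT)"
    )
    return db

def read_wsi_metadata(path: Path) -> dict:
    # Placeholder: a real viewer would parse the vendor file header (dimensions,
    # magnification, microns per pixel) without loading the full pixel data.
    return {
        "filename": path.name,
        "size_bytes": path.stat().st_size,
        "sha256": hashlib.sha256(path.read_bytes()).hexdigest(),
    }

def ingest(image_path: Path, db: sqlite3.Connection) -> None:
    # Only lightweight metadata enters the cache; the unaltered image file
    # remains where the scanner software saved it.
    meta = read_wsi_metadata(image_path)
    db.execute(
        "INSERT INTO wsi_metadata VALUES (?, ?, ?, ?)",
        (meta["filename"], meta["size_bytes"], meta["sha256"], str(image_path)),
    )
    db.commit()
```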
After viewing all images belonging to a particular case (patient), the pathologist makes a diagnosis, which is documented in another system, such as a Laboratory Information System (LIS).
The interoperable components of HALO AP Dx are provided in Table 1 below:
Table 1. Interoperable Components for Use with HALO AP Dx
| Components | Manufacturer | Model |
|---|---|---|
| Scanner | Hamamatsu | NanoZoomer S360MD Slide scanner |
| Display | JVC | JD-C240BN01A |
This FDA 510(k) clearance letter pertains to HALO AP Dx, a software-only device for digital pathology image review. The documentation indicates that the device has been deemed substantially equivalent to a predicate device, the Hamamatsu NanoZoomer S360MD Slide scanner system (K213883).
Here's an analysis of the acceptance criteria and the study that proves the device meets them, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The provided document defines the performance data points primarily through its "Performance Data" and "Summary of Studies" sections, focusing on comparisons to the predicate device and on usability. Explicit quantitative acceptance criteria are stated only for the turnaround-time thresholds; the remaining performance claims are statements of adequacy and similarity to the predicate.
| Performance Aspect | Acceptance Criterion (Implicit) | Reported Device Performance |
|---|---|---|
| Image reproduction quality (color accuracy) | Identical image reproduction compared to the predicate device's viewer (NZViewMD), specifically pixel-wise color accuracy. | Pixel-level comparisons demonstrated that the 95th percentile CIEDE2000 values across all Regions of Interest (ROIs), taken from images with varied tissue types and diagnoses, were less than 3 ΔE00. This was determined to be "identical image reproduction." |
| Turnaround time (image loading, case selection) | "When selecting a case, it should not take longer than 4 seconds until the image is fully loaded." | Determined to be "adequate for the intended use of the subject device" (no specific value reported, but implies ≤ 4 seconds). |
| Turnaround time (image loading, panning) | "When panning the image, it should not take longer than 3 seconds until the image is fully loaded." | Determined to be "adequate for the intended use of the subject device" (no specific value reported, but implies ≤ 3 seconds). |
| Measurement accuracy | Ability to perform accurate distance and area measurements. | "The subject device has been found to perform accurate measurements with respect to its intended use" (verified using a test image with objects of known sizes). |
| System responsiveness under load | Maintain responsiveness under constant utilization. | "Concurrent multi-user load testing confirms HALO AP Dx performance remains responsive under constant utilization over a long time period." |
| Human factors/usability (safety and effectiveness for users) | User interface is intuitive, safe, and effective for intended users. | "Task-based usability tests verified the HALO AP Dx user interface to be intuitive, safe, and effective for the range of intended users" (conducted per FDA's 2016 guidance, Applying Human Factors and Usability Engineering to Medical Devices). |
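The color-accuracy row above lends itself to a concrete check. The following is a minimal sketch, assuming scikit-image's `rgb2lab` and `deltaE_ciede2000` and synthetic ROI arrays standing in for real subject/predicate renderings; it illustrates the 95th-percentile ΔE00 criterion, not the actual protocol used in the submission.

```python
import numpy as np
from skimage.color import rgb2lab, deltaE_ciede2000

def p95_delta_e00(rgb_a: np.ndarray, rgb_b: np.ndarray) -> float:
    """Pixel-wise CIEDE2000 between two RGB renderings of the same ROI,
    summarized as the 95th percentile, matching the < 3 dE00 criterion."""
    lab_a = rgb2lab(rgb_a)  # expects float RGB in [0, 1]
    lab_b = rgb2lab(rgb_b)
    delta = deltaE_ciede2000(lab_a, lab_b)  # per-pixel dE00 map
    return float(np.percentile(delta, 95))

# Illustrative check on a synthetic ROI pair (subject vs. predicate render):
rng = np.random.default_rng(0)
roi_subject = rng.random((256, 256, 3))
roi_predicate = np.clip(roi_subject + rng.normal(0, 0.002, roi_subject.shape), 0, 1)
assert p95_delta_e00(roi_subject, roi_predicate) < 3.0
```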
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: The document does not explicitly state a numerical sample size for the test set (e.g., the number of whole slide images). It mentions "multiple tiles at multiple magnification levels" and "all ROIs taken from images with varied tissue types and diagnoses" for image reproduction testing, and "a test image containing objects with known sizes" for measurement accuracy, suggesting a varied but unspecified set of images and data points was used.
- Data Provenance: The country of origin is not stated. The study appears to be an internal non-clinical performance evaluation. Image reproduction testing used "varied tissue types and diagnoses," implying real FFPE tissue samples, but the document does not specify whether they were collected retrospectively or prospectively.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- The document describes non-clinical performance testing (e.g., pixel-wise comparison, turnaround times, measurement accuracy, human factors testing). These tests typically rely on predefined objective metrics or reference standards rather than expert consensus on diagnostic interpretation.
- For the human factors/usability testing, the study "verified the HALO AP Dx user interface to be intuitive, safe, and effective for the range of intended users." This implies involvement of intended users (pathologists or similar professionals), but neither the specific number nor their qualifications are detailed.
4. Adjudication Method for the Test Set
- Adjudication methods (e.g., 2+1, 3+1) are typically relevant for studies where a "ground truth" is established by multiple human readers for diagnostic accuracy.
- Since this document focuses on technical performance and usability, rather than diagnostic accuracy (which would involve human pathologists making diagnoses with and without AI assistance), traditional adjudication methods were not applicable or described. The "ground truth" in these tests consists of objective technical parameters or usability feedback.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- No MRMC comparative effectiveness study was described. The study primarily focuses on showing the technical performance of HALO AP Dx is substantially equivalent to the viewing software component of the predicate device (NZViewMD), not on improving human reader performance or diagnostic accuracy with AI assistance.
- The device is a "viewer" intended to "aid the pathologist," not an AI-powered diagnostic algorithm. Therefore, an MRMC study comparing human readers with and without AI assistance to measure effect size is not relevant to this submission, which focuses on the viewing platform itself.
6. Standalone (Algorithm Only) Performance
- This device, HALO AP Dx, is a standalone software-only device in the sense that it functions as a digital pathology image viewer.
- However, it does not involve a diagnostic AI algorithm where "standalone performance" (e.g., sensitivity/specificity of the AI itself) would be measured. Its "performance" is about accurate image reproduction, speed, and usability of the viewing functions.
7. Type of Ground Truth Used
The ground truth used for the technical performance evaluations was objective and predefined:
- For Image Reproduction: A pixel-wise comparison to the predicate device's viewer (NZViewMD) was performed. The "ground truth" was essentially the image rendered by the predicate's software.
- For Turnaround Times: Time taken for specific actions (loading, panning) against predefined numerical thresholds (4 seconds, 3 seconds).
- For Measurement Accuracy: A "test image containing objects with known sizes." The known sizes were the ground truth (a conversion sketch follows this list).
- For Human Factors: User feedback and task completion during usability tests against predefined criteria for intuitiveness, safety, and effectiveness.
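For the measurement-accuracy ground truth, the conversion from pixel geometry to physical units is straightforward. Here is a minimal sketch; the 0.23 µm/pixel resolution and the 100 µm square target are illustrative assumptions, not values from the submission.

```python
import math

MICRONS_PER_PIXEL = 0.23  # hypothetical scan resolution, not from the submission

def distance_um(p1, p2, mpp=MICRONS_PER_PIXEL):
    """Euclidean distance between two pixel coordinates, in microns."""
    return math.dist(p1, p2) * mpp

def polygon_area_um2(vertices, mpp=MICRONS_PER_PIXEL):
    """Shoelace formula over pixel-space vertices, converted to square microns."""
    area_px = 0.0
    for (x1, y1), (x2, y2) in zip(vertices, vertices[1:] + vertices[:1]):
        area_px += x1 * y2 - x2 * y1
    return abs(area_px) / 2.0 * mpp ** 2

# Verify against a synthetic target of known size: a 100 um x 100 um square.
side_px = 100 / MICRONS_PER_PIXEL
square = [(0, 0), (side_px, 0), (side_px, side_px), (0, side_px)]
assert abs(polygon_area_um2(square) - 100 * 100) < 1e-6
assert abs(distance_um((0, 0), (side_px, 0)) - 100) < 1e-6
```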
8. Sample Size for the Training Set
- The document does not mention a "training set" in the context of machine learning. This is because HALO AP Dx is described as a "software only device intended as an aid to the pathologist to review, interpret and manage digital images," not an AI/ML-based diagnostic algorithm that would require a training set.
- Its core functionality is image display and manipulation, not pattern recognition or classification that would necessitate machine learning.
9. How the Ground Truth for the Training Set Was Established
- As no machine learning training set is mentioned or implied for this device's functionality, this question is not applicable.