Search Results
Found 219 results
HALO AP Dx is a software only device intended as an aid to the pathologist to review, interpret and manage digital images of scanned surgical pathology slides prepared from formalin-fixed paraffin embedded (FFPE) tissue for the purposes of pathology primary diagnosis. HALO AP Dx is not intended for use with frozen section, cytology, or non-FFPE hematopathology specimens.
It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the quality of the images obtained and, where necessary, use conventional light microscopy review when making a diagnostic decision. HALO AP Dx is intended for use with the interoperable components specified in the Table below.
HALO AP Dx, version 2.4 (abbreviated as v2.4), is a browser-based, software-only device intended to aid pathology professionals in viewing, manipulating, managing, and interpreting digital pathology whole slide images (WSI) of glass slides obtained from the Hamamatsu Photonics K.K. NanoZoomer S360MD scanner or the Leica Biosystems Imaging, Inc. Aperio GT 450 DX scanner.
HALO AP Dx is typically operated as follows:
- Image acquisition is performed using one of the compatible scanners. The operator performs quality control of the digital slides per the scanner's instructions and lab specifications to determine if re-scans are necessary.
- Once image acquisition is complete according to the scanner's Instructions for Use, the unaltered image is saved in an external image storage location. HALO AP Dx ingests the image, and a copy of the image metadata is stored in the subject device's database to improve viewing response times.
- Depending upon a laboratory's workflow, the scanned images may first be reviewed by histotechnicians to confirm image quality and initiate any re-scans. After review, the histotechnician may modify the case status and make it available to the pathologist.
- The reading pathologist selects a patient case from a worklist within HALO AP Dx, whereupon the subject device fetches the associated images from external image storage.
- The reading pathologist uses the subject device to view the images and can perform the following actions, as needed:
  a. Zoom and pan the image.
  b. Measure distances and areas in the image (a pixel-to-micron measurement sketch follows this list).
  c. Annotate images.
  d. View multiple images side by side in a synchronized fashion.
- The above steps are repeated as necessary.
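As referenced in item (b), on-screen distance and area measurements in a WSI viewer reduce to scaling pixel coordinates by the scan's microns-per-pixel (MPP) value stored in the image metadata. The following is a minimal sketch of that arithmetic, not HALO AP Dx's actual implementation; the 0.23 µm/px figure is an assumed value typical of a 40x scan.

```python
# Hypothetical scan resolution; WSI files record microns-per-pixel
# (MPP) in their metadata (~0.23 um/px is typical of a 40x scan).
MPP = 0.23

def distance_um(p1, p2, mpp=MPP):
    """Euclidean distance between two pixel coordinates, in microns."""
    return ((p1[0] - p2[0]) ** 2 + (p1[1] - p2[1]) ** 2) ** 0.5 * mpp

def polygon_area_um2(points, mpp=MPP):
    """Area of a closed annotation polygon, in square microns,
    via the shoelace formula applied to pixel coordinates."""
    area_px = 0.0
    for (x1, y1), (x2, y2) in zip(points, points[1:] + points[:1]):
        area_px += x1 * y2 - x2 * y1
    return abs(area_px) / 2.0 * mpp ** 2

print(distance_um((0, 0), (1000, 0)))  # 230.0 um
print(polygon_area_um2([(0, 0), (1000, 0), (1000, 1000), (0, 1000)]))  # 52900.0 um^2
```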
After viewing all images belonging to a particular case (patient), the pathologist will make a diagnosis which is documented in another system, such as a Laboratory Information System.
HALO AP Dx operates and is validated for use with the FDA-cleared components specified in its intended use statement.
The Tempus xR IVD assay is a qualitative next generation sequencing-based in vitro diagnostic device that uses targeted high throughput hybridization-based capture technology for detection of rearrangements in two genes, using RNA isolated from formalin-fixed paraffin embedded (FFPE) tumor tissue specimens from patients with solid malignant neoplasms.
Information provided by xR IVD is intended to be used by qualified health care professionals in accordance with professional guidelines in oncology for patients with previously diagnosed solid malignant neoplasms. Results from xR IVD are not intended to be prescriptive or conclusive for labeled use of any specific therapeutic product.
xR IVD is a next generation sequencing (NGS)-based assay for the detection of alterations from RNA that has been extracted from routinely obtained FFPE tumor samples. Extracted RNA undergoes conversion to double stranded cDNA and library construction, followed by hybridization-based capture using a whole-exome targeting probe set with supplemental custom Tempus-designed probes. Using the Illumina® NovaSeq 6000 platform qualified by Tempus, hybrid-capture–selected libraries are sequenced, targeting > 6 million unique deduplicated reads. Sequencing data is processed and analyzed by a bioinformatics pipeline to detect gene rearrangements, including rearrangements in BRAF and RET.
Alterations are classified for purposes of reporting on the clinical report as Level 2 or Level 3 alterations in accordance with the FDA Fact Sheet describing the CDRH's Approach to Tumor Profiling for Next Generation Sequencing Tests and as follows:
- Level 2: Genomic Findings with Evidence of Clinical Significance
- Level 3: Genomic Findings with Potential Clinical Significance
xR IVD is intended to be performed with the following key components, each qualified and controlled by Tempus under its Quality Management System (QMS):
- Reagents
- Specimen Collection Box
- Software
- Sequencing Instrumentation
1. Reagents
All reagents used in the operation of xR IVD are qualified by Tempus.
2. Test Kit Contents
xR IVD includes a specimen collection and shipping box (the Specimen Box). The Specimen Box contains the following components:
- Informational Brochure with Specimen Requirements
- Collection Box Sleeve
- Collection Box Tray
- Seal Sticker
- ISO Label
3. Software
The proprietary xR IVD bioinformatics pipeline comprises data analysis software necessary for the xR IVD assay (software version is displayed on the xR IVD clinical report). The software is used with sequence data generated from NovaSeq 6000 instruments qualified by Tempus. Data generated from the pipeline is saved to a cloud infrastructure.
4. Instrument
xR IVD uses the Illumina NovaSeq 6000 Sequencer, a high throughput sequencing system employing sequencing-by-synthesis chemistry. The xR IVD device is intended to be performed with serial number-controlled instruments. All instruments are qualified by Tempus utilizing the Tempus Quality Management System (QMS).
5. Sample preparation
FFPE (Formalin Fixed Paraffin Embedded) tumor specimens are received either as unstained tissue sections on slides or as an FFPE block using materials supplied in the Specimen Box and prepared following standard pathology practices. Preparation and review of a Hematoxylin and Eosin (H&E) slide is performed prior to initiation of the xR IVD assay. H&E stained slides are reviewed by a board-certified pathologist to ensure that adequate tissue, tumor content and sufficient nucleated cells are present to satisfy minimum tumor content (tumor purity).
Specifically, the minimum recommended tumor purity for detection of alterations by xR IVD is 20%, with macrodissection required for specimens with tumor purity lower than 20%. The recommended tumor size and minimum tumor content needed for testing are shown in Table 1, below.
Table 1: Tumor Volume and Minimum Tumor Content
| Tissue Type | Recommended Size | Minimum Tumor Content | Macro-Dissection Requirements* | Limitations | Storage |
|---|---|---|---|---|---|
| FFPE blocks or 5 μm slides | 1mm³ of total tissue is recommended | 20% | Macro-dissection must be done if the tumor content/purity is less than 20% | Archival paraffin embedded material subjected to acid decalcification is unsuitable for analysis. Samples decalcified in EDTA are accepted. | Room temperature |
*These requirements are based on the specimen's tumor content
6. RNA extraction
Nucleic acids are extracted from tissue specimens using a magnetic bead-based automated methodology followed by DNase treatment. The remaining RNA is assessed for quantity and quality (sizing) at RNA QC1, a quality check (QC) that ensures adequate RNA extraction. The minimum amount of RNA required to perform the test is 50 ng. RNA is fragmented using heat and magnesium, with variable parameters, to yield similarly sized fragments from RNA inputs with different starting size distributions.
7. Library preparation
Strand-specific RNA library preparation is performed by synthesizing the first-strand cDNA using a reverse transcriptase (RT) enzyme followed by second-strand synthesis using a DNA polymerase to create double stranded cDNA. Adapters are ligated to the cDNA and the adapter-ligated libraries are cleaned using a magnetic bead-based method. The libraries are amplified with high fidelity, low-bias PCR using primers complementary to adapter sequences. Amplified libraries are subjected to a 1X magnetic bead based clean-up to eliminate unused primers, and quantity is assessed (QC2) to ensure that pre-captured libraries were successfully prepared. Each amplified sample library contains a minimum of 150 ng of cDNA to proceed to hybridization.
8. Hybrid capture
After library preparation and amplification, the adapter-ligated library targets are captured by hybridization, clean-up of hybridized targets is performed, and unbound fragments are washed away. The captured targets are enriched by PCR amplification followed by a magnetic bead-based clean-up to remove primer dimers and residual reagents. To reduce non-specific binding of untargeted regions, human COT DNA and blockers are included in the hybridization step. Each post-capture library pool must satisfy a minimum calculated molarity (≥2.7 nM) to proceed to sequencing (QC3). The molarity is used to load the appropriate concentration of library pools onto sequencing flow cells.
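The submission does not state how the QC3 molarity is calculated; conventionally, dsDNA library molarity is derived from the measured concentration and average fragment length using the standard approximation of 660 g/mol per base pair. A minimal sketch with hypothetical input values:

```python
def library_molarity_nm(conc_ng_per_ul: float, avg_fragment_bp: float) -> float:
    """Convert a dsDNA library concentration to molarity in nM,
    using the standard ~660 g/mol-per-base-pair approximation."""
    return conc_ng_per_ul / (660.0 * avg_fragment_bp) * 1_000_000

# Hypothetical post-capture pool: 0.85 ng/uL at ~350 bp average size.
molarity = library_molarity_nm(0.85, 350)
print(f"{molarity:.2f} nM -> {'pass' if molarity >= 2.7 else 'fail'} QC3")  # 3.68 nM -> pass
```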
9. Sequencing
The amplified target-captured libraries are sequenced with a 2x76 read length to an average of 50 million total reads on an Illumina NovaSeq 6000 System using patterned flowcells (SP, S1, S2, or S4). Pooled sample libraries are fluorometrically quantified and normalized into a sequencing pool of up to 28 samples (SP flowcell), 56 samples (S1 flowcell), 140 samples (S2 flowcell), or 336 samples (S4 flowcell), with each flowcell including 2 external controls. Partial batches are supported using a set threshold of loading capacity down to a defined percentage. Pooled sample libraries are loaded on a sequencing flow cell and sequenced.
10. Data Analysis
a. Data Management System (DMS): Sequence data is automatically processed using software that tracks sample names, sample metadata, and processing status from sequencing through analysis and reporting. Reports of identified alterations are available in a web-based user interface for download. Sequencing and sample metrics, including sample and sequencing quality, are available in run and case reports.
b. Demultiplexing and FASTQ Generation: Demultiplexing software generates FASTQ files containing sequence reads and quality scores for each of the samples on a sequencing run. The FASTQ formatted data files are used for subsequent processing of samples.
c. Indexing QC Check: Each sample is checked for an expected yield of identified sequence reads to detect mistakes in sample pooling. Samples outside the expected range are marked as failed.
d. Read Alignment and BAM Generation: Genome alignment is performed to map sequence reads for each sample to the human reference genome (hg19). Alignments are saved as Binary Alignment Map (BAM) formatted files, which contain read placement information relative to the reference genome with quality scores. Aligned BAM files are further processed in a pipeline to identify genomic alterations.
e. Sample QC check: A sample QC check (QC4) evaluates the quality of the samples processed through the bioinformatics pipeline (sample-level metrics in Table 2). Samples are evaluated for contamination by measuring the percentage of a tumor sample contaminated with foreign nucleic acid, with a threshold below 5%. Sample sequencing coverage is assessed through the RNA gene IDs expressed metric, a count of all genes with raw expression abundance (>12,000 genes), and the RNA GC distribution (45-59%). The sample mapping rate (>80%), RNA strand % sense (>88%), and RNA strand % failed (≤10%) metrics provide confidence in the sample quality.
f. Alteration calling: A fully automated pipeline for bioinformatic analysis is used to identify gene rearrangements. The assay is validated to report specific gene rearrangements. Gene rearrangements are identified based on observations of reads supporting gene rearrangements in genomic alignments of discordantly mapped or split read pairs.
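The alteration-calling pipeline itself is proprietary; to make the split-read/discordant-pair evidence concrete, here is a minimal sketch using the open-source pysam library. The function name, BAM path, and partner region are illustrative assumptions (the BRAF span is an approximate hg19 coordinate range), with the ≥4-supporting-read cutoff taken from Table 2 below.

```python
import pysam

def count_fusion_support(bam_path, region_a, region_b):
    """Count reads that support a rearrangement between two regions:
    discordant pairs (mate maps to the partner region) and split reads
    (a supplementary alignment, per the SA tag, in the partner region)."""
    chrom_a, start_a, end_a = region_a
    chrom_b, start_b, end_b = region_b
    supporting = set()
    with pysam.AlignmentFile(bam_path, "rb") as bam:
        for read in bam.fetch(chrom_a, start_a, end_a):
            if read.is_unmapped or read.is_secondary or read.is_duplicate:
                continue
            # Discordant pair: the mate maps inside the partner region.
            if (read.is_paired and not read.mate_is_unmapped
                    and read.next_reference_name == chrom_b
                    and start_b <= read.next_reference_start < end_b):
                supporting.add(read.query_name)
            # Split read: first supplementary alignment lands in the partner region.
            if read.has_tag("SA"):
                sa_chrom, sa_pos = read.get_tag("SA").split(",")[:2]
                if sa_chrom == chrom_b and start_b <= int(sa_pos) < end_b:
                    supporting.add(read.query_name)
    return len(supporting)

# Approximate hg19 span for BRAF (chr7) and a hypothetical partner region;
# >= 4 supporting reads is the analyte-level threshold from Table 2.
n = count_fusion_support("sample.bam",
                         ("chr7", 140_419_127, 140_624_564),
                         ("chr7", 138_500_000, 138_700_000))
print("reportable" if n >= 4 else "below analyte-level threshold")
```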
11. Controls
a. Negative control: A no-template control (NTC) is processed to serve as a negative control, validating the acceptability of all test samples processed through the extraction, library preparation, and hybridization-and-capture steps by testing for sample or reagent contamination. The NTC is not included on the sequencing run.
b. Positive control: xR IVD uses multiple external controls consisting of contrived material with synthetically derived alterations or a pool of multiple cell lines. A positive control sample containing known gene rearrangements is included with each sequencing run. The external controls are processed from library preparation through sequencing to serve as an end-to-end control demonstrating assay performance. The external controls are checked during library preparation and after sequencing. Failure of the external control to meet the pre-defined quality metrics will result in all test samples on the run being reported as Quality Control (QC) failure.
12. Result reporting
xR IVD reports oncologically relevant gene rearrangements as genomic findings with evidence of clinical significance or with potential clinical significance. Gene rearrangements are assessed as oncogenic based on required genomic regions specified in a Tempus-developed curated database. Gene rearrangements that retain the genomic region(s) required for oncogenicity are assigned a level of clinical significance consistent with FDA's Fact Sheet and reported. Gene rearrangements that do not retain the region(s) required for oncogenicity are not reported.
13. Quality metrics
Reporting takes into account the quality metrics outlined in Table 2. Quality metrics are assessed across the following categories:
- Batch-level: Metrics that are quantified per sequencing run; if the positive control fails these criteria, no results are reported for the entire batch of samples.
- Sample-level: Metrics that are quantified per sample; no device results are generated for samples failing these metrics. These metrics are also referred to as sequencing quality control (QC4).
- Analyte-level: Metrics that are quantified for individual alteration types. Alterations passing analyte-level metrics (threshold) are reported.
Table 2: Summary of xR IVD Post-Sequencing Key Quality Metrics at Batch, Sample (QC4), and Analyte Levels
| Quality Metric | Batch/Sample/Analyte | Required Value |
|---|---|---|
| Positive Control | Batch level | Known sequence mutations are detected |
| Expression Positive Control | Batch level | r² ≥ 0.9 |
| RNA gene IDs expressed | Sample level | >12,000 |
| RNA GC distribution | Sample level | 45-59% |
| Mapping rate | Sample level | >80% |
| RNA strand percent sense | Sample level | >88% |
| RNA strand percent failed | Sample level | ≤10% |
| Unique deduplicated reads | Sample level | >6,000,000 |
| Tumor RNA junction saturation 50_100 | Sample level | >1% |
| Contamination fraction | Sample level | <5% |
| Gene Rearrangements (BRAF, RET) | Analyte level | ≥4 reads |
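The bioinformatics pipeline is proprietary, but the sample-level (QC4) gating described in Table 2 amounts to comparing each metric against its threshold and failing the sample on any miss. A minimal sketch, with hypothetical metric values for one library:

```python
# Hypothetical QC4 metrics for one sample (percent values as numbers).
sample_metrics = {
    "rna_gene_ids_expressed": 14_250,
    "rna_gc_distribution": 51.3,
    "mapping_rate": 93.1,
    "rna_strand_percent_sense": 95.4,
    "rna_strand_percent_failed": 2.1,
    "unique_deduplicated_reads": 8_400_000,
    "junction_saturation_50_100": 4.8,
    "contamination_fraction": 0.7,
}

# Sample-level thresholds transcribed from Table 2.
QC4_CHECKS = {
    "rna_gene_ids_expressed": lambda v: v > 12_000,
    "rna_gc_distribution": lambda v: 45 <= v <= 59,
    "mapping_rate": lambda v: v > 80,
    "rna_strand_percent_sense": lambda v: v > 88,
    "rna_strand_percent_failed": lambda v: v <= 10,
    "unique_deduplicated_reads": lambda v: v > 6_000_000,
    "junction_saturation_50_100": lambda v: v > 1,
    "contamination_fraction": lambda v: v < 5,
}

failures = [m for m, check in QC4_CHECKS.items() if not check(sample_metrics[m])]
print("QC4 PASS" if not failures else f"QC4 FAIL: {failures}")  # QC4 PASS
```

No device results are generated for a sample failing any of these gates, matching the sample-level rule stated above.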
N/A
AISight Dx is a software only device intended for viewing and management of digital images of scanned surgical pathology slides prepared from formalin-fixed paraffin embedded (FFPE) tissue. It is an aid to the pathologist to review, interpret, and manage digital images of these slides for primary diagnosis. AISight Dx is not intended for use with frozen sections, cytology, or non-FFPE hematopathology specimens.
It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the quality of the images obtained and, where necessary, use conventional light microscopy review when making a diagnostic decision. AISight Dx is intended to be used with interoperable displays, scanners and file formats, and web browsers that have been 510(k) cleared for use with AISight Dx, or with 510(k)-cleared displays, 510(k)-cleared scanners and file formats, and web browsers that have been assessed in accordance with the Predetermined Change Control Plan (PCCP) for qualifying interoperable devices.
AISight Dx is a web-based, software-only device that is intended to aid pathology professionals in viewing, interpretation, and management of digital whole slide images (WSI) of scanned surgical pathology slides prepared from formalin-fixed, paraffin-embedded (FFPE) tissue obtained from Hamamatsu NanoZoomer S360MD Slide scanner or Leica Aperio GT 450 DX scanner (Table 1). It aids the pathologist in the review, interpretation, and management of pathology slide digital images used to generate a primary diagnosis.
Here's a breakdown of the acceptance criteria and the study details for the AISight Dx device, based on the provided FDA 510(k) Clearance Letter:
Acceptance Criteria and Reported Device Performance
| Acceptance Criteria Category | Specific Acceptance Criteria | Reported Device Performance |
|---|---|---|
| Pixel-wise Comparison | Identical image reproduction (max pixelwise difference < 1 CIEDE2000) | Maximum pixelwise difference was 0 CIEDE2000, indicating pixelwise identical output images. Meets criteria. |
| Non-inferiority to Glass Slide Reads (Major Discordance Rate - Hamamatsu Scanner) | Upper limit of 95% CI for the difference in major discordance rates (MD vs. GT minus MO vs. GT) less than 4%. | Upper limit of 95% CI was 1.16%. Meets criteria. |
| Non-inferiority to Glass Slide Reads (Major Discordance Rate - Leica Scanner) | Upper limit of 95% CI for the difference in major discordance rates (MD vs. GT minus MO vs. GT) less than 4%. | Upper limit of 95% CI was 2.52%. Meets criteria. |
| Turnaround Time | Adequate for intended use (image processing, loading, panning, zooming). | Test results showed these to be adequate for the intended use. Meets criteria. |
| Measurement Accuracy | Accurate distance and area measurements. | Tests verified that distances and areas measured in AISight Dx accurately reflected those on a calibrated slide. Meets criteria. |
| Human Factors | Safe and effective for intended users, uses, and use environments. | AISight Dx has been found to be safe and effective for the intended users, uses, and use environments. Meets criteria. |
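The non-inferiority rows compare the major discordance rate of manual digital reads against ground truth (MD vs. GT) with that of manual optical reads against ground truth (MO vs. GT). The letter does not give the underlying counts or the exact interval method; the sketch below computes an upper 95% confidence limit for the rate difference with a simple unpaired Wald interval over hypothetical counts, whereas the actual study likely used a paired or cluster-adjusted analysis.

```python
from math import sqrt

def noninferiority_upper_limit(md_discordant, mo_discordant, n_reads, z=1.96):
    """Upper limit of an (unpaired Wald) 95% CI for the difference in
    major discordance rates: (MD vs. GT) minus (MO vs. GT)."""
    p_md = md_discordant / n_reads
    p_mo = mo_discordant / n_reads
    se = sqrt(p_md * (1 - p_md) / n_reads + p_mo * (1 - p_mo) / n_reads)
    return (p_md - p_mo) + z * se

# Hypothetical counts: 3,000 reads per arm; 120 vs. 110 major discordances.
upper = noninferiority_upper_limit(120, 110, 3000)
print(f"{upper:.2%}", "meets 4% margin" if upper < 0.04 else "fails margin")  # 1.30% meets
```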
Study Details
- Sample size used for the test set and the data provenance:
- The document states that two separate clinical studies were conducted, one for each scanner (Hamamatsu NanoZoomer S360MD and Leica Aperio GT 450 DX).
- The sample sizes for these clinical studies are not explicitly stated in the provided text.
- Data Provenance: Not explicitly mentioned, but the study compares performance against "the original sign-out pathologic diagnosis using MO [ground truth, (GT)] rendered at the institution," suggesting the data is derived from clinical practice, likely retrospective or a mix, given the "original sign-out" aspect. The country of origin is not specified.
- Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- The study involved "3 reading pathologists" for assessing the differences in major discordance rates.
- Qualifications of experts: Not explicitly stated, but they are referred to as "reading pathologists," indicating they are qualified to make primary diagnoses.
- Adjudication method for the test set:
- The "reference (main) diagnosis" was the "original sign-out pathologic diagnosis using MO [ground truth, (GT)] rendered at the institution."
- The document implies that this "original sign-out" acted as the ground truth. There's no explicit mention of an adjudication process (e.g., 2+1, 3+1 consensus) to establish this ground truth beyond the initial clinical diagnosis. The major discordance rate was calculated between MD (manual digital read), MO (manual optical read), and GT (original sign-out).
- If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- Yes, an MRMC-like study was done as it involved "3 reading pathologists" evaluating cases using both manual digital (MD) and manual optical (MO) methods.
- Effect size of improvement with AI vs without AI assistance: This study did not measure the improvement of human readers with AI assistance. The AISight Dx is presented as a viewer and management software, not an AI-assisted diagnostic tool. The study aimed to demonstrate non-inferiority of digital viewing (MD) versus traditional optical viewing (MO) for primary diagnosis, where the software is simply the viewing platform, not an aid in interpretation itself. Therefore, no "effect size of human readers improving with AI vs without AI assistance" is reported because the device is not described as providing AI assistance for diagnostic tasks in this context.
- If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done:
- No. The AISight Dx is explicitly described as "an aid to the pathologist to review, interpret, and manage digital images." The clinical study evaluated "manual digital read (MD)" which is a human pathologist reading digital slides using the AISight Dx, compared to "manual optical (MO)" which is a human pathologist reading glass slides. The device is not an autonomous AI diagnostic algorithm.
- The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- The ground truth (GT) for the clinical study was the original sign-out pathologic diagnosis using manual optical microscopy (MO) rendered at the institution. This can be categorized as a form of expert (pathologist) ground truth based on clinical practice/standard of care.
- The sample size for the training set:
- The document for AISight Dx does not mention a training set size. This is expected as AISight Dx is described as a viewing and management software, not an AI model that requires a training set for diagnostic capabilities. The performance data focuses on its function as a display and interpretation platform for human pathologists.
- How the ground truth for the training set was established:
- As there's no mention of a training set for an AI model within the AISight Dx software (it's a viewer), there's no information on how a ground truth for such a set would be established.
For In Vitro Diagnostic Use
The PathPresenter Clinical Viewer is software intended for viewing and managing whole slide images of scanned glass slides derived from formalin fixed paraffin embedded (FFPE) tissue. It is an aid to pathologists to review and render a diagnosis using the digital images for the purposes of primary diagnosis. PathPresenter Clinical is not intended for use with frozen sections, cytology specimens, or non-FFPE specimens. It is the responsibility of the pathologist to employ appropriate procedures and safeguards to assure the validity of the interpretation of images using PathPresenter Clinical software. PathPresenter Clinical Viewer is intended for use with Hamamatsu NanoZoomer S360MD Slide scanner NDPI image formats viewed on the Barco NV MDPC-8127 display device.
The PathPresenter Clinical Viewer (version V1.0.1) is a web-based software application designed for viewing and managing whole slide images generated from scanned glass slides of formalin-fixed, paraffin-embedded (FFPE) surgical pathology tissue. It serves as a diagnostic aid, enabling pathologists to review digital images and render a primary pathology diagnosis. Functions of the viewer include zooming and panning the image, annotating the image, measuring distances and areas in the image and retrieving multiple images from the slide tray including prior cases and deprecated slides.
Here's a breakdown of the acceptance criteria and study information for the PathPresenter Clinical Viewer based on the provided FDA 510(k) clearance letter:
Acceptance Criteria and Device Performance for PathPresenter Clinical Viewer
1. Table of Acceptance Criteria and Reported Device Performance
| Test | Acceptance Criteria | Reported Device Performance |
|---|---|---|
| Pixelwise Comparison | The 95th percentile of the pixel-wise color difference in any image pair is less than 3 CIEDE2000 (< 3 ΔE00) when comparing the PathPresenter Clinical Viewer (Subject Viewer) with the Hamamatsu NanoZoomer S360MD Slide scanner with NZViewMD viewer (Predicate Viewer). | The device demonstrates substantial equivalence, with the 95th percentile of the pixel-wise color difference being less than 3 CIEDE2000 (< 3 ΔE00) for all comparisons (PathPresenter Clinical Viewer on Microsoft Edge vs. Predicate, and PathPresenter Clinical Viewer on Google Chrome vs. Predicate). |
| Turnaround Time (TAT) Study - Image Loading | Loading of the first image visible to the user: ≤ 8 seconds | Actual: 2.72 seconds |
| Turnaround Time (TAT) Study - Panning | Loading of the whole field of view after panning: ≤ 2 seconds | Actual: 1.22 seconds |
| Turnaround Time (TAT) Study - Zooming | Loading of the whole field of view after zooming: ≤ 3 seconds | Actual: 0.60 seconds |
| Measurement Accuracy | All measured values match predetermined measurements relevant to the zoom level of the viewer, with no allowable deviation. | All measured values matched the reference values exactly, with zero observed error across multiple magnification settings for both length and area measurements. |
| Human Factors Study | Critical tasks required for operation are performed accurately and without any use-related errors that could result in patient harm or diagnostic inaccuracies. Device meets HFE/UE requirements and is acceptable for deployment use in clinical settings. | Validation results confirmed that critical tasks were performed accurately and without any use-related errors that could result in patient harm or diagnostic inaccuracies. Observed usability issues were considered easily mitigable through design improvements, training, and labeling, posing no unacceptable risk. |
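The CIEDE2000 comparison in the first row is a standard perceptual color-difference computation. A minimal sketch of how a 95th-percentile ΔE00 check between two viewer renderings could be performed with scikit-image (the image sources here are synthetic placeholders, not the study's actual captures):

```python
import numpy as np
from skimage.color import rgb2lab, deltaE_ciede2000

def percentile95_de00(img_a, img_b):
    """95th percentile of per-pixel CIEDE2000 differences between two
    same-size 8-bit RGB renderings of the same field of view."""
    lab_a = rgb2lab(img_a.astype(np.float64) / 255.0)
    lab_b = rgb2lab(img_b.astype(np.float64) / 255.0)
    return np.percentile(deltaE_ciede2000(lab_a, lab_b), 95)

# Placeholder ROI captures from the subject and predicate viewers.
subject = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)
predicate = subject.copy()  # identical rendering -> dE00 of 0 everywhere
print(percentile95_de00(subject, predicate) < 3.0)  # True: meets < 3 dE00
```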
2. Sample Size Used for the Test Set and Data Provenance
- Pixelwise Comparison:
- Sample Size: Images from 30 FFPE tissue glass slides, with 3 Regions of Interest (ROIs) selected per slide and 2 zoom levels (20x and 40x) per ROI, yielding 180 image pairs (30 slides × 3 ROIs × 2 zoom levels) for each of the two browsers tested.
- Data Provenance: Not specified, but generally regulatory submissions imply real-world pathology samples. It is not explicitly stated whether the data was retrospective or prospective, nor the country of origin.
- Turnaround Time Study:
- Sample Size: Minimum of 20 slides.
- Data Provenance: Not specified.
- Measurement Accuracy:
- Sample Size: An image of a calibration scale slide with objects of known sizes.
- Data Provenance: Not specified.
- Human Factors Study:
- Sample Size: "Representative users" (board-certified pathologists). No specific number provided.
- Data Provenance: Not specified.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
- Pixelwise Comparison: One board-certified Pathologist was used to pre-identify the Regions of Interest (ROIs). Their specific years of experience are not mentioned beyond "board-certified Pathologist." This pathologist was blinded to the rest of the pixel-wise comparison study.
- Measurement Accuracy: The ground truth was established by using an image of a calibration scale slide with objects of known sizes. This does not involve human experts for ground truth establishment.
- Human Factors Study: "Board-certified pathologists" served as representative users, performing critical tasks. They were the subjects of the study, not necessarily establishing ground truth for device performance but rather testing usability and safety in a clinical context.
4. Adjudication Method for the Test Set
- The document does not explicitly describe an adjudication method for establishing ground truth for the pixelwise comparison or measurement accuracy tests.
- For the pixelwise comparison, the individual pathologist selected ROIs, but the automated comparison of pixel differences is a technical measurement against a predicate, not requiring multi-expert adjudication.
- For measurement accuracy, it was a direct comparison against known values from a calibration slide.
- The Human Factors study involved observations of user performance, not adjudication of diagnostic interpretations.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of Human Reader Improvement with AI vs. Without AI Assistance
- No, an MRMC comparative effectiveness study was not performed, or at least not described in this summary. The PathPresenter Clinical Viewer is described as a "software intended for viewing and managing whole slide images" and an "aid to pathologists." It is a viewer, not an AI diagnostic algorithm that provides an independent reading or an assistive output that would typically be evaluated in an MRMC study comparing human performance with and without AI assistance. The studies focused on technical equivalence to a predicate viewer and usability.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
- The document describes standalone performance tests for certain aspects of the software, particularly the pixelwise comparison and measurement accuracy. These tests evaluated the software's output (pixel rendering, measurement values) directly against a known reference (predicate viewer, calibration slide) without human interpretation as part of the core evaluation criteria.
- However, the overall device is an "aid to pathologists" implying human-in-the-loop use. There is no mention of the software making diagnostic recommendations or interpretations by itself.
7. The Type of Ground Truth Used
- Pixelwise Comparison: The ground truth for comparison was the image output from the predicate device (Hamamatsu NanoZoomer S360MD Slide scanner with NZViewMD viewer). The "ground truth" here is the established rendering quality of the predicate viewer.
- Turnaround Time Study: The ground truth involved established time targets for specific operations.
- Measurement Accuracy: The ground truth was the known measurements of objects on a calibration scale slide.
- Human Factors Study: The "ground truth" for success was the accurate and error-free performance of critical tasks by representative users, consistent with safety.
8. The Sample Size for the Training Set
- The provided document does not mention a training set for the PathPresenter Clinical Viewer. This is expected as the viewer is described as image management and viewing software, not an AI algorithm that learns from data.
9. How the Ground Truth for the Training Set Was Established
- As no training set is mentioned or implied for this device, information on how its ground truth was established is not applicable here.
RadiForce MX317W-PA is intended for in vitro diagnostic use to display digital images of histopathology slides acquired from IVD-labeled whole-slide imaging scanners and viewed using IVD-labeled digital pathology image viewing software that have been validated for use with this device.
RadiForce MX317W-PA is an aid to the pathologist and is used for review and interpretation of histopathology slides for the purposes of primary diagnosis. It is the responsibility of the pathologist to employ appropriate procedures and safeguards to assure the validity of the interpretation of images using this product. The display is not intended for use with digital images from frozen section, cytology, or non- formalin-fixed, paraffin embedded (non-FFPE) hematopathology specimens.
RadiForce MX317W-PA is a color LCD monitor for viewing digital images of histopathology slides. The color LCD panel employs in-plane switching (IPS) technology allowing wide viewing angles and the matrix size is 4,096 x 2,160 pixels (8MP) with a pixel pitch of 0.1674 mm.
Factory-calibrated display modes, each characterized by a specific tone curve, a specific luminance range, and a specific color temperature, are stored in lookup tables within the monitor. This helps ensure consistent tone reproduction even if a display controller or workstation must be replaced or serviced.
"Patho" is the display mode intended for digital pathology use.
The provided FDA 510(k) clearance letter for the RadiForce MX317W-PA describes a display device for digital histopathology. It does not contain information about an AI/ML medical device. Therefore, a study proving the device meets acceptance criteria related to AI/ML performance (such as accuracy, sensitivity, specificity, MRMC studies, and ground truth establishment methods for large datasets) is not present in this document.
The document primarily focuses on the technical performance and equivalence of a display monitor to a predicate device. The "performance testing" section refers to bench tests validating display characteristics like spatial resolution, luminance, and color, not the clinical performance of an AI algorithm interpreting medical images.
Given the information provided, here's an analysis based on the actual content:
Based on the provided document, the RadiForce MX317W-PA is a display monitor, not an AI/ML medical device designed for image interpretation. Therefore, the acceptance criteria and study detailed below pertain to the display's technical performance and its equivalence to a predicate display, not to an AI algorithm's diagnostic accuracy.
1. Table of Acceptance Criteria and Reported Device Performance
The document states that "the display characteristics of the RadiForce MX317W-PA meet the pre-defined criteria when criteria are set." However, the exact numerical acceptance criteria for each bench test (e.g., minimum luminance, pixel defect limits) are not explicitly listed in the provided text. The document only lists the types of tests performed and states that the device "has display characteristics equivalent to those of the predicate device" and "meet the pre-defined criteria."
| Acceptance Criteria Category | Reported Device Performance Summary (as per document) |
|---|---|
| User controls (Modes & settings) | Performed, assumed met |
| Spatial resolution | Performed, assumed met, equivalent to predicate |
| Pixel defects | Performed, assumed met, equivalent to predicate |
| Artifacts | Performed, assumed met, equivalent to predicate |
| Temporal response | Performed, assumed met, equivalent to predicate |
| Maximum and minimum luminance | Performed, assumed met, equivalent to predicate |
| Grayscale | Performed, assumed met, equivalent to predicate |
| Luminance uniformity and Mura test | Performed, assumed met, equivalent to predicate |
| Stability of luminance and chromaticity response | Performed, assumed met, equivalent to predicate |
| Bidirectional reflection distribution function | Performed, assumed met, equivalent to predicate |
| Gray Tracking | Performed, assumed met, equivalent to predicate |
| Color scale | Performed, assumed met, equivalent to predicate |
| Color gamut volume | Performed, assumed met, equivalent to predicate |
Note: The document only states that these tests were performed and that the results show equivalence to the predicate device and that the device meets pre-defined criteria. It does not provide the specific numerical results or the exact numerical acceptance criteria for each test.
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size for Test Set: The document describes bench tests performed on a single device, the RadiForce MX317W-PA (it's a physical monitor, not a software algorithm processing a dataset). There is no mention of a "test set" in the context of a dataset of medical images.
- Data Provenance: Not applicable. The "data" here refers to the measured performance characteristics of the physical display device itself during bench testing, not patient data.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- Not applicable. The ground truth for a display monitor's technical performance is established by standardized measurement equipment and protocols, not by expert interpretation of images. The device itself is the object under test for its physical characteristics.
4. Adjudication Method for the Test Set
- Not applicable. This concept applies to human or AI interpretation of medical images, where discrepancies among readers or algorithms might need resolution. For physical device performance, measurements are generally objective.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Not performed/Applicable. An MRMC study is designed to assess the performance of a diagnostic aid (like AI) on image interpretation by human readers. This device is a display monitor, not an AI algorithm. Its function is to display images, not to interpret them or assist human interpreters in a diagnostic decision-making process that would warrant an MRMC study.
6. Standalone (i.e., Algorithm Only Without Human-in-the-Loop Performance) Study
- Not applicable. As stated, this is a display monitor, not an algorithm.
7. Type of Ground Truth Used:
- The ground truth for the display's performance tests would be metrology-based standards and calibration references (e.g., standard luminance values, colorimetry standards) against which the display's output is measured. It is not expert consensus, pathology, or outcomes data, as these relate to diagnostic accuracy studies.
8. The Sample Size for the Training Set
- Not applicable. This device is hardware; it does not involve training data or machine learning algorithms.
9. How the Ground Truth for the Training Set Was Established
- Not applicable. No training set exists for this device.
For In Vitro Diagnostic Use Only
CaloPix is a software only device for viewing and management of digital images of scanned surgical pathology slides prepared from Formalin-Fixed Paraffin Embedded (FFPE) tissue.
CaloPix is intended for in vitro diagnostic use as an aid to the pathologist to review, interpret and manage these digital slide images for the purpose of primary diagnosis.
CaloPix is not intended for use with frozen sections, cytology, or non-FFPE hematopathology specimens.
It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the quality of the images obtained and the validity of the interpretation of images using CaloPix.
CaloPix is intended to be used with the interoperable components specified in the Table below:
| Scanner Hardware | Scanner Output File Format | Interoperable Displays |
|---|---|---|
| Leica Aperio GT 450 DX scanner | SVS | Dell U3223QE |
| Hamamatsu NanoZoomer S360MD Slide scanner | NDPI | JVC Kenwood JD-C240BN01A |
CaloPix, version 6.1.0 IVDUS, is a web-based software-only device that is intended to aid pathology professionals in viewing, interpreting and managing digital Whole Slide Images (WSI) of glass slides obtained from the Hamamatsu NanoZoomer S360MD slide scanner (NDPI file format) and viewed on the JVC Kenwood JD-C240BN01A display, as well as those obtained from the Leica Aperio GT 450 DX scanner (SVS file format) and viewed on the Dell U3223QE display.
CaloPix does not include any automated Image Analysis Applications that would constitute computer aided detection or diagnosis.
CaloPix is for viewing digital images of scanned glass slides that would otherwise be appropriate for manual visualization by conventional light microscopy.
As a whole, CaloPix is a pathology Image Management System (IMS) that provides case-centric digital pathology image management, collaboration, and image processing. CaloPix consists of:
- Integration with Laboratory Information Systems (LIS): Automatically obtains from the LIS the patient data associated with cases, the scanned whole slide images, and other related medical images to be analyzed. The data stored in the database is automatically updated according to the interface protocol with the LIS.
- Database: After ingestion, scanned WSI are organized in the CaloPix database, which consists of folders (cases) containing patient identification data and examination results from a LIS. Ingestion of the slides is performed through an integrated module that automatically indexes them based on patient data retrieved from the LIS. After ingestion, image files are stored in a CaloPix-specific file storage environment, which can be on premises or in the cloud.
- The CaloPix viewer component, which processes scanned whole slide images and includes functions for panning, zooming, screen capture, annotation, distance and surface measurement, and image registration. This viewer relies on image servers (IMGSRV) that extract image tiles from the whole slide image file and send these tiles to the CaloPix viewer for smooth and fast viewing.
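CaloPix's IMGSRV tile servers are proprietary, but the tile-extraction step they perform can be illustrated with the open-source OpenSlide library, which reads both of the file formats listed above (SVS and NDPI). The file name and coordinates below are hypothetical:

```python
import openslide

# Open a whole slide image; OpenSlide handles Aperio SVS and
# Hamamatsu NDPI pyramidal formats.
slide = openslide.OpenSlide("example.svs")
print(slide.dimensions)   # (width, height) at level 0
print(slide.level_count)  # number of pyramid levels

# Extract one 256x256 tile at pyramid level 1, addressed by a level-0
# coordinate -- the same kind of tile a server streams to the viewer.
tile = slide.read_region(location=(10240, 8192), level=1, size=(256, 256))
tile.convert("RGB").save("tile.png")
slide.close()
```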
The FDA 510(k) clearance letter for CaloPix indicates that the device's performance was evaluated through a series of tests to demonstrate its safety and effectiveness. The primary study described in the provided document focuses on technical performance testing rather than a clinical multi-reader multi-case (MRMC) study.
Here's a breakdown of the requested information based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
| Test | Acceptance Criteria | Reported Device Performance |
|---|---|---|
| Pixel-wise comparison (Image Reproduction Accuracy) | The 95th percentile of the pixel-wise color differences (CIEDE2000, ΔE00) in any image pair between CaloPix and the predicate device's IRMS must be less than 3 (ΔE00 < 3). | Test results showed that the 95th percentile of pixel-wise differences between CaloPix and the comparators (NZViewMD and Aperio WebViewer DX) were less than 3 CIEDE2000. The output images were considered pixel-wise identical and visually adequate. |
| Measurements of Area and Distance | Accuracy of measurements made in CaloPix reviewer confirmed by comparing to predicate device viewers. (Specific numerical criteria not provided). | Tests verified that the distance and area measurements made in the CaloPix viewer accurately reflected the measurements made in the NZViewMD viewer and the Aperio WebViewer DX viewer. |
| Turnaround Time | Image loading: ≤ 10 seconds; panning: ≤ 7 seconds; zooming: ≤ 7 seconds | Turnaround times for opening an image and panning or zooming have been determined and found to be adequate for the intended use of the subject device. (Specific measured times not provided, but stated to meet criteria.) |
| Human Factors Validation Study | CaloPix to be safe and effective for the intended users, uses, and use environments (per FDA guidance "Applying Human Factors and Usability Engineering to Medical Devices (2016)"). | CaloPix has been found to be safe and effective for the intended users, uses and use environments. |
2. Sample Size and Data Provenance for Test Set
- Sample Size for Pixel-wise Comparison: 30 whole slide images (WSIs). Specifically:
- 25 H&E-stained (Formalin-Fixed Paraffin Embedded - FFPE) tissue glass slides.
- 5 IHC-stained (Masson's trichrome stain, CD8, CD3, or CD20) FFPE tissue glass slides.
- Data Provenance: Not explicitly stated regarding country of origin. The study appears to be a technical performance test using pre-scanned slides. It is unclear if these were from a retrospective or prospective collection for the purpose of this specific test. The mention of "FFPE tissue glass slides" implies clinical relevance.
3. Number of Experts and Qualifications for Ground Truth
- Number of Experts: One pathologist.
- Qualifications of Experts: The text states, "as verified by a pathologist," but does not specify the pathologist's experience level (e.g., years of experience, board certification). This was for identifying "relevant pathological features" for ROIs.
4. Adjudication Method for Test Set
- No "adjudication method" in the sense of multiple readers coming to a consensus for diagnostic ground truth is described. The ground truth for the pixel-wise comparison was based on pre-identified Regions of Interest (ROIs) verified by a single pathologist. For the measurement and turnaround time tests, the comparison was against predicate device software performance.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- No MRMC comparative effectiveness study was done comparing human reader performance with and without AI assistance. The described tests are technical performance evaluations of the viewing and management software's rendering accuracy, measurement functions, and speed.
6. Standalone (Algorithm Only) Performance
- This device, CaloPix, is described as a "software only device for viewing and management of digital images." It "does not include any automated Image Analysis Applications that would constitute computer aided detection or diagnosis." Therefore, no standalone (algorithm only) performance, as typically understood for an AI/CADe/CADx device, was conducted or is applicable since the device's function is solely as a viewing platform, not an analytical algorithm.
7. Type of Ground Truth Used
- Technical Ground Truth: For the pixel-wise comparison, the "ground truth" was established by comparing CaloPix's image rendering output against the image rendering of the predicate device's Image Review Manipulation Software (IRMS) (i.e., NZViewMD and Aperio WebViewer DX), with the acceptance criteria based on CIEDE2000 values. The relevant pathological features within the slides were identified by a pathologist for ROI selection.
- For measurements, the ground truth was the measurements obtained from the predicate device's IRMS.
- For turnaround time, the ground truth was the pre-defined time limits.
- For human factors, the ground truth was the satisfaction of usability and safety criteria.
8. Sample Size for the Training Set
- Not Applicable / Not Provided: As CaloPix is a viewing and management software without AI/CAD algorithms, there is no mention of a "training set" in the context of machine learning model development. The tests described are functional and technical performance evaluations, not AI model validation.
9. How Ground Truth for Training Set was Established
- Not Applicable / Not Provided: See point 8.
For In Vitro Diagnostic Use
Viewer+ is a software only device intended for viewing and management of digital images of scanned surgical pathology slides prepared from formalin-fixed paraffin embedded (FFPE) tissue. It is an aid to the pathologist to review, interpret and manage digital images of pathology slides for primary diagnosis. Viewer+ is not intended for use with frozen sections, cytology, or non-FFPE hematopathology specimens.
It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the quality of the images obtained and, where necessary, use conventional light microscopy review when making a diagnostic decision. Viewer+ is intended for use with Hamamatsu NanoZoomer S360MD Slide scanner and BARCO MDPC-8127 display.
Viewer+, version 1.0.1, is a web-based software device that facilitates the viewing and navigating of digitized pathology images of slides prepared from FFPE-tissue specimens acquired from Hamamatsu NanoZoomer S360MD Slide scanner and viewed on BARCO MDPC-8127 display. Viewer+ renders these digitized pathology images for review, management, and navigation for pathology primary diagnosis.
Viewer+ is operated as follows:
- Image acquisition is performed using the NanoZoomer S360MD Slide scanner according to its Instructions for Use. The operator performs quality control of the digital slides per the instructions of the NanoZoomer and lab specifications to determine if re-scans are necessary.
- Once image acquisition is complete and the image becomes available in the scanner's database file system, a separate medical image communications software (not part of the device) automatically uploads the image and its corresponding metadata to persistent cloud storage. Image and data integrity checks are performed during the upload to ensure data accuracy (see the checksum sketch after this list).
- The subject device enables the reading pathologist to open a patient case, view the images, and perform actions such as zooming, panning, measuring distances and areas, and annotating images as needed. After reviewing all images for a case, the pathologist will render a diagnosis.
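The summary does not say which integrity mechanism the uploader uses; a common approach is to compare a cryptographic digest computed before and after transfer. A minimal sketch using Python's standard library (file paths are hypothetical):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream a slide file in 1 MiB chunks and return its SHA-256
    digest; comparing digests before and after upload is one way to
    verify image and metadata integrity."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical check: digest of the local file vs. the stored copy.
assert sha256_of("slide.ndpi") == sha256_of("uploaded/slide.ndpi")
```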
Here's a breakdown of the acceptance criteria and the study details for the Viewer+ device, based on the provided FDA 510(k) summary:
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criterion | Reported Device Performance |
|---|---|
| Pixel-wise comparison (of images reproduced by Viewer+ and NZViewMD for the same file generated from NanoZoomer S360md Slide Scanner) | The 95th percentile of pixel-wise differences between Viewer+ and NZViewMD was less than 3 CIEDE2000, indicating their output images are pixel-wise identical and visually adequate. |
| Turnaround time (for opening, panning, and zooming an image) | Found to be adequate for the intended use of the device. |
| Measurement accuracy (using scanned images of biological slides) | Viewer+ was found to perform accurate measurements with respect to its intended use. |
| Usability testing | Demonstrated that the subject device is safe and effective for the intended users, uses, and use environments. |
2. Sample Size Used for the Test Set and Data Provenance
The document does not explicitly state the specific sample size of images or cases used for the "Test Set" in the performance studies. It mentions "scanned images of the biological slides" for measurement accuracy and "images reproduced by Viewer+ and NZViewMD for the same file" for pixel-wise comparison.
The data provenance (country of origin, retrospective/prospective) is also not specified in the provided text.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
The document does not specify the number of experts or their qualifications used to establish ground truth for the test set. It mentions that the device is "an aid to the pathologist" and that "It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the quality of the images obtained and, where necessary, use conventional light microscopy review when making a diagnostic decision." However, this relates to the intended use and not a specific part of the performance testing described.
4. Adjudication Method for the Test Set
The document does not describe any specific adjudication method (e.g., 2+1, 3+1) used for establishing ground truth or evaluating the test set results. The pixel-wise comparison relies on quantitative color differences, and usability is assessed according to FDA guidance.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No multi-reader multi-case (MRMC) comparative effectiveness study comparing human readers with and without AI assistance is mentioned or implied in the provided text. The device is a "viewer" and not an AI-assisted diagnostic tool that would typically involve such a study.
6. Standalone Performance (Algorithm Only without Human-in-the-Loop)
The performance tests described (pixel-wise comparison, turnaround time, measurements) primarily relate to the technical functionality of the Viewer+ software itself, which is a viewing and management tool. These tests can be interpreted as standalone assessments of the software's performance in rendering images and providing basic functions like measurements. However, it's crucial to note that Viewer+ is an "aid to the pathologist" and not intended to provide automated diagnoses without human intervention. The "standalone" performance here refers to its core functionalities as a viewer, not as an autonomous diagnostic algorithm.
7. Type of Ground Truth Used
- Pixel-wise comparison: The ground truth for this test was the image reproduced by the predicate device's software (NZViewMD) for the same scanned file. The comparison was quantitative (CIEDE2000).
- Measurements: The ground truth would likely be established by known physical dimensions on the biological slides, verified by other means, or through precise calibration. The document states "Measurement accuracy has been verified using scanned images of the biological slides."
- Usability testing: The ground truth here is the fulfillment of usability requirements and user satisfaction/safety criteria, as assessed against FDA guidance.
8. Sample Size for the Training Set
The document does not mention the existence of a "training set" in the context of the Viewer+ device. This is a software-only device for viewing and managing images, not an AI/ML algorithm that typically requires a training set for model development.
9. How the Ground Truth for the Training Set Was Established
As no training set is mentioned for this device, information on how its ground truth was established is not applicable.