510(k) Data Aggregation
(238 days)
AISight Dx is a software only device intended for viewing and management of digital images of scanned surgical pathology slides prepared from formalin-fixed paraffin embedded (FFPE) tissue. It is an aid to the pathologist to review, interpret, and manage digital images of these slides for primary diagnosis. AISight Dx is not intended for use with frozen sections, cytology, or non-FFPE hematopathology specimens.
It is the responsibility of a qualified pathologist to employ appropriate procedures and safeguards to assure the quality of the images obtained and, where necessary, to use conventional light microscopy review when making a diagnostic decision. AISight Dx is intended to be used with interoperable displays, scanners and file formats, and web browsers that have been 510(k) cleared for use with AISight Dx, or with 510(k)-cleared displays, scanners, file formats, and web browsers that have been assessed in accordance with the Predetermined Change Control Plan (PCCP) for qualifying interoperable devices.
AISight Dx is a web-based, software-only device that is intended to aid pathology professionals in viewing, interpretation, and management of digital whole slide images (WSI) of scanned surgical pathology slides prepared from formalin-fixed, paraffin-embedded (FFPE) tissue obtained from Hamamatsu NanoZoomer S360MD Slide scanner or Leica Aperio GT 450 DX scanner (Table 1). It aids the pathologist in the review, interpretation, and management of pathology slide digital images used to generate a primary diagnosis.
Here's a breakdown of the acceptance criteria and the study details for the AISight Dx device, based on the provided FDA 510(k) Clearance Letter:
Acceptance Criteria and Reported Device Performance
| Acceptance Criteria Category | Specific Acceptance Criteria | Reported Device Performance |
|---|---|---|
| Pixel-wise Comparison | Identical image reproduction (maximum per-pixel CIEDE2000 difference < 1) | Maximum per-pixel difference was 0 CIEDE2000, indicating pixel-wise identical output images. Meets criteria. |
| Non-inferiority to Glass Slide Reads (Major Discordance Rate, Hamamatsu Scanner) | Upper limit of the 95% CI for the difference in major discordance rates (MD-vs-GT minus MO-vs-GT) less than 4%. | Upper limit of the 95% CI was 1.16%. Meets criteria. |
| Non-inferiority to Glass Slide Reads (Major Discordance Rate, Leica Scanner) | Upper limit of the 95% CI for the difference in major discordance rates (MD-vs-GT minus MO-vs-GT) less than 4%. | Upper limit of the 95% CI was 2.52%. Meets criteria. |
| Turnaround Time | Adequate for intended use (image processing, loading, panning, zooming). | Test results showed these to be adequate for the intended use. Meets criteria. |
| Measurement Accuracy | Accurate distance and area measurements. | Tests verified that distances and areas measured in AISight Dx accurately reflected those on a calibrated slide. Meets criteria. |
| Human Factors | Safe and effective for intended users, uses, and use environments. | AISight Dx has been found to be safe and effective for the intended users, uses, and use environments. Meets criteria. |
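The pixel-wise acceptance criterion in the table can be illustrated with a short sketch. This is not the manufacturer's verification code; it assumes RGB input images and uses scikit-image's CIEDE2000 implementation, and the function name `max_ciede2000` is illustrative.

```python
# Sketch: checking pixel-wise identity of two rendered images using the
# CIEDE2000 color difference, as in the acceptance criterion
# (maximum per-pixel dE00 < 1). Assumes RGB images as float arrays in [0, 1].
import numpy as np
from skimage.color import rgb2lab, deltaE_ciede2000

def max_ciede2000(img_a: np.ndarray, img_b: np.ndarray) -> float:
    """Return the maximum per-pixel CIEDE2000 difference between two RGB images."""
    lab_a = rgb2lab(img_a)
    lab_b = rgb2lab(img_b)
    return float(deltaE_ciede2000(lab_a, lab_b).max())

# Identical inputs yield a maximum difference of 0, matching the
# "pixelwise identical" result reported for the device.
reference = np.random.default_rng(0).random((32, 32, 3))
assert max_ciede2000(reference, reference.copy()) == 0.0
```

A criterion of ΔE00 < 1 corresponds to a color difference generally considered imperceptible to a human observer, so a maximum of 0 is the strongest possible result.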
Study Details
**Sample size used for the test set and the data provenance:**
- The document states that two separate clinical studies were conducted, one for each scanner (Hamamatsu NanoZoomer S360MD and Leica Aperio GT 450 DX).
- The sample sizes for these clinical studies are not explicitly stated in the provided text.
- Data Provenance: Not explicitly mentioned, but the study compares performance against "the original sign-out pathologic diagnosis using MO [ground truth, (GT)] rendered at the institution," suggesting the data is derived from clinical practice, likely retrospective or a mix, given the "original sign-out" aspect. The country of origin is not specified.
**Number of experts used to establish the ground truth for the test set and the qualifications of those experts:**
- The study involved "3 reading pathologists" for assessing the differences in major discordance rates.
- Qualifications of experts: Not explicitly stated, but they are referred to as "reading pathologists," indicating they are qualified to make primary diagnoses.
**Adjudication method for the test set:**
- The "reference (main) diagnosis" was the "original sign-out pathologic diagnosis using MO [ground truth, (GT)] rendered at the institution."
- The document implies that this original sign-out diagnosis served as the ground truth. There is no explicit mention of an adjudication process (e.g., 2+1 or 3+1 consensus) to establish this ground truth beyond the initial clinical diagnosis. The major discordance rates were calculated for MD (manual digital read) vs. GT and for MO (manual optical read) vs. GT.
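The non-inferiority criterion above (upper limit of the 95% CI for the difference in major discordance rates below 4%) can be sketched as follows. The counts and sample size are hypothetical, since the document does not report them, and a simple unpaired Wald interval is used here, whereas the actual study likely applied a paired or cluster-adjusted analysis.

```python
# Sketch: Wald 95% CI upper bound for the difference in two discordance
# rates (MD-vs-GT minus MO-vs-GT), compared against a 4% margin.
# All counts below are hypothetical illustrations, not study data.
import math

def noninferiority_upper_bound(d_md: int, d_mo: int, n: int,
                               margin: float = 0.04, z: float = 1.96):
    """Return (upper bound of 95% CI for p_md - p_mo, whether it is
    below the non-inferiority margin). Unpaired approximation."""
    p_md, p_mo = d_md / n, d_mo / n
    se = math.sqrt(p_md * (1 - p_md) / n + p_mo * (1 - p_mo) / n)
    upper = (p_md - p_mo) + z * se
    return upper, upper < margin

# Hypothetical example: 12 MD-vs-GT and 10 MO-vs-GT major discordances
# among 1,000 reads gives an upper bound of about 1.1%, under the 4% margin.
upper, ok = noninferiority_upper_bound(12, 10, 1000)
```

With this framing, the reported upper bounds of 1.16% (Hamamatsu) and 2.52% (Leica) both fall well inside the 4% margin.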
**If a multi-reader, multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:**
- Yes, an MRMC-like study was done as it involved "3 reading pathologists" evaluating cases using both manual digital (MD) and manual optical (MO) methods.
- Effect size of improvement with AI vs without AI assistance: This study did not measure the improvement of human readers with AI assistance. The AISight Dx is presented as a viewer and management software, not an AI-assisted diagnostic tool. The study aimed to demonstrate non-inferiority of digital viewing (MD) versus traditional optical viewing (MO) for primary diagnosis, where the software is simply the viewing platform, not an aid in interpretation itself. Therefore, no "effect size of human readers improving with AI vs without AI assistance" is reported because the device is not described as providing AI assistance for diagnostic tasks in this context.
**If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:**
- No. The AISight Dx is explicitly described as "an aid to the pathologist to review, interpret, and manage digital images." The clinical study evaluated "manual digital read (MD)" which is a human pathologist reading digital slides using the AISight Dx, compared to "manual optical (MO)" which is a human pathologist reading glass slides. The device is not an autonomous AI diagnostic algorithm.
**The type of ground truth used (expert consensus, pathology, outcomes data, etc.):**
- The ground truth (GT) for the clinical study was the original sign-out pathologic diagnosis using manual optical microscopy (MO) rendered at the institution. This can be categorized as a form of expert (pathologist) ground truth based on clinical practice/standard of care.
**The sample size for the training set:**
- The document for AISight Dx does not mention a training set size. This is expected as AISight Dx is described as a viewing and management software, not an AI model that requires a training set for diagnostic capabilities. The performance data focuses on its function as a display and interpretation platform for human pathologists.
**How the ground truth for the training set was established:**
- As no training set for an AI model is described within the AISight Dx software (it is a viewer), there is no information on how ground truth for such a set would have been established.