The ACIS Automated Cellular Imaging System is intended for in vitro diagnostic use as an aid to the pathologist in the classification and counting of cells of interest based on particular color, size, and shape.
The Automated Cellular Imaging System (ACIS) is an automated intelligent-microscope cell-locating device that detects cells (objects) of interest by color and pattern recognition techniques. The system consists of software resident in computer memory and includes a keyboard, color monitor, microscope, printer, and automatic slide-handling equipment, controlled and operated by a health care professional for interpretation and diagnosis.
The ChromaVision Medical Systems, Inc. Automated Cellular Imaging System (ACIS) is intended as an aid to pathologists in the classification and counting of cells of interest. The studies provided demonstrate the device's reproducibility, accuracy, sensitivity, and specificity, particularly in the context of identifying cytokeratin-positive tumor cells in bone marrow specimens.
Here's an analysis of the acceptance criteria and the studies conducted:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state pre-defined quantitative acceptance criteria in a formal table with pass/fail thresholds. Instead, it presents performance characteristics and studies demonstrating the device's capabilities in comparison to manual methods and between instruments/pathologists. Based on the reported findings, the implicit acceptance criteria appear to be:
| Performance Characteristic | Implicit Acceptance Criteria | Reported Device Performance |
|---|---|---|
Reproducibility | Consistent identification and presentation of specific cells (XY coordinates) across multiple runs and instruments. Minimal variability in cell counts across runs, instruments, and pathologists. | Between-Instrument Reproducibility (Study 1): 100% reproducibility in identifying the same tumor cells by location across 3 ACIS systems over repeated scanning (n=27, 3 slides run 3 times on 3 ACIS). CV% and SD were 0. Between-Instrument Reproducibility (Study 2): Perfect agreement in tumor cell counts (CV% and SD of 0 for all variance components) across 3 ACIS systems over repeated runs (5 times each) by the same pathologist, for 4 cytospin slides (2 biological, 2 spiked). Between-Pathologist Reproducibility: Differences in tumor cell counts between pathologists for ACIS-assisted method (-3 to +32) were similar to manual counts (-4 to +13), indicating ACIS does not exacerbate inter-pathologist variability. |
Accuracy / Correlation | High agreement with manual microscopy in identifying the presence or absence of tumor cells. | Study 1 (Spiked Specimen): 100% overall agreement between ACIS-assisted reading and manual microscopy for identifying the presence or absence of tumor cells in 30 spiked and normal bone marrow slides. Study 2 (Real Tumor Specimen): In 17 out of 39 cases (44%), ACIS-assisted method identified tumor cells that were overlooked by manual microscopy. In 3 cases, ACIS-assisted method re-classified specimens as non-tumor, contradicting manual microscopy. These discrepancies were verified by a second blinded independent manual and ACIS read by a third pathologist, with 100% verification of ACIS observations (21 of 21 cases). |
Sensitivity | Ability to detect tumor cells, including those difficult to identify manually. | ACIS-assisted method identified tumor cells that were initially overlooked by manual microscopy in 17 out of 39 cases (44%). This suggests improved sensitivity over manual microscopy in these challenging real tumor specimens. |
Specificity | Ability to correctly identify the absence of tumor cells. | In Study 1 (Spiked Specimen), ACIS-assisted method correctly identified 10 out of 10 cases without tumor cells, demonstrating 100% specificity for absence of tumor. In Study 2 (Real Tumor Specimen), ACIS-assisted method led to re-classification of 3 cases from positive (manual) to negative, implying ACIS can aid in more specific identification. |
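The reproducibility and agreement metrics in the table above can be made concrete with a short sketch. The counts below are illustrative placeholders, not data from the 510(k) summary; only the zero-variance outcome (SD and CV% of 0 for identical repeated counts) and the 100% overall agreement result are drawn from the document.

```python
# Sketch of how CV%, SD, and overall percent agreement could be computed
# for repeated tumor-cell counts and presence/absence comparisons.
# All input numbers here are hypothetical, for illustration only.
from statistics import mean, stdev

def cv_percent(counts):
    """Coefficient of variation (%) of repeated tumor-cell counts."""
    m = mean(counts)
    return 0.0 if m == 0 else 100.0 * stdev(counts) / m

# Identical counts across runs/instruments yield SD = 0 and CV% = 0,
# matching the reported between-instrument reproducibility result.
runs = [48, 48, 48, 48, 48]
print(stdev(runs), cv_percent(runs))  # -> 0.0 0.0

def overall_agreement(method_a, method_b):
    """Fraction of cases where two methods agree on tumor presence."""
    matches = sum(a == b for a, b in zip(method_a, method_b))
    return matches / len(method_a)

# Hypothetical spiked-study reads: full concordance gives 100% agreement.
acis   = [True] * 20 + [False] * 10
manual = [True] * 20 + [False] * 10
print(overall_agreement(acis, manual))  # -> 1.0
```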
2. Sample Size Used for the Test Set and Data Provenance
- Reproducibility Study 1:
- Test Set Sample Size: 3 full slides, each run 3 times on 3 different ACIS systems (total of 27 runs).
- Data Provenance: Clinical specimens (heparinized bone marrow) from human subjects with breast cancer. Prospectively processed for the study.
- Reproducibility Study 2:
- Test Set Sample Size: 4 cytospin slides (2 biological from human donors with breast cancer, 2 spiked from normal human donors) each read 5 times on 3 different ACIS systems (total of 60 reads).
- Data Provenance: Heparinized bone marrow from human subjects with breast cancer (biological) and normal human donors spiked with tissue-cultured human carcinoma cells (spiked). Prospectively processed for the study.
- Accuracy/Correlation Study 1 (Spiked Specimen):
- Test Set Sample Size: 30 slides (2 sets of 10 spiked slides with approx. 4 and 50 tumor cells respectively, plus an additional set of 10 normal human bone marrow slides).
- Data Provenance: Normal human bone marrow specimens, either spiked with tissue-cultured human breast carcinoma cells or normal. Prospectively processed for the study.
- Accuracy/Correlation Study 2 (Real Tumor Specimen):
- Test Set Sample Size: 39 heparinized human bone marrow specimens from patients with breast cancer.
- Data Provenance: Clinical specimens (heparinized human bone marrow) from patients with breast cancer. Retrospective: these were "actual human clinical tumor specimens" whose manual reads had originally been performed for clinical purposes and which were re-analyzed for the study at a later date.
- Between Pathologist Reproducibility Study:
- Test Set Sample Size: 11 slides.
- Data Provenance: Heparinized bone marrow from human subjects with breast cancer. Prospectively processed for the study.
The country of origin for the data is not specified, but the specimens are from human subjects/patients.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
- Reproducibility Study 1: Ground truth for tumor cell locations (XY coordinates) was established by an "exhaustive manual scan" of each slide. The tumor cells identified by the pathologist during the ACIS review were compared to these manual scans. Although a pathologist performed the manual scan, the number of readers and their specific qualifications are not detailed beyond being capable of "intensive examination."
- Reproducibility Study 2: Ground truth (initial manual count) was established by the "same pathologist" who later used the ACIS. Qualifications are not specified.
- Accuracy/Correlation Study 1: Ground truth for the presence/absence of tumor cells was based on the knowledge of spiking (for spiked samples) and confirmed by "manual microscopy" for the overall agreement comparison. A "single pathologist" read the slides manually and with ACIS. Qualifications are not specified.
- Accuracy/Correlation Study 2:
- Initial ground truth (manual microscopy results) was established by "two different pathologists in two different laboratories" for each of the 39 specimens.
- For verification of discrepant results, a "third pathologist" performed a "second blinded independent manual and ACIS read."
- Qualifications of these pathologists are not specified beyond being pathologists.
- Between Pathologist Reproducibility Study: Ground truth was not explicitly established as the study aimed to compare inter-pathologist variability. "Two different pathologists" read the 11 slides manually and with ACIS. Qualifications are not specified.
4. Adjudication Method for the Test Set
- Reproducibility Study 1: Comparison against an "exhaustive manual scan" (presumably by a single expert) to ensure consistent cell presentation. No explicit adjudication process for disagreements is mentioned, as the system achieved 100% agreement.
- Reproducibility Study 2: Comparison against an "initial manual count" by the same pathologist. No explicit adjudication process for disagreements is mentioned, as the system achieved perfect agreement.
- Accuracy/Correlation Study 1: For spiked specimens, the "number of cases with tumor" was known by design (spiking levels). For the "correlation" part (ACIS vs. Manual), the single pathologist was blinded to the other method's results. No specific adjudication for discordant results is described, as 100% overall agreement was reported.
- Accuracy/Correlation Study 2:
- Initial manual reads were done by two different pathologists.
- For the reported table comparing manual to ACIS-assisted, it seems the combined manual results served as a reference.
- For the 20 discrepant cases (17 re-classified as positive and 3 as negative by the ACIS-assisted method, confirmed on re-analysis), a "third pathologist" performed a "blinded re-analysis" using both manual and ACIS methods. This implies an adjudication process in which the third pathologist's findings served to confirm the ACIS observations.
- Between Pathologist Reproducibility Study: No adjudication method described as it was a study of inter-pathologist variability.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
Yes, elements of MRMC comparative effectiveness were included, particularly in Accuracy/Correlation Study 2 and the Between Pathologist Reproducibility Study.
- Accuracy/Correlation Study 2: This study involved multiple readers (two initial pathologists, then a third for verification) and multiple cases (39 real tumor specimens). It compared manual microscopy to ACIS-assisted reading.
- Effect Size of Human Reader Improvement with vs. Without AI Assistance: In 17 out of 39 cases (44%), the pathologist, assisted by the ACIS device, identified tumor cells that "had been overlooked using manual microscopy." This indicates a substantial improvement in detection for these challenging cases. Additionally, in 3 cases, ACIS assistance led to a re-classification from positive to negative, suggesting improved specificity or a reduction in false positives. The impact of the ACIS on pathologist performance is significant: it enabled detection of previously missed positive cases and prompted re-evaluation of others.
- Between Pathologist Reproducibility Study: While not directly quantifying improvement, it noted that "the differences in tumor cell counts between the pathologists ranged from -4 to +13 for manual counts and from -3 to +32 for ACIS-assisted tumor cell counts. The differences were similar for both methods." It concluded that ACIS provides "an equal or greater number of candidate cells for classification" and "the differences which exist between pathologists in their identification procedures are not expected to be affected by use of the ACIS device." This suggests ACIS doesn't worsen inter-reader variability, and potentially provides more comprehensive data for review.
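The between-pathologist comparison above summarizes variability as the range of per-slide count differences. A minimal sketch of that summary, using hypothetical counts (the study's 11-slide data are not reproduced in the document, only the resulting ranges of -4 to +13 manual and -3 to +32 ACIS-assisted):

```python
# Sketch of the per-slide difference-range summary used in the
# between-pathologist reproducibility comparison. Counts are illustrative.
def difference_range(counts_a, counts_b):
    """Min and max of per-slide count differences (pathologist B - A)."""
    diffs = [b - a for a, b in zip(counts_a, counts_b)]
    return min(diffs), max(diffs)

# Hypothetical paired counts for a few slides (not real study data).
path_a = [10, 25, 40, 7]
path_b = [9, 30, 45, 20]
print(difference_range(path_a, path_b))  # -> (-1, 13)
```

Comparing such ranges between the manual and ACIS-assisted methods is how the study supports its conclusion that ACIS does not exacerbate inter-pathologist variability.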
6. If a Standalone Study Was Done
The ACIS is described as "an aid to the pathologist," and all studies describe "ACIS-assisted" performance or involve a pathologist's review of ACIS output (e.g., location of IHC stained positive cells, montage images).
While the system does automatically detect cells ("detects cells (objects) of interest, by color and pattern recognition techniques"), the performance data consistently integrates the pathologist's interpretation as part of the overall system's effectiveness. Therefore, a purely standalone (algorithm only without human-in-the-loop performance) study is not explicitly presented in this document. The results always reflect a human-AI collaboration.
7. The Type of Ground Truth Used
The ground truth varied depending on the study:
- Expert Consensus / Expert Review:
- Reproducibility Study 1: An "exhaustive manual scan" presumably by an expert pathologist, serving as the reference for XY coordinates.
- Reproducibility Study 2: "Initial manual count" performed by a pathologist.
- Accuracy/Correlation Study 2: Initial "manual microscopy" performed by two pathologists. Discrepant findings were further verified by a "second blinded independent manual and ACIS read by a third pathologist." This leans heavily on expert review/consensus.
- Known by Design (Spiked Samples):
- Accuracy/Correlation Study 1: For the spiked specimens, the presence and approximate number of tumor cells were "known" by the nature of the experimental setup (spiking with known numbers of cells). This served as a strong reference for accuracy.
There is no mention of histopathology confirmation or long-term outcomes data serving as direct ground truth.
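Because the spiked slides carry a design-level truth (approximately 4 or 50 tumor cells per slide, or none), each read can be scored directly against the spiking scheme. A hedged sketch of that scoring, with illustrative inputs rather than the study's actual per-slide reads:

```python
# Sketch of scoring reads against a known-by-design reference:
# spiked slides have a designed tumor-cell count (0 = no tumor),
# so presence/absence calls can be scored for sensitivity and
# specificity against the design. Inputs are hypothetical.
def score_against_design(spiked_counts, detected):
    """Sensitivity and specificity vs. the spiking design (0 = no tumor)."""
    tp = sum(1 for k, d in zip(spiked_counts, detected) if k > 0 and d)
    fn = sum(1 for k, d in zip(spiked_counts, detected) if k > 0 and not d)
    tn = sum(1 for k, d in zip(spiked_counts, detected) if k == 0 and not d)
    fp = sum(1 for k, d in zip(spiked_counts, detected) if k == 0 and d)
    return tp / (tp + fn), tn / (tn + fp)

spikes   = [4, 4, 50, 50, 0, 0]                       # designed cells per slide
detected = [True, True, True, True, False, False]     # hypothetical calls
print(score_against_design(spikes, detected))  # -> (1.0, 1.0)
```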
8. Sample Size for the Training Set
The document does not specify the sample size for the training set used to develop or train the ACIS algorithms. The focus of this 510(k) summary is on the validation studies demonstrating the device's performance after its development.
9. How the Ground Truth for the Training Set Was Established
Since a training set size is not provided, the method for establishing its ground truth is also not detailed in this document.
§ 864.5260 Automated cell-locating device.
(a) Identification. An automated cell-locating device is a device used to locate blood cells on a peripheral blood smear, allowing the operator to identify and classify each cell according to type. (Peripheral blood is blood circulating in one of the body's extremities, such as the arm.)
(b) Classification. Class II (performance standards).