Found 4 results

510(k) Data Aggregation

    K Number: K984188
    Date Cleared: 1999-07-28 (247 days)
    Regulation Number: 864.5260
    Reference & Predicate Devices: N/A
    Matched on reference device: K925670/A

    Intended Use

    The ACIS Automated Cellular Imaging System is intended for In Vitro Diagnostic Use as an aid to the pathologist in the classification and counting of cells of interest based on particular color, size, and shape.

    Device Description

    The Automated Cellular Imaging System (ACIS) is an automated intelligent-microscope cell-locating device that detects cells (objects) of interest by color and pattern recognition techniques. The system consists of software resident in computer memory and includes a keyboard, color monitor, microscope, printer, and automatic slide handling equipment, controlled and operated by a health care professional for interpretation and diagnosis.

    AI/ML Overview

    The ChromaVision Medical Systems, Inc. Automated Cellular Imaging System (ACIS) is intended as an aid to pathologists in the classification and counting of cells of interest. The studies provided demonstrate the device's reproducibility, accuracy, sensitivity, and specificity, particularly in the context of identifying cytokeratin-positive tumor cells in bone marrow specimens.

    Here's an analysis of the acceptance criteria and the studies conducted:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state pre-defined quantitative acceptance criteria in a formal table with pass/fail thresholds. Instead, it presents performance characteristics and studies demonstrating the device's capabilities in comparison to manual methods and between instruments/pathologists. Based on the reported findings, the implicit acceptance criteria appear to be:

    • Reproducibility
      • Implicit acceptance criteria: Consistent identification and presentation of specific cells (XY coordinates) across multiple runs and instruments; minimal variability in cell counts across runs, instruments, and pathologists.
      • Reported device performance: Between-instrument reproducibility (Study 1): 100% reproducibility in identifying the same tumor cells by location across 3 ACIS systems over repeated scanning (n=27; 3 slides run 3 times on each of 3 ACIS systems), with CV% and SD of 0. Between-instrument reproducibility (Study 2): perfect agreement in tumor cell counts (CV% and SD of 0 for all variance components) across 3 ACIS systems, with 4 cytospin slides (2 biological, 2 spiked) each run 5 times by the same pathologist. Between-pathologist reproducibility: differences in tumor cell counts between pathologists for the ACIS-assisted method (-3 to +32) were similar to manual counts (-4 to +13), indicating that ACIS does not exacerbate inter-pathologist variability.
    • Accuracy / Correlation
      • Implicit acceptance criteria: High agreement with manual microscopy in identifying the presence or absence of tumor cells.
      • Reported device performance: Study 1 (spiked specimens): 100% overall agreement between ACIS-assisted reading and manual microscopy for identifying the presence or absence of tumor cells in 30 spiked and normal bone marrow slides. Study 2 (real tumor specimens): in 17 of 39 cases (44%), the ACIS-assisted method identified tumor cells that were overlooked by manual microscopy; in 3 cases, it re-classified specimens as non-tumor, contradicting manual microscopy. These discrepancies were verified by a second blinded independent manual and ACIS read by a third pathologist, with 100% verification of the ACIS observations (21 of 21 cases).
    • Sensitivity
      • Implicit acceptance criteria: Ability to detect tumor cells, including those difficult to identify manually.
      • Reported device performance: The ACIS-assisted method identified tumor cells initially overlooked by manual microscopy in 17 of 39 cases (44%), suggesting improved sensitivity over manual microscopy in these challenging real tumor specimens.
    • Specificity
      • Implicit acceptance criteria: Ability to correctly identify the absence of tumor cells.
      • Reported device performance: In Study 1 (spiked specimens), the ACIS-assisted method correctly identified 10 of 10 cases without tumor cells (100% specificity for absence of tumor). In Study 2 (real tumor specimens), ACIS assistance led to re-classification of 3 cases from positive (manual) to negative, implying that ACIS can aid more specific identification.
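
    For clarity, the sketch below shows how between-instrument reproducibility statistics of the kind reported above (SD and CV%) are computed. The counts are hypothetical, not figures from the submission; identical counts on every run and instrument yield the SD and CV% of 0 that the studies report.

```python
import statistics

# counts[instrument] = tumor-cell counts for one slide over repeated runs
# (hypothetical values; the submission reports perfect agreement)
counts = {
    "ACIS-1": [52, 52, 52],
    "ACIS-2": [52, 52, 52],
    "ACIS-3": [52, 52, 52],
}

all_counts = [c for runs in counts.values() for c in runs]
mean = statistics.mean(all_counts)
sd = statistics.stdev(all_counts)          # sample standard deviation
cv_pct = 100 * sd / mean if mean else 0.0  # coefficient of variation, %

print(f"mean={mean:.1f}  SD={sd:.2f}  CV%={cv_pct:.2f}")
# Identical counts on every run and instrument give SD = 0 and CV% = 0.
```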

    2. Sample Size Used for the Test Set and Data Provenance

    • Reproducibility Study 1:
      • Test Set Sample Size: 3 full slides, each run 3 times on 3 different ACIS systems (total of 27 runs).
      • Data Provenance: Clinical specimens (heparinized bone marrow) from human subjects with breast cancer. Prospectively processed for the study.
    • Reproducibility Study 2:
      • Test Set Sample Size: 4 cytospin slides (2 biological from human donors with breast cancer, 2 spiked from normal human donors) each read 5 times on 3 different ACIS systems (total of 60 reads).
      • Data Provenance: Heparinized bone marrow from human subjects with breast cancer (biological) and normal human donors spiked with tissue-cultured human carcinoma cells (spiked). Prospectively processed for the study.
    • Accuracy/Correlation Study 1 (Spiked Specimen):
      • Test Set Sample Size: 30 slides (2 sets of 10 spiked slides with approx. 4 and 50 tumor cells respectively, plus an additional set of 10 normal human bone marrow slides).
      • Data Provenance: Normal human bone marrow specimens, either spiked with tissue-cultured human breast carcinoma cells or normal. Prospectively processed for the study.
    • Accuracy/Correlation Study 2 (Real Tumor Specimen):
      • Test Set Sample Size: 39 heparinized human bone marrow specimens from patients with breast cancer.
      • Data Provenance: Clinical specimens (heparinized human bone marrow) from patients with breast cancer. Retrospective, as these were "actual human clinical tumor specimens" analyzed at a later date, but the manual reads were done initially for clinical purposes.
    • Between Pathologist Reproducibility Study:
      • Test Set Sample Size: 11 slides.
      • Data Provenance: Heparinized bone marrow from human subjects with breast cancer. Prospectively processed for the study.

    The country of origin for the data is not specified, but the specimens are from human subjects/patients.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    • Reproducibility Study 1: Ground truth for tumor cell locations (XY coordinates) was established by an "exhaustive manual scan" of each slide; tumor cells identified by the pathologist during the review process were compared against these scans. A pathologist performed the manual scan, but the number of readers and their specific qualifications are not detailed beyond a capability for "intensive examination."
    • Reproducibility Study 2: Ground truth (initial manual count) was established by the "same pathologist" who later used the ACIS. Qualifications are not specified.
    • Accuracy/Correlation Study 1: Ground truth for the presence/absence of tumor cells was based on the knowledge of spiking (for spiked samples) and confirmed by "manual microscopy" for the overall agreement comparison. A "single pathologist" read the slides manually and with ACIS. Qualifications are not specified.
    • Accuracy/Correlation Study 2:
      • Initial ground truth (manual microscopy results) was established by "two different pathologists in two different laboratories" for each of the 39 specimens.
      • For verification of discrepant results, a "third pathologist" performed a "second blinded independent manual and ACIS read."
      • Qualifications of these pathologists are not specified beyond being pathologists.
    • Between Pathologist Reproducibility Study: Ground truth was not explicitly established as the study aimed to compare inter-pathologist variability. "Two different pathologists" read the 11 slides manually and with ACIS. Qualifications are not specified.

    4. Adjudication Method for the Test Set

    • Reproducibility Study 1: Comparison against an "exhaustive manual scan" (presumably by a single expert) to ensure consistent cell presentation. No explicit adjudication process for disagreements is mentioned, as the system achieved 100% agreement.
    • Reproducibility Study 2: Comparison against an "initial manual count" by the same pathologist. No explicit adjudication process for disagreements is mentioned, as the system achieved perfect agreement.
    • Accuracy/Correlation Study 1: For spiked specimens, the "number of cases with tumor" was known by design (spiking levels). For the "correlation" part (ACIS vs. Manual), the single pathologist was blinded to the other method's results. No specific adjudication for discordant results is described, as 100% overall agreement was reported.
    • Accuracy/Correlation Study 2:
      • Initial manual reads were done by two different pathologists.
      • For the reported table comparing manual to ACIS-assisted, it seems the combined manual results served as a reference.
      • For the 20 discrepant cases (17 positive by ACIS, 3 negative by ACIS confirmed on re-analysis), these were further verified by a "third pathologist" using "blinded re-analysis" with both manual and ACIS methods. This implies an adjudication process where the third pathologist's findings served to confirm the ACIS observations.
    • Between Pathologist Reproducibility Study: No adjudication method described as it was a study of inter-pathologist variability.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done

    Yes, elements of MRMC comparative effectiveness were included, particularly in Accuracy/Correlation Study 2 and the Between Pathologist Reproducibility Study.

    • Accuracy/Correlation Study 2: This study involved multiple readers (two initial pathologists, then a third for verification) and multiple cases (39 real tumor specimens). It compared manual microscopy to ACIS-assisted reading.
      • Effect size (human readers with vs. without AI assistance): In 17 of 39 cases (44%), the pathologist, aided by the ACIS device, identified tumor cells that "had been overlooked using manual microscopy," a substantial improvement in detection for these challenging cases. Additionally, in 3 cases, ACIS assistance led to re-classification from positive to negative, suggesting improved specificity or a reduction in false positives. The impact of ACIS on pathologist performance is notable: it allowed detection of previously missed positive cases and prompted re-evaluation of others. (Worked arithmetic for the 44% figure follows this list.)
    • Between Pathologist Reproducibility Study: While not directly quantifying improvement, it noted that "the differences in tumor cell counts between the pathologists ranged from -4 to +13 for manual counts and from -3 to +32 for ACIS-assisted tumor cell counts. The differences were similar for both methods." It concluded that ACIS provides "an equal or greater number of candidate cells for classification" and "the differences which exist between pathologists in their identification procedures are not expected to be affected by use of the ACIS device." This suggests ACIS doesn't worsen inter-reader variability, and potentially provides more comprehensive data for review.
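
    As referenced above, here is the worked arithmetic behind the 44% detection-improvement figure. The Wilson confidence interval is our own illustrative addition for context, not a statistic reported in the submission.

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

improved, n = 17, 39  # cases where ACIS-assisted reading found cells missed manually
rate = improved / n   # 0.436 -> the "44%" quoted above
low, high = wilson_ci(improved, n)
print(f"improvement rate = {rate:.1%} (95% CI {low:.1%} - {high:.1%})")
# -> improvement rate = 43.6% (95% CI 29.3% - 59.0%)
```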

    6. If a Standalone Study Was Done

    The ACIS is described as "an aid to the pathologist," and all studies describe "ACIS-assisted" performance or involve a pathologist's review of ACIS output (e.g., location of IHC stained positive cells, montage images).

    While the system does automatically detect cells ("detects cells (objects) of interest, by color and pattern recognition techniques"), the performance data consistently integrates the pathologist's interpretation as part of the overall system's effectiveness. Therefore, a purely standalone (algorithm only without human-in-the-loop performance) study is not explicitly presented in this document. The results always reflect a human-AI collaboration.

    7. The Type of Ground Truth Used

    The ground truth varied depending on the study:

    • Expert Consensus / Expert Review:
      • Reproducibility Study 1: An "exhaustive manual scan" presumably by an expert pathologist, serving as the reference for XY coordinates.
      • Reproducibility Study 2: "Initial manual count" performed by a pathologist.
      • Accuracy/Correlation Study 2: Initial "manual microscopy" performed by two pathologists. Discrepant findings were further verified by a "second blinded independent manual and ACIS read by a third pathologist." This leans heavily on expert review/consensus.
    • Known by Design (Spiked Samples):
      • Accuracy/Correlation Study 1: For the spiked specimens, the presence and approximate number of tumor cells were "known" by the nature of the experimental setup (spiking with known numbers of cells). This served as a strong reference for accuracy.

    No mention of pathology or long-term outcomes data as direct ground truth.

    8. Sample Size for the Training Set

    The document does not specify the sample size for the training set used to develop or train the ACIS algorithms. The focus of this 510(k) summary is on the validation studies demonstrating the device's performance after its development.

    9. How the Ground Truth for the Training Set Was Established

    Since a training set size is not provided, the method for establishing its ground truth is also not detailed in this document.


    K Number: K973050
    Date Cleared: 1997-11-07 (84 days)
    Regulation Number: 864.5260
    Reference & Predicate Devices: N/A
    Matched on reference device: K925670/A

    Intended Use

    For In Vitro Diagnostic Use

    Intelligent Medical Imaging, Inc.'s MICRO21 with WBC Estimate is a laboratory instrument for locating, digitally storing and displaying white blood cells to aid the technologist in performing the WBC Estimate. Examination and determination of the results must be performed by qualified individuals.

    Device Description

    The MICRO21 with WBC Estimate is a new intended use that follows the same process as the currently approved MICRO21 with White Blood Cell (WBC) Differential (Diff). The MICRO21 with WBC Diff, Ref. No. K925670/A, is an automated microscopic system that locates WBCs, stores digital images of the cells and displays the images in an organized manner to aid technologists in performing the WBC Diff procedure. The MICRO21 process is substantially equivalent to the manual microscopic process.

    The MICRO21 with WBC Estimate is an automated microscopic procedure that calculates an estimate of WBCs/uL using the information collected during the MICRO21 with WBC Differential. Upon completion of the review process by a technologist, an algorithm calculates the WBC Estimate from the number of classified WBCs and the number of low-power (200x magnification) fields visited. An estimated range for the WBCs/uL is calculated and reported.
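
    The exact estimation formula is not disclosed in this summary. The sketch below assumes the natural functional form implied by the description (classified WBCs per low-power field, scaled by the calibration Correction Factor discussed later in this record) together with an illustrative ±25% reporting band; all names and numbers are assumptions, not the device's actual algorithm.

```python
def wbc_estimate(classified_wbcs: int, fields_visited: int,
                 correction_factor: float) -> tuple[float, float]:
    """Return an estimated (low, high) range of WBCs/uL.

    Assumed form: cells per 200x field, scaled by a calibration factor;
    the +/-25% band is illustrative, not the device's actual range rule.
    """
    per_field = classified_wbcs / fields_visited
    point = per_field * correction_factor
    return 0.75 * point, 1.25 * point

low, high = wbc_estimate(classified_wbcs=120, fields_visited=10,
                         correction_factor=600.0)
print(f"MICRO21 WBC Estimate {low:,.0f} - {high:,.0f} WBC/uL")
# -> MICRO21 WBC Estimate 5,400 - 9,000 WBC/uL
```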

    AI/ML Overview

    Here's an analysis of the provided text regarding the acceptance criteria and study for the MICRO21 with WBC Estimate device:

    Acceptance Criteria and Study Analysis for MICRO21 with WBC Estimate

    The primary objective of the studies was to demonstrate the substantial equivalence of the MICRO21 with WBC Estimate to a Manual WBC Estimate and to automated cell counter results for total WBC count.

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state pre-defined acceptance criteria with specific thresholds (e.g., "accuracy must be >X%"). Instead, it describes a correlation study where the MICRO21's performance is compared against established methods. The "acceptance" is implied by demonstrating substantial equivalence, meaning the results are comparable and within clinically acceptable ranges.

    • Accuracy (vs. automated cell counter): Equivalence of the MICRO21 WBC Estimate with the automated cell counter's total WBC count.
      • Reported performance: Correlated (specific metrics not provided, but the conclusion states the study "correlated the WBC Estimate on the MICRO21 with the total White Blood Cell Count generated by an automated cell counter" and "confirm[s] that the MICRO21 with WBC Estimate is substantially equivalent to a Manual WBC Estimate").
    • Accuracy (vs. manual estimate): Equivalence of the MICRO21 WBC Estimate with the Manual WBC Estimate.
      • Reported performance: Confirmed substantially equivalent.
    • Precision: Acceptable intra-instrument precision (variability when processing the same specimen multiple times).
      • Reported performance: Results provided intra-instrument and within-specimen precision (specific metrics such as CV% not provided, but implied to be acceptable for equivalence).
    • Reporting range: Ability to report WBC estimates within a specified clinical range.
      • Reported performance: Reportable range from 100 to ≥25,000 WBCs/μL, displayed as "MICRO21 WBC Estimate x,xxx - y,yyy WBC/μL".

    2. Sample Size and Data Provenance

    • Test Set Sample Size: 86 blood smear samples.
    • Data Provenance: Not explicitly stated, but given the nature of a 510(k) submission for commercialization, it is highly likely a prospective study using clinical samples. The document mentions "various sites" for automated cell counters, suggesting a multi-center data collection effort, but specific countries are not mentioned.

    3. Number of Experts and Qualifications for Ground Truth

    • Number of Experts:
      • For Manual WBC Estimate (Accuracy Study): Three technologists.
      • For MICRO21 WBC Estimate (Precision and Classification Review): One technologist for the precision study, and three technologists for the "WBC Classification Review" section, which contributed to confirming "real cells" used in the MICRO21 calculation.
    • Qualifications of Experts: Described as "technologists." Specific experience levels (e.g., years of experience, certification) are not provided in the document. The conclusion states "Examination and determination of the results must be performed by qualified individuals," but doesn't detail the qualifications of those who performed the study.

    4. Adjudication Method for the Test Set

    The document does not describe a formal adjudication method (like 2+1 or 3+1).

    • For the Manual WBC Estimate, it states "Three technologists performed the manual reviews," implying they each performed their own estimate, but it doesn't specify how their results were consolidated or if consensus was required.
    • For the MICRO21 Classification Review, "Three technologists performed the MICRO21 reviews" to verify image classification. Again, it doesn't mention an adjudication process if there were disagreements among them.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    An MRMC study, specifically comparing human readers with AI assistance versus without AI assistance, was not explicitly done or reported in this document. The MICRO21 is described as an "automated microscopic system that locates WBCs, stores digital images of the cells and displays the images in an organized manner to aid technologists." The "aid" here is the display and organization of images, and then the automated calculation based on the technologist's reviewed classifications. The study focuses on the accuracy and precision of the device's estimate compared to manual/automated methods, not on the improvement in human performance due to the aid.

    Therefore, no effect size of how much human readers improve with AI vs without AI assistance is provided.

    6. Standalone (Algorithm Only) Performance

    A standalone performance study was performed for the core calculation of the WBC Estimate. The algorithm automatically calculates the WBC Estimate based on:

    • The number of classified WBCs (which are verified by a technologist, but the calculation itself is algorithmic).
    • The number of low-power fields visited.
    • A Correction Factor.

    The "Test Method 1, Accuracy" and "Test Method 2, Precision" evaluate the output of this algorithmic estimate against established methods. While human input (technologist review) confirms the "Real Cells" used in the formula, the final calculation and the resulting "MICRO21 WBC Estimate" range are generated by the device's algorithm.

    7. Type of Ground Truth Used

    The ground truth used was a combination of:

    • Established device measurements: Total White Blood Cell Count generated by automated cell counters (e.g., TOA Sysmex NE 9000, Coulter STKS, Technicon H2). These automated counters are considered a gold standard or a highly reliable reference for total WBC count.
    • Expert Consensus/Reference Standard: Manual WBC Estimate performed by three technologists. This serves as a clinical reference method for comparison.

    8. Sample Size for the Training Set

    The document does not explicitly state the sample size for a training set. The 86 specimens mentioned are for the "Accuracy" test method, which appears to be the primary validation set.
    There is a "Correction Factor" (CF) that was "defined as the mean value of the 86 specimens tested." This suggests that the 86 specimens were used to determine or fine-tune this factor, which could be considered a form of calibration or training data for that specific parameter. However, it's not a typical "training set" in the context of modern machine learning where a completely separate hold-out or test set would be used.

    9. How the Ground Truth for the Training Set Was Established

    As noted above, a distinct "training set" is not described. However, the Correction Factor was derived from the mean value of the 86 specimens tested. For these 86 specimens, the ground truth involved:

    • Total WBC Counts from automated cell counters (controlled and calibrated according to manufacturer's specifications).
    • Manual WBC Estimates performed by three technologists.

    These established ground truths for the 86 specimens were then used to define the Correction Factor, making it essentially a calibrated parameter for the algorithm.
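
    The summary states only that the Correction Factor is "the mean value of the 86 specimens tested." Below is a hedged sketch of such a calibration, under our assumption that each specimen contributes the ratio mapping the device's cells-per-field to the reference counter's WBCs/μL; all numbers are hypothetical.

```python
import statistics

# Each tuple: (classified WBCs, low-power fields visited,
#              reference total WBC count from the automated counter, per uL)
specimens = [
    (120, 10, 7100),
    (80, 10, 4900),
    (200, 10, 12300),
    # ... the actual study used 86 specimens
]

# Assumed per-specimen factor: reference WBCs/uL divided by cells per field.
per_specimen_cf = [ref / (wbcs / fields) for wbcs, fields, ref in specimens]
correction_factor = statistics.mean(per_specimen_cf)
print(f"CF = {correction_factor:.1f}")  # mean over the tested specimens
```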


    K Number: K964165
    Date Cleared: 1997-01-03 (78 days)
    Regulation Number: 866.5100
    Reference & Predicate Devices: N/A
    Matched on reference device: K925670/A

    Intended Use

    The MICRO21 with nDNA is a new MICRO21 intended use that follows the same process as the MICRO21 with WBC Diff, but instead locates, digitally stores, and displays nDNA images to aid the technologist in performing an nDNA Screen for Positive or Negative results. An nDNA Screen is a microscopic exam of a patient serum sample that has been set up using an indirect enzyme antibody test for the semi-quantitative detection of nDNA, which is an aid in the detection of systemic rheumatic disease.

    Device Description

    The MICRO21 with nDNA is a new MICRO21 intended use that follows the same process as the MICRO21 with WBC Diff, but instead locates, digitally stores, and displays nDNA images to aid the technologist in performing an nDNA Screen for Positive or Negative results. An nDNA Screen is a microscopic exam of a patient serum sample that has been set up using an indirect enzyme antibody test for the semi-quantitative detection of nDNA, which is an aid in the detection of systemic rheumatic disease. The nDNA Test System used on the MICRO21 is Immuno Concepts® Colorzyme® nDNA Test System. A summary of the MICRO21 with nDNA process is as follows (a brief data-structure sketch appears after the list):

    1. Patient serum samples are prepared following the Colorzyme Test Procedure and then placed in designated wells on the nDNA slide.
    2. Each slide has three control wells and nine patient wells.
    3. Barcode the slides, place the slides into a frame holder, and insert the slides on the MICRO21 for processing.
    4. The MICRO21 locates the central area of each well on the slide and captures four images from each well.
    5. The nDNA images are stored by the instrument and displayed on a color monitor for review by a technologist.
    6. The technologist reviews the images and confirms a positive determination by selecting the appropriate result.
    7. A report of the nDNA screening result for each patient well is printed.
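
    To make the layout in steps 1-7 concrete, here is a minimal data-structure sketch; names and types are illustrative assumptions, not the MICRO21's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class Well:
    well_id: str
    is_control: bool
    images: list[str] = field(default_factory=list)  # four captured images per well
    result: str | None = None  # "Positive"/"Negative", set by the technologist

@dataclass
class NDNASlide:
    barcode: str
    wells: list[Well]

def new_slide(barcode: str) -> NDNASlide:
    """Each barcoded slide carries three control wells and nine patient wells."""
    controls = [Well(f"C{i}", is_control=True) for i in range(1, 4)]
    patients = [Well(f"P{i}", is_control=False) for i in range(1, 10)]
    return NDNASlide(barcode, controls + patients)

slide = new_slide("SL-0001")
assert len(slide.wells) == 12  # 3 control + 9 patient wells per slide
```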
    AI/ML Overview

    This document describes the validation study for the MICRO21™ with nDNA automated cell locating device.

    Here's an analysis of the provided information:

    1. Table of Acceptance Criteria and Reported Device Performance:

    • Acceptance criterion: Equivalence of nDNA image presentation on the MICRO21 to the manual method (bright light microscope).
    • Reported device performance: "The results which are reported in the Summary of Results confirm that the nDNA image presentation on the MICRO21 is equivalent to the manual method."

    2. Sample size used for the test set and the data provenance:

    • Sample Size: 205 patient nDNA images.
    • Data Provenance: Retrospective, as the images were pre-identified as Positive or Negative by a technologist at Immuno Concepts manually reading the tests. The country of origin is not explicitly stated.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    • Number of Experts: One technologist.
    • Qualifications of Experts: A technologist at Immuno Concepts who manually read the 205 nDNA patient tests using a bright light microscope. No further details on experience or specific certifications are provided.

    4. Adjudication method for the test set:

    • The document implies a single-reader manual interpretation for establishing ground truth, followed by a comparison of the MICRO21 displayed images to this ground truth. There is no mention of multiple expert agreement or an adjudication process (e.g., 2+1, 3+1).

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance:

    • No, a multi-reader multi-case (MRMC) comparative effectiveness study was not explicitly done in this context. The study focuses on the equivalence of image presentation and not on human reader performance improvement with AI assistance. The MICRO21 is presented as a tool to aid the technologist by locating, storing, and displaying images, effectively streamlining the manual review process rather than replacing it or directly enhancing diagnostic accuracy through AI interpretation.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:

    • No, a standalone (algorithm-only) performance was not done. The device's function is to "aid the technologist," and the final determination of Positive or Negative results is made by a human technologist reviewing the displayed images. The device itself does not provide a diagnostic output.

    7. The type of ground truth used:

    • Single expert opinion: The ground truth for the 205 patient nDNA images was established by one technologist at Immuno Concepts who manually reviewed the slides using a bright light microscope. While this is expert opinion, it is explicitly a single expert's determination, not a consensus among multiple experts.

    8. The sample size for the training set:

    • The document does not provide information about a training set. The study described is a performance comparison, implying the MICRO21 device was already developed.

    9. How the ground truth for the training set was established:

    • Since there's no mention of a training set, there's no information on how its ground truth was established.

    K Number: K960774
    Device Name: MICRO21 WITH ANA
    Date Cleared: 1996-05-22 (86 days)
    Regulation Number: 864.5260
    Reference & Predicate Devices: N/A
    Matched on reference device: K925670/A

    Intended Use

    The MICRO21 with ANA is a new MICRO21 intended use that follows the same process as the MICRO21 with WBC Diff, but instead locates, digitally stores, and displays ANA images to aid the technologist in performing an ANA Screen for Positive or Negative results. An ANA Screen is a microscopic exam of a patient serum sample that has been set up using an indirect enzyme antibody test for the semi-quantitative detection of antinuclear antibody (ANA), which is an aid in the detection of systemic rheumatic disease.

    Device Description

    The MICRO21™ with WBC Diff (White Blood Cell Differential) Ref. No. K925670/A is an automated microscopic system that locates WBCs, stores digital images of the cells and displays the images in an organized manner to aid technologists in performing the WBC Diff procedure. The MICRO21 process is substantially equivalent to the manual microscopic process.

    The MICRO21 with ANA is a new MICRO21 intended use that follows the same process as the MICRO21 with WBC Diff, but instead locates, digitally stores and displays ANA Images to aid the technologist in performing an ANA Screen for Positive or Negative results.

    AI/ML Overview

    Here's an analysis of the provided text, focusing on the acceptance criteria and study details for the MICRO21 with ANA device:

    1. Table of Acceptance Criteria and Reported Device Performance

    • Equivalence of ANA image presentation on the MICRO21 to manual bright light microscopy.
      • Reported performance: The study's conclusion explicitly states: "The image comparison performed in the Test Method confirms the safety and effectiveness of the MICRO21 with ANA for the intended use of location, storage and display of ANA images to aid the technologist in performing the ANA Test Screen." While specific metrics such as sensitivity, specificity, or agreement rates are not provided in this summary, the core finding is that the MICRO21's image presentation is considered equivalent to the manual method for allowing technologists to perform the ANA screen.
    • Effective aid to the technologist in performing an ANA Screen for Positive or Negative results.
      • Reported performance: The device enables technologists to review images and make Positive/Negative determinations, and the study's conclusion supports its effectiveness in this role. The process involves the technologist confirming or changing the MICRO21's initial determination, implying it functions as an aid; no quantitative metrics for "aid effectiveness" are given.
    • Accurate location, digital storage, and display of ANA images.
      • Reported performance: The system locates well centers, captures four images per well, stores them, and displays them on a monitor for review. The test method captured and stored images from 204 patient ANA tests, which were then displayed for review; the positive outcome of the study implies this function was performed adequately.
    • Substantial equivalence to the manual microscopic process for the specified ANA screening task.
      • Reported performance: The overall conclusion, that the image comparison "confirms the safety and effectiveness for the intended use," supports the claim of substantial equivalence as it relates to the presentation of images for ANA screening.

    Important Note: The provided text is a summary of the 510(k) submission. It focuses on the equivalence of image presentation rather than providing detailed performance metrics (e.g., sensitivity, specificity, accuracy) that would typically be expected from a device making a diagnostic determination. The device's role is described as an "aid" to the technologist, implying human-in-the-loop performance is the ultimate measure.

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size: 204 patient ANA images.
    • Data Provenance:
      • Country of Origin: Not explicitly stated. The mention of Immuno Concepts® suggests a US-based company, which, if it was the test site, implies US data; the samples could, however, have come from a clinical lab in any country where Immuno Concepts® operates.
      • Retrospective or Prospective: Appears to be retrospective in nature, as the 204 patient ANA images were "identified by a technologist at Immuno Concepts® who manually read the 204 ANA patient tests using a bright light microscope" before being loaded onto the MICRO21. This suggests existing patient samples were used.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts

    • Number of Experts: One technologist.
    • Qualifications: "a technologist at Immuno Concepts® who manually read the 204 ANA patient tests using a bright light microscope." No further detail (e.g., years of experience, certification) is provided.

    4. Adjudication Method for the Test Set

    • Adjudication Method: No multi-reader adjudication process is described. The ground truth was established by a single technologist's manual reading. The MICRO21 then displayed images, and a technologist (possibly the same one) reviewed them and made a determination. The wording suggests a comparison against this single technologist's ground truth rather than an adjudication of multiple expert opinions.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done

    • No, an MRMC comparative effectiveness study was not explicitly described. The study compares the image presentation of the device to the manual method, based on a single technologist's manual ground truth. While a technologist reviews the device's output, the design doesn't appear to be a formal MRMC study evaluating human reader performance with and without AI assistance with an effect size analysis. It's more of a usability/equivalence study for the displayed images.

    6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) was done

    • No, a standalone performance study was not the primary focus or conclusion. The device is described as an "aid" to the technologist. While the MICRO21 makes an initial "Positive/Negative determination," the process explicitly states: "The technologist reviews the images and confirms that the MICRO21's Positive/Negative determination is correct. If not correct, the technologist changes the determination." Therefore, the final reported result is a human-in-the-loop performance, with the technologist retaining ultimate decision-making authority. The study confirms the image presentation for their review.

    7. The Type of Ground Truth Used

    • Single expert opinion: Specifically, the manual reading of one technologist at Immuno Concepts® using a bright light microscope.

    8. The Sample Size for the Training Set

    • Not explicitly stated/provided. The document discusses the test method and samples used for testing, but does not give any details about a separate training set used for developing the MICRO21's initial determination capabilities. Given the era (1996) and the device's function (locating, storing, displaying images, with the technologist making the final call), it's possible the "determination" logic was rule-based or trained on a much smaller, internal dataset not detailed here.

    9. How the Ground Truth for the Training Set was Established

    • Not explicitly stated/provided. Without information on a specific training set, the method for establishing its ground truth cannot be determined from this document.