510(k) Data Aggregation
(580 days)
BD Kiestra B.V.
The BD Kiestra™ Methicillin-resistant Staphylococcus aureus (MRSA) Application is an in-vitro diagnostic software program that requires the BD Kiestra™ Laboratory Automation Solution in order to operate.
The BD Kiestra™ Methicillin-resistant Staphylococcus aureus (MRSA) Application is applied to digital images of BD BBL™ CHROMagar™ MRSA II culture plates inoculated with anterior nares samples.
Algorithms are applied to digital images to provide a qualitative assessment of colony growth and colorimetric detection of target colonies for the detection of nasal colonization by MRSA and to serve as an aid in the prevention and control of MRSA infection. Applied algorithms provide the following results:
- "No growth", which will be manually released individually or as a batch (with other no-growth samples) by a trained microbiologist upon review of the digital plate images.
- "Growth - other" (growth without mauve color), for which the digital plate images will be manually reviewed by a trained microbiologist.
- "Growth MRSA Mauve" (growth with mauve color), for which the digital plate images will be manually reviewed by a trained microbiologist.
The assay is not intended to guide, diagnose, or monitor treatment for MRSA infections. It is not intended to provide results of susceptibility to oxacillin/methicillin.
The BD Kiestra™ Methicillin-resistant Staphylococcus aureus (MRSA) Application is indicated for use in the clinical laboratory.
The BD Kiestra™ Methicillin-resistant Staphylococcus aureus (MRSA) Application will be optional for the BD Kiestra™ Laboratory Automation Solution and will support laboratory technologists in batching results of no growth on BD BBL™ CHROMagar™ MRSA II, growth with no key colony color detected for MRSA ("Growth – other"), and growth with the key colony color detected for MRSA ("Growth MRSA Mauve"). These classifications are characterized as "no growth" and "growth with mauve color" on BD BBL™ CHROMagar™ MRSA II media, from anterior nares samples.
The technologist can create work lists in the BD Synapsys™ informatics solution based on the classifications (growth, no growth, or growth with mauve color). These work lists are used for follow-up work and batching of results at the sample level.
The BD Kiestra™ Methicillin-resistant Staphylococcus aureus (MRSA) Application will apply Image Algorithms to the digital images to determine whether the plate contains "growth" or "no growth". At the individual plate level, when the Image Algorithms detect colony growth and potential mauve color, the classification will be "growth with mauve color".
When the BD Kiestra™ Methicillin-resistant Staphylococcus aureus (MRSA) Application is not capable of automatically generating the outputs (visual attributes: growth with or without mauve color/no growth), the laboratory technologist will be required to read the digital image of the plate on the computer screen and decide on follow-up action as is the current standard laboratory practice.
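The three-way routing described above (no growth, growth without mauve color, growth with mauve color, with a manual-review fallback) can be sketched in Python. This is a minimal illustration of the workflow logic only; the function and type names are hypothetical and do not reflect BD's implementation.

```python
from enum import Enum

class PlateResult(Enum):
    NO_GROWTH = "No growth"
    GROWTH_OTHER = "Growth - other"
    GROWTH_MRSA_MAUVE = "Growth MRSA Mauve"
    MANUAL_REVIEW = "Manual review required"

def classify_plate(growth_detected: bool, mauve_detected: bool,
                   algorithm_succeeded: bool = True) -> PlateResult:
    """Route a plate image to one of the three classifications described
    above. When the algorithm cannot generate an output, the technologist
    reads the digital image instead (modeled here as MANUAL_REVIEW).
    Hypothetical helper for illustration only."""
    if not algorithm_succeeded:
        return PlateResult.MANUAL_REVIEW
    if not growth_detected:
        return PlateResult.NO_GROWTH
    return (PlateResult.GROWTH_MRSA_MAUVE if mauve_detected
            else PlateResult.GROWTH_OTHER)
```

In every branch a trained microbiologist still reviews or releases the result; the classification only determines which work list the plate lands on.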
Here's a summary of the acceptance criteria and the study details for the BD Kiestra™ Methicillin-resistant Staphylococcus aureus (MRSA) Application, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are not explicitly stated as distinct numerical targets in the provided document. However, the studies demonstrate performance metrics related to agreement with manual interpretation and reproducibility.
| Performance Metric | Acceptance Criteria (Implied by study results being presented) | Reported Device Performance (Combined) |
|---|---|---|
| **Digital Quality Image Study** | | |
| Agreement: No Growth | High percentage agreement with manual reading | 93.1% (148/159) |
| Agreement: Non-Mauve Growth | High percentage agreement with manual reading | 98.3% (169/172) |
| Agreement: Mauve Growth | High percentage agreement with manual reading | 98.9% (188/190) |
| **Digital Image Reproducibility Study** | | |
| Reproducibility: No Growth | High percentage agreement among microbiologists | 98.0% (150/153) |
| Reproducibility: Non-Mauve Growth | High percentage agreement among microbiologists | 100.0% (175/175) |
| Reproducibility: Mauve Growth | High percentage agreement among microbiologists | 98.9% (188/190) |
| **Reproducibility Study (Seeded Samples)** | | |
| Combined Growth (Saline) | High percentage detection of no growth | 99.7% (2082/2089), CI (99.3%, 99.8%) |
| Combined Color (Saline) | High percentage detection of no growth | 99.7% (2082/2089), CI (99.3%, 99.8%) |
| Combined Growth (MRSA strains) | 100% detection of growth for most dilutions | 100% (most dilutions) |
| Combined Color (MRSA strains) | 100% detection of mauve color for most dilutions | 100% (most dilutions) |
| Combined Growth (S. haemolyticus, non-mauve) | High percentage detection of growth without mauve color | 94.4% - 100% |
| Combined Color (S. haemolyticus, non-mauve) | High percentage detection of growth without mauve color | 94.4% - 100% |
| **Clinical Performance Studies (Against Manual Read at Clinical Sites)** | | |
| No Growth Percent Agreement | High percentage agreement with manual reading | 75.6% (773/1023) |
| Non-Mauve Percent Agreement | High percentage agreement with manual reading | 84.5% (207/245) |
| Mauve Percent Agreement | High percentage agreement with manual reading | 98.2% (319/325) |
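The document reports each metric as a percent agreement with a confidence interval but does not state the interval method. As a check, a Wilson score interval at 95% confidence reproduces the reported (99.3%, 99.8%) bounds for the 2082/2089 saline result; the sketch below assumes that method.

```python
import math

def percent_agreement(matches: int, total: int) -> float:
    """Point estimate as reported in the table above, e.g. 2082/2089."""
    return matches / total

def wilson_ci(matches: int, total: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score confidence interval for a binomial proportion.
    Assumption: the document does not name its CI method; Wilson at
    95% confidence happens to reproduce the reported bounds."""
    p = matches / total
    denom = 1 + z**2 / total
    center = (p + z**2 / (2 * total)) / denom
    half = z * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2)) / denom
    return center - half, center + half

lo, hi = wilson_ci(2082, 2089)
print(f"{percent_agreement(2082, 2089):.1%} ({lo:.1%}, {hi:.1%})")
# → 99.7% (99.3%, 99.8%)
```

The Wilson interval is a common choice for proportions near 0 or 1, where the simpler normal approximation can produce bounds above 100%.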
2. Sample sizes used for the test set and the data provenance
- Digital Quality Image Study:
- Sample Size: 521 plate images, read across 3 microbiologists (174, 172, and 175 plates per microbiologist's manual review, respectively).
- Data Provenance: Internal digital image quality study, using simulated surveillance samples (MRSA, non-MRSA, saline controls). Implies data was generated specifically for this study. The location or country of origin is not explicitly stated, but it's an "internal" study. The nature (simulated surveillance samples) suggests it was a prospective generation of samples for the study.
- Digital Image Reproducibility Study:
- Sample Size: 518 plate images (3 images excluded due to invalid results from at least one microbiologist).
- Data Provenance: Same as the Digital Quality Image Study, as it re-analyzed the results from that study. "Internal" study, likely newly generated for this purpose.
- Reproducibility Study (Seeded Samples):
- Sample Size: Variable, ranging from 55 to 1056 individual observations per dilution/organism combination across two sites. Total observations are in the thousands (e.g., 2089 for saline controls).
- Data Provenance: Internal reproducibility study using seeded samples (bacterial strains grown in saline). Conducted at two internal sites (BD Sparks, MD location). This is also prospective generation of data.
- Clinical Performance Studies:
- Sample Size: Approximately 1,800 clinical anterior nares specimens.
- Data Provenance: Clinical anterior nares specimens. Collected at "three clinical sites." The text does not specify the country of origin, but "clinical sites" generally refer to real-world healthcare settings. This is a prospective collection of real patient samples.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Digital Quality Image Study & Digital Image Reproducibility Study:
- Number of Experts: 3.
- Qualifications: "Three trained clinical microbiologists." Specific years of experience or board certifications are not provided.
- Reproducibility Study (Seeded Samples):
- The ground truth for seeded samples is inherently defined by the known organisms and dilutions used to create the samples. The study then assesses the automated system's agreement with this known "ground truth" for growth and color. Human experts here are likely involved in verifying the initial seeding and later in the interpretation of the results, but the definition of "mauve" or "non-mauve" for specific strains is pre-defined.
- Clinical Performance Studies:
- Number of Experts: Not explicitly stated as a fixed number, but the images were "manually read by trained microbiologists at those sites." This implies multiple microbiologists across the three clinical sites.
- Qualifications: "Trained microbiologists." Specific years of experience or board certifications are not provided.
4. Adjudication method for the test set
- Digital Image Reproducibility Study: The "final digital image result" was "determined by 2/3 majority microbiologist result." This indicates a form of consensus-based adjudication, specifically a majority vote among the three microbiologists.
- Other studies (Digital Quality Image, Reproducibility with Seeded Samples, Clinical Performance): The primary comparison for these studies appears to be between the device's output and individual manual reads by microbiologists or known characteristics of seeded samples. Explicit adjudication methods for generating a single "ground truth" reference for these specific studies are not detailed, though for the clinical study, the manual read at each site serves as the reference.
5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
- A MRMC comparative effectiveness study, where human readers' performance with and without AI assistance is quantitatively measured and compared, is not explicitly described in the provided text.
- The studies focus on the agreement of the device with manual reads (Digital Quality Image, Clinical Performance) and the reproducibility of human reads of digital images (Digital Image Reproducibility). While these demonstrate the device's capability to produce similar results to human reads, they don't directly quantify the improvement of human readers due to AI assistance. The device's stated purpose is to "aid in the prevention and control of MRSA infection" and to support laboratory technologists in "batching" and "streamline and optimize the reading workflow," suggesting an assistive role. However, the magnitude of this assistance in terms of effect size on human reader performance is not presented.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
- Yes, a standalone performance analysis was conducted. The tables in the "Analytical Performance" and "Clinical Performance Studies" sections (e.g., Table 1, Table 6, and the final clinical performance table on page 17) directly compare the BD Kiestra™ MRSA App's output to the manual plate reads or ground truth. The "Percent Agreement" values (e.g., "No Growth Percent Agreement 75.6% (773/1023)") represent the algorithm's performance against those references.
- It's important to note that even when the app provides results, the indications for use state that "No growth", "Growth - other", and "Growth MRSA Mauve" classifications "will be manually reviewed by a trained microbiologist." This means that while the algorithm provides a classification, a human-in-the-loop is always part of the final release process. However, the data presented directly assesses the algorithm's standalone accuracy in classifying the images.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Expert Consensus: Used in the "Digital Image Reproducibility Study," where a 2/3 majority microbiologist result formed the ground truth for comparing individual microbiologist reproducibility.
- Manual Expert Reading: For the "Digital Quality Image Study" and the "Clinical Performance Studies," the ground truth was established by the manual interpretation of the plates (or digital images of plates) by trained microbiologists. This acts as the "gold standard" for comparison.
- Known Sample Characteristics: For the "Reproducibility Study (Seeded Samples)," the ground truth was defined by the known bacterial strains, dilutions, and expected growth/color characteristics of the seeded samples.
8. The sample size for the training set
- The document does not provide information on the sample size used for the training set. It focuses solely on the performance of the already-trained and developed algorithm.
9. How the ground truth for the training set was established
- The document does not describe how the ground truth for the training set was established. Information regarding the training data, annotation process, or expert involvement in labeling training data is not included in the provided text.