510(k) Data Aggregation
To determine gram-negative and gram-positive bacterial susceptibility against the antimicrobial agent Cefdinir.
Organisms with indications for testing* include:
Gram-Negative Bacteria: None**
Gram-Positive Bacteria: Staphylococcus aureus *** (Including methicillin-susceptible, β-lactamase producing strains), Streptococcus pyogenes
*As taken from the Indications and Usage section of the manufacturer's package insert (Issued: March 1998).
**No gram-negative organisms for which Microscan is seeking clearance are indicated for testing in the Indications and Usage section of the manufacturer's package insert.
***Cefdinir is inactive against methicillin-resistant Staphylococcus aureus.
The following secondary organisms, included for the testing of MicroScan® panels with Cefdinir, have in vitro data, but their safety and effectiveness in treating clinical infections have not been established: Citrobacter (diversus) koseri, Escherichia coli, Klebsiella pneumoniae, Proteus mirabilis, Staphylococcus epidermidis (methicillin-susceptible), Streptococcus agalactiae
Microdilution Minimum Inhibitory Concentration (MIC) Panels, specifically MicroScan® Dried Gram-Negative and Gram-Positive MIC/Combo Panels with Cefdinir.
Here's an analysis of the provided text, focusing on the acceptance criteria and study details for the MicroScan® Dried Gram-Negative and Gram-Positive MIC/Combo Panels with Cefdinir:
The document describes a 510(k) submission for MicroScan MIC (Minimum Inhibitory Concentration) panels containing a new antimicrobial agent, Cefdinir. The study aims to demonstrate that the performance of these new panels is substantially equivalent to that of a legally marketed predicate device, the NCCLS Frozen Cefdinir Reference Panels.
1. Table of Acceptance Criteria and Reported Device Performance
Acceptance Criteria for Essential Agreement (based on the FDA DRAFT document "Review Criteria for Assessment of Antimicrobial Susceptibility Devices," dated May 31, 1991): The exact numerical acceptance criteria are not stated in this document fragment; "acceptable performance" is instead defined by Essential Agreement. Under similar FDA guidances for antimicrobial susceptibility testing devices, an Essential Agreement (EA) of ≥ 90% (and often ≥ 95%) is typically required for new antimicrobial-device combinations, so the presumed acceptance criterion falls in that range.
| Metric | Acceptance Criteria (Implied) | Reported Device Performance (Gram-Negative) | Reported Device Performance (Gram-Positive) |
| --- | --- | --- | --- |
| Essential Agreement | Acceptable performance (e.g., ≥ 90-95%) | 97.9% | 94.0% |
| Reproducibility | Acceptable reproducibility | Acceptable | Acceptable |
| Precision | Acceptable precision | Acceptable | Acceptable |
| Quality Control | Acceptable performance | Acceptable | Acceptable |
Notes on Essential Agreement: Essential Agreement refers to agreement between the MIC values of the test device and the reference method within a specified range (typically ±1 doubling dilution).
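The Essential Agreement comparison can be sketched in a few lines of Python. This is an illustrative helper under the assumption that MICs fall on a doubling-dilution scale and are on-scale (no "<=" / ">" endpoints); the function names are hypothetical, not part of any MicroScan software:

```python
import math

def within_essential_agreement(test_mic: float, reference_mic: float,
                               tolerance_dilutions: int = 1) -> bool:
    """True when the test MIC is within +/- `tolerance_dilutions`
    doubling dilutions of the reference MIC."""
    # On a doubling-dilution scale, one dilution step = one log2 unit.
    diff = abs(math.log2(test_mic) - math.log2(reference_mic))
    # Round to absorb floating-point noise from values like 0.25, 0.5.
    return round(diff) <= tolerance_dilutions

def essential_agreement_rate(pairs) -> float:
    """Percent of (test, reference) MIC pairs in essential agreement."""
    agree = sum(within_essential_agreement(t, r) for t, r in pairs)
    return 100.0 * agree / len(pairs)

# All four pairs differ by at most one doubling dilution.
pairs = [(0.5, 0.25), (1.0, 1.0), (2.0, 4.0), (0.25, 0.25)]
```

Computing the rate over `pairs` yields 100.0; a pair such as (0.5, 2.0), two dilutions apart, would fall outside Essential Agreement.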
2. Sample Size and Data Provenance
- Test Set Sample Size: The exact number of isolates (sample size) for the external evaluations is not explicitly stated in the provided text. It mentions "fresh and stock Efficacy isolates and stock Challenge strains" were used.
- Data Provenance: The studies were described as "external evaluations," implying they were conducted at sites outside the manufacturer's primary facility. The country of origin is not specified, but given the manufacturer (Dade MicroScan Inc., located in West Sacramento, CA) and the FDA submission, it is highly likely the evaluations were conducted in the United States. The data is prospective, as it was generated specifically for this 510(k) submission to compare the new device against a reference.
3. Number of Experts and Qualifications for Ground Truth
- The document does not explicitly mention the number of experts or their specific qualifications (e.g., years of experience) used to establish the ground truth for the test set.
- The ground truth was established by the NCCLS Frozen Cefdinir Reference Panel. This implies that the 'experts' would be the individuals or laboratories meticulously performing and interpreting the reference method according to NCCLS (now CLSI) guidelines, which are established by consensus among microbiologists and clinicians.
4. Adjudication Method for the Test Set
- The document does not specify an explicit adjudication method (e.g., 2+1, 3+1). The comparison is described as a direct performance comparison (Essential Agreement) between the MicroScan device and the NCCLS reference panel. Any discrepancies would typically be resolved by retesting or further analysis in accordance with standard microbiology practices for AST performance studies, though this is not detailed.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- No, a multi-reader multi-case (MRMC) comparative effectiveness study was not performed or described for this device. This type of study is more common in imaging diagnostics where human interpretation is a primary component of the diagnostic workflow. For automated microbiology susceptibility testing panels, the focus is on the agreement of the device's output (MIC values) with a gold standard, rather than how human readers' performance is affected by the device.
6. Standalone Performance Study
- Yes, a standalone performance study was done. The reported Essential Agreement (97.9% for gram-negative and 94.0% for gram-positive) directly reflects the algorithm-only/device-only performance of the MicroScan panels when compared to the NCCLS reference method. There is no mention of a human-in-the-loop component for interpreting the raw results beyond what is standard for any AST device (e.g., technician reading a panel, entering data, etc.). The study evaluated the device's ability to accurately determine MIC values on its own.
7. Type of Ground Truth Used
- The ground truth used was expert consensus / reference method. Specifically, the "NCCLS Frozen Cefdinir Reference Panel" served as the gold standard. NCCLS (National Committee for Clinical Laboratory Standards, now CLSI) methods are widely regarded as the gold standard for antimicrobial susceptibility testing, representing a consensus of expert microbiological and clinical knowledge.
8. Sample Size for the Training Set
- The document does not provide any information regarding a "training set" or its sample size. This is typical for a device like this, which likely uses a pre-defined algorithm and interpretive rules for MIC determination rather than a machine learning model that requires a dedicated training phase with labeled data. The development of the panel itself (e.g., concentrations of antimicrobials) would be based on extensive prior knowledge and calibration, but not in the sense of a machine learning "training set."
9. How the Ground Truth for the Training Set was Established
- As no training set is described or implied for a machine learning context, the method for establishing its ground truth is not applicable here. The overall "training" for such a device relies on established microbiological principles, extensive historical data on drug-bug interactions, and adherence to reference methods (like those from NCCLS/CLSI) during its development and validation.
To determine gram-negative and gram-positive bacterial susceptibility against the antimicrobial agent Trovafloxacin.
Organisms with indications for testing* include:
Gram-Negative Bacteria: Escherichia coli, Klebsiella pneumoniae, Proteus mirabilis, Pseudomonas aeruginosa
Gram-Positive Bacteria: Methicillin-susceptible Staphylococcus aureus, Methicillin-susceptible Staphylococcus epidermidis, Enterococcus faecalis, Streptococcus agalactiae (Gp. B), Streptococcus pyogenes (Gp. A)
*As taken from the Indications and Usage section of the manufacturer's package insert (Issued: December 1997).
The MicroScan® Dried Gram-Negative and Gram-Positive MIC/Combo Panels with Trovafloxacin are not intended for use with Streptococcus pneumoniae and viridans streptococci.
Microdilution Minimum Inhibitory Concentration (MIC) Panels: MicroScan® Dried Gram-Negative and Gram-Positive MIC/Combo Panels. The manufacturing process for both test panels called for Trovafloxacin to be dehydrated in Mueller-Hinton broth. During testing, each well was inoculated/rehydrated with the organisms suspended in distilled water with Pluronic. Test panels were read visually after 16-20 hours of incubation at 35 °C in a non-CO2 incubator.
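The visual endpoint reading described above (the MIC is the lowest drug concentration whose well shows no visible growth) can be sketched as follows. This is a simplified illustration with hypothetical concentrations and growth flags; real panels also handle skipped wells and other anomalies:

```python
def read_mic(wells):
    """Return the MIC from a doubling-dilution series.

    `wells` maps concentration (ug/mL) -> True if visible growth.
    The MIC is the lowest concentration that inhibits growth; if every
    well shows growth, the MIC exceeds the highest tested concentration
    (returned here as None for simplicity).
    """
    for conc in sorted(wells):
        if not wells[conc]:          # no visible growth at this well
            return conc
    return None                      # growth at all concentrations

# Doubling dilutions from 0.25 to 8 ug/mL; growth stops at 2 ug/mL.
series = {0.25: True, 0.5: True, 1.0: True, 2.0: False, 4.0: False, 8.0: False}
```

For `series`, the sketch reports an MIC of 2 ug/mL, the lowest well with no visible growth.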
Here's an analysis of the acceptance criteria and study details based on the provided text:
Acceptance Criteria and Device Performance Study
The device in question is the MicroScan® Dried Gram-Negative and Gram-Positive MIC/Combo Panels with Trovafloxacin, intended to determine antimicrobial agent susceptibility. The study aimed to demonstrate substantial equivalence to NCCLS frozen Trovafloxacin Reference Panels.
1. Table of Acceptance Criteria and Reported Device Performance
The core acceptance criterion is "Essential Agreement" with the predicate device.
| Acceptance Criterion | Reported Device Performance (Gram-Negative) | Reported Device Performance (Gram-Positive) |
| --- | --- | --- |
| Overall Essential Agreement | 99.7% | 98.7% |
Note: The document refers to FDA DRAFT document "Review Criteria for Assessment of Antimicrobial Susceptibility Devices" (dated May 31, 1991) for defining substantial equivalence and acceptability. While the specific numerical threshold for "acceptable performance" is not explicitly stated in the provided text as a percentage, the reported values exceeding 98% are presented as demonstrating "acceptable performance" in comparison to the reference panels.
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: The document does not explicitly state the total number of isolates (sample size) used for the clinical trial. It does refer to "clinical isolates with a broad range of susceptibilities" and states that "The type and number of isolates tested was in compliance with the FDA guidance 'Review Criteria for Assessment of Antimicrobial Susceptibility Devices: Draft May 1991'". This implies that the sample size adhered to FDA recommendations for such devices at the time.
- Data Provenance: The study was a prospective clinical trial. The text mentions "fresh and stock Efficacy isolates and stock Challenge strains" were used for the external evaluations, implying a mix of recently isolated clinical samples and established laboratory strains. The specific country of origin is not mentioned.
3. Number of Experts Used to Establish Ground Truth and Qualifications
The document does not specify the number of experts or their qualifications. The ground truth was established by reference panels "made according to NCCLS recommendations found in NCCLS Document M7-A4 (Methods for Dilution Antimicrobial Susceptibility Tests for Bacteria that Grow Aerobically, fourth edition; Approved Standard. Pennsylvania, NCCLS, 1997), using cation adjusted Mueller-Hinton broth." This process involves trained laboratory personnel following standardized protocols rather than individual expert adjudication in the traditional sense of image analysis.
4. Adjudication Method for the Test Set
The adjudication method involved a specific protocol for resolving discrepancies:
- Initial Data Analysis: Comparing initial Test MIC results with initial Reference MIC results.
- Discrepancy Resolution: Isolates that exhibited a ≥ 2 dilution error (discrepancy) between the Test and the Reference panel were repeated in triplicate.
- Final Data Analysis: Comparing the initial Test results to the repeat Reference results. Discrepancies were considered resolved when the repeat Reference result was in Essential Agreement with the initial Test result.
- Additional Data Analysis: An additional analysis was performed using repeat results from both the Reference and the Test panels.
This method functions as a form of internal adjudication or re-testing protocol, aiming to confirm the stability of the reference measurement when a significant disagreement occurs. It is not a 2+1 or 3+1 expert consensus model as seen in image interpretation.
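The resolution steps above can be sketched as a simple filter over paired results. The function names, isolate labels, and data layout are illustrative assumptions, with MICs assumed to lie on a doubling-dilution scale:

```python
import math

def dilution_error(test_mic: float, reference_mic: float) -> int:
    """Number of doubling dilutions between test and reference MICs."""
    return abs(round(math.log2(test_mic) - math.log2(reference_mic)))

def flag_for_repeat(results, threshold: int = 2):
    """Return isolates whose initial test/reference MICs differ by
    >= `threshold` doubling dilutions; per the protocol described
    above, these are repeated in triplicate on the reference method."""
    return [iso for iso, (test, ref) in results.items()
            if dilution_error(test, ref) >= threshold]

results = {"E. coli 001": (1.0, 1.0),   # exact match, in agreement
           "E. coli 002": (8.0, 1.0),   # 3-dilution error -> repeat
           "K. pneu 003": (2.0, 4.0)}   # 1 dilution, within agreement
```

Only "E. coli 002" would be flagged for repeat reference testing; the final analysis then compares the initial test result against the repeat reference results.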
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No MRMC comparative effectiveness study was done. This device is an automated antimicrobial susceptibility testing panel, not a diagnostic imaging device that involves human readers interpreting results with or without AI assistance. The performance is gauged by direct comparison to a reference standard (NCCLS frozen panels), not by measuring improvements in human reader performance.
6. Standalone Performance Study
Yes, a standalone (algorithm only without human-in-the-loop performance) study was performed. The entire clinical investigation focuses on comparing the output of the MicroScan® Dried panels (the device being evaluated) directly against the NCCLS frozen reference panels. While visual reading is mentioned for both (after 16-20 hours of incubation), the core of the evaluation is the intrinsic performance of the device in generating MIC values compared to the established reference standard, without involving a human interpretation step that requires "assistance." The "visual reading" refers to the method of determining the MIC endpoint from the wells, which is a standard procedure in microbiology, not an interpretation with expert variability.
7. Type of Ground Truth Used
The ground truth used was expert consensus / reference standard based on established laboratory methods. Specifically, the reference panels were "made according to NCCLS recommendations found in NCCLS Document M7-A4 (Methods for Dilution Antimicrobial Susceptibility Tests for Bacteria that Grow Aerobically, fourth edition; Approved Standard. Pennsylvania, NCCLS, 1997), using cation adjusted Mueller-Hinton broth." The NCCLS (now CLSI) standards represent the gold standard for antimicrobial susceptibility testing.
8. Sample Size for the Training Set
The document does not mention a training set or its sample size. This type of susceptibility testing device typically does not involve a "training set" in the machine learning sense. Instead, the device's design and manufacturing process are developed, and then its performance is validated against established methods using a test set.
9. How the Ground Truth for the Training Set Was Established
As there is no mention of a training set, there is no information on how its ground truth was established.
To determine antimicrobial agent susceptibility
To determine gram-negative and gram-positive bacterial susceptibility against the antimicrobial agent Grepafloxacin.
No organisms for which MicroScan® panels are intended for testing are included in the 'Indications and Usage' section of the FDA approval of Grepafloxacin.
The following secondary organisms included for the testing of MicroScan® panels have in vitro data but the safety and effectiveness in treating clinical infections have not been established: Citrobacter freundii, Citrobacter (diversus) koseri, Enterobacter aerogenes, Enterobacter cloacae, Escherichia coli, Klebsiella oxytoca, Klebsiella pneumoniae, Morganella morganii, Proteus mirabilis, Proteus vulgaris, Staphylococcus aureus (methicillin-susceptible), Staphylococcus epidermidis (methicillin-susceptible), Streptococcus agalactiae, Streptococcus pyogenes.
The MicroScan® Dried Gram-Positive MIC/Combo Panels with Grepafloxacin are not intended for use with Streptococcus pneumoniae and viridans streptococci.
Microdilution Minimum Inhibitory Concentration (MIC) Panels
MicroScan® Dried Gram-Negative and Gram-Positive MIC/Combo Panels
Acceptance Criteria and Device Performance Study
The provided document describes the 510(k) submission for the MicroScan® Dried Gram-Negative and Gram-Positive MIC/Combo Panels with Grepafloxacin, intended to determine antimicrobial agent susceptibility. The study aims to demonstrate substantial equivalence to an NCCLS frozen Grepafloxacin Reference Panel.
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are implicitly defined by the statement that the device demonstrated "acceptable performance" and "substantially equivalent" performance when compared to the predicate device, as per the FDA DRAFT document "Review Criteria for Assessment of Antimicrobial Susceptibility Devices" (dated May 31, 1991). The primary metric for performance appears to be "Essential Agreement" with the reference panel.
| Metric | Acceptance Criteria (Implicit) | Reported Device Performance (Gram-Negative) | Reported Device Performance (Gram-Positive) |
| --- | --- | --- | --- |
| Overall Essential Agreement | Not stated numerically; implied to be high enough for "acceptable performance" and substantial equivalence | 98.8% | 97.5% |
| Reproducibility & Precision | Acceptable | Acceptable | Acceptable |
| Quality Control | Acceptable | Acceptable | Acceptable |
Note: The document does not provide specific numerical thresholds for "acceptable" Essential Agreement, reproducibility, precision, or quality control. Such thresholds would typically be found in the referenced FDA DRAFT document or the study protocol itself. However, the reported values are presented as meeting these unspecified "acceptable" criteria.
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Description: The external evaluations were conducted with "fresh and stock Efficacy isolates and stock Challenge strains."
- Sample Size: The document does not explicitly state the total number of isolates (sample size) used for either the gram-negative or gram-positive evaluations.
- Data Provenance: The country of origin for the data is not specified. The study is described as "external evaluations," which typically implies a prospective design where the new device's performance is compared against a reference standard using various new samples. Given the nature of antimicrobial susceptibility testing, this would be a prospective study.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
This type of study (antimicrobial susceptibility testing) does not typically involve human experts establishing a "ground truth" through visual assessment or clinical judgment in the same way as, for example, an imaging study.
Instead, the "ground truth" (or reference standard) is established by the NCCLS frozen Grepafloxacin Reference Panels. The performance of this reference panel itself is established through a standardized, controlled laboratory procedure, not through expert consensus in the traditional sense. Therefore, the concept of "number of experts" and "qualifications of experts" for ground truth establishment is not directly applicable here.
4. Adjudication Method for the Test Set
Adjudication methods (e.g., 2+1, 3+1) are typically used in studies where multiple human readers independently assess data and discrepancies need to be resolved. This is not directly applicable to an antimicrobial susceptibility testing study comparing results against a laboratory-defined reference standard (the NCCLS panel). The comparison is numerical and algorithmic, not based on human interpretation that requires adjudication.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
An MRMC comparative effectiveness study is not mentioned and would not be relevant for this type of device. This device is a diagnostic testing panel designed to determine antimicrobial susceptibility, not an AI-assisted interpretation tool for human readers.
6. Standalone Performance Study
Yes, the study primarily reflects the "standalone" performance of the MicroScan® Dried panels. The comparison is directly between the results generated by the MicroScan® panels and the NCCLS frozen reference panels. While a human technician operates the panels, the "performance" being evaluated is the accuracy of the panel's determination of MIC values and susceptibility categories compared to the reference, not the human interpretation of ambiguous signals. The document mentions instrument reproducibility testing with "autoScan-4 and WalkAway®," suggesting automated reading, which further supports the "standalone" nature of the performance assessment.
7. The Type of Ground Truth Used
The ground truth used is a reference standard, specifically the NCCLS frozen Grepafloxacin Reference Panels. This is a laboratory-established standard for determining minimum inhibitory concentrations (MICs) of antimicrobial agents.
8. The Sample Size for the Training Set
The document does not explicitly mention a "training set" in the context of this device. Antimicrobial susceptibility panels like this are typically developed based on established microbiological principles and validated against known reference methods, rather than being "trained" like a machine learning algorithm. The study described focuses on validation (testing set performance).
9. How the Ground Truth for the Training Set was Established
As no "training set" for an algorithm is mentioned in the application, this question is not directly applicable. If one were to consider the broader development of the MicroScan® technology, the "ground truth" for establishing the design and performance characteristics of such panels would be rooted in decades of microbiological research, standardized methods (like those from NCCLS/CLSI), and clinical correlation of MIC values with treatment outcomes, but this is beyond the scope of this specific 510(k) submission.
To determine antimicrobial agent susceptibility
Microdilution Minimum Inhibitory Concentration (MIC) Panels
Here's an analysis of the provided text regarding the acceptance criteria and study for the MicroScan® Dried Gram-Negative and Gram-Positive MIC/Combo Panels with Cefepime:
1. Table of Acceptance Criteria and Reported Device Performance
| Performance Metric | Acceptance Criteria (Implicit) | Reported Device Performance (Gram-Negative) | Reported Device Performance (Gram-Positive) |
| --- | --- | --- | --- |
| Overall Essential Agreement (vs. NCCLS Frozen Reference Panel) | Acceptable performance | 98.6% | 95.9% |
| Reproducibility & Precision | Acceptable | Acceptable (regardless of inoculum method or instrument) | Acceptable (regardless of inoculum method or instrument) |
| Quality Control Performance | Acceptable | Acceptable | Acceptable |
Note on Acceptance Criteria: The document states that the device "demonstrated substantially equivalent performance with an NCCLS frozen Cefepime Reference Panel, as defined in the FDA DRAFT document 'Review Criteria for Assessment of Antimicrobial Susceptibility Devices' (dated May 31, 1991)." While the exact numerical thresholds for acceptable performance are not explicitly stated in this summary, the phrasing "acceptable performance" implies that the achieved percentages and findings met the criteria outlined in that FDA draft document. For Essential Agreement in antimicrobial susceptibility devices, thresholds typically refer to agreement within 1-2 dilutions.
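Counting agreement within 1-2 dilutions only works for on-scale MICs; panels also report off-scale values such as "<=0.25" or ">16", for which an exact dilution difference is undefined. A minimal parsing sketch (the names and conventions here are hypothetical, not MicroScan's):

```python
import math

def parse_mic(value):
    """Parse a reported MIC such as '2', '<=0.25', or '>16' into
    (numeric concentration, off-scale flag). Off-scale results have no
    exact endpoint, so dilution differences involving them are only
    bounds, not exact counts."""
    text = str(value).strip()
    if text.startswith("<="):
        return float(text[2:]), "low"    # MIC at or below lowest well
    if text.startswith(">"):
        return float(text[1:]), "high"   # growth at all wells tested
    return float(text), None

def dilution_steps(test, reference):
    """Doubling dilutions between two on-scale MICs, or None if either
    value is off-scale and the difference cannot be counted exactly."""
    (t, t_flag), (r, r_flag) = parse_mic(test), parse_mic(reference)
    if t_flag or r_flag:
        return None
    return abs(round(math.log2(t) - math.log2(r)))
```

For example, "2" versus "8" is two doubling dilutions apart, while any comparison involving ">16" cannot be counted exactly and returns None.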
Detailed Study Information:
2. Sample size used for the test set and the data provenance
- Sample Size: Not explicitly stated as a numerical value for each panel (Gram-Negative and Gram-Positive). The text mentions "fresh and stock Efficacy isolates and stock Challenge strains" for the external evaluations, but the total number of isolates/strains used is not provided.
- Data Provenance: Not explicitly stated. The studies were "external evaluations," suggesting they were conducted outside of Dade MicroScan's internal labs, but the country of origin and specific institutions are not mentioned. The use of both fresh and stock isolates suggests a mix of prospectively collected and archived samples.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Not applicable in the conventional sense. For antimicrobial susceptibility testing, the "ground truth" is typically established by a reference method (in this case, the NCCLS frozen Cefepime Reference Panel), not by human expert interpretation of images or other data.
4. Adjudication method for the test set
- Not applicable. The comparison is between the device's output and the reference method's output. There isn't an adjudication process involving multiple human readers as would be seen in interpretive tasks (e.g., radiology).
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- No, a multi-reader multi-case (MRMC) comparative effectiveness study was not done. This type of study is relevant for diagnostic devices where human interpretation is a key component, often assisted by AI. This device directly outputs MIC values, and its performance is compared to a reference method, not to human readers.
6. Standalone Performance Study
- Yes, implicitly. The performance of the MicroScan® panels (which are essentially automated or semi-automated systems for determining MICs) was assessed in a standalone manner against a reference method. While human technicians operate the instruments and read results, the "performance" described refers to the device's ability to accurately determine MICs, independent of human interpretive judgment on the MIC value itself. The "Essential Agreement" is purely a comparison of the derived MIC values.
7. The type of ground truth used
- Reference Method: The ground truth was established by the NCCLS frozen Cefepime Reference Panel. This is a well-established and standardized laboratory method for determining antimicrobial susceptibility, considered the gold standard for comparison in these types of studies.
8. The sample size for the training set
- Not explicitly mentioned. The summary focuses on the "external evaluations" which served as the test set for regulatory submission. It does not provide details about any internal development/training sets that might have been used during the product's development phase.
9. How the ground truth for the training set was established
- Not explicitly mentioned. Given the nature of the device, it's highly probable that if a training set were used, its ground truth would also have been established using the NCCLS reference method or a similar gold standard.